Jorge del Val

On starting Latent Technology

I’m very excited to announce that we have closed a $2.1M pre-seed round led by Root Ventures and Spark Capital, with participation from gaming fund Bitkraft. This funding will allow us to grow the team and release the first version of our product, which will let game developers build worlds with unprecedented interactivity and immersion.


This milestone calls for a reflection on our company and our vision.



The problem with interactive animation


During my career as a Deep Learning researcher at EA Games and Embark Studios, my motivation has always been to find new ways to bring magic to gaming experiences. Magic is nothing more than advanced technology combined with meaningful ways to experience it, and interactive animation is one of the most fundamental ingredients of virtual worlds and magical experiences. Today it is a shadow of what it could be: thousands of discrete animations are crafted over thousands of hours, then loaded and played at specific points in the game, for example when a sword hits a character. Ideally, the interaction should emerge from the laws of physics (or the laws of the game’s universe), but it is tremendously challenging to incorporate physics into the traditional animation process, let alone at runtime. The current approach to realistic interactive character movement is simply to pre-build more and more animations and define more and more specific events in the game that trigger them, an effort that scales directly with team size and cost.


Furthermore, the result of this heavy process, even with an enormous number of animations, is still limited by design: create a situation the developers or testers never anticipated, and the animation won’t look realistic. Visualize, for example, a rock hitting a character: the character’s reaction should depend on the rock’s size, momentum, direction, and point of impact, and every one of these permutations would have to be scripted by animators before the game ships for the result to look realistic. Even simpler: imagine pushing a character with your hand in a VR world, or making them walk across different terrains. Most animations in video games (about 70% by our estimates) are built solely to cover these kinds of reactions.


The resulting characters cannot, by definition, be fully interactive, and the resulting experiences cannot be emergent: every situation that can happen has been programmed in advance. The experience is not yours to live.


Our approach: Generative Physics Animation


Having spent our careers refining Machine Learning techniques, we knew how to fix this problem. Animations shouldn’t be pre-programmed: they should happen naturally as a consequence of the interaction. They should be generated by the characters themselves as they go.


First, characters should be fully physical, so that truly emergent interactions are possible without pre-loading thousands of discrete animations. This makes the problem harder, however, because characters no longer just play back motion: they have to balance and control their own bodies through physical forces, a problem better known to robotics than to animation. Our approach is to let characters learn to move by themselves, through experience in a simulated environment, leveraging the latest advances in reinforcement learning (RL).
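To make the idea concrete, here is a minimal, purely illustrative sketch of that trial-and-error loop: a character reduced to a toy one-dimensional balancing task, trained with a simple policy-gradient (REINFORCE) update. The toy environment, reward, and network sizes are assumptions made for illustration, not our actual stack; a real character is a full articulated body inside a physics engine, but the structure of the learning loop is the same: act, let physics respond, and reinforce whatever keeps the body upright.

```python
# Illustrative sketch only: a toy "stay upright" task learned by trial and error.
import torch
import torch.nn as nn

class ToyBalanceEnv:
    """Toy stand-in for a physics simulation: keep a pole-like body upright by applying torque."""
    def __init__(self, dt=0.05):
        self.dt = dt

    def reset(self):
        # Start with a small random tilt and zero angular velocity.
        self.angle = 0.1 * (2 * torch.rand(1).item() - 1)
        self.vel = 0.0
        return torch.tensor([self.angle, self.vel], dtype=torch.float32)

    def step(self, torque):
        # Simplified dynamics: gravity tips the body over, the agent pushes back.
        self.vel += (9.8 * self.angle + torque) * self.dt
        self.angle += self.vel * self.dt
        obs = torch.tensor([self.angle, self.vel], dtype=torch.float32)
        done = abs(self.angle) > 0.5        # the body fell over
        reward = 0.0 if done else 1.0       # one point for every step spent upright
        return obs, reward, done

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyBalanceEnv()

for episode in range(500):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    while not done and len(rewards) < 200:
        mean = policy(obs)
        dist = torch.distributions.Normal(mean, 1.0)   # stochastic policy for exploration
        action = dist.sample()
        obs, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action).sum())
        rewards.append(reward)
    # REINFORCE: increase the probability of actions in proportion to the return that followed.
    returns = torch.tensor([sum(rewards[t:]) for t in range(len(rewards))])
    returns = returns - returns.mean()
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After a few hundred episodes the toy character learns to counteract gravity on its own, without a single hand-authored animation; scaling that same principle to full humanoids in a physics engine is where the real work lies.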


On its own, however, RL often produces behaviors that are not realistic or high quality. That’s why we also incorporate data from real humans and creatures into the training process, borrowing from the generative modeling literature, so that characters also learn to move like them. Combined with a suite of other techniques, this lets us target high-quality, fully interactive, controllable physical behaviors that run in real time in the game engine.
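As a rough illustration of how motion data can enter the picture (our actual formulation involves considerably more machinery), one well-known idea from the physics-based animation literature is to add an imitation term to the reward that scores how closely the simulated pose tracks a reference clip, such as motion capture. The weights and scales below are placeholder assumptions:

```python
# Illustrative only: blending "do the task" with "move like the data".
import numpy as np

def imitation_reward(sim_pose: np.ndarray, ref_pose: np.ndarray, scale: float = 2.0) -> float:
    """Exponentiated pose error: equals 1.0 when the simulated pose matches the reference exactly."""
    return float(np.exp(-scale * np.sum((sim_pose - ref_pose) ** 2)))

def total_reward(task_reward: float, sim_pose: np.ndarray, ref_pose: np.ndarray,
                 w_task: float = 0.5, w_imitate: float = 0.5) -> float:
    """Blend the task objective (balance, reach a target, ...) with an imitation objective."""
    return w_task * task_reward + w_imitate * imitation_reward(sim_pose, ref_pose)

# Example: a 3-joint pose that is close, but not identical, to the reference frame.
ref = np.array([0.10, -0.25, 0.40])
sim = np.array([0.12, -0.20, 0.38])
print(total_reward(task_reward=1.0, sim_pose=sim, ref_pose=ref))
```

The learning agent is then pulled toward behaviors that both accomplish the task and resemble how real people or creatures move.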


Having been a core part of the team that pioneered RL locomotion in actual video games, we are convinced this new approach will have a “wow” effect on anyone who plays with it. It also changes how you work: instead of defining animations down to the millimeter, you describe, at a high level, what you want the character to do. Rather than a programmer, you become something closer to a movie director, describing what you want to see instead of specifying every detail.


If this sounds like it removes artistic control altogether, that is far from our intention. We will keep making the resulting behaviors more and more controllable by artists and programmers, in ways that are appealing to them and easy to understand.


The future and Latent


As we build a new generation of virtual worlds, we will need more immersive and emergent interactions, along with more efficient ways to create them and new ways to experience them. We are convinced that traditional approaches won’t be enough, and that the wave of generative AI will take over big parts of the game development process. Creators will no longer require thousands of hours of rote work to build magical experiences. We have no doubt that Latent will play an important role in this process.


Generative interaction through physics-based animation is a starting point, but by no means the end. We aim for nothing less than helping reinvent how virtual worlds are experienced and created, so that we can bring magic back to players and game creators.


We are building a world-class team to help us accomplish this ambitious mission. We look, first and foremost, for passion and a sense of adventure.


Want to join us in the adventure? Visit our careers page for more information.


Jorge

CEO




