Machine Learning: Changing Motion Capture

The use of Motion Capture (MoCap) is no anomaly in the world of VFX. For years, the technique has seen actors suiting up in professional spandex and painting dots on their faces, all to capture their expressions and movements so they can live on through their animated counterparts. But now, thanks to machine learning, this process may be set to change.

Machine learning has already proven itself on some of the VFX industry's more difficult problems, helping artists achieve results they couldn't before, faster and more effectively.

With this in mind, it becomes easier to see why machine learning techniques are being applied to motion capture. Where MoCap previously involved tedious manual tasks, like artists painting out head rigs, machine learning promises to make the process far more efficient. And whilst it's a relatively new application, a number of VFX companies are already turning to machine learning and uncovering its possibilities.

While motion capture remains one of the most popular sources of motion data, it typically requires a vast amount of equipment. This is one of the main reasons for the shift towards markerless motion capture technology, which has even made an appearance in high-profile films like The Irishman, with its markerless de-ageing of the lead actors.
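As a rough illustration of what markerless capture looks like in code, the sketch below uses Google's open-source MediaPipe library to pull 3D body landmarks out of an ordinary video file, no suits or markers required. The file name is hypothetical, and a production pipeline would of course involve far more than this:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# "performance.mp4" is a hypothetical clip of an actor's performance.
cap = cv2.VideoCapture("performance.mp4")
with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV reads frames as BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 body landmarks per frame, each with normalised x, y, z.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose: {nose.x:.3f}, {nose.y:.3f}, {nose.z:.3f}")
cap.release()
```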

And even though The Irishman was a successful use of the technique, it required heavy-duty camera setups and laborious work from the VFX team. However, research coming out of the University of California, Berkeley explores deep reinforcement learning as a way to capture motion data. This branch of machine learning trains models by trial and error, rewarding the behaviours that lead to a desired outcome, and it promises to do away with much of that heavy equipment.
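The Berkeley work trains physics-based characters with deep reinforcement learning; as a toy illustration of the underlying trial-and-error idea, the sketch below uses simple tabular Q-learning (not the paper's method) to teach an agent which discrete "joint adjustment" moves a limb towards a target angle. All the values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 21, 3          # discretised joint angles; actions: -1, 0, +1
target = 15                          # the target angle bin
q = np.zeros((n_states, n_actions))  # learned value of each action in each state
alpha, gamma, eps = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    s = int(rng.integers(n_states))
    for _ in range(50):
        # Explore occasionally; otherwise exploit the best-known action.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(q[s].argmax())
        s_next = int(np.clip(s + (a - 1), 0, n_states - 1))
        # The reward signal is what shapes the behaviour over time.
        r = 1.0 if s_next == target else -0.01
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

# After training, each state's best action steers towards the target.
print("Learned adjustment per state:", q.argmax(axis=1) - 1)
```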

Using this approach, the research team found that models could learn from whatever reference material they were given, including limited sources like YouTube videos. This opens up the possibility of faster and easier character design, alongside a more efficient way to clean up noisy, analog-style motion data, all based on what the model has learned about the way the human body moves.
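One common way such a model is steered towards the reference motion is an imitation reward that peaks when the simulated character's pose matches the pose extracted from video, similar in spirit to the DeepMimic-style rewards used in this line of research. The sketch below is a minimal, hypothetical version; the weight and joint-angle vectors are made up for illustration:

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, weight=2.0):
    """Reward in (0, 1] that is highest when the simulated character's
    joint angles match the reference pose extracted from video.
    The weight is illustrative; real systems tune per-term weights."""
    diff = np.asarray(sim_pose) - np.asarray(ref_pose)
    return float(np.exp(-weight * np.sum(diff ** 2)))

# Example: a noisy video-derived reference vs. the character's current pose.
ref = np.array([0.10, -0.40, 0.80])   # reference joint angles (radians)
sim = np.array([0.12, -0.38, 0.75])   # simulated character's joint angles
print(imitation_reward(sim, ref))     # close match, so reward is near 1.0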

Vincent Vardnoush

VFX Producer, Film Colourist, Creative Web Developer.
