The robot was able to perform aerial acrobatics perfectly

Using machine learning techniques, researchers at the University of California, Berkeley have taught simulated humanoid characters to perform more than 25 natural movements, from acrobatics to high kicks and street dance. The technique could pave the way for more realistic video games and more agile robots.

Computer animation has never been better, but there is still plenty of room for improvement: we can usually tell simulated footage from real footage. That is where the next gains lie, in giving virtual characters appearances and movements so natural that they become indistinguishable from the real thing.

To achieve that goal, Xue Bin "Jason" Peng, a graduate student at the University of California, Berkeley, and his colleagues combined two techniques, motion capture and reinforcement learning, to create something entirely new: a system that teaches simulated characters to perform complex physical movements in a highly realistic way. Learning from scratch, and with minimal human intervention, the digital characters learned how to kick, jump, and tumble. They even learned to interact with objects in their environment, such as barriers placed in their path or objects thrown directly at them.

Typically, computer animators must create a custom controller for every skill or movement. These controllers are painstakingly detailed and cover discrete skills such as walking, jogging, or somersaulting, whatever the character needs to do. The movements created this way look quite good, but each one must be crafted by hand. Another approach is to use reinforcement learning methods, such as DeepMind's GAIL (Generative Adversarial Imitation Learning) technique. This approach is impressive because the characters learn how to do things from scratch, but it often produces odd, unpredictable, and unnatural results.

The new system, called DeepMimic, works a little differently. Instead of simply pushing characters toward a specific end goal, such as walking, DeepMimic uses motion capture clips to "show" the AI what the goal looks like. In their experiments, Peng's team collected motion capture data for more than 25 different physical skills, from jogging and throwing to jumping and tumbling, to "define the desired style and appearance" of each skill, as Peng explained on the Berkeley Artificial Intelligence Research (BAIR) blog.
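At each time step, the character is rewarded for matching the pose of the reference motion capture clip. The actual DeepMimic reward combines several weighted terms (joint velocities, end-effector positions, center of mass, and more), but a minimal sketch of just the pose-matching idea, with illustrative names and an arbitrary `sigma` scale that are not taken from the paper, might look like this:

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sigma=2.0):
    """Reward the simulated character for matching the reference
    motion-capture pose at the same time step. Poses are vectors of
    joint angles; a perfect match yields a reward of 1.0, and the
    reward decays exponentially as the squared pose error grows."""
    error = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    return float(np.exp(-sigma * error))
```

Because the reward is dense (the character gets graded feedback every step rather than only at the end of the clip), learning is far easier than chasing a distant end goal such as "cross the room."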

The results did not come overnight. The virtual characters slipped and fell flat on their faces many times before finally making the right moves. It takes about a month of simulated "practice" for a character to master each skill, as it runs through millions of trials trying to perfect an aerial somersault or a high kick. But each failure is one more adjustment that nudges the character closer to the desired goal.

Using this technique, the researchers were able to create characters that behave in a realistic, natural way. Most notably, the characters can cope with situations they have never encountered before, such as challenging terrain or obstacles. This robustness is a bonus of reinforcement learning, not something the researchers explicitly set out to achieve.

This platform can be used to produce more realistic computer animation, but also to test robots.

"We offer a simple [advanced learning] conceptual framework, allowing simulated characters to learn dynamic and acrobatic skills from reference motion clips, which can be provided as recorded data. Motion pictures are recorded from human subjects, " Peng said. " With a unique skill, such as a dangerous kick or aerial swing, our character can learn to imitate simulation skills. This helps to create movements that are nearly impossible. We are aiming to create a virtual stuntman ".

The researchers have also used DeepMimic to create realistic movements for lions, dinosaurs, and mythical monsters. They even created a virtual version of ATLAS, the humanoid robot voted most likely to destroy humanity.

The work will be presented at SIGGRAPH 2018, the conference on computer graphics and interactive techniques, in August of this year.