
Facebook is working on VR avatars that look just like you

This isn’t creepy, nope, not at all.

Image: Facebook

Seemingly not content with the real world, Facebook has been working on better approximations of the physical world to use in Virtual Reality (VR). Facebook Reality Labs has been building scarily accurate virtual avatars of real people. The real kicker? Those avatars can also move just like your face does, filling in the “uncanny valley” and building a parking lot over it.

Virtual avatars are nothing new; the internet has been using them since the early days of forums. From those humble beginnings as flat, 2D pictures, Sierra On-Line introduced GIF avatars, Cybertown introduced the first three-dimensional avatars, and VR avatars became the virtual embodiment of ourselves, just like the cyberpunk futures we had been shown in movies and novels.

The thing is, even though those VR avatars could move like us, by tracking the headsets and controllers we use, they were still pretty simplistic visually. VRChat can make you look like whatever you want, from a Neko to a hulking mecha, but those avatars still only approximate facial expressions, if they show them at all.

Facebook’s VR avatars take it to the next (creepy) level

The VR avatars that Facebook Reality Labs is working on are still years away from public use, but even at this early stage, they’re pretty darn impressive. Named Codec Avatars, the creation process uses machine learning and a lot of cameras and sensors to collect, learn, and then re-create human social expressions.

Codec Avatars side-by-side comparison

A side-by-side comparison of an individual and her avatar shows the technology's photorealism. The left side of the face is a recorded video of a real person making different facial expressions. The right side of the face is an avatar. Unlike other realistic avatars we see today, Codec Avatars are generated automatically — a requirement for effortless Social VR.

Posted by Facebook Engineering on Monday, March 11, 2019

So, how does it work?

Currently, that capture process is a long, drawn-out, two-stage affair. First up is “Mugsy,” a dome studded with 132 cameras and 350 lights. The user sits inside and spends about an hour following the instructions of a coach so that exaggerated facial expressions are captured; stuff like “puff up your cheeks” or “show all your teeth.”

Then it’s off to a larger room, where 180 high-speed, high-resolution cameras capture body movements. Great, spin class is bad enough without it being recorded from every angle.

All of that captured data gets fed into a neural network, with the aim of teaching it to reproduce expressions and movement, right down to sounds and muscle deformations, from any angle, just as our bodies do.
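To get a feel for the “codec” part of Codec Avatars, here is a deliberately tiny sketch of the underlying idea: compress each captured frame of expression data into a small latent code, then decode that code back into the full frame. This is only a toy linear autoencoder in NumPy with made-up data and dimensions; Facebook’s actual models are far more sophisticated, and every name and number below is invented for illustration.

```python
# Toy sketch of the encode/decode ("codec") idea behind avatar compression.
# All data, dimensions, and weights here are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Fake "capture data": 200 frames, each a 64-dim expression vector
# (a stand-in for the signals a camera rig like Mugsy would record).
frames = rng.normal(size=(200, 64))

latent_dim = 8                      # the compact code sent over the wire
W_enc = rng.normal(scale=0.1, size=(64, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, 64))

def reconstruction_error(X):
    codes = X @ W_enc               # encode: frame -> small latent code
    recon = codes @ W_dec           # decode: latent code -> full frame
    return np.mean((X - recon) ** 2)

initial_error = reconstruction_error(frames)

# Plain gradient descent on the mean squared reconstruction error.
lr = 0.01
for _ in range(300):
    codes = frames @ W_enc
    recon = codes @ W_dec
    err = recon - frames
    grad_dec = codes.T @ err / len(frames)
    grad_enc = frames.T @ (err @ W_dec.T) / len(frames)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(frames)
print(f"error before training: {initial_error:.3f}")
print(f"error after training:  {final_error:.3f}")
```

The point of the sketch is the shape of the pipeline, not the model: once a network has learned a good encoder and decoder, only the tiny latent code has to be transmitted per frame, and the receiving headset decodes it back into a full, animated face.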

You can see how spookily close the team has gotten, with a real person on one side and, on the other, the Codec Avatar that the neural net created from the scans of their face and body. Never mind deepfakes, this is almost enough to replace people in videos.

With how difficult it is to animate facial expressions in video games or in CGI-filled movies, this work at Facebook Reality Labs could mean more realistic video games in the future. It could also enable harassment or make video evidence harder to trust in court proceedings. Yaser Sheikh, the brains behind the team, agrees with the need to protect users:

Authenticity isn’t just crucial to the success of this, it’s crucial to protecting users as well. If you get a call from your mother and you hear her voice, there isn’t an iota of doubt in your mind that what she says is what you hear, right? We have to build that trust and maintain it from the start.

What do you think? Does this creep you out or is the technology promising? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.
