Seemingly not content with the real world, Facebook has been working on better approximations of the physical to use in Virtual Reality (VR). Facebook Reality Labs has been building scarily accurate virtual avatars of real people. The real kicker? Those avatars can also move just like your face does, filling in the "uncanny valley" and building a parking lot over it.
Virtual avatars are nothing new; the internet has been using them since the early days of forums. From those humble beginnings as flat, 2D pictures, Sierra On-Line introduced GIF avatars, Cybertown brought along the first three-dimensional avatars, and VR avatars eventually became the virtual embodiment of ourselves, just like the cyberpunk futures that movies and novels had promised us.
The thing is, even though those VR avatars could move like us by tracking the headsets and controllers we use, they were still pretty simplistic visually. VRChat can make you look like whatever you want, from a Neko to a hulking mecha, but those avatars still only approximate facial expressions, if they show them at all.
Facebook’s VR avatars take it to the next (creepy) level
The VR avatars that Facebook Reality Labs is working on are still years away from public use, but even at this early stage, they’re pretty darn impressive.
Named Codec Avatars, these virtual doubles are built using machine learning and a whole lot of cameras and sensors to collect, learn, and then re-create human social expressions.
So, how does it work?
Currently, that capture process is a long, drawn-out, two-stage affair. First up is "Mugsy," a dome studded with 132 cameras and 350 lights that the user sits inside. They then spend about an hour following the instructions of a coach so that exaggerated facial expressions get captured: stuff like "puff up your cheeks" or "show all your teeth."
Then it’s off to a larger room, where 180 high-speed, high-resolution cameras capture body movements. Great, spin class is bad enough without it being recorded from every angle.
All of that captured data gets fed into a neural network, with the aim of teaching it to re-create expressions and movement, right down to sounds and muscle deformations, from any angle, just as our bodies do.
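For the curious, here's a rough idea of what the "codec" in Codec Avatars means, sketched as a toy encoder-decoder in Python. This is purely illustrative and assumes a generic autoencoder setup: the layer sizes, class names, and training step are invented for the example and are not Facebook Reality Labs' actual model. The gist is that an encoder squeezes a captured face into a compact code, and a decoder rebuilds the face from that code plus a viewing direction, which is how an avatar could be rendered from any angle.

```python
# Illustrative only: a toy encoder-decoder ("codec") for face images.
# Layer sizes, names, and the training step are assumptions for the
# example, not Facebook Reality Labs' actual Codec Avatars model.
import torch
import torch.nn as nn

class ToyCodecAvatar(nn.Module):
    def __init__(self, image_size=64, latent_dim=128):
        super().__init__()
        flat = 3 * image_size * image_size
        # Encoder: compress a captured face image into a small latent code
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: rebuild the face from the code plus a viewing direction,
        # so the result can be drawn "from any angle"
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, flat), nn.Sigmoid(),
        )
        self.image_size = image_size

    def forward(self, face_image, view_direction):
        code = self.encoder(face_image)  # tiny "expression code"
        rebuilt = self.decoder(torch.cat([code, view_direction], dim=1))
        return rebuilt.view(-1, 3, self.image_size, self.image_size)

# One (fake) training step: learn to reproduce the captured image
model = ToyCodecAvatar()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
captured = torch.rand(8, 3, 64, 64)   # stand-in for dome camera captures
views = torch.randn(8, 3)             # stand-in for camera viewing directions
reconstructed = model(captured, views)
loss = nn.functional.mse_loss(reconstructed, captured)
loss.backward()
optimizer.step()
```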
You can see how spookily close the team has gotten, with a real person on one side and, on the other, the Codec Avatar that the neural net created from the scans of their face and body. Never mind deepfakes, this is almost enough to replace people in videos.
Considering how difficult it is to animate facial expressions in video games or CGI-filled movies, this work at Facebook Reality Labs could mean more realistic video games in the future. It could also enable harassment, or make video evidence harder to use in court proceedings. Yaser Sheikh, the brains running the team, agrees with the need to protect users:
Authenticity isn’t just crucial to the success of this, it’s crucial to protecting users as well. If you get a call from your mother and you hear her voice, there isn’t an iota of doubt in your mind that what she says is what you hear, right? We have to build that trust and maintain it from the start.
What do you think? Does this creep you out or is the technology promising? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.