This new deepfake software will let you literally put words in someone’s mouth
Our planet is a turd adrift in a punch bowl, study #3847651
Until very recently, I had only seen the term “deepfake” in the occasional Reddit or Twitter comment thread and assumed it was code for some Alex Jonesian nonsense I didn’t care to learn anything more about.
Turns out, it’s actually a form of tech used to face-swap people with terrifying accuracy, as demonstrated in this Steve Buscemi-Jennifer Lawrence monstrosity.
Considering the era of rampant misinformation we live in, one can’t help but wonder why this technology exists. Furthermore, why in the blue Hell would scientific researchers be actively working to improve it, as a team at Stanford University (in combination with a couple of other universities and Adobe Research) currently is?
They’ve gotten so good at deepfaking (?) that they can now semi-realistically alter what someone is saying in a video just by editing the transcript.
How do they do it? As explained in the video, the researchers first scan the video to target and isolate phonemes (a.k.a. the distinct sounds that make up words) spoken by the subject, then match them with the corresponding mouth shapes, known as “visemes.”
After creating a 3D model of the subject’s jaw and mouth, the technology then uses the newly edited transcript to “construct new footage that matches the text input, [which] is then pasted onto the source video to create the final result,” as The Verge explains it.
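If you’re wondering what “matching phonemes with visemes” even means in practice, here’s a toy Python sketch. To be crystal clear, this is not the researchers’ actual code: the viseme groupings and function names below are made up for illustration, and the real pipeline aligns phonemes to video frames and blends 3D face-model parameters, which is far more involved.

```python
# Toy sketch of phoneme-to-viseme matching. Purely illustrative;
# the viseme table below is a rough simplification, not the
# mapping used by the Stanford/Adobe researchers.

# Many phonemes share the same visible mouth shape (viseme), which is
# why lip-reading is hard and why this kind of editing is possible.
PHONEME_TO_VISEME = {
    "p": "bilabial",    "b": "bilabial",  "m": "bilabial",  # lips pressed together
    "f": "labiodental", "v": "labiodental",                 # teeth on lower lip
    "aa": "open",       "ae": "open",                       # jaw open
    "uw": "rounded",    "ow": "rounded",                    # lips rounded
}

def visemes_for_phrase(phonemes: list[str]) -> list[str]:
    """Map a phoneme sequence to the mouth shapes a viewer would see."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# "bomb" and "palm" produce the same sequence of mouth shapes, so
# footage of one can plausibly stand in for the other.
print(visemes_for_phrase(["b", "aa", "m"]))  # ['bilabial', 'open', 'bilabial']
print(visemes_for_phrase(["p", "aa", "m"]))  # ['bilabial', 'open', 'bilabial']
```

The punchline: because many different sounds share a mouth shape, existing footage of someone’s face can be recombined to make them appear to say words they never said.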
When all is said and done, the process looks something like this:
Pretty impressive stuff, but again I ask: What is even the upside to this technology, other than to throw our world into a deeper state of misinformation, paranoia, and misanthropy? How many Jeff Goldblum “your scientists” gifs need I throw in to convey the fact that further development of this tech is a TERRIBLE idea? Because here’s one!
But don’t worry everyone, because the researchers made sure to state that this technology is only in its early stages, and even when it’s fully realized, it should only be used for good.
In a blog post accompanying the study, they wrote: “The risks of abuse are heightened when applied to a mode of communication that is sometimes considered to be authoritative evidence of thoughts and intents. We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals.”
Oh, well thank f*ck for that. I’m glad that Hollywood editors will soon be able to flawlessly fix flubbed lines using this technology, with the only drawback being that there will be no identifiable evidence the footage was ever altered. Or as Jeff Goldblum once said, “Your scientists were so preoccupied with being awesome, they never stopped to think if they should stop partying so hard.”
What do you think? Are you ok with this or does it make you literally hate everything about our future? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.