
This new deepfake software will let you literally put words in someone’s mouth

Our planet is a turd adrift in a punch bowl, study #3847651

Image: DERPFAKES / YouTube

Until very recently, I had only seen the term “deepfake” in the occasional comment thread on a subreddit and/or Twitter, and I assumed it was code for some Alex Jonesian nonsense I didn’t care to learn anything more about.

Turns out, it’s actually a form of tech used to face-swap people with terrifying accuracy, as demonstrated in this Steve Buscemi-Jennifer Lawrence monstrosity.

Considering the era of rampant misinformation we live in, one can’t help but wonder why this technology exists, or furthermore, why in the blue Hell scientific researchers, like the ones at Stanford University (working with a couple of other universities and Adobe Research), are actively trying to improve it.

They’ve gotten so good at deepfaking (?) that they can now semi-realistically alter what someone is saying in a video just by editing the transcript.

How do they do it? As explained in the video, the researchers first scan the video to target and isolate phonemes (a.k.a. the sounds that make up words) spoken by the subject, then match them with the corresponding facial expressions, known as “visemes.”
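For the nerds among us, that phoneme-to-viseme matching is basically a lookup: several different sounds land on the same visible mouth shape. Here’s a toy sketch in Python; the mapping table and names below are my own illustrative guesses, not anything lifted from the actual research.

# Toy sketch: the phoneme-to-viseme mapping is many-to-one, since
# sounds like "p", "b", and "m" all look identical on screen.
PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_to_teeth", "v": "lip_to_teeth",
    "aa": "jaw_open", "ae": "jaw_open",
    "uw": "lips_rounded", "ow": "lips_rounded",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to the mouth shapes a viewer would see."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# Roughly the word "bah": b + aa
print(visemes_for(["b", "aa"]))  # ['lips_closed', 'jaw_open']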

After creating a 3D model of the subject’s jaw and mouth, the technology then uses the newly inputted transcript to “construct new footage that matches the text input, [which] is then pasted onto the source video to create the final result,” as The Verge explains it.
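If you squint, the whole pipeline boils down to four steps: align, model, render, paste. Here’s a bird’s-eye sketch in Python, where every function is a hypothetical stub I wrote so the steps run in order; this is emphatically not the researchers’ actual code.

def align_phonemes(video, transcript):
    # Stand-in for alignment: which sound lands on which frame.
    return list(enumerate(transcript.split()))

def fit_face_model(video):
    # Stand-in for fitting the 3D jaw-and-mouth model to the footage.
    return {"jaw": "3D parameters", "mouth": "3D parameters"}

def render_mouth(face_model, alignment, new_text):
    # Stand-in for synthesizing mouth footage that matches the edited text.
    return [f"mouth frame for {word!r}" for word in new_text.split()]

def blend(source_video, mouth_frames):
    # Stand-in for pasting the new mouth region back onto the source video.
    return f"{source_video} with {len(mouth_frames)} replaced mouth frames"

def edit_talking_head(source_video, old_transcript, new_transcript):
    alignment = align_phonemes(source_video, old_transcript)      # step 1
    face_model = fit_face_model(source_video)                     # step 2
    mouth = render_mouth(face_model, alignment, new_transcript)   # step 3
    return blend(source_video, mouth)                             # step 4

print(edit_talking_head("interview.mp4",
                        "I never said that",
                        "I definitely said that"))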

When all is said and done, the process looks something like this:

Diagram of the text-based talking-head editing pipeline. Image: Ohad Fried

Pretty impressive stuff, but again I ask: What is even the upside to this technology, other than to throw our world into a deeper state of misinformation, paranoia, and misanthropy? How many Jeff Goldblum “your scientists” gifs need I throw in to convey the fact that further development of this tech is a TERRIBLE idea? Because here’s one!

But don’t worry, everyone, because the researchers made sure to state that this technology is only in its early stages, and even when it’s fully realized, it should only be used for good.

In a blog post accompanying the study, they wrote: “The risks of abuse are heightened when applied to a mode of communication that is sometimes considered to be authoritative evidence of thoughts and intents. We acknowledge that bad actors might use such technologies to falsify personal statements and slander prominent individuals.”

Oh, well thank f*ck for that. I’m glad that Hollywood editors will soon be able to flawlessly fix flubbed lines using this technology, with the only drawback being the complete eradication of identifiable evidence. Or as Jeff Goldblum once said, “Your scientists were so preoccupied with being awesome, they never stopped to think if they should stop partying so hard.”

What do you think? Are you ok with this or does it make you literally hate everything about our future? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.


