


OpenAI tightens Sora deepfake safeguards

OpenAI says every Sora video includes C2PA metadata, a label meant to prove how a video was made.

Image: OpenAI


OpenAI’s newest toy, Sora 2, is like Photoshop on steroids, if Photoshop had a moral hangover. 

The AI video generator has proven so good at faking reality that it’s already spitting out shockingly convincing deepfakes of everyone from Martin Luther King Jr. to SpongeBob. 

And while it might sound like an edgy art experiment, users are discovering that Sora can also turn them into unwilling actors in racist rants or fetish videos.

Inside the app, users at least know everything’s fake, but once those clips start circulating on TikTok or Instagram, all bets are off. 

OpenAI’s watermarking and authenticity tools might as well be invisible ink. 

The company proudly points out that every Sora video includes “C2PA metadata,” a sort of digital nutrition label meant to prove how a video was made. (Via: The Verge)

Unfortunately, almost no one can see it. The tags are buried deep in the file's metadata, and most social platforms strip them out on upload.

C2PA, or Content Credentials, was supposed to be the internet’s secret weapon against deepfakes. Adobe, OpenAI, Google, Meta, and even the US government have all backed it. 

In theory, it’s a great idea: invisible metadata showing who made what, when, and how. In practice? Nobody checks. 

Labels are either missing, mislabeled, or hidden behind collapsed menus.
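To see why stripping metadata defeats the whole scheme, here is a toy sketch in Python. It is not the real C2PA format (which embeds certificate-signed manifests directly in the file); it just models the idea of a signed "nutrition label" bound to a content hash, and what a verifier sees once a platform has thrown that label away. The key name and manifest fields are invented for illustration.

```python
import hashlib
import hmac
import json
from typing import Optional

# Stand-in for a real signing certificate; actual C2PA uses X.509 certs.
SIGNING_KEY = b"demo-signing-key"

def attach_manifest(content: bytes, generator: str) -> dict:
    """Build a toy provenance manifest binding a content hash to its origin."""
    claim = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify(content: bytes, manifest: Optional[dict]) -> str:
    """Check provenance; a missing manifest is what platforms leave behind."""
    if manifest is None:
        return "no provenance"  # metadata stripped on upload
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest["signature"], expected):
        return "manifest tampered"
    if manifest["claim"]["content_sha256"] != hashlib.sha256(content).hexdigest():
        return "content altered"
    return f"made with {manifest['claim']['generator']}"

video = b"fake video bytes"
manifest = attach_manifest(video, "Sora 2")
print(verify(video, manifest))  # made with Sora 2
print(verify(video, None))      # no provenance
```

The signature still verifies if the label travels with the file, but nothing forces platforms to carry it, so the honest answer a checker gets after a TikTok re-upload is simply "no provenance" — indistinguishable from an ordinary camera clip.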

That’s awkward for OpenAI, which sits on C2PA’s steering committee while simultaneously pumping out a tool that makes fake videos of real people shouting white supremacist slogans. 

Deepfake watchdogs say Sora 2’s identity filters were cracked within 24 hours of launch.

Adobe’s team insists things are improving, slowly. 

“People need clear information about how content is made,” said Andy Parsons, Adobe’s content authenticity lead, sounding like someone watching the Titanic sink while complimenting the band.

Experts agree C2PA alone won’t save us. Metadata can be deleted, watermarks removed, and no one’s really enforcing the rules. 

Lawmakers are now pushing anti-deepfake bills to catch up. Until then, AI creators like OpenAI are both arsonists and firefighters, building the deepfake engines while selling us the smoke alarms.



Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.

