OpenAI’s ‘mature apps’ are coming and we’re all gonna regret this
With its shaky history in handling explicit content, this move could be a double-edged sword.

Buckle up, internet. This is gonna get messy.
OpenAI just announced they’re finally unleashing “mature apps” once their shiny new age verification system goes live, and honestly? This feels like watching someone hand a toddler a flamethrower.
Eight months ago, OpenAI discreetly revised its Model Spec to set the bar at “anything goes except for child exploitation.” Despite this, ChatGPT has remained notably cautious with explicit material.
But here’s where things get spicy (and not in a good way): Remember Grok? Elon’s AI baby that immediately became a cesspool of exploitation and inappropriate imagery? Yeah, that’s our roadmap here, folks.
OpenAI’s track record isn’t exactly inspiring confidence either.
They’ve already had to walk back ChatGPT’s creepy sycophancy, which sent vulnerable users into mental health death spirals, and their “hotfix” was basically digital duct tape that Stanford researchers called woefully inadequate.
We’re already seeing stalkers weaponize Sora 2 for harassment, and lesser-known AI platforms are churning out non-consensual deepfakes like it’s going out of style. Now OpenAI wants to join this digital hellscape?
Look, fine-tuning LLMs is hard, and sometimes models get *worse* after updates. But rushing into mature content without bulletproof safeguards?
That’s not innovation. That’s Russian roulette with society’s collective sanity as the stakes.
