OpenAI will report suspicious ChatGPT conversations to police
OpenAI is scanning ChatGPT conversations to flag harmful content and potentially alert law enforcement.

Well, well, well. OpenAI just dropped a bombshell that’s equal parts “finally, some responsibility” and “wait, what happened to privacy?”
According to Futurism, the company is now actively scanning your ChatGPT conversations for harmful content, and here’s the kicker—they might snitch to law enforcement if you’re planning to hurt someone.
This isn’t some dystopian fever dream.
After a year-long parade of AI-induced nightmares—including chatbots encouraging teen suicide, users getting hospitalized for “AI psychosis,” and at least one tragic suicide linked to an OpenAI therapist bot—the company finally decided maybe they should, you know, monitor their digital Pandora’s box.
Here’s how their new Big Brother system works: automated classifiers flag sketchy messages, which then get escalated to human reviewers who can ban accounts or, if they decide you pose an imminent threat of serious physical harm to others, refer the case to law enforcement.
Interestingly, self-harm cases are explicitly not referred to police, “to respect people’s privacy,” a carve-out that feels more legal than ethical. A rough sketch of what that triage logic might look like is below.
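To make the routing concrete, here’s a minimal sketch of that two-tier triage flow in Python. To be clear, this is not OpenAI’s actual code: the classifier, the flag names, and the keyword matching are all hypothetical stand-ins, included only to illustrate the policy the article describes (threats to others get escalated, self-harm does not).

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical severity labels a content classifier might emit.
class Flag(Enum):
    NONE = auto()
    SELF_HARM = auto()
    THREAT_TO_OTHERS = auto()

@dataclass
class Message:
    user_id: str
    text: str

def classify(msg: Message) -> Flag:
    """Stand-in for a real ML classifier; keyword matching is for illustration only."""
    lowered = msg.text.lower()
    if "hurt them" in lowered:
        return Flag.THREAT_TO_OTHERS
    if "hurt myself" in lowered:
        return Flag.SELF_HARM
    return Flag.NONE

def triage(msg: Message) -> str:
    """Route a flagged message the way the stated policy describes."""
    flag = classify(msg)
    if flag is Flag.THREAT_TO_OTHERS:
        # Escalate to a human reviewer; per the policy, only an imminent,
        # credible threat would go on to a law-enforcement referral.
        return "escalate_to_human_review"
    if flag is Flag.SELF_HARM:
        # Self-harm cases are NOT referred to police; a real system
        # might surface crisis resources instead.
        return "show_support_resources"
    return "no_action"

if __name__ == "__main__":
    print(triage(Message("u1", "I'm going to hurt them")))  # escalate_to_human_review
    print(triage(Message("u2", "I want to hurt myself")))   # show_support_resources
```

The point of the sketch is the asymmetry: both categories get flagged, but only one path can ever end at a police report.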
But here’s where it gets deliciously hypocritical: OpenAI is simultaneously fighting tooth and nail against The New York Times and other publishers demanding access to user chat logs for copyright lawsuits.
Their defense? User privacy, obviously. CEO Sam Altman even warned users that ChatGPT conversations have zero legal confidentiality and could be court-ordered into evidence.
So let’s recap this beautiful contradiction: OpenAI won’t share your chats with publishers trying to protect their intellectual property, but they’ll absolutely share them with cops if their algorithms think you’re sketchy.
Meanwhile, they’re calling this privacy protection while admitting your conversations were never private anyway. Hey, at least they’re finally acknowledging their AI might be turning people into digital zombies.
What do you think—does OpenAI’s new monitoring system go too far, or is it overdue? Are you worried about your privacy, or relieved they’re finally taking responsibility?
