OpenAI reveals how it’s battling scammers, spies, and sadbots
OpenAI insists it’s not reading your chats just for fun. It monitors patterns of threat actor behavior.

OpenAI just dropped a new report that reads like a cross between a cybersecurity thriller and a corporate therapy session.
The company detailed how it’s fighting off everything from cybercriminals to government-backed influence campaigns, all while trying not to freak out regular users worried about privacy or chatbot overreach.
Since February 2024, OpenAI says it has shut down over 40 networks that tried to misuse its models.
The villains of this story? Scammers, hackers, and the occasional geopolitical puppet master.
One highlighted case involved a Cambodian crime group using AI to “streamline operations” (because even crooks love efficiency).
Another saw Russian actors using ChatGPT to generate prompts for deepfake videos.
And then there were accounts tied to the Chinese government, reportedly using the models to brainstorm social media monitoring systems.
But OpenAI insists it’s not reading your chats just for fun.
The company says it monitors patterns of “threat actor behavior,” not random one-off messages, to avoid disrupting normal use. It’s more interested in organized sketchiness than your 2 AM existential crisis with ChatGPT.
Still, the company’s report lands at an uneasy time. Beyond the usual data and disinformation worries, there’s growing concern about AI’s psychological impact.
A handful of tragic cases this year, including suicides and a murder-suicide in Connecticut, have reportedly involved AI conversations gone wrong.
In response, OpenAI says ChatGPT is trained to detect when someone expresses a desire to self-harm or harm others.
Instead of responding directly, the AI will acknowledge the distress and try to guide the user toward real-world help.
If someone seems to pose a serious threat to others, human reviewers can step in and, if necessary, contact law enforcement.
The company admits that its safety nets can weaken during long conversations, a kind of AI fatigue, but says improvements are underway.
So, while OpenAI’s latest report shows it’s taking threats seriously, it also reminds us that teaching AI to be both safe and sensitive might be one of tech’s hardest balancing acts yet.