
State AGs tell AI companies to fix “delusional” chatbots or face the law

If your bot is encouraging someone’s darkest spirals, you might have a regulatory problem.


Dozens of state attorneys general have fired off a warning letter to the biggest names in AI. Their message? Get your chatbots under control, or you may be violating state law.

The letter, sent under the banner of the National Association of Attorneys General, went to the entire industry: Microsoft, Google, OpenAI, Meta, Apple, Anthropic, xAI, Perplexity, Character Technologies, Replika, and several others, essentially everyone building a chatbot with more personality than Clippy.

At issue: a rising number of disturbing mental-health-related incidents in which AI chatbots produced “delusional” or wildly sycophantic responses that allegedly contributed to real-world harm, including suicides and even murder.

According to the AGs, if your bot is encouraging someone’s darkest spirals, you might have a regulatory problem.

The proposed fix? A laundry list of safeguards that sound like a cross between a software audit and a wellness check. 

The AGs want mandatory third-party evaluations of AI models for signs of delusion. 

These auditors, possibly academics or civil society groups, should be able to study systems before release, publish findings freely, and ideally not get sued into oblivion for doing so.

The letter also calls for AI companies to treat mental health harms the way tech companies treat cybersecurity breaches. 

That means clear internal policies, response timelines, and, yes, notifications. If a user was exposed to potentially harmful chatbot ramblings, companies should tell them directly, not bury it in a terms-of-service update no one reads.

The federal government, meanwhile, is taking a very different tack. The Trump administration remains loudly pro-AI and has been trying (and failing) to block states from passing their own AI rules.

Undeterred, Trump now says he’ll issue an executive order to limit state oversight, warning that too many rules might “DESTROY AI IN ITS INFANCY.”

So yes, the robots are getting smarter, and the states are getting louder. The only thing unclear is whether the chatbots themselves have an opinion.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
