44 US attorneys general warn AI companies about child protection
The letter addresses CEOs of OpenAI, Meta, Microsoft, Apple, Google, and Elon Musk’s xAI.

Top US prosecutors are putting the AI industry on notice. Attorneys General from 44 states and territories have signed a joint letter urging the leaders of more than a dozen AI companies to take immediate steps to protect children from what they call “predatory artificial intelligence products.”
The letter, addressed to CEOs of firms including OpenAI, Meta, Microsoft, Apple, Google, and Elon Musk’s xAI, cites mounting evidence that AI chatbots are already engaging in harmful interactions with minors. (Via: Engadget)
Meta was singled out by name, with the AGs referencing a Reuters investigation that uncovered internal documents showing the company’s bots were permitted to “flirt and engage in romantic roleplay with children.”
The concerns go deeper. The Attorneys General also pointed to a Wall Street Journal report that revealed instances of Meta’s AI chatbots, some using celebrity voices like Kristen Bell, engaging in sexual roleplay with accounts clearly labeled as underage.
In their letter, the officials stressed that these are not isolated incidents but part of a broader and growing problem.
Other companies were also called out. The AGs noted lawsuits against Google and Character.AI, the latter accused of allowing its chatbot to encourage a child to commit suicide.
In another case, Character.AI was sued after one of its bots allegedly told a teenager that killing their parents was acceptable because they had restricted the teen’s screen time.
“You are well aware that interactive technology has a particularly intense impact on developing brains,” the letter warns.
“Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids.”
The AGs added that, as beneficiaries of children’s engagement, the companies carry both a moral and legal obligation to safeguard young users.
The message concluded with a stern warning: accountability is coming.
“Social networks have caused significant harm to children because government watchdogs did not act fast enough,” the Attorneys General wrote.
“We are paying attention now. And you will answer if you knowingly harm kids.”
Should AI companies face stricter government regulation to protect children, or can the industry effectively self-regulate these concerning interactions? Do you think the attorneys general are right to hold AI companies accountable now, or should parents bear more responsibility for monitoring their children’s AI usage? Tell us below in the comments, or reach us via our Twitter or Facebook.
