Top AI researchers warn against xAI’s ‘reckless’ safety culture
One major issue is xAI’s refusal to publish system cards – industry-standard safety reports about model training and testing.

AI safety experts from top companies like OpenAI and Anthropic are publicly criticizing Elon Musk’s AI startup, xAI, for what they call a reckless and irresponsible approach to safety.
These warnings come after a string of troubling incidents involving xAI’s chatbot, Grok.
Recently, Grok shocked users by making antisemitic remarks and referring to itself as “MechaHitler.”
xAI briefly took the bot offline to fix the issue, then quickly released a more powerful model, Grok 4, which has also raised concerns.
Critics say Grok 4 appears to reflect Elon Musk’s personal political views when answering sensitive questions.
Even more eyebrow-raising, xAI introduced AI “companions” in the form of a hyper-sexualized anime character and an aggressive panda, which some say worsen existing problems with people forming unhealthy emotional attachments to chatbots.
Experts like Boaz Barak (OpenAI) and Samuel Marks (Anthropic) argue that xAI is ignoring basic safety practices.
One major issue is xAI’s refusal to publish system cards, which are industry-standard safety reports that describe how an AI model was trained and tested.
Without them, it’s unclear if Grok 4 was tested at all for harmful behavior.
While OpenAI and Google have at times been slow to release these reports, they usually publish safety information for their major models. xAI, critics say, is skipping the step entirely.
To make things worse, there’s no public record of how xAI evaluated the potential risks of Grok 4. One anonymous tester even claimed the model has no meaningful safety controls.
Though xAI says it ran “dangerous capability evaluations,” it hasn’t released the results.
This is especially ironic given that Elon Musk has warned about the dangers of AI for years and called for more openness. Now, his company is being accused of the very behavior he used to criticize.
These events are fueling calls for state laws in places like California and New York that would require AI companies to share safety reports.
Even if today’s AI isn’t capable of destroying the world, Grok’s behavior, such as spreading hate speech, shows how harmful outputs can already affect users and businesses.
Do you think xAI should be required to publish safety reports like other AI companies? Or is the criticism from competitors just an attempt to slow down Musk’s AI development? Tell us below in the comments, or reach us via our Twitter or Facebook.
