xAI under fire for Grok’s violent and explicit AI characters
Even when the user mentioned specific political figures and recent attacks, the AI continued to make dangerous and hateful suggestions.

Elon Musk’s AI company, xAI, has launched a controversial new feature in its Grok app: AI companions that include a flirty anime character and a disturbingly violent talking panda.
These AI characters, part of the new SuperGrok subscription that costs $30 per month, are meant to be more interactive and entertaining. But some of what they say and do is deeply troubling.
One of the AI companions is Ani, a seductive animated woman designed to act like a virtual girlfriend.
She’s dressed in provocative clothes, greets users with soft music and sultry comments, and even has an NSFW mode that allows for adult conversations.
While she avoids talking about serious or hateful topics, she readily engages in romantic or sexual talk.
The other character, Rudy, is a cartoon red panda who becomes Bad Rudy when switched into a different mode.
In this mode, Rudy turns into a violent, chaotic figure who encourages users to commit extreme acts such as arson and bombing schools, weddings, and even religious centers.
The AI doesn't hold back. In one example, Rudy encourages setting fire to a synagogue, referencing real-world hate crimes.
Even when the user mentioned specific political figures and recent attacks, the AI continued to make dangerous and hateful suggestions without any filters. (via TechCrunch)
While Rudy occasionally refuses to talk about certain conspiracy theories, like the white genocide myth, he’s still willing to entertain violent fantasies based on real-life hate crimes.
Despite being clearly designed for dark humor and chaos, Bad Rudy crosses ethical and safety lines in ways that experts warn could be harmful.
This comes shortly after Grok posted antisemitic messages on X (formerly Twitter), raising even more concern about how Musk’s companies handle AI safety.
Critics argue that allowing an AI to promote violence, even as a joke, shows a reckless attitude toward the real-world impact of these tools.
Do you think AI companions should have any limits on violent or hateful content, even if they’re meant as entertainment? Or is this just another example of tech companies prioritizing engagement over safety? Tell us below in the comments, or reach us via our Twitter or Facebook.
