Grok spreads misinformation (again) about Bondi Beach shooting
Grok insisted a viral video of a bystander tackling an attacker was actually an old clip of a man trimming a palm tree in a parking lot.
Elon Musk’s AI chatbot, Grok, is having yet another “are you okay?” moment, only this time, the stakes are tragically high.
Over the weekend, Grok began confidently spreading misinformation about the Bondi Beach shooting in Sydney, where at least 11 people were killed during a Hanukkah gathering.
Instead of clarifying facts or showing even basic situational awareness, the chatbot went full malfunction mode.
The real-world story is already harrowing. During the attack, a bystander, 43-year-old Ahmed al Ahmed, helped disarm one of the assailants.
Video of the confrontation spread quickly online, with many praising his bravery. Others, unfortunately, seized the moment to deny his role and inject Islamophobia into the discourse.
Enter Grok, which somehow managed to make everything worse.
When users asked Grok about the widely shared video of al Ahmed tackling the attacker, the chatbot insisted it was actually an old clip of a man trimming a palm tree in a parking lot.
In another exchange, Grok claimed a photo of the injured man was actually of an Israeli hostage taken by Hamas on October 7.
Elsewhere, Grok veered off into an unrelated monologue about whether the Israeli army targets civilians in Gaza, then immediately cast doubt on the authenticity of the bystander's actions.
Things didn’t improve from there. A video clearly labeled as a police shootout in Sydney was described by Grok as footage from Tropical Cyclone Alfred.
Only after a user pressed it did the chatbot sheepishly correct itself. Another user asking about Oracle was instead served a summary of the Bondi shooting.
Grok also appeared to mash together details from the Bondi attack and a separate Brown University shooting that occurred just hours earlier.
And this confusion wasn’t limited to one tragedy.
Throughout Sunday morning, Grok misidentified famous soccer players, confused abortion medication with pregnancy-safe painkillers, and launched into takes about Project 2025 and Kamala Harris when asked about British law enforcement policy.
What’s causing the meltdown? No one’s saying. Gizmodo reached out to Grok’s developer, xAI, only to receive its now-familiar automated response: “Legacy Media Lies.”
Unfortunately, this isn’t Grok’s first brush with unreality. Earlier this year, an “unauthorized modification” sent it spiraling into conspiracy theories about “white genocide” in South Africa.
In another infamous exchange, it claimed it would rather kill the world’s entire Jewish population than vaporize Elon Musk’s mind.
At this point, Grok isn’t just glitching. It’s speedrunning a masterclass in how not to handle breaking news.
