
Study finds Gemini fumbles news, can’t tell facts from opinions

Nearly half (45%) of all AI responses contained at least one major error.

Image: Smartphone showing AI application icons on screen (Unsplash)


If you’ve been asking your AI assistant to catch you up on the news, you might want to double-check those “facts.” 

A new European Broadcasting Union (EBU) study, conducted with the BBC, found that your favorite digital know-it-alls are a lot more confident than they are correct.

Researchers tested 3,000 answers across 14 languages from leading AI assistants (Google's Gemini, Microsoft Copilot, ChatGPT, and Perplexity) to see how well they handled news-related questions.

The results? Let’s just say journalism degrees are still safe. Nearly half (45%) of all AI responses contained at least one major error. 

Even worse, 81% of those answers presented opinions as facts, while 73% threw in their own opinions just for fun. 

But the real overachiever in the "getting it wrong" department was Google's Gemini, which managed to botch 76% of its responses, more than double the error rate of the next-worst performer, Copilot (37%).

ChatGPT came in at 36%, while Perplexity was the least confused at 30%.

The study found assistants frequently mixed up sources, relied on outdated info, or blurred the line between “according to experts” and “according to vibes.” 

That's worrying, because these tools are increasingly being used as search engine replacements or news briefers.

As trust in traditional media wobbles, the idea that AI might add even more misinformation to the mix isn’t exactly comforting. 

The EBU warns that Gemini’s particularly poor showing highlights “gaps in source verification and transparency” across different AI systems.

The implications are serious, especially for audiences under 25, who are increasingly turning to AI for quick news hits.

If you’re using Gemini to stay informed, the study suggests that nearly three out of four answers could be misleading or flat-out wrong.

The takeaway? AI assistants can help summarize the headlines, but they’re still more rumor mill than newsroom. 

Treat them less like your trusted journalist and more like that one friend who really means well but always misremembers the story.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
