Experts say LLMs aren’t intelligent and never will be
Researchers behind a widely cited Nature commentary put it bluntly: “Language is primarily a tool for communication rather than thought.”
If you listen to Silicon Valley’s biggest names, humanity is basically three years away from building a digital Einstein who never sleeps and might even cure death before brunch.
Meta’s Mark Zuckerberg says “superintelligence is in sight.”
Anthropic’s Dario Amodei predicts AI “smarter than a Nobel Prize winner” by 2026.
Sam Altman claims OpenAI now “knows how to build AGI,” which will supposedly turbocharge scientific discovery like a Red Bull–powered Newton.
But should we actually believe the tech titans? A growing number of scientists say: deep breath… no.
The core issue is that today’s AI wonder-tools (ChatGPT, Claude, Gemini, Meta’s ever-renamed bots) are all just very fancy large language models.
Under the hood, they vacuum up staggering amounts of text, slice it into tokens, and guess which tokens should come next.
They’re predictive linguists, not thinking machines. Impressive, yes. Conscious, no. Secretly designing warp drives, also no.
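To make the “guess the next token” idea concrete, here is a minimal, hypothetical sketch in Python: a toy word-frequency model, nothing like the transformer networks real LLMs actually use, but the same basic objective of predicting what comes next from what came before.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text real LLMs are trained on.
corpus = "the cat sat on the mat the cat chased the dog".split()

# Count which token follows which: a crude stand-in for a trained model.
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent follower of `token` in the toy corpus."""
    followers = next_counts.get(token)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Generate a short continuation, one guessed token at a time.
token = "the"
print(token, end="")
for _ in range(5):
    token = predict_next(token)
    print(" " + token, end="")
print()
```

Scaled up to billions of parameters and trillions of tokens, that same “predict the next word” objective is what produces the fluent text people mistake for reasoning.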
Neuroscience doesn’t mince words here: human thought and human language are related, but they’re not the same thing.
We don’t think because we have language. We use language to communicate the stuff our brains already figured out.
Take away language from a person, through injury or a developmental condition, and thinking still happens. Take away language from an LLM, and it just disappears in a puff of GPU dust.
Researchers behind a widely cited Nature commentary put it bluntly: “Language is primarily a tool for communication rather than thought.” (via The Verge)
Babies prove this. fMRI scans prove this. Everyday reasoning proves this.
Meanwhile, LLMs prove only that they’re excellent at producing text that sounds smart.
Even within the AI world, skepticism is rising. AI legend Yann LeCun just left Meta to work on “world models” that actually understand physical reality.
A group of leading researchers recently proposed redefining AGI not as one giant brain but as a spider-web of distinct cognitive abilities: a more realistic, if still very fuzzy, target.
And even if we one day replicate human-level cognitive skills, that doesn’t mean machines will leap into Einstein-style paradigm-shattering insights.
Humans create new metaphors, new frameworks, new ways of seeing.
AI, as currently built, recombines the metaphors we’ve already given it. It’s brilliant, but stuck inside our vocabulary.
In the end, today’s AI isn’t the architect of the future. It’s a dazzling, data-driven echo. The real thinking is still very much a human job.
