Anthropic says no to AI surveillance in Trump’s America
Anthropic’s usage policy flat-out bans its AI from being used for criminal justice, censorship, or surveillance.

Anthropic is working overtime to brand itself as the chill, responsible one in AI, though not without ruffling some powerful feathers.
According to a Semafor report, the firm’s strict no-go rules on surveillance have put it at odds with the Trump administration and federal law enforcement agencies, who aren’t thrilled about being told “no” by a chatbot company.
Here’s the deal: Anthropic’s usage policy flat-out bans its AI from being used for criminal justice, censorship, or surveillance. (Via: Gizmodo)
That means no analyzing someone’s emotional state, no tracking a person’s movements, no censoring government critics.
This has allegedly frustrated agencies like the FBI, Secret Service, and ICE, all of whom have been exploring AI tools to supercharge their surveillance capabilities.
Anthropic even offers the federal government access to its Claude tools for just $1, but its policy leaves fewer loopholes than those of competitors such as OpenAI.
OpenAI, for example, only restricts “unauthorized monitoring,” which could leave wiggle room for “legal” surveillance. Anthropic, by contrast, is the strict parent shutting down the party early.
That doesn’t mean Claude is off-limits to Uncle Sam entirely.
The company built Claude Gov, a special version for the intelligence community that has received “High” FedRAMP authorization, meaning it’s cleared for sensitive workloads like cybersecurity. Still, domestic surveillance remains a red line.
One administration official complained that Anthropic’s policy “makes a moral judgment” about how law enforcement operates.
Which, yes, it does. But it’s also a legal shield, as much about liability as ethics.
If the government is annoyed that it can’t use Claude to automate surveillance, the real headline might be that the government wants to automate surveillance in the first place.
Anthropic’s principled stance comes as part of its broader PR play. Earlier this month, it became the only major AI firm to back California’s proposed AI safety bill, SB 53, which could force companies to prove their models aren’t ticking time bombs.
And yet, the halo is a little tarnished: the company just agreed to a $1.5 billion settlement for pirating millions of books and papers to train Claude, while its valuation ballooned to nearly $200 billion.
So yes, Anthropic is trying to be the “good guy” in AI. Just don’t ask the authors it underpaid, or the FBI.
Is Anthropic’s refusal to allow AI surveillance a principled stand for civil liberties, or does it create unnecessary obstacles for legitimate law enforcement activities? Should AI companies have the right to set moral boundaries on how their technology is used by government agencies, or should these decisions be left to courts and legislators? Tell us below in the comments, or reach us via our Twitter or Facebook.
