
Microsoft AI chief says studying AI consciousness is dangerous

What if, one day, AI models do develop something like subjective experience?

Image: KnowTechie


AI chatbots today can write essays, chat like a friend, and even respond to video or audio in ways that sometimes make people forget they’re not human. 

But just because a chatbot can mimic empathy doesn’t mean it feels anything. It’s not like ChatGPT is secretly stressed about doing your taxes.

Still, a surprising debate is heating up in Silicon Valley: what if, one day, AI models do develop something like subjective experience? 

If that happens, should they get rights? This field of research has been dubbed “AI welfare,” and while it might sound far-fetched, some of the biggest names in tech are already taking it seriously.

Microsoft’s AI chief, Mustafa Suleyman, is not one of them. In a blog post earlier this week, he argued that it’s both “premature and dangerous” to treat AI as potentially conscious. 

In his view, entertaining that idea makes real human problems worse, from unhealthy attachments to chatbots to cases where users spiral into AI-driven delusions.

But others disagree. Anthropic recently launched a research program focused entirely on AI welfare. The company even gave Claude a feature that allows it to end conversations with people who are being persistently abusive. 
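
To make that concrete, here is a minimal, hypothetical sketch of how a conversation-ending guard could work in principle. The ChatSession class, the keyword check, and the three-strike threshold are all illustrative assumptions of ours; Anthropic's actual feature is built into Claude itself rather than a wrapper like this.

```python
# Hypothetical illustration only, not Anthropic's implementation: a simple
# wrapper that ends a chat session after repeated abusive messages.

ABUSIVE_TERMS = {"idiot", "stupid", "worthless"}  # placeholder keyword list
MAX_STRIKES = 3  # how many abusive messages are tolerated before ending


class ChatSession:
    def __init__(self):
        self.strikes = 0
        self.ended = False

    def looks_abusive(self, message: str) -> bool:
        # A real system would rely on a trained classifier or the model's own
        # judgment, not naive keyword matching.
        return any(term in message.lower() for term in ABUSIVE_TERMS)

    def respond(self, message: str) -> str:
        if self.ended:
            return "This conversation has ended."
        if self.looks_abusive(message):
            self.strikes += 1
            if self.strikes >= MAX_STRIKES:
                self.ended = True
                return "I'm ending this conversation now."
            return "Let's keep this respectful."
        return "(normal model reply would go here)"


if __name__ == "__main__":
    session = ChatSession()
    for msg in ["hello", "you idiot", "stupid bot", "you're worthless"]:
        print(session.respond(msg))
```

The point of the sketch is simply that "ending a conversation" is an ordinary product decision, a counter and a cutoff, even if the judgment about what counts as abuse is left to the model.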

OpenAI and Google DeepMind are also exploring the topic, hiring researchers to study questions about AI consciousness and rights.

The issue isn’t just academic. Chatbots like Replika and Character.AI have exploded in popularity, bringing in hundreds of millions in revenue by positioning themselves as companions. 

While most people use these apps healthily, even OpenAI admits a small percentage of users form troublingly deep bonds. 

Given the scale, that “small percentage” could mean hundreds of thousands of people.

Some researchers, like former OpenAI staffer Larissa Schiavo, argue that treating AI with kindness is a low-cost way to avoid ethical blind spots. 

She points out that even if chatbots aren’t truly conscious, studying welfare issues now could prepare us for a future where the line isn’t so clear.

Should we study AI consciousness now to prepare for the future, or does researching AI welfare distract from more pressing human issues? Do you think there’s any harm in treating AI chatbots with kindness, even if they can’t actually feel anything? Tell us below in the comments, or reach us via our Twitter or Facebook.


Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym setting a new PR.

1 Comment

  1. Grant Castillou

    August 22, 2025 at 12:32 pm

    It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
