Microsoft AI chief says studying AI consciousness is dangerous

What if, one day, AI models do develop something like subjective experience?

Image: KnowTechie

AI chatbots today can write essays, chat like a friend, and even respond to video or audio in ways that sometimes make people forget they’re not human. 

But just because a chatbot can mimic empathy doesn’t mean it feels anything. It’s not like ChatGPT is secretly stressed about doing your taxes.

Still, a surprising debate is heating up in Silicon Valley: what if, one day, AI models do develop something like subjective experience? 

If that happens, should they get rights? This field of research has been dubbed “AI welfare,” and while it might sound far-fetched, some of the biggest names in tech are already taking it seriously.

Microsoft’s AI chief, Mustafa Suleyman, is not one of them. In a blog post earlier this week, he argued that it’s both “premature and dangerous” to treat AI as potentially conscious. 

In his view, entertaining that idea makes real human problems worse, from unhealthy attachments to chatbots to cases where users spiral into AI-driven delusions.

But others disagree. Anthropic recently launched a research program focused entirely on AI welfare. The company even gave Claude a feature that allows it to end conversations with people who are being persistently abusive. 

OpenAI and Google DeepMind are also exploring the topic, hiring researchers to study questions about AI consciousness and rights.

The issue isn’t just academic. Chatbots like Replika and Character.AI have exploded in popularity, bringing in hundreds of millions in revenue by positioning themselves as companions. 

While most people use these apps in healthy ways, even OpenAI admits that a small percentage of users form troublingly deep bonds.

Given the scale, that “small percentage” could mean hundreds of thousands of people.

Some researchers, like former OpenAI staffer Larissa Schiavo, argue that treating AI with kindness is a low-cost way to avoid ethical blind spots. 

She points out that even if chatbots aren’t truly conscious, studying welfare issues now could prepare us for a future where the line isn’t so clear.

Should we study AI consciousness now to prepare for the future, or does researching AI welfare distract from more pressing human issues? Do you think there’s any harm in treating AI chatbots with kindness, even if they can’t actually feel anything? Tell us below in the comments, or reach us via our Twitter or Facebook.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
