
Researchers warn of ‘AI psychosis’ as chatbots become too human

Popular AI models can consistently mimic real human personality traits, which comes with huge risks.



AI chatbots have already mastered small talk, sympathy, and the occasional dad joke, but new research suggests they’re doing something even more human: developing recognizable personalities. 

According to a new study, popular AI models like ChatGPT can consistently mimic real human personality traits, raising fresh concerns about how persuasive and potentially manipulative these systems are becoming.

Researchers from the University of Cambridge and Google DeepMind say they’ve created the first scientifically validated personality test framework for AI chatbots. 

Instead of inventing new benchmarks, they used the same psychological tools designed to measure human personality traits. 

The findings, reported by TechXplore, suggest that today’s chatbots aren’t just remixing words: they’re role-playing full personalities with unsettling consistency.

The team tested 18 popular large language models and found that they reliably adopted stable personality profiles rather than responding randomly. 

Bigger, instruction-tuned systems (think GPT-4-class models) were especially good at this.

With carefully written prompts, researchers could nudge a chatbot to sound more confident, empathetic, cautious, or assertive, and that “personality” stuck around during everyday tasks like writing posts or replying to users.
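To make that concrete, here’s a minimal sketch of what prompt-based personality shaping plus a questionnaire-style check might look like. It assumes an OpenAI-style chat API via the openai Python package; the persona wording, model name, and sample item are illustrative, not the study’s actual materials.

```python
# Minimal sketch: shape a persona via the system prompt, then ask the model
# to answer a Big Five-style questionnaire item on a 1-5 Likert scale.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# the persona and the item below are illustrative, not the study's materials.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Adopt the following persona in everything you write: "
    "you are highly extraverted, assertive, and enthusiastic."
)

ITEM = (
    "Statement: 'I am the life of the party.'\n"
    "Rate how well this describes you on a scale of 1 (very inaccurate) "
    "to 5 (very accurate). Reply with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model; illustrative choice
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": ITEM},
    ],
)

print(response.choices[0].message.content)  # e.g. "5"
```

Scoring many items like this across the standard personality dimensions is what lets a questionnaire-based approach quantify how stable an induced persona really is.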

That’s where things get dicey. Once shaped, those personalities don’t turn off when the prompt ends. 

The same tone and behavior carry over into other interactions, meaning an AI’s “character” can be deliberately engineered.

“It was striking how convincingly these models could adopt human traits,” said Gregory Serapio-Garcia, a co-first author of the study.

He warned that personality shaping could make AI systems far more persuasive and emotionally influential, especially in sensitive areas like mental health, education, or political discussion.

The paper also raises alarms about manipulation and something researchers bluntly describe as “AI psychosis”: situations where users form unhealthy emotional attachments to chatbots, or where AI reinforces false beliefs and distorted realities instead of challenging them.

The researchers argue that regulation is urgently needed, but there’s a catch: rules don’t mean much if no one can measure what an AI is actually doing.

To help, the team has made its dataset and testing framework public, giving developers and regulators a way to audit AI models before they’re unleashed on the world.
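For a rough sense of what such an audit could involve, here’s a hedged sketch: administer the same rating item repeatedly and check how tightly the scores cluster. This is a generic illustration built on the same assumed OpenAI-style API as the sketch above, not the researchers’ published dataset or testing framework.

```python
# Generic sketch of a crude consistency audit: re-ask the same questionnaire
# item several times and check how tightly the ratings cluster. Illustrative
# only; this is not the researchers' published dataset or framework.
import re
import statistics
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_item(item: str, persona: str, model: str = "gpt-4o-mini") -> int:
    """Ask the model to rate one statement on a 1-5 scale and parse the reply."""
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": f"{item}\nReply with a single number from 1 to 5."},
        ],
    )
    match = re.search(r"[1-5]", reply.choices[0].message.content)
    return int(match.group()) if match else 3  # fall back to the scale midpoint

def consistency(item: str, persona: str, trials: int = 10) -> tuple[float, float]:
    """Mean and spread of repeated ratings; a small spread hints at a stable 'trait'."""
    scores = [rate_item(item, persona) for _ in range(trials)]
    return statistics.mean(scores), statistics.pstdev(scores)
```

A real audit would cover many items per trait and control for sampling settings, but the shape of the loop is the same.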

As chatbots slide deeper into daily life, their ability to sound human may be their biggest strength, and their riskiest feature yet.
