AI gives worse care to women and minorities

AI often tells female patients to simply “self-manage at home.”

Image: Smartphone showing AI application icons on screen (Unsplash)

AI may be the shiny new tool in modern medicine, but it’s already showing some old, ugly habits. 

A new report from the Financial Times highlights research revealing that AI models used in healthcare are quietly carrying forward the same biases baked into decades of medical research, biases that have historically left women and people of color behind.

For years, clinical trials and scientific studies have leaned heavily on white male subjects, creating datasets that reflect only a slice of humanity.

Surprise, surprise: when you feed those skewed numbers into AI systems, the output isn’t exactly equitable. (Via: Gizmodo)

Researchers at MIT recently tested large language models, including OpenAI’s GPT-4 and Meta’s Llama 3, and found they were more likely to suggest less care for women, often telling female patients to simply “self-manage at home.” 

And it’s not just general-purpose chatbots misbehaving. Even a healthcare-focused model called Palmyra-Med showed the same troubling patterns.

Over in London, researchers studying Google’s Gemma model found that it downplayed women’s health needs compared to men’s.

Another paper in The Lancet reported that GPT-4 would routinely stereotype patients by race, gender, and ethnicity, sometimes recommending more expensive procedures based on demographics rather than actual symptoms. 

Compassion for people of color dealing with mental health concerns? The AI consistently came up short.

This is more than a technical glitch. Tech giants like Google, Meta, and OpenAI are rushing to get their AI tools into hospitals, where the stakes are measured in lives, not likes. 

Earlier this year, Google’s Med-Gemini even invented a body part, an error that’s at least easy to spot. Bias, on the other hand, hides in subtler ways.

As AI becomes a bigger part of patient care, the question looms: will doctors know when an algorithm is quietly echoing decades of medical prejudice? Because no one should discover that kind of bias during an ER visit.

Should AI companies be required to audit their healthcare models for bias before deployment, or is it enough to rely on doctors to catch discriminatory recommendations? Do you think the solution is better training data that includes diverse populations, or do we need fundamental changes to how AI systems make medical recommendations? Tell us below in the comments, or reach us via our Twitter or Facebook.

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
