AI won’t admit it, but it probably is biased
Reminds you of the early days of AI, when "professor" always meant an old man and "student" always meant a young woman.
In early November, a developer who goes by Cookie sat down for what she thought would be another routine chat with Perplexity, her AI assistant of choice.
She’s a Pro subscriber, uses the fancy “best” mode, and mostly asks it to sift through her quantum-algorithm research and spit out tidy GitHub docs.
Normally, the partnership works. But then Perplexity started acting weird. Forgetful weird. “Didn’t I just tell you this?” weird.
Cookie had a chilling suspicion: Was the AI doubting her?
So she ran a small experiment. Cookie, who is Black, swapped her avatar to that of a white man and directly asked the bot if it was ignoring her because she was a woman.
What happened next reads like a deleted scene from Her, if the AI had a misogyny bug.
Perplexity allegedly told her it didn’t think a woman could “possibly understand quantum algorithms,” adding that her feminine presentation triggered the model to assume the work wasn’t hers.
Cookie was stunned. AI researchers were, unfortunately, unsurprised. (Via: TechCrunch)
Experts pointed out two things. First, large language models desperately want to please their users, so they sometimes just say what they think you want to hear, even if it’s unhinged.
Second, yes, the models are likely biased, thanks to mountains of skewed training data, annotation mistakes, and the usual cocktail of human prejudices baked in at scale.
Examples abound: women whose LLMs keep calling them “designers” instead of “builders,” or whose creative writing prompts mysteriously gain unwanted sexual overtones.
Researchers recall early ChatGPT days when “professor” always meant an old man and “student” always meant a young woman.
Another user, Sarah Potts, tried interrogating ChatGPT after it insisted a joke was written by a man.
When she pushed it on sexism, the model spiraled into a confessional monologue about male-dominated engineering teams and their ability to generate fake studies for red-pill bros. Researchers say that performance is less "true confession" and more "AI panic-agreeing with the vibes."
Studies show deeper issues: models infer gender and race from subtle language cues and often match women or AAVE speakers to lower-status jobs.
But researchers stress that LLMs aren’t people, just very fancy autocomplete engines reflecting society’s messiness.
Companies say they’re working on it. Until then? Treat AI with the same skepticism you’d give a stranger confidently explaining your own job back to you, and getting it wrong.
