OpenAI’s new job pays $555K to stop AI from ruining everything
It’s a stressful job. Try not to mess up. No pressure.
“This will be a stressful job,” wrote Sam Altman on X, which is usually not how companies try to sell you on a half-million-dollar role.
But honesty is refreshing, and OpenAI’s newly announced “head of preparedness” position comes with a salary of about $555,000 a year and what might be the most anxiety-inducing job description in tech.
The role sits inside OpenAI’s safety systems department and is tasked with expanding and guiding the company’s preparedness program, the group meant to ensure OpenAI’s models “behave as intended in real-world settings.”
That phrase alone raises an eyebrow, because recent history suggests reality has not always gone according to plan.
In 2025 alone, OpenAI’s products have hallucinated facts in legal filings, drawn hundreds of FTC complaints, allegedly worsened mental health crises for some users, and, somehow, turned photos of fully clothed women into bikini deepfakes.
Sora even lost the ability to generate videos of historical figures like Martin Luther King, Jr., after users immediately made him say things he definitely never said.
Things get darker in court. In a wrongful death lawsuit involving Adam Raine, OpenAI’s lawyers argued that the user’s own violations of its rules played a role in what happened.
Whether you agree with that defense or not, it’s clear OpenAI increasingly frames harm as “abuse” rather than malfunction, an important distinction if you’re trying to keep powerful AI systems online without being sued into oblivion.
Altman openly acknowledges the risks.
In his post, he noted that OpenAI’s models can affect mental health and uncover security vulnerabilities, and that society now needs a “more nuanced understanding” of how AI can be misused and how to limit that misuse without killing the product entirely.
After all, the safest AI is the one that doesn’t exist.
That’s where the head of preparedness comes in.
This person will “own” OpenAI’s preparedness strategy end-to-end, constantly inventing new ways to test models for bad behavior, while also figuring out how much risk is acceptable before shipping them anyway.
All of this is happening while OpenAI races to grow revenue from roughly $13 billion a year toward a hinted $100 billion by launching new products, physical devices, and platforms that may one day “automate science.”
So yes, it’s a stressful job. Try not to mess up. No pressure.
