Former OpenAI engineer reveals what it’s like to work at OpenAI
He wasn’t unhappy or caught in any drama—he simply wanted to return to his roots as a startup founder.

Calvin French-Owen, a software engineer who worked at OpenAI for a year, recently left the company and shared his experience in a detailed blog post.
He didn’t leave unhappy or amid any drama; he simply wanted to return to his roots as a startup founder, having previously co-founded Segment (later acquired by Twilio for $3.2 billion).
In his post, French-Owen gave a behind-the-scenes look at OpenAI, especially during the intense weeks his team spent building Codex, a coding tool that competes with apps like Cursor and Claude Code.
His team of about 17 people built and launched Codex in just seven weeks, working long hours with barely any sleep. (Via: TechCrunch)
When it launched inside ChatGPT, users adopted it almost immediately, a sign of ChatGPT’s enormous reach.
OpenAI has grown extremely fast, tripling in size from 1,000 to 3,000 employees in just a year.
That kind of rapid growth created chaos: poor communication, unclear reporting lines, and overlapping work across teams.
For example, multiple teams were building similar tools without knowing it.
The company still feels like a startup in many ways—there’s little red tape, employees can take initiative freely, and everything still runs on Slack.
But the codebase, a sprawling back-end monolith, can get messy, partly because contributors range from veteran engineers to fresh PhDs, and they all work in Python, a language that’s easy to pick up but just as easy to write inconsistently.
French-Owen also addressed misconceptions about OpenAI. While the public often worries that the company isn’t focused enough on AI safety, he said that’s not true.
Inside, there’s serious attention given to real-world risks like hate speech, misinformation, political manipulation, and even bioweapons, not just science fiction-style “AI takes over the world” scenarios.
OpenAI is also very secretive and watches public reaction closely, especially on X (formerly Twitter). If a post about ChatGPT goes viral, the company notices.
Do you think OpenAI’s rapid growth and startup culture helps or hurts their ability to develop AI safely? Or is the chaos worth it if they can innovate faster than competitors? Tell us below in the comments, or reach us via our Twitter or Facebook.
