FDA, OpenAI discussing use of AI for drug evaluation
The FDA is looking into how AI could help with final drug approvals and improve internal tasks like checking whether an application is complete.

The US Food and Drug Administration (FDA) has been meeting with OpenAI to explore how AI could help make the drug approval process faster and more efficient.
This effort is part of a broader initiative to modernize the way new medicines are evaluated and brought to market, a process that currently takes over 10 years on average.
FDA Commissioner Marty Makary recently hinted at the use of AI in drug reviews, saying the agency had just completed its first AI-assisted review (via Wired).
Although he didn’t specifically mention OpenAI, sources say that a small OpenAI team has met with FDA officials multiple times.
They’ve discussed a project called cderGPT, seemingly named for the FDA’s Center for Drug Evaluation and Research (CDER), which appears to focus on improving how the agency evaluates drug applications.
These talks are being led by Jeremy Walsh, the FDA’s first-ever AI officer. So far, no formal agreement has been reached.
The FDA is also working with other tech-savvy figures, including Peter Bowman-Davis, reportedly the acting chief AI officer at the Department of Health and Human Services.
The agency is looking into how AI could not only help with final drug approvals but also improve internal tasks like checking whether an application is complete.
Experts support using AI for simple, repetitive parts of the drug review process but caution that it must be trained on the right data to be effective and safe.
Some also worry that AI tools like ChatGPT can “hallucinate,” producing misleading or incorrect information, which could be risky in a medical setting.
The FDA already has ways to speed up the approval process for promising drugs, such as Fast Track and Breakthrough Therapy designations. AI could add another layer of efficiency, but it must be carefully regulated.
The agency is investing in its own AI research, even offering fellowships to develop AI models for medicine and drug approval.
Meanwhile, OpenAI is working on government-ready versions of its tools, like ChatGPT Gov, to meet federal security standards and potentially support sensitive health-related tasks in the future.
What do you think about the use of AI for such purposes? Is it reliable enough to be used in regulatory decisions in any capacity? Tell us your thoughts below in the comments, or via our Twitter or Facebook.
