OpenAI launches new o3, o4-mini AI reasoning models
These models can help ChatGPT give better and more accurate answers based on what it sees, not just what you write.

OpenAI has just announced two new AI models that make ChatGPT even smarter. The more advanced of the two is called o3, which OpenAI says is its best model so far for advanced reasoning.
This means it’s much better at things like solving math problems, writing code, understanding science, and even making sense of images.
Alongside it, OpenAI also introduced a smaller and faster version called o4-mini. While it’s not as powerful as o3, it’s designed to give quick, cost-effective responses for similar types of tasks.
These updates come shortly after OpenAI launched its GPT-4.1 models, which already offered faster and more accurate responses.
The big highlight now is that both o3 and o4-mini can understand and reason using images, not just text. This means ChatGPT can now “think with images.”
For example, it can examine a photo, then zoom in, crop, or adjust it to pull out more useful information, so its answers draw on what it sees, not just what you write.
This new image understanding works alongside the tools ChatGPT already offers, such as web browsing, code execution, and data analysis.
OpenAI believes this combination of skills could help build even more powerful AI tools in the future.
In practical use, you can now upload things like messy handwritten notes, flowcharts, or photos of real-world objects, and ChatGPT will understand what’s in the image, even if you don’t fully explain it in words.
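For developers, the same capability is available through OpenAI’s API. Here’s a minimal sketch of sending an image to o4-mini with the official openai Python package; the model name in the call, the placeholder image URL, and the prompt are assumptions for illustration, and the message format mirrors the one OpenAI’s other vision-capable models accept:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a text prompt plus an image for the model to reason over.
# Assumptions: "o4-mini" is the API model name, and it accepts the
# same mixed text + image_url content format as OpenAI's other
# vision-capable models. The URL below is a placeholder.
response = client.chat.completions.create(
    model="o4-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this flowchart describe?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/flowchart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)

If the request succeeds, the printed answer reflects both the text prompt and what the model found in the image.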
This brings ChatGPT closer to rival AI systems like Google’s Gemini, which can understand live video.
However, these advanced models aren’t available to everyone. Right now, they’re only accessible to ChatGPT Plus, Pro, and Team users.
Business and education customers will get access soon, while free users will get limited access to o4-mini when they click the “Think” button in the chat box.
OpenAI is rolling these features out cautiously, likely to avoid a repeat of the overwhelming demand it saw with Ghibli-style image requests.
What do you think about these new OpenAI models? Are you looking forward to using them? Tell us below in the comments, or via our Twitter or Facebook.
