OpenAI Unveils New AI Models Focused on In-Depth Reasoning

OpenAI has presented two new artificial intelligence models, o3 and o4-mini, designed to think more deeply before producing an answer. The company calls o3 the most powerful model in its history: it delivers strong results in mathematics, logic, programming, the natural sciences and visual analysis. o4-mini, meanwhile, is aimed at developers looking for a balance of cost, performance and speed.
Both models support ChatGPT features, including web browsing, Python code execution, and image and PDF processing. Alongside them, an o4-mini-high variant is being introduced, which spends more time composing its answers to achieve higher accuracy. The new models are available to users on the Pro, Plus and Team subscription plans, as well as to developers through the API.
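For developers, a call to one of the new models looks the same as any other chat request. The sketch below is a minimal, hedged example assuming the official openai Python package and an OPENAI_API_KEY set in the environment; the prompt is purely illustrative.

```python
# Minimal sketch: calling one of the new reasoning models through the API.
# Assumes the official "openai" Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",  # or "o3" for the larger reasoning model
    messages=[
        {"role": "user", "content": "Walk through a proof that sqrt(2) is irrational."}
    ],
)
print(response.choices[0].message.content)
```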
Launching the new models is part of OpenAI’s strategy to compete with Google, Meta, xAI, Anthropic and DeepSeek. Although OpenAI was the first to introduce a reasoning model with o1, rivals quickly closed the gap. On the SWE-bench benchmark without custom scaffolding, o3 posted the top score of 69.1%, while o4-mini reached 68.1%. For comparison, Claude 3.7 Sonnet scored 62.3%, and the earlier o3-mini scored 49.3%.
The new models are capable of visual reasoning: they can analyze sketches, charts, PDF excerpts and other images before forming an answer. They can also interpret blurry or low-quality images, and zoom and rotate images during analysis.
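Images are passed alongside the text prompt. The following is a hedged sketch assuming the Chat Completions image_url content format; the chart URL is a placeholder, not a real resource.

```python
# Sketch: sending an image to a reasoning model for visual analysis.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                # Placeholder URL; a base64 data URL also works here.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```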
On pricing: o3 costs $10 per million input tokens and $40 per million output tokens, while o4-mini costs $1.10 and $4.40 respectively. An o3-pro release is expected later, and it will be available only to ChatGPT Pro subscribers. These are likely to be the last standalone reasoning models in the line before the arrival of GPT-5.
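As a quick back-of-the-envelope check of the prices quoted above, the token counts in this sketch are illustrative placeholders, not measured usage.

```python
# Cost of a single request at the listed per-million-token rates.
PRICES = {  # (input, output) in USD per 1M tokens
    "o3": (10.00, 40.00),
    "o4-mini": (1.10, 4.40),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the per-million rates above."""
    price_in, price_out = PRICES[model]
    return (input_tokens / 1_000_000) * price_in + (output_tokens / 1_000_000) * price_out

# Example: a 50,000-token prompt with a 5,000-token answer.
print(f"o3:      ${request_cost('o3', 50_000, 5_000):.4f}")       # $0.7000
print(f"o4-mini: ${request_cost('o4-mini', 50_000, 5_000):.4f}")  # $0.0770
```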