Hyung Won Chung, an OpenAI researcher, just unveiled o1-pro, the company’s latest heavyweight reasoning model. It’s a version of o1 that uses more compute to think longer and work through prompts more thoroughly. Starting today, developers at any of OpenAI’s paid tiers can access it through the OpenAI API. If you’re on a higher usage tier, you’ll benefit from increased request and batch queue limits.
What’s the deal with the pricing? o1-pro costs $150 per 1 million input tokens and $600 per 1 million output tokens. That’s a tenfold jump over base o1, which runs $15 per million input tokens and $60 per million output tokens.
So, what do you get for that price? o1-pro is now one of the priciest models available, surpassing even the popular GPT-4.5, which costs $75 per million input tokens and $150 per million output tokens. OpenAI aims to attract researchers, engineers, and professionals in fields like science, medicine, and technology with o1-pro. It’s specialized for reasoning tasks, while simpler models may suffice for transcription or moderation needs.
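To make those list prices concrete, here’s a quick back-of-the-envelope calculator using the per-million-token figures quoted above. The prices are launch list prices that may change, and the 10,000-token-in / 5,000-token-out request is just an illustrative size, so treat this as a sketch rather than a billing tool.

```python
# Rough per-request cost comparison using the list prices quoted above (USD).
# Prices and example token counts are illustrative, not authoritative.
PRICES_PER_MILLION = {            # model: (input, output) USD per 1M tokens
    "o1-pro":  (150.0, 600.0),
    "o1":      (15.0, 60.0),
    "gpt-4.5": (75.0, 150.0),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request for the given token counts."""
    in_price, out_price = PRICES_PER_MILLION[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

for model in PRICES_PER_MILLION:
    print(f"{model}: ${request_cost(model, 10_000, 5_000):.2f}")
# o1-pro: $4.50, o1: $0.45, gpt-4.5: $1.50 for that hypothetical request.
```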
With o1-pro, you get a 200,000-token context window and up to 100,000 output tokens per request. The model supports vision, accepting both text and image inputs, but it only delivers text outputs. You’ll also find features like:
– Function calling
– Structured Outputs that follow a developer’s JSON Schema
– Integration with the Responses API for creating AI agents capable of web interaction, including running searches (see the call sketch after this list)
– Integration with the Batch API for asynchronous requests, which cuts costs and raises rate limits for jobs that can wait for a 24-hour completion window.
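For a sense of what calling the model looks like, here’s a minimal sketch using the official openai Python SDK against the Responses API mentioned above. The prompt, the reasoning-effort setting, and the output cap are assumptions for illustration; double-check parameter shapes against the current API reference before building on this.

```python
# Minimal o1-pro call via the Responses API; assumes `pip install openai`
# and an OPENAI_API_KEY in the environment. Values below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Compare two protocols for validating a lab assay, step by step.",
    reasoning={"effort": "high"},  # let the model spend more compute thinking
    max_output_tokens=4096,        # cap the pricey output tokens
)

print(response.output_text)  # o1-pro returns text only
```

Function calling and Structured Outputs ride along as extra parameters on the same request, while the Batch API wraps many such requests into a single asynchronous job.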
OpenAI hasn’t disclosed the exact knowledge cutoff for o1-pro, but previous models had data up until late 2023.
OpenAI first previewed o1, then codenamed “Strawberry,” back in September 2024. It’s in a competitive race with other reasoning models like DeepSeek’s R1, Anthropic’s Claude 3.7 Sonnet, xAI’s Grok 3, and Google’s Gemini 2.0. Meanwhile, Meta is diving into “theory of mind reasoning,” focusing on advanced evaluations of AI models.