Just got an email from OpenAI, and it's a big one for developers, especially those building AI apps at scale.
💸 o3 Is Now 80% Cheaper!
OpenAI has dropped the price of the o3 model to:
$2 / 1M input tokens
$8 / 1M output tokens
This isn’t a stripped-down version — it’s the same o3 model, just cheaper thanks to inference stack optimizations.
👨‍💻 Why You Should Try o3 Now
OpenAI recommends using o3 for:
Coding tasks (same price per token as GPT-4.1, cheaper than GPT-4o)
Agentic tool calling
Function calling
Instruction following
If you're building apps with tool use, structured tasks, or agents, o3 just became far more cost-effective (quick sketch below).
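To kick the tires, here's a minimal sketch of calling o3 with a function tool through the official openai Python SDK. The get_weather tool, its schema, and the prompt are made up for illustration; this is just the standard Chat Completions tool-calling pattern, not a prescribed recipe.

```python
# Minimal sketch: o3 with a function tool via the OpenAI Python SDK.
# The "get_weather" tool and its schema are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model decides to call the tool, the call shows up here.
print(response.choices[0].message.tool_calls)
```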
🔍 Introducing: o3-pro
For devs dealing with more complex reasoning or production use cases, OpenAI just launched o3-pro — a version of o3 that uses more compute to give deeper, more reliable responses.
o3-pro Pricing:
$20 / 1M input tokens
$80 / 1M output tokens
💰 That’s 87% cheaper than the old o1-pro model.
Supported Features:
Image inputs
Function calling
Structured Outputs
Background mode for long-running requests
🧠 o3-pro is better suited for difficult problems that need extra computation time and accuracy.
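For a rough idea of what that looks like in practice, here's a minimal Responses API sketch. The prompt is just an example, and note that OpenAI points developers at the Responses API (not Chat Completions) for o3-pro.

```python
# Minimal sketch: asking o3-pro a harder question via the Responses API.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="o3-pro",  # or the dated snapshot "o3-pro-2025-06-10"
    input="Prove or refute: every bounded monotone sequence of reals converges.",
)

# output_text is the SDK's convenience accessor for the model's text output.
print(response.output_text)
```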
📌 TL;DR for Devs
Use o3 for general-purpose and coding tasks → super cheap now.
Use o3-pro when you need reliability, depth, and complex logic.
Test o3-pro-2025-06-10 in the Responses API or Playground.
Long requests? Try the new background mode to avoid timeouts (see the sketch below).
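Here's roughly what the background-mode flow looks like with the Python SDK. Treat it as a sketch: the polling loop, the status values, and the prompt are my assumptions based on OpenAI's docs, not code from the announcement.

```python
# Minimal sketch: kick off a long-running o3-pro request in background mode,
# then poll until it finishes instead of holding an HTTP connection open.
import time
from openai import OpenAI

client = OpenAI()

job = client.responses.create(
    model="o3-pro",
    input="Draft a detailed migration plan for a 500-table database.",  # example prompt
    background=True,  # returns immediately; the request keeps running server-side
)

# Poll for completion (interval is arbitrary; assumed status names).
while job.status in ("queued", "in_progress"):
    time.sleep(5)
    job = client.responses.retrieve(job.id)

print(job.status)
print(job.output_text)
```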
🧪 These pricing changes make it way more feasible to build powerful AI experiences without blowing your budget.
Let the tinkering begin. 🚀
Have you tried the new o3 or o3-pro yet? Share your results or thoughts below!