openai: OpenAI o3 Mini
o3-mini
Context: 200k
Max Output: 100k
Input: $1.10/1M
Output: $4.40/1M
o3-mini is OpenAI's newest small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini. o3-mini supports key developer features such as Structured Outputs, function calling, and the Batch API. The model also supports the `reasoning_effort` parameter, which can be set to 'high', 'medium', or 'low' to control how much time the model spends thinking (see the example below).
Input Modalities: Text
Output Modalities: Text
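For example, here is a minimal sketch of setting `reasoning_effort` with the official OpenAI Python SDK; the prompt and effort level are illustrative placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o3-mini to spend more time reasoning before answering.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        {"role": "user", "content": "Prove that the sum of two even integers is even."}
    ],
)

print(response.choices[0].message.content)
```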
Providers
openai (Credits)
Context: 200k
Max Output: 100k
Input: $1.10/1M
Output: $4.40/1M
Cache Read: $0.550/1M
Cache Write: —

azure (Credits)
Context: 200k
Max Output: 100k
Input: $1.10/1M
Output: $4.40/1M
Cache Read: $0.550/1M
Cache Write: —

helicone (Credits)
Context: 200k
Max Output: 100k
Input: $1.10/1M
Output: $4.40/1M
Cache Read: $0.550/1M
Cache Write: —

openrouter (Credits)
Context: 200k
Max Output: 100k
Input (max): $1.16/1M
Output (max): $4.64/1M
Cache Read: —
Cache Write: —
Quick Start
Use OpenAI o3 Mini through Helicone's AI Gateway with automatic logging and monitoring.
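A minimal sketch with the OpenAI Python SDK pointed at the gateway; the base URL shown and the HELICONE_API_KEY environment variable are assumptions, so confirm the exact endpoint for your deployment in the Helicone docs:

```python
import os

from openai import OpenAI

# Route requests through Helicone's AI Gateway instead of calling OpenAI directly.
client = OpenAI(
    base_url="https://ai-gateway.helicone.ai",  # assumed cloud gateway endpoint
    api_key=os.environ["HELICONE_API_KEY"],     # your Helicone API key
)

response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```

Requests sent this way are logged and monitored automatically in your Helicone dashboard.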