openai: OpenAI GPT-4.1 Nano

gpt-4.1-nano
Context: 1M
Max Output: 33k
Input: $0.100/1M
Output: $0.400/1M
For tasks that demand low latency, GPT-4.1 nano is the fastest and cheapest model in the GPT-4.1 series. Despite its small size it delivers strong performance, with a 1 million token context window and scores of 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding, higher than GPT-4o mini on each. It's ideal for tasks like classification or autocompletion.
Input: Text, Image
Output: Text

Providers

openai
Context: 1M
Max Output: 33k
Input: $0.100/1M
Output: $0.400/1M
Cache Read: $0.025/1M
Cache Write
azure
Context: 1M
Max Output: 33k
Input: $0.100/1M
Output: $0.400/1M
Cache Read: $0.030/1M
Cache Write
helicone
Context: 1M
Max Output: 33k
Input: $0.100/1M
Output: $0.400/1M
Cache Read: $0.025/1M
Cache Write
openrouter
Context: 1M
Max Output: 33k
Input (Max): $0.110/1M
Output (Max): $0.420/1M
Cache Read
Cache Write
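
To make the per-token rates concrete, here is a small, hypothetical cost estimate using the openai provider's rates above. The token counts, and the assumption that cached prompt tokens are billed at the cache-read rate with the remainder at the standard input rate, are illustrative rather than taken from this page.

```typescript
// Rough per-request cost estimate at the openai provider's listed rates.
// Rates are USD per 1M tokens; token counts below are hypothetical.
const INPUT_PER_M = 0.1; // $0.100 / 1M input tokens
const OUTPUT_PER_M = 0.4; // $0.400 / 1M output tokens
const CACHE_READ_PER_M = 0.025; // $0.025 / 1M cached input tokens (assumed billing model)

function estimateCostUSD(
  inputTokens: number,
  outputTokens: number,
  cachedTokens = 0
): number {
  const freshInput = inputTokens - cachedTokens;
  return (
    (freshInput * INPUT_PER_M +
      cachedTokens * CACHE_READ_PER_M +
      outputTokens * OUTPUT_PER_M) /
    1_000_000
  );
}

// e.g. a 200k-token prompt, half of it cached, with a 1k-token reply:
console.log(estimateCostUSD(200_000, 1_000, 100_000).toFixed(4)); // "0.0129"
```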

Quick Start

Use OpenAI GPT-4.1 Nano through Helicone's AI Gateway with automatic logging and monitoring.
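
The snippet below is a minimal sketch of that setup in TypeScript using the OpenAI SDK pointed at Helicone's OpenAI-compatible proxy. The base URL, the Helicone-Auth header, and the environment variable names are assumptions; confirm the exact gateway endpoint and headers for your account in Helicone's documentation.

```typescript
import OpenAI from "openai";

// Route requests through Helicone so each call to gpt-4.1-nano is logged
// and monitored. The base URL and header name below are assumptions based
// on Helicone's OpenAI proxy; verify them against your Helicone dashboard.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4.1-nano",
    messages: [
      {
        role: "user",
        content: "Classify this support ticket: 'My invoice total is wrong.'",
      },
    ],
  });
  console.log(response.choices[0].message.content);
}

main();
```

Requests sent this way show up in the Helicone dashboard with token counts that map directly onto the pricing listed above.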