google: Google Gemini 2.5 Flash Lite

gemini-2.5-flash-lite
Context: 1M
Max Output: 66k
Input: $0.100/1M
Output: $0.400/1M
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
Input: Text, Image
Output: Text
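
Since thinking is off by default, a request has to opt in explicitly. Below is a minimal sketch of an OpenRouter-style chat payload that enables it; the `reasoning` field and its `max_tokens` sub-key follow OpenRouter's reasoning-tokens documentation, but verify the current schema before relying on it.

```python
# Sketch of an OpenRouter chat-completions payload that opts in to
# "thinking" for Gemini 2.5 Flash-Lite. The "reasoning" object is the
# parameter documented at openrouter.ai/docs/use-cases/reasoning-tokens;
# field names here are assumptions to be checked against current docs.
payload = {
    "model": "google/gemini-2.5-flash-lite",
    "messages": [{"role": "user", "content": "Plan a 3-step migration."}],
    # Explicit thinking budget in tokens; omit this object entirely to
    # keep the default fast, non-thinking behavior.
    "reasoning": {"max_tokens": 1024},
}
```

Omitting the `reasoning` object keeps the low-latency default, so the trade-off can be made per request rather than per deployment.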

Providers

| Provider | Context | Max Output | Input | Output | Cache Read | Cache Write | Audio |
|---|---|---|---|---|---|---|---|
| google-ai-studio | 1M | 66k | $0.100/1M | $0.400/1M | $0.025/1M | $0.100/1M | $0.300 |
| vertex | 1M | 66k | $0.100/1M | $0.400/1M | $0.025/1M | $0.100/1M | $0.300 |
| openrouter | 1M | 66k | $0.110/1M (max) | $0.420/1M (max) | | | |

Quick Start

Use Google Gemini 2.5 Flash Lite through Helicone's AI Gateway with automatic logging and monitoring.
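
As a minimal sketch, the gateway speaks an OpenAI-compatible chat-completions protocol, so a request can be built with only the standard library. The endpoint URL, auth header, and `HELICONE_API_KEY` variable name below are assumptions; confirm them against Helicone's AI Gateway documentation.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible gateway endpoint -- check Helicone's docs
# for the exact base URL and authentication scheme before using.
GATEWAY_URL = "https://ai-gateway.helicone.ai/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload targeting Gemini 2.5 Flash Lite."""
    return {
        "model": "gemini-2.5-flash-lite",
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str) -> str:
    """Send the payload through the gateway and return the model's reply text."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # Hypothetical env var name for the gateway key.
            "Authorization": f"Bearer {os.environ['HELICONE_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the gateway is OpenAI-compatible, the official OpenAI SDK pointed at the gateway base URL would work equally well; the raw-HTTP version above just makes the request shape explicit.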