Mistral: Mistral-Nemo (mistralai)
Model ID: mistral-nemo
Context: 128k
Max Output: 16k
Input: $20000.00/1M
Output: $40000.00/1M
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of smaller or similar size.
Providers

deepinfra
Context: 128k
Max Output: 16k
Input: $20000.00/1M
Output: $40000.00/1M
Cache Read: —
Cache Write: —
Quick Start
Use Mistral: Mistral-Nemo through Helicone's AI Gateway with automatic logging and monitoring.
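Below is a minimal sketch of calling the model through an OpenAI-compatible client pointed at the gateway. The base URL and the use of a Helicone API key for authentication are assumptions for illustration; check Helicone's documentation for the exact endpoint and auth scheme for your setup.

```typescript
// Minimal sketch: route a chat completion for mistral-nemo through the Helicone AI Gateway.
// Assumptions: the gateway exposes an OpenAI-compatible /v1 endpoint at the URL below
// and accepts your Helicone API key as the bearer token.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai/v1", // assumed gateway endpoint
  apiKey: process.env.HELICONE_API_KEY,         // assumed: authenticate with your Helicone key
});

async function main() {
  const response = await client.chat.completions.create({
    model: "mistral-nemo", // model ID from this page
    messages: [
      { role: "user", content: "Summarize what Mistral Nemo is in one sentence." },
    ],
  });
  console.log(response.choices[0].message.content);
}

main();
```

Because the gateway speaks the OpenAI API shape, switching providers or models is a matter of changing the `model` string; requests are logged and monitored in your Helicone dashboard without additional instrumentation.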