meta-llama: Meta Llama Prompt Guard 2 86M
Context: 512
Max Output: 2
Input: $0.010/1M
Output: $0.010/1M
86M parameter multilingual prompt safety classifier based on mDeBERTa-base, detecting prompt injections and jailbreaks across 8+ languages with adversarial-resistant tokenization.