Mistral AI via VerticalAPI

Use Mistral Large, Codestral, and Pixtral through an OpenAI-compatible endpoint. Bring your own key (BYOK) from Mistral, pay zero token markup, and get EU-hosted models for compliance workloads.

Endpoint: https://api.verticalapi.com/v1/chat/completions  ·  BYOK header: X-Provider-Key: <your-mistral-key>
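The two headers above are all a raw HTTP client needs: your VerticalAPI key in Authorization, your Mistral key in X-Provider-Key. A minimal sketch with placeholder keys, assuming a standard OpenAI-style chat body:

```python
import json

VERTICALAPI_URL = "https://api.verticalapi.com/v1/chat/completions"

def build_request(vapi_key, mistral_key, model, prompt):
    """Assemble the headers and JSON body for one BYOK chat call."""
    headers = {
        "Authorization": f"Bearer {vapi_key}",   # your VerticalAPI key
        "X-Provider-Key": mistral_key,           # your Mistral key (BYOK)
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(payload)

headers, body = build_request("vapi_...", "<your-mistral-key>",
                              "mistral-large-latest", "Hello")
# POST `body` to VERTICALAPI_URL with any HTTP client (requests, httpx, curl).
```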

Mistral AI models routed by VerticalAPI

Pass a model ID from the table below as the model field in any OpenAI-compatible request. New Mistral AI models are typically supported within 24 hours of release.

Model ID · Name · Context · Pricing (provider)
mistral-large-latest · Mistral Large 2 · 128K · $2 / $6 per 1M tok
codestral-latest · Codestral · 256K · $0.30 / $0.90 per 1M tok — code-tuned
pixtral-large-latest · Pixtral Large · 128K · $2 / $6 per 1M tok — multimodal
ministral-8b-latest · Ministral 8B · 128K · $0.10 / $0.10 per 1M tok — edge

Pricing reflects Mistral AI's rates — you pay Mistral AI directly. VerticalAPI adds zero markup on tokens.
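To see what a call costs at these rates, a small estimator works. The numbers below are copied from the table above; check Mistral's pricing page before relying on them:

```python
# Per-1M-token rates (input, output) in USD, from the table above.
RATES = {
    "mistral-large-latest": (2.00, 6.00),
    "codestral-latest": (0.30, 0.90),
    "pixtral-large-latest": (2.00, 6.00),
    "ministral-8b-latest": (0.10, 0.10),
}

def estimate_cost(model, input_tokens, output_tokens):
    """What you'd pay Mistral directly; no gateway markup on top."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 10K-in / 2K-out call on Mistral Large:
print(estimate_cost("mistral-large-latest", 10_000, 2_000))  # → 0.032
```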

5-line Mistral AI call via VerticalAPI

Drop-in replacement for the OpenAI SDK. Works with the OpenAI Python client, Node, Go, curl — anything that speaks HTTP.

mistral_quickstart.py (Python)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.verticalapi.com/v1",
    api_key="vapi_...",
    default_headers={"X-Provider-Key": "..."}
)

response = client.chat.completions.create(
    model="mistral-large-latest",  # Mistral AI
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)

Four reasons developers route Mistral AI through us

Zero token markup

You pay Mistral AI directly with your own key. VerticalAPI's revenue is the gateway subscription, not a tax on your tokens.

One key, every provider

Mistral AI alongside OpenAI, Anthropic, Gemini and 12 more — same OpenAI-compatible endpoint, same SDK, switchable per-request.

Latency & cost monitoring

Per-request token counts, p50/p95 latency and cost dashboards out of the box. Compare Mistral AI to other providers on identical prompts.
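The dashboard does this comparison for you, but the same idea in client code is a short loop: time an identical prompt across models. A sketch, where run(model) stands in for one call through the OpenAI client pointed at VerticalAPI:

```python
import time

def compare_latency(models, run):
    """Time the same prompt across models; run(model) performs one call."""
    results = {}
    for model in models:
        start = time.perf_counter()
        run(model)
        results[model] = time.perf_counter() - start
    return results

# With a real client:
# run = lambda m: client.chat.completions.create(
#     model=m, messages=[{"role": "user", "content": "Hello"}])
timings = compare_latency(["mistral-large-latest", "ministral-8b-latest"],
                          lambda m: None)  # no-op stand-in for a real call
```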

Observability built in

Every Mistral AI call gets a trace ID, replayable payload and audit log entry. Wire to Datadog or Sentry via OpenTelemetry.

Where Mistral AI shines

EU data residency · code completion (Codestral) · open-weights deployment · fine-tuning

Common questions about Mistral AI on VerticalAPI

Why use Mistral via VerticalAPI?

EU-hosted inference for GDPR-sensitive workloads, plus a single key/dashboard alongside your other providers. Codestral is one of the strongest fill-in-the-middle code models available.

Are open-weights models like Mistral 7B included?

VerticalAPI routes to la-plateforme.mistral.ai endpoints. For self-hosted open weights, point VerticalAPI at your own deployment URL via the dashboard and keep using the same OpenAI-compatible interface.

Does function calling work?

Yes. Mistral Large, Small and Codestral support OpenAI-format tools. tool_choice is forwarded as Mistral's tool_choice mode.
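A tool definition in OpenAI format passes through unchanged. The get_weather function and its schema below are illustrative, not a built-in:

```python
# One OpenAI-format tool definition; the gateway forwards it to Mistral as-is.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# With the client from the quickstart above:
# response = client.chat.completions.create(
#     model="mistral-large-latest",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",  # forwarded as Mistral's tool_choice mode
# )
```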

All supported LLM providers

Same endpoint, same SDK — just change the model and the BYOK header.