Anthropic via VerticalAPI
Route to Claude Sonnet, Opus and Haiku through an OpenAI-compatible endpoint. BYOK with your Anthropic key, zero markup on tokens, full prompt caching support.
Anthropic models routed by VerticalAPI
Pass the model ID below as model in any OpenAI-compatible request. New Anthropic models are typically supported within 24h of release.
| Model ID | Name | Context | Pricing, input / output (provider rates) |
|---|---|---|---|
| `claude-sonnet-4-5` | Claude Sonnet 4.5 | 200K | $3 / $15 per 1M tokens |
| `claude-opus-4-6` | Claude Opus 4.6 (flagship) | 200K | $15 / $75 per 1M tokens |
| `claude-haiku-4-5` | Claude Haiku 4.5 (fastest) | 200K | $0.80 / $4 per 1M tokens |
| `claude-sonnet-4-5-1m` | Claude Sonnet 4.5 (1M context) | 1M | Long-context tier, premium pricing |
Pricing reflects Anthropic's rates — you pay Anthropic directly. VerticalAPI adds zero markup on tokens.
5-line Anthropic call via VerticalAPI
Drop-in replacement for the OpenAI SDK. Works with the OpenAI Python client, Node, Go, curl — anything that speaks HTTP.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.verticalapi.com/v1",
    api_key="vapi_...",
    default_headers={"X-Provider-Key": "sk-ant-..."},  # your own Anthropic key (BYOK)
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5",  # Anthropic
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
Four reasons developers route Anthropic through us
Zero token markup
You pay Anthropic directly with your own key. VerticalAPI's revenue is the gateway subscription, not a tax on your tokens.
One key, every provider
Claude alongside OpenAI, Gemini and 12 more providers: same OpenAI-compatible endpoint, same SDK, switchable per request.
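As a sketch of what per-request switching looks like, the snippet below assembles the parameters for two calls through the same gateway. The gateway URL and the `X-Provider-Key` header come from this page; the non-Anthropic model ID (`gpt-4o`) is illustrative.

```python
# Sketch: per-request provider switching through one OpenAI-compatible gateway.
# Only the model ID and the BYOK header value change between providers.

def build_request(model: str, provider_key: str, prompt: str) -> dict:
    """Assemble kwargs for one chat.completions.create call.

    `extra_headers` is the OpenAI Python SDK's per-request header hook.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "extra_headers": {"X-Provider-Key": provider_key},
    }

claude_req = build_request("claude-sonnet-4-5", "sk-ant-...", "Summarize this.")
openai_req = build_request("gpt-4o", "sk-...", "Summarize this.")

# Same request shape either way; only model and key differ.
print(claude_req["model"], openai_req["model"])
```

Unpack either dict into `client.chat.completions.create(**req)` against the shared `base_url`.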
Latency & cost monitoring
Per-request token counts, p50/p95 latency and cost dashboards out of the box. Compare Anthropic to other providers on identical prompts.
Observability built in
Every Anthropic call gets a trace ID, replayable payload and audit log entry. Wire to Datadog or Sentry via OpenTelemetry.
Common questions about Anthropic on VerticalAPI
How is the Anthropic API translated to OpenAI format?
VerticalAPI shapes requests and responses to OpenAI's `chat.completions` schema. `messages[]`, `tools` and `tool_choice` map cleanly. `system` messages are extracted into Anthropic's top-level `system` field.
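A minimal sketch of that system-message extraction (illustrative code, not VerticalAPI's actual translation layer):

```python
def to_anthropic(openai_messages: list[dict]) -> dict:
    """Split OpenAI-style messages into Anthropic's top-level `system`
    field plus the remaining user/assistant turns."""
    system_parts = [m["content"] for m in openai_messages if m["role"] == "system"]
    turns = [m for m in openai_messages if m["role"] != "system"]
    return {
        "system": "\n".join(system_parts),  # Anthropic takes system text at the top level
        "messages": turns,
    }

payload = to_anthropic([
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hello"},
])
print(payload["system"])         # → You are terse.
print(len(payload["messages"]))  # → 1
```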
Is prompt caching supported?
Yes — pass cache_control breakpoints in the OpenAI-formatted messages and VerticalAPI forwards them to Anthropic. Cache hits are visible in the dashboard for cost verification.
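A sketch of what a `cache_control` breakpoint looks like in OpenAI-formatted messages. The `{"type": "ephemeral"}` shape follows Anthropic's prompt-caching convention; treat the exact placement through the gateway as an assumption to verify against VerticalAPI's docs.

```python
# A long, reusable system prompt marked as a cache breakpoint, followed by a
# per-request user turn. The content must use the block (list) form so the
# cache_control annotation has a text block to attach to.
LONG_CONTEXT = "...many thousands of tokens of reference material..."

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": LONG_CONTEXT,
                "cache_control": {"type": "ephemeral"},  # cache everything up to here
            }
        ],
    },
    {"role": "user", "content": "Answer using the reference material above."},
]

# Pass `messages` to client.chat.completions.create(...) as usual; the gateway
# forwards the breakpoint to Anthropic.
print(messages[0]["content"][0]["cache_control"]["type"])
```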
Do I get the 1M-context tier?
If your Anthropic account has access to the Sonnet 4.5 1M-token context tier, just pass claude-sonnet-4-5-1m as the model. Pricing follows Anthropic's tiered rates.
Which Claude model should I use?
Sonnet 4.5 is the default for general agentic work; Opus 4.6 for the hardest reasoning tasks; Haiku 4.5 for low-latency, high-volume work like classification or routing.
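That guidance can be encoded as a trivial routing rule. A sketch only: the task labels are our own, not a VerticalAPI feature, and the model IDs come from the table above.

```python
def pick_model(task: str) -> str:
    """Map a coarse task label to a Claude model ID."""
    if task in {"classification", "routing", "extraction"}:
        return "claude-haiku-4-5"   # low latency, high volume
    if task in {"hard-reasoning", "research"}:
        return "claude-opus-4-6"    # flagship reasoning
    return "claude-sonnet-4-5"      # default for general agentic work

print(pick_model("classification"))  # → claude-haiku-4-5
print(pick_model("anything-else"))   # → claude-sonnet-4-5
```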
All supported LLM providers
Same endpoint, same SDK — just change the model and the BYOK header.
Ship on Anthropic in 60 seconds
Free tier — bring your own Anthropic key, zero markup, OpenAI-compatible endpoint.
Get your VerticalAPI key →