Perplexity Sonar via VerticalAPI
Perplexity's Sonar models via VerticalAPI's OpenAI-compatible endpoint — every response is web-grounded with citations. BYOK with your Perplexity key, zero markup.
Perplexity Sonar models routed by VerticalAPI
Pass the model ID below as `model` in any OpenAI-compatible request. New Perplexity Sonar models are typically supported within 24h of release.
| Model ID | Name | Context | Pricing (provider) |
|---|---|---|---|
| `sonar-pro` | Sonar Pro | 200K | $3 / $15 per 1M tok + $5 per 1K searches |
| `sonar` | Sonar | 128K | $1 / $1 per 1M tok + $5 per 1K searches |
| `sonar-reasoning-pro` | Sonar Reasoning Pro | 128K | $2 / $8 per 1M tok — reasoning + web |
Pricing reflects Perplexity's published rates — you pay Perplexity directly with your own key. VerticalAPI adds zero markup on tokens.
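As a back-of-envelope illustration, per-request cost follows directly from the table above (a sketch using the Sonar Pro row; search billing is per 1K searches):

```python
# Estimate Sonar Pro cost from the published rates:
# $3 per 1M input tokens, $15 per 1M output tokens, $5 per 1K searches.

def sonar_pro_cost(input_tokens: int, output_tokens: int, searches: int = 1) -> float:
    """Return the estimated cost in USD for one request."""
    token_cost = input_tokens / 1_000_000 * 3 + output_tokens / 1_000_000 * 15
    search_cost = searches / 1_000 * 5
    return token_cost + search_cost

# 2,000 input tokens, 500 output tokens, one web search:
print(round(sonar_pro_cost(2_000, 500), 6))  # → 0.0185
```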
5-line Perplexity Sonar call via VerticalAPI
Drop-in replacement for the OpenAI SDK. Works with the OpenAI Python client, Node, Go, curl — anything that speaks HTTP.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.verticalapi.com/v1",
    api_key="vapi_...",
    default_headers={"X-Provider-Key": "pplx-..."},
)

response = client.chat.completions.create(
    model="sonar",  # Perplexity Sonar
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
Four reasons developers route Perplexity Sonar through us
Zero token markup
You pay Perplexity directly with your own key. VerticalAPI's revenue is the gateway subscription, not a tax on your tokens.
One key, every provider
Perplexity Sonar alongside OpenAI, Anthropic, Gemini and 12 more — same OpenAI-compatible endpoint, same SDK, switchable per-request.
Latency & cost monitoring
Per-request token counts, p50/p95 latency and cost dashboards out of the box. Compare Perplexity Sonar to other providers on identical prompts.
Observability built in
Every Perplexity Sonar call gets a trace ID, replayable payload and audit log entry. Wire to Datadog or Sentry via OpenTelemetry.
Common questions about Perplexity Sonar on VerticalAPI
How are citations returned?
Perplexity returns a citations array alongside the assistant message. VerticalAPI surfaces it in the chat.completions response as a top-level citations field — drop-in for your app.
Can I scope searches to specific domains?
Yes. Pass search_domain_filter as a Perplexity-native parameter; VerticalAPI forwards it untouched.
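With the OpenAI Python client, a provider-native parameter like this is typically sent via `extra_body`; the request body it produces looks like the sketch below (shown as a plain payload, no network call — the prompt and domain are hypothetical):

```python
# Sketch: the JSON body for a chat completion carrying a Perplexity-native
# parameter. With the OpenAI SDK you would pass it via extra_body and the
# gateway forwards it untouched.
import json

payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "Latest Rust release notes?"}],
    "search_domain_filter": ["rust-lang.org"],  # Perplexity-native parameter
}
print(json.dumps(payload, indent=2))
```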
All supported LLM providers
Same endpoint, same SDK — just change the model and the BYOK header.
Ship on Perplexity Sonar in 60 seconds
Free tier — bring your own Perplexity key, zero markup, OpenAI-compatible endpoint.
Get your VerticalAPI key →