Quickstart
Get your first response in 3 lines of code. VerticalAPI is 100% OpenAI SDK compatible.
Base URL: `https://api.verticalapi.com/v1`
Python (OpenAI SDK)
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.verticalapi.com/v1",
    api_key="vapi_your_key_here",
)

response = client.chat.completions.create(
    model="geopolitical-risk",
    messages=[{"role": "user", "content": "Analyze Iran-Israel escalation dynamics"}],
)
print(response.choices[0].message.content)
```
cURL
```bash
curl -X POST https://api.verticalapi.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer vapi_your_key_here" \
  -d '{
    "model": "geopolitical-risk",
    "messages": [{"role": "user", "content": "Risk level for Strait of Hormuz?"}],
    "stream": false
  }'
```
Streaming
```python
stream = client.chat.completions.create(
    model="geopolitical-risk",
    messages=[{"role": "user", "content": "Analyze Iran-Israel escalation"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
Authentication
All API requests require a `vapi_`-prefixed API key passed via the `Authorization` header.

```
Authorization: Bearer vapi_your_key_here
```
API keys are tied to a tier (Free, Pro, Enterprise) that determines rate limits and available backend models. Keys are hashed server-side; we never store the raw key.
To get an API key, contact us.
Chat Completions
POST /v1/chat/completions
Create a chat completion using a vertical model. Follows the exact OpenAI Chat Completions API schema.
Request body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Vertical slug (e.g. "geopolitical-risk") |
| messages | array | Yes | Array of message objects with role and content |
| stream | boolean | No | Enable SSE streaming (default: false) |
| temperature | float | No | Sampling temperature (0.0 - 1.0) |
| max_tokens | integer | No | Maximum output tokens (default: 4096, max: 8192) |
| tools | array | No | Additional tool definitions (merged with the vertical's built-in tools) |
| tool_choice | string | No | Tool selection mode |
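To make the table concrete, here is a small helper that assembles a request body with the required and optional parameters (the helper itself is ours, not part of the API):

```python
def build_chat_request(prompt, stream=False, temperature=None, max_tokens=None):
    """Build a Chat Completions request body for the geopolitical-risk vertical.

    Only `model` and `messages` are required; optional parameters are
    included only when set, so server-side defaults (stream=false,
    max_tokens=4096) apply otherwise.
    """
    body = {
        "model": "geopolitical-risk",
        "messages": [{"role": "user", "content": prompt}],
    }
    if stream:
        body["stream"] = True
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return body

body = build_chat_request("Risk level for Strait of Hormuz?", temperature=0.2)
```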
Response
```json
{
  "id": "chatcmpl-vapi-abc123",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "geopolitical-risk",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Based on current intelligence signals..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 1847,
    "completion_tokens": 256,
    "total_tokens": 2103
  }
}
```
The `model` field in the response always contains the vertical slug, never the backend model. This is by design: the backend model is an implementation detail.
Streaming
Set "stream": true to receive Server-Sent Events (SSE) in OpenAI format:
```
data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"role":"assistant"},"finish_reason":null}]}
data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"content":"Based on"},"finish_reason":null}]}
data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"content":" current"},"finish_reason":null}]}
data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{},"finish_reason":"stop"}]}
data: [DONE]
```
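If you are not using an SDK, the stream above can be parsed by hand: take lines that start with `data: `, stop at the `[DONE]` sentinel, and collect the `delta.content` fragments. A minimal sketch (parsing only; the HTTP transport is up to you):

```python
import json

def extract_content(sse_lines):
    """Yield content deltas from OpenAI-format SSE lines.

    Skips non-data lines (e.g. keep-alives) and stops at [DONE].
    """
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Sample lines taken from the stream shown above.
lines = [
    'data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"role":"assistant"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"content":"Based on"},"finish_reason":null}]}',
    'data: {"id":"chatcmpl-vapi-abc123","choices":[{"delta":{"content":" current"},"finish_reason":null}]}',
    'data: [DONE]',
]
text = "".join(extract_content(lines))
```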
List Models
GET /v1/models
Returns available verticals in OpenAI models list format.
```json
{
  "object": "list",
  "data": [
    {
      "id": "geopolitical-risk",
      "object": "model",
      "owned_by": "verticalapi",
      "description": "AI model specialized in geopolitical risk analysis"
    }
  ]
}
```
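With the OpenAI SDK this endpoint is `client.models.list()`. A short sketch that pulls the usable slugs out of the response shape shown above (the sample dict mirrors the documented response):

```python
# Response shape as documented for GET /v1/models.
models_response = {
    "object": "list",
    "data": [
        {
            "id": "geopolitical-risk",
            "object": "model",
            "owned_by": "verticalapi",
            "description": "AI model specialized in geopolitical risk analysis",
        }
    ],
}

# Each `id` is a vertical slug, valid as the `model` request parameter.
available = [m["id"] for m in models_response["data"]]
```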
Health Check
GET /health
```json
{"status": "ok", "version": "0.1.0"}
```
Errors
VerticalAPI returns OpenAI-compatible error responses:
| Status | Type | Description |
|---|---|---|
| 401 | authentication_error | Missing or invalid API key |
| 404 | not_found_error | Unknown vertical/model slug |
| 429 | rate_limit_error | Rate limit exceeded (check Retry-After header) |
| 400 | invalid_request_error | Malformed request body |
| 502 | api_error | Backend provider error |
```json
{
  "error": {
    "message": "Model 'xxx' not found. Available: geopolitical-risk",
    "type": "not_found_error",
    "code": "model_not_found"
  }
}
```
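Because errors use the OpenAI schema, the official SDKs surface them as their usual typed exceptions (e.g. the Python SDK raises `RateLimitError` on 429). Whatever client you use, only 429 and 502 are worth retrying; a hedged sketch of a retry-delay policy built from the table above (the policy itself is our suggestion, not an API requirement):

```python
def retry_delay(status, headers, attempt):
    """Seconds to wait before retrying, or None if the error is not retryable.

    429 honors the Retry-After header, falling back to exponential
    backoff; 502 (transient backend provider error) uses backoff only.
    401, 404, and 400 indicate caller errors and should not be retried.
    """
    if status == 429:
        return float(headers.get("Retry-After", 2 ** attempt))
    if status == 502:
        return float(2 ** attempt)
    return None

wait = retry_delay(429, {"Retry-After": "7"}, attempt=0)  # 7.0
```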
Verticals
Each vertical is a pre-configured AI model with a built-in system prompt, tool definitions, and intelligent model routing.
geopolitical-risk
Senior geopolitical risk analyst with expertise in conflict dynamics, economic warfare, and strategic intelligence.
- System prompt: 2000-token expert analyst persona
- Tools: get_crisis_index, search_conflict_events, get_sanctions_status, get_country_risk_profile, analyze_escalation_ladder, get_commodity_impact
- Routing: Sonnet default, Opus for complex analysis ("scenario analysis", "wargame", "deep analysis"), Haiku for free tier and short queries
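The routing rules above can be read as a small decision function. This is a sketch of the *described* behavior, not the server's actual implementation; the precedence (free tier pinned to Haiku first, per the Pricing table) and the "short query" cutoff are our assumptions:

```python
OPUS_TRIGGERS = ("scenario analysis", "wargame", "deep analysis")

def route_backend(query, tier):
    """Mirror the documented routing for the geopolitical-risk vertical.

    Free tier always gets Haiku (its only backend per the Pricing table);
    trigger phrases escalate to Opus; short queries fall back to Haiku;
    everything else defaults to Sonnet. The 10-word cutoff for "short"
    is an assumption — the docs do not specify one.
    """
    if tier == "free":
        return "haiku"
    if any(phrase in query.lower() for phrase in OPUS_TRIGGERS):
        return "opus"
    if len(query.split()) <= 10:
        return "haiku"
    return "sonnet"
```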
SDK Support
VerticalAPI is 100% OpenAI SDK compatible. Any library that supports the OpenAI Chat Completions API works; just change `base_url` and `api_key`.
Python
```python
from openai import OpenAI

client = OpenAI(base_url="https://api.verticalapi.com/v1", api_key="vapi_...")
```
JavaScript / TypeScript
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.verticalapi.com/v1',
  apiKey: 'vapi_...',
});
```
Go
```go
client := openai.NewClient(
    option.WithBaseURL("https://api.verticalapi.com/v1"),
    option.WithAPIKey("vapi_..."),
)
```
cURL
```bash
curl https://api.verticalapi.com/v1/chat/completions \
  -H "Authorization: Bearer vapi_..." \
  -H "Content-Type: application/json" \
  -d '{"model":"geopolitical-risk","messages":[...]}'
```
Pricing & Rate Limits
| Tier | Price | Req/min | Req/day | Req/month | Backend |
|---|---|---|---|---|---|
| Free | $0 | 10 | 100 | 3,000 | Haiku |
| Pro | $49/mo | 60 | 10,000 | 300,000 | Sonnet |
| Enterprise | Custom | 300 | 100,000 | 3,000,000 | Opus |
Rate limit headers are included in every response:
- `X-RateLimit-Limit`: requests per minute for your tier
- `X-RateLimit-Remaining`: remaining requests in the current window
- `X-RateLimit-Reset`: seconds until the window resets
- `Retry-After`: seconds to wait (only on 429 responses)
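Clients that want to pace themselves can read these headers from each response. A minimal sketch (header parsing only; the header names match the list above):

```python
def remaining_budget(headers):
    """Extract (remaining_requests, seconds_until_reset) from rate-limit headers.

    Missing headers yield None for that slot, so callers can degrade
    gracefully if a proxy strips them.
    """
    remaining = headers.get("X-RateLimit-Remaining")
    reset = headers.get("X-RateLimit-Reset")
    return (
        int(remaining) if remaining is not None else None,
        int(reset) if reset is not None else None,
    )

# Example headers as they might appear on a Pro-tier response.
remaining, reset = remaining_budget(
    {"X-RateLimit-Limit": "60", "X-RateLimit-Remaining": "12", "X-RateLimit-Reset": "31"}
)
```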
Changelog
v0.1.0 — April 2026
- Initial release
- OpenAI-compatible proxy with streaming support
- First vertical: Geopolitical Risk Analyst
- API key auth with tier-based rate limiting
- Intelligent model routing (Haiku / Sonnet / Opus)