Chat Completions
Send messages and get AI responses.
The Chat Completions endpoint is fully OpenAI-compatible. Point any existing OpenAI SDK or HTTP client at LEAPERone and it works out of the box.
Basic Request
curl -X POST https://api.leaper.one/v1/chat/completions \
-H "Authorization: Bearer sk-your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"messages": [
{"role": "user", "content": "Explain quantum computing in one paragraph."}
]
}'

Response:

{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Quantum computing uses qubits..."
},
"finish_reason": "stop"
}
]
}

Setting model to "auto" routes your request to the best available model automatically. You can also specify a model explicitly (e.g., "gpt-5-nano").
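Unpacking a chat.completion response takes only a few lines; a sketch using Python's stdlib json module on the sample response above:

```python
import json

# Sample response in the shape shown above (truncated for brevity).
raw = '''{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Quantum computing uses qubits..."},
      "finish_reason": "stop"
    }
  ]
}'''

resp = json.loads(raw)
choice = resp["choices"][0]
answer = choice["message"]["content"]  # the assistant's text
done_reason = choice["finish_reason"]  # "stop", "length", "tool_calls", ...
print(answer)
```

A finish_reason of "stop" means the model finished naturally; other values (such as "length" or "tool_calls") indicate truncation or a pending function call.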
Streaming
Set stream: true to receive results as Server-Sent Events (SSE). Tokens are delivered incrementally as they are generated.
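Each event in the stream is a data: line carrying a JSON chunk whose choices[0].delta holds an incremental content fragment; the stream ends with the literal sentinel data: [DONE]. A minimal stdlib-only sketch of reassembling those chunks client-side:

```python
import json

def accumulate_sse(lines):
    """Reassemble streamed delta chunks into the full assistant message.

    `lines` are the `data: ...` lines of an SSE stream, terminated by
    the literal sentinel `data: [DONE]`.
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content") or "")  # role-only deltas have no content
    return "".join(parts)

stream = [
    'data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"Endpoints"},"index":0}]}',
    'data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" await"},"index":0}]}',
    'data: [DONE]',
]
print(accumulate_sse(stream))  # Endpoints await
```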
curl -X POST https://api.leaper.one/v1/chat/completions \
-H "Authorization: Bearer sk-your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"stream": true,
"messages": [
{"role": "user", "content": "Write a haiku about APIs."}
]
}'

Each SSE event contains a data: line with a JSON chunk:
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":"Endpoints"},"index":0}]}
data: {"id":"chatcmpl-abc123","choices":[{"delta":{"content":" await"},"index":0}]}
data: [DONE]

Tools / Function Calling
You can pass a tools array to let the model call functions you define. When the model decides a function should be invoked, it responds with a tool_calls array instead of plain text; you execute the function yourself and send the result back as a tool-role message in a follow-up request. This follows the same format as the OpenAI function calling API.
{
"model": "auto",
"messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a city",
"parameters": {
"type": "object",
"properties": {
"city": {"type": "string"}
},
"required": ["city"]
}
}
}
]
}

For the complete parameter list and response schema, see the API Reference.
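When the model requests a tool call, each entry in tool_calls carries the function name and JSON-encoded arguments. A hedged sketch of the round trip — the get_weather implementation here is a local stand-in, not a real API:

```python
import json

# Stand-in implementation; in practice this would query a real weather service.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}

# Assistant message in the shape returned when finish_reason is "tool_calls".
assistant_msg = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'},
        }
    ],
}

# Execute each requested call and build the tool-role follow-up messages.
tool_messages = []
for call in assistant_msg["tool_calls"]:
    fn = TOOLS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])  # arguments arrive as a JSON string
    result = fn(**args)
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": json.dumps(result),
    })
print(tool_messages[0]["content"])
```

Appending assistant_msg and these tool messages to the conversation and sending another request lets the model produce its final, tool-informed answer.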
OpenRouter Models
Beyond the built-in models, you can access 350+ models from all major providers (Anthropic, Google, Meta, Mistral, etc.) via OpenRouter. Use the openrouter/ prefix followed by the OpenRouter model ID:
curl -X POST https://api.leaper.one/v1/chat/completions \
-H "Authorization: Bearer sk-your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "openrouter/anthropic/claude-sonnet-4.6",
"messages": [
{"role": "user", "content": "Hello!"}
]
}'

OpenRouter models use pass-through pricing: you are billed at the same rate as OpenRouter's upstream provider. See OpenRouter Models for the full model list and pricing.
Examples
| Model | model value |
|---|---|
| Claude Sonnet 4.6 | openrouter/anthropic/claude-sonnet-4.6 |
| Gemini 2.5 Flash | openrouter/google/gemini-2.5-flash-preview |
| Llama 3.3 70B | openrouter/meta-llama/llama-3.3-70b-instruct |
| DeepSeek V3 | openrouter/deepseek/deepseek-chat |
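The model values in the table all follow one rule: prepend openrouter/ to the provider's OpenRouter model ID. A trivial helper, purely illustrative:

```python
def openrouter_model(provider_model_id: str) -> str:
    """Build the `model` value for a request routed through OpenRouter."""
    return f"openrouter/{provider_model_id}"

print(openrouter_model("deepseek/deepseek-chat"))
# openrouter/deepseek/deepseek-chat
```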