Endpoints
POST /v1/chat/completions
Create a chat completion. Supports streaming, tool calls, and JSON mode. OpenAI-compatible request and response format.
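Before the parameter reference, a minimal sketch of what a raw request looks like, built with the standard library only. Nothing is sent here; the key is a placeholder, and the base URL is assumed from the Python example further down.

```python
import json
import urllib.request

# Build (but do not send) a minimal chat completion request.
body = json.dumps({
    "model": "deepseek/deepseek-chat-v3.2",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.hober.dev/v1/chat/completions",
    data=body,
    headers={
        "Authorization": "Bearer hb_live_your_key_here",  # placeholder key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call.
```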
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | required | Model slug from the catalog (e.g. "deepseek/deepseek-chat-v3.2") |
| messages | array | required | Array of message objects with role ("system", "user", "assistant") and content |
| stream | boolean | optional | If true, returns SSE stream of delta chunks. Default: false |
| temperature | number | optional | Sampling temperature 0-2. Default: 1.0 |
| max_tokens | integer | optional | Maximum tokens to generate |
| top_p | number | optional | Nucleus sampling parameter 0-1. Default: 1.0 |
| tools | array | optional | Array of tool definitions for function calling |
| tool_choice | string \| object | optional | "auto", "none", "required", or a specific tool object |
| response_format | object | optional | { "type": "json_object" } to force JSON output |
| models | array | optional | Hober-specific: fallback model list (tried in order) |
| provider | object | optional | Hober-specific: provider routing preferences (see Hober Features) |
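The tools and tool_choice parameters follow the OpenAI function-calling shape. A sketch of a request body using them — the get_weather function and its schema are invented for illustration; only the payload structure matters:

```python
import json

payload = {
    "model": "deepseek/deepseek-chat-v3.2",
    "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                # Illustrative tool -- name and schema are made up.
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # or "none", "required", or a specific tool object
}
body = json.dumps(payload)
```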
Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1709000000,
  "model": "deepseek/deepseek-chat-v3.2",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Solana is a high-performance blockchain..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 24,
    "completion_tokens": 128,
    "total_tokens": 152
  }
}
```

Streaming Example
```python
from openai import OpenAI

# Note the /v1 suffix: the OpenAI SDK appends "chat/completions"
# to the base URL, so the base URL must include the /v1 prefix.
client = OpenAI(
    base_url="https://api.hober.dev/v1",
    api_key="hb_live_your_key_here",
)

stream = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3.2",
    messages=[{"role": "user", "content": "Count to 10"}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g. the final usage chunk) carry no choices.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
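A non-streaming response like the sample above can be consumed with a small helper. This is a sketch against the dict shape shown in the Response section (e.g. the result of an HTTP client's response.json()):

```python
def read_completion(resp: dict) -> tuple[str, int, bool]:
    """Return (assistant_text, total_tokens, truncated) from a
    non-streaming chat completion response."""
    choice = resp["choices"][0]
    # finish_reason "length" means generation stopped at max_tokens.
    truncated = choice["finish_reason"] == "length"
    return choice["message"]["content"], resp["usage"]["total_tokens"], truncated

# The sample response from the Response section, abbreviated.
sample = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant",
                    "content": "Solana is a high-performance blockchain..."},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 24, "completion_tokens": 128, "total_tokens": 152},
}
text, total, truncated = read_completion(sample)
```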