API Reference
Complete reference for the 2389 API. Our endpoints are OpenAI-compatible, so you can use existing OpenAI client libraries.
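For example, you can point the OpenAI Python SDK at the base URL shown below and authenticate with your 2389 API key. A minimal sketch, assuming the openai package (v1 or later); the model and prompt are just examples:

```python
# Minimal sketch: use the OpenAI Python SDK (openai>=1.0) against the 2389 API.
# Replace the placeholder key with your own sk-2389-... key.
from openai import OpenAI

client = OpenAI(
    base_url="https://id.2389.ai/v1",
    api_key="sk-2389-YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```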
Base URL
```
https://id.2389.ai/v1
```
Authentication
All API requests require an API key passed in the Authorization header:
```
Authorization: Bearer sk-2389-YOUR_API_KEY
```
Endpoints
List Models
GET /v1/models
Returns a list of available models.
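With the OpenAI Python SDK (a client configured as in the sketch above), listing models might look like this:

```python
# Sketch: list the models available to your API key.
from openai import OpenAI

client = OpenAI(base_url="https://id.2389.ai/v1", api_key="sk-2389-YOUR_API_KEY")

for model in client.models.list().data:
    print(model.id)
```

The raw request with curl: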
```bash
curl https://id.2389.ai/v1/models \
  -H "Authorization: Bearer sk-2389-YOUR_API_KEY"
```
Chat Completions
POST /v1/chat/completions
Creates a chat completion for the given messages.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID to use (see available models below) |
| messages | array | Yes | Array of message objects with role and content |
| stream | boolean | No | Enable streaming responses (default: false) |
| temperature | number | No | Sampling temperature (0-2) |
| max_tokens | integer | No | Maximum tokens in the response |
Message Object
| Field | Type | Description |
|---|---|---|
| role | string | One of: system, user, assistant |
| content | string | The message content |
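In the OpenAI Python SDK, these parameters map directly to keyword arguments of chat.completions.create. A hedged sketch using the same values as the curl example that follows; the max_tokens value is illustrative:

```python
# Sketch: the request-body parameters above, expressed as SDK keyword arguments.
from openai import OpenAI

client = OpenAI(base_url="https://id.2389.ai/v1", api_key="sk-2389-YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    temperature=0.7,  # optional: sampling temperature (0-2)
    max_tokens=100,   # optional: cap on tokens in the response
)
```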
Example Request
```bash
curl -X POST https://id.2389.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-2389-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7
  }'
```
Example Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 8,
    "total_tokens": 33
  }
}
```
Available Models
OpenAI Models
| Model ID | Description |
|---|---|
| gpt-4o | Most capable GPT-4 model |
| gpt-4o-mini | Affordable, fast GPT-4 model |
| gpt-4-turbo | GPT-4 Turbo with vision capabilities |
| gpt-4 | Original GPT-4 model |
| gpt-3.5-turbo | Fast and cost-effective |
| o1 | Reasoning model for complex tasks |
| o1-mini | Smaller, faster reasoning model |
Anthropic Models
| Model ID | Description |
|---|---|
| claude-opus-4-5-20250514 | Most capable Claude model |
| claude-sonnet-4-5-20250514 | Balanced performance and speed |
| claude-sonnet-4-20250514 | Latest Sonnet model |
| claude-3-5-sonnet-20241022 | Claude 3.5 Sonnet |
| claude-3-5-haiku-20241022 | Fast and efficient |
| claude-3-opus-20240229 | Claude 3 Opus |
| claude-3-haiku-20240307 | Claude 3 Haiku |
Streaming
Set stream: true to receive responses as Server-Sent Events (SSE). Each event contains a delta of the response.
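With the OpenAI Python SDK the equivalent is stream=True; a sketch of consuming the deltas, assuming a client configured as in the Authentication example:

```python
# Sketch: stream a chat completion and print the content deltas as they arrive.
from openai import OpenAI

client = OpenAI(base_url="https://id.2389.ai/v1", api_key="sk-2389-YOUR_API_KEY")

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    # Each SSE event carries a delta; some chunks have no content (e.g. the final one).
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

The equivalent raw request with curl: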
```bash
curl -X POST https://id.2389.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-2389-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
Rate Limits
API requests are rate limited to 60 requests per minute per API key.
Rate limit headers are included in all responses:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests per window |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
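One way to respect these limits from Python is sketched below, using the requests library; the single-retry policy is illustrative, not part of the API:

```python
# Sketch: read the rate-limit headers and back off on HTTP 429.
import time
import requests

URL = "https://id.2389.ai/v1/chat/completions"
HEADERS = {
    "Authorization": "Bearer sk-2389-YOUR_API_KEY",
    "Content-Type": "application/json",
}
payload = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello!"}]}

response = requests.post(URL, headers=HEADERS, json=payload)
print("Remaining this window:", response.headers.get("X-RateLimit-Remaining"))

if response.status_code == 429:
    # Wait until the window resets, then retry once.
    reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))
    time.sleep(max(0, reset_at - time.time()))
    response = requests.post(URL, headers=HEADERS, json=payload)
```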
Error Responses
Errors follow the OpenAI error format:
```json
{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
```
Common Error Codes
| HTTP Status | Error Code | Description |
|---|---|---|
| 401 | invalid_api_key | Missing or invalid API key |
| 400 | invalid_request_error | Malformed request body |
| 404 | model_not_found | Requested model does not exist |
| 429 | rate_limit_exceeded | Too many requests |
| 500 | internal_error | Server error |
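A sketch of handling these errors in Python, using requests and assuming the error body parses as the JSON format shown above; the branching simply mirrors the table:

```python
# Sketch: inspect an error response in the documented format.
import requests

response = requests.post(
    "https://id.2389.ai/v1/chat/completions",
    headers={
        "Authorization": "Bearer sk-2389-YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello!"}]},
)

if response.status_code != 200:
    error = response.json()["error"]
    if response.status_code == 429:
        print("Rate limited:", error["message"])  # back off and retry later
    elif response.status_code in (400, 404):
        print(f"Request problem ({error['code']}):", error["message"])
    else:  # 401, 500, or anything unexpected
        print(f"{error['type']}:", error["message"])
```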