# Chat Completions

Create chat completions for conversations with AI models. Supports streaming and non-streaming modes.

## Endpoint

```
POST /proxy/v1/chat/completions
```

## Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `qwen/qwen3-32b`) |
| `messages` | array | Yes | Array of message objects |
| `stream` | boolean | No | Enable streaming (default: `false`) |
| `temperature` | number | No | Controls randomness, 0-2 (default: 1.0) |
| `max_tokens` | number | No | Maximum number of tokens to generate |
| `top_p` | number | No | Nucleus sampling (default: 1.0) |
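
The optional parameters can be combined in a single request body. A minimal sketch with illustrative (non-default) values:

```python
# Illustrative request body combining the optional sampling parameters.
body = {
    "model": "qwen/qwen3-32b",
    "messages": [{"role": "user", "content": "Tell me a joke."}],
    "temperature": 0.7,  # 0-2; lower values give more deterministic output
    "max_tokens": 256,   # hard cap on the number of generated tokens
    "top_p": 0.9,        # nucleus sampling: restrict to the top 90% probability mass
}
```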

### Message Object

| Field | Type | Description |
|---|---|---|
| `role` | string | One of `user`, `assistant`, or `system` |
| `content` | string | The message content |
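
A conversation is just these objects in chronological order. A sketch of a typical multi-turn `messages` array (contents are illustrative):

```python
# Multi-turn history: the model answers the final user message in context.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why did the scarecrow win an award? ..."},
    {"role": "user", "content": "Explain why that's funny."},
]
```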

## Example Request

**cURL**

```bash
curl https://ai.hackclub.com/proxy/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen/qwen3-32b",
    "messages": [
      {"role": "user", "content": "Tell me a joke."}
    ]
  }'
```

**JavaScript**

```javascript
import { OpenRouter } from "@openrouter/sdk";

const client = new OpenRouter({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://ai.hackclub.com/proxy/v1",
});

const response = await client.chat.send({
  model: "qwen/qwen3-32b",
  messages: [
    { role: "user", content: "Tell me a joke." },
  ],
  stream: false,
});

console.log(response.choices[0].message.content);
```

**Python**

```python
from openrouter import OpenRouter

client = OpenRouter(
    api_key="YOUR_API_KEY",
    server_url="https://ai.hackclub.com/proxy/v1",
)

response = client.chat.send(
    model="qwen/qwen3-32b",
    messages=[
        {"role": "user", "content": "Tell me a joke."}
    ],
    stream=False,
)

print(response.choices[0].message.content)

```

## Example Response

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "qwen/qwen3-32b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why did the scarecrow win an award? Because he was outstanding in his field!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 15,
    "total_tokens": 35
  }
}
```

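To pull the remaining fields out of a parsed response, index into `choices` and `usage` as below; the dict literal mirrors the example above, and `total_tokens` is simply `prompt_tokens + completion_tokens` (20 + 15 = 35):

```python
# Navigating the non-streaming response shape shown above.
response = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Why did the scarecrow win an award? ..."},
        "finish_reason": "stop",  # "stop" means the model ended naturally
    }],
    "usage": {"prompt_tokens": 20, "completion_tokens": 15, "total_tokens": 35},
}

content = response["choices"][0]["message"]["content"]
usage = response["usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```
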
## Streaming

Set `stream: true` to receive the response as server-sent events (SSE). Each event carries a chunk whose `delta` holds the next piece of the message; concatenate the deltas to reconstruct the full response.
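
A minimal sketch of consuming the stream over raw HTTP with Python's `requests`, assuming the proxy uses the OpenAI-compatible SSE framing (`data: `-prefixed JSON lines, terminated by `data: [DONE]`):

```python
import json
import requests

# Request a streamed completion; stream=True keeps the HTTP connection open.
resp = requests.post(
    "https://ai.hackclub.com/proxy/v1/chat/completions",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "qwen/qwen3-32b",
        "messages": [{"role": "user", "content": "Tell me a joke."}],
        "stream": True,
    },
    stream=True,
)

for line in resp.iter_lines():
    if not line:
        continue
    text = line.decode("utf-8")
    if not text.startswith("data: "):
        continue  # skip SSE comments / keep-alive lines
    payload = text[len("data: "):]
    if payload == "[DONE]":
        break  # end-of-stream sentinel
    chunk = json.loads(payload)
    for choice in chunk.get("choices", []):
        # Each chunk's delta carries the next piece of the message.
        print(choice["delta"].get("content", ""), end="", flush=True)
print()
```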