# Completions API
Inline code completion and OpenAI-compatible chat completion endpoints for drop-in integration with editors and tooling.
## Inline Completions

Generate a code completion for a cursor position, represented by the text before the cursor (prefix) and the text after it (suffix).

```http
POST /complete
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```
### Request

```json
{
  "prefix": "def calculate_total(items):\n    total = 0\n    for item in items:\n        total += ",
  "suffix": "\n    return total",
  "language": "python",
  "file_path": "src/utils.py"
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| prefix | string | Yes | All text before the cursor |
| suffix | string | Yes | All text after the cursor |
| language | string | Yes | Language identifier (e.g. typescript, python, go) |
| file_path | string | No | Relative path of the file — used for additional context |
### Response

```json
{
  "completion": "item.price * item.quantity",
  "finish_reason": "stop"
}
```
| Field | Description |
|-------|-------------|
| completion | The text to insert at the cursor position |
| finish_reason | "stop" — model finished naturally; "length" — truncated at token limit |
Inline completions are optimised for low latency. The model uses a fill-in-the-middle (FIM) prompt format internally — you only need to supply the raw prefix and suffix.
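As an illustrative sketch of calling `/complete` without an SDK: the helper below builds the request body shown above and POSTs it with the standard library. The service root `https://api.misar.dev` is an assumption inferred from the SDK examples later in this page, and `build_request`/`complete` are hypothetical helper names, not part of any official client.

```python
import json
import urllib.request

API_ROOT = "https://api.misar.dev"  # assumed root; matches the base URL in the SDK examples below


def build_request(prefix, suffix, language, file_path=None):
    """Assemble the /complete request body; file_path is optional extra context."""
    body = {"prefix": prefix, "suffix": suffix, "language": language}
    if file_path is not None:
        body["file_path"] = file_path
    return body


def complete(api_key, **fields):
    """POST to /complete and return (completion text, finish_reason)."""
    req = urllib.request.Request(
        f"{API_ROOT}/complete",
        data=json.dumps(build_request(**fields)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    # finish_reason "length" means the completion was truncated at the token
    # limit and may stop mid-expression; "stop" means a natural end.
    return data["completion"], data["finish_reason"]
```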
## OpenAI-Compatible Chat Completions

A drop-in endpoint for tools and libraries that target the OpenAI Chat Completions API.

```http
POST /v1/chat/completions
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```
### Request

```json
{
  "model": "misar-default",
  "messages": [
    { "role": "system", "content": "You are a helpful coding assistant." },
    { "role": "user", "content": "Explain what a closure is in JavaScript." }
  ],
  "max_tokens": 1024,
  "temperature": 0.2,
  "stream": false
}
```
### Response (non-streaming)

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1740000000,
  "model": "misar-default",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A closure is a function that retains access to..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 198,
    "total_tokens": 240
  }
}
```
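The fields above can be read straight off the response object. As a minimal sketch (the helper name is hypothetical, not part of any SDK), pulling out the assistant text and the billed token count from the non-streaming shape:

```python
def summarize_response(resp: dict) -> tuple[str, int]:
    """Return (assistant text, total tokens billed) from a chat.completion object."""
    content = resp["choices"][0]["message"]["content"]
    total_tokens = resp["usage"]["total_tokens"]
    return content, total_tokens
```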
### Streaming

Set `"stream": true` to receive Server-Sent Events in the standard OpenAI delta format:

```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"delta":{"content":"A closure"},"index":0}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"delta":{"content":" is a"},"index":0}]}

data: [DONE]
```
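If you are consuming the stream without an SDK, chunks in this format can be accumulated with a small parser. This is a sketch under the chunk shape shown above; real streams may also carry role deltas and `finish_reason` fields, which this helper simply skips.

```python
import json


def accumulate_sse(lines):
    """Concatenate delta content from chat.completion.chunk SSE lines.

    `lines` are raw event lines such as 'data: {...}' or 'data: [DONE]'.
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separators and keep-alive comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)
```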
**Python**

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.misar.dev/v1"
)

response = client.chat.completions.create(
    model="misar-default",
    messages=[{"role": "user", "content": "Write a binary search in Python"}],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```
**TypeScript**

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.misar.dev/v1",
});

const stream = await client.chat.completions.create({
  model: "misar-default",
  messages: [{ role: "user", content: "Write a binary search in TypeScript" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```
**cURL**

```bash
curl https://api.misar.dev/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "misar-default",
    "messages": [{"role": "user", "content": "Write a binary search in Python"}],
    "stream": true
  }'
```
Use `base_url="https://api.misar.dev/v1"` with any OpenAI-compatible SDK. The endpoint accepts the same request shape and returns the same response format.