# Streaming Overview

Real-time Server-Sent Events (SSE) streaming in the MisarBlog API.
Some MisarBlog API endpoints stream their responses using Server-Sent Events (SSE): the response body is a `text/event-stream` in which data arrives incrementally rather than all at once.
## Streaming Endpoints
| Endpoint | Description |
|----------|-------------|
| `POST /ai/research` | Stream AI-generated research on a topic |
| `GET /ai/chat` | Stream conversational AI responses (editor assistant) |
| `GET /ai/generate` | Stream AI content generation (article sections) |
## Event Format
Every streamed response follows this format:
```
data: {"chunk": "...partial text..."}

data: {"chunk": "...more text..."}

data: [DONE]
```
- Each line beginning with `data: ` contains a JSON object with a `chunk` field
- `data: [DONE]` signals the end of the stream; stop reading after this event
- Empty lines separate events (standard SSE protocol)
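These rules can be captured in a small buffering parser. The helper below is an illustrative sketch (`createSSEParser` is not part of any SDK): it accepts raw text chunks and returns the complete `data:` payloads they contain, holding any partial trailing line until the next chunk arrives.

```typescript
// Buffering SSE parser: network chunks can split an event mid-line,
// so incomplete trailing lines are kept until more data arrives.
function createSSEParser(): (text: string) => string[] {
  let buffer = "";
  return (text: string): string[] => {
    buffer += text;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the (possibly partial) last line
    return lines
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice(6).trim());
  };
}
```

An event split across two chunks is only emitted once its terminating newline has been received, which is what makes the parser safe to drive directly from a network stream.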
## Reading a Stream
### fetch + ReadableStream (recommended)
```typescript
async function streamResearch(topic: string, apiKey: string) {
  const res = await fetch("https://api.misar.io/blog/v1/ai/research", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      Accept: "text/event-stream",
    },
    body: JSON.stringify({ topic, depth: "detailed" }),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Process only complete lines; keep any partial line in the buffer
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice(6).trim();
      if (payload === "[DONE]") return;
      const data = JSON.parse(payload);
      if (data.error) throw new Error(`Stream error: ${data.error}`);
      process.stdout.write(data.chunk);
    }
  }
}
```
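To stop a stream early (for example, when the user navigates away), pass an `AbortSignal` to `fetch`. This is a standard fetch pattern rather than a MisarBlog-specific API; the sketch below uses an illustrative name, `streamWithCancel`, and omits the body-reading loop shown above.

```typescript
// Sketch: cancelling an in-flight stream with AbortController.
// Aborting the signal rejects the pending fetch/read with an AbortError.
async function streamWithCancel(
  url: string,
  apiKey: string,
  body: unknown,
  signal: AbortSignal
): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      Accept: "text/event-stream",
    },
    body: JSON.stringify(body),
    signal, // wire the abort signal into the request
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  // Read res.body exactly as in the example above; reader.read()
  // rejects once the signal is aborted.
}
```

A caller creates an `AbortController`, passes `controller.signal` in, and calls `controller.abort()` to stop the stream mid-flight.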
### Node.js (server-side)
```typescript
import https from "node:https";

function streamResearch(topic: string, apiKey: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const body = JSON.stringify({ topic, depth: "detailed" });
    let result = "";
    let buffer = ""; // holds partial lines across chunk boundaries

    const req = https.request(
      {
        hostname: "api.misar.io",
        path: "/blog/v1/ai/research",
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(body),
          Accept: "text/event-stream",
        },
      },
      (res) => {
        res.setEncoding("utf8");
        res.on("data", (chunk: string) => {
          buffer += chunk;
          const lines = buffer.split("\n");
          buffer = lines.pop() ?? ""; // keep any incomplete trailing line
          for (const line of lines) {
            if (!line.startsWith("data: ")) continue;
            const payload = line.slice(6).trim();
            if (payload === "[DONE]") return resolve(result);
            try {
              result += JSON.parse(payload).chunk;
            } catch {
              // ignore malformed events
            }
          }
        });
        res.on("end", () => resolve(result));
      }
    );
    req.on("error", reject);
    req.write(body);
    req.end();
  });
}
```
## Error Handling
Errors during streaming are delivered as a final `data` event before the stream closes:

```
data: {"error": "Rate limit exceeded"}

data: [DONE]
```
Always check for an `error` field in the parsed JSON alongside `chunk`.
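One way to fold that check into a single payload handler is sketched below; `handlePayload` is an illustrative name, not part of the API.

```typescript
// Parse one SSE payload, surfacing streamed errors as exceptions.
// Returns the text chunk, or null when the stream has ended.
function handlePayload(payload: string): string | null {
  if (payload === "[DONE]") return null; // end of stream
  const data = JSON.parse(payload);
  if (data.error) throw new Error(`Stream error: ${data.error}`);
  return data.chunk;
}
```

Throwing on the `error` field lets the surrounding read loop use ordinary try/catch instead of checking every event inline.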
## Next Steps
- AI Research endpoint: full reference for `/ai/research`
- AI Tools: non-streaming AI endpoints