# Chat Integration
Integrate real-time AI-powered chat into your web or mobile application using the SensAI streaming API.
## Overview
SensAI's chat streaming endpoint enables you to build real-time conversational interfaces powered by AI. The API uses server-sent events (SSE) to deliver response tokens as they are generated, providing a responsive and interactive user experience.
This guide walks you through integrating the chat API into a web application using JavaScript.
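On the wire, the stream is a series of SSE `data:` lines, each carrying a JSON payload. The exact payload shape shown below is an assumption inferred from the parsing code later in this guide (a JSON object with `content` and `done` fields):

```
data: {"content": "Hello", "done": false}

data: {"content": ", world", "done": false}

data: {"done": true}
```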
## Basic Integration

### Setting Up the Stream

Use the native `fetch` API to connect to the chat streaming endpoint:
```javascript
async function streamChat(messages, onToken) {
  const response = await fetch("https://api.sensai.jmrinfotech.com/api/v1/chat/stream", {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o",
      messages,
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Buffer partial chunks: a single SSE line can be split across reads.
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next read

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const data = JSON.parse(line.slice(6));
      if (!data.done) {
        // Hand each token to the caller, e.g. to append to the UI
        onToken(data.content);
      }
    }
  }
}
```

## Managing Conversation History
To maintain context across multiple turns, accumulate messages in an array and send the full history with each request:
```javascript
const conversation = [
  { role: "system", content: "You are a helpful assistant." },
];

async function sendMessage(userMessage) {
  conversation.push({ role: "user", content: userMessage });

  let assistantMessage = "";
  // Stream the response and collect the full text
  // ... (using the streaming logic above)

  conversation.push({ role: "assistant", content: assistantMessage });
}
```

Keep the conversation history trimmed to stay within the model's context window. For long conversations, consider summarizing older messages or implementing a sliding window strategy.
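A sliding window can be sketched as a helper that keeps the system prompt plus the most recent turns. Here `maxMessages` is an assumed tuning knob, not part of the SensAI API:

```javascript
// Sliding-window trim: always keep the system prompt,
// then only the most recent maxMessages turns.
function trimConversation(conversation, maxMessages = 20) {
  const [system, ...rest] = conversation;
  if (rest.length <= maxMessages) return conversation;
  return [system, ...rest.slice(rest.length - maxMessages)];
}
```

Call this before each request so the payload never grows unboundedly; tune `maxMessages` to your model's context window.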
## React Integration
For React applications, you can build a custom hook that manages the streaming state:
```typescript
import { useState, useCallback } from "react";

function useChatStream() {
  const [messages, setMessages] = useState<Array<{ role: string; content: string }>>([]);
  const [isStreaming, setIsStreaming] = useState(false);

  const sendMessage = useCallback(async (content: string) => {
    setIsStreaming(true);
    const newMessages = [...messages, { role: "user", content }];
    setMessages(newMessages);
    try {
      // Connect to streaming endpoint and update state as tokens arrive
      // ... streaming logic here
    } finally {
      // Reset even if the request fails, so the UI never gets stuck
      setIsStreaming(false);
    }
  }, [messages]);

  return { messages, sendMessage, isStreaming };
}
```

## Error Handling
Always implement error handling for network failures, rate limiting, and invalid responses:
- Network errors — Retry with exponential backoff for transient failures
- 429 Too Many Requests — Back off and retry after the period specified in the `Retry-After` header
- 401 Unauthorized — Check that your API key is valid and included in the request headers
- 500 Server Error — Log the error and notify the user. Retry after a brief delay.
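The retry strategy above can be sketched as a wrapper around `fetch`. The retry counts, delays, and the set of retryable status codes here are illustrative assumptions; tune them for your application:

```javascript
// Retry with exponential backoff for transient failures.
// fetchFn is injectable so the logic can be tested without a network.
async function fetchWithRetry(url, options, { retries = 3, baseDelayMs = 500, fetchFn = fetch } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await fetchFn(url, options);
      // Treat 429 and 5xx as retryable; everything else is returned as-is.
      if (response.status === 429 || response.status >= 500) {
        if (attempt === retries) return response;
        // Honor Retry-After when the server provides it, else back off exponentially.
        const retryAfter = response.headers?.get?.("Retry-After");
        const delayMs = retryAfter ? Number(retryAfter) * 1000 : baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delayMs));
        continue;
      }
      return response;
    } catch (err) {
      // Network error: retry until attempts are exhausted.
      if (attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```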
## Performance Tips
- Use streaming for all user-facing chat interfaces to minimize perceived latency
- Choose faster models (GPT-4o Mini, Claude 3.5 Haiku) for real-time applications where speed matters more than depth
- Implement request cancellation so users can abort long-running responses
- Cache system prompts and avoid re-sending static configuration on every request where possible
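Request cancellation can be wired up with the standard `AbortController`. A minimal sketch (the helper name is hypothetical):

```javascript
// Wrap an AbortController so UI code can cancel an in-flight stream.
function createCancellation() {
  const controller = new AbortController();
  return {
    signal: controller.signal,        // pass to fetch(url, { signal })
    cancel: () => controller.abort(), // call when the user clicks "Stop"
  };
}
```

Pass `signal` in the `fetch` options when starting the stream; calling `cancel()` makes the pending `fetch` (and any in-progress reads) reject with an `AbortError`, which you can catch to reset the UI.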