Endpoints
Detailed reference for all SensAI API endpoints including chat streaming, model listing, and health checks.
Base URL
All endpoints are prefixed with:
https://api.sensai.jmrinfotech.com/api/v1

POST /chat/stream
Stream a chat completion response from an AI model in real time using server-sent events (SSE).
Request Body
```json
{
  "model": "openai/gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Explain quantum computing in simple terms." }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```

| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The model identifier (e.g., openai/gpt-4o, anthropic/claude-3.5-sonnet) |
| messages | array | Yes | Array of message objects with role and content fields |
| temperature | number | No | Sampling temperature between 0 and 2. Defaults to 0.7 |
| max_tokens | number | No | Maximum number of tokens to generate |
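A minimal client sketch for this endpoint, using only the Python standard library. The Bearer authorization header is an assumption based on common practice (this page does not specify the auth scheme), and the SSE parsing helper assumes each event is a single `data: {...}` line as shown under Response below.

```python
import json
import urllib.request

API_BASE = "https://api.sensai.jmrinfotech.com/api/v1"

def parse_sse_line(line: str):
    """Parse one 'data: {...}' SSE line into (content, done).

    Returns None for lines that are not data events (blank keep-alives, comments).
    """
    if not line.startswith("data:"):
        return None
    payload = json.loads(line[len("data:"):].strip())
    return payload.get("content", ""), payload.get("done", False)

def stream_chat(api_key, messages, model="openai/gpt-4o",
                temperature=0.7, max_tokens=1024):
    """Yield content chunks from POST /chat/stream until the done event.

    NOTE: the Authorization header format is an assumption, not from this page.
    """
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/chat/stream",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # iterate the response line by line as it streams
            parsed = parse_sse_line(raw.decode("utf-8").strip())
            if parsed is None:
                continue
            content, done = parsed
            if done:
                break
            yield content
```

In use, `"".join(stream_chat(key, msgs))` assembles the full reply, while iterating the generator directly lets you render tokens as they arrive.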
Response
Returns a stream of server-sent events. Each event contains a chunk of the generated response:
```
data: {"content": "Quantum", "done": false}
data: {"content": " computing", "done": false}
data: {"content": " uses...", "done": false}
data: {"content": "", "done": true}
```

GET /chat/models
Retrieve the curated list of AI models available through SensAI.
Response
```json
{
  "models": [
    {
      "id": "openai/gpt-4o",
      "name": "GPT-4o",
      "provider": "OpenAI",
      "context_length": 128000
    }
  ]
}
```

This endpoint returns cached results and does not require authentication for public model listing.
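Since no authentication is required, fetching and filtering the listing is straightforward. A short sketch using the standard library; the `models_with_context` helper is illustrative and relies only on the field names shown in the response above.

```python
import json
import urllib.request

API_BASE = "https://api.sensai.jmrinfotech.com/api/v1"

def fetch_models():
    """GET /chat/models -- no auth header needed for the public listing."""
    with urllib.request.urlopen(f"{API_BASE}/chat/models") as resp:
        return json.load(resp)

def models_with_context(listing, min_context):
    """Return the ids of models whose context window is at least min_context tokens."""
    return [m["id"] for m in listing["models"]
            if m["context_length"] >= min_context]
```

For example, `models_with_context(fetch_models(), 100000)` would select only long-context models such as `openai/gpt-4o`.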
GET /health
Check the health status of the SensAI API service.
Response
```json
{
  "status": "healthy",
  "version": "1.0.0"
}
```

This endpoint does not require authentication and is useful for monitoring and uptime checks.
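A monitoring probe can be sketched as follows with the standard library. The timeout value is illustrative, and the check treats any status other than "healthy" as unhealthy, which is an assumption (this page documents only the healthy response).

```python
import json
import urllib.request

API_BASE = "https://api.sensai.jmrinfotech.com/api/v1"

def is_healthy(payload):
    """Interpret the /health body: True only when status == 'healthy' (assumed)."""
    return payload.get("status") == "healthy"

def check_health(timeout=5.0):
    """GET /health and return (healthy, version); no auth header required."""
    with urllib.request.urlopen(f"{API_BASE}/health", timeout=timeout) as resp:
        payload = json.load(resp)
    return is_healthy(payload), payload.get("version")
```

An uptime monitor would typically call `check_health()` on a fixed interval and alert when it raises (network error, non-2xx status) or returns `False`.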