SensAI Docs

Working with AI Agents

Learn how to configure, customize, and deploy SensAI's pre-built AI agents for various business and development tasks.

What Are SensAI Agents?

SensAI agents are pre-configured AI assistants designed for specific tasks and domains. Each agent comes with a tailored system prompt, recommended model selection, and a set of capabilities optimized for its intended use case. You can use agents out of the box or customize them to fit your requirements.

The SensAI Agent Studio currently offers six specialized agents spanning code review, data analysis, content generation, customer support, research assistance, and workflow automation.

Agent Configuration

Each agent is defined by the following properties:

  • Name and Slug — A human-readable name and a URL-friendly identifier used in API requests
  • System Prompt — The base instructions that shape the agent's behavior and personality
  • Recommended Model — The AI model best suited for the agent's task profile
  • Features — A list of capabilities the agent supports (e.g., code analysis, document summarization)
  • Use Cases — Example scenarios where the agent excels
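One way to model these properties client-side is a small dataclass whose fields mirror the list above. This is an illustrative sketch, not an official SensAI SDK type; the concrete name, slug, and prompt values are examples.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Client-side representation of a SensAI agent's properties."""
    name: str               # human-readable name
    slug: str               # URL-friendly identifier used in API requests
    system_prompt: str      # base instructions shaping behavior and tone
    recommended_model: str  # model best suited to the agent's task profile
    features: list = field(default_factory=list)   # supported capabilities
    use_cases: list = field(default_factory=list)  # example scenarios

# Example instance (values are illustrative)
code_reviewer = AgentConfig(
    name="Code Review Assistant",
    slug="code-review",
    system_prompt="You are a code review assistant...",
    recommended_model="anthropic/claude-3.5-sonnet",
    features=["code analysis"],
    use_cases=["Review a pull request for potential issues"],
)
```

The slug is what you would reference in API requests, while the remaining fields stay on your side of the integration.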

System Prompt Design

The system prompt is the most critical component of agent configuration. It defines the agent's persona, sets boundaries for its responses, and establishes the tone and format of its output. When writing a system prompt, be explicit about what the agent should and should not do, and provide examples of ideal responses where possible.
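As a sketch of these guidelines, the prompt below spells out a persona, explicit do/don't boundaries, and an output format. The wording is illustrative, not a shipped SensAI prompt.

```python
# An example system prompt with an explicit persona, boundaries,
# and output format. The content is illustrative only.
CODE_REVIEW_PROMPT = """\
You are a code review assistant. You review Python code for bugs,
security issues, and style problems.

Do:
- Cite the specific line or function each comment refers to.
- Suggest a concrete fix for every issue you raise.

Do not:
- Rewrite the entire file; comment on the submitted code only.
- Speculate about code you cannot see.

Format your response as a bulleted list, most severe issues first.
"""
```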

Using an Agent

To interact with an agent, send a chat request to the streaming endpoint with the agent's system prompt included as the first message:

curl -X POST https://api.sensai.jmrinfotech.com/api/v1/chat/stream \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [
      { "role": "system", "content": "You are a code review assistant..." },
      { "role": "user", "content": "Review this Python function for potential issues..." }
    ]
  }'

The agent's system prompt establishes the context and constraints for the conversation, ensuring consistent and relevant responses.
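If you are calling the endpoint from application code rather than curl, the same request body can be composed programmatically, with the agent's system prompt always prepended as the first message. The helper name below is hypothetical; only the payload shape matches the curl example above.

```python
import json

def build_chat_payload(system_prompt, user_message,
                       model="anthropic/claude-3.5-sonnet"):
    """Compose the JSON body for a chat/stream request, placing the
    agent's system prompt first so it frames the whole conversation."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# Serialize for the POST body (matches the -d argument in the curl example)
body = json.dumps(build_chat_payload(
    "You are a code review assistant...",
    "Review this Python function for potential issues...",
))
```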

Customizing Agents

You can customize any pre-built agent by modifying its system prompt, changing the model, or adjusting parameters like temperature and max tokens. For production deployments, we recommend storing your custom agent configurations server-side and referencing them by slug in your application code.
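A minimal sketch of that server-side pattern is a registry keyed by slug, with per-request overrides applied on top. The registry contents, parameter values, and `resolve_agent` helper are all illustrative assumptions, not a SensAI API.

```python
# Hypothetical server-side store of customized agent configurations,
# keyed by slug. Values are illustrative.
AGENT_REGISTRY = {
    "code-review": {
        "model": "anthropic/claude-3.5-sonnet",
        "system_prompt": "You are a code review assistant...",
        "temperature": 0.3,
        "max_tokens": 2048,
    },
}

def resolve_agent(slug, **overrides):
    """Look up an agent by slug and apply request-time overrides
    without mutating the stored configuration."""
    config = dict(AGENT_REGISTRY[slug])  # shallow copy
    config.update(overrides)
    return config
```

Application code then only needs to know the slug, so prompt and parameter changes can be rolled out without redeploying clients.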

Best Practices

  • Start with a pre-built agent and iterate on the system prompt based on real usage patterns
  • Use lower temperature values (0.3-0.5) for tasks requiring precision, such as code review or data analysis
  • Use higher temperature values (0.7-1.0) for creative tasks like content generation
  • Monitor agent performance and adjust model selection based on quality and latency requirements
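The temperature guidance above can be sketched as a simple lookup, defaulting to a middle value for unlisted tasks. The task names and exact values here are illustrative choices within the ranges given.

```python
# Illustrative mapping from task type to sampling temperature,
# following the precision-vs-creativity guidance above.
TEMPERATURE_BY_TASK = {
    "code-review": 0.3,        # precision task
    "data-analysis": 0.4,      # precision task
    "content-generation": 0.8, # creative task
}
DEFAULT_TEMPERATURE = 0.7

def temperature_for(task):
    """Return a reasonable temperature for a task type."""
    return TEMPERATURE_BY_TASK.get(task, DEFAULT_TEMPERATURE)
```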

Performance Tuning

When optimizing agent performance, consider the trade-off between response quality and latency. Larger models tend to produce more nuanced responses but take longer to generate output. For latency-sensitive applications like live customer support, pair a fast model with a concise system prompt to keep response times under acceptable thresholds.
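One way to encode that trade-off is to pick the model from a latency budget. The threshold and the fast-tier model slug below are assumptions for illustration, not SensAI recommendations; only `anthropic/claude-3.5-sonnet` appears elsewhere in this guide.

```python
def pick_model(latency_budget_ms):
    """Sketch: choose a fast model when the latency budget is tight,
    a larger model otherwise. Threshold and fast-tier slug are
    illustrative assumptions."""
    if latency_budget_ms < 2000:
        return "anthropic/claude-3.5-haiku"   # hypothetical fast-tier choice
    return "anthropic/claude-3.5-sonnet"      # larger, more nuanced output
```

For a live-support agent, you would pair the fast-tier choice with a short system prompt, as noted above, to keep end-to-end response times within your threshold.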