Overview
AI chat interface with multi-model support, tool calling, and RAG citations.
The chat module is the core user-facing feature of HyperSaaS. It provides a rich AI conversation interface with support for multiple AI providers, agent frameworks, knowledge base search, a document viewer, and location mapping.
Component Architecture
ChatSessionUI (container)
│
├── ChatSessionHeader
│ ├── Session name + sidebar toggle
│ ├── Document status badge
│ ├── Map toggle
│ └── Settings sheet trigger
│
├── ResizablePanels (desktop)
│ ├── Panel 1: MessageList
│ ├── Panel 2: ChatMapGoogle (optional)
│ └── Panel 3: ChatDocumentWorkspace (optional)
│
├── ChatPromptInput
│ ├── Textarea (Enter to send)
│ ├── Language selector
│ ├── Mic button (audio transcription)
│ ├── Model selector
│ └── Suggestions (empty chat only)
│
└── Floating Sheets
├── ChatSettingsForm (AI config + KB management)
└── LocationDetailView (location info)

Chat Session Flow
1. User creates chat session
├── Select AI provider + model
├── Choose agent framework (LangGraph / PydanticAI / None)
├── Set system prompt + parameters
└── Attach knowledge bases (optional)
2. User sends message
├── POST /api/.../messages/
├── Backend processes with selected handler
│ ├── Agent handler → tool calls → final response
│ └── Basic handler → direct LLM response
└── Response saved + returned
3. UI renders response
├── Markdown with syntax highlighting
├── Citation sources from RAG
├── Location tags (clickable → map)
├── Tool call chain-of-thought (expandable)
└── Map markers for locations

Key Features
Multi-Model Support
Users can switch models mid-conversation via the model selector dropdown:
| Provider | Models |
|---|---|
| OpenAI | GPT-4o, GPT-4 Turbo, o1, o3, o3-mini, o4-mini, gpt-5 |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6/4.5, Claude Haiku 4.5 |
| Google | Gemini 2.5 Pro, Gemini 2.5 Flash |
| Groq | Various open models |
| DeepSeek | DeepSeek models |
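Switching models mid-conversation amounts to two requests: update the session's model config, then send the next message against the new model. A minimal TypeScript sketch, assuming a REST shape where the session config is PATCHed and messages are POSTed under the session — the `sessionsBase` path and field names here are illustrative assumptions, not taken from this doc:

```typescript
// Hypothetical model-switch flow; only the POST .../messages/ route
// is implied by the doc, the rest is an assumed REST shape.
interface ModelSelection {
  ai_provider: "openai" | "anthropic" | "google" | "groq" | "deepseek";
  ai_model: string;
}

// Build the two requests a mid-conversation model switch implies.
function buildModelSwitch(
  sessionsBase: string,
  sessionId: string,
  selection: ModelSelection,
  nextMessage: string,
) {
  return [
    // 1. Persist the new provider/model on the session
    { method: "PATCH", url: `${sessionsBase}/${sessionId}/`, body: selection },
    // 2. Send the next user message; the backend handler picks up the new model
    {
      method: "POST",
      url: `${sessionsBase}/${sessionId}/messages/`,
      body: { content: nextMessage },
    },
  ] as const;
}
```

Keeping the switch as a session-level PATCH (rather than a per-message field) matches the fact that the session stores its AI configuration (see Configuration below).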
Agent Frameworks
Selectable at session creation:
| Framework | Description |
|---|---|
| LangGraph | Graph-based execution with tool calling loop |
| PydanticAI | PydanticAI agent with tool closures |
| None | Basic chat — direct LLM, no tools |
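The table above maps directly onto a dispatch step in the backend's message handling. A hedged sketch of what that dispatch might look like — the handler names are assumptions; only the three `agent_framework` values come from this doc:

```typescript
// The three framework values are from the session config enum;
// the handler names are hypothetical labels for illustration.
type AgentFramework = "langgraph" | "pydantic_ai" | "none";

function selectHandler(framework: AgentFramework): string {
  switch (framework) {
    case "langgraph":
      return "LangGraphAgentHandler"; // graph-based tool-calling loop
    case "pydantic_ai":
      return "PydanticAIAgentHandler"; // agent with tool closures
    case "none":
      return "BasicChatHandler"; // direct LLM call, no tools
  }
}
```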
Knowledge Base RAG
When knowledge bases are attached to a session, the agent can search documents for relevant context. RAG results appear as citation sources below the assistant's response.
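A plausible shape for those citation sources, plus a small helper the UI could use to keep the source list short — every field name here is an assumption, not the actual API contract:

```typescript
// Assumed shape of a RAG citation source attached to an assistant message.
interface CitationSource {
  documentId: string;
  documentName: string;
  page: number; // page containing the matched chunk
  chunkText: string; // excerpt shown under the response
  score: number; // retrieval similarity; higher is more relevant
}

// Deduplicate citations per document, keeping the best-scoring chunk,
// and order the remaining sources by relevance.
function topCitations(sources: CitationSource[]): CitationSource[] {
  const best = new Map<string, CitationSource>();
  for (const s of sources) {
    const cur = best.get(s.documentId);
    if (!cur || s.score > cur.score) best.set(s.documentId, s);
  }
  return Array.from(best.values()).sort((a, b) => b.score - a.score);
}
```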
Document Workspace
Clicking a citation opens the source document in a side panel. The PDF viewer highlights the relevant chunk and supports page navigation.
Location Mapping
When the AI mentions locations, they appear as clickable tags in the message. Clicking a tag shows the location on the Google Maps panel.
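Once a message's location tags are resolved to coordinates, they can be turned into marker options for the Google Maps panel. A minimal sketch, assuming tags carry a name and lat/lng (the `LocationTag` shape is an assumption; `position`/`title` mirror the real `google.maps.MarkerOptions` fields):

```typescript
// Assumed shape of a resolved location tag in an assistant message.
interface LocationTag {
  name: string;
  lat: number;
  lng: number;
}

// Convert tags to Google Maps-style marker options for the map panel.
function toMarkerOptions(tags: LocationTag[]) {
  return tags.map((t) => ({
    position: { lat: t.lat, lng: t.lng },
    title: t.name,
  }));
}
```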
Voice Input
The mic button records audio, sends it to the backend Whisper endpoint for transcription, and inserts the text into the prompt input.
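The recorded audio is typically shipped as multipart form data. A sketch of building that request — the part names (`file`, `language`) and the idea of forwarding the selected language are assumptions; the doc only states that audio goes to a Whisper endpoint:

```typescript
// Build the multipart body for the transcription request. The audio blob
// would come from MediaRecorder in the browser; part names are assumed.
function buildTranscriptionRequest(audio: Blob, language?: string): FormData {
  const form = new FormData();
  form.append("file", audio, "recording.webm");
  if (language) form.append("language", language); // from the language selector
  return form;
}
```

The returned `FormData` would then be POSTed to the backend, and the transcribed text inserted into the prompt input.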
Pages
| Route | Component | Description |
|---|---|---|
| .../chat | ChatSessionCreateForm | Create new chat session |
| .../chat/history | ChatSessionsList | List all sessions with search |
| .../chat/[chatId] | ChatSessionUI | Active chat conversation |
Configuration
Chat sessions store their AI configuration:
| Field | Type | Description |
|---|---|---|
| ai_provider | enum | OpenAI, Anthropic, Google, Groq, DeepSeek |
| ai_model | string | Model identifier |
| agent_framework | enum | langgraph, pydantic_ai, none |
| system_prompt | string | System message for the LLM |
| temperature | number | 0.0 to 2.0 |
| max_tokens | number | Response length limit |
| top_p | number | 0.0 to 1.0 |
| frequency_penalty | number | -2.0 to 2.0 |
| presence_penalty | number | -2.0 to 2.0 |
| model_parameters | JSON | Additional provider-specific params |
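The table above translates naturally into a typed config object, with the documented ranges enforced before saving. A sketch — the interface name and lowercase enum spellings are assumptions:

```typescript
// Typed view of the session AI configuration table; names are assumed.
interface ChatSessionConfig {
  ai_provider: "openai" | "anthropic" | "google" | "groq" | "deepseek";
  ai_model: string;
  agent_framework: "langgraph" | "pydantic_ai" | "none";
  system_prompt: string;
  temperature: number; // 0.0 to 2.0
  max_tokens: number; // response length limit
  top_p: number; // 0.0 to 1.0
  frequency_penalty: number; // -2.0 to 2.0
  presence_penalty: number; // -2.0 to 2.0
  model_parameters: Record<string, unknown>; // provider-specific extras
}

// Clamp sampling parameters into the documented ranges before saving.
function clampConfig(c: ChatSessionConfig): ChatSessionConfig {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(hi, Math.max(lo, v));
  return {
    ...c,
    temperature: clamp(c.temperature, 0, 2),
    top_p: clamp(c.top_p, 0, 1),
    frequency_penalty: clamp(c.frequency_penalty, -2, 2),
    presence_penalty: clamp(c.presence_penalty, -2, 2),
  };
}
```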