
Agent Handlers

Pluggable multi-framework agent architecture.

HyperSaaS uses a framework-agnostic agent pattern. The service layer never imports any AI framework — each handler converts its native output to a common AgentMessage at the boundary.

Architecture

services.py (zero framework imports)
        ▲ consumes
AgentMessage dataclass ← common boundary type
        ▲ produced by
        ├── LangGraphHandler → converts LangChain messages
        ├── PydanticAIHandler → converts PydanticAI results
        └── YourCustomHandler → convert anything

AgentMessage

The framework-agnostic message type defined in chat/handlers/base.py:

from dataclasses import dataclass

@dataclass
class AgentMessage:
    role: str               # "assistant" | "tool"
    content: str
    tool_calls: list[dict] | None = None  # [{"id", "name", "args"}]
    tool_call_id: str | None = None
    map_data: dict | None = None

BaseAgentHandler

The abstract base class all handlers implement:

from abc import ABC, abstractmethod
from collections.abc import AsyncGenerator

class BaseAgentHandler(ABC):
    def __init__(self, session: ChatSession):
        self.session = session

    @abstractmethod
    def invoke(self, user_message: str) -> tuple[list[AgentMessage], dict[str, int]]:
        """Synchronous invocation. Returns (messages, usage_dict)."""
        ...

    @abstractmethod
    async def astream(self, user_message: str) -> AsyncGenerator[tuple[str, dict], None]:
        """Async streaming. Yields (chunk, usage) tuples."""
        ...
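A minimal stub (hypothetical, not part of the repo) illustrates the call shape both methods must satisfy:

```python
import asyncio

# Hypothetical stub with canned output, showing the contract only:
# invoke() returns (messages, usage_dict); astream() yields (chunk, usage) tuples.
class EchoHandler:
    def __init__(self, session=None):
        self.session = session

    def invoke(self, user_message: str):
        messages = [{"role": "assistant", "content": user_message}]
        usage = {"input_tokens": 1, "output_tokens": 1}
        return messages, usage

    async def astream(self, user_message: str):
        for word in user_message.split():
            yield word, {}  # per-chunk usage may stay empty until the final chunk

async def collect(handler, text):
    return [chunk async for chunk, _usage in handler.astream(text)]

chunks = asyncio.run(collect(EchoHandler(), "streamed three chunks"))
```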

LangGraph Handler

Located in chat/handlers/langgraph/. Uses LangGraph's graph-based execution with tool calling.

Key files:

  • handler.py — LangGraphHandler(BaseAgentHandler), converts output to AgentMessage
  • graph.py — Defines the LangGraph state machine (llm node → tool node → loop)
  • nodes.py — LLM call node, tool execution node, routing logic
  • state.py — Graph state definition
  • tools.py — Wraps plain Python tools with LangChain @tool decorators

Flow:

User message → LLM node → should_continue? → tool node → LLM node → ... → END

The LLM decides when to call tools. The graph loops until the LLM produces a final response without tool calls.
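The loop hinges on a routing predicate. A minimal sketch of that predicate (hypothetical names, not the repo's nodes.py), modeling messages as plain dicts instead of LangChain objects:

```python
END = "__end__"  # stand-in for LangGraph's END sentinel

def should_continue(state: dict) -> str:
    """Route to the tool node while the last LLM message requests tools."""
    last = state["messages"][-1]
    if last.get("tool_calls"):  # a LangChain AIMessage exposes .tool_calls similarly
        return "tools"          # execute tools, then loop back to the LLM node
    return END                  # final response with no tool calls: stop the graph
```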

PydanticAI Handler

Located in chat/handlers/pydantic_ai/. Uses PydanticAI's agent abstraction.

Key files:

  • handler.py — PydanticAIHandler(BaseAgentHandler), converts PydanticAI output to AgentMessage
  • agent.py — create_agent(session), builds a pydantic_ai.Agent with tool closures

PydanticAI uses "provider:model" format (e.g., "openai:gpt-4o", "anthropic:claude-sonnet-4-5"). The session is captured in closures for tool access.
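The closure pattern can be sketched without the framework (all names here are hypothetical; a real create_agent would register these callables as tools on a pydantic_ai.Agent):

```python
class FakeSession:
    """Stand-in for ChatSession, for illustration only."""
    def __init__(self, title: str):
        self.title = title

def build_tools(session):
    # Each tool closes over `session`, so the framework only ever sees plain
    # callables and never needs to import ChatSession itself.
    def current_session_title() -> str:
        """Tool: return the active chat session's title."""
        return session.title
    return [current_session_title]

tools = build_tools(FakeSession("Trip planning"))
```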

Handler Factory

# chat/handlers/__init__.py
FRAMEWORK_CHOICES = [
    ("none", "None"),
    ("langgraph", "LangGraph"),
    ("pydantic_ai", "PydanticAI"),
]

def get_agent_handler(session) -> BaseAgentHandler | None:
    framework = getattr(session, "agent_framework", "none")
    if framework == "langgraph":
        return LangGraphHandler(session)
    elif framework == "pydantic_ai":
        return PydanticAIHandler(session)
    return None
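On the calling side, the service layer touches only the factory and the boundary types. A sketch of that call site (run_turn is illustrative; a stub factory and handler stand in for the real ones so it runs standalone):

```python
from types import SimpleNamespace

class EchoHandler:
    # Stub handler so this sketch runs without any framework installed.
    def __init__(self, session):
        self.session = session
    def invoke(self, user_message):
        return [{"role": "assistant", "content": user_message}], {"input_tokens": 0, "output_tokens": 0}

def get_agent_handler(session):
    # Same dispatch shape as the real factory, with a stub framework name.
    if getattr(session, "agent_framework", "none") == "echo":
        return EchoHandler(session)
    return None

def run_turn(session, user_message: str):
    handler = get_agent_handler(session)
    if handler is None:
        return [], {}  # framework "none": no agent configured for this session
    return handler.invoke(user_message)

messages, usage = run_turn(SimpleNamespace(agent_framework="echo"), "hello")
```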

Adding a New Framework

  1. Create chat/handlers/your_framework/handler.py:
class YourHandler(BaseAgentHandler):
    def invoke(self, user_message: str):
        # Call your framework
        result = your_framework.run(user_message)

        # Convert to AgentMessage at the boundary
        messages = [AgentMessage(role="assistant", content=result.text)]
        usage = {"input_tokens": result.input_tokens, "output_tokens": result.output_tokens}
        return messages, usage

    async def astream(self, user_message: str):
        async for chunk in your_framework.stream(user_message):
            yield chunk.text, {}
  2. Register it in chat/handlers/__init__.py:
elif framework == "your_framework":
    return YourHandler(session)
  3. Add the choice to FRAMEWORK_CHOICES and run makemigrations.

The tools, service layer, and frontend all work without changes.
