
AI Model Handlers

Provider-specific LLM handlers for basic chat (no tools).

When agent_framework is set to "none", HyperSaaS falls back to basic chat using provider-specific handlers, which make direct LLM calls without tool support.

BaseChatModelHandler

Defined in chat/ai_models.py, this base class handles:

  • Message history preparation (last 20 messages)
  • Token estimation with tiktoken
  • Usage extraction from LLM responses
  • Synchronous invoke() and async astream() methods
  • Structured output via invoke_structured() with Pydantic schemas
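The history-preparation and token-estimation responsibilities above can be sketched roughly as follows. This is a simplified illustration, not the actual `BaseChatModelHandler` code: the function names, the dict-based message shape, and the chars-per-token fallback are assumptions (the real class uses tiktoken for estimation).

```python
# Hypothetical sketch of the base handler's history preparation.
HISTORY_LIMIT = 20  # per the docs: only the last 20 messages are sent

def prepare_messages(system_prompt, history):
    """Prepend the system prompt and keep only the last 20 history messages."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history[-HISTORY_LIMIT:])
    return messages

def estimate_tokens(messages):
    # Rough ~4-chars-per-token fallback; the real handler uses tiktoken.
    return sum(len(m["content"]) for m in messages) // 4
```

Capping the history keeps the prompt within context limits at the cost of older conversational context.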

Provider Handlers

| Provider | Handler Class | LLM Library |
|----------|---------------|-------------|
| OpenAI | OpenAIModelHandler | langchain-openai (ChatOpenAI) |
| Anthropic | AnthropicModelHandler | langchain-anthropic (ChatAnthropic) |
| Google | GoogleModelHandler | langchain-google-genai (ChatGoogleGenerativeAI) |
| Groq | GroqModelHandler | langchain-openai with Groq endpoint |
| DeepSeek | DeepSeekModelHandler | langchain-openai with DeepSeek endpoint |

Handler Resolution

def get_ai_model_handler(session):
    # 1. Check if agent framework is set
    if session.agent_framework != "none":
        handler = get_agent_handler(session)
        if handler:
            return handler

    # 2. Fall back to provider-specific basic chat handler
    provider = session.ai_provider
    HANDLER_MAP = {
        "openai": OpenAIModelHandler,
        "anthropic": AnthropicModelHandler,
        "google": GoogleModelHandler,
        "groq": GroqModelHandler,
        "deepseek": DeepSeekModelHandler,
    }
    return HANDLER_MAP[provider](session)
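The dispatch pattern in the fallback step can be reproduced as a small runnable sketch. The stub classes and the dict-keyed `session` below are placeholders for illustration, not HyperSaaS code; note that indexing the map directly, as the resolver does, raises `KeyError` for an unknown provider.

```python
# Minimal sketch of the provider-to-handler dispatch (stub classes, assumed names).
class BasicHandler:
    def __init__(self, session):
        self.session = session

class OpenAIModelHandler(BasicHandler):
    pass

class AnthropicModelHandler(BasicHandler):
    pass

HANDLER_MAP = {
    "openai": OpenAIModelHandler,
    "anthropic": AnthropicModelHandler,
}

def resolve(session):
    # Mirrors HANDLER_MAP[provider]: an unrecognized provider raises KeyError.
    # HANDLER_MAP.get(provider, SomeDefault) would make a fallback explicit.
    return HANDLER_MAP[session["ai_provider"]](session)
```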

Provider Configuration

Each handler reads its API key from settings:

| Provider | Setting | Env Variable |
|----------|---------|--------------|
| OpenAI | OPENAI_API_KEY | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY | ANTHROPIC_API_KEY |
| Google | GOOGLE_API_KEY | GOOGLE_API_KEY |
| Groq | GROQ_API_KEY | GROQ_API_KEY |
| DeepSeek | DEEPSEEK_API_KEY | DEEPSEEK_API_KEY |
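Since every setting name matches its environment variable, key lookup can be derived from the provider name. The helper below is a hypothetical sketch (the real handlers read from the project's settings module, not directly from `os.environ`):

```python
import os

def api_key_for(provider):
    # Assumed convention from the table above: <PROVIDER>_API_KEY.
    env_var = f"{provider.upper()}_API_KEY"
    return os.environ.get(env_var, "")
```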

Session Parameters

All handlers respect the session's AI configuration fields:

| Field | Description | Default |
|-------|-------------|---------|
| temperature | Randomness (0.0 to 2.0) | Provider default |
| max_tokens | Response length limit | Provider default |
| top_p | Nucleus sampling (0.0 to 1.0) | Provider default |
| frequency_penalty | Repetition penalty (-2.0 to 2.0) | 0 |
| presence_penalty | Topic diversity (-2.0 to 2.0) | 0 |
| system_prompt | System message prepended to context | Empty |
| model_parameters | Additional provider-specific JSON params | {} |

Some models (e.g., OpenAI o1/o3) don't support temperature or penalty parameters; the handlers automatically exclude unsupported fields before calling the LLM.
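The exclusion logic can be sketched as a simple parameter filter. The prefix list and field set below are assumptions for illustration; the actual handlers may key this differently.

```python
# Hypothetical filter for models that reject sampling/penalty parameters.
# Assumed: OpenAI reasoning-model names start with "o1" or "o3".
REASONING_PREFIXES = ("o1", "o3")
UNSUPPORTED_FIELDS = {"temperature", "top_p", "frequency_penalty", "presence_penalty"}

def filter_params(model, params):
    """Drop parameters the target model does not accept."""
    if model.startswith(REASONING_PREFIXES):
        return {k: v for k, v in params.items() if k not in UNSUPPORTED_FIELDS}
    return dict(params)
```

Filtering silently, rather than erroring, lets one session configuration work across models with different capabilities.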
