Cost Tracking
Track AI usage costs per user and workspace.
HyperSaaS tracks every LLM API call and calculates costs based on per-model token pricing.
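The per-call arithmetic is tokens times a per-model rate. A minimal sketch, using per-1K-token rates in the shape of the PRICING_CONFIG shown later; the function name compute_call_cost is illustrative, not necessarily a helper in the codebase:

```python
from decimal import Decimal

# Per-1K-token rates, mirroring the shape of PRICING_CONFIG.
GPT_4O_RATES = {
    "input_cost_per_1k_tokens": Decimal("0.0025"),
    "output_cost_per_1k_tokens": Decimal("0.01"),
}

def compute_call_cost(input_tokens: int, output_tokens: int, rates: dict) -> Decimal:
    """Cost of one LLM call: tokens / 1000 * per-1K rate, summed for both directions."""
    input_cost = Decimal(input_tokens) / 1000 * rates["input_cost_per_1k_tokens"]
    output_cost = Decimal(output_tokens) / 1000 * rates["output_cost_per_1k_tokens"]
    return input_cost + output_cost

# 1,000 input tokens and 500 output tokens on gpt-4o:
# 1000/1000 * 0.0025 + 500/1000 * 0.01 = 0.0025 + 0.005 = 0.0075
```

Decimal (rather than float) keeps the stored per-token rates and totals exact, which is why the models below use DecimalField throughout.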
Models
AIUsage
Logs every individual LLM interaction:
```python
class AIUsage(BaseModel):
    user = models.ForeignKey(User, null=True, blank=True, on_delete=models.SET_NULL)
    workspace = models.ForeignKey(Workspace, on_delete=models.CASCADE)
    chat_session = models.ForeignKey(ChatSession, null=True, blank=True, on_delete=models.SET_NULL)
    model_used = models.CharField(max_length=100)
    input_tokens = models.PositiveIntegerField()
    output_tokens = models.PositiveIntegerField()
    input_cost_per_token = models.DecimalField(max_digits=15, decimal_places=10)
    output_cost_per_token = models.DecimalField(max_digits=15, decimal_places=10)
    total_cost = models.DecimalField(max_digits=15, decimal_places=8)  # Auto-calculated
```

UserCost
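Conceptually, UserCost is a rollup of a user's AIUsage rows across all workspaces. A sketch of that aggregation in plain Python (in the real codebase this would be an ORM query such as a Sum over AIUsage.total_cost; the helper name rollup_total_cost is hypothetical):

```python
from decimal import Decimal

# Illustrative stand-in for AIUsage rows (normally a Django queryset).
usage_rows = [
    {"user_id": 1, "total_cost": Decimal("0.0075")},
    {"user_id": 1, "total_cost": Decimal("0.0125")},
    {"user_id": 2, "total_cost": Decimal("0.0030")},
]

def rollup_total_cost(rows, user_id):
    """Sum total_cost across one user's usage rows, regardless of workspace."""
    return sum((r["total_cost"] for r in rows if r["user_id"] == user_id), Decimal("0"))

# User 1: 0.0075 + 0.0125 = 0.0200
```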
Aggregated cost per user across all workspaces:
```python
class UserCost(BaseModel):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    total_cost = models.DecimalField(max_digits=14, decimal_places=6)
    overage_cost = models.DecimalField(max_digits=12, decimal_places=2)
    last_reset_at = models.DateTimeField(null=True)
```

Pricing Configuration
Token pricing is defined per model in chat/pricing_config.py:
```python
from decimal import Decimal

PRICING_CONFIG = {
    "gpt-4o": {
        "input_cost_per_1k_tokens": Decimal("0.0025"),
        "output_cost_per_1k_tokens": Decimal("0.01"),
    },
    "claude-sonnet-4-5-20250514": {
        "input_cost_per_1k_tokens": Decimal("0.003"),
        "output_cost_per_1k_tokens": Decimal("0.015"),
    },
    # ... all supported models
}
```

Credit Limits
Subscription plans map to credit limits via PLAN_CREDIT_MAPPING:
```python
from decimal import Decimal

PLAN_CREDIT_MAPPING = {
    "price_xxx_monthly": Decimal("9.99"),
    "price_xxx_yearly": Decimal("99.90"),
    # ...
}
```

Credit Usage API
GET /api/users/credit-usage/

Returns:

```json
{
  "total_credit_limit": "39.99",
  "total_cost": "12.45",
  "percentage_used": 31.1
}
```

Before creating a chat session, the system checks has_sufficient_buffer(user, workspace) to ensure the user hasn't exceeded their credit limit.
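A hedged sketch of the two calculations this endpoint and gate imply, working from the fields above. The real has_sufficient_buffer takes user and workspace objects and resolves these values from UserCost and PLAN_CREDIT_MAPPING; the value-level signatures and the default buffer margin here are assumptions for illustration:

```python
from decimal import Decimal

def percentage_used(total_cost: Decimal, credit_limit: Decimal) -> float:
    """percentage_used as in the credit-usage response, rounded to one decimal place."""
    if credit_limit == 0:
        return 0.0
    return float(round(total_cost / credit_limit * 100, 1))

def has_sufficient_buffer(total_cost: Decimal, credit_limit: Decimal,
                          buffer: Decimal = Decimal("0")) -> bool:
    """Gate run before creating a chat session: the user must still have more
    than `buffer` of unspent credit remaining."""
    return credit_limit - total_cost > buffer

# With the example response values: 12.45 / 39.99 * 100 ≈ 31.1
```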