# Introduction
CostCanary tracks per-user LLM costs in real time. Know exactly which users, features, and models are driving your AI bill.
## What is CostCanary?
CostCanary is an SDK + dashboard that gives you per-user, per-feature visibility into your LLM spending. No proxies, no vendor lock-in — just one line of code around your existing LLM calls.
## Why it exists
Most teams fly blind. They see a monthly OpenAI bill but have no idea which users, features, or models are driving it. CostCanary answers three questions:
- Who is using AI? — Per-user cost attribution
- What are they using it for? — Per-feature breakdown
- How much does it cost? — Real-time cost tracking across all providers
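LLM providers generally quote pricing per million tokens, with separate input and output rates, so a per-call cost can be derived directly from a response's token usage. A minimal sketch of that arithmetic (the model name and rates below are illustrative placeholders, not real provider pricing or CostCanary internals):

```typescript
// Illustrative per-million-token rates (hypothetical numbers, not real pricing).
const RATES: Record<string, { input: number; output: number }> = {
  "example-model": { input: 3.0, output: 15.0 },
};

// Cost of one call in dollars: tokens consumed at each rate,
// scaled down from per-million pricing.
function callCost(
  model: string,
  promptTokens: number,
  completionTokens: number
): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`unknown model: ${model}`);
  return (promptTokens * rate.input + completionTokens * rate.output) / 1_000_000;
}
```

Summing `callCost` per user and per feature over a billing window is all the attribution math there is; the hard part in practice is capturing the usage counts consistently across providers.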
## How it works
1. Install the SDK: `npm install @costcanary/sdk`
2. Wrap your LLM calls with `track()`
3. Watch costs appear in your dashboard within seconds
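Conceptually, a `track()` wrapper intercepts the call, records who made it and what it consumed, and passes the result through untouched. The sketch below is not the real `@costcanary/sdk` — the `track` signature, its metadata fields, and the event shape are all hypothetical, shown only to illustrate the wrapping pattern, with an in-memory event list standing in for the dashboard and a stub standing in for a real provider call:

```typescript
interface UsageEvent {
  userId: string;
  feature: string;
  promptTokens: number;
  completionTokens: number;
}

// In the real product these would be batched and shipped to the dashboard;
// here we just collect them locally.
const events: UsageEvent[] = [];

// Hypothetical track(): run the wrapped LLM call, record its token usage
// under the given user/feature, and return the provider's result unchanged.
async function track<
  T extends { usage: { promptTokens: number; completionTokens: number } }
>(meta: { userId: string; feature: string }, call: () => Promise<T>): Promise<T> {
  const result = await call();
  events.push({
    ...meta,
    promptTokens: result.usage.promptTokens,
    completionTokens: result.usage.completionTokens,
  });
  return result;
}

// Stand-in for a real provider call (e.g. a chat completion request).
async function fakeLLMCall() {
  return { text: "hello", usage: { promptTokens: 12, completionTokens: 5 } };
}

const reply = await track({ userId: "u_42", feature: "summarize" }, fakeLLMCall);
```

The design point the wrapper pattern buys you: no proxy sits between you and the provider, so latency, streaming, and error handling stay exactly as they were — attribution metadata rides alongside the call instead of in front of it.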
## Supported providers
OpenAI, Anthropic, Google Gemini, Groq, Mistral, Cohere, Perplexity, Together AI, Fireworks AI, Azure OpenAI, OpenRouter, Replicate, Anyscale, and any HTTP-based LLM API via auto-capture.