# LLM Token Tracker 🧮

Token usage tracker for OpenAI, Claude, and Gemini APIs with **MCP (Model Context Protocol) support**. Pass accurate API costs to your users.

[npm](https://www.npmjs.com/package/llm-token-tracker) · [MIT License](https://opensource.org/licenses/MIT)

<a href="https://glama.ai/mcp/servers/@wn01011/llm-token-tracker">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/@wn01011/llm-token-tracker/badge" alt="llm-token-tracker MCP server" />
</a>

## ✨ Features

- 🎯 **Simple Integration** - One line to wrap your API client
- 📊 **Automatic Tracking** - No manual token counting
- 💰 **Accurate Pricing** - Up-to-date pricing for all models (2025)
- 🔄 **Multiple Providers** - OpenAI, Claude, and Gemini support
- 👥 **User Management** - Track usage per user/session
- 💱 **Currency Support** - USD and KRW
- 🤖 **MCP Server** - Use directly in Claude Desktop!
- 📈 **Intuitive Session Tracking** - Real-time usage with progress bars

## 📦 Installation

```bash
npm install llm-token-tracker
```

## 🚀 Quick Start

### Option 1: Use as Library

```javascript
const { TokenTracker } = require('llm-token-tracker');
// or: import { TokenTracker } from 'llm-token-tracker';

// Initialize the tracker
const tracker = new TokenTracker({
  currency: 'USD' // or 'KRW'
});

// Example: manual tracking
const trackingId = tracker.startTracking('user-123');
// ... your API call here ...

tracker.endTracking(trackingId, {
  provider: 'openai', // or 'anthropic' or 'gemini'
  model: 'gpt-3.5-turbo',
  inputTokens: 100,
  outputTokens: 50,
  totalTokens: 150
});

// Get the user's usage
const usage = tracker.getUserUsage('user-123');
console.log(`Total cost: $${usage.totalCost}`);
```

## 🔧 With Real APIs

To use with actual OpenAI/Anthropic APIs:

```javascript
const OpenAI = require('openai');
const { TokenTracker } = require('llm-token-tracker');

const tracker = new TokenTracker();
const openai = tracker.wrap(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
}));

// Use the client normally - tracking happens automatically
const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }]
});

console.log(response._tokenUsage);
// { tokens: 125, cost: 0.0002, model: "gpt-3.5-turbo" }
```

### Option 2: Use as MCP Server

Add to your Claude Desktop settings (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "token-tracker": {
      "command": "npx",
      "args": ["llm-token-tracker"]
    }
  }
}
```

Then in Claude:
- **"Calculate current session usage"** - See current session usage in an intuitive format
- **"Calculate current conversation cost"** - Get a cost breakdown with input/output tokens
- "Track my API usage"
- "Compare costs between GPT-4 and Claude"
- "Show my total spending today"

#### Available MCP Tools

1. **`get_current_session`** - 📈 Get current session usage (RECOMMENDED)
   - Returns: used/remaining tokens, input/output breakdown, cost, progress bar
   - Default user_id: `current-session`
   - Default budget: 190,000 tokens
   - **Perfect for real-time conversation tracking!**

2. **`track_usage`** - Track token usage for an AI API call
   - Parameters: provider, model, input_tokens, output_tokens, user_id
3. **`get_usage`** - Get a usage summary for a specific user or all users

4. **`compare_costs`** - Compare costs between different models

5. **`clear_usage`** - Clear usage data for a user

#### Example MCP Output

```
💰 Current Session
──────────────────────
📊 Used: 62,830 tokens (33.1%)
✨ Remaining: 127,170 tokens
[███████░░░░░░░░░░░░░]

📥 Input: 55,000 tokens
📤 Output: 7,830 tokens
💵 Cost: $0.2825
──────────────────────

📊 Model Breakdown:
  • anthropic/claude-sonnet-4.5: 62,830 tokens ($0.2825)
```

## 📊 Supported Models & Pricing (Updated 2025)

### OpenAI (2025)
| Model | Input (per 1K tokens) | Output (per 1K tokens) | Notes |
|-------|----------------------|------------------------|-------|
| **GPT-5 Series** | | | |
| GPT-5 | $0.00125 | $0.010 | Latest flagship model |
| GPT-5 Mini | $0.00025 | $0.0010 | Compact version |
| **GPT-4.1 Series** | | | |
| GPT-4.1 | $0.0020 | $0.008 | Advanced reasoning |
| GPT-4.1 Mini | $0.00015 | $0.0006 | Cost-effective |
| **GPT-4o Series** | | | |
| GPT-4o | $0.0025 | $0.010 | Multimodal |
| GPT-4o Mini | $0.00015 | $0.0006 | Fast & cheap |
| **o1 Reasoning Series** | | | |
| o1 | $0.015 | $0.060 | Advanced reasoning |
| o1 Mini | $0.0011 | $0.0044 | Efficient reasoning |
| o1 Pro | $0.015 | $0.060 | Pro reasoning |
| **Legacy Models** | | | |
| GPT-4 Turbo | $0.01 | $0.03 | |
| GPT-4 | $0.03 | $0.06 | |
| GPT-3.5 Turbo | $0.0005 | $0.0015 | Most affordable |
| **Media Models** | | | |
| DALL-E 3 | $0.040 per image | - | Image generation |
| Whisper | $0.006 per minute | - | Speech-to-text |

### Anthropic (2025)
| Model | Input (per 1K tokens) | Output (per 1K tokens) | Notes |
|-------|----------------------|------------------------|-------|
| **Claude 4 Series** | | | |
| Claude Opus 4.1 | $0.015 | $0.075 | Most powerful |
| Claude Opus 4 | $0.015 | $0.075 | Flagship model |
| Claude Sonnet 4.5 | $0.003 | $0.015 | Best for coding |
| Claude Sonnet 4 | $0.003 | $0.015 | Balanced |
| **Claude 3 Series** | | | |
| Claude 3.5 Sonnet | $0.003 | $0.015 | |
| Claude 3.5 Haiku | $0.00025 | $0.00125 | Fastest |
| Claude 3 Opus | $0.015 | $0.075 | |
| Claude 3 Sonnet | $0.003 | $0.015 | |
| Claude 3 Haiku | $0.00025 | $0.00125 | Most affordable |

### Google Gemini (2025)
| Model | Input (per 1K tokens) | Output (per 1K tokens) | Notes |
|-------|----------------------|------------------------|-------|
| **Gemini 2.0 Series** | | | |
| Gemini 2.0 Flash (Exp) | Free | Free | Experimental preview |
| Gemini 2.0 Flash Thinking | Free | Free | Reasoning preview |
| **Gemini 1.5 Series** | | | |
| Gemini 1.5 Pro | $0.00125 | $0.005 | Most capable |
| Gemini 1.5 Flash | $0.000075 | $0.0003 | Fast & efficient |
| Gemini 1.5 Flash-8B | $0.0000375 | $0.00015 | Ultra-fast |
| **Gemini 1.0 Series** | | | |
| Gemini 1.0 Pro | $0.0005 | $0.0015 | Legacy model |
| Gemini 1.0 Pro Vision | $0.00025 | $0.0005 | Multimodal |
| Gemini Ultra | $0.002 | $0.006 | Premium tier |

**Note:** Prices shown are per 1,000 tokens. The Batch API offers a 50% discount, and prompt caching can reduce costs by up to 90%.

## 🎯 Examples

Run the example:
```bash
npm run example
```

See `examples/basic-usage.js` for detailed usage patterns.

## 📖 API Reference

### `new TokenTracker(config)`
- `config.currency`: 'USD' or 'KRW' (default: 'USD')
- `config.webhookUrl`: Optional webhook for usage notifications

### `tracker.wrap(client)`
Wrap an OpenAI or Anthropic client for automatic tracking.

### `tracker.forUser(userId)`
Create a user-specific tracker instance.

### `tracker.startTracking(userId?, sessionId?)`
Start a manual tracking session and return a tracking ID.
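To make the manual-tracking entries concrete, here is a self-contained sketch of the bookkeeping that `startTracking`/`endTracking` imply. This is plain JavaScript mimicking the shapes shown in Quick Start — the tracking-ID format and the inlined GPT-3.5 Turbo rates are illustrative assumptions, not the library's internals:

```javascript
// Stand-in for the manual tracking flow: open a session, record a
// usage object of the shape endTracking() accepts, aggregate per user.
const sessions = new Map();
const totals = new Map();

function startTracking(userId) {
  // Hypothetical ID scheme; the real library's format may differ.
  const id = `${userId}-${sessions.size}`;
  sessions.set(id, { userId });
  return id;
}

function endTracking(trackingId, usage) {
  const { userId } = sessions.get(trackingId);
  // GPT-3.5 Turbo rates from the table above: $0.0005 / $0.0015 per 1K.
  const cost =
    (usage.inputTokens / 1000) * 0.0005 +
    (usage.outputTokens / 1000) * 0.0015;
  const prev = totals.get(userId) ?? { totalTokens: 0, totalCost: 0 };
  totals.set(userId, {
    totalTokens: prev.totalTokens + usage.totalTokens,
    totalCost: prev.totalCost + cost,
  });
  sessions.delete(trackingId);
}

const id = startTracking('user-123');
endTracking(id, {
  provider: 'openai',
  model: 'gpt-3.5-turbo',
  inputTokens: 100,
  outputTokens: 50,
  totalTokens: 150,
});
console.log(totals.get('user-123')); // 150 tokens, ≈ $0.000125
```

The real tracker layers persistence and multi-provider pricing on top of this idea; the sketch only shows why both a tracking ID and a user ID appear in the API.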
### `tracker.endTracking(trackingId, usage)`
End tracking and record usage.

### `tracker.getUserUsage(userId)`
Get total usage for a user.

### `tracker.getAllUsersUsage()`
Get a usage summary for all users.

## 🛠 Development

```bash
# Install dependencies
npm install

# Build TypeScript
npm run build

# Watch mode
npm run dev

# Run examples
npm run example
```

## 📄 License

MIT

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a pull request.

## 🐛 Issues

For bugs and feature requests, please [create an issue](https://github.com/wn01011/llm-token-tracker/issues).

## 🆕 What's New in v2.4.0

- 🌟 **Gemini API Support** - Full integration with Google's Gemini models
- 🆓 **Gemini 2.0 Support** - Free preview models included
- 📊 **Enhanced Pricing** - Up-to-date Gemini 1.5 and 2.0 pricing
- 🔧 **Auto-detection** - Automatic Gemini client wrapping
- 💰 **Cost Comparison** - Compare Gemini with OpenAI and Claude

## 🆕 What's New in v2.3.0

- 💱 **Real-time exchange rates** - Automatic USD-to-KRW conversion
- 🌐 Uses exchangerate-api.com for accurate rates
- 💾 24-hour caching to minimize API calls
- 🔧 New `get_exchange_rate` tool to check current rates
- 🔄 Background auto-updates with fallback support

## What's New in v2.2.0

- 🗄️ **File-based persistence** - Session data survives server restarts
- 💾 Automatic saving to `~/.llm-token-tracker/sessions.json`
- 🌐 Works for both npm and local installations
- 📊 Historical data tracking across sessions
- 🎯 Zero configuration required - it just works!

## What's New in v2.1.0

- 📈 Added `get_current_session` tool for intuitive session tracking
- 📊 Real-time progress bars and visual indicators
- 💰 Enhanced cost breakdown with input/output token separation
- 🎨 Improved formatting with thousands separators
- 🔧 Better default `user_id` handling (`current-session`)
---

Built with ❤️ for developers who need transparent AI API billing.