# OpenTelemetry MCP Server

[](https://www.python.org/downloads/)
[](https://pypi.org/project/opentelemetry-mcp/)
[](LICENSE)

**Query and analyze LLM traces with AI assistance.** Ask Claude to find expensive API calls, debug errors, compare model performance, or track token usage—all from your IDE.

An MCP (Model Context Protocol) server that connects AI assistants to OpenTelemetry trace backends (Jaeger, Tempo, Traceloop), with specialized support for LLM observability through OpenLLMetry semantic conventions.

**See it in action:**

https://github.com/user-attachments/assets/e2106ef9-0a58-4ba0-8b2b-e114c0b8b4b9

---

## Table of Contents

- [Quick Start](#quick-start)
- [Installation](#installation)
- [Features](#features)
- [Configuration](#configuration)
- [Tools Reference](#tools-reference)
- [Example Queries](#example-queries)
- [Common Workflows](#common-workflows)
- [Development](#development)
- [Troubleshooting](#troubleshooting)
- [Support](#support)

---

## Quick Start

**No installation required!** Configure your client to run the server directly from PyPI. Add this to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "pipx",
      "args": ["run", "opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

Or use `uvx` (alternative):

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "uvx",
      "args": ["opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

**That's it!** Ask Claude: _"Show me traces with errors from the last hour"_

---

## Installation

### For End Users (Recommended)

```bash
# Run without installing (recommended)
pipx run opentelemetry-mcp --backend jaeger --url http://localhost:16686

# Or with uvx
uvx opentelemetry-mcp --backend jaeger --url http://localhost:16686
```

This approach:

- ✅ Always uses the latest version
- ✅ No global installation needed
- ✅ Isolated environment automatically
- ✅ Works on all platforms

### Per Client Integration

<details>
<summary><b>Claude Desktop</b></summary>

Configure the MCP server in your Claude Desktop config file:

- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`

**Using pipx (recommended):**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "pipx",
      "args": ["run", "opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

**Using uvx (alternative):**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "uvx",
      "args": ["opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

**For Traceloop backend:**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "pipx",
      "args": ["run", "opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "traceloop",
        "BACKEND_URL": "https://api.traceloop.com",
        "BACKEND_API_KEY": "your_traceloop_api_key_here"
      }
    }
  }
}
```

<details>
<summary>Using the repository instead of pipx?</summary>

If you're developing locally with the cloned repository, use one of these configurations:

**Option 1: Wrapper script (easy backend switching)**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "/absolute/path/to/opentelemetry-mcp-server/start_locally.sh"
    }
  }
}
```

**Option 2: UV directly (for multiple backends)**

```json
{
  "mcpServers": {
    "opentelemetry-mcp-jaeger": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/opentelemetry-mcp-server",
        "run",
        "opentelemetry-mcp"
      ],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

</details>

</details>

<details>
<summary><b>Claude Code</b></summary>

Claude Code works with MCP servers configured in your Claude Desktop config. Once configured above, you can use the server with the Claude Code CLI:

```bash
# Verify the server is available
claude-code mcp list

# Use Claude Code with access to your OpenTelemetry traces
claude-code "Show me traces with errors from the last hour"
```

</details>

<details>
<summary><b>Codeium (Windsurf)</b></summary>

1. Open Windsurf
2. Navigate to **Settings → MCP Servers**
3. Click **Add New MCP Server**
4. Add this configuration:

**Using pipx (recommended):**

```json
{
  "opentelemetry-mcp": {
    "command": "pipx",
    "args": ["run", "opentelemetry-mcp"],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

**Using uvx (alternative):**

```json
{
  "opentelemetry-mcp": {
    "command": "uvx",
    "args": ["opentelemetry-mcp"],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

<details>
<summary>Using the repository instead?</summary>

```json
{
  "opentelemetry-mcp": {
    "command": "uv",
    "args": [
      "--directory",
      "/absolute/path/to/opentelemetry-mcp-server",
      "run",
      "opentelemetry-mcp"
    ],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

</details>

</details>

<details>
<summary><b>Cursor</b></summary>

1. Open Cursor
2. Navigate to **Settings → MCP**
3. Click **Add new MCP Server**
4. Add this configuration:

**Using pipx (recommended):**

```json
{
  "opentelemetry-mcp": {
    "command": "pipx",
    "args": ["run", "opentelemetry-mcp"],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

**Using uvx (alternative):**

```json
{
  "opentelemetry-mcp": {
    "command": "uvx",
    "args": ["opentelemetry-mcp"],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

<details>
<summary>Using the repository instead of pipx?</summary>

```json
{
  "opentelemetry-mcp": {
    "command": "uv",
    "args": [
      "--directory",
      "/absolute/path/to/opentelemetry-mcp-server",
      "run",
      "opentelemetry-mcp"
    ],
    "env": {
      "BACKEND_TYPE": "jaeger",
      "BACKEND_URL": "http://localhost:16686"
    }
  }
}
```

</details>

</details>

<details>
<summary><b>Gemini CLI</b></summary>

Configure the MCP server in your Gemini CLI config file (`~/.gemini/config.json`):

**Using pipx (recommended):**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "pipx",
      "args": ["run", "opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

**Using uvx (alternative):**

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "uvx",
      "args": ["opentelemetry-mcp"],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

Then use Gemini CLI with your traces:

```bash
gemini "Analyze token usage for gpt-4 requests today"
```

<details>
<summary>Using the repository instead?</summary>

```json
{
  "mcpServers": {
    "opentelemetry-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/opentelemetry-mcp-server",
        "run",
        "opentelemetry-mcp"
      ],
      "env": {
        "BACKEND_TYPE": "jaeger",
        "BACKEND_URL": "http://localhost:16686"
      }
    }
  }
}
```

</details>

</details>

**_Prerequisites:_**

- Python 3.11 or higher
- [pipx](https://pipx.pypa.io/) or [uv](https://github.com/astral-sh/uv) installed

<details>
<summary><b>Optional: Install globally</b></summary>

If you prefer to install the command globally:

```bash
# Install with pipx
pipx install opentelemetry-mcp

# Verify
opentelemetry-mcp --help

# Upgrade
pipx upgrade opentelemetry-mcp
```

Or with pip:

```bash
pip install opentelemetry-mcp
```

</details>

## Features

### Core Capabilities

- **🔌 Multiple Backend Support** - Connect to Jaeger, Grafana Tempo, or Traceloop
- **🤖 LLM-First Design** - Specialized tools for analyzing AI application traces
- **🔍 Advanced Filtering** - Generic filter system with powerful operators
- **📊 Token Analytics** - Track and aggregate LLM token usage across models and services
- **⚡ Fast & Type-Safe** - Built with async Python and Pydantic validation

### Tools

| Tool                       | Description                         | Use Case                           |
| -------------------------- | ----------------------------------- | ---------------------------------- |
| `search_traces`            | Search traces with advanced filters | Find specific requests or patterns |
| `search_spans`             | Search individual spans             | Analyze specific operations        |
| `get_trace`                | Get complete trace details          | Deep-dive into a single trace      |
| `get_llm_usage`            | Aggregate token usage metrics       | Track costs and usage trends       |
| `list_services`            | List available services             | Discover what's instrumented       |
| `find_errors`              | Find traces with errors             | Debug failures quickly             |
| `list_llm_models`          | Discover models in use              | Track model adoption               |
| `get_llm_model_stats`      | Get model performance stats         | Compare model efficiency           |
| `get_llm_expensive_traces` | Find highest token usage            | Optimize costs                     |
| `get_llm_slow_traces`      | Find slowest operations             | Improve performance                |

### Backend Support Matrix

| Feature          | Jaeger | Tempo | Traceloop |
| ---------------- | :----: | :---: | :-------: |
| Search traces    |   ✓    |   ✓   |     ✓     |
| Advanced filters |   ✓    |   ✓   |     ✓     |
| Span search      |  ✓\*   |   ✓   |     ✓     |
| Token tracking   |   ✓    |   ✓   |     ✓     |
| Error traces     |   ✓    |   ✓   |     ✓     |
| LLM tools        |   ✓    |   ✓   |     ✓     |

<sub>\* Jaeger requires `service_name` parameter for span search</sub>

### For Developers

If you're contributing to the project or want to make local modifications:

```bash
# Clone the repository
git clone https://github.com/traceloop/opentelemetry-mcp-server.git
cd opentelemetry-mcp-server

# Install dependencies with UV
uv sync

# Or install in development mode with editable install
uv pip install -e ".[dev]"
```

---

## Configuration

### Supported Backends

| Backend       | Type        | URL Example                 | Notes                      |
| ------------- | ----------- | --------------------------- | -------------------------- |
| **Jaeger**    | Local       | `http://localhost:16686`    | Popular open-source option |
| **Tempo**     | Local/Cloud | `http://localhost:3200`     | Grafana's trace backend    |
| **Traceloop** | Cloud       | `https://api.traceloop.com` | Requires API key           |

### Quick Configuration

**Option 1: Environment Variables** (Create a `.env` file - see [.env.example](.env.example))

```bash
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
```

**Option 2: CLI Arguments** (Override environment)

```bash
opentelemetry-mcp --backend jaeger --url http://localhost:16686
opentelemetry-mcp --backend traceloop --url https://api.traceloop.com --api-key YOUR_KEY
```

> **Configuration Precedence:** CLI arguments > Environment variables > Defaults

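As a plain illustration of that precedence rule (not the server's actual implementation), resolution of any one setting looks like:

```python
def resolve(cli_value, env_value, default):
    """CLI arguments win over environment variables, which win over built-in defaults."""
    if cli_value is not None:
        return cli_value
    if env_value is not None:
        return env_value
    return default

# --backend tempo on the CLI beats BACKEND_TYPE=jaeger in the environment
print(resolve("tempo", "jaeger", "jaeger"))      # tempo
print(resolve(None, "traceloop", "jaeger"))      # traceloop
print(resolve(None, None, "jaeger"))             # jaeger
```
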
<details>
<summary><b>All Configuration Options</b></summary>

| Variable               | Type    | Default  | Description                                        |
| ---------------------- | ------- | -------- | -------------------------------------------------- |
| `BACKEND_TYPE`         | string  | `jaeger` | Backend type: `jaeger`, `tempo`, or `traceloop`    |
| `BACKEND_URL`          | URL     | -        | Backend API endpoint (required)                    |
| `BACKEND_API_KEY`      | string  | -        | API key (required for Traceloop)                   |
| `BACKEND_TIMEOUT`      | integer | `30`     | Request timeout in seconds                         |
| `LOG_LEVEL`            | string  | `INFO`   | Logging level: `DEBUG`, `INFO`, `WARNING`, `ERROR` |
| `MAX_TRACES_PER_QUERY` | integer | `100`    | Maximum traces to return per query (1-1000)        |

**Complete `.env` example:**

```bash
# Backend configuration
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686

# Optional: API key (mainly for Traceloop)
BACKEND_API_KEY=

# Optional: Request timeout (default: 30s)
BACKEND_TIMEOUT=30

# Optional: Logging level
LOG_LEVEL=INFO

# Optional: Max traces per query (default: 100)
MAX_TRACES_PER_QUERY=100
```

</details>

<details>
<summary><b>Backend-Specific Setup</b></summary>

### Jaeger

```bash
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
```

### Grafana Tempo

```bash
BACKEND_TYPE=tempo
BACKEND_URL=http://localhost:3200
```

### Traceloop

```bash
BACKEND_TYPE=traceloop
BACKEND_URL=https://api.traceloop.com
BACKEND_API_KEY=your_api_key_here
```

> **Note:** The API key contains project information. The backend uses a project slug of `"default"` and Traceloop resolves the actual project/environment from the API key.

</details>

---

## Usage

### Quick Start with start_locally.sh (Recommended)

The easiest way to run the server:

```bash
./start_locally.sh
```

This script handles all configuration and starts the server in stdio mode (perfect for Claude Desktop integration). To switch backends, simply edit the script and uncomment your preferred backend.

### Manual Running

For advanced use cases or custom configurations, you can run the server manually.

#### stdio Transport (for Claude Desktop)

Start the MCP server with stdio transport for local/Claude Desktop integration:

```bash
# If installed with pipx/pip
opentelemetry-mcp

# If running from a cloned repository with UV
uv run opentelemetry-mcp

# With backend override (pipx/pip)
opentelemetry-mcp --backend jaeger --url http://localhost:16686

# With backend override (UV)
uv run opentelemetry-mcp --backend jaeger --url http://localhost:16686
```

#### HTTP Transport (for Network Access)

Start the MCP server with HTTP/SSE transport for remote access:

```bash
# If installed with pipx/pip
opentelemetry-mcp --transport http

# If running from a cloned repository with UV
uv run opentelemetry-mcp --transport http

# Specify custom host and port (pipx/pip)
opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000

# With UV
uv run opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
```

The HTTP server is accessible at `http://localhost:8000/sse` by default.

**Transport Use Cases:**

- **stdio transport**: Local use, Claude Desktop integration, single process
- **HTTP transport**: Remote access, multiple clients, network deployment, sample applications

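The tools in the reference below take ISO 8601 timestamps for `start_time`/`end_time`. If you script tool calls rather than asking in natural language, a small stdlib sketch for building a "last hour" window (the helper name is our own, not part of the server):

```python
from datetime import datetime, timedelta, timezone

def last_hour_window():
    """Return (start_time, end_time) as ISO 8601 strings covering the past hour."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)
    # The Z-suffixed form matches the timestamps used in this README's examples
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), end.strftime(fmt)

start_time, end_time = last_hour_window()
args = {"service_name": "my-app", "start_time": start_time,
        "end_time": end_time, "limit": 50}
```

The resulting `args` dict is the kind of payload a `search_traces` call expects.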
## Tools Reference

### 1. search_traces

Search for traces with flexible filtering:

```json
{
  "service_name": "my-app",
  "start_time": "2024-01-01T00:00:00Z",
  "end_time": "2024-01-01T23:59:59Z",
  "gen_ai_system": "openai",
  "gen_ai_model": "gpt-4",
  "min_duration_ms": 1000,
  "has_error": false,
  "limit": 50
}
```

**Parameters:**

- `service_name` - Filter by service
- `operation_name` - Filter by operation
- `start_time` / `end_time` - ISO 8601 timestamps
- `min_duration_ms` / `max_duration_ms` - Duration filters
- `gen_ai_system` - LLM provider (openai, anthropic, etc.)
- `gen_ai_model` - Model name (gpt-4, claude-3-opus, etc.)
- `has_error` - Filter by error status
- `tags` - Custom tag filters
- `limit` - Max results (1-1000, default: 100)

**Returns:** List of trace summaries with token counts

### 2. get_trace

Get complete trace details, including all spans and OpenLLMetry attributes:

```json
{
  "trace_id": "abc123def456"
}
```

**Returns:** Full trace tree with:

- All spans with attributes
- Parsed OpenLLMetry data for LLM spans
- Token usage per span
- Error information

### 3. get_llm_usage

Get aggregated token usage metrics:

```json
{
  "start_time": "2024-01-01T00:00:00Z",
  "end_time": "2024-01-01T23:59:59Z",
  "service_name": "my-app",
  "gen_ai_system": "openai",
  "limit": 1000
}
```

**Returns:** Aggregated metrics with:

- Total prompt/completion/total tokens
- Breakdown by model
- Breakdown by service
- Request counts

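The aggregated response lends itself to quick post-processing. A stdlib sketch, assuming the `summary`/`by_model` shape shown in the Example Queries section (the numbers are copied from that example):

```python
# Example response shape from get_llm_usage (values from this README's example)
usage = {
    "summary": {"total_tokens": 125430, "prompt_tokens": 82140,
                "completion_tokens": 43290, "request_count": 487},
    "by_model": {
        "gpt-4": {"total_tokens": 85200, "request_count": 156},
        "gpt-3.5-turbo": {"total_tokens": 40230, "request_count": 331},
    },
}

# Rank models by average tokens per request, highest first
ranked = sorted(
    ((model, stats["total_tokens"] / stats["request_count"])
     for model, stats in usage["by_model"].items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for model, avg in ranked:
    print(f"{model}: {avg:.0f} tokens/request")
```
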
### 4. list_services

List all available services:

```json
{}
```

**Returns:** List of service names

### 5. find_errors

Find traces with errors:

```json
{
  "start_time": "2024-01-01T00:00:00Z",
  "service_name": "my-app",
  "limit": 50
}
```

**Returns:** Error traces with:

- Error messages and types
- Stack traces (truncated)
- LLM-specific error info
- Error span details

## Example Queries

### Find Expensive OpenAI Operations

**Natural Language:** _"Show me OpenAI traces from the last hour that took longer than 5 seconds"_

**Tool Call:** `search_traces`

```json
{
  "service_name": "my-app",
  "gen_ai_system": "openai",
  "min_duration_ms": 5000,
  "start_time": "2024-01-15T10:00:00Z",
  "limit": 20
}
```

**Response:**

```json
{
  "traces": [
    {
      "trace_id": "abc123...",
      "service_name": "my-app",
      "duration_ms": 8250,
      "total_tokens": 4523,
      "gen_ai_system": "openai",
      "gen_ai_model": "gpt-4"
    }
  ],
  "count": 1
}
```

---

### Analyze Token Usage by Model

**Natural Language:** _"How many tokens did we use for each model today?"_

**Tool Call:** `get_llm_usage`

```json
{
  "start_time": "2024-01-15T00:00:00Z",
  "end_time": "2024-01-15T23:59:59Z",
  "service_name": "my-app"
}
```

**Response:**

```json
{
  "summary": {
    "total_tokens": 125430,
    "prompt_tokens": 82140,
    "completion_tokens": 43290,
    "request_count": 487
  },
  "by_model": {
    "gpt-4": {
      "total_tokens": 85200,
      "request_count": 156
    },
    "gpt-3.5-turbo": {
      "total_tokens": 40230,
      "request_count": 331
    }
  }
}
```

---

### Find Traces with Errors

**Natural Language:** _"Show me all errors from the last hour"_

**Tool Call:** `find_errors`

```json
{
  "start_time": "2024-01-15T14:00:00Z",
  "service_name": "my-app",
  "limit": 10
}
```

**Response:**

```json
{
  "errors": [
    {
      "trace_id": "def456...",
      "service_name": "my-app",
      "error_message": "RateLimitError: Too many requests",
      "error_type": "openai.error.RateLimitError",
      "timestamp": "2024-01-15T14:23:15Z"
    }
  ],
  "count": 1
}
```

---

### Compare Model Performance

**Natural Language:** _"What's the performance difference between GPT-4 and Claude?"_

**Tool Call 1:** `get_llm_model_stats` for gpt-4

```json
{
  "model_name": "gpt-4",
  "start_time": "2024-01-15T00:00:00Z"
}
```

**Tool Call 2:** `get_llm_model_stats` for claude-3-opus

```json
{
  "model_name": "claude-3-opus-20240229",
  "start_time": "2024-01-15T00:00:00Z"
}
```

---

### Investigate High Token Usage

**Natural Language:** _"Which requests used the most tokens today?"_

**Tool Call:** `get_llm_expensive_traces`

```json
{
  "limit": 10,
  "start_time": "2024-01-15T00:00:00Z",
  "min_tokens": 5000
}
```

---

## Common Workflows

### Cost Optimization

1. **Identify expensive operations:**

   ```
   Use get_llm_expensive_traces to find high-token requests
   ```

2. **Analyze by model:**

   ```
   Use get_llm_usage to see which models are costing the most
   ```

3. **Investigate specific traces:**

   ```
   Use get_trace with the trace_id to see exact prompts/responses
   ```

### Performance Debugging

1. **Find slow operations:**

   ```
   Use get_llm_slow_traces to identify latency issues
   ```

2. **Check for errors:**

   ```
   Use find_errors to see failure patterns
   ```

3. **Analyze finish reasons:**

   ```
   Use get_llm_model_stats to see if responses are being truncated
   ```

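Both workflows above reduce to filtering trace summaries once you have them. A stdlib sketch that flags traces which are both slow and token-heavy (field names follow the trace summaries in Example Queries; the thresholds and sample data are illustrative):

```python
# Illustrative trace summaries, shaped like search_traces results
traces = [
    {"trace_id": "abc123", "duration_ms": 8250, "total_tokens": 4523},
    {"trace_id": "def456", "duration_ms": 950, "total_tokens": 6100},
    {"trace_id": "ghi789", "duration_ms": 7200, "total_tokens": 5900},
]

SLOW_MS, HEAVY_TOKENS = 5000, 5000  # illustrative cutoffs

# Traces crossing both thresholds are the best optimization targets
hot = [t["trace_id"] for t in traces
       if t["duration_ms"] >= SLOW_MS and t["total_tokens"] >= HEAVY_TOKENS]
print(hot)  # ['ghi789']
```

Each flagged `trace_id` can then be fed to `get_trace` for a closer look.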
### Model Adoption Tracking

1. **Discover models in use:**

   ```
   Use list_llm_models to see all models being called
   ```

2. **Compare model statistics:**

   ```
   Use get_llm_model_stats for each model to compare performance
   ```

3. **Identify shadow AI:**

   ```
   Look for unexpected models or services in list_llm_models results
   ```

---

## Development

### Running Tests

```bash
# With UV
uv run pytest

# With coverage
uv run pytest --cov=openllmetry_mcp --cov-report=html

# With pip
pytest
```

### Code Quality

```bash
# Format code
uv run ruff format .

# Lint
uv run ruff check .

# Type checking
uv run mypy src/
```

## Troubleshooting

### Backend Connection Issues

```bash
# Test backend connectivity
curl http://localhost:16686/api/services   # Jaeger
curl http://localhost:3200/api/search/tags # Tempo
```

### Authentication Errors

Make sure your API key is set correctly:

```bash
export BACKEND_API_KEY=your_key_here
# Or use the --api-key CLI flag
opentelemetry-mcp --api-key your_key_here
```

### No Traces Found

- Check the time range (use recent timestamps)
- Verify service names with `list_services`
- Check that the backend has traces: `curl http://localhost:16686/api/services`
- Try searching without filters first

### Token Usage Shows Zero

- Ensure your traces have OpenLLMetry instrumentation
- Check that `gen_ai.usage.*` attributes exist in spans
- Verify with `get_trace` to see raw span attributes

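That last check can be scripted. A sketch of scanning span attributes for `gen_ai.usage.*` keys (the flat attribute-dict shape is an assumption for illustration; the attribute names follow the OpenLLMetry conventions used throughout this README):

```python
def has_token_usage(span_attributes: dict) -> bool:
    """True if a span carries any gen_ai.usage.* attribute with a value."""
    return any(
        key.startswith("gen_ai.usage.") and value is not None
        for key, value in span_attributes.items()
    )

# A properly instrumented LLM span vs. a plain HTTP span
instrumented = {"gen_ai.system": "openai",
                "gen_ai.usage.prompt_tokens": 820,
                "gen_ai.usage.completion_tokens": 210}
bare = {"http.method": "POST"}

print(has_token_usage(instrumented), has_token_usage(bare))  # True False
```

If every span looks like `bare`, the instrumentation (not this server) is what needs fixing.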
## Future Enhancements

- [ ] Cost calculation with built-in pricing tables
- [ ] Model performance comparison tools
- [ ] Prompt pattern analysis
- [ ] MCP resources for common queries
- [ ] Caching layer for frequent queries
- [ ] Support for additional backends (SigNoz, ClickHouse)

## Contributing

Contributions are welcome! Please ensure:

1. All tests pass: `pytest`
2. Code is formatted: `ruff format .`
3. No linting errors: `ruff check .`
4. Type checking passes: `mypy src/`

## License

Apache 2.0 License - see the LICENSE file for details.

## Related Projects

- [OpenLLMetry](https://github.com/traceloop/openllmetry) - OpenTelemetry instrumentation for LLMs
- [Model Context Protocol](https://modelcontextprotocol.io/) - MCP specification
- [Claude Desktop](https://claude.ai/download) - AI assistant with MCP support

## Support

For issues and questions:

- GitHub Issues: https://github.com/traceloop/opentelemetry-mcp-server/issues
- PyPI Package: https://pypi.org/project/opentelemetry-mcp/
- Traceloop Community: https://traceloop.com/slack