Comprehensive MCP server for OpenTelemetry trace analysis with excellent LLM-specific tooling and multi-backend support.
Query and analyze LLM traces with AI assistance. Ask Claude to find expensive API calls, debug errors, compare model performance, or track token usage—all from your IDE.
An MCP (Model Context Protocol) server that connects AI assistants to OpenTelemetry trace backends (Jaeger, Tempo, Traceloop), with specialized support for LLM observability through OpenLLMetry semantic conventions.
See it in action:
https://github.com/user-attachments/assets/e2106ef9-0a58-4ba0-8b2b-e114c0b8b4b9
No installation required! Configure your client to run the server directly from PyPI:
// Add to claude_desktop_config.json:
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Or use uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
That's it! Ask Claude: "Show me traces with errors from the last hour"
# Run without installing (recommended)
pipx run opentelemetry-mcp --backend jaeger --url http://localhost:16686
# Or with uvx
uvx opentelemetry-mcp --backend jaeger --url http://localhost:16686
This approach requires no permanent installation; pipx (or uvx) fetches and runs the published package on demand.
Claude Desktop
Configure the MCP server in your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Using pipx (recommended):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Using uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
For Traceloop backend:
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "traceloop",
"BACKEND_URL": "https://api.traceloop.com",
"BACKEND_API_KEY": "your_traceloop_api_key_here"
}
}
}
}
Using the repository instead of pipx?
If you're developing locally with the cloned repository, use one of these configurations:
Option 1: Wrapper script (easy backend switching)
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "/absolute/path/to/opentelemetry-mcp-server/start_locally.sh"
}
}
}
Option 2: UV directly (for multiple backends)
{
"mcpServers": {
"opentelemetry-mcp-jaeger": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Claude Code
Claude Code picks up MCP servers configured in your Claude Desktop config. Once configured as above, you can use the server from the Claude Code CLI:
# Verify the server is available
claude-code mcp list
# Use Claude Code with access to your OpenTelemetry traces
claude-code "Show me traces with errors from the last hour"
Codeium (Windsurf)
Using pipx (recommended):
{
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using uvx (alternative):
{
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using the repository instead?
{
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Cursor
Using pipx (recommended):
{
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using uvx (alternative):
{
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Using the repository instead of pipx?
{
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
Gemini CLI
Configure the MCP server in your Gemini CLI config file (~/.gemini/config.json):
Using pipx (recommended):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "pipx",
"args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Using uvx (alternative):
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uvx",
"args": ["opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Then use Gemini CLI with your traces:
gemini "Analyze token usage for gpt-4 requests today"
Using the repository instead?
{
"mcpServers": {
"opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
"/absolute/path/to/opentelemetry-mcp-server",
"run",
"opentelemetry-mcp"
],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
}
}
}
}
Prerequisites:
Optional: Install globally
If you prefer to install the command globally:
# Install with pipx
pipx install opentelemetry-mcp
# Verify
opentelemetry-mcp --help
# Upgrade
pipx upgrade opentelemetry-mcp
Or with pip:
pip install opentelemetry-mcp
| Tool | Description | Use Case |
|---|---|---|
| search_traces | Search traces with advanced filters | Find specific requests or patterns |
| search_spans | Search individual spans | Analyze specific operations |
| get_trace | Get complete trace details | Deep-dive into a single trace |
| get_llm_usage | Aggregate token usage metrics | Track costs and usage trends |
| list_services | List available services | Discover what's instrumented |
| find_errors | Find traces with errors | Debug failures quickly |
| list_llm_models | Discover models in use | Track model adoption |
| get_llm_model_stats | Get model performance stats | Compare model efficiency |
| get_llm_expensive_traces | Find highest token usage | Optimize costs |
| get_llm_slow_traces | Find slowest operations | Improve performance |
| Feature | Jaeger | Tempo | Traceloop |
|---|---|---|---|
| Search traces | ✓ | ✓ | ✓ |
| Advanced filters | ✓ | ✓ | ✓ |
| Span search | ✓* | ✓ | ✓ |
| Token tracking | ✓ | ✓ | ✓ |
| Error traces | ✓ | ✓ | ✓ |
| LLM tools | ✓ | ✓ | ✓ |
* Jaeger requires service_name parameter for span search
If you're contributing to the project or want to make local modifications:
# Clone the repository
git clone https://github.com/traceloop/opentelemetry-mcp-server.git
cd opentelemetry-mcp-server
# Install dependencies with UV
uv sync
# Or install in development mode with editable install
uv pip install -e ".[dev]"
| Backend | Type | URL Example | Notes |
|---|---|---|---|
| Jaeger | Local | http://localhost:16686 | Popular open-source option |
| Tempo | Local/Cloud | http://localhost:3200 | Grafana's trace backend |
| Traceloop | Cloud | https://api.traceloop.com | Requires API key |
Option 1: Environment Variables (Create .env file - see .env.example)
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
Option 2: CLI Arguments (Override environment)
opentelemetry-mcp --backend jaeger --url http://localhost:16686
opentelemetry-mcp --backend traceloop --url https://api.traceloop.com --api-key YOUR_KEY
Configuration Precedence: CLI arguments > Environment variables > Defaults
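To make the precedence rule concrete, here is a small illustrative sketch of how the three sources could be merged (a hypothetical helper, not the server's actual config-loading code):

```python
# Sketch of "CLI arguments > Environment variables > Defaults" (illustrative only;
# the real server's configuration loading may differ).
DEFAULTS = {"BACKEND_TYPE": "jaeger", "BACKEND_TIMEOUT": "30", "LOG_LEVEL": "INFO"}

def resolve_config(cli_args: dict, env: dict) -> dict:
    """Later sources win: defaults < environment < CLI arguments."""
    config = dict(DEFAULTS)
    config.update({k: v for k, v in env.items() if v is not None})
    config.update({k: v for k, v in cli_args.items() if v is not None})
    return config

cfg = resolve_config(
    cli_args={"BACKEND_URL": "http://localhost:16686", "BACKEND_TYPE": None},
    env={"BACKEND_TYPE": "tempo", "LOG_LEVEL": "DEBUG"},
)
# BACKEND_TYPE comes from the environment, BACKEND_URL from the CLI,
# and unset values fall back to the defaults.
```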
All Configuration Options
| Variable | Type | Default | Description |
|---|---|---|---|
| BACKEND_TYPE | string | jaeger | Backend type: jaeger, tempo, or traceloop |
| BACKEND_URL | URL | - | Backend API endpoint (required) |
| BACKEND_API_KEY | string | - | API key (required for Traceloop) |
| BACKEND_TIMEOUT | integer | 30 | Request timeout in seconds |
| LOG_LEVEL | string | INFO | Logging level: DEBUG, INFO, WARNING, ERROR |
| MAX_TRACES_PER_QUERY | integer | 100 | Maximum traces to return per query (1-1000) |
Complete .env example:
# Backend configuration
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
# Optional: API key (mainly for Traceloop)
BACKEND_API_KEY=
# Optional: Request timeout (default: 30s)
BACKEND_TIMEOUT=30
# Optional: Logging level
LOG_LEVEL=INFO
# Optional: Max traces per query (default: 100)
MAX_TRACES_PER_QUERY=100
Backend-Specific Setup
BACKEND_TYPE=jaeger
BACKEND_URL=http://localhost:16686
BACKEND_TYPE=tempo
BACKEND_URL=http://localhost:3200
BACKEND_TYPE=traceloop
BACKEND_URL=https://api.traceloop.com
BACKEND_API_KEY=your_api_key_here
Note: The API key contains project information. The backend uses a project slug of "default" and Traceloop resolves the actual project/environment from the API key.
The easiest way to run the server:
./start_locally.sh
This script handles all configuration and starts the server in stdio mode (perfect for Claude Desktop integration). To switch backends, simply edit the script and uncomment your preferred backend.
For advanced use cases or custom configurations, you can run the server manually.
Start the MCP server with stdio transport for local/Claude Desktop integration:
# If installed with pipx/pip
opentelemetry-mcp
# If running from cloned repository with UV
uv run opentelemetry-mcp
# With backend override (pipx/pip)
opentelemetry-mcp --backend jaeger --url http://localhost:16686
# With backend override (UV)
uv run opentelemetry-mcp --backend jaeger --url http://localhost:16686
Start the MCP server with HTTP/SSE transport for remote access:
# If installed with pipx/pip
opentelemetry-mcp --transport http
# If running from cloned repository with UV
uv run opentelemetry-mcp --transport http
# Specify custom host and port (pipx/pip)
opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
# With UV
uv run opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
The HTTP server will be accessible at http://localhost:8000/sse by default.
Transport Use Cases:
Search for traces with flexible filtering:
{
"service_name": "my-app",
"start_time": "2024-01-01T00:00:00Z",
"end_time": "2024-01-01T23:59:59Z",
"gen_ai_system": "openai",
"gen_ai_model": "gpt-4",
"min_duration_ms": 1000,
"has_error": false,
"limit": 50
}
Parameters:
- service_name - Filter by service
- operation_name - Filter by operation
- start_time / end_time - ISO 8601 timestamps
- min_duration_ms / max_duration_ms - Duration filters
- gen_ai_system - LLM provider (openai, anthropic, etc.)
- gen_ai_model - Model name (gpt-4, claude-3-opus, etc.)
- has_error - Filter by error status
- tags - Custom tag filters
- limit - Max results (1-1000, default: 100)

Returns: List of trace summaries with token counts
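To show how these filters combine, here is a hedged client-side sketch that applies the same criteria to a list of trace summaries (the field names mirror the parameters above; the helper itself is hypothetical, not part of the server):

```python
def matches(trace: dict, gen_ai_system=None, min_duration_ms=None, has_error=None) -> bool:
    """Apply search_traces-style filters to one trace summary (illustrative)."""
    if gen_ai_system and trace.get("gen_ai_system") != gen_ai_system:
        return False
    if min_duration_ms is not None and trace.get("duration_ms", 0) < min_duration_ms:
        return False
    if has_error is not None and trace.get("has_error", False) != has_error:
        return False
    return True

traces = [
    {"trace_id": "a", "gen_ai_system": "openai", "duration_ms": 1500, "has_error": False},
    {"trace_id": "b", "gen_ai_system": "anthropic", "duration_ms": 400, "has_error": False},
]
# Only trace "a" satisfies all three filters.
hits = [t for t in traces if matches(t, gen_ai_system="openai", min_duration_ms=1000, has_error=False)]
```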
Get complete trace details including all spans and OpenLLMetry attributes:
{
"trace_id": "abc123def456"
}
Returns: Full trace tree with:
Get aggregated token usage metrics:
{
"start_time": "2024-01-01T00:00:00Z",
"end_time": "2024-01-01T23:59:59Z",
"service_name": "my-app",
"gen_ai_system": "openai",
"limit": 1000
}
Returns: Aggregated metrics with:
List all available services:
{}
Returns: List of service names
Find traces with errors:
{
"start_time": "2024-01-01T00:00:00Z",
"service_name": "my-app",
"limit": 50
}
Returns: Error traces with:
Natural Language: "Show me OpenAI traces from the last hour that took longer than 5 seconds"
Tool Call: search_traces
{
"service_name": "my-app",
"gen_ai_system": "openai",
"min_duration_ms": 5000,
"start_time": "2024-01-15T10:00:00Z",
"limit": 20
}
Response:
{
"traces": [
{
"trace_id": "abc123...",
"service_name": "my-app",
"duration_ms": 8250,
"total_tokens": 4523,
"gen_ai_system": "openai",
"gen_ai_model": "gpt-4"
}
],
"count": 1
}
Natural Language: "How many tokens did we use for each model today?"
Tool Call: get_llm_usage
{
"start_time": "2024-01-15T00:00:00Z",
"end_time": "2024-01-15T23:59:59Z",
"service_name": "my-app"
}
Response:
{
"summary": {
"total_tokens": 125430,
"prompt_tokens": 82140,
"completion_tokens": 43290,
"request_count": 487
},
"by_model": {
"gpt-4": {
"total_tokens": 85200,
"request_count": 156
},
"gpt-3.5-turbo": {
"total_tokens": 40230,
"request_count": 331
}
}
}
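The by-model rollup shown above can be reproduced from raw span attributes with a small aggregation. This sketch assumes OpenLLMetry-style `gen_ai.*` keys on each span; the exact attribute names and the server's internal logic may differ:

```python
from collections import defaultdict

def aggregate_usage(spans):
    """Sum token counts per model, mirroring get_llm_usage's by_model shape (illustrative)."""
    by_model = defaultdict(lambda: {"total_tokens": 0, "request_count": 0})
    for span in spans:
        model = span["gen_ai.request.model"]
        tokens = (span.get("gen_ai.usage.prompt_tokens", 0)
                  + span.get("gen_ai.usage.completion_tokens", 0))
        by_model[model]["total_tokens"] += tokens
        by_model[model]["request_count"] += 1
    return dict(by_model)

usage = aggregate_usage([
    {"gen_ai.request.model": "gpt-4", "gen_ai.usage.prompt_tokens": 100, "gen_ai.usage.completion_tokens": 50},
    {"gen_ai.request.model": "gpt-4", "gen_ai.usage.prompt_tokens": 80, "gen_ai.usage.completion_tokens": 20},
    {"gen_ai.request.model": "gpt-3.5-turbo", "gen_ai.usage.prompt_tokens": 30, "gen_ai.usage.completion_tokens": 10},
])
```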
Natural Language: "Show me all errors from the last hour"
Tool Call: find_errors
{
"start_time": "2024-01-15T14:00:00Z",
"service_name": "my-app",
"limit": 10
}
Response:
{
"errors": [
{
"trace_id": "def456...",
"service_name": "my-app",
"error_message": "RateLimitError: Too many requests",
"error_type": "openai.error.RateLimitError",
"timestamp": "2024-01-15T14:23:15Z"
}
],
"count": 1
}
Natural Language: "What's the performance difference between GPT-4 and Claude?"
Tool Call 1: get_llm_model_stats for gpt-4
{
"model_name": "gpt-4",
"start_time": "2024-01-15T00:00:00Z"
}
Tool Call 2: get_llm_model_stats for claude-3-opus
{
"model_name": "claude-3-opus-20240229",
"start_time": "2024-01-15T00:00:00Z"
}
Natural Language: "Which requests used the most tokens today?"
Tool Call: get_llm_expensive_traces
{
"limit": 10,
"start_time": "2024-01-15T00:00:00Z",
"min_tokens": 5000
}
Identify expensive operations:
Use get_llm_expensive_traces to find high-token requests
Analyze by model:
Use get_llm_usage to see which models are costing the most
Investigate specific traces:
Use get_trace with the trace_id to see exact prompts/responses
Find slow operations:
Use get_llm_slow_traces to identify latency issues
Check for errors:
Use find_errors to see failure patterns
Analyze finish reasons:
Use get_llm_model_stats to see if responses are being truncated
Discover models in use:
Use list_llm_models to see all models being called
Compare model statistics:
Use get_llm_model_stats for each model to compare performance
Identify shadow AI:
Look for unexpected models or services in list_llm_models results
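A shadow-AI check like this boils down to a set difference between the models discovered in traces and an approved list. A minimal sketch (the allowlist and model names are hypothetical examples):

```python
def find_shadow_models(discovered: set, approved: set) -> set:
    """Models seen in traces but absent from the approved list (illustrative check)."""
    return discovered - approved

approved = {"gpt-4", "gpt-3.5-turbo"}
# e.g. the output of list_llm_models
discovered = {"gpt-4", "gpt-3.5-turbo", "claude-3-opus-20240229"}
shadow = find_shadow_models(discovered, approved)
```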
# With UV
uv run pytest
# With coverage
uv run pytest --cov=openllmetry_mcp --cov-report=html
# With pip
pytest
# Format code
uv run ruff format .
# Lint
uv run ruff check .
# Type checking
uv run mypy src/
# Test backend connectivity
curl http://localhost:16686/api/services # Jaeger
curl http://localhost:3200/api/search/tags # Tempo
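If you script these connectivity checks, the health-check URL only depends on the backend type and base URL. A small helper, using the same paths as the curl commands above (the helper itself is hypothetical):

```python
# Connectivity-check paths, matching the curl examples above.
HEALTH_PATHS = {
    "jaeger": "/api/services",
    "tempo": "/api/search/tags",
}

def health_url(backend: str, base_url: str) -> str:
    """Build the URL to probe for backend connectivity (illustrative helper)."""
    return base_url.rstrip("/") + HEALTH_PATHS[backend]
```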
Make sure your API key is set correctly:
export BACKEND_API_KEY=your_key_here
# Or use --api-key CLI flag
opentelemetry-mcp --api-key your_key_here
- Verify services with list_services or curl http://localhost:16686/api/services
- Confirm gen_ai.usage.* attributes exist in spans
- Use get_trace to see raw span attributes

Contributions are welcome! Please ensure:
- Tests pass: pytest
- Code is formatted: ruff format .
- Linting passes: ruff check .
- Type checks pass: mypy src/

Apache 2.0 License - see LICENSE file for details
For issues and questions, open an issue on the GitHub repository (https://github.com/traceloop/opentelemetry-mcp-server/issues).