
Fast, accurate codebase intelligence for AI coding agents.
ctx++ is an MCP (Model Context Protocol) server that gives AI agents precise, structured understanding of large codebases. It extracts symbols using native tree-sitter parsing, indexes them in SQLite with both full-text and vector search, and traces call graphs to automatically map how features are assembled across files -- no hand-maintained documentation required.
ctx++ exposes the following MCP tools:
| Tool | Description |
|---|---|
| `ctxpp_index` | Index or reindex the codebase. Run once after install; incremental updates happen automatically. |
| `ctxpp_search` | Search by identifier name (keyword) or natural language (semantic). Returns symbol definitions with file paths and line numbers. |
| `ctxpp_file_skeleton` | Return all symbols in a file with signatures and line ranges, without reading the full body. A cheap way to understand a file's API surface. |
| `ctxpp_feature_traverse` | Given an exact symbol name, return related symbols by walking the call graph outward via BFS. The auto-generated feature hub. |
| `ctxpp_blast_radius` | Given a symbol, return every location in the codebase that references it. Answers "what breaks if I change this?" |
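As with any MCP server, agents invoke these tools via JSON-RPC `tools/call` requests. A sketch of what a `ctxpp_search` call might look like on the wire (the argument name `query` is an assumption for illustration, not taken from the ctx++ schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ctxpp_search",
    "arguments": { "query": "account authentication" }
  }
}
```

In practice your agent issues these calls for you; you never write them by hand.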
| Language | Extensions | Symbols Extracted |
|---|---|---|
| Go | .go | functions, methods, structs, interfaces, types, constants, variables |
| Java | .java | classes, interfaces, enums, methods, constructors, fields |
| Kotlin | .kt, .kts | functions, methods, classes, interfaces, properties, imports |
| JavaScript | .js, .mjs, .cjs, .jsx | functions, classes, methods, arrow functions |
| TypeScript | .ts, .tsx, .mts, .cts | functions, classes, interfaces, type aliases, enums |
| Rust | .rs | functions, structs, enums, traits, impl methods, type aliases |
| C# | .cs | classes, interfaces, methods, fields, imports |
| C | .c, .h | functions, structs, enums, typedefs, function-like macros |
| C++ | .cpp, .cc, .cxx, .hpp, .hh, .hxx | functions, methods, classes, structs, enums, namespaces, templates |
| SQL | .sql | tables, views, indexes, functions, procedures, triggers |
| Markdown | .md, .mdx | headings (as sections) |
| HTML | .html, .htm | headings, script/style blocks |
| Shell | .sh, .bash, .zsh, .dash | functions |
| Protobuf | .proto | messages, services, RPCs, enums |
| HTTP | .http, .rest | named requests |
| Text/Config | .txt, .env, Makefile, Dockerfile, LICENSE, etc. | file-level document symbol |
Want to add another language? See docs/ADDING-LANGUAGE-SUPPORT.md for a step-by-step implementation template and PR checklist.
```sh
# Install Ollama, then pull the default embedding model:
ollama pull bge-m3
```
Without Ollama, ctx++ still works but provides keyword search only. Semantic search and feature traversal quality depend on embeddings.
```sh
go install github.com/cavenine/ctxpp@latest
```
Or build from source:
```sh
git clone https://github.com/cavenine/ctxpp
cd ctxpp
make build
```
```sh
ctxpp index --path /path/to/your/project
```
This creates .ctxpp/index.db in the project root. Add it to .gitignore:
```
.ctxpp/
```
Subsequent runs only re-process changed files. Branch switches self-heal automatically via the file watcher.
If parser logic changes but your source files do not, force a full reparse of supported files:
```sh
ctxpp index --path /path/to/your/project --force
```
The examples below use Ollama with bge-m3 (the default). If Ollama is not running, omit CTXPP_OLLAMA_* — ctx++ will fall back to keyword search only.
OpenCode (opencode.json in project root):
```json
{
  "mcp": {
    "ctxpp": {
      "type": "local",
      "command": ["ctxpp", "mcp"],
      "enabled": true,
      "environment": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
Claude Code (.mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
Cursor / Windsurf (.cursor/mcp.json or .windsurf/mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_OLLAMA_URL": "http://localhost:11434",
        "CTXPP_OLLAMA_MODEL": "bge-m3"
      }
    }
  }
}
```
Ask your AI agent anything about the codebase:
```
use ctxpp to show me everything involved in account authentication
use ctxpp to find where FetchAccount is defined and what calls it
use ctxpp_blast_radius to tell me what breaks if I change the Account struct
```
ctx++ uses Ollama for embedding-based semantic search. The default model is bge-m3 (BAAI's BGE-M3, 1024 dimensions), which was selected through head-to-head quality benchmarks against multiple models on real codebases.
```sh
ollama pull bge-m3
```
ctx++ auto-detects Ollama on localhost:11434 at startup. If Ollama is not running, ctx++ falls back to keyword search only and prints a warning.
To use a different embedding model (e.g., all-minilm for faster indexing at the cost of some search quality):
```json
"environment": {
  "CTXPP_PROJECT": "/path/to/your/project",
  "CTXPP_OLLAMA_MODEL": "all-minilm"
}
```
For environments without a local GPU, ctx++ can use Amazon Titan Text Embeddings V2 via AWS Bedrock. Quality is comparable to the Ollama/bge-m3 default (4.7/5 vs 4.8/5 on the kubernetes benchmark).
Prerequisites: AWS credentials configured via ~/.aws/credentials, AWS_PROFILE, or IAM role. The identity needs bedrock:InvokeModel permission on amazon.titan-embed-text-v2:0.
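For reference, one way a minimal IAM policy granting that permission might look (the `Resource` ARN here follows the standard Bedrock foundation-model ARN format; scope it to your region if you prefer):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:*::foundation-model/amazon.titan-embed-text-v2:0"
    }
  ]
}
```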
Set CTXPP_EMBED_BACKEND=bedrock and the following env vars:
OpenCode (opencode.json in project root):
```json
{
  "mcp": {
    "ctxpp": {
      "type": "local",
      "command": ["ctxpp", "mcp"],
      "enabled": true,
      "environment": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Claude Code (.mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Cursor / Windsurf (.cursor/mcp.json or .windsurf/mcp.json):
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "bedrock",
        "CTXPP_BEDROCK_REGION": "us-east-1",
        "CTXPP_BEDROCK_MODEL": "amazon.titan-embed-text-v2:0",
        "CTXPP_BEDROCK_DIMS": "1024",
        "CTXPP_EMBED_CONCURRENCY": "100"
      }
    }
  }
}
```
Or for initial indexing from the command line:
```sh
export CTXPP_EMBED_BACKEND=bedrock
export CTXPP_BEDROCK_REGION=us-east-1
export CTXPP_BEDROCK_MODEL=amazon.titan-embed-text-v2:0
export CTXPP_BEDROCK_DIMS=1024
export CTXPP_EMBED_CONCURRENCY=100  # increase to 200 for large repos
ctxpp index --path /path/to/your/project
```
Trade-offs vs Ollama:
| | Ollama (local GPU) | Bedrock |
|---|---|---|
| Per-query embed latency | ~25ms | 100-460ms |
| Index time (kubernetes, 318K symbols) | 47m | ~7.5h |
| GPU required | Yes | No |
| Cost | Free (local) | AWS API pricing |
| Horizontal scaling | Limited by GPU | High (100-200 concurrent) |
| Quality (kubernetes benchmark) | 4.8/5 | 4.7/5 |
Bedrock is the right choice for CI/CD pipelines, cloud-hosted agents, or developer machines without a GPU. For interactive development with a GPU available, Ollama is faster.
ctx++ can also use any provider that exposes the OpenAI POST /v1/embeddings API. This includes OpenAI, OpenAI-compatible proxies, vLLM, LiteLLM, LocalAI, and Ollama's OpenAI-compatible endpoint.
Set CTXPP_EMBED_BACKEND=openai and configure:
- `CTXPP_OPENAI_URL`
- `CTXPP_OPENAI_MODEL`
- `CTXPP_OPENAI_DIMS`
- `CTXPP_OPENAI_API_KEY` (optional for local unauthenticated servers)

Example with OpenAI hosted embeddings:
```json
{
  "mcpServers": {
    "ctxpp": {
      "command": "ctxpp",
      "args": ["mcp"],
      "env": {
        "CTXPP_PROJECT": "/path/to/your/project",
        "CTXPP_EMBED_BACKEND": "openai",
        "CTXPP_OPENAI_URL": "https://api.openai.com",
        "CTXPP_OPENAI_MODEL": "text-embedding-3-small",
        "CTXPP_OPENAI_DIMS": "1536",
        "CTXPP_OPENAI_API_KEY": "${OPENAI_API_KEY}"
      }
    }
  }
}
```
Example with Ollama's OpenAI-compatible endpoint:
```sh
export CTXPP_EMBED_BACKEND=openai
export CTXPP_OPENAI_URL=http://localhost:11434
export CTXPP_OPENAI_MODEL=bge-m3
export CTXPP_OPENAI_DIMS=1024
ctxpp index --path /path/to/your/project
```
This backend is opt-in only. Auto-detection still prefers TEI, then Ollama, then bundled fallback.
All configuration is via environment variables.
| Variable | Default | Description |
|---|---|---|
| `CTXPP_PROJECT` | `.` | Path to the project root to index |
| `CTXPP_OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `CTXPP_OLLAMA_MODEL` | `bge-m3` | Ollama embedding model |
| `CTXPP_EMBED_BACKEND` | (auto-detect) | Embedding backend: `auto`, `ollama`, `tei`, `openai`, `bedrock`, or `bundled` |
| `CTXPP_OPENAI_URL` | `https://api.openai.com` | OpenAI-compatible embeddings API base URL |
| `CTXPP_OPENAI_MODEL` | (required with `openai`) | OpenAI-compatible embedding model |
| `CTXPP_OPENAI_API_KEY` | (optional) | Bearer token for OpenAI-compatible providers |
| `CTXPP_OPENAI_DIMS` | (required with `openai`) | Embedding dimensions for the selected OpenAI-compatible model |
| `CTXPP_WORKERS` | number of CPUs | Parallel workers for initial indexing |
| `CTXPP_EMBED_CONCURRENCY` | `10` | Max concurrent embedding requests (mainly relevant for Bedrock) |
- `ctxpp index [--path/-p] [--force]` — index or reindex a project (default: `$CTXPP_PROJECT` or the current directory)
- `ctxpp backfill [--path/-p]` — re-embed symbols missing embedding vectors
- `ctxpp mcp` — start the MCP server over stdio
- `ctxpp version` — print the version
ctx++ is written in Go and built on:

- tree-sitter for native parsing
- modernc.org/sqlite (pure Go, no CGO) with FTS5 for full-text search and brute-force cosine similarity for vector search
- Ollama for embeddings (default model: `bge-m3`)

The index lives in a single `.ctxpp/index.db` file per project. The schema tracks files, symbols, embeddings, call edges, and import edges. All queries hit indexed columns -- no full-table scans, no loading the entire index into memory.
See PRD.md for full architecture and design decisions.
When you ask ctxpp_feature_traverse about a symbol (e.g. "HandleLogin"), it looks up that symbol's definition, then walks the stored call edges outward via BFS: the symbols it calls, the symbols those call, and so on out to the traversal depth.
This gives you the full call tree rooted at a symbol -- useful for understanding what a function orchestrates without reading every file manually. Use ctxpp_blast_radius for the reverse direction: what calls this function?
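The traversal above can be sketched in a few lines of Go (a simplification of whatever ctx++ actually does over its SQLite call-edge table; `traverse` and the depth limit are illustrative):

```go
package main

import "fmt"

// traverse walks the call graph outward from root via breadth-first
// search, up to maxDepth hops, returning reached symbols in discovery
// order. Visited symbols are tracked so cycles terminate.
func traverse(calls map[string][]string, root string, maxDepth int) []string {
	seen := map[string]bool{root: true}
	frontier := []string{root}
	var out []string
	for depth := 0; depth < maxDepth && len(frontier) > 0; depth++ {
		var next []string
		for _, sym := range frontier {
			for _, callee := range calls[sym] {
				if !seen[callee] {
					seen[callee] = true
					out = append(out, callee)
					next = append(next, callee)
				}
			}
		}
		frontier = next
	}
	return out
}

func main() {
	calls := map[string][]string{
		"HandleLogin":  {"ValidateToken", "FetchAccount"},
		"FetchAccount": {"QueryDB"},
	}
	fmt.Println(traverse(calls, "HandleLogin", 2))
	// prints [ValidateToken FetchAccount QueryDB]
}
```

Blast radius is the same walk over the edges reversed: start from the symbol and follow caller edges instead of callee edges.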
MIT
Install ctx++ with a single command:

```sh
npx mdskills install cavenine/ctxpp
```

This downloads the skill files into your project, and your AI agent picks them up automatically.
ctx++ works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Gemini CLI, Amp, Roo Code, and Goose. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.