<p align="center">
  <img src="./socraticode_logo_thumbnail.png" alt="SocratiCode logo" />
</p>

# SocratiCode

<p align="center">
  <a href="https://github.com/giancarloerra/socraticode/actions/workflows/ci.yml"><img src="https://github.com/giancarloerra/socraticode/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-AGPL--3.0-blue.svg" alt="License: AGPL-3.0"></a>
  <a href="https://www.npmjs.com/package/socraticode"><img src="https://img.shields.io/npm/v/socraticode.svg" alt="npm version"></a>
  <a href="https://nodejs.org/"><img src="https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg" alt="Node.js >= 18"></a>
  <a href="https://github.com/giancarloerra/socraticode"><img src="https://img.shields.io/github/stars/giancarloerra/socraticode?style=social" alt="GitHub stars"></a>
</p>

<p align="center">
  <a href="https://insiders.vscode.dev/redirect/mcp/install?name=socraticode&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22socraticode%22%5D%7D"><img src="https://img.shields.io/badge/VS_Code-Install_MCP_Server-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white" alt="Install in VS Code"></a>
  <a href="https://insiders.vscode.dev/redirect/mcp/install?name=socraticode&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22socraticode%22%5D%7D&quality=insiders"><img src="https://img.shields.io/badge/VS_Code_Insiders-Install_MCP_Server-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white" alt="Install in VS Code Insiders"></a>
  <a href="cursor://anysphere.cursor-deeplink/mcp/install?name=socraticode&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsInNvY3JhdGljb2RlIl19"><img src="https://img.shields.io/badge/Cursor-Install_MCP_Server-F14C28?style=flat-square&logo=cursor&logoColor=white" alt="Install in Cursor"></a>
</p>

> *"There is only one good, knowledge, and one evil, ignorance."* — Socrates

**Give any AI instant automated knowledge of your entire codebase (and infrastructure) — at scale, zero configuration, fully private, completely free.**

<p align="center">
  Kindly sponsored by <a href="https://altaire.com">Altaire Limited</a>
</p>

> If SocratiCode has been useful to you, please ⭐ **star this repo** — it helps others discover it — and share it with your dev team and fellow developers!

**One thing, done well: deep codebase intelligence — zero setup, no bloat, fully automatic.** SocratiCode gives AI assistants deep semantic understanding of your codebase — hybrid search, polyglot code dependency graphs, and searchable context artifacts (database schemas, API specs, infra configs, architecture docs). Zero configuration — add it to any MCP host and it manages everything automatically.

**Production-ready**, battle-tested on **enterprise-level** large repositories (up to and beyond **~40 million lines of code**). **Batched**, automatic, **resumable** indexing checkpoints progress — pauses, crashes, restarts, and interruptions don't lose work. The file watcher keeps the **index automatically updated** on every file change and across sessions.

**Private and local by default** — Docker handles everything, no API keys required, no data leaves your machine. **Cloud ready** for embeddings (OpenAI, Google Gemini) and Qdrant, and a **full suite of configuration options** is available when you need it.

The first Qdrant‑based MCP server that pairs auto‑managed, zero‑config local Docker deployment with **AST‑aware code chunking, hybrid semantic + BM25 (RRF‑fused) code search**, polyglot dependency **graphs** with circular‑dependency visualization, and searchable **infra/API/database artifacts** in a single focused, zero-config, easy-to-use code intelligence engine.

> **Benchmarked on VS Code (2.45M lines):** SocratiCode uses **61% less context**, **84% fewer tool calls**, and is **37× faster** than grep‑based exploration — tested live with Claude Opus 4.6. [See the full benchmark →](#real-world-benchmark-vs-code-245m-lines-of-code-with-claude-opus-46)

## Contents

- [Quick Start](#quick-start)
- [Why SocratiCode](#why-socraticode)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Example Workflow](#example-workflow)
- [Agent Instructions](#agent-instructions)
- [Configuration](#configuration)
- [Language Support](#language-support)
- [Ignore Rules](#ignore-rules)
- [Context Artifacts](#context-artifacts)
- [Environment Variables](#environment-variables)
- [Docker Resources](#docker-resources)
- [Testing](#testing)
- [Why Not Just Grep?](#why-not-just-grep)
- [FAQ](#faq)
- [License](#license)

---

## Quick Start

> **Only [Docker](https://www.docker.com/products/docker-desktop/) (running) required.**

**One-click install** — VS Code and Cursor:

[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install_MCP_Server-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=socraticode&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22socraticode%22%5D%7D) [![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install_MCP_Server-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://insiders.vscode.dev/redirect/mcp/install?name=socraticode&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22socraticode%22%5D%7D&quality=insiders) [![Install in Cursor](https://img.shields.io/badge/Cursor-Install_MCP_Server-F14C28?style=flat-square&logo=cursor&logoColor=white)](cursor://anysphere.cursor-deeplink/mcp/install?name=socraticode&config=eyJjb21tYW5kIjoibnB4IiwiYXJncyI6WyIteSIsInNvY3JhdGljb2RlIl19)

**All MCP hosts** — add the following to your `mcpServers` (Claude Desktop, Windsurf, Cline, Roo Code) or `servers` (VS Code project-local `.vscode/mcp.json`) config:

```json
"socraticode": {
  "command": "npx",
  "args": ["-y", "socraticode"]
}
```

**Claude Code** — run this command:

```bash
claude mcp add socraticode -- npx -y socraticode
```

**OpenAI Codex CLI** — add to `~/.codex/config.toml`:

```toml
[mcp_servers.socraticode]
command = "npx"
args = ["-y", "socraticode"]
```

Restart your host.
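Since the only prerequisite is a running Docker daemon, a quick optional sanity check before the first launch (SocratiCode performs its own Docker check on startup, so this is just for troubleshooting):

```shell
# Confirm the Docker daemon is reachable before the first SocratiCode run
if docker info > /dev/null 2>&1; then
  echo "Docker is running"
else
  echo "Docker is not running: start Docker Desktop first"
fi
```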
On first use SocratiCode automatically pulls Docker images, starts its own Qdrant and Ollama containers, and downloads the embedding model — one-time setup, ~5 minutes depending on your connection. After that, it starts in seconds.

**First time on a project** — ask your AI: **"Index this codebase"**. Indexing runs in the background; ask **"What is the codebase index status?"** to monitor progress. Depending on codebase size and whether you're using GPU-accelerated Ollama or cloud embeddings, first-time indexing can take anywhere from a few seconds to a few minutes (it takes under 10 minutes to first-index 3+ million lines of code on a MacBook Pro M4). Once complete, it doesn't need to be run again — you can search, explore the dependency graph, and query context artifacts.

**Every time after that** — just use the tools (search, graph, etc.). On server startup SocratiCode automatically detects previously indexed projects, restarts the file watcher, and runs an incremental update to catch any changes made while the server was down. If indexing was interrupted, it resumes automatically from the last checkpoint. You can also explicitly start or restart the watcher with `codebase_watch { action: "start" }`.

> **macOS / Windows on large codebases**: Docker containers can't use the GPU. For medium-to-large repos, [install native Ollama](https://ollama.com/download) (auto-detected, no config change needed) for Metal/CUDA acceleration, or use [OpenAI embeddings](#openai-embeddings) for speed without a local install. [Full details.](#embedding-performance-on-macos--windows)

> **Recommended**: For best results, add the [Agent Instructions](#agent-instructions) to your AI assistant's system prompt or project instructions file (`CLAUDE.md`, `AGENTS.md`, etc.). The key principle — **search before reading** — helps your AI use SocratiCode's tools effectively and avoid unnecessary file reads.

> **Advanced**: cloud embeddings (OpenAI / Google), external Qdrant, remote Ollama, native Ollama, and dozens of tuning options are all available. See [Configuration](#configuration) below.

## Why SocratiCode

I built SocratiCode because I regularly work on existing, large, and complex codebases across different languages and need to understand them quickly and act. Existing solutions were either too limited, insufficiently tested for production use, or bloated with unnecessary complexity. I wanted a single focused tool that does deep codebase intelligence well — zero setup, no bloat, fully automatic — and gets out of the way.

- **True Zero Configuration** — Just add the MCP server to your AI host config. The server automatically pulls Docker images, starts Qdrant and Ollama containers, and downloads the embedding model on first use. No config files, no YAML, no environment variables to tune, no native dependencies to compile, no commands to type. Works everywhere Docker runs.
- **Fully Private & Local by Default** — Everything runs on your machine. Your code never leaves your network. The default Docker setup includes Ollama and Qdrant with no external API calls. Optional cloud providers (Qdrant, OpenAI, Gemini) are available but never required.
- **Language-Agnostic** — Works with every programming language, framework, and file type out of the box. No per-language parsers to install, no grammar files to maintain, no "unsupported language" limitations. If your AI can read it, SocratiCode can index it.
- **Production-Grade Vector Search** — Built on Qdrant, a purpose-built vector database with HNSW indexing, concurrent read/write, and payload filtering. Collections store both a dense vector and a BM25 sparse vector per chunk; the Query API runs both sub-queries in a single round-trip and fuses results with RRF. Designed for vector search at scale.
- **Flexible Embedding Providers** — Switch between local Ollama (private), Docker Ollama (zero-config), OpenAI (fastest), or Google Gemini (free tier) with a single environment variable. No provider-specific configuration files.
- **Enterprise-Ready Simplicity** — No agent coordination tuning, no memory limit environment variables, no coordinator/conductor capacity knobs, no backpressure configuration. SocratiCode scales by relying on production-grade infrastructure (Qdrant, proven embedding APIs) rather than complex in-process orchestration.
- **Measurably better than grep** — On VS Code's 2.45M‑line codebase, SocratiCode answers architectural questions with **61% less data**, **84% fewer steps**, and **37× faster** response than a grep‑based AI agent. [Full benchmark →](#real-world-benchmark-vs-code-245m-lines-of-code-with-claude-opus-46)

## Features

- **Hybrid code search** — Combines dense vector (semantic) search with BM25 lexical search, merged via Reciprocal Rank Fusion (RRF). Semantic search handles conceptual queries like "authentication middleware" even when those exact words don't appear in the code. BM25 handles exact identifier and keyword lookups that dense models struggle to rank precisely. RRF merges both result sets automatically — you get the best of both in every query with no tuning required.
- **Configurable Qdrant** — Use the built-in Docker Qdrant (default, zero config) or connect to your own instance (self-hosted, remote server, or Qdrant Cloud). Configure via `QDRANT_MODE`, `QDRANT_URL`, and `QDRANT_API_KEY` environment variables.
- **Configurable Ollama** — Use the built-in Docker Ollama (default, zero config) or point to your own Ollama instance (native install with GPU access, remote server, etc.). Configure via `OLLAMA_MODE`, `OLLAMA_URL`, `EMBEDDING_MODEL`, and `EMBEDDING_DIMENSIONS` environment variables.
- **Multi-provider embeddings** — Beyond Ollama, use OpenAI (`text-embedding-3-small`) or Google Generative AI (`gemini-embedding-001`) for cloud-based embeddings. Just set `EMBEDDING_PROVIDER` and your API key.
- **Private & secure** — Everything runs locally. Embeddings via Ollama, vector storage via Qdrant. No API costs, no token limits. Suitable for air-gapped and on-premises environments.
- **AST-aware chunking** — Files are split at function/class boundaries using AST parsing (ast-grep), not arbitrary line counts. This produces higher-quality search results. Falls back to line-based chunking for unsupported languages.
- **Polyglot code dependency graph** — Static analysis of import/require/use/include statements using ast-grep for 18+ languages. No external tools like dependency-cruiser required. Detects circular dependencies and generates visual Mermaid diagrams.
- **Incremental indexing** — After the first full index, only changed files are re-processed. Content hashes are persisted in Qdrant so state survives server restarts.
- **Batched & resumable indexing** — Files are processed in batches of 50, with progress checkpointed to Qdrant after each batch. If the process crashes or is interrupted, the next run automatically resumes from where it left off — already-indexed files are skipped via hash comparison. This keeps peak memory low and makes indexing reliable even for very large codebases.
- **Live file watching** — Optionally watch for file changes and keep the index updated in real time (debounced 2s). The watcher also invalidates the code graph cache.
- **Parallel processing** — Files are scanned and chunked in parallel batches (50 at a time) for fast I/O, while embedding generation and upserts are batched separately for optimal throughput.
- **Multi-project** — Index multiple projects simultaneously. Each gets its own isolated collection with full project path tracking.
- **Respects ignore rules** — Honors all `.gitignore` files (root + nested), plus an optional `.socraticodeignore` for additional exclusions. Includes sensible built-in defaults. `.gitignore` processing can be disabled via `RESPECT_GITIGNORE=false`.
- **Custom file extensions** — Projects with non-standard extensions (e.g. `.tpl`, `.blade`) can be included via the `EXTRA_EXTENSIONS` env var or the `extraExtensions` tool parameter. Works for both indexing and the code graph.
- **Configurable infrastructure** — All ports, hosts, and API keys are configurable via environment variables. Qdrant API key support for enterprise deployments.
- **Auto-setup** — On first use, automatically checks Docker, pulls images, starts containers, and pulls the embedding model. Only prerequisite: Docker.
- **Session resume** — When reopening a previously indexed project, the file watcher starts automatically on first tool use (search, status, update, or graph query). It catches any changes made since the last session and keeps the index live — no manual action needed.
- **Auto-start watcher** — The file watcher is automatically activated when you use any SocratiCode tool on an indexed project. It starts after `codebase_index` completes, after `codebase_update`, and on the first `codebase_search`, `codebase_status`, or graph query. You can also start it manually with `codebase_watch { action: "start" }` if needed.
- **Auto-build code graph** — The code dependency graph is automatically built after indexing and rebuilt when watched files change. No need to call `codebase_graph_build` manually unless you want to force a rebuild.
- **Cross-process safety** — File-based locking (`proper-lockfile`) prevents multiple MCP instances from simultaneously indexing or watching the same project. Stale locks from crashed processes are automatically reclaimed. When another MCP process is already watching a project, `codebase_status` reports "active (watched by another process)" instead of incorrectly showing "inactive."
- **Concurrency guards** — Duplicate indexing and graph-build operations are prevented. If you call `codebase_index` while indexing is already running, it returns the current progress instead of starting a second operation.
- **Graceful stop** — Long-running indexing operations can be stopped safely with `codebase_stop`. The current batch finishes and checkpoints, preserving all progress. Re-run `codebase_index` to resume from where it left off.
- **Graceful shutdown** — On server shutdown, active indexing operations are given up to 60 seconds to complete, all file watchers are stopped cleanly, and the MCP server closes gracefully.
- **Structured logging** — All operations are logged with structured context for observability. Log level configurable via `SOCRATICODE_LOG_LEVEL`.
- **Graceful degradation** — If infrastructure goes down during watch, the watcher backs off and retries instead of crashing.

## Prerequisites

| Dependency | Purpose | Install |
|------------|---------|---------|
| [Docker](https://www.docker.com/products/docker-desktop/) | Runs Qdrant (vector DB) and, by default, Ollama (embeddings) | [docker.com](https://www.docker.com/products/docker-desktop/) |
| Node.js 18+ | Runs the MCP server | [nodejs.org](https://nodejs.org/) |

Docker must be **running** when you use the server in the default `managed` mode.

The Qdrant container is managed automatically. If you set `QDRANT_MODE=external` and point `QDRANT_URL` at a remote or cloud Qdrant instance, Docker is then only needed for Ollama (embeddings).

The Ollama container (embeddings) is also managed automatically in the default `auto` mode. SocratiCode first checks whether Ollama is already running natively — if so, it uses it. Otherwise it manages a Docker container for you. First-time download of the Docker images or embedding models may take a few minutes, depending on your internet speed, and is only needed at first launch.

### Embedding performance on macOS / Windows

Docker containers on macOS and Windows cannot access the GPU (no Metal or CUDA passthrough). For small projects this is fine, but for medium-to-large codebases the CPU-only container is noticeably slower.

**For best performance, install native Ollama:** download and run the installer from [ollama.com/download](https://ollama.com/download). Once Ollama is running, SocratiCode will automatically detect and use it — no extra configuration needed (first-time download of the embedding model, if not present, might take a few minutes). This gives you Metal GPU acceleration on macOS and CUDA on Windows/Linux.

If you prefer speed without a local install, see [OpenAI Embeddings](#openai-embeddings) and [Google Generative AI Embeddings](#google-generative-ai-embeddings) below for cloud-based options. OpenAI is very fast with no local setup required. Google’s free tier is functional but rate-limited.
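To check whether a native Ollama install is already reachable on the default port (where SocratiCode auto-detects it) and optionally pre-pull the default embedding model, a sketch like this works — `nomic-embed-text` is the documented default model:

```shell
# Is a native Ollama listening on the default port (11434)?
if curl -s http://localhost:11434/api/tags > /dev/null; then
  echo "native Ollama detected: SocratiCode will use it for GPU-accelerated embeddings"
  # Pre-pull the default embedding model so first indexing starts immediately
  ollama pull nomic-embed-text
else
  echo "no native Ollama: SocratiCode will fall back to its managed Docker container"
fi
```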
See [Environment Variables](#environment-variables) for configuration details.

## Example Workflow

All tools default `projectPath` to the current working directory, so you never need to specify a path for the active project.

```
User: "Index this project"
→ codebase_index {}
  ⚡ Indexing started in the background — call codebase_status to check progress
→ codebase_status {}
  ⚠ Full index in progress — Phase: generating embeddings (batch 1/1)
  Progress: 247/1847 chunks embedded (13%) — Elapsed: 12s
→ codebase_status {}
  ✓ Indexing complete: 342 files, 1,847 chunks (took 115.2s)
  File watcher: active (auto-updating on changes)

User: "Search for how authentication is handled"
→ codebase_search { query: "authentication handling" }
  Runs dense semantic search + BM25 keyword search in parallel, fuses results with RRF
  Returns top 10 results ranked by combined relevance

User: "What files depend on the auth middleware?"
→ codebase_graph_query { filePath: "src/middleware/auth.ts" }
  Returns imports and dependents
  (graph was auto-built after indexing — no manual build needed)

User: "Show me the dependency graph"
→ codebase_graph_visualize {}
  Returns a Mermaid diagram color-coded by language

User: "Are there any circular dependencies?"
→ codebase_graph_circular {}
  Found 2 cycles: src/a.ts → src/b.ts → src/a.ts
```

## Agent Instructions

For best results, add instructions like the following to your AI assistant's system prompt, `CLAUDE.md`, `AGENTS.md`, or equivalent instructions file. The core principle: **search before reading**. The index gives you a map of the codebase in milliseconds; raw file reading is expensive and context-consuming.

```markdown
## Codebase Search (SocratiCode)

This project is indexed with SocratiCode. Always use its MCP tools to explore the codebase
before reading any files directly.

### Workflow

1. **Start most explorations with `codebase_search`.**
   Hybrid semantic + keyword search (vector + BM25, RRF-fused) runs in a single call.
   - Use broad, conceptual queries for orientation: "how is authentication handled",
     "database connection setup", "error handling patterns".
   - Use precise queries for symbol lookups: exact function names, constants, type names.
   - Prefer search results to infer which files to read — do not speculatively open files.
   - **When to use grep instead**: If you already know the exact identifier, error string,
     or regex pattern, grep/ripgrep is faster and more precise — no semantic gap to bridge.
     Use `codebase_search` when you're exploring, asking conceptual questions, or don't
     know which files to look in.

2. **Follow the graph before following imports.**
   Use `codebase_graph_query` to see what a file imports and what depends on it before
   diving into its contents. This prevents unnecessary reading of transitive dependencies.

3. **Read files only after narrowing down via search.**
   Once search results clearly point to 1–3 files, read only the relevant sections.
   Never read a file just to find out if it's relevant — search first.

4. **Use `codebase_graph_circular` when debugging unexpected behavior.**
   Circular dependencies cause subtle runtime issues; check for them proactively.

5. **Check `codebase_status` if search returns no results.**
   The project may not be indexed yet. Run `codebase_index` if needed, then wait for
   `codebase_status` to confirm completion before searching.

6. **Leverage context artifacts for non-code knowledge.**
   Projects can define a `.socraticodecontextartifacts.json` config to expose database
   schemas, API specs, infrastructure configs, architecture docs, and other project
   knowledge that lives outside source code. These artifacts are auto-indexed alongside
   code during `codebase_index` and `codebase_update`.
   - Run `codebase_context` early to see what artifacts are available.
   - Use `codebase_context_search` to find specific schemas, endpoints, or configs
     before asking about database structure or API contracts.
   - If `codebase_status` shows artifacts are stale, run `codebase_context_index` to
     refresh them.

### When to use each tool

| Goal | Tool |
|------|------|
| Understand what a codebase does / where a feature lives | `codebase_search` (broad query) |
| Find a specific function, constant, or type | `codebase_search` (exact name) or grep if you already know the exact string |
| Find exact error messages, log strings, or regex patterns | grep / ripgrep |
| See what a file imports or what depends on it | `codebase_graph_query` |
| Spot architectural problems | `codebase_graph_circular`, `codebase_graph_stats` |
| Visualise module structure | `codebase_graph_visualize` |
| Verify index is up to date | `codebase_status` |
| Discover what project knowledge (schemas, specs, configs) is available | `codebase_context` |
| Find database tables, API endpoints, infra configs | `codebase_context_search` |
```

> **Why semantic search first?** A single `codebase_search` call returns ranked, deduplicated snippets from across the entire codebase in milliseconds. This gives you a broad map at negligible token cost — far cheaper than opening files speculatively. Once you know which files matter, targeted reading is both faster and more accurate. That said, grep remains the right tool when you have an exact string or pattern — use whichever fits the query.

> **Keep the connection alive during indexing.** Indexing runs in the background — the MCP server continues working even when not actively responding to tool calls. However, some MCP hosts may disconnect an idle MCP connection after a period of inactivity, which can cut off the background process. Instruct your AI to call `codebase_status` roughly every 60 seconds after starting `codebase_index` until it completes. This keeps the host connection active and provides real-time progress.

## Configuration

### Install

#### npx (recommended — no installation)

Requires Node.js 18+ and Docker (running). This is covered in [Quick Start](#quick-start) above: add the following to your `mcpServers` (Claude Desktop, Windsurf, Cline, Roo Code) or `servers` (VS Code project-local `.vscode/mcp.json`) config:

```json
"socraticode": {
  "command": "npx",
  "args": ["-y", "socraticode"]
}
```

#### From source (for contributors)

```bash
git clone https://github.com/giancarloerra/socraticode.git
cd socraticode
npm install
npm run build
```

Then use `node /absolute/path/to/socraticode/dist/index.js` in place of `npx -y socraticode` in the config examples below.

### MCP host config variants

> All `env` options below apply equally to the `npx` install. Just add the `"env"` block to the npx config shown above.

Add to your MCP settings, under `mcpServers` (Claude Desktop, Windsurf, Cline, Roo Code) or `servers` (VS Code project-local `.vscode/mcp.json`):

#### Default (zero config, from source)

> Using **npx**? Your config is already in [Quick Start](#quick-start). Add any `"env"` block from the examples below as needed.

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"]
    }
  }
}
```

> **Tip**: The default `OLLAMA_MODE=auto` detects native Ollama (port 11434) on startup and uses it if available, otherwise falls back to a managed Docker container. To make your config self-documenting, add an `"env"` block with explicit values.
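For example, a self-documenting config that spells out the implicit defaults might look like this (the values shown are the documented defaults for the Ollama provider):

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "auto",
        "EMBEDDING_PROVIDER": "ollama",
        "EMBEDDING_MODEL": "nomic-embed-text",
        "EMBEDDING_DIMENSIONS": "768"
      }
    }
  }
}
```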
> See [Environment Variables](#environment-variables) for all options.

#### External Ollama (native install)

If you have [Ollama](https://ollama.com) installed natively, set `OLLAMA_MODE=external` and point to your instance:

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "external",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}
```

The embedding model is pulled automatically on first use. To pre-download: `ollama pull nomic-embed-text`

#### Remote Ollama server

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "OLLAMA_MODE": "external",
        "OLLAMA_URL": "http://gpu-server.local:11434"
      }
    }
  }
}
```

#### OpenAI Embeddings

Use OpenAI's cloud embedding API instead of local Ollama. Requires an [API key](https://platform.openai.com/api-keys).

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

> Defaults: `EMBEDDING_MODEL=text-embedding-3-small`, `EMBEDDING_DIMENSIONS=1536`. For higher quality, use `text-embedding-3-large` with `EMBEDDING_DIMENSIONS=3072`.

#### Google Generative AI Embeddings

Use Google's Gemini embedding API. Requires an [API key](https://aistudio.google.com/apikey).

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "node",
      "args": ["/absolute/path/to/socraticode/dist/index.js"],
      "env": {
        "EMBEDDING_PROVIDER": "google",
        "GOOGLE_API_KEY": "AIza..."
      }
    }
  }
}
```

> Defaults: `EMBEDDING_MODEL=gemini-embedding-001`, `EMBEDDING_DIMENSIONS=3072`.

### Available tools

Once connected, 21 tools are available to your AI assistant:

#### Indexing

| Tool | Description |
|------|-------------|
| `codebase_index` | Start indexing a codebase in the background (poll `codebase_status` for progress) |
| `codebase_stop` | Gracefully stop an in-progress indexing operation (current batch finishes and checkpoints; resume with `codebase_index`) |
| `codebase_update` | Incremental update — only re-indexes changed files |
| `codebase_remove` | Remove a project's index (safely stops watcher, cancels in-flight indexing/update, waits for graph build) |
| `codebase_watch` | Start/stop file watching — on start, catches up missed changes then watches for future ones |

#### Search

| Tool | Description |
|------|-------------|
| `codebase_search` | Hybrid semantic + keyword search (dense + BM25, RRF-fused) with optional file path and language filters |
| `codebase_status` | Check index status and chunk count |

#### Code Graph

| Tool | Description |
|------|-------------|
| `codebase_graph_build` | Build a polyglot dependency graph (runs in background — poll with `codebase_graph_status`) |
| `codebase_graph_query` | Query imports and dependents for a specific file |
| `codebase_graph_stats` | Get graph statistics (most connected files, orphans, language breakdown) |
| `codebase_graph_circular` | Detect circular dependencies |
| `codebase_graph_visualize` | Generate a Mermaid diagram of the dependency graph |
| `codebase_graph_status` | Check graph build progress or persisted graph metadata |
| `codebase_graph_remove` | Remove a project's persisted code graph (waits for in-flight graph build to finish first) |

#### Management

| Tool | Description |
|------|-------------|
| `codebase_health` | Check Docker, Qdrant, and embedding provider status |
| `codebase_list_projects` | List all indexed projects with paths and metadata |
| `codebase_about` | Display info about SocratiCode |

#### Context Artifacts

| Tool | Description |
|------|-------------|
| `codebase_context` | List all context artifacts defined in `.socraticodecontextartifacts.json` with names, descriptions, and index status |
| `codebase_context_search` | Semantic search across context artifacts (auto-indexes on first use, auto-detects staleness) |
| `codebase_context_index` | Index or re-index all artifacts from `.socraticodecontextartifacts.json` |
| `codebase_context_remove` | Remove all indexed context artifacts for a project (blocked while indexing is in progress) |

## Language Support

SocratiCode supports languages at three levels:

### Full Support (indexing + code graph + AST chunking)

JavaScript, TypeScript, TSX, Python, Java, Kotlin, Scala, C, C++, C#, Go, Rust, Ruby, PHP, Swift, Bash/Shell, HTML, CSS/SCSS

### Code Graph via Regex + Indexing

Dart (import/export/part), Lua (require/dofile/loadfile)

### Indexing Only (hybrid search, line-based chunking)

Vue, Svelte, SASS, LESS, JSON, YAML, TOML, XML, INI/CFG, Markdown/MDX, RST, SQL, R, Dockerfile, TXT, and any file matching a supported extension or special filename (Dockerfile, Makefile, Gemfile, Rakefile, etc.)

**54 file extensions** + 8 special filenames supported out of the box.

## Ignore Rules

The indexer combines three layers of ignore rules:

1. **Built-in defaults** — `node_modules`, `.git`, `dist`, `build`, lock files, IDE folders, etc.
2. **`.gitignore`** — All `.gitignore` files in the project (root and nested subdirectories). Set `RESPECT_GITIGNORE=false` to skip `.gitignore` processing entirely.
3. **`.socraticodeignore`** — Optional file for indexer-specific exclusions. Same syntax as `.gitignore`.

## Context Artifacts

Give the AI awareness of project knowledge beyond source code — database schemas, API specs, infrastructure configs, architecture docs, and more.

### Setup

Create a `.socraticodecontextartifacts.json` file in your project root (see [`.socraticodecontextartifacts.json.example`](.socraticodecontextartifacts.json.example) for a starter template):

```json
{
  "artifacts": [
    {
      "name": "database-schema",
      "path": "./docs/schema.sql",
      "description": "Complete PostgreSQL schema — all tables, indexes, constraints, foreign keys. Use to understand what data the app stores and how tables relate."
    },
    {
      "name": "api-spec",
      "path": "./docs/openapi.yaml",
      "description": "OpenAPI 3.0 spec for the REST API. All endpoints, request/response schemas, auth requirements."
    },
    {
      "name": "k8s-manifests",
      "path": "./deploy/k8s/",
      "description": "Kubernetes deployment manifests. Shows how services are deployed, scaled, and networked."
    }
  ]
}
```

Each artifact has:
- **`name`** — Unique identifier (used to filter searches)
- **`path`** — Path to a file or directory (relative to project root, or absolute). Directories are read recursively.
- **`description`** — Tells the AI what this artifact is and how to use it

### How it works

Artifacts are chunked and embedded into Qdrant using the same hybrid dense + BM25 search as code. On first search, artifacts are auto-indexed. On subsequent searches, staleness is auto-detected via content hashing — changed files are re-indexed transparently.

### Usage

1. **Discover**: `codebase_context` — lists all defined artifacts and their index status
2. **Search**: `codebase_context_search` — semantic search across all artifacts (or filter by name)
3. **Re-index**: `codebase_context_index` — force re-index (usually not needed, auto-indexing handles it)
4. **Clean up**: `codebase_context_remove` — remove all indexed artifacts

### Example artifacts

| Category | Examples |
|----------|----------|
| **Database** | SQL schema dumps (`pg_dump --schema-only`), Prisma schemas, Rails `schema.rb`, Django model dumps, migration files |
| **API Contracts** | OpenAPI/Swagger specs, GraphQL schemas, Protobuf definitions, AsyncAPI specs (Kafka, RabbitMQ) |
| **Infrastructure** | Terraform/Pulumi configs, Kubernetes manifests, Docker Compose files, CI/CD pipeline configs |
| **Architecture** | Architecture Decision Records (ADRs), service topology docs, data flow diagrams, domain glossaries |
| **Operations** | Monitoring/alerting rules, RBAC/permission matrices, auth flow documentation, feature flag configs |
| **External** | Third-party API docs, compliance requirements (SOC2, HIPAA, GDPR), SLA definitions |

> **Tip**: For database schemas, every major database can export its entire schema to a single file: `pg_dump --schema-only` (PostgreSQL), `mysqldump --no-data` (MySQL), `sqlite3 db.sqlite .schema` (SQLite). ORM schemas (Prisma, Rails, Django) are often already in your repo.

## Environment Variables

### Embedding Provider

| Variable | Default | Description |
|----------|---------|-------------|
| `EMBEDDING_PROVIDER` | `ollama` | Embedding backend: `ollama` (local, default), `openai`, or `google` |
| `EMBEDDING_MODEL` | *(per provider)* | Model name. Defaults: `nomic-embed-text` (ollama), `text-embedding-3-small` (openai), `gemini-embedding-001` (google) |
| `EMBEDDING_DIMENSIONS` | *(per provider)* | Vector dimensions. Defaults: `768` (ollama), `1536` (openai), `3072` (google) |
| `EMBEDDING_CONTEXT_LENGTH` | *(auto-detected)* | Model context window in tokens.
Auto-detected for known models. Set manually for custom models. |541542### Ollama Configuration (when `EMBEDDING_PROVIDER=ollama`)543544| Variable | Default | Description |545|----------|---------|-------------|546| `OLLAMA_MODE` | `auto` | `auto` = use native Ollama on port 11434 if available, otherwise manage a Docker container (recommended). `docker` = always use managed Docker container on port 11435. `external` = user-managed Ollama instance (native, remote, etc.) |547| `OLLAMA_URL` | `http://localhost:11434` (auto/external) / `http://localhost:11435` (docker) | Full Ollama API endpoint |548| `OLLAMA_PORT` | `11435` | Ollama container port (Docker mode). Ignored when `OLLAMA_URL` is set explicitly. |549| `OLLAMA_HOST` | `http://localhost:{OLLAMA_PORT}` | Ollama base URL (alternative to `OLLAMA_URL`) |550| `OLLAMA_API_KEY` | *(none)* | Optional API key for authenticated Ollama proxies |551552### Cloud Provider API Keys553554| Variable | Default | Description |555|----------|---------|-------------|556| `OPENAI_API_KEY` | *(none)* | Required when `EMBEDDING_PROVIDER=openai`. Get from [platform.openai.com](https://platform.openai.com/api-keys) |557| `GOOGLE_API_KEY` | *(none)* | Required when `EMBEDDING_PROVIDER=google`. Get from [aistudio.google.com](https://aistudio.google.com/apikey) |558559### Qdrant Configuration560561| Variable | Default | Description |562|----------|---------|-------------|563| `QDRANT_MODE` | `managed` | `managed` = Docker-managed local Qdrant (default). `external` = user-provided remote or cloud Qdrant (no Docker management). |564| `QDRANT_URL` | *(none)* | Full URL of a remote/cloud Qdrant instance (e.g. `https://xyz.aws.cloud.qdrant.io:6333`). When set, takes precedence over `QDRANT_HOST` + `QDRANT_PORT`. Required (or set `QDRANT_HOST`) when `QDRANT_MODE=external`. 
|565| `QDRANT_PORT` | `16333` | Qdrant REST API port (managed mode, or external without `QDRANT_URL`) |566| `QDRANT_GRPC_PORT` | `16334` | Qdrant gRPC port (managed mode only) |567| `QDRANT_HOST` | `localhost` | Qdrant hostname (alternative to `QDRANT_URL` for non-HTTPS external instances) |568| `QDRANT_API_KEY` | *(none)* | Qdrant API key (required for Qdrant Cloud and other authenticated deployments) |569570### Indexing Behavior571572| Variable | Default | Description |573|----------|---------|-------------|574| `RESPECT_GITIGNORE` | `true` | Set to `false` to skip `.gitignore` processing. Built-in defaults and `.socraticodeignore` still apply. |575| `EXTRA_EXTENSIONS` | *(none)* | Comma-separated list of additional file extensions to scan (e.g. `.tpl,.blade,.hbs`). Applies to both indexing and code graph. Files with extra extensions are indexed as plaintext and appear as leaf nodes in the code graph. Can also be passed per-operation via the `extraExtensions` tool parameter. |576| `MAX_FILE_SIZE_MB` | `5` | Maximum file size in MB. Files larger than this are skipped during indexing. Increase for repos with large generated or data files you want indexed. |577| `SEARCH_DEFAULT_LIMIT` | `10` | Default number of results returned by `codebase_search` (1-50). Each result is a ranked code chunk with file path, line range, and content. Higher values give broader coverage but produce more output. Can still be overridden per-query via the `limit` tool parameter. |578| `SEARCH_MIN_SCORE` | `0.10` | Minimum RRF (Reciprocal Rank Fusion) score threshold (0-1). Results below this score are filtered out. Helps remove low-relevance noise from search results. Set to `0` to disable filtering (returns all results up to `limit`). Can be overridden per-query via the `minScore` tool parameter. Works together with `limit`: results are first filtered by score, then capped at `limit`. 
|579| `SOCRATICODE_LOG_LEVEL` | `info` | Log verbosity: `debug`, `info`, `warn`, `error` |580| `SOCRATICODE_LOG_FILE` | *(none)* | Absolute path to a log file. When set, all log entries are appended to this file (a session separator is written on each server start). Useful for debugging when the MCP host doesn't surface log notifications. |581582> **Important**: If you change `EMBEDDING_PROVIDER`, `EMBEDDING_MODEL`, or `EMBEDDING_DIMENSIONS` after indexing, you must re-index your projects (`codebase_remove` then `codebase_index`) since existing vectors have different dimensions.583584## Docker Resources585586SocratiCode manages Docker containers and persistent volumes:587588| Resource | Name | Purpose | When |589|----------|------|---------|------|590| Container | `socraticode-qdrant` | Qdrant vector database (pinned `v1.17.0`) | `managed` mode only |591| Container | `socraticode-ollama` | Ollama embedding server | `docker` mode only |592| Volume | `socraticode_qdrant_data` | Persistent vector storage | `managed` mode only |593| Volume | `socraticode_ollama_data` | Persistent model storage | `docker` mode only |594595In `QDRANT_MODE=external` mode, the Qdrant container and volume are not created or started — SocratiCode connects directly to the configured remote endpoint. Server-side BM25 inference (used for hybrid search) requires **Qdrant v1.15.2 or later**. The managed container runs `v1.17.0`. If you bring your own Qdrant instance, ensure it meets this minimum.596597All containers use `--restart unless-stopped` for automatic recovery.598599> **Why non-standard ports?** SocratiCode intentionally uses non-default ports for its managed containers — `16333`/`16334` instead of Qdrant's defaults (`6333`/`6334`), and `11435` instead of Ollama's default (`11434`). This avoids conflicts with any Qdrant or Ollama instance you may already be running locally. 
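
Concretely, using the port variables documented above, a managed setup can be moved to different ports entirely via the MCP server config (the port values below are illustrative, not defaults):

```json
{
  "mcpServers": {
    "socraticode": {
      "command": "npx",
      "args": ["-y", "socraticode"],
      "env": {
        "QDRANT_PORT": "26333",
        "QDRANT_GRPC_PORT": "26334",
        "OLLAMA_PORT": "21435"
      }
    }
  }
}
```
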
All ports are overridable via environment variables if needed.

## Testing

SocratiCode has a comprehensive test suite with **634 tests** across unit, integration, and end-to-end layers.

### Prerequisites

- **Unit tests**: No external dependencies required.
- **Integration & E2E tests**: Require Docker running with Qdrant and Ollama containers. Containers are managed automatically by the test infrastructure.

### Running Tests

```bash
# Run all tests
npm test

# Run only unit tests (no Docker needed)
npm run test:unit

# Run integration tests (requires Docker)
npm run test:integration

# Run end-to-end tests (requires Docker)
npm run test:e2e

# Watch mode (re-runs on file changes)
npm run test:watch

# With coverage report
npm run test:coverage
```

### Test Architecture

| Layer | Tests | Docker? | Description |
|-------|-------|---------|-------------|
| **Unit** (`tests/unit/`) | 477 | No | Config, constants, ignore rules, cross-process locking, logging, graph analysis, import extraction, path resolution, embedding config, indexer utilities, embeddings, startup lifecycle, watcher cross-process awareness |
| **Integration** (`tests/integration/`) | 137 | Yes | Docker/Ollama setup, Qdrant CRUD, real embeddings, indexer, watcher, code graph, all MCP tools |
| **E2E** (`tests/e2e/`) | 20 | Yes | Complete lifecycle: health → index → search → graph → watch → remove |

Integration and E2E tests that require Docker are automatically skipped when Docker is not available.

## Why Not Just Grep?

Modern evaluations on real repositories show that hybrid lexical + semantic code search consistently outperforms plain grep once you care about natural-language queries, large codebases, or coding agents: reports show ~20% search-quality gains from BM25F ranking at scale, AST-aware retrieval improving recall and bug-fix performance on RepoEval and SWE-bench, and a hybrid approach with grep
(the default in SocratiCode) beating grep in 70% of agentic code-search tasks while cutting search operations by more than half.

### Real-world benchmark: VS Code (2.45M lines of code) with Claude Opus 4.6

We ran a head-to-head comparison against the VS Code codebase (~2.45 million lines of TypeScript/JavaScript across 5,300+ files, 55,437 indexed chunks) to measure what a Claude Opus 4.6 AI agent actually consumes when answering architectural questions.

**Methodology:** For each question, the **grep approach** follows the realistic multi-step workflow an AI agent uses today: `grep -rl` to find matching files, identify core files, read them in chunks (200 lines at a time), and repeat until it has enough context. The **SocratiCode approach** performs a single semantic search call that returns the 10 most relevant code chunks from across the entire codebase.

| Question | Grep (bytes) | SocratiCode (bytes) | Reduction | Speedup |
|:---------|:-------------|:--------------------|:----------|:--------|
| How does VS Code implement workspace trust restrictions? | 56,383 | 21,149 | **62.5%** | **49.7x** |
| How does the diff editor compute and display text differences? | 37,650 | 15,961 | **57.6%** | **40.2x** |
| How does VS Code handle extension activation and lifecycle? | 36,231 | 16,181 | **55.3%** | **34.4x** |
| How does the integrated terminal spawn and manage shells? | 50,159 | 22,518 | **55.1%** | **31.1x** |
| How does VS Code implement the command palette and quick pick? | 70,087 | 20,676 | **70.5%** | **31.7x** |
| **Total** | **250,510** | **96,485** | **61.5%** | **37.2x** |

**Key findings:**

- **84% fewer tool calls** — Grep needed 31 steps across the 5 questions (6-7 per question).
SocratiCode: 5 steps total (1 per question).
- **61.5% less data consumed** — The AI agent processes ~150KB less context, which directly reduces token costs with any LLM.
- **37x faster** — Grep scans across 2.45M lines take 2-3.5 seconds per question; semantic search returns in 60-90ms.

> **Note:** This benchmark is _conservative_ for the grep approach. It assumes the agent already knows which files to read. In practice, a real AI agent needs additional exploratory grep calls, follows dead ends, reads irrelevant files, and often needs multiple rounds of narrowing. The actual savings might be larger.

### When hybrid search wins

**Natural-language and conceptual queries** — Queries like *"Where do we handle database connection pooling?"* or *"How does this library implement exponential backoff?"* describe behavior rather than naming a function. Evaluations on repository-level benchmarks (RepoEval, SWE-bench) show that AST-aware semantic retrieval improves recall by up to 4.3 points and downstream code-generation accuracy by ~2.7 points compared to fixed line-based chunks. Agentic evaluations on real open-source repos show a 70% win rate for hybrid search over vanilla grep on hard, conceptual questions — with 56% fewer search operations and ~60,000 fewer tokens per complex query.

**Large repos and monorepos** — At multi-million LOC scale, full-text scans become expensive. Production search engines report ~20% relevance improvement from BM25F ranking over previous approaches, and use it as the first-stage retriever for semantic reranking. Hybrid search backed by inverted and vector indexes avoids full scans entirely, making it both faster and more precise at scale.
Industry practitioners explicitly note that grep and find "don't scale well to millions of files" and that optimized embedding-based indexes can be faster at that scale.

**Cross-file and cross-language reasoning** — Finding all code paths that eventually call an internal helper across services, or mapping a natural-language spec to implementations in Go and SQL, requires understanding that goes beyond string matching. Evaluations show that hybrid pipelines with tree-sitter parsing and dependency context outperform grep when naming is non-obvious and semantic understanding is needed. AST-based chunking with learned retrievers improves retrieval in cross-language benchmarks, and multi-vector semantic models show large gains over BM25 alone across diverse code search tasks (AppsRetrieval, CodeSearchNet, CosQA) where queries are in natural language and targets span many languages.

**Mixed code + context artifacts** — Questions like *"Where is rate-limiting configured?"* might match Nginx configs, Terraform files, or YAML — not just application code. Hybrid search over mixed technical corpora (structured fields + free text) consistently outperforms pure lexical or pure vector approaches in published evaluations.

### When grep still wins

The same research makes clear when grep (or ripgrep) is entirely reasonable — and sometimes optimal:

- **You know the exact identifier, error string, or regex pattern.** No semantic gap to bridge.
- **The repo is modest in size** — full scans are cheap and fast.
- **The content is structured code with distinctive names**, not prose or documentation.

On easy or directly-named queries, grep can match or beat semantic methods. That's why the best architectures don't replace grep — they extend it.
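
Reciprocal Rank Fusion (RRF), the standard technique for merging a keyword ranking with a semantic ranking, fits in a few lines. This is an illustrative standalone sketch, not SocratiCode's actual code; the function name and the conventional damping constant `k = 60` are assumptions:

```typescript
// Merge two ranked lists of document IDs with Reciprocal Rank Fusion.
// Each list contributes 1 / (k + rank) per document; documents ranked
// well by either retriever (or both) rise to the top of the fused list.
function rrfFuse(
  bm25Ranked: string[],   // IDs ranked by BM25 keyword score
  denseRanked: string[],  // IDs ranked by dense-vector similarity
  k = 60                  // conventional RRF damping constant
): Array<{ id: string; score: number }> {
  const scores = new Map<string, number>();
  for (const list of [bm25Ranked, denseRanked]) {
    list.forEach((id, rank) => {
      // rank is 0-based here, so the contribution is 1 / (k + rank + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

Because only ranks matter, RRF needs no score normalization between the two retrievers, which is exactly why it is a popular fusion choice for hybrid search.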
SocratiCode's hybrid approach runs both BM25 keyword search and dense semantic search on every query, fusing results via RRF, so you get the precision of exact matching and the recall of semantic understanding in a single call.

## FAQ

### Indexing failed with an error — can I resume without starting over?

Yes. Indexing automatically resumes from where it left off. The indexer checkpoints file hashes after every batch of files. When you ask your AI to index again (e.g. *"index this project"*), it detects the existing data, skips every file that was already successfully embedded, and only re-processes the files that weren't checkpointed before the failure. Already-indexed chunks are never deleted or re-embedded. Just ask your AI to index again and it will pick up where it stopped.

### My MCP host disconnects while indexing a large codebase. What should I do?

Indexing runs in the background on the MCP server. However, some MCP hosts (VS Code, Claude Desktop, etc.) disconnect an idle connection after a period of inactivity, which kills the background process. To keep the connection alive, ask your AI to check status (e.g. *"check indexing status"*) roughly every 60 seconds after starting indexing until it completes. If the connection does drop and indexing is interrupted, just ask your AI to index again — it resumes automatically (see above).

### Indexing keeps failing or won't resume properly. What should I do?

If indexing repeatedly fails, throws errors on resume, or gets stuck in a loop, the simplest fix is to start fresh: ask your AI to *"remove the index for this project"*, then ask it to index again. This clears all stored chunks and metadata for the project and begins a clean re-index. It won't affect other indexed projects.

### My codebase is very large — can I pause indexing and resume it later?

Yes. You can stop indexing at any time and resume it later without losing progress:

1. **Ask your AI assistant to stop** — say something like *"stop indexing"* and it will cancel the current operation at the next batch boundary. All batches completed so far are checkpointed and preserved.
2. **Or just close your project/editor** — SocratiCode detects the disconnection and shuts down gracefully, preserving all checkpointed progress.
3. **Come back whenever you want** — reopen the same project in your editor and ask the AI to resume indexing (e.g. *"resume indexing"*). SocratiCode detects the incomplete index automatically, skips every file already embedded, and picks up exactly where it left off.

This makes indexing very large codebases practical even on slower hardware — you can index in multiple sessions across hours or days, and no work is ever repeated or lost.

### I reopened my project but new/changed files aren't showing up in search results.

The file watcher auto-starts on first tool use for any previously indexed project. When it starts, it catches up all files modified while SocratiCode was down before watching for future changes.

If you want to force an immediate catch-up before searching, ask your AI to *"start watching this project"* or *"update the index"* — both run an incremental update synchronously and then start watching.

The watcher will not auto-start if a full index or incremental update is currently in progress, if the project has not been indexed yet, or if another MCP process is already watching the same project.

### Can I index multiple projects at the same time?

Yes. SocratiCode maintains a separate isolated collection for each project path.
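
As an illustration of what per-path isolation means (this is not SocratiCode's actual naming scheme; the function and prefix are hypothetical), a project path can be mapped deterministically to its own collection name, so two projects never share vectors:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: derive a stable, isolated collection name from a
// project path by hashing it. Same path -> same collection; different
// paths -> different collections.
function collectionNameFor(projectPath: string): string {
  const digest = createHash("sha256")
    .update(projectPath)
    .digest("hex")
    .slice(0, 12);
  return `codebase_${digest}`;
}
```
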
Ask your AI to *"list all indexed projects"* to see everything currently indexed.

### What happens if I change my embedding provider or model?

Each collection is created with a fixed vector size matching the model used at index time. If you change `EMBEDDING_PROVIDER`, `EMBEDDING_MODEL`, or `EMBEDDING_DIMENSIONS` in your MCP config, any projects indexed with the old model will return a dimension mismatch error. Ask your AI to *"remove the index for this project"* and then to index again with the new model. Projects you haven't touched are unaffected.

### How do I remove a project's index (e.g. to switch embedding model or reindex from scratch)?

1. **Stop first** — if indexing is in progress, say *"stop indexing this project"*. Removing while indexing is active would corrupt data, so the remove will be refused until the current batch finishes.
2. **Remove** — say *"remove the index for this project"*. This deletes the vector collection, all stored chunk metadata, the code graph, and context artifact metadata for that project only. Other projects are untouched.
3. **Re-index** — update your MCP config with the new parameters if needed, then say *"index this project"* to start fresh.

### What is the code behind Socrates' face in the SocratiCode logo?

The code you see behind Socrates is part of the original Apollo 11 Guidance Computer (AGC) source code for the Command Module (Comanche055)!

## License

SocratiCode is dual-licensed:

- **Open Source** — [AGPL-3.0](LICENSE). Free to use, modify, and distribute.
  If you modify SocratiCode and offer it as a network service, you must release
  your modifications under AGPL-3.0.

- **Commercial** — For organizations that need to use SocratiCode in proprietary
  products or services without AGPL obligations.
  See [LICENSE-COMMERCIAL](LICENSE-COMMERCIAL)
  or contact [giancarlo@altaire.com](mailto:giancarlo@altaire.com).

Copyright (C) 2026 Giancarlo Erra - Altaire Limited.

### Third-Party Licenses

SocratiCode includes open-source dependencies under their own licenses (MIT, Apache 2.0, ISC). See [THIRD-PARTY-LICENSES](THIRD-PARTY-LICENSES) for details.

### Contributing

Contributions are welcome. By submitting a pull request, you agree to the [Contributor License Agreement](CLA.md).