
cognee‑mcp - Run cognee’s memory engine as a Model Context Protocol server
Demo · Learn more · Join Discord · Join r/AIMemory
Build memory for Agents and query from any client that speaks MCP – in your terminal or IDE.
Please refer to our documentation here for further information.
git clone https://github.com/topoteretes/cognee.git
cd cognee/cognee-mcp
pip install uv
uv sync --dev --all-extras --reinstall
source .venv/bin/activate
export LLM_API_KEY="YOUR_OPENAI_API_KEY"
python src/server.py
or stream responses over SSE
python src/server.py --transport sse
or run with Streamable HTTP transport (recommended for web deployments)
python src/server.py --transport http --host 127.0.0.1 --port 8000 --path /mcp
For more advanced configuration, create a .env file from our template. To use different LLM providers or database configurations, and for more information, check out our documentation.
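A minimal .env for the quick start above might contain just the API key (illustrative; the template in the repo lists every supported variable):

```shell
# Minimal illustrative .env -- consult the repo's template for the full
# list of supported variables (LLM provider, database settings, etc.)
LLM_API_KEY="YOUR_OPENAI_API_KEY"
```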
If you'd rather run cognee-mcp in a container, you have two options:
Build locally
Make sure you are in /cognee root directory and have a fresh .env containing only your LLM_API_KEY (and your chosen settings).
Remove any old image and rebuild:
docker rmi cognee/cognee-mcp:main || true
docker build --no-cache -f cognee-mcp/Dockerfile -t cognee/cognee-mcp:main .
Run it:
# For HTTP transport (recommended for web deployments)
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For SSE transport
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# For stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
Installing optional dependencies at runtime:
You can install optional dependencies when running the container by setting the EXTRAS environment variable:
# Install a single optional dependency group at runtime
docker run \
-e TRANSPORT_MODE=http \
-e EXTRAS=aws \
--env-file ./.env \
-p 8000:8000 \
--rm -it cognee/cognee-mcp:main
# Install multiple optional dependency groups at runtime (comma-separated)
docker run \
-e TRANSPORT_MODE=sse \
-e EXTRAS=aws,postgres,neo4j \
--env-file ./.env \
-p 8000:8000 \
--rm -it cognee/cognee-mcp:main
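The comma-separated EXTRAS value maps onto pip/uv optional dependency groups. The container entrypoint performs the installation for you; as a rough illustration of the mapping (not the actual entrypoint code):

```python
def extras_to_spec(extras: str, package: str = "cognee") -> str:
    """Turn a comma-separated EXTRAS value into a pip-style requirement spec.

    Illustrative only: the real container entrypoint may install the
    groups differently.
    """
    groups = [g.strip() for g in extras.split(",") if g.strip()]
    return f"{package}[{','.join(groups)}]" if groups else package

print(extras_to_spec("aws,postgres,neo4j"))  # cognee[aws,postgres,neo4j]
```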
Available optional dependency groups:
- aws - S3 storage support
- postgres / postgres-binary - PostgreSQL database support
- neo4j - Neo4j graph database support
- neptune - AWS Neptune support
- chromadb - ChromaDB vector store support
- scraping - Web scraping capabilities
- distributed - Modal distributed execution
- langchain - LangChain integration
- llama-index - LlamaIndex integration
- anthropic - Anthropic models
- groq - Groq models
- mistral - Mistral models
- ollama / huggingface - Local model support
- docs - Document processing
- codegraph - Code analysis
- monitoring - Sentry & Langfuse monitoring
- redis - Redis support

Pull from Docker Hub (no build required):
# With HTTP transport (recommended for web deployments)
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# With SSE transport
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
# With stdio transport (default)
docker run -e TRANSPORT_MODE=stdio --env-file ./.env --rm -it cognee/cognee-mcp:main
With runtime installation of optional dependencies:
# Install optional dependencies from Docker Hub image
docker run \
-e TRANSPORT_MODE=http \
-e EXTRAS=aws,postgres \
--env-file ./.env \
-p 8000:8000 \
--rm -it cognee/cognee-mcp:main
Docker uses environment variables, not command-line arguments:
- Works: -e TRANSPORT_MODE=http
- Won't work: --transport http

Direct Python usage uses command-line arguments:
- Works: python src/server.py --transport http
- Won't work: -e TRANSPORT_MODE=http

To connect the MCP Docker container to a Cognee API server running on your host machine:
# Start your Cognee API server on the host
python -m cognee.api.client
# Run MCP container in API mode - localhost is automatically converted!
docker run \
-e TRANSPORT_MODE=sse \
-e API_URL=http://localhost:8000 \
-e API_TOKEN=your_auth_token \
-p 8001:8000 \
--rm -it cognee/cognee-mcp:main
Note: The container will automatically convert localhost to host.docker.internal on Mac/Windows/Docker Desktop. You'll see a message in the logs showing the conversion.
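That conversion can be pictured as a URL rewrite along these lines (a sketch, not the container's actual code):

```python
from urllib.parse import urlparse, urlunparse

def rewrite_api_url(url: str) -> str:
    """Rewrite a localhost API_URL to host.docker.internal, keeping
    scheme, port, and path intact (sketch of the container's behavior)."""
    parts = urlparse(url)
    if parts.hostname in ("localhost", "127.0.0.1"):
        netloc = "host.docker.internal"
        if parts.port:
            netloc += f":{parts.port}"
        parts = parts._replace(netloc=netloc)
    return urlunparse(parts)

print(rewrite_api_url("http://localhost:8000"))  # http://host.docker.internal:8000
```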
# Or explicitly use host.docker.internal
docker run \
-e TRANSPORT_MODE=sse \
-e API_URL=http://host.docker.internal:8000 \
-e API_TOKEN=your_auth_token \
-p 8001:8000 \
--rm -it cognee/cognee-mcp:main
# Option 1: Use host network (simplest)
docker run \
--network host \
-e TRANSPORT_MODE=sse \
-e API_URL=http://localhost:8000 \
-e API_TOKEN=your_auth_token \
--rm -it cognee/cognee-mcp:main
# Option 2: Use host IP address
# First, get your host IP: ip addr show docker0
docker run \
-e TRANSPORT_MODE=sse \
-e API_URL=http://172.17.0.1:8000 \
-e API_TOKEN=your_auth_token \
-p 8001:8000 \
--rm -it cognee/cognee-mcp:main
Environment variables for API mode:
- API_URL: URL of the running Cognee API server
- API_TOKEN: Authentication token (optional; required if the API has authentication enabled)

Note: when running in API mode, some tools are unavailable; see the API mode limitations below.
After starting your Cognee MCP server with Docker, you need to configure your MCP client to connect to it.
Start the server with SSE transport:
docker run -e TRANSPORT_MODE=sse --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
Configure your MCP client:
claude mcp add cognee-sse -t sse http://localhost:8000/sse
Verify the connection:
claude mcp list
You should see your server connected:
Checking MCP server health...
cognee-sse: http://localhost:8000/sse (SSE) - ✓ Connected
Claude (~/.claude.json)
{
"mcpServers": {
"cognee": {
"type": "sse",
"url": "http://localhost:8000/sse"
}
}
}
Cursor (~/.cursor/mcp.json)
{
"mcpServers": {
"cognee-sse": {
"url": "http://localhost:8000/sse"
}
}
}
Start the server with HTTP transport:
docker run -e TRANSPORT_MODE=http --env-file ./.env -p 8000:8000 --rm -it cognee/cognee-mcp:main
Configure your MCP client:
claude mcp add cognee-http -t http http://localhost:8000/mcp
Verify the connection:
claude mcp list
You should see your server connected:
Checking MCP server health...
cognee-http: http://localhost:8000/mcp (HTTP) - ✓ Connected
Claude (~/.claude.json)
{
"mcpServers": {
"cognee": {
"type": "http",
"url": "http://localhost:8000/mcp"
}
}
}
Cursor (~/.cursor/mcp.json)
{
"mcpServers": {
"cognee-http": {
"url": "http://localhost:8000/mcp"
}
}
}
You can configure both transports simultaneously for testing:
{
"mcpServers": {
"cognee-sse": {
"type": "sse",
"url": "http://localhost:8000/sse"
},
"cognee-http": {
"type": "http",
"url": "http://localhost:8000/mcp"
}
}
}
Note: Only enable the server you're actually running to avoid connection errors.
The MCP server can operate in two modes:
Direct mode (default): the MCP server directly imports and uses the cognee library, with full feature support.
API mode: the MCP server connects to an already running Cognee FastAPI server via HTTP requests.
Starting the MCP server in API mode:
# Start your Cognee FastAPI server first (default port 8000)
cd /path/to/cognee
python -m cognee.api.client
# Then start the MCP server in API mode
cd cognee-mcp
python src/server.py --api-url http://localhost:8000 --api-token YOUR_AUTH_TOKEN
API Mode with different transports:
# With SSE transport
python src/server.py --transport sse --api-url http://localhost:8000 --api-token YOUR_TOKEN
# With HTTP transport
python src/server.py --transport http --api-url http://localhost:8000 --api-token YOUR_TOKEN
API Mode with Docker:
# On Mac/Windows (use host.docker.internal to access host)
docker run \
-e TRANSPORT_MODE=sse \
-e API_URL=http://host.docker.internal:8000 \
-e API_TOKEN=YOUR_TOKEN \
-p 8001:8000 \
--rm -it cognee/cognee-mcp:main
# On Linux (use host network)
docker run \
--network host \
-e TRANSPORT_MODE=sse \
-e API_URL=http://localhost:8000 \
-e API_TOKEN=YOUR_TOKEN \
--rm -it cognee/cognee-mcp:main
Command-line arguments for API mode:
- --api-url: Base URL of the running Cognee FastAPI server (e.g., http://localhost:8000)
- --api-token: Authentication token for the API (optional; required if the API has authentication enabled)

Docker environment variables for API mode:
- API_URL: Base URL of the running Cognee FastAPI server
- API_TOKEN: Authentication token (optional; required if the API has authentication enabled)

API mode limitations: some features are only available in direct mode:
- codify (code graph pipeline)
- cognify_status / codify_status (pipeline status tracking)
- prune (data reset)
- get_developer_rules (developer rules retrieval)
- list_data with a specific dataset_id (detailed data listing)

Basic operations such as cognify, search, delete, and list_data (all datasets) work in both modes.
The MCP server exposes its functionality through tools. Call them from any MCP client (Cursor, Claude Desktop, Cline, Roo and more).
cognify: Turns your data into a structured knowledge graph and stores it in memory
cognee_add_developer_rules: Ingest core developer rule files into memory
codify: Analyse a code repository, build a code graph, and store it in memory
delete: Delete specific data from a dataset (supports soft/hard deletion modes)
get_developer_rules: Retrieve all developer rules that were generated based on previous interactions
list_data: List all datasets and their data items with IDs for deletion operations
save_interaction: Logs user-agent interactions and query-answer pairs
prune: Reset cognee for a fresh start (removes all data)
search: Query memory – supports GRAPH_COMPLETION, RAG_COMPLETION, CODE, CHUNKS, SUMMARIES, CYPHER, and FEELING_LUCKY
cognify_status / codify_status: Track pipeline progress
Data Management Examples:
# List all available datasets and data items
list_data()
# List data items in a specific dataset
list_data(dataset_id="your-dataset-id-here")
# Delete specific data (soft deletion - safer, preserves shared entities)
delete(data_id="data-uuid", dataset_id="dataset-uuid", mode="soft")
# Delete specific data (hard deletion - removes orphaned entities)
delete(data_id="data-uuid", dataset_id="dataset-uuid", mode="hard")
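The soft/hard distinction matters when entities are shared between data items: soft deletion only detaches the item, while hard deletion also removes entities left orphaned. A toy illustration of that semantics (not cognee's implementation):

```python
def delete_data(graph: dict, data_id: str, mode: str = "soft") -> dict:
    """Toy model: graph maps entity -> set of data items referencing it.

    Soft deletion only detaches the data item; hard deletion also drops
    entities left with no references. Not cognee's implementation.
    """
    out = {entity: refs - {data_id} for entity, refs in graph.items()}
    if mode == "hard":
        out = {entity: refs for entity, refs in out.items() if refs}
    return out

graph = {"Paris": {"doc1", "doc2"}, "Eiffel Tower": {"doc1"}}
print(sorted(delete_data(graph, "doc1", "soft")))  # ['Eiffel Tower', 'Paris']
print(sorted(delete_data(graph, "doc1", "hard")))  # ['Paris']
```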
To use the debugger, run:
mcp dev src/server.py
Open the inspector with a timeout passed as a query parameter:
http://localhost:5173?timeout=120000
To apply new changes while developing cognee you need to do:
uv sync --dev --all-extras --reinstall
mcp dev src/server.py

In order to use local cognee:
Uncomment the following line in the cognee-mcp pyproject.toml file and set the cognee root path.
#"cognee[postgres,codegraph,gemini,huggingface,docs,neo4j] @ file:/Users//Desktop/cognee"
Remember to replace file:/Users//Desktop/cognee with your actual cognee root path.
Install dependencies with uv in the mcp folder
uv sync --reinstall
We are committed to making open source an enjoyable and respectful experience for our community. See CODE_OF_CONDUCT for more information.