Add this skill:

```
npx mdskills install mk-in/mcp-chain
```

Eliminates LLM round-trips by chaining 2–3 tool calls into pipelines with JSONPath-style reference support.
Unix pipes for MCP. Compose 2–3 tool calls into deterministic pipelines — no LLM round-trips between steps.
Every multi-step tool workflow burns an LLM round-trip per step. The agent calls tool A, waits, sends the full context back to the model, gets a decision to call tool B, calls it, sends everything back again. Each round-trip re-transmits 2K–10K tokens on what is essentially plumbing.
For a session with 20 two-step workflows, that's 20 wasted model calls, ~100K wasted tokens, and 20–40 seconds of added latency.
mcp-chain is an MCP server that connects to your other MCP servers and lets agents compose tool calls into short pipelines. One agent decision triggers a deterministic sequence. No LLM in the loop between steps.
```
# Before: 2 LLM round-trips
agent → LLM → web_search → LLM → web_fetch → result

# After: 1 LLM decision
agent → chain([web_search, web_fetch]) → result
```
Hard limit: 3 steps. This is a feature, not a limitation — it keeps error handling trivial and chains readable at a glance.
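A hard cap this small means chain validation can be a single up-front check. A minimal sketch of what that enforcement might look like (`validateChain` and the `Step` shape are illustrative, not the package's actual API):

```typescript
// Hypothetical sketch: the kind of up-front validation a 3-step cap enables.
type Step = { tool: string; params: Record<string, unknown> };

const MAX_STEPS = 3;

function validateChain(steps: Step[]): void {
  if (steps.length === 0) throw new Error("chain must have at least 1 step");
  if (steps.length > MAX_STEPS) {
    throw new Error(`chain has ${steps.length} steps; max is ${MAX_STEPS}`);
  }
}
```

Rejecting over-long chains before any tool is called keeps failure modes small: there are at most two downstream steps that can ever depend on a given output.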
Add to your mcp.json / claude_desktop_config.json:

```json
{
  "mcpServers": {
    "chain": {
      "command": "npx",
      "args": ["-y", "mcp-chain", "--config", "./mcp.json"]
    }
  }
}
```
The chain server reads the same config file it's listed in, connecting to all your other MCP servers automatically.
```js
chain([
  { tool: "web_search", params: { query: "MCP protocol spec" } },
  { tool: "web_fetch", params: { url: "$1.results[0].url", maxChars: 5000 } }
])
```
$1 refers to the output of step 1. Supports full JSONPath: $1.results[0].url, $1.items[*].id, $input.query.
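To make the reference semantics concrete, here is a minimal sketch of how such `$`-references could resolve against stored step outputs. `resolveRef` and `Ctx` are illustrative names, and this sketch handles only dot paths and numeric indices (not slices or wildcards):

```typescript
// Minimal reference resolver sketch (illustrative, not the real implementation).
// steps[0] holds the output of step 1, steps[1] of step 2, and so on.
type Ctx = { input: unknown; steps: unknown[] };

function resolveRef(ref: string, ctx: Ctx): unknown {
  const m = ref.match(/^\$(input|\d+)((?:\.\w+|\[\d+\])*)$/);
  if (!m) throw new Error(`bad reference: ${ref}`);
  // Pick the root: $input or the output of step N.
  let cur: any = m[1] === "input" ? ctx.input : ctx.steps[Number(m[1]) - 1];
  // Walk each .prop or [index] segment in order.
  const parts = m[2].match(/\.\w+|\[\d+\]/g) ?? [];
  for (const p of parts) {
    cur = p.startsWith(".") ? cur[p.slice(1)] : cur[Number(p.slice(1, -1))];
  }
  return cur;
}
```

The key property is that resolution is pure data plumbing: no model call is needed to route step 1's output into step 2's parameters.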
```js
run_chain("research", { query: "MCP protocol spec" })
```
Chains are JSON files in a chains/ directory. Three example chains ship with the package.
```js
chain([
  { tool: "web_search", params: { query: "$input.query", count: 3 } },
  { tool: "web_fetch", parallel: true, foreach: "$1.results[:3]", params: { url: "$item.url" } }
])
```
Fetches the top 3 results simultaneously. Up to 10 concurrent invocations per fan-out step.
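A bounded fan-out like this can be implemented with a small worker pool. A sketch under assumed semantics (at most `limit` calls in flight, results returned in input order; `fanOut` is a hypothetical name):

```typescript
// Bounded-concurrency fan-out sketch: run fn over items with at most
// `limit` in-flight promises, preserving input order in the results.
async function fanOut<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claim the next index synchronously, then await
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Because index claiming happens synchronously before each `await`, two workers never process the same item, and writing into a preallocated array keeps results aligned with the input order regardless of completion order.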
```js
chain([...], { dry_run: true })
```
Validates the chain and returns the execution plan without calling any tools.
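One plausible shape for such a plan is the ordered step list with each step's unresolved references surfaced. A sketch (the `PlanStep` shape and `dryRun` are assumptions, not the server's documented output):

```typescript
// Dry-run sketch: report which params are $-references without calling tools.
type PlanStep = { tool: string; refs: string[] };

function dryRun(
  steps: { tool: string; params: Record<string, unknown> }[],
): PlanStep[] {
  return steps.map((s) => ({
    tool: s.tool,
    // Collect string params that look like references ($input, $1, $item, ...).
    refs: Object.values(s.params).filter(
      (v): v is string => typeof v === "string" && v.startsWith("$"),
    ),
  }));
}
```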
| Expression | Resolves to |
|---|---|
| $input | Input object passed to the chain |
| $input.query | Nested property of input |
| $1 | Full output of step 1 |
| $1.results[0].url | Nested array index + property |
| $1.results[:3] | Array slice (first 3 items) |
| $1.results[*].url | Map: extract url from every item |
| $item | Current item in a foreach fan-out |
| $item.url | Property of current foreach item |
research.json — Search + fetch top result

```json
{
  "name": "research",
  "description": "Search the web and fetch the top result",
  "steps": [
    { "id": "search", "tool": "web_search", "params": { "query": "$input.query" } },
    { "id": "fetch", "tool": "web_fetch", "params": { "url": "$1.results[0].url", "maxChars": 8000 } }
  ]
}
```
deep-research.json — Search + fetch top 3 in parallel

```json
{
  "name": "deep-research",
  "description": "Search and fetch top 3 results in parallel",
  "steps": [
    { "id": "search", "tool": "web_search", "params": { "query": "$input.query", "count": 3 } },
    { "id": "fetch_all", "tool": "web_fetch", "parallel": true, "foreach": "$1.results[:3]",
      "params": { "url": "$item.url", "maxChars": 5000 } }
  ]
}
```
email-to-calendar.json — Gmail → read → create calendar event

```json
{
  "name": "email-to-calendar",
  "description": "Find email about a meeting and create calendar event",
  "steps": [
    { "id": "find", "tool": "gmail_search", "server": "gog", "params": { "query": "$input.query", "limit": 1 } },
    { "id": "read", "tool": "gmail_read", "server": "gog", "params": { "message_id": "$1.messages[0].id" } },
    { "id": "create", "tool": "calendar_create", "server": "gog", "params": { "summary": "$2.subject", "description": "$2.body" } }
  ]
}
```
| Scenario | Without chain | With chain | Token savings |
|---|---|---|---|
| 2-step sequential | 2 LLM calls × 4K ctx | 1 LLM call × 4K ctx | 50% |
| 3-step sequential | 3 × 4K | 1 × 4K | 67% |
| Search + 3× parallel fetch | 4 × 4K | 1 × 4K | 75% |
| 20 research queries/session | 40 × 4K = 160K tokens | 20 × 4K = 80K tokens | 50% |
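The arithmetic behind the table is simple enough to state directly: total tokens are workflows × LLM calls per workflow × context re-sent per call (`sessionTokens` is an illustrative helper, and the ~4K-token context figure is the table's own assumption):

```typescript
// Token math from the table: N workflows, k LLM calls each, ~4K ctx per call.
function sessionTokens(workflows: number, llmCallsPerWorkflow: number, ctx = 4000): number {
  return workflows * llmCallsPerWorkflow * ctx;
}

const without = sessionTokens(20, 2);   // 160_000 tokens
const withChain = sessionTokens(20, 1); // 80_000 tokens
const savings = 1 - withChain / without; // 0.5, i.e. 50%
```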
### CLI flags

```
--chains-dir   Saved chain definitions dir (default: ./chains)
--sse          Use SSE transport instead of stdio
--port         SSE port (default: 8399)
--log-level    debug | info | warn | error (default: info)
--timeout      Default per-step timeout (default: 30)
--max-fanout   Max fan-out concurrency (default: 10)
```
### Environment variables
```
MCP_CHAIN_CONFIG=./mcp.json
MCP_CHAIN_CHAINS_DIR=./chains
MCP_CHAIN_LOG_LEVEL=info
MCP_CHAIN_DEFAULT_TIMEOUT=30
MCP_CHAIN_MAX_FANOUT=10
```
---
## Chain Definition Format
```json
{
"$schema": "https://mcp-chain.dev/schema/chain-v1.json",
"name": "my-chain",
"description": "What this chain does",
"version": "1.0.0",
"input_schema": {
"type": "object",
"properties": { "query": { "type": "string" } },
"required": ["query"]
},
"steps": [
{
"id": "step1",
"tool": "tool_name",
"server": "server_name",
"params": { "key": "$input.query" },
"timeout": 30
},
{
"id": "step2",
"tool": "another_tool",
"params": { "url": "$1.result.url" }
}
]
}
Save chain files to the chains/ directory. Use run_chain("list") to see available chains.
Any step failure terminates the chain immediately and returns:
```json
{
  "error": true,
  "error_type": "step_error",
  "message": "Step 2 (web_fetch) failed: HTTP 404",
  "failed_step": 2,
  "partial_results": { "1": { ... } },
  "_chain_meta": { "steps_executed": 1, "total_duration_ms": 423 }
}
```
Fan-out partial failures return per-item status — the chain continues if at least one item succeeded.
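On the consuming side, the useful property of this envelope is that completed step outputs survive a failure. A sketch of a caller that salvages them (field names are taken from the example above; the types are illustrative):

```typescript
// Sketch of consuming the chain error envelope: on failure, the outputs of
// steps that completed are still available under partial_results.
type ChainResult =
  | {
      error: true;
      failed_step: number;
      message: string;
      partial_results: Record<string, unknown>;
    }
  | { error?: false; [key: string]: unknown };

function recoverableSteps(res: ChainResult): string[] {
  // Keys of partial_results are the 1-based step numbers that succeeded.
  return res.error ? Object.keys(res.partial_results) : [];
}
```

An agent can use those salvaged outputs to retry only the failed step rather than re-running the whole pipeline.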
```
Host Client (Claude Desktop / Cursor / OpenClaw)
  │ MCP Protocol
  ▼
MCP Chain Server  ← you are here
  │ MCP Protocol (as client)
  ├──► MCP Server A (brave-search)
  ├──► MCP Server B (gmail/calendar)
  └──► MCP Server C (filesystem)
```
mcp-chain acts as both an MCP server (to your AI client) and an MCP client (to your other MCP servers). It reads the same mcp.json config file, discovers all connected tools automatically, and resolves tool name ambiguity when the same tool exists on multiple servers.
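The ambiguity-resolution rule implied here is worth pinning down: an unqualified tool name works only when exactly one connected server exposes it; otherwise the step must name a server (as the email-to-calendar example does with "server": "gog"). A sketch under that assumption (`resolveTool` and the registry shape are hypothetical):

```typescript
// Tool-name disambiguation sketch: map each tool name to the servers that
// expose it; unqualified names resolve only when unambiguous.
function resolveTool(
  registry: Map<string, string[]>, // tool name -> servers exposing it
  tool: string,
  server?: string,
): string {
  const servers = registry.get(tool) ?? [];
  if (servers.length === 0) throw new Error(`unknown tool: ${tool}`);
  if (server) {
    if (!servers.includes(server)) throw new Error(`${tool} not on ${server}`);
    return server;
  }
  if (servers.length > 1) {
    throw new Error(`ambiguous tool ${tool}; specify a server`);
  }
  return servers[0];
}
```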
Zero AI/LLM dependencies. Pure TypeScript plumbing.
Issues and PRs welcome. The 3-step limit is intentional — please don't open issues requesting 5+ steps.
MIT
MCP Chain works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue, Gemini CLI, Amp, Roo Code, and Goose. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.