Add this skill:

```shell
npx mdskills install spyrae/claude-concilium
```
Multi-agent AI consultation framework for Claude Code via MCP.
Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.
```
Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ──► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate
```
Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.
Concilium runs parallel consultations with multiple LLMs through the standard MCP protocol. Each LLM server wraps a CLI tool, so no API keys are needed for the primary providers (they authenticate via OAuth).
Key features:

- Simple setup: `npm install`, add the servers to `.mcp.json`

```
┌─────────────────────────────────────────────────────────┐
│                      Claude Code                        │
│                                                         │
│       "Review this code for race conditions"            │
│                                                         │
│   ┌──────────────┐        ┌──────────────┐              │
│   │ MCP Call #1  │        │ MCP Call #2  │  (parallel)  │
│   └──────┬───────┘        └──────┬───────┘              │
│          │                       │                      │
└──────────┼───────────────────────┼──────────────────────┘
           │                       │
           ▼                       ▼
   ┌──────────────┐        ┌──────────────┐
   │  mcp-openai  │        │  mcp-gemini  │   Primary agents
   │ (codex exec) │        │ (gemini -p)  │
   └──────┬───────┘        └──────┬───────┘
          │                       │
          ▼                       ▼
   ┌──────────────┐        ┌──────────────┐
   │    OpenAI    │        │    Google    │   LLM providers
   │   (OAuth)    │        │   (OAuth)    │
   └──────────────┘        └──────────────┘
```
Fallback chain (on quota/error):

```
OpenAI → Qwen → DeepSeek
Gemini → Qwen → DeepSeek
```
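The chains above amount to trying providers in order and moving on when a call fails with a quota error or timeout. A hedged sketch of that logic (illustrative only; `provider.ask` is an assumed interface, not the repo's API):

```javascript
// Illustrative fallback chain: try each provider in order until one succeeds.
async function withFallback(providers, prompt) {
  const errors = [];
  for (const provider of providers) {
    try {
      return { provider: provider.name, answer: await provider.ask(prompt) };
    } catch (err) {
      // e.g. QUOTA_EXCEEDED or a timeout; record it and try the next provider.
      errors.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```

Calling `withFallback([openai, qwen, deepseek], prompt)` mirrors the OpenAI → Qwen → DeepSeek chain.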
```shell
git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs
```
Expected output:

```
PASS mcp-openai (Tools: openai_chat, openai_review)
PASS mcp-gemini (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen (Tools: qwen_chat)
All tests passed.
```
Pick at least 2 providers:
| Provider | Auth | Free Tier | Setup |
|---|---|---|---|
| OpenAI | codex login (OAuth) | ChatGPT Plus weekly credits | Setup guide |
| Gemini | Google OAuth | 1000 req/day | Setup guide |
| Qwen | qwen login or API key | Varies | Setup guide |
| DeepSeek | API key | Pay-per-use (cheap) | Setup guide |
Copy `config/mcp.json.example` and update the paths:

```shell
# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Update "/path/to/claude-concilium" with the actual path
```
Or add servers individually to your existing .mcp.json:
```json
{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}
```
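With `"type": "stdio"`, Claude Code launches each server and speaks JSON-RPC 2.0 over the process's stdin/stdout, starting with an `initialize` handshake. A simplified sketch of that first message (the payload shape follows the MCP specification; the `protocolVersion` string and `clientInfo` values here are assumptions for illustration):

```javascript
// Minimal JSON-RPC 2.0 initialize request: the first message an MCP client
// writes to a stdio server's stdin. Fields simplified for illustration.
const initialize = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05", // an MCP spec revision; check the spec for the current one
    capabilities: {},
    clientInfo: { name: "manual-test", version: "0.0.1" },
  },
};

// Newline-delimited JSON over stdout is how stdio transports frame messages.
process.stdout.write(JSON.stringify(initialize) + "\n");
```

This is also a quick way to sanity-check a server by hand: pipe the request into `node servers/mcp-openai/server.js` and confirm a JSON-RPC response comes back.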
Copy the Concilium skill to your Claude Code commands:

```shell
cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md
```

Now use `/ai-concilium` in Claude Code to trigger a multi-agent consultation.
Each server can be used independently — you don't need all of them.
| Server | CLI Tool | Auth | Tools |
|---|---|---|---|
| mcp-openai | codex | OAuth (ChatGPT Plus) | openai_chat, openai_review |
| mcp-gemini | gemini | Google OAuth | gemini_chat, gemini_analyze |
| mcp-qwen | qwen | API key / CLI login | qwen_chat |
DeepSeek uses the existing `deepseek-mcp-server` npm package, so no custom server is needed.
All servers detect provider-specific errors and return structured responses:

| Error Type | Meaning | Action |
|---|---|---|
| `QUOTA_EXCEEDED` | Rate/credit limit hit | Use fallback provider |
| `AUTH_EXPIRED` / `AUTH_REQUIRED` | Token needs refresh | Re-authenticate CLI |
| `MODEL_NOT_SUPPORTED` | Model unavailable on plan | Use default model |
| Timeout | Process hung | Auto-killed, use fallback |
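Detecting these error types typically means pattern-matching the wrapped CLI's stderr. A hedged sketch of such a classifier (the regexes are illustrative guesses, not the repo's actual patterns):

```javascript
// Illustrative classifier mapping raw CLI stderr to the error types above.
// Order matters: quota messages are checked before generic auth keywords.
function classifyError(stderr) {
  if (/quota|rate limit|credit/i.test(stderr)) return "QUOTA_EXCEEDED";
  if (/auth|token|login/i.test(stderr)) return "AUTH_EXPIRED";
  if (/model.*(unsupported|not (found|available))/i.test(stderr)) return "MODEL_NOT_SUPPORTED";
  return "UNKNOWN";
}
```

Returning a stable error type instead of raw stderr is what lets the skill decide mechanically whether to fall back, re-authenticate, or retry with the default model.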
```
Primary:    OpenAI ───────────────► Response
              │ (QUOTA_EXCEEDED?)
              ▼
Fallback 1: Qwen ─────────────────► Response
              │ (timeout?)
              ▼
Fallback 2: DeepSeek ─────────────► Response (always available)
```
| Scenario | Recommended Agents |
|---|---|
| Code review | OpenAI + Gemini (parallel) |
| Architecture decision | OpenAI + Gemini → iterate if disagree |
| Stuck bug (3+ attempts) | All available agents |
| Performance optimization | Gemini (1M context) + OpenAI |
| Security review | OpenAI + Gemini + manual verification |
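The "parallel" scenarios above come down to firing the consultations concurrently and collecting every opinion, including failures. A sketch under the same assumed `agent.ask` interface as before (not the repo's actual code):

```javascript
// Illustrative parallel consultation: ask all agents at once and keep
// partial results even when one agent fails (quota, timeout, ...).
async function parallelReview(agents, prompt) {
  const results = await Promise.allSettled(agents.map((a) => a.ask(prompt)));
  return results.map((r, i) => ({
    agent: agents[i].name,
    ok: r.status === "fulfilled",
    opinion: r.status === "fulfilled" ? r.value : r.reason.message,
  }));
}
```

`Promise.allSettled` (rather than `Promise.all`) is the key choice: one provider hitting its quota should not discard the other provider's opinion.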
See `docs/customization.md` for customization options.
License: MIT
Claude Concilium works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Codex, Gemini CLI, Amp, Roo Code, Goose, OpenCode, Trae, Qodo, Command Code, and ChatGPT. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.