# Claude Concilium

**Multi-agent AI consultation framework for Claude Code via MCP.**

Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.

```
Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ─► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate
```

## The Problem

Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.

## The Solution

Concilium runs parallel consultations with multiple LLMs through the standard [MCP protocol](https://modelcontextprotocol.io/). Each LLM server wraps a CLI tool — no API keys needed for the primary providers (they use OAuth).

**Key features:**
- Parallel consultation with 2+ AI agents
- Production-grade fallback chains with error detection
- Each MCP server works standalone or as part of Concilium
- Plug & play: clone, `npm install`, add to `.mcp.json`

## Architecture

```
┌─────────────────────────────────────────────────────────┐
│                      Claude Code                        │
│                                                         │
│   "Review this code for race conditions"                │
│                                                         │
│   ┌──────────────┐        ┌──────────────┐              │
│   │  MCP Call #1 │        │  MCP Call #2 │  (parallel)  │
│   └──────┬───────┘        └──────┬───────┘              │
│          │                       │                      │
└──────────┼───────────────────────┼──────────────────────┘
           │                       │
           ▼                       ▼
   ┌──────────────┐        ┌──────────────┐
   │  mcp-openai  │        │  mcp-gemini  │   Primary agents
   │ (codex exec) │        │ (gemini -p)  │
   └──────┬───────┘        └──────┬───────┘
          │                       │
          ▼                       ▼
   ┌──────────────┐        ┌──────────────┐
   │    OpenAI    │        │    Google    │   LLM providers
   │   (OAuth)    │        │   (OAuth)    │
   └──────────────┘        └──────────────┘

   Fallback chain (on quota/error):
   OpenAI → Qwen → DeepSeek
   Gemini → Qwen → DeepSeek
```
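The fallback chain at the bottom of the diagram is, at its core, try-providers-in-order logic. A minimal sketch in Node, with stub providers standing in for the real MCP servers — the `QuotaError` class, the provider objects, and `consultWithFallback` are illustrative, not Concilium's actual code:

```javascript
// Hedged sketch of the fallback behavior described above — not Concilium's API.
// Real providers would shell out to CLI tools like `codex exec` or `gemini -p`.
class QuotaError extends Error {}

async function consultWithFallback(prompt, providers) {
  const failures = [];
  for (const provider of providers) {
    try {
      // First provider that answers wins; quota errors and timeouts fall through.
      return { provider: provider.name, answer: await provider.ask(prompt) };
    } catch (err) {
      failures.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`All providers failed:\n${failures.join("\n")}`);
}

// Stubbed chain mirroring OpenAI → Qwen → DeepSeek
const chain = [
  { name: "openai",   ask: async () => { throw new QuotaError("QUOTA_EXCEEDED"); } },
  { name: "qwen",     ask: async (p) => `qwen's answer to: ${p}` },
  { name: "deepseek", ask: async (p) => `deepseek's answer to: ${p}` },
];

consultWithFallback("Review this diff for race conditions", chain)
  .then((r) => console.log(r.provider)); // → qwen
```

The real servers additionally classify errors (see Error Detection below) so only recoverable failures trigger the next provider.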
## Quickstart

### 1. Clone and install

```bash
git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs
```

Expected output:
```
PASS mcp-openai (Tools: openai_chat, openai_review)
PASS mcp-gemini (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen (Tools: qwen_chat)
All tests passed.
```

### 2. Set up providers

Pick at least 2 providers:

| Provider | Auth | Free Tier | Setup |
|----------|------|-----------|-------|
| **OpenAI** | `codex login` (OAuth) | ChatGPT Plus weekly credits | [Setup guide](docs/setup-openai.md) |
| **Gemini** | Google OAuth | 1000 req/day | [Setup guide](docs/setup-gemini.md) |
| **Qwen** | `qwen login` or API key | Varies | [Setup guide](docs/setup-qwen.md) |
| **DeepSeek** | API key | Pay-per-use (cheap) | [Setup guide](docs/setup-deepseek.md) |

### 3. Add to Claude Code

Copy `config/mcp.json.example` and update paths:

```bash
# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Replace "/path/to/claude-concilium" with the actual path
```

Or add servers individually to your existing `.mcp.json`:

```json
{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}
```
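If you script the merge instead of editing by hand, the only invariant is that new entries go under `mcpServers` without clobbering servers that are already configured. A hypothetical helper (`addServer` is not part of Concilium) showing that merge; in practice you would read and write `.mcp.json` with `fs`:

```javascript
// Hypothetical helper — not part of Concilium. Merges one server entry into an
// existing .mcp.json document, preserving servers that are already configured.
function addServer(configText, name, entry) {
  const config = JSON.parse(configText || "{}");
  config.mcpServers = { ...(config.mcpServers || {}), [name]: entry };
  return JSON.stringify(config, null, 2);
}

// Example: add mcp-gemini alongside an existing mcp-openai entry
const existing = JSON.stringify({
  mcpServers: { "mcp-openai": { type: "stdio", command: "node" } },
});
const merged = addServer(existing, "mcp-gemini", {
  type: "stdio",
  command: "node",
  args: ["/absolute/path/to/servers/mcp-gemini/server.js"],
});
console.log(Object.keys(JSON.parse(merged).mcpServers)); // both entries present
```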
### 4. Install the skill (optional)

Copy the Concilium skill to your Claude Code commands:

```bash
cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md
```

Now use `/ai-concilium` in Claude Code to trigger a multi-agent consultation.

## MCP Servers

Each server can be used independently — you don't need all of them.

| Server | CLI Tool | Auth | Tools |
|--------|----------|------|-------|
| [mcp-openai](servers/mcp-openai/) | `codex` | OAuth (ChatGPT Plus) | `openai_chat`, `openai_review` |
| [mcp-gemini](servers/mcp-gemini/) | `gemini` | Google OAuth | `gemini_chat`, `gemini_analyze` |
| [mcp-qwen](servers/mcp-qwen/) | `qwen` | API key / CLI login | `qwen_chat` |

**DeepSeek** uses the existing [`deepseek-mcp-server`](https://www.npmjs.com/package/deepseek-mcp-server) npm package — no custom server needed.

## How It Works

### Consultation Flow

1. **Formulate** — describe the problem concisely (under 500 chars)
2. **Send in parallel** — OpenAI + Gemini get the same prompt
3. **Handle errors** — if a provider fails, the fallback chain kicks in (Qwen → DeepSeek)
4. **Synthesize** — compare responses, find consensus
5. **Iterate** (optional) — resolve disagreements with follow-up questions
6. **Decide** — apply the synthesized solution

### Error Detection

All servers detect provider-specific errors and return structured responses:

| Error Type | Meaning | Action |
|------------|---------|--------|
| `QUOTA_EXCEEDED` | Rate/credit limit hit | Use fallback provider |
| `AUTH_EXPIRED` / `AUTH_REQUIRED` | Token needs refresh | Re-authenticate CLI |
| `MODEL_NOT_SUPPORTED` | Model unavailable on plan | Use default model |
| Timeout | Process hung | Auto-killed, use fallback |

### Fallback Chain

```
Primary:    OpenAI ──────────────► Response
            (QUOTA_EXCEEDED?)
                   │
Fallback 1: Qwen ──┴─────────────► Response
            (timeout?)
                   │
Fallback 2: DeepSeek ────────────► Response (always available)
```

## When to Use Concilium

| Scenario | Recommended Agents |
|----------|-------------------|
| Code review | OpenAI + Gemini (parallel) |
| Architecture decision | OpenAI + Gemini → iterate if disagree |
| Stuck bug (3+ attempts) | All available agents |
| Performance optimization | Gemini (1M context) + OpenAI |
| Security review | OpenAI + Gemini + manual verification |

## Customization

See [docs/customization.md](docs/customization.md) for:
- Adding your own LLM provider
- Modifying the fallback chain
- MCP server template
- Custom prompt strategies

## Documentation

- [Architecture](docs/architecture.md) — flow diagrams, error handling, design decisions
- [OpenAI Setup](docs/setup-openai.md) — Codex CLI, ChatGPT Plus, minimal config
- [Gemini Setup](docs/setup-gemini.md) — gemini-cli, Google OAuth
- [Qwen Setup](docs/setup-qwen.md) — Qwen CLI, DashScope
- [DeepSeek Setup](docs/setup-deepseek.md) — API key, npm package
- [Customization](docs/customization.md) — add your own LLM, modify chains

## License

MIT