# Multi-MCP: Multi-Model Code Review and Analysis MCP Server for Claude Code

<!-- mcp-name: io.github.religa/multi-mcp -->

[](https://github.com/religa/multi_mcp/actions)
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)

A **multi-model AI orchestration MCP server** for **automated code review** and **LLM-powered analysis**. Multi-MCP integrates with **Claude Code CLI** to orchestrate multiple AI models (OpenAI GPT, Anthropic Claude, Google Gemini) for **code quality checks**, **security analysis** (OWASP Top 10), and **multi-agent consensus**. Built on the **Model Context Protocol (MCP)**, this tool enables Python developers and DevOps teams to automate code reviews with AI-powered insights directly in their development workflow.

## Features

- **🔍 Code Review** - Systematic workflow with OWASP Top 10 security checks and performance analysis
- **💬 Chat** - Interactive development assistance with repository context awareness
- **📊 Compare** - Parallel multi-model analysis for architectural decisions
- **🎭 Debate** - Multi-agent consensus workflow (independent answers + critique)
- **🤖 Multi-Model Support** - OpenAI GPT, Anthropic Claude, Google Gemini, and OpenRouter
- **🖥️ CLI & API Models** - Mix CLI-based (Gemini CLI, Codex CLI) and API models
- **🏷️ Model Aliases** - Use short names like `mini`, `sonnet`, `gemini`
- **🧵 Threading** - Maintain context across multi-step reviews

## How It Works

Multi-MCP acts as an **MCP server** that Claude Code connects to, providing AI-powered code analysis tools:

1. **Install** the MCP server and configure your AI model API keys
2. **Integrate** with Claude Code CLI automatically via `make install`
3. **Invoke** tools using natural language (e.g., "multi codereview this file")
4. **Get Results** from multiple AI models orchestrated in parallel
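The fan-out behind step 4 is plain `asyncio` concurrency: each model is queried as a coroutine and the responses are gathered together. A minimal, self-contained sketch — `query_model` and the latencies are illustrative stand-ins, not the actual `multi_mcp` API:

```python
import asyncio
import time

# Illustrative stand-in for a real provider call (OpenAI/Anthropic/Gemini);
# this is NOT multi_mcp's internal API.
async def query_model(name: str, prompt: str, latency: float) -> str:
    await asyncio.sleep(latency)  # simulate network/API latency
    return f"{name}: review of {prompt!r}"

async def orchestrate(prompt: str) -> list[str]:
    # All models are started together, so total wall time tracks the
    # slowest call rather than the sum of all calls.
    return await asyncio.gather(
        query_model("gpt-5-mini", prompt, 0.10),
        query_model("claude-sonnet", prompt, 0.15),
        query_model("gemini-flash", prompt, 0.05),
    )

start = time.perf_counter()
results = asyncio.run(orchestrate("src/server.py"))
elapsed = time.perf_counter() - start
print(len(results), f"{elapsed:.2f}s")  # 3 responses in roughly the slowest model's time
```

This max-instead-of-sum behavior is what the performance numbers in the next section rely on.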
## Performance

**Fast Multi-Model Analysis:**

- ⚡ **Parallel Execution** - 3 models in ~10s (vs ~30s sequential)
- 🚀 **Async Architecture** - Non-blocking Python asyncio
- 💾 **Conversation Threading** - Maintains context across multi-step reviews
- 📉 **Low Latency** - Response time = slowest model, not sum of all models

## Quick Start

**Prerequisites:**

- Python 3.11+
- API key for at least one provider (OpenAI, Anthropic, Google, or OpenRouter)

### Installation

<!-- Claude Code Plugin - Coming Soon
#### Option 1: Claude Code Plugin (Recommended)

```bash
# Add the marketplace
/plugin marketplace add religa/multi_mcp

# Install the plugin
/plugin install multi-mcp@multi_mcp
```

Then configure API keys in `~/.multi_mcp/.env` (see [Configuration](#configuration)).
-->

#### Option 1: From Source

```bash
# Clone and install
git clone https://github.com/religa/multi_mcp.git
cd multi_mcp
# Runs ./scripts/install.sh
make install

# The installer will:
# 1. Install dependencies (uv sync)
# 2. Generate your .env file
# 3. Automatically add to Claude Code config (requires jq)
# 4. Test the installation
```

#### Option 2: Manual Configuration

If you prefer not to run `make install`:

```bash
# Install dependencies
uv sync

# Copy and configure .env
cp .env.example .env
# Edit .env with your API keys
```

Add to Claude Code (`~/.claude.json`), replacing `/path/to/multi_mcp` with your actual clone path:

```json
{
  "mcpServers": {
    "multi": {
      "type": "stdio",
      "command": "/path/to/multi_mcp/.venv/bin/python",
      "args": ["-m", "multi_mcp.server"]
    }
  }
}
```

## Configuration

### Environment Configuration (API Keys & Settings)

Multi-MCP loads settings from `.env` files in this order (highest priority first):

1. **Environment variables** (already set in shell)
2. **Project `.env`** (current directory or project root)
3. **User `.env`** (`~/.multi_mcp/.env`) - fallback for pip installs
Edit `.env` with your API keys:

```bash
# API Keys (configure at least one)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-...

# Azure OpenAI (optional)
AZURE_API_KEY=...
AZURE_API_BASE=https://your-resource.openai.azure.com/

# AWS Bedrock (optional)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1

# Model Configuration
DEFAULT_MODEL=gpt-5-mini
DEFAULT_MODEL_LIST=gpt-5-mini,gemini-3-flash
```

### Model Configuration (Adding Custom Models)

Models are defined in YAML configuration files (user config wins):

1. **Package defaults**: `multi_mcp/config/config.yaml` (bundled with package)
2. **User overrides**: `~/.multi_mcp/config.yaml` (optional, takes precedence)

To add your own models, create `~/.multi_mcp/config.yaml` (see [`config.yaml`](multi_mcp/config/config.yaml) and [`config.override.example.yaml`](multi_mcp/config/config.override.example.yaml) for examples):

```yaml
version: "1.0"

models:
  # Add a new API model
  my-custom-gpt:
    litellm_model: openai/gpt-4o
    aliases:
      - custom
    notes: "My custom GPT-4o configuration"

  # Add a custom CLI model
  my-local-llm:
    provider: cli
    cli_command: ollama
    cli_args:
      - "run"
      - "llama3.2"
    cli_parser: text
    aliases:
      - local
    notes: "Local LLaMA via Ollama"

  # Override an existing model's settings
  gpt-5-mini:
    constraints:
      temperature: 0.5  # Override default temperature
```

**Merge behavior:**

- New models are added alongside package defaults
- Existing models are merged (your settings override package defaults)
- Aliases can be "stolen" from package models to your custom models
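Those merge rules can be pictured as a small dictionary merge. The sketch below is illustrative only — `merge_configs` is a hypothetical helper, not multi_mcp's loader:

```python
def merge_configs(defaults: dict, overrides: dict) -> dict:
    """Merge user model config over package defaults.

    New models are added, per-model settings override defaults, and an
    alias claimed by a user model is removed from the package model.
    """
    merged = {name: dict(cfg) for name, cfg in defaults.items()}
    for name, cfg in overrides.items():
        base = merged.setdefault(name, {})
        for key, value in cfg.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                base[key] = {**base[key], **value}  # shallow nested merge
            else:
                base[key] = value
    # "Steal" aliases: an alias claimed in the user config wins.
    claimed = {a for cfg in overrides.values() for a in cfg.get("aliases", [])}
    for name, cfg in merged.items():
        own = set(overrides.get(name, {}).get("aliases", []))
        cfg["aliases"] = [a for a in cfg.get("aliases", [])
                          if a not in claimed or a in own]
    return merged

defaults = {"gpt-5-mini": {"aliases": ["mini"], "constraints": {"temperature": 1.0}}}
overrides = {
    "gpt-5-mini": {"constraints": {"temperature": 0.5}},
    "my-custom-gpt": {"aliases": ["mini", "custom"]},
}
merged = merge_configs(defaults, overrides)
print(merged["gpt-5-mini"]["constraints"]["temperature"])  # 0.5 (user override wins)
print(merged["gpt-5-mini"]["aliases"])                     # [] ("mini" stolen by my-custom-gpt)
```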
## Usage Examples

Once installed in Claude Code, you can use these commands:

**💬 Chat** - Interactive development assistance:
```
Can you ask Multi chat what's the answer to life, the universe, and everything?
```

**🔍 Code Review** - Analyze code with specific models:
```
Can you multi codereview this module for code quality and maintainability using gemini-3 and codex?
```

**📊 Compare** - Get multiple perspectives (uses default models):
```
Can you multi compare the best state management approach for this React app?
```

**🎭 Debate** - Deep analysis with critique:
```
Can you multi debate the best project code name for this project?
```

## Enabling Allowlist

To let Claude Code call Multi-MCP tools without prompting for permission each time, edit `~/.claude/settings.json` and add the following entries to `permissions.allow`:

```json
{
  "permissions": {
    "allow": [
      ...
      "mcp__multi__chat",
      "mcp__multi__codereview",
      "mcp__multi__compare",
      "mcp__multi__debate",
      "mcp__multi__models"
    ]
  },
  "env": {
    "MCP_TIMEOUT": "300000",
    "MCP_TOOL_TIMEOUT": "300000"
  }
}
```

## Model Aliases

Use short aliases instead of full model names:

| Alias | Model | Provider |
|-------|-------|----------|
| `mini` | gpt-5-mini | OpenAI |
| `nano` | gpt-5-nano | OpenAI |
| `gpt` | gpt-5.2 | OpenAI |
| `codex` | gpt-5.1-codex | OpenAI |
| `sonnet` | claude-sonnet-4.6 | Anthropic |
| `haiku` | claude-haiku-4.5 | Anthropic |
| `opus` | claude-opus-4.6 | Anthropic |
| `gemini` | gemini-3.1-pro-preview | Google |
| `gemini-3` | gemini-3.1-pro-preview | Google |
| `flash` | gemini-3-flash | Google |
| `azure-mini` | azure-gpt-5-mini | Azure |
| `bedrock-sonnet` | bedrock-claude-4-5-sonnet | AWS |

Run `multi:models` to see all available models and aliases.

## CLI Models

Multi-MCP can execute **CLI-based AI models** (like Gemini CLI, Codex CLI, or Claude CLI) alongside API models.
CLI models run as subprocesses and work seamlessly with all existing tools.

**Benefits:**

- Use models with full tool access (file operations, shell commands)
- Mix API and CLI models in `compare` and `debate` workflows
- Leverage local CLIs without API overhead

**Built-in CLI Models:**

- `gemini-cli` (alias: `gem-cli`) - Gemini CLI with auto-edit mode
- `codex-cli` (alias: `cx-cli`) - Codex CLI with full-auto mode
- `claude-cli` (alias: `cl-cli`) - Claude CLI with acceptEdits mode

**Adding Custom CLI Models:**

Add to `~/.multi_mcp/config.yaml` (see [Model Configuration](#model-configuration-adding-custom-models)):

```yaml
version: "1.0"

models:
  my-ollama:
    provider: cli
    cli_command: ollama
    cli_args:
      - "run"
      - "codellama"
    cli_parser: text  # "json", "jsonl", or "text"
    aliases:
      - ollama
    notes: "Local CodeLlama via Ollama"
```

**Prerequisites:**

CLI models require the respective CLI tools to be installed:

```bash
# Gemini CLI
npm install -g @google/gemini-cli

# Codex CLI
npm install -g @openai/codex

# Claude CLI
npm install -g @anthropic-ai/claude-code
```

## CLI Usage (Experimental)

Multi-MCP includes a standalone CLI for code review without needing an MCP client.

⚠️ **Note:** The CLI is experimental and under active development.

```bash
# Review a directory
multi src/

# Review specific files
multi src/server.py src/config.py

# Use a different model
multi --model mini src/

# JSON output for CI/pipelines
multi --json src/ > results.json

# Verbose logging
multi -v src/

# Specify project root (for CLAUDE.md loading)
multi --base-path /path/to/project src/
```
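A CLI model entry (`cli_command`, `cli_args`, `cli_parser`) ultimately means spawning a subprocess, feeding it the prompt, and parsing its stdout. A rough sketch of that pattern — the JSON shape is an assumption, `cat` stands in for a real model CLI, and none of this is multi_mcp's actual runner:

```python
import asyncio
import json

async def run_cli_model(command: str, args: list[str], prompt: str,
                        parser: str = "text") -> str:
    # Spawn the CLI, send the prompt on stdin, collect stdout/stderr.
    proc = await asyncio.create_subprocess_exec(
        command, *args,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate(prompt.encode())
    if proc.returncode != 0:
        raise RuntimeError(f"{command} failed: {stderr.decode().strip()}")
    text = stdout.decode()
    if parser == "json":
        # Assumed response shape; real CLIs differ.
        return json.loads(text)["response"]
    return text.strip()  # "text" parser: raw stdout

# Demo with a universally available command instead of a real model CLI:
reply = asyncio.run(run_cli_model("cat", [], "hello from multi-mcp"))
print(reply)  # hello from multi-mcp
```

Because the runner is an `async` coroutine, CLI models can be awaited alongside API calls in the same `asyncio.gather()` fan-out, which is what lets `compare` and `debate` mix the two kinds of model.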
## Why Multi-MCP?

| Feature | Multi-MCP | Single-Model Tools |
|---------|-----------|--------------------|
| Parallel model execution | ✅ | ❌ |
| Multi-model consensus | ✅ | Varies |
| Model debates | ✅ | ❌ |
| CLI + API model support | ✅ | ❌ |
| OWASP security analysis | ✅ | Varies |

## Troubleshooting

**"No API key found"**

- Add at least one API key to your `.env` file
- Verify it's loaded: `uv run python -c "from multi_mcp.settings import settings; print(settings.openai_api_key)"`

**Integration tests fail**

- Set the `RUN_E2E=1` environment variable
- Verify API keys are valid and have sufficient credits

**Debug mode:**

```bash
export LOG_LEVEL=DEBUG  # INFO is the default
uv run python -m multi_mcp.server
```

Check logs in `logs/server.log` for detailed information.

## FAQ

**Q: Do I need all three AI providers?**
A: No, a single API key (OpenAI, Anthropic, or Google) is enough to get started.

**Q: Does it truly run in parallel?**
A: Yes. When you use the `codereview`, `compare`, or `debate` tools, all models are executed concurrently with Python's `asyncio.gather()`, so you get responses from multiple models in the time it takes the slowest model to respond, not the sum of all response times.

**Q: How many models can I run at the same time?**
A: There is no hard limit. In practice, 2-5 models work well for most use cases. All tools use your configured default models (typically 2-3), but you can specify any number.

## Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for:

- Development setup
- Code standards
- Testing guidelines
- Pull request process

**Quick start:**

```bash
git clone https://github.com/YOUR_USERNAME/multi_mcp.git
cd multi_mcp
uv sync --extra dev
make check && make test
```

## License

MIT License - see the LICENSE file for details.

## Links

- [Issue Tracker](https://github.com/religa/multi_mcp/issues)
- [Contributing Guide](CONTRIBUTING.md)