# Codex MCP Tool

<div align="center">

[Releases](https://github.com/x51xxx/codex-mcp-tool/releases) · [npm](https://www.npmjs.com/package/@trishchuk/codex-mcp-tool) · [MIT License](https://opensource.org/licenses/MIT)

</div>

MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via `@` file references, multi-turn conversations, sandboxed edits, and structured change mode.

## Features

- **File Analysis** — Reference files with `@src/`, `@package.json` syntax
- **Multi-Turn Sessions** — Conversation continuity with workspace isolation
- **Native Resume** — Uses `codex resume` for context preservation (CLI v0.36.0+)
- **Local OSS Models** — Run with Ollama or LM Studio via `localProvider`
- **Web Search** — Research capabilities with `search: true`
- **Sandbox Mode** — Safe code execution with `--full-auto`
- **Change Mode** — Structured OLD/NEW patch output for refactoring
- **Brainstorming** — SCAMPER, design-thinking, and lateral-thinking frameworks
- **Health Diagnostics** — CLI version, feature, and session monitoring
- **Cross-Platform** — Windows, macOS, and Linux fully supported

## Quick Start

```bash
claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
```

**Prerequisites:** Node.js 18+, [Codex CLI](https://github.com/openai/codex) installed and authenticated.

### Configuration

```json
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"]
    }
  }
}
```

**Config locations:** macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` | Windows: `%APPDATA%\Claude\claude_desktop_config.json`

## Usage Examples

```javascript
// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';

// With a specific model
'use codex with model gpt-5.3-codex to analyze @algorithm.py';

// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';

// Brainstorming
'brainstorm ways to optimize CI/CD using the SCAMPER method';

// Sandbox mode
'use codex sandbox:true to create and run a Python script';

// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';

// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';
```

## Tools

| Tool            | Description                                            |
| --------------- | ------------------------------------------------------ |
| `ask-codex`     | Execute Codex CLI with file analysis, models, sessions |
| `brainstorm`    | Generate ideas with SCAMPER, design thinking, etc.     |
| `list-sessions` | View, delete, or clear conversation sessions           |
| `health`        | Diagnose CLI installation, version, and features       |
| `ping` / `help` | Test connection, show CLI help                         |

## Models

Default: `gpt-5.3-codex`, with fallback → `gpt-5.2-codex` → `gpt-5.1-codex-max` → `gpt-5.2`

| Model                | Use Case                                 |
| -------------------- | ---------------------------------------- |
| `gpt-5.3-codex`      | Latest frontier agentic coding (default) |
| `gpt-5.2-codex`      | Frontier agentic coding                  |
| `gpt-5.1-codex-max`  | Deep and fast reasoning                  |
| `gpt-5.1-codex-mini` | Cost-efficient quick tasks               |
| `gpt-5.2`            | Broad knowledge, reasoning, and coding   |

## Key Features

### Session Management (v1.4.0+)

Multi-turn conversations with workspace isolation:

```javascript
{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }
```

**Environment:**

- `CODEX_SESSION_TTL_MS` - Session TTL (default: 24h)
- `CODEX_MAX_SESSIONS` - Max sessions (default: 50)

### Local OSS Models (v1.6.0+)

Run with local Ollama or LM Studio instead of OpenAI:

```javascript
// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }

// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }

// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }
```

**Requirements:** [Ollama](https://ollama.com) running locally with a model that supports tool calling (e.g. `qwen3:8b`).

### Advanced Options

| Parameter              | Description                               |
| ---------------------- | ----------------------------------------- |
| `model`                | Model selection                           |
| `sessionId`            | Enable conversation continuity            |
| `sandbox`              | Enable `--full-auto` mode                 |
| `search`               | Enable web search                         |
| `changeMode`           | Structured OLD/NEW edits                  |
| `addDirs`              | Additional writable directories           |
| `toolOutputTokenLimit` | Cap response verbosity (100-10,000)       |
| `reasoningEffort`      | Reasoning depth: low, medium, high, xhigh |
| `oss`                  | Use local OSS model provider              |
| `localProvider`        | Local provider: `lmstudio` or `ollama`    |

## CLI Compatibility

| Version  | Features                         |
| -------- | -------------------------------- |
| v0.60.0+ | GPT-5.2 model family             |
| v0.59.0+ | `--add-dir`, token limits        |
| v0.52.0+ | Native `--search` flag           |
| v0.36.0+ | Native `codex resume` (sessions) |

## Troubleshooting

```bash
codex --version # Check CLI version
codex login     # Authenticate
```

Use the `health` tool for diagnostics: `'use health verbose:true'`

## Migration

**v1.5.x → v1.6.0:** Local OSS model support (`localProvider`, `oss`), `gpt-5.3-codex` as the default model, `xhigh` reasoning effort.

**v1.3.x → v1.4.0:** New `sessionId` parameter, `list-sessions`/`health` tools, structured error handling. No breaking changes.

## License

MIT License. Not affiliated with OpenAI.

---

[Documentation](https://x51xxx.github.io/codex-mcp-tool/) | [Issues](https://github.com/x51xxx/codex-mcp-tool/issues) | Inspired by [jamubc/gemini-mcp-tool](https://github.com/jamubc/gemini-mcp-tool)
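The model fallback chain described in the Models section amounts to picking the first usable model in a fixed preference order. A minimal sketch of that selection logic, assuming availability is known from some capability probe (`pickModel` is a hypothetical helper, not the package's actual code):

```typescript
// Documented fallback order from the Models section.
const FALLBACK_CHAIN = [
  "gpt-5.3-codex",
  "gpt-5.2-codex",
  "gpt-5.1-codex-max",
  "gpt-5.2",
];

// Return the first model in the chain that is reported as available.
// Falls back to the last entry if nothing in the chain matches.
function pickModel(available: Set<string>): string {
  for (const model of FALLBACK_CHAIN) {
    if (available.has(model)) return model;
  }
  return FALLBACK_CHAIN[FALLBACK_CHAIN.length - 1];
}
```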
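The `CODEX_SESSION_TTL_MS` and `CODEX_MAX_SESSIONS` environment variables imply a bounded, expiring session store: sessions past the TTL are dropped, and the least recently used ones are evicted once the cap is exceeded. A sketch of those assumed semantics (not the package's actual implementation):

```typescript
interface Session {
  id: string;
  lastUsed: number; // epoch milliseconds
}

// Defaults match the documented values: 24h TTL, 50 sessions max.
const TTL_MS = Number(process.env.CODEX_SESSION_TTL_MS ?? 24 * 60 * 60 * 1000);
const MAX_SESSIONS = Number(process.env.CODEX_MAX_SESSIONS ?? 50);

// Drop expired sessions, then keep only the most recently used
// sessions up to the cap.
function pruneSessions(sessions: Session[], now: number): Session[] {
  const alive = sessions.filter((s) => now - s.lastUsed < TTL_MS);
  alive.sort((a, b) => b.lastUsed - a.lastUsed); // most recent first
  return alive.slice(0, MAX_SESSIONS);
}
```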
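Change mode returns structured OLD/NEW pairs rather than a full rewritten file. Applying such a pair is essentially a guarded string replacement; the sketch below assumes a simple `{ old, new }` shape, which is an illustration only (the tool's actual output format is not specified here):

```typescript
interface ChangeEdit {
  old: string; // exact snippet to replace
  new: string; // replacement text
}

// Apply one OLD/NEW edit to the source. Fails loudly if the OLD
// snippet is missing or ambiguous, since a blind replace could
// silently corrupt code.
function applyEdit(source: string, edit: ChangeEdit): string {
  const first = source.indexOf(edit.old);
  if (first === -1) throw new Error("OLD snippet not found");
  if (source.indexOf(edit.old, first + 1) !== -1) {
    throw new Error("OLD snippet is ambiguous");
  }
  return source.slice(0, first) + edit.new + source.slice(first + edit.old.length);
}
```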
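Several of the advanced options map directly onto the Codex CLI flags named in this README (`--full-auto`, `--search`, `--add-dir`). A sketch of that translation layer, with `buildArgs` as a hypothetical helper rather than the package's real glue code:

```typescript
interface AskOptions {
  model?: string;
  sandbox?: boolean;
  search?: boolean;
  addDirs?: string[];
}

// Translate tool parameters into the documented CLI flags.
// The --model flag here is an assumption about the CLI interface.
function buildArgs(prompt: string, opts: AskOptions): string[] {
  const args: string[] = [];
  if (opts.model) args.push("--model", opts.model);
  if (opts.sandbox) args.push("--full-auto"); // sandbox mode
  if (opts.search) args.push("--search"); // web search (CLI v0.52.0+)
  for (const dir of opts.addDirs ?? []) {
    args.push("--add-dir", dir); // extra writable dirs (CLI v0.59.0+)
  }
  args.push(prompt);
  return args;
}
```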