<h1 align="center">🎨 Pixelle MCP - Omnimodal Agent Framework</h1>

<p align="center"><b>English</b> | <a href="README_CN.md">中文</a></p>

<p align="center">✨ An AIGC solution based on the MCP protocol, supporting both local ComfyUI and cloud ComfyUI (RunningHub) modes, seamlessly converting workflows into MCP tools with zero code.</p>

https://github.com/user-attachments/assets/65422cef-96f9-44fe-a82b-6a124674c417

## 📋 Recent Updates

- ✅ **2025-09-29**: Added RunningHub cloud ComfyUI support, enabling workflow execution without a local GPU or ComfyUI environment
- ✅ **2025-09-03**: Refactored the architecture from three services into a unified application; added CLI tool support; published to [PyPI](https://pypi.org/project/pixelle/)
- ✅ **2025-08-12**: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more

## 🚀 Features

- ✅ 🔄 **Full-modal Support**: Supports TISV (Text, Image, Sound/Speech, Video) full-modal conversion and generation
- ✅ 🚀 **Dual Execution Modes**: A local self-hosted ComfyUI environment plus the RunningHub cloud ComfyUI service; choose flexibly based on your needs
- ✅ 🧩 **ComfyUI Ecosystem**: Built on [ComfyUI](https://github.com/comfyanonymous/ComfyUI), inheriting all capabilities of the open ComfyUI ecosystem
- ✅ 🔧 **Zero-code Development**: Defines and implements the Workflow-as-MCP-Tool solution, enabling zero-code development and dynamic addition of new MCP Tools
- ✅ 🗄️ **MCP Server**: Based on the [MCP](https://modelcontextprotocol.io/introduction) protocol, supporting integration with any MCP client (including but not limited to Cursor and Claude Desktop)
- ✅ 🌐 **Web Interface**: Built on the [Chainlit](https://github.com/Chainlit/chainlit) framework, inheriting Chainlit's UI controls and supporting integration with additional MCP Servers
- ✅ 📦 **One-click Deployment**: Supports PyPI installation, CLI commands, Docker, and other deployment methods; ready to use out of the box
- ✅ ⚙️ **Simplified Configuration**: Uses an environment-variable configuration scheme that is simple and intuitive
- ✅ 🤖 **Multi-LLM Support**: Supports multiple mainstream LLMs, including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more

## 📁 Project Architecture

Pixelle MCP adopts a **unified architecture design**, integrating the MCP server, web interface, and file services into one application, providing:

- 🌐 **Web Interface**: Chainlit-based chat interface supporting multimodal interaction
- 🔌 **MCP Endpoint**: For external MCP clients (such as Cursor and Claude Desktop) to connect
- 📁 **File Service**: Handles file upload, download, and storage
- 🛠️ **Workflow Engine**: Supports both local ComfyUI and cloud ComfyUI (RunningHub) workflows, automatically converting workflows into MCP tools

<div id="tutorial-start" />

## 🏃‍♂️ Quick Start

Choose the deployment method that best suits your needs, from simple to complex:

### 🎯 Method 1: One-click Experience

> 💡 **Zero-configuration startup, perfect for a quick experience and testing**

#### 🚀 Temporary Run

```bash
# Requires the uv environment to be installed first
# Start with one command, no system installation required
uvx pixelle@latest
```

📚 **[View uvx CLI Reference →](docs/CLI.md#uvx-method)**

#### 📦 Persistent Installation

```bash
# Requires a Python 3.11 environment
# Install to the system
pip install -U pixelle

# Start the service
pixelle
```

📚 **[View pip CLI Reference →](docs/CLI.md#pip-install-method)**

After startup, the **configuration wizard** launches automatically and guides you through execution engine selection (ComfyUI/RunningHub) and LLM configuration.

### 🛠️ Method 2: Local Development Deployment

> 💡 **Supports custom workflows and secondary development**

#### 📥 1. Get the Source Code

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
```
#### 🚀 2. Start the Service

```bash
# Interactive mode (recommended)
uv run pixelle
```

📚 **[View Complete CLI Reference →](docs/CLI.md#uv-run-method)**

#### 🔧 3. Add Custom Workflows (Optional)

```bash
# Copy the example workflows to the data directory (run this in your desired project directory)
cp -r workflows/* ./data/custom_workflows/
```

**⚠️ Important**: Test workflows in ComfyUI first to make sure they run properly; otherwise execution will fail.

### 🐳 Method 3: Docker Deployment

> 💡 **Suitable for production environments and containerized deployment**

#### 📋 1. Prepare the Configuration

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

# Create the environment configuration file
cp .env.example .env
# Edit the .env file to configure your ComfyUI address and LLM settings
```

#### 🚀 2. Start the Container

```bash
# Start all services in the background
docker compose up -d

# View logs
docker compose logs -f
```

### 🌐 Access the Services

Regardless of which method you use, after startup you can access:

- **🌐 Web Interface**: http://localhost:9004
  *The default username and password are both `dev`; they can be changed after startup*
- **🔌 MCP Endpoint**: http://localhost:9004/pixelle/mcp
  *For MCP clients such as Cursor and Claude Desktop to connect*

**💡 Port Configuration**: The default port is 9004; it can be customized via the environment variable `PORT=your_port`.

### ⚙️ Initial Configuration

On first startup, the system automatically detects the configuration status:

1. **🚀 Execution Engine Selection**: Choose between local ComfyUI and the RunningHub cloud service
2. **🤖 LLM Configuration**: Configure at least one LLM provider (OpenAI, Ollama, etc.)
3. **📁 Workflow Directory**: The system automatically creates the necessary directory structure

### 🌐 RunningHub Cloud Mode Advantages

- ✅ **Zero Hardware Requirements**: No local GPU or high-performance hardware needed
- ✅ **No Environment Setup**: No need to install and configure ComfyUI locally
- ✅ **Ready to Use**: Register, get an API key, and start immediately
- ✅ **Stable Performance**: Professional cloud infrastructure ensures stable execution
- ✅ **Auto Scaling**: Automatically handles concurrent requests and resource allocation

### 🏠 Local ComfyUI Mode Advantages

- ✅ **Full Control**: Complete control over the execution environment and model versions
- ✅ **Privacy Protection**: All data processing happens locally, ensuring data privacy
- ✅ **Custom Models**: Supports custom models and nodes not available in the cloud
- ✅ **No Network Dependency**: Works offline without an internet connection
- ✅ **Cost Control**: No cloud service fees for high-frequency usage

**🆘 Need Help?** Join the community groups for support (see the Community section below).

## 🛠️ Add Your Own MCP Tool

⚡ One workflow = one MCP Tool. Two addition methods are supported:

📋 **Method 1: Local ComfyUI Workflow** - Export API-format workflow files
📋 **Method 2: RunningHub Workflow ID** - Use cloud workflow IDs directly

### 🎯 1. Add the Simplest MCP Tool

* 📝 Build a workflow in ComfyUI for image Gaussian blur ([Get it here](docs/i_blur_ui.json)), then set the `LoadImage` node's title to `$image.image!`
* 📤 Export it as an API-format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version ([Get it here](docs/i_blur.json))

* 📋 Copy the exported API workflow file (it must be API format), submit it on the web page, and let the LLM add this Tool

* ✨ After sending, the LLM automatically converts this workflow into an MCP Tool

* 🎨 Now refresh the page and send any image to perform Gaussian blur processing via the LLM

### 🔌 2. Add a Complex MCP Tool

The steps are the same as above; only the workflow differs (download the workflow: [UI format](docs/t2i_by_flux_turbo_ui.json) and [API format](docs/t2i_by_flux_turbo.json))

> **Note:** When using RunningHub, you only need to input the corresponding workflow ID; there is no need to download and upload workflow files.

## 🔧 ComfyUI Workflow Custom Specification

### 🎨 Workflow Format

The system supports ComfyUI workflows. Just design your workflow on the canvas and export it in API format. Use a special syntax in node titles to define parameters and outputs.

### 📝 Parameter Definition Specification

In the ComfyUI canvas, double-click a node title to edit it, and use the following DSL syntax to define parameters:

```
$<param_name>.[~]<field_name>[!][:<description>]
```

#### 🔍 Syntax Explanation:

- `param_name`: The parameter name for the generated MCP tool function
- `~`: Optional; indicates URL parameter upload processing, which returns a relative path
- `field_name`: The corresponding input field in the node
- `!`: Indicates this parameter is required
- `description`: Description of the parameter

#### 💡 Example:

**Required parameter example:**

- Set the LoadImage node title to: `$image.image!:Input image URL`
- Meaning: Creates a required parameter named `image`, mapped to the node's `image` field

**URL upload processing example:**

- Set any node title to: `$image.~image!:Input image URL`
- Meaning: Creates a required parameter named `image`; the system automatically downloads the URL, uploads it to ComfyUI, and returns a relative path
> 📝 Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo`, and other such nodes have this functionality built in, so there is no need to add the `~` marker

### 🎯 Type Inference Rules

The system automatically infers parameter types based on the current value of the node field:

- 🔢 `int`: Integer values (e.g. 512, 1024)
- 📊 `float`: Floating-point values (e.g. 1.5, 3.14)
- ✅ `bool`: Boolean values (e.g. true, false)
- 📝 `str`: String values (default type)

### 📤 Output Definition Specification

#### 🤖 Method 1: Auto-detect Output Nodes

The system automatically detects the following common output nodes:

- 🖼️ `SaveImage` - Image save node
- 🎬 `SaveVideo` - Video save node
- 🔊 `SaveAudio` - Audio save node
- 📹 `VHS_SaveVideo` - VHS video save node
- 🎵 `VHS_SaveAudio` - VHS audio save node

#### 🎯 Method 2: Manual Output Marking

> Usually used for multiple outputs

Use `$output.var_name` in any node title to mark an output:

- Set the node title to: `$output.result`
- The system will use this node's output as the tool's return value

### 📄 Tool Description Configuration (Optional)

You can add a node titled `MCP` in the workflow to provide a tool description:

1. Add a `String (Multiline)` or similar text node (it must have a single string property, and the node field should be one of: value, text, string)
2. Set the node title to: `MCP`
3. Enter a detailed tool description in the value field

### ⚠️ Important Notes

1. **🔒 Parameter Validation**: Optional parameters (without `!`) must have default values set in the node
2. **🔗 Node Connections**: Fields already connected to other nodes will not be parsed as parameters
3. **🏷️ Tool Naming**: The exported file name is used as the tool name, so use meaningful English names
4. **📋 Detailed Descriptions**: Provide detailed parameter descriptions for a better user experience
5. **🎯 Export Format**: Must export in API format; do not export in UI format
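Two of the rules in this specification — type inference from a field's current value, and output-node detection — can be sketched as follows. This is a hypothetical illustration, not the actual Pixelle implementation: the helper names are invented, and it assumes the API-format layout where each node carries a `class_type` and an optional `_meta.title`.

```python
# Output nodes auto-detected by class type, per the list documented above.
OUTPUT_NODE_TYPES = {"SaveImage", "SaveVideo", "SaveAudio", "VHS_SaveVideo", "VHS_SaveAudio"}

def infer_param_type(value) -> type:
    """Map a node field's current value to the MCP tool parameter type."""
    if isinstance(value, bool):  # check bool first: bool is a subclass of int in Python
        return bool
    if isinstance(value, int):
        return int
    if isinstance(value, float):
        return float
    return str  # default type

def find_output_nodes(workflow: dict) -> list[str]:
    """Return node ids that act as tool outputs in an API-format workflow dict."""
    outputs = []
    for node_id, node in workflow.items():
        title = node.get("_meta", {}).get("title", "")
        if node.get("class_type") in OUTPUT_NODE_TYPES or title.startswith("$output."):
            outputs.append(node_id)
    return outputs
```

For example, a field currently holding `512` becomes an `int` parameter, `1.5` becomes `float`, and a `SaveImage` node (or any node titled `$output.result`) is picked up as a tool output.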
<div id="tutorial-end" />

## 💬 Community

Scan the QR codes below to join our communities for the latest updates and technical support:

| Discord Community | WeChat Group |
| :---: | :---: |
| <img src="docs/discord.png" alt="Discord Community" width="250" /> | <img src="docs/wechat.png" alt="WeChat Group" width="250" /> |

## 🤝 How to Contribute

We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

### 🐛 Report Issues

* 📋 Submit bug reports on the [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues) page
* 🔍 Please search for similar issues before submitting
* 📝 Describe the reproduction steps and environment in detail

### 💡 Feature Suggestions

* 🚀 Submit feature requests in [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues)
* 💭 Describe the feature you want and its use case
* 🎯 Explain how it improves the user experience

### 🔧 Code Contributions

#### 📋 Contribution Process

1. 🍴 Fork this repo to your GitHub account
2. 🌿 Create a feature branch: `git checkout -b feature/your-feature-name`
3. 💻 Develop and add corresponding tests
4. 📝 Commit changes: `git commit -m "feat: add your feature"`
5. 📤 Push to your repo: `git push origin feature/your-feature-name`
6. 🔄 Create a Pull Request to the main repo

#### 🎨 Code Style

* 🐍 Python code follows the [PEP 8](https://pep8.org/) style guide
* 📖 Add appropriate documentation and comments for new features

### 🧩 Contribute Workflows

* 📦 Share your ComfyUI workflows with the community
* 🛠️ Submit tested workflow files
* 📚 Add usage instructions and examples for workflows

## 🙏 Acknowledgements

❤️ Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project.

* 🧩 [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
* 💬 [Chainlit](https://github.com/Chainlit/chainlit)
* 🔌 [MCP](https://modelcontextprotocol.io/introduction)
* 🎬 [WanVideo](https://github.com/Wan-Video/Wan2.1)
* ⚡ [Flux](https://github.com/black-forest-labs/flux)
* 🤖 [LiteLLM](https://github.com/BerriAI/litellm)

## License

This project is released under the MIT License ([LICENSE](LICENSE), SPDX-License-Identifier: MIT).

## ⭐ Star History

[Star History](https://star-history.com/#AIDC-AI/Pixelle-MCP&Date)