<h1 align="center">🎨 Pixelle MCP - Omnimodal Agent Framework</h1>

<p align="center"><b>English</b> | <a href="README_CN.md">中文</a></p>

<p align="center">✨ An AIGC solution based on the MCP protocol, supporting both local ComfyUI and cloud ComfyUI (RunningHub) modes, seamlessly converting workflows into MCP tools with zero code.</p>

https://github.com/user-attachments/assets/65422cef-96f9-44fe-a82b-6a124674c417

## 📢 Recent Updates

- ✅ **2025-09-29**: Added RunningHub cloud ComfyUI support, enabling workflow execution without a local GPU or ComfyUI environment
- ✅ **2025-09-03**: Refactored the architecture from three services into a unified application; added CLI tool support; published to [PyPI](https://pypi.org/project/pixelle/)
- ✅ **2025-08-12**: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more

## 🌟 Features

- ✅ 🔄 **Full-modal Support**: Supports TISV (Text, Image, Sound/Speech, Video) full-modal conversion and generation
- ✅ 🚀 **Dual Execution Modes**: Local self-hosted ComfyUI plus the RunningHub cloud ComfyUI service; choose flexibly based on your needs
- ✅ 🧩 **ComfyUI Ecosystem**: Built on [ComfyUI](https://github.com/comfyanonymous/ComfyUI), inheriting all capabilities of the open ComfyUI ecosystem
- ✅ 🔧 **Zero-code Development**: Defines and implements the Workflow-as-MCP-Tool solution, enabling zero-code development and dynamic addition of new MCP Tools
- ✅ 🖥️ **MCP Server**: Based on the [MCP](https://modelcontextprotocol.io/introduction) protocol, supporting integration with any MCP client (including but not limited to Cursor, Claude Desktop, etc.)
- ✅ 🌐 **Web Interface**: Built on the [Chainlit](https://github.com/Chainlit/chainlit) framework, inheriting Chainlit's UI controls and supporting integration with more MCP Servers
- ✅ 📦 **One-click Deployment**: Supports PyPI installation, CLI commands, Docker, and other deployment methods, ready to use out of the box
- ✅ ⚙️ **Simplified Configuration**: Uses an environment-variable configuration scheme that is simple and intuitive
- ✅ 🤖 **Multi-LLM Support**: Supports multiple mainstream LLMs, including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more

## 🏗️ Project Architecture

Pixelle MCP adopts a **unified architecture**, integrating the MCP server, web interface, and file service into one application, providing:

- 🌐 **Web Interface**: Chainlit-based chat interface supporting multimodal interaction
- 🔗 **MCP Endpoint**: For external MCP clients (such as Cursor or Claude Desktop) to connect to
- 📁 **File Service**: Handles file upload, download, and storage
- 🛠️ **Workflow Engine**: Supports both local ComfyUI and cloud ComfyUI (RunningHub) workflows, automatically converting workflows into MCP tools

<div id="tutorial-start" />

## 🏃‍♂️ Quick Start

Choose the deployment method that best suits your needs, from simple to complex:

### 🎯 Method 1: One-click Experience

> 💡 **Zero-configuration startup, perfect for a quick trial**

#### 🚀 Temporary Run

```bash
# Requires the uv environment to be installed first
# Start with one command, no system installation required
uvx pixelle@latest
```

📖 **[View uvx CLI Reference →](docs/CLI.md#uvx-method)**

#### 📦 Persistent Installation

```bash
# Requires a Python 3.11 environment
# Install to the system
pip install -U pixelle

# Start the service
pixelle
```

📖 **[View pip CLI Reference →](docs/CLI.md#pip-install-method)**
After startup, a **configuration wizard** launches automatically to guide you through execution-engine selection (ComfyUI/RunningHub) and LLM configuration.

### 🛠️ Method 2: Local Development Deployment

> 💡 **Supports custom workflows and secondary development**

#### 📥 1. Get Source Code

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
```

#### 🚀 2. Start Service

```bash
# Interactive mode (recommended)
uv run pixelle
```

📖 **[View Complete CLI Reference →](docs/CLI.md#uv-run-method)**

#### 🔧 3. Add Custom Workflows (Optional)

```bash
# Copy example workflows to the data directory (run this in your project directory)
cp -r workflows/* ./data/custom_workflows/
```

**⚠️ Important**: Test workflows in ComfyUI first to make sure they run properly; otherwise execution will fail.

### 🐳 Method 3: Docker Deployment

> 💡 **Suitable for production environments and containerized deployment**

#### 📝 1. Prepare Configuration

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

# Create the environment configuration file
cp .env.example .env
# Edit the .env file to configure your ComfyUI address and LLM settings
```

#### 🚀 2. Start Container

```bash
# Start all services in the background
docker compose up -d

# View logs
docker compose logs -f
```

### 🌐 Access Services

Whichever method you use, after startup you can access:

- **🌐 Web Interface**: http://localhost:9004
  *The default username and password are both `dev` and can be changed after startup*
- **🔗 MCP Endpoint**: http://localhost:9004/pixelle/mcp
  *For MCP clients such as Cursor and Claude Desktop to connect to*

**💡 Port Configuration**: The default port is 9004 and can be customized via the environment variable `PORT=your_port`.
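For example, a minimal client-side configuration for Cursor might look like the following `mcp.json` entry (a sketch only; the server name `pixelle` is arbitrary, and the exact file location and schema depend on your MCP client and version):

```json
{
  "mcpServers": {
    "pixelle": {
      "url": "http://localhost:9004/pixelle/mcp"
    }
  }
}
```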
### ⚙️ Initial Configuration

On first startup, the system automatically detects the configuration status:

1. **🔧 Execution Engine Selection**: Choose between local ComfyUI and the RunningHub cloud service
2. **🤖 LLM Configuration**: Configure at least one LLM provider (OpenAI, Ollama, etc.)
3. **📁 Workflow Directory**: The system automatically creates the necessary directory structure

### 🌟 RunningHub Cloud Mode Advantages

- ✅ **Zero Hardware Requirements**: No local GPU or high-performance hardware needed
- ✅ **No Environment Setup**: No need to install and configure ComfyUI locally
- ✅ **Ready to Use**: Register, get an API key, and start immediately
- ✅ **Stable Performance**: Professional cloud infrastructure ensures stable execution
- ✅ **Auto Scaling**: Automatically handles concurrent requests and resource allocation

### 🏠 Local ComfyUI Mode Advantages

- ✅ **Full Control**: Complete control over the execution environment and model versions
- ✅ **Privacy Protection**: All data processing happens locally, ensuring data privacy
- ✅ **Custom Models**: Supports custom models and nodes not available in the cloud
- ✅ **No Network Dependency**: Works offline without an internet connection
- ✅ **Cost Control**: No cloud service fees for high-frequency usage

**🆘 Need Help?** Join the community groups for support (see the Community section below).

## 🛠️ Add Your Own MCP Tool

⚡ One workflow = one MCP Tool. Two ways to add one:

- 🏠 **Method 1: Local ComfyUI Workflow** - Export an API-format workflow file
- 🌐 **Method 2: RunningHub Workflow ID** - Use a cloud workflow ID directly

### 🎯 1. Add the Simplest MCP Tool

* 📋 Build a workflow in ComfyUI for image Gaussian blur ([get it here](docs/i_blur_ui.json)), then set the `LoadImage` node's title to `$image.image!`

* 📤 Export it as an API-format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version ([get it here](docs/i_blur.json))

* 📋 Copy the exported API workflow file (it must be API format), paste it on the web page, and let the LLM add the tool

* ✨ After sending, the LLM automatically converts the workflow into an MCP Tool

* 🎨 Now refresh the page and send any image to apply Gaussian blur to it via the LLM

### 🌟 2. Add a Complex MCP Tool

The steps are the same as above; only the workflow differs (download the workflow: [UI format](docs/t2i_by_flux_turbo_ui.json) and [API format](docs/t2i_by_flux_turbo.json)).

> **Note:** When using RunningHub, you only need to input the corresponding workflow ID; there is no need to download and upload workflow files.

## 🔧 ComfyUI Workflow Custom Specification

### 🎨 Workflow Format

The system supports ComfyUI workflows. Design your workflow in the canvas, export it as API format, and use the special syntax below in node titles to define parameters and outputs.

### 📐 Parameter Definition Specification

In the ComfyUI canvas, double-click a node title to edit it, and use the following DSL syntax to define parameters:

```
$<param_name>.[~]<field_name>[!][:<description>]
```

#### 📋 Syntax Explanation:

- `param_name`: The parameter name of the generated MCP tool function
- `~`: Optional; marks the value for URL upload processing, returning a relative path
- `field_name`: The corresponding input field in the node
- `!`: Marks the parameter as required
- `description`: A description of the parameter

#### 💡 Example:

**Required parameter example:**

- Set the LoadImage node title to: `$image.image!:Input image URL`
- Meaning: Creates a required parameter named `image`, mapped to the node's `image` field

**URL upload processing example:**

- Set any node title to: `$image.~image!:Input image URL`
- Meaning: Creates a required parameter named `image`; the system automatically downloads the URL, uploads the file to ComfyUI, and returns a relative path

> 📝 Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo`, and similar nodes have this functionality built in, so there is no need to add the `~` marker
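For reference, this is roughly what such a title looks like inside an exported API-format workflow, where ComfyUI stores each node's title under `_meta.title` (a minimal sketch; the node id `10` and the default `example.png` value are illustrative):

```json
{
  "10": {
    "class_type": "LoadImage",
    "inputs": {
      "image": "example.png"
    },
    "_meta": {
      "title": "$image.image!:Input image URL"
    }
  }
}
```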
### 🎯 Type Inference Rules

The system automatically infers parameter types based on the current value of the node field:

- 🔢 `int`: Integer values (e.g. 512, 1024)
- 📊 `float`: Floating-point values (e.g. 1.5, 3.14)
- ✅ `bool`: Boolean values (e.g. true, false)
- 📝 `str`: String values (the default type)

### 📤 Output Definition Specification

#### 🤖 Method 1: Auto-detect Output Nodes

The system will automatically detect the following common output nodes:

- 🖼️ `SaveImage` - Image save node
- 🎬 `SaveVideo` - Video save node
- 🔊 `SaveAudio` - Audio save node
- 📹 `VHS_SaveVideo` - VHS video save node
- 🎵 `VHS_SaveAudio` - VHS audio save node

#### 🎯 Method 2: Manual Output Marking

> Usually used for workflows with multiple outputs

Use `$output.var_name` in any node title to mark an output:

- Set the node title to: `$output.result`
- The system will use this node's output as the tool's return value

### 📄 Tool Description Configuration (Optional)

You can add a node titled `MCP` to the workflow to provide a tool description:

1. Add a `String (Multiline)` or similar text node (it must have a single string property, and the node field should be one of: value, text, string)
2. Set the node title to: `MCP`
3. Enter a detailed tool description in the value field
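Putting the two mechanisms together, a tool-description node and a manually marked output might appear in the exported API JSON roughly as follows (a sketch under assumptions: the node ids, the upstream link `["15", 0]`, and the text node's `class_type` are illustrative; any text node with a single string field named `value`, `text`, or `string` will do):

```json
{
  "20": {
    "class_type": "PrimitiveStringMultiline",
    "inputs": {
      "value": "Apply Gaussian blur to the input image and return the blurred result."
    },
    "_meta": {
      "title": "MCP"
    }
  },
  "21": {
    "class_type": "SaveImage",
    "inputs": {
      "images": ["15", 0],
      "filename_prefix": "blur"
    },
    "_meta": {
      "title": "$output.result"
    }
  }
}
```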
### ⚠️ Important Notes

1. **🔍 Parameter Validation**: Optional parameters (those without `!`) must have default values set in the node
2. **🔗 Node Connections**: Fields already connected to other nodes will not be parsed as parameters
3. **🏷️ Tool Naming**: The exported file name is used as the tool name, so use meaningful English names
4. **📝 Detailed Descriptions**: Provide detailed parameter descriptions for a better user experience
5. **🎯 Export Format**: Must export in API format; do not export as UI format

<div id="tutorial-end" />

## 💬 Community

Scan the QR codes below to join our communities for the latest updates and technical support:

| Discord Community | WeChat Group |
| :---: | :---: |
| <img src="docs/discord.png" alt="Discord Community" width="250" /> | <img src="docs/wechat.png" alt="WeChat Group" width="250" /> |

## 🤝 How to Contribute

We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

### 🐛 Report Issues

* 🔍 Submit bug reports on the [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues) page
* 📝 Please search for similar issues before submitting
* 📋 Describe the reproduction steps and environment in detail

### 💡 Feature Suggestions

* 🚀 Submit feature requests in [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues)
* 📖 Describe the feature you want and its use case
* 🎯 Explain how it improves the user experience

### 🔧 Code Contributions

#### 📋 Contribution Process

1. 🍴 Fork this repo to your GitHub account
2. 🌿 Create a feature branch: `git checkout -b feature/your-feature-name`
3. 💻 Develop and add corresponding tests
4. 📝 Commit changes: `git commit -m "feat: add your feature"`
5. 📤 Push to your repo: `git push origin feature/your-feature-name`
6. 🔄 Create a Pull Request to the main repo

#### 🎨 Code Style

* 🐍 Python code follows the [PEP 8](https://pep8.org/) style guide
* 📝 Add appropriate documentation and comments for new features

### 🧩 Contribute Workflows

* 📦 Share your ComfyUI workflows with the community
* 🛠️ Submit tested workflow files
* 📖 Add usage instructions and examples for workflows

## 🙏 Acknowledgements

❤️ Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project.

* 🧩 [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
* 💬 [Chainlit](https://github.com/Chainlit/chainlit)
* 🔗 [MCP](https://modelcontextprotocol.io/introduction)
* 🎬 [WanVideo](https://github.com/Wan-Video/Wan2.1)
* ⚡ [Flux](https://github.com/black-forest-labs/flux)
* 🤖 [LiteLLM](https://github.com/BerriAI/litellm)

## License

This project is released under the MIT License ([LICENSE](LICENSE), SPDX-License-Identifier: MIT).

## ⭐ Star History

[![Star History Chart](https://api.star-history.com/svg?repos=AIDC-AI/Pixelle-MCP&type=Date)](https://star-history.com/#AIDC-AI/Pixelle-MCP&Date)