<h1 align="center">🎨 Pixelle MCP - Omnimodal Agent Framework</h1>

<p align="center"><b>English</b> | <a href="README_CN.md">中文</a></p>

<p align="center">✨ An AIGC solution based on the MCP protocol, supporting both local ComfyUI and cloud ComfyUI (RunningHub) modes, seamlessly converting workflows into MCP tools with zero code.</p>

https://github.com/user-attachments/assets/65422cef-96f9-44fe-a82b-6a124674c417

## 🆕 Recent Updates

- ✅ **2025-09-29**: Added RunningHub cloud ComfyUI support, enabling workflow execution without a local GPU or ComfyUI environment
- ✅ **2025-09-03**: Refactored the architecture from three services into a unified application; added CLI tool support; published to [PyPI](https://pypi.org/project/pixelle/)
- ✅ **2025-08-12**: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more

## 🌟 Features

- ✅ 🔄 **Full-modal Support**: Supports TISV (Text, Image, Sound/Speech, Video) full-modal conversion and generation
- ✅ 🚀 **Dual Execution Modes**: Local self-hosted ComfyUI plus the RunningHub cloud ComfyUI service; choose flexibly based on your needs
- ✅ 🧩 **ComfyUI Ecosystem**: Built on [ComfyUI](https://github.com/comfyanonymous/ComfyUI), inheriting all capabilities of the open ComfyUI ecosystem
- ✅ 🔧 **Zero-code Development**: Defines and implements the Workflow-as-MCP-Tool solution, enabling zero-code development and dynamic addition of new MCP Tools
- ✅ 🗄️ **MCP Server**: Based on the [MCP](https://modelcontextprotocol.io/introduction) protocol, supporting integration with any MCP client (including but not limited to Cursor and Claude Desktop)
- ✅ 🌐 **Web Interface**: Built on the [Chainlit](https://github.com/Chainlit/chainlit) framework, inheriting Chainlit's UI controls and supporting integration with additional MCP Servers
- ✅ 📦 **One-click Deployment**: Supports PyPI installation, CLI commands, Docker, and other deployment methods; ready to use out of the box
- ✅ ⚙️ **Simplified Configuration**: Uses a simple, intuitive environment-variable configuration scheme
- ✅ 🤖 **Multi-LLM Support**: Supports multiple mainstream LLMs, including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more

## 📁 Project Architecture

Pixelle MCP adopts a **unified architecture design**, integrating the MCP server, web interface, and file services into one application, providing:

- 🌐 **Web Interface**: Chainlit-based chat interface supporting multimodal interaction
- 🔌 **MCP Endpoint**: For external MCP clients (such as Cursor and Claude Desktop) to connect
- 📁 **File Service**: Handles file upload, download, and storage
- 🛠️ **Workflow Engine**: Supports both local ComfyUI and cloud ComfyUI (RunningHub) workflows, automatically converting workflows into MCP tools

<div id="tutorial-start" />

## 🏃‍♂️ Quick Start

Choose the deployment method that best suits your needs, from simple to complex:

### 🎯 Method 1: One-click Experience

> 💡 **Zero-configuration startup, perfect for a quick experience and testing**

#### 🚀 Temporary Run

```bash
# Requires uv to be installed first
# Start with one command, no system installation required
uvx pixelle@latest
```

📖 **[View uvx CLI Reference →](docs/CLI.md#uvx-method)**

#### 📦 Persistent Installation

```bash
# Requires a Python 3.11 environment
# Install to the system
pip install -U pixelle

# Start the service
pixelle
```

📖 **[View pip CLI Reference →](docs/CLI.md#pip-install-method)**

After startup, the **configuration wizard** opens automatically and guides you through execution-engine selection (ComfyUI/RunningHub) and LLM configuration.

### 🛠️ Method 2: Local Development Deployment

> 💡 **Supports custom workflows and secondary development**

#### 📥 1. Get Source Code
```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
```

#### 🚀 2. Start the Service

```bash
# Interactive mode (recommended)
uv run pixelle
```

📖 **[View Complete CLI Reference →](docs/CLI.md#uv-run-method)**

#### 🔧 3. Add Custom Workflows (Optional)

```bash
# Copy the example workflows to the data directory (run this in your desired project directory)
cp -r workflows/* ./data/custom_workflows/
```

**⚠️ Important**: Test workflows in ComfyUI first to make sure they run properly; otherwise execution will fail.

### 🐳 Method 3: Docker Deployment

> 💡 **Suitable for production environments and containerized deployment**

#### 📋 1. Prepare Configuration

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

# Create the environment configuration file
cp .env.example .env
# Edit the .env file to configure your ComfyUI address and LLM settings
```

#### 🚀 2. Start the Container

```bash
# Start all services in the background
docker compose up -d

# View logs
docker compose logs -f
```

### 🌐 Access Services

Regardless of which method you use, after startup you can access:

- **🌐 Web Interface**: http://localhost:9004
  *The default username and password are both `dev`; they can be changed after startup*
- **🔌 MCP Endpoint**: http://localhost:9004/pixelle/mcp
  *For MCP clients such as Cursor and Claude Desktop to connect*

**💡 Port Configuration**: The default port is 9004 and can be customized via the environment variable `PORT=your_port`.

### ⚙️ Initial Configuration

On first startup, the system automatically detects the configuration status:

1. **🚀 Execution Engine Selection**: Choose between local ComfyUI and the RunningHub cloud service
2. **🤖 LLM Configuration**: Configure at least one LLM provider (OpenAI, Ollama, etc.)
3. **📁 Workflow Directory**: The system automatically creates the necessary directory structure
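To connect an external client to the MCP endpoint, most MCP clients accept a JSON configuration along these lines. This is an illustrative sketch assuming the default port 9004; the server name `pixelle` is arbitrary, and the exact config file location and schema depend on your client (consult the Cursor or Claude Desktop documentation):

```json
{
  "mcpServers": {
    "pixelle": {
      "url": "http://localhost:9004/pixelle/mcp"
    }
  }
}
```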
### 🌐 RunningHub Cloud Mode Advantages

- ✅ **Zero Hardware Requirements**: No need for a local GPU or high-performance hardware
- ✅ **No Environment Setup**: No need to install and configure ComfyUI locally
- ✅ **Ready to Use**: Register, get an API key, and start immediately
- ✅ **Stable Performance**: Professional cloud infrastructure ensures stable execution
- ✅ **Auto Scaling**: Automatically handles concurrent requests and resource allocation

### 🏠 Local ComfyUI Mode Advantages

- ✅ **Full Control**: Complete control over the execution environment and model versions
- ✅ **Privacy Protection**: All data processing happens locally, ensuring data privacy
- ✅ **Custom Models**: Supports custom models and nodes not available in the cloud
- ✅ **No Network Dependency**: Can work offline without an internet connection
- ✅ **Cost Control**: No cloud service fees for high-frequency usage

**🆘 Need Help?** Join the community groups for support (see the Community section below).

## 🛠️ Add Your Own MCP Tool

⚡ One workflow = one MCP Tool. Two addition methods are supported:

- 🏠 **Method 1: Local ComfyUI Workflow** - Export API-format workflow files
- 🌐 **Method 2: RunningHub Workflow ID** - Use cloud workflow IDs directly

### 🎯 1. Add the Simplest MCP Tool

* 🔧 Build a workflow in ComfyUI for image Gaussian blur ([Get it here](docs/i_blur_ui.json)), then set the `LoadImage` node's title to `$image.image!`

* 📤 Export it as an API-format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version ([Get it here](docs/i_blur.json))
* 📋 Copy the exported API workflow file (it must be API format), paste it on the web page, and ask the LLM to add this Tool

* ✨ After sending, the LLM will automatically convert this workflow into an MCP Tool

* 🎨 Now refresh the page and send any image to perform Gaussian blur processing via the LLM

### 🚀 2. Add a Complex MCP Tool

The steps are the same as above; only the workflow differs (download workflow: [UI format](docs/t2i_by_flux_turbo_ui.json) and [API format](docs/t2i_by_flux_turbo.json)).

> **Note:** When using RunningHub, you only need to input the corresponding workflow ID; there is no need to download and upload workflow files.

## 🔧 ComfyUI Workflow Custom Specification

### 🎨 Workflow Format

The system supports ComfyUI workflows: design your workflow in the canvas and export it as API format, using special syntax in node titles to define parameters and outputs.

### 📝 Parameter Definition Specification

In the ComfyUI canvas, double-click a node title to edit it, and use the following DSL syntax to define parameters:

```
$<param_name>.[~]<field_name>[!][:<description>]
```

#### 📋 Syntax Explanation:

- `param_name`: The parameter name for the generated MCP tool function
- `~`: Optional; marks the field for URL-upload processing, which returns a relative path
- `field_name`: The corresponding input field in the node
- `!`: Marks the parameter as required
- `description`: Description of the parameter

#### 💡 Example:

**Required parameter example:**

- Set the LoadImage node title to: `$image.image!:Input image URL`
- Meaning: Creates a required parameter named `image`, mapped to the node's `image` field

**URL upload processing example:**

- Set any node title to: `$image.~image!:Input image URL`
- Meaning: Creates a required parameter named `image`; the system will automatically download the URL, upload it to ComfyUI, and return a relative path
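For reference, an annotated node in an exported API-format workflow looks roughly like the fragment below. This is an illustrative sketch: the node id `10` and the `example.png` value are made up, and it assumes the standard ComfyUI API format where the node title lives under `_meta.title`:

```json
{
  "10": {
    "class_type": "LoadImage",
    "inputs": {
      "image": "example.png"
    },
    "_meta": {
      "title": "$image.image!:Input image URL"
    }
  }
}
```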
> 📝 Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo`, and similar nodes have this behavior built in, so there is no need to add the `~` marker.

### 🎯 Type Inference Rules

The system automatically infers parameter types from the current value of the node field:

- 🔢 `int`: Integer values (e.g. 512, 1024)
- 📏 `float`: Floating-point values (e.g. 1.5, 3.14)
- ✅ `bool`: Boolean values (e.g. true, false)
- 📝 `str`: String values (the default type)

### 📤 Output Definition Specification

#### 🤖 Method 1: Auto-detect Output Nodes

The system automatically detects the following common output nodes:

- 🖼️ `SaveImage` - Image save node
- 🎬 `SaveVideo` - Video save node
- 🔊 `SaveAudio` - Audio save node
- 📹 `VHS_SaveVideo` - VHS video save node
- 🎵 `VHS_SaveAudio` - VHS audio save node

#### 🎯 Method 2: Manual Output Marking

> Usually used for multiple outputs

Use `$output.var_name` in any node title to mark an output:

- Set the node title to: `$output.result`
- The system will use this node's output as the tool's return value

### 📝 Tool Description Configuration (Optional)

You can add a node titled `MCP` to the workflow to provide a tool description:

1. Add a `String (Multiline)` or similar text node (it must have a single string property, and the node field should be one of: `value`, `text`, `string`)
2. Set the node title to: `MCP`
3. Enter a detailed tool description in the value field

### ⚠️ Important Notes

1. **🔒 Parameter Validation**: Optional parameters (without `!`) must have default values set in the node
2. **🔗 Node Connections**: Fields already connected to other nodes will not be parsed as parameters
3. **🏷️ Tool Naming**: The exported file name is used as the tool name, so use meaningful English names
4. **📝 Detailed Descriptions**: Provide detailed parameter descriptions for a better user experience
5. **🎯 Export Format**: Must be exported as API format; do not export as UI format

<div id="tutorial-end" />

## 💬 Community

Scan the QR codes below to join our communities for the latest updates and technical support:

| Discord Community | WeChat Group |
| :----------------------------------------------------------: | :----------------------------------------------------------: |
| <img src="docs/discord.png" alt="Discord Community" width="250" /> | <img src="docs/wechat.png" alt="WeChat Group" width="250" /> |

## 🤝 How to Contribute

We welcome all forms of contribution! Whether you are a developer, designer, or user, you can participate in the project in the following ways:

### 🐛 Report Issues

* 🔍 Submit bug reports on the [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues) page
* 🔎 Please search for similar issues before submitting
* 📝 Describe the reproduction steps and environment in detail

### 💡 Feature Suggestions

* 🚀 Submit feature requests in [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues)
* 📋 Describe the feature you want and its use case
* 🎯 Explain how it improves the user experience

### 🔧 Code Contributions

#### 📋 Contribution Process

1. 🍴 Fork this repo to your GitHub account
2. 🌿 Create a feature branch: `git checkout -b feature/your-feature-name`
3. 💻 Develop and add corresponding tests
4. 📝 Commit changes: `git commit -m "feat: add your feature"`
5. 📤 Push to your repo: `git push origin feature/your-feature-name`
6. 🚀 Create a Pull Request to the main repo
#### 🎨 Code Style

* 🐍 Python code follows the [PEP 8](https://pep8.org/) style guide
* 📝 Add appropriate documentation and comments for new features

### 🧩 Contribute Workflows

* 📦 Share your ComfyUI workflows with the community
* 🛠️ Submit tested workflow files
* 📚 Add usage instructions and examples for workflows

## 🙏 Acknowledgements

❤️ Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project:

* 🧩 [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
* 💬 [Chainlit](https://github.com/Chainlit/chainlit)
* 🔌 [MCP](https://modelcontextprotocol.io/introduction)
* 🎬 [WanVideo](https://github.com/Wan-Video/Wan2.1)
* ⚡ [Flux](https://github.com/black-forest-labs/flux)
* 🤖 [LiteLLM](https://github.com/BerriAI/litellm)

## License

This project is released under the MIT License ([LICENSE](LICENSE), SPDX-License-Identifier: MIT).

## ⭐ Star History

[Star History Chart](https://star-history.com/#AIDC-AI/Pixelle-MCP&Date)