- Native Async API - Uses `fal_client.run_async()` for optimal performance
- Queue Support - Long-running tasks (video/music) use queue API with progress updates
- Non-blocking - All operations are truly asynchronous
- STDIO - Traditional Model Context Protocol transport
A Model Context Protocol (MCP) server that enables Claude Desktop (and other MCP clients) to generate images, videos, music, and audio using Fal.ai models.
Image Generation:
Image Editing:
Video Tools:
Audio Tools:
Utility Tools:
If you're using Claude Code, install directly via the plugin system:
# Add the Luminary Lane Tools marketplace
/plugin marketplace add raveenb/fal-mcp-server
# Install the fal-ai plugin
/plugin install fal-ai@luminary-lane-tools
Or install directly without adding the marketplace:
/plugin install fal-ai@raveenb/fal-mcp-server
Note: You'll need to set `FAL_KEY` in your environment before using the plugin.
Run directly without installation using uv:
# Run the MCP server directly
uvx --from fal-mcp-server fal-mcp
# Or with specific version
uvx --from fal-mcp-server==1.4.0 fal-mcp
Claude Desktop Configuration for uvx:
{
"mcpServers": {
"fal-ai": {
"command": "uvx",
"args": ["--from", "fal-mcp-server", "fal-mcp"],
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}
Note: Install uv first:
curl -LsSf https://astral.sh/uv/install.sh | sh
Official Docker image available on GitHub Container Registry.
Step 1: Start the Docker container
# Pull and run with your API key
docker run -d \
--name fal-mcp \
-e FAL_KEY=your-api-key \
-p 8080:8080 \
ghcr.io/raveenb/fal-mcp-server:latest
# Verify it's running
docker logs fal-mcp
Step 2: Configure Claude Desktop to connect
Add to your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

{
"mcpServers": {
"fal-ai": {
"command": "npx",
"args": ["mcp-remote", "http://localhost:8080/sse"]
}
}
}
Note: This uses mcp-remote to connect to the HTTP/SSE endpoint. Alternatively, if you have `curl` available: `"command": "curl", "args": ["-N", "http://localhost:8080/sse"]`
Step 3: Restart Claude Desktop
The fal-ai tools should now be available.
Docker Environment Variables:
| Variable | Default | Description |
|---|---|---|
| `FAL_KEY` | (required) | Your Fal.ai API key |
| `FAL_MCP_TRANSPORT` | `http` | Transport mode: `http`, `stdio`, or `dual` |
| `FAL_MCP_HOST` | `0.0.0.0` | Host to bind the server to |
| `FAL_MCP_PORT` | `8080` | Port for the HTTP server |
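To make the precedence concrete, here is a minimal sketch of how a server might read these settings, with the defaults from the table above. This is illustrative only, not the server's actual implementation; the function name `load_config` is hypothetical.

```python
import os

def load_config(environ=os.environ):
    """Read Fal MCP server settings, falling back to the documented defaults."""
    transport = environ.get("FAL_MCP_TRANSPORT", "http")
    if transport not in ("http", "stdio", "dual"):
        raise ValueError(f"unsupported transport: {transport!r}")
    return {
        "fal_key": environ["FAL_KEY"],  # required: raises KeyError if unset
        "transport": transport,
        "host": environ.get("FAL_MCP_HOST", "0.0.0.0"),
        "port": int(environ.get("FAL_MCP_PORT", "8080")),
    }
```

Passing a plain dict instead of `os.environ` makes the precedence easy to test without touching the real environment.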
Using Docker Compose:
curl -O https://raw.githubusercontent.com/raveenb/fal-mcp-server/main/docker-compose.yml
echo "FAL_KEY=your-api-key" > .env
docker-compose up -d
⚠️ File Upload with Docker:
The upload_file tool requires volume mounts to access host files:
docker run -d -p 8080:8080 \
-e FAL_KEY="${FAL_KEY}" \
-e FAL_MCP_TRANSPORT=http \
-v ${HOME}/Downloads:/downloads:ro \
-v ${HOME}/Pictures:/pictures:ro \
ghcr.io/raveenb/fal-mcp-server:latest
Then use container paths like /downloads/image.png instead of host paths.
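The host-to-container translation follows directly from the `-v` flags above. A small hypothetical helper, using exactly the two mounts from the example, shows the mapping:

```python
from pathlib import Path

# Mount table mirroring the `-v` flags in the docker run example above.
MOUNTS = {
    Path.home() / "Downloads": Path("/downloads"),
    Path.home() / "Pictures": Path("/pictures"),
}

def to_container_path(host_path: str) -> str:
    """Translate a host path to its in-container equivalent, if under a mount."""
    p = Path(host_path)
    for host_root, container_root in MOUNTS.items():
        try:
            return str(container_root / p.relative_to(host_root))
        except ValueError:
            continue  # not under this mount; try the next one
    raise ValueError(f"{host_path} is not under any mounted directory")
```

Anything outside the mounted directories simply is not visible to the container, which is why `upload_file` fails on unmounted host paths.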
| Feature | stdio (uvx) | Docker (HTTP/SSE) |
|---|---|---|
| `upload_file` | ✅ Full filesystem | ⚠️ Needs volume mounts |
| Security | Runs as user | Sandboxed container |
pip install fal-mcp-server
Or with uv:
uv pip install fal-mcp-server
git clone https://github.com/raveenb/fal-mcp-server.git
cd fal-mcp-server
pip install -e .
Get your Fal.ai API key from fal.ai
Configure Claude Desktop by adding to:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

{
"mcpServers": {
"fal-ai": {
"command": "fal-mcp",
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}
Note: For Docker configuration, see Option 2: Docker above.
{
"mcpServers": {
"fal-ai": {
"command": "python",
"args": ["/path/to/fal-mcp-server/src/fal_mcp_server/server.py"],
"env": {
"FAL_KEY": "your-fal-api-key"
}
}
}
}
Once configured, you can ask Claude to generate images, videos, music, or audio in plain language.
Use the list_models tool to discover available models:
You can use any model from the Fal.ai platform:
# Using a friendly alias (backward compatible)
"Generate an image with flux_schnell"
# Using a full model ID (new capability)
"Generate an image using fal-ai/flux-pro/v1.1-ultra"
"Create a video with fal-ai/kling-video/v1.5/pro"
Run the server with HTTP transport for web-based access:
# Using Docker (recommended)
docker run -d -e FAL_KEY=your-key -p 8080:8080 ghcr.io/raveenb/fal-mcp-server:latest
# Using pip installation
fal-mcp-http --host 0.0.0.0 --port 8000
# Or dual mode (STDIO + HTTP)
fal-mcp-dual --transport dual --port 8000
Connect from web clients via Server-Sent Events:
- SSE endpoint: `http://localhost:8080/sse` (Docker) or `http://localhost:8000/sse` (pip)
- Messages endpoint: `POST http://localhost:8080/messages/`

See Docker Documentation and HTTP Transport Documentation for details.
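The SSE endpoint serves standard `text/event-stream` frames. A minimal sketch of parsing one frame, as a client like mcp-remote would receive it, follows; this implements generic SSE framing, not anything specific to this server, and the example payload is hypothetical.

```python
def parse_sse_event(raw: str) -> dict:
    """Parse a single Server-Sent Events frame into its fields.

    SSE frames are newline-delimited `field: value` lines terminated by a
    blank line; repeated `data:` lines are joined with newlines.
    """
    event = {"event": "message", "data": []}
    for line in raw.splitlines():
        if line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"].append(line[len("data:"):].strip())
    event["data"] = "\n".join(event["data"])
    return event
```

Tools like mcp-remote handle this framing for you; the sketch is only to show what travels over the wire.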
This server supports 600+ models from the Fal.ai platform through dynamic discovery. Use the list_models tool to explore available models, or use any model ID directly.
These friendly aliases are always available for commonly used models:
| Alias | Model ID | Type |
|---|---|---|
| `flux_schnell` | `fal-ai/flux/schnell` | Image |
| `flux_dev` | `fal-ai/flux/dev` | Image |
| `flux_pro` | `fal-ai/flux-pro` | Image |
| `sdxl` | `fal-ai/fast-sdxl` | Image |
| `stable_diffusion` | `fal-ai/stable-diffusion-v3-medium` | Image |
| `svd` | `fal-ai/stable-video-diffusion` | Video |
| `animatediff` | `fal-ai/fast-animatediff` | Video |
| `kling` | `fal-ai/kling-video` | Video |
| `musicgen` | `fal-ai/musicgen-medium` | Audio |
| `musicgen_large` | `fal-ai/musicgen-large` | Audio |
| `bark` | `fal-ai/bark` | Audio |
| `whisper` | `fal-ai/whisper` | Audio |
You can also use any model directly by its full ID:
# Examples of full model IDs
"fal-ai/flux-pro/v1.1-ultra" # Latest Flux Pro
"fal-ai/kling-video/v1.5/pro" # Kling Video Pro
"fal-ai/hunyuan-video" # Hunyuan Video
"fal-ai/minimax-video" # MiniMax Video
Use list_models with category filters to discover more:
- `list_models(category="image")` - All image generation models
- `list_models(category="video")` - All video generation models
- `list_models(category="audio")` - All audio models
- `list_models(search="flux")` - Search for specific models

| Guide | Description |
|---|---|
| Installation Guide | Detailed setup instructions for all platforms |
| API Reference | Complete tool documentation with parameters |
| Examples | Usage examples for image, video, and audio generation |
| Docker Guide | Container deployment and configuration |
| HTTP Transport | Web-based SSE transport setup |
| Local Testing | Running CI locally with act |
Full documentation site: raveenb.github.io/fal-mcp-server
This project is part of the Luminary Lane Tools marketplace for Claude Code plugins.
Add the marketplace:
/plugin marketplace add raveenb/fal-mcp-server
Available plugins:
| Plugin | Description |
|---|---|
| `fal-ai` | Generate images, videos, and music using 600+ Fal.ai models |
More plugins coming soon!
Error: FAL_KEY environment variable is required
Solution: Set your Fal.ai API key:
export FAL_KEY="your-api-key"
Error: Model 'xyz' not found
Solution: Use list_models to discover available models, or check the model ID spelling.
Error: File not found: /Users/username/image.png
Solution: When using Docker, mount the directory as a volume. See File Upload with Docker above.
Error: Generation timed out after 300s
Solution: Video and music generation can take several minutes. This is normal for high-quality models. Try using a faster model variant (e.g., `schnell` instead of `pro`).

Error: Rate limit exceeded
Solution: Wait a few minutes and retry. Consider upgrading your Fal.ai plan for higher limits.
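Waiting and retrying can be automated with simple exponential backoff on the client side. A hypothetical sketch (not part of this server; `retry_with_backoff` and the injectable `sleep` parameter are illustrative):

```python
import time

def retry_with_backoff(fn, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying on exception with a doubling delay between attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the original error
            sleep(base_delay * (2 ** attempt))
```

The injectable `sleep` makes the backoff testable; in real use the defaults apply.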
Enable verbose logging for troubleshooting:
# Set debug environment variable
export FAL_MCP_DEBUG=true
# Run the server
fal-mcp
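Debug-flag handling like this typically maps the environment variable onto a log level. A sketch of the pattern (illustrative; the server's actual logging setup may differ):

```python
import logging
import os

def configure_logging(environ=os.environ) -> int:
    """Pick the log level from FAL_MCP_DEBUG, defaulting to INFO."""
    debug = environ.get("FAL_MCP_DEBUG", "").lower() in ("1", "true", "yes")
    level = logging.DEBUG if debug else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    return level
```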
If you encounter a bug or unexpected behavior:
Check existing issues: GitHub Issues
Gather information:
Open a new issue with:
**Error:** [paste error message]
**Steps to reproduce:** [what you did]
**Model:** [model ID if applicable]
**Environment:** [OS, Python version, Docker/uvx/pip]
Include logs if available (with sensitive data removed)
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
We support local CI testing with act:
# Quick setup
make ci-local # Run CI locally before pushing
# See detailed guide
cat docs/LOCAL_TESTING.md
MIT License - see LICENSE file for details.