# Outsource MCP

An MCP (Model Context Protocol) server that enables AI applications to outsource tasks to various model providers through a unified interface.

<img width="1154" alt="image" src="https://github.com/user-attachments/assets/cd364a7c-eae5-4c58-bc1f-fdeea6cb8434" />

<img width="1103" alt="image" src="https://github.com/user-attachments/assets/55924981-83e9-4811-9f51-b049595b7505" />

Compatible with any AI tool that supports the Model Context Protocol, including Claude Desktop, Cline, and other MCP-enabled applications.
Built with [FastMCP](https://github.com/mcp/fastmcp) for the MCP server implementation and [Agno](https://github.com/agno-agi/agno) for AI agent capabilities.

## Features

- 🤖 **Multi-Provider Support**: Access 20+ AI providers through a single interface
- 📝 **Text Generation**: Generate text using models from OpenAI, Anthropic, Google, and more
- 🎨 **Image Generation**: Create images using DALL-E 3 and DALL-E 2
- 🔧 **Simple API**: Consistent interface with just three parameters: provider, model, and prompt
- 🔑 **Flexible Authentication**: Only configure API keys for the providers you use

## Configuration

Add the following configuration to your MCP client.
Consult your MCP client's documentation for specific configuration details.

```json
{
  "mcpServers": {
    "outsource-mcp": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/gwbischof/outsource-mcp.git", "outsource-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "GOOGLE_API_KEY": "your-google-key",
        "GROQ_API_KEY": "your-groq-key",
        "DEEPSEEK_API_KEY": "your-deepseek-key",
        "XAI_API_KEY": "your-xai-key",
        "PERPLEXITY_API_KEY": "your-perplexity-key",
        "COHERE_API_KEY": "your-cohere-key",
        "FIREWORKS_API_KEY": "your-fireworks-key",
        "HUGGINGFACE_API_KEY": "your-huggingface-key",
        "MISTRAL_API_KEY": "your-mistral-key",
        "NVIDIA_API_KEY": "your-nvidia-key",
        "OLLAMA_HOST": "http://localhost:11434",
        "OPENROUTER_API_KEY": "your-openrouter-key",
        "TOGETHER_API_KEY": "your-together-key",
        "CEREBRAS_API_KEY": "your-cerebras-key",
        "DEEPINFRA_API_KEY": "your-deepinfra-key",
        "SAMBANOVA_API_KEY": "your-sambanova-key"
      }
    }
  }
}
```

Note: The environment variables are optional. Only include the API keys for the providers you want to use.

## Quick Start

Once installed and configured, you can use the tools in your MCP client:

1. **Generate text**: Use the `outsource_text` tool with provider "openai", model "gpt-4o-mini", and prompt "Write a haiku about coding"
2. **Generate images**: Use the `outsource_image` tool with provider "openai", model "dall-e-3", and prompt "A futuristic city skyline at sunset"
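The configuration block above lists every supported variable, but only the providers you actually use need entries. As a sketch, here is a hypothetical helper (not part of this project) that assembles a minimal config from whichever keys are set in the environment:

```python
import json
import os

# Environment variables checked by this sketch (abridged from the README's
# full list; extend it with any other providers you use).
PROVIDER_ENV_VARS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GOOGLE_API_KEY",
    "OLLAMA_HOST",
]


def build_config(env: dict) -> dict:
    """Build an mcpServers entry containing only the keys present in `env`."""
    present = {k: env[k] for k in PROVIDER_ENV_VARS if k in env}
    return {
        "mcpServers": {
            "outsource-mcp": {
                "command": "uvx",
                "args": [
                    "--from",
                    "git+https://github.com/gwbischof/outsource-mcp.git",
                    "outsource-mcp",
                ],
                "env": present,
            }
        }
    }


if __name__ == "__main__":
    # Print a config seeded from the current shell environment.
    print(json.dumps(build_config(dict(os.environ)), indent=2))
```

Paste the printed JSON into your MCP client's configuration file.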
## Tools

### outsource_text
Creates an Agno agent with a specified provider and model to generate text responses.

**Arguments:**
- `provider`: The provider name (e.g., "openai", "anthropic", "google", "groq", etc.)
- `model`: The model name (e.g., "gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash-exp")
- `prompt`: The text prompt to send to the model

### outsource_image
Generates images using AI models.

**Arguments:**
- `provider`: The provider name (currently only "openai" is supported)
- `model`: The model name ("dall-e-3" or "dall-e-2")
- `prompt`: The image generation prompt

Returns the URL of the generated image.

> **Note**: Image generation is currently only supported by OpenAI models (DALL-E 2 and DALL-E 3). Other providers only support text generation.

## Supported Providers

The following providers are supported. Use the provider name (in parentheses) as the `provider` argument:

### Core Providers
- **OpenAI** (`openai`) - GPT-4, GPT-3.5, DALL-E, etc. | [Models](https://platform.openai.com/docs/models)
- **Anthropic** (`anthropic`) - Claude 3.5, Claude 3, etc. | [Models](https://docs.anthropic.com/en/docs/about-claude/models/overview)
- **Google** (`google`) - Gemini Pro, Gemini Flash, etc. | [Models](https://ai.google.dev/models)
- **Groq** (`groq`) - Llama 3, Mixtral, etc. | [Models](https://console.groq.com/docs/models)
- **DeepSeek** (`deepseek`) - DeepSeek Chat & Coder | [Models](https://api-docs.deepseek.com/api/list-models)
- **xAI** (`xai`) - Grok models | [Models](https://docs.x.ai/docs/models)
- **Perplexity** (`perplexity`) - Sonar models | [Models](https://docs.perplexity.ai/guides/model-cards)

### Additional Providers
- **Cohere** (`cohere`) - Command models | [Models](https://docs.cohere.com/v2/docs/models)
- **Mistral AI** (`mistral`) - Mistral Large, Medium, Small | [Models](https://docs.mistral.ai/getting-started/models/models_overview/)
- **NVIDIA** (`nvidia`) - Various models | [Models](https://build.nvidia.com/models)
- **HuggingFace** (`huggingface`) - Open source models | [Models](https://huggingface.co/models)
- **Ollama** (`ollama`) - Local models | [Models](https://ollama.com/library)
- **Fireworks AI** (`fireworks`) - Fast inference | [Models](https://fireworks.ai/models?view=list)
- **OpenRouter** (`openrouter`) - Multi-provider access | [Models](https://openrouter.ai/docs/overview/models)
- **Together AI** (`together`) - Open source models | [Models](https://docs.together.ai/docs/serverless-models)
- **Cerebras** (`cerebras`) - Fast inference | [Models](https://cerebras.ai/models)
- **DeepInfra** (`deepinfra`) - Optimized models | [Models](https://deepinfra.com/docs/models)
- **SambaNova** (`sambanova`) - Enterprise models | [Models](https://docs.sambanova.ai/cloud/docs/get-started/supported-models)

### Enterprise Providers
- **AWS Bedrock** (`aws` or `bedrock`) - AWS-hosted models | [Models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html)
- **Azure AI** (`azure`) - Azure-hosted models | [Models](https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/foundry-models-overview)
- **IBM WatsonX** (`ibm` or `watsonx`) - IBM models | [Models](https://www.ibm.com/docs/en/software-hub/5.1.x?topic=install-foundation-models)
- **LiteLLM** (`litellm`) - Universal interface | [Models](https://docs.litellm.ai/docs/providers)
- **Vercel v0** (`vercel` or `v0`) - Vercel AI | [Models](https://sdk.vercel.ai/docs/introduction)
- **Meta Llama** (`meta`) - Direct Meta access | [Models](https://www.llama.com/get-started/)

### Environment Variables

Each provider requires its corresponding API key:

| Provider | Environment Variable | Example |
|----------|---------------------|---------|
| OpenAI | `OPENAI_API_KEY` | sk-... |
| Anthropic | `ANTHROPIC_API_KEY` | sk-ant-... |
| Google | `GOOGLE_API_KEY` | AIza... |
| Groq | `GROQ_API_KEY` | gsk_... |
| DeepSeek | `DEEPSEEK_API_KEY` | sk-... |
| xAI | `XAI_API_KEY` | xai-... |
| Perplexity | `PERPLEXITY_API_KEY` | pplx-... |
| Cohere | `COHERE_API_KEY` | ... |
| Fireworks | `FIREWORKS_API_KEY` | ... |
| HuggingFace | `HUGGINGFACE_API_KEY` | hf_... |
| Mistral | `MISTRAL_API_KEY` | ... |
| NVIDIA | `NVIDIA_API_KEY` | nvapi-... |
| Ollama | `OLLAMA_HOST` | http://localhost:11434 |
| OpenRouter | `OPENROUTER_API_KEY` | ... |
| Together | `TOGETHER_API_KEY` | ... |
| Cerebras | `CEREBRAS_API_KEY` | ... |
| DeepInfra | `DEEPINFRA_API_KEY` | ... |
| SambaNova | `SAMBANOVA_API_KEY` | ... |
| AWS Bedrock | AWS credentials | Via AWS CLI/SDK |
| Azure AI | Azure credentials | Via Azure CLI/SDK |
| IBM WatsonX | `IBM_WATSONX_API_KEY` | ... |
| Meta Llama | `LLAMA_API_KEY` | ... |
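The table above lends itself to a quick preflight check. Here is a hypothetical helper (not part of this project) that resolves provider aliases such as `aws`/`bedrock` and `ibm`/`watsonx` case-insensitively and reports which providers have credentials set; the mapping is abridged to a few rows of the table:

```python
import os

# Canonical provider name -> required environment variable.
# Abridged from the table above; AWS Bedrock and Azure AI authenticate via
# their SDKs, so they are omitted here.
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "ollama": "OLLAMA_HOST",
    "watsonx": "IBM_WATSONX_API_KEY",
    "meta": "LLAMA_API_KEY",
}

# Alternate names listed in parentheses in the provider list.
ALIASES = {"bedrock": "aws", "ibm": "watsonx", "v0": "vercel"}


def normalize(provider: str) -> str:
    """Lower-case a provider name and resolve known aliases."""
    name = provider.strip().lower()
    return ALIASES.get(name, name)


def available_providers(env: dict) -> list:
    """Return canonical names of providers whose env var is set."""
    return sorted(p for p, var in PROVIDER_KEYS.items() if env.get(var))


if __name__ == "__main__":
    print(available_providers(dict(os.environ)))
```

Run it in the same environment as your MCP client to see which providers the server would be able to reach.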
**Note**: Only configure the API keys for providers you plan to use.

## Examples

### Text Generation
```
# Using OpenAI
provider: openai
model: gpt-4o-mini
prompt: Write a haiku about coding

# Using Anthropic
provider: anthropic
model: claude-3-5-sonnet-20241022
prompt: Explain quantum computing in simple terms

# Using Google
provider: google
model: gemini-2.0-flash-exp
prompt: Create a recipe for chocolate chip cookies
```

### Image Generation
```
# Using DALL-E 3
provider: openai
model: dall-e-3
prompt: A serene Japanese garden with cherry blossoms

# Using DALL-E 2
provider: openai
model: dall-e-2
prompt: A futuristic cityscape at sunset
```

## Development

### Prerequisites

- Python 3.11 or higher
- [uv](https://github.com/astral-sh/uv) package manager

### Setup

```bash
git clone https://github.com/gwbischof/outsource-mcp.git
cd outsource-mcp
uv sync
```

### Testing with MCP Inspector

The MCP Inspector allows you to test the server interactively:

```bash
mcp dev server.py
```

### Running Tests

The test suite includes integration tests that verify both text and image generation:

```bash
# Run all tests
uv run pytest
```

**Note:** Integration tests require API keys to be set in your environment.

## Troubleshooting

### Common Issues

1. **"Error: Unknown provider"**
   - Check that you're using a supported provider name from the list above
   - Provider names are case-insensitive

2. **"Error: OpenAI API error"**
   - Verify your API key is correctly set in the environment variables
   - Check that your API key has access to the requested model
   - Ensure you have sufficient credits/quota

3. **"Error: No image was generated"**
   - This can happen if the image generation request fails
   - Try a simpler prompt or a different model (dall-e-2 vs dall-e-3)
4. **Environment variables not working**
   - Make sure to restart your MCP client after updating the configuration
   - Verify the configuration file location for your specific MCP client
   - Check that the environment variables are properly formatted in the configuration

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.