
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha, @knacklabs for the initial implementation!
Play around with our MCP Server on MCP.so's playground or on Klavis AI.
Running with npx:
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Manual installation:
npm install -g firecrawl-mcp
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide
To configure Firecrawl MCP in Cursor v0.48.6, add the following to your MCP configuration:
{
"mcpServers": {
"firecrawl-mcp": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR-API-KEY"
}
}
}
}
To configure Firecrawl MCP in Cursor v0.45.6, use the following command:
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
If you are using Windows and are running into issues, try:
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys
After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Add this to your ./codeium/windsurf/model_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY"
}
}
}
}
To run the server using Streamable HTTP locally instead of the default stdio transport:
env HTTP_STREAMABLE_SERVER=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
Then connect using the URL: http://localhost:3000/mcp
To install Firecrawl for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
{
"mcp": {
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
}
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others:
{
"inputs": [
{
"type": "promptString",
"id": "apiKey",
"description": "Firecrawl API Key",
"password": true
}
],
"servers": {
"firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "${input:apiKey}"
}
}
}
}
FIRECRAWL_API_KEY: Your Firecrawl API key
FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances, e.g. https://firecrawl.your-domain.com
FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before the first retry (default: 1000)
FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)
For cloud API usage with custom retry and credit monitoring:
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key
# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff
# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits
For self-hosted instance:
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com
# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth
# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries
Add this to your claude_desktop_config.json:
{
"mcpServers": {
"mcp-server-firecrawl": {
"command": "npx",
"args": ["-y", "firecrawl-mcp"],
"env": {
"FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
"FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
"FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
"FIRECRAWL_RETRY_MAX_DELAY": "30000",
"FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
"FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
"FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
}
}
}
}
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
const CONFIG = {
retry: {
maxAttempts: 3, // Number of retry attempts for rate-limited requests
initialDelay: 1000, // Initial delay before first retry (in milliseconds)
maxDelay: 10000, // Maximum delay between retries (in milliseconds)
backoffFactor: 2, // Multiplier for exponential backoff
},
credit: {
warningThreshold: 1000, // Warn when credit usage reaches this level
criticalThreshold: 100, // Critical alert when credit usage reaches this level
},
};
These configurations control:
- Retry behavior: automatic retries of rate-limited requests with exponential backoff
- Credit usage monitoring: warning and critical alerts as credit consumption reaches the configured thresholds
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities.
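As an illustration of the retry timing implied by the CONFIG defaults above (this is a sketch, not the server's actual implementation), the delay before retry attempt n grows as initialDelay * backoffFactor^(n-1), capped at maxDelay:

```javascript
// Illustrative only: reproduces the backoff schedule implied by the
// CONFIG defaults above, not the server's actual code.
const retry = { maxAttempts: 3, initialDelay: 1000, maxDelay: 10000, backoffFactor: 2 };

function backoffDelay(attempt, cfg = retry) {
  // attempt is 1-based: the first retry waits initialDelay, then grows exponentially
  const raw = cfg.initialDelay * Math.pow(cfg.backoffFactor, attempt - 1);
  return Math.min(raw, cfg.maxDelay);
}

for (let i = 1; i <= retry.maxAttempts; i++) {
  console.log(`retry ${i}: ${backoffDelay(i)}ms`);
}
```

With the defaults this yields 1000ms, 2000ms, and 4000ms; raising FIRECRAWL_RETRY_BACKOFF_FACTOR makes later retries back off faster until the maxDelay cap kicks in.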
Use this guide to select the right tool for your task:
| Tool | Best for | Returns |
|---|---|---|
| scrape | Single page content | JSON (preferred) or markdown |
| batch_scrape | Multiple known URLs | JSON (preferred) or markdown[] |
| map | Discovering URLs on a site | URL[] |
| crawl | Multi-page extraction (with limits) | markdown/html[] |
| search | Web search for info | results[] |
| agent | Complex multi-source research | JSON (structured data) |
| browser | Interactive multi-step automation | Session with live browser |
When using scrape or batch_scrape, choose the right format: JSON with a prompt and schema when you need specific structured fields, markdown when you need the full page content.
Scrape Tool (firecrawl_scrape): Scrape content from a single URL with advanced options.
Best for:
Not recommended for:
Common mistakes:
Choosing the right format:
Prompt Example:
"Get the product details from https://example.com/product."
Usage Example (JSON format - preferred):
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com/product",
"formats": [{
"type": "json",
"prompt": "Extract the product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
}
}]
}
}
Usage Example (markdown format - when full content needed):
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com/article",
"formats": ["markdown"],
"onlyMainContent": true
}
}
Usage Example (branding format - extract brand identity):
{
"name": "firecrawl_scrape",
"arguments": {
"url": "https://example.com",
"formats": ["branding"]
}
}
Branding format: Extracts comprehensive brand identity (colors, fonts, typography, spacing, logo, UI components) for design analysis or style replication.
Returns:
Batch Scrape Tool (firecrawl_batch_scrape): Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
Best for:
Not recommended for:
Common mistakes:
Prompt Example:
"Get the content of these three blog posts: [url1, url2, url3]."
Usage Example:
{
"name": "firecrawl_batch_scrape",
"arguments": {
"urls": ["https://example1.com", "https://example2.com"],
"options": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns:
{
"content": [
{
"type": "text",
"text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
}
],
"isError": false
}
Check Batch Status (firecrawl_check_batch_status): Check the status of a batch operation.
{
"name": "firecrawl_check_batch_status",
"arguments": {
"id": "batch_1"
}
}
Map Tool (firecrawl_map): Map a website to discover all indexed URLs on the site.
Best for:
Not recommended for:
Common mistakes:
Prompt Example:
"List all URLs on example.com."
Usage Example:
{
"name": "firecrawl_map",
"arguments": {
"url": "https://example.com"
}
}
Returns:
Search Tool (firecrawl_search): Search the web and optionally extract content from search results.
Best for:
Not recommended for:
Common mistakes:
Usage Example:
{
"name": "firecrawl_search",
"arguments": {
"query": "latest AI research papers 2023",
"limit": 5,
"lang": "en",
"country": "us",
"scrapeOptions": {
"formats": ["markdown"],
"onlyMainContent": true
}
}
}
Returns:
Prompt Example:
"Find the latest research papers on AI published in 2023."
Crawl Tool (firecrawl_crawl): Start an asynchronous crawl job on a website and extract content from all pages.
Best for:
Not recommended for:
Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
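The map + batch_scrape alternative can be sketched as follows. This is an illustrative sketch, not library code: `callTool` is a hypothetical stand-in for however your MCP client invokes tools, and it assumes firecrawl_map returns an array of links:

```javascript
// Hypothetical `callTool(name, args)` stands in for your MCP client's
// tool invocation; assumes firecrawl_map returns { links: string[] }.
async function mapThenBatchScrape(callTool, site) {
  // 1. Discover URLs on the site
  const { links } = await callTool("firecrawl_map", { url: site });
  // 2. Keep only the pages you actually need, and cap the count
  const wanted = links.filter((u) => u.includes("/blog/")).slice(0, 20);
  // 3. Scrape just those pages
  return callTool("firecrawl_batch_scrape", {
    urls: wanted,
    options: { formats: ["markdown"], onlyMainContent: true },
  });
}
```

Filtering between the two calls is what gives you the control that a raw crawl lacks: you decide exactly which pages are fetched and how many.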
Common mistakes:
Prompt Example:
"Get all blog posts from the first two levels of example.com/blog."
Usage Example:
{
"name": "firecrawl_crawl",
"arguments": {
"url": "https://example.com/blog/*",
"maxDepth": 2,
"limit": 100,
"allowExternalLinks": false,
"deduplicateSimilarURLs": true
}
}
Returns:
{
"content": [
{
"type": "text",
"text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
}
],
"isError": false
}
Check Crawl Status (firecrawl_check_crawl_status): Check the status of a crawl job.
{
"name": "firecrawl_check_crawl_status",
"arguments": {
"id": "550e8400-e29b-41d4-a716-446655440000"
}
}
Returns:
Extract Tool (firecrawl_extract): Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
Best for:
Not recommended for:
Arguments:
urls: Array of URLs to extract information from
prompt: Custom prompt for the LLM extraction
systemPrompt: System prompt to guide the LLM
schema: JSON schema for structured data extraction
allowExternalLinks: Allow extraction from external links
enableWebSearch: Enable web search for additional context
includeSubdomains: Include subdomains in extraction
When using a self-hosted instance, the extraction will use your configured LLM. For the cloud API, it uses Firecrawl's managed LLM service.
Prompt Example:
"Extract the product name, price, and description from these product pages."
Usage Example:
{
"name": "firecrawl_extract",
"arguments": {
"urls": ["https://example.com/page1", "https://example.com/page2"],
"prompt": "Extract product information including name, price, and description",
"systemPrompt": "You are a helpful assistant that extracts product information",
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"price": { "type": "number" },
"description": { "type": "string" }
},
"required": ["name", "price"]
},
"allowExternalLinks": false,
"enableWebSearch": false,
"includeSubdomains": false
}
}
Returns:
{
"content": [
{
"type": "text",
"text": {
"name": "Example Product",
"price": 99.99,
"description": "This is an example product description"
}
}
],
"isError": false
}
Agent Tool (firecrawl_agent): Autonomous web research agent. This is a separate AI agent layer that independently browses the internet, searches for information, navigates through pages, and extracts structured data based on your query.
How it works:
The agent performs web searches, follows links, reads pages, and gathers data autonomously. This runs asynchronously - it returns a job ID immediately, and you poll firecrawl_agent_status to check when complete and retrieve results.
Async workflow:
1. Call firecrawl_agent with your prompt/schema → returns a job ID
2. Poll firecrawl_agent_status with the job ID to check progress and retrieve results
Best for:
Not recommended for:
Arguments:
prompt: Natural language description of the data you want (required, max 10,000 characters)
urls: Optional array of URLs to focus the agent on specific pages
schema: Optional JSON schema for structured output
Prompt Example:
"Find the founders of Firecrawl and their backgrounds"
Usage Example (start agent, then poll for results):
{
"name": "firecrawl_agent",
"arguments": {
"prompt": "Find the top 5 AI startups founded in 2024 and their funding amounts",
"schema": {
"type": "object",
"properties": {
"startups": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": { "type": "string" },
"funding": { "type": "string" },
"founded": { "type": "string" }
}
}
}
}
}
}
}
Then poll with firecrawl_agent_status using the returned job ID.
Usage Example (with URLs - agent focuses on specific pages):
{
"name": "firecrawl_agent",
"arguments": {
"urls": ["https://docs.firecrawl.dev", "https://firecrawl.dev/pricing"],
"prompt": "Compare the features and pricing information from these pages"
}
}
Returns:
A job ID; use firecrawl_agent_status to poll for results.
Agent Status (firecrawl_agent_status): Check the status of an agent job and retrieve results when complete. Use this to poll for results after starting an agent.
Polling pattern: Agent research can take minutes for complex queries. Poll this endpoint periodically (e.g., every 10-30 seconds) until status is "completed" or "failed".
{
"name": "firecrawl_agent_status",
"arguments": {
"id": "550e8400-e29b-41d4-a716-446655440000"
}
}
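The polling pattern above can be sketched as a small helper. This is an illustrative sketch: `callTool` is a hypothetical stand-in for however your MCP client invokes tools:

```javascript
// Sketch of the agent polling loop; `callTool(name, args)` is a hypothetical
// stand-in for your MCP client's tool invocation.
async function pollAgentStatus(callTool, jobId, { intervalMs = 10000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await callTool("firecrawl_agent_status", { id: jobId });
    if (res.status === "completed") return res; // extracted data is ready
    if (res.status === "failed") throw new Error("agent job failed");
    // still "processing": wait before checking again
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for agent job");
}
```

An interval of 10-30 seconds is a reasonable default given that complex research can take minutes; the maxTries cap keeps the loop from polling forever.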
Possible statuses:
processing: Agent is still researching - check back later
completed: Research finished - response includes the extracted data
failed: An error occurred
Browser Create (firecrawl_browser_create): Create a persistent cloud browser session for interactive automation.
Best for:
Arguments:
ttl: Total session lifetime in seconds (30-3600, optional)
activityTtl: Idle timeout in seconds (10-3600, optional)
streamWebView: Whether to enable live view streaming (optional)
Usage Example:
{
"name": "firecrawl_browser_create",
"arguments": {
"ttl": 600
}
}
Returns:
Browser Execute (firecrawl_browser_execute): Execute code in a browser session. Supports agent-browser commands (bash), Python, or JavaScript.
Recommended: Use bash with agent-browser commands (pre-installed in every sandbox):
{
"name": "firecrawl_browser_execute",
"arguments": {
"sessionId": "session-id-here",
"code": "agent-browser open https://example.com",
"language": "bash"
}
}
Common agent-browser commands:
| Command | Description |
|---|---|
| agent-browser open | Navigate to URL |
| agent-browser snapshot | Accessibility tree with clickable refs |
| agent-browser click @e5 | Click element by ref from snapshot |
| agent-browser type @e3 "text" | Type into element |
| agent-browser get title | Get page title |
| agent-browser screenshot | Take screenshot |
| agent-browser --help | Full command reference |
For Playwright scripting, use Python:
{
"name": "firecrawl_browser_execute",
"arguments": {
"sessionId": "session-id-here",
"code": "await page.goto('https://example.com')\ntitle = await page.title()\nprint(title)",
"language": "python"
}
}
Browser List (firecrawl_browser_list): List browser sessions, optionally filtered by status.
{
"name": "firecrawl_browser_list",
"arguments": {
"status": "active"
}
}
Browser Delete (firecrawl_browser_delete): Destroy a browser session.
{
"name": "firecrawl_browser_delete",
"arguments": {
"sessionId": "session-id-here"
}
}
The server includes comprehensive logging. Example log messages:
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
The server provides robust error handling. Example error response:
{
"content": [
{
"type": "text",
"text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
}
],
"isError": true
}
# Install dependencies
npm install
# Build
npm run build
# Run tests
npm test
Thanks to @vrknetha, @cawstudios for the initial implementation!
Thanks to MCP.so and Klavis AI for hosting and @gstarwd, @xiangkaiz and @zihaolin96 for integrating our server.
MIT License - see LICENSE file for details