<div align="center">
  <a name="readme-top"></a>
  <img
    src="https://raw.githubusercontent.com/firecrawl/firecrawl-mcp-server/main/img/fire.png"
    height="140"
  >
</div>

# Firecrawl MCP Server

A Model Context Protocol (MCP) server implementation that integrates with [Firecrawl](https://github.com/firecrawl/firecrawl) for web scraping capabilities.

> Big thanks to [@vrknetha](https://github.com/vrknetha) and [@knacklabs](https://www.knacklabs.ai) for the initial implementation!

## Features

- Web scraping, crawling, and discovery
- Search and content extraction
- Deep research and batch scraping
- Cloud browser sessions with agent-browser automation
- Automatic retries and rate limiting
- Cloud and self-hosted support
- SSE support

> Play around with [our MCP Server on MCP.so's playground](https://mcp.so/playground?server=firecrawl-mcp-server) or on [Klavis AI](https://www.klavis.ai/mcp-servers).

## Installation

### Running with npx

```bash
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```

### Manual Installation

```bash
npm install -g firecrawl-mcp
```

### Running on Cursor

> Note: Requires Cursor version 0.45.6+. For the most up-to-date configuration instructions, refer to the official [Cursor MCP Server Configuration Guide](https://docs.cursor.com/context/model-context-protocol#configuring-mcp-servers).

To configure Firecrawl MCP in Cursor **v0.48.6**:

1. Open Cursor Settings
2. Go to Features > MCP Servers
3. Click "+ Add new global MCP server"
4. Enter the following code:

   ```json
   {
     "mcpServers": {
       "firecrawl-mcp": {
         "command": "npx",
         "args": ["-y", "firecrawl-mcp"],
         "env": {
           "FIRECRAWL_API_KEY": "YOUR-API-KEY"
         }
       }
     }
   }
   ```
To configure Firecrawl MCP in Cursor **v0.45.6**:

1. Open Cursor Settings
2. Go to Features > MCP Servers
3. Click "+ Add New MCP Server"
4. Enter the following:
   - Name: "firecrawl-mcp" (or your preferred name)
   - Type: "command"
   - Command: `env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp`

> If you are using Windows and are running into issues, try `cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"`

Replace `your-api-key` with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

### Running on Windsurf

Add this to your `./codeium/windsurf/model_config.json`:

```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```

### Running with Streamable HTTP Local Mode

To run the server locally using Streamable HTTP instead of the default stdio transport:

```bash
env HTTP_STREAMABLE_SERVER=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```

Use the URL: http://localhost:3000/mcp

### Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mendableai/mcp-server-firecrawl):

```bash
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
```
### Running on VS Code

For one-click installation, click one of the install buttons below:

[Install in VS Code](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D) [Install in VS Code Insiders](https://insiders.vscode.dev/redirect/mcp/install?name=firecrawl&inputs=%5B%7B%22type%22%3A%22promptString%22%2C%22id%22%3A%22apiKey%22%2C%22description%22%3A%22Firecrawl%20API%20Key%22%2C%22password%22%3Atrue%7D%5D&config=%7B%22command%22%3A%22npx%22%2C%22args%22%3A%5B%22-y%22%2C%22firecrawl-mcp%22%5D%2C%22env%22%3A%7B%22FIRECRAWL_API_KEY%22%3A%22%24%7Binput%3AapiKey%7D%22%7D%7D&quality=insiders)

For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.

```json
{
  "mcp": {
    "inputs": [
      {
        "type": "promptString",
        "id": "apiKey",
        "description": "Firecrawl API Key",
        "password": true
      }
    ],
    "servers": {
      "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {
          "FIRECRAWL_API_KEY": "${input:apiKey}"
        }
      }
    }
  }
}
```
Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others:

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "apiKey",
      "description": "Firecrawl API Key",
      "password": true
    }
  ],
  "servers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "${input:apiKey}"
      }
    }
  }
}
```

## Configuration

### Environment Variables

#### Required for Cloud API

- `FIRECRAWL_API_KEY`: Your Firecrawl API key
  - Required when using the cloud API (default)
  - Optional when using a self-hosted instance with `FIRECRAWL_API_URL`
- `FIRECRAWL_API_URL` (Optional): Custom API endpoint for self-hosted instances
  - Example: `https://firecrawl.your-domain.com`
  - If not provided, the cloud API will be used (requires an API key)

#### Optional Configuration

##### Retry Configuration

- `FIRECRAWL_RETRY_MAX_ATTEMPTS`: Maximum number of retry attempts (default: 3)
- `FIRECRAWL_RETRY_INITIAL_DELAY`: Initial delay in milliseconds before the first retry (default: 1000)
- `FIRECRAWL_RETRY_MAX_DELAY`: Maximum delay in milliseconds between retries (default: 10000)
- `FIRECRAWL_RETRY_BACKOFF_FACTOR`: Exponential backoff multiplier (default: 2)

##### Credit Usage Monitoring

- `FIRECRAWL_CREDIT_WARNING_THRESHOLD`: Credit usage warning threshold (default: 1000)
- `FIRECRAWL_CREDIT_CRITICAL_THRESHOLD`: Credit usage critical threshold (default: 100)
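Taken together, the retry variables define a capped exponential backoff schedule. A minimal sketch of how such a schedule can be computed from these values (illustrative only, not the server's actual implementation):

```typescript
// Compute the delay before each retry attempt, mirroring the
// FIRECRAWL_RETRY_* variables (values below are the documented defaults).
function retryDelays(
  maxAttempts: number,
  initialDelay: number,
  maxDelay: number,
  backoffFactor: number
): number[] {
  const delays: number[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Exponential growth, capped at maxDelay.
    delays.push(Math.min(initialDelay * backoffFactor ** (attempt - 1), maxDelay));
  }
  return delays;
}

// With the defaults (3 attempts, 1000 ms initial, 10000 ms cap, factor 2):
console.log(retryDelays(3, 1000, 10000, 2)); // [1000, 2000, 4000]
```

Raising `FIRECRAWL_RETRY_BACKOFF_FACTOR` makes the delays grow faster; `FIRECRAWL_RETRY_MAX_DELAY` keeps the wait bounded.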
### Configuration Examples

For cloud API usage with custom retry and credit monitoring:

```bash
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5        # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000    # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000       # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3      # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000   # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500   # Critical at 500 credits
```

For a self-hosted instance:

```bash
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key   # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500   # Start with faster retries
```

### Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}
```

### System Configuration

The server includes several configurable parameters that can be set via environment variables. These are the default values if not configured:

```typescript
const CONFIG = {
  retry: {
    maxAttempts: 3, // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000, // Maximum delay between retries (in milliseconds)
    backoffFactor: 2, // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};
```
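A sketch of how the `FIRECRAWL_RETRY_*` environment variables could be layered over these defaults (`envNumber` is a hypothetical helper for illustration, not the server's actual code):

```typescript
// Read a numeric environment variable, falling back to a default when
// the variable is unset or not a number. Hypothetical helper showing
// how env overrides can be merged with the defaults above.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : fallback;
}

const retryConfig = {
  maxAttempts: envNumber("FIRECRAWL_RETRY_MAX_ATTEMPTS", 3),
  initialDelay: envNumber("FIRECRAWL_RETRY_INITIAL_DELAY", 1000),
  maxDelay: envNumber("FIRECRAWL_RETRY_MAX_DELAY", 10000),
  backoffFactor: envNumber("FIRECRAWL_RETRY_BACKOFF_FACTOR", 2),
};

console.log(retryConfig);
```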
These configurations control:

1. **Retry Behavior**

   - Automatically retries failed requests due to rate limits
   - Uses exponential backoff to avoid overwhelming the API
   - Example: With default settings, retries will be attempted at:
     - 1st retry: 1 second delay
     - 2nd retry: 2 seconds delay
     - 3rd retry: 4 seconds delay (capped at maxDelay)

2. **Credit Usage Monitoring**

   - Tracks API credit consumption for cloud API usage
   - Provides warnings at specified thresholds
   - Helps prevent unexpected service interruption
   - Example: With default settings:
     - Warning at 1000 credits remaining
     - Critical alert at 100 credits remaining

### Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

- Automatic rate limit handling with exponential backoff
- Efficient parallel processing for batch operations
- Smart request queuing and throttling
- Automatic retries for transient errors

## How to Choose a Tool

Use this guide to select the right tool for your task:

- **If you know the exact URL(s) you want:**
  - For one: use **scrape** (with JSON format for structured data)
  - For many: use **batch_scrape**
- **If you need to discover URLs on a site:** use **map**
- **If you want to search the web for info:** use **search**
- **If you need complex research across multiple unknown sources:** use **agent**
- **If you want to analyze a whole site or section:** use **crawl** (with limits!)
- **If you need interactive browser automation** (click, type, navigate): use **browser**
### Quick Reference Table

| Tool         | Best for                            | Returns                        |
| ------------ | ----------------------------------- | ------------------------------ |
| scrape       | Single page content                 | JSON (preferred) or markdown   |
| batch_scrape | Multiple known URLs                 | JSON (preferred) or markdown[] |
| map          | Discovering URLs on a site          | URL[]                          |
| crawl        | Multi-page extraction (with limits) | markdown/html[]                |
| search       | Web search for info                 | results[]                      |
| agent        | Complex multi-source research       | JSON (structured data)         |
| browser      | Interactive multi-step automation   | Session with live browser      |
### Format Selection Guide

When using `scrape` or `batch_scrape`, choose the right format:

- **JSON format (recommended for most cases):** Use when you need specific data from a page. Define a schema based on what you need to extract. This keeps responses small and avoids context window overflow.
- **Markdown format (use sparingly):** Only when you genuinely need the full page content, such as reading an entire article for summarization or analyzing page structure.

## Available Tools

### 1. Scrape Tool (`firecrawl_scrape`)

Scrape content from a single URL with advanced options.

**Best for:**

- Single page content extraction, when you know exactly which page contains the information.

**Not recommended for:**

- Extracting content from multiple pages (use batch_scrape for known URLs, or map + batch_scrape to discover URLs first, or crawl for full page content)
- When you're unsure which page contains the information (use search)

**Common mistakes:**

- Using scrape for a list of URLs (use batch_scrape instead).
- Using markdown format by default (use JSON format to extract only what you need).

**Choosing the right format:**

- **JSON format (preferred):** For most use cases, use JSON format with a schema to extract only the specific data needed. This keeps responses focused and prevents context window overflow.
- **Markdown format:** Only when the task genuinely requires full page content (e.g., summarizing an entire article, analyzing page structure).

**Prompt Example:**

> "Get the product details from https://example.com/product."

**Usage Example (JSON format - preferred):**

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/product",
    "formats": [
      {
        "type": "json",
        "prompt": "Extract the product information",
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "number" },
            "description": { "type": "string" }
          },
          "required": ["name", "price"]
        }
      }
    ]
  }
}
```

**Usage Example (markdown format - when full content is needed):**

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/article",
    "formats": ["markdown"],
    "onlyMainContent": true
  }
}
```

**Usage Example (branding format - extract brand identity):**

```json
{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["branding"]
  }
}
```

**Branding format:** Extracts a comprehensive brand identity profile (colors, fonts, typography, spacing, logo, UI components) for design analysis or style replication.

**Returns:**

- JSON structured data, markdown, branding profile, or other formats as specified.
### 2. Batch Scrape Tool (`firecrawl_batch_scrape`)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

**Best for:**

- Retrieving content from multiple pages, when you know exactly which pages to scrape.

**Not recommended for:**

- Discovering URLs (use map first if you don't know the URLs)
- Scraping a single page (use scrape)

**Common mistakes:**

- Using batch_scrape with too many URLs at once (may hit rate limits or token overflow)

**Prompt Example:**

> "Get the content of these three blog posts: [url1, url2, url3]."

**Usage Example:**

```json
{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```

**Returns:**

- Response includes an operation ID for status checking:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}
```

### 3. Check Batch Status (`firecrawl_check_batch_status`)

Check the status of a batch operation.

```json
{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}
```
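Because batch scraping is asynchronous, clients typically poll until the job settles. A generic polling sketch; the `checkStatus` callback is a stand-in for however your MCP client invokes `firecrawl_check_batch_status` (or `firecrawl_check_crawl_status`):

```typescript
// Poll an async operation until it completes or fails. `checkStatus`
// should resolve to a status string; how it calls the MCP tool is up
// to your client. Illustrative sketch, not part of the server.
async function pollUntilDone(
  checkStatus: () => Promise<string>,
  intervalMs = 5000,
  maxPolls = 60
): Promise<string> {
  for (let i = 0; i < maxPolls; i++) {
    const status = await checkStatus();
    if (status === "completed" || status === "failed") return status;
    // Wait before the next status check.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Polling timed out");
}
```

Keeping a bounded `maxPolls` avoids polling forever if a job stalls.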
### 4. Map Tool (`firecrawl_map`)

Map a website to discover all indexed URLs on the site.

**Best for:**

- Discovering URLs on a website before deciding what to scrape
- Finding specific sections of a website

**Not recommended for:**

- When you already know which specific URL you need (use scrape or batch_scrape)
- When you need the content of the pages (use scrape after mapping)

**Common mistakes:**

- Using crawl to discover URLs instead of map

**Prompt Example:**

> "List all URLs on example.com."

**Usage Example:**

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}
```

**Returns:**

- Array of URLs found on the site

### 5. Search Tool (`firecrawl_search`)

Search the web and optionally extract content from search results.

**Best for:**

- Finding specific information across multiple websites, when you don't know which website has the information.
- When you need the most relevant content for a query

**Not recommended for:**

- When you already know which website to scrape (use scrape)
- When you need comprehensive coverage of a single website (use map or crawl)

**Common mistakes:**

- Using crawl or map for open-ended questions (use search instead)

**Prompt Example:**

> "Find the latest research papers on AI published in 2023."

**Usage Example:**

```json
{
  "name": "firecrawl_search",
  "arguments": {
    "query": "latest AI research papers 2023",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}
```

**Returns:**

- Array of search results (with optional scraped content)
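The map + batch_scrape pattern recommended above usually means filtering the mapped URLs to the section you care about and splitting them into modest batches. A small sketch of that planning step (`planBatches` is a purely illustrative helper):

```typescript
// Filter mapped URLs to a section of interest and split them into
// modestly sized batches for firecrawl_batch_scrape, to avoid rate
// limits and token overflow. Illustrative helper only.
function planBatches(urls: string[], prefix: string, batchSize = 10): string[][] {
  const selected = urls.filter((u) => u.startsWith(prefix));
  const batches: string[][] = [];
  for (let i = 0; i < selected.length; i += batchSize) {
    batches.push(selected.slice(i, i + batchSize));
  }
  return batches;
}

// URLs as they might come back from firecrawl_map:
const mapped = [
  "https://example.com/blog/a",
  "https://example.com/blog/b",
  "https://example.com/about",
];
console.log(planBatches(mapped, "https://example.com/blog/", 2));
// [["https://example.com/blog/a", "https://example.com/blog/b"]]
```

Each inner array can then be passed as the `urls` argument of one `firecrawl_batch_scrape` call.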
### 6. Crawl Tool (`firecrawl_crawl`)

Starts an asynchronous crawl job on a website and extracts content from all pages.

**Best for:**

- Extracting content from multiple related pages, when you need comprehensive coverage.

**Not recommended for:**

- Extracting content from a single page (use scrape)
- When token limits are a concern (use map + batch_scrape)
- When you need fast results (crawling can be slow)

**Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

**Common mistakes:**

- Setting limit or maxDepth too high (causes token overflow)
- Using crawl for a single page (use scrape instead)

**Prompt Example:**

> "Get all blog posts from the first two levels of example.com/blog."

**Usage Example:**

```json
{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}
```

**Returns:**

- Response includes an operation ID for status checking:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Started crawl for: https://example.com/* with job ID: 550e8400-e29b-41d4-a716-446655440000. Use firecrawl_check_crawl_status to check progress."
    }
  ],
  "isError": false
}
```

### 7. Check Crawl Status (`firecrawl_check_crawl_status`)

Check the status of a crawl job.

```json
{
  "name": "firecrawl_check_crawl_status",
  "arguments": {
    "id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```

**Returns:**

- Response includes the status of the crawl job and, when complete, the crawled content.
### 8. Extract Tool (`firecrawl_extract`)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

**Best for:**

- Extracting specific structured data like prices, names, details.

**Not recommended for:**

- When you need the full content of a page (use scrape)
- When you're not looking for specific structured data

**Arguments:**

- `urls`: Array of URLs to extract information from
- `prompt`: Custom prompt for the LLM extraction
- `systemPrompt`: System prompt to guide the LLM
- `schema`: JSON schema for structured data extraction
- `allowExternalLinks`: Allow extraction from external links
- `enableWebSearch`: Enable web search for additional context
- `includeSubdomains`: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. The cloud API uses Firecrawl's managed LLM service.

**Prompt Example:**

> "Extract the product name, price, and description from these product pages."

**Usage Example:**

```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
```

**Returns:**

- Extracted structured data as defined by your schema:

```json
{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}
```
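Since the extract result is shaped by your schema, it can be worth verifying that the fields you marked as `required` actually came back before using the data. A minimal check (illustrative; a real client might run a full JSON Schema validator instead):

```typescript
// Check that an extraction result contains every field listed as
// "required" in the schema. Sketch only; not a full schema validator.
function hasRequiredFields(
  result: Record<string, unknown>,
  required: string[]
): boolean {
  return required.every((key) => result[key] !== undefined && result[key] !== null);
}

// Matches the product schema from the usage example above:
const extracted = { name: "Example Product", price: 99.99 };
console.log(hasRequiredFields(extracted, ["name", "price"])); // true
console.log(hasRequiredFields(extracted, ["name", "price", "sku"])); // false
```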
### 9. Agent Tool (`firecrawl_agent`)

Autonomous web research agent. This is a separate AI agent layer that independently browses the internet, searches for information, navigates through pages, and extracts structured data based on your query.

**How it works:**

The agent performs web searches, follows links, reads pages, and gathers data autonomously. It runs **asynchronously**: it returns a job ID immediately, and you poll `firecrawl_agent_status` to check when it is complete and to retrieve results.

**Async workflow:**

1. Call `firecrawl_agent` with your prompt/schema → returns a job ID
2. Do other work while the agent researches (can take minutes for complex queries)
3. Poll `firecrawl_agent_status` with the job ID to check progress
4. When status is "completed", the response includes the extracted data

**Best for:**

- Complex research tasks where you don't know the exact URLs
- Multi-source data gathering
- Finding information scattered across the web
- Tasks where you can do other work while waiting for results

**Not recommended for:**

- Simple single-page scraping where you know the URL (use scrape with JSON format - faster and cheaper)

**Arguments:**

- `prompt`: Natural language description of the data you want (required, max 10,000 characters)
- `urls`: Optional array of URLs to focus the agent on specific pages
- `schema`: Optional JSON schema for structured output

**Prompt Example:**

> "Find the founders of Firecrawl and their backgrounds"

**Usage Example (start agent, then poll for results):**

```json
{
  "name": "firecrawl_agent",
  "arguments": {
    "prompt": "Find the top 5 AI startups founded in 2024 and their funding amounts",
    "schema": {
      "type": "object",
      "properties": {
        "startups": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "funding": { "type": "string" },
              "founded": { "type": "string" }
            }
          }
        }
      }
    }
  }
}
```
Then poll with `firecrawl_agent_status` using the returned job ID.

**Usage Example (with URLs - agent focuses on specific pages):**

```json
{
  "name": "firecrawl_agent",
  "arguments": {
    "urls": ["https://docs.firecrawl.dev", "https://firecrawl.dev/pricing"],
    "prompt": "Compare the features and pricing information from these pages"
  }
}
```

**Returns:**

- Job ID for status checking. Use `firecrawl_agent_status` to poll for results.

### 10. Check Agent Status (`firecrawl_agent_status`)

Check the status of an agent job and retrieve results when complete. Use this to poll for results after starting an agent.

**Polling pattern:** Agent research can take minutes for complex queries. Poll this endpoint periodically (e.g., every 10-30 seconds) until the status is "completed" or "failed".

```json
{
  "name": "firecrawl_agent_status",
  "arguments": {
    "id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
```

**Possible statuses:**

- `processing`: Agent is still researching - check back later
- `completed`: Research finished - response includes the extracted data
- `failed`: An error occurred

### 11. Browser Create (`firecrawl_browser_create`)

Create a persistent cloud browser session for interactive automation.

**Best for:**

- Multi-step browser automation (navigate, click, fill forms, extract data)
- Interactive workflows that require maintaining state across actions
- Testing and debugging web pages in a live browser

**Arguments:**

- `ttl`: Total session lifetime in seconds (30-3600, optional)
- `activityTtl`: Idle timeout in seconds (10-3600, optional)
- `streamWebView`: Whether to enable live view streaming (optional)

**Usage Example:**

```json
{
  "name": "firecrawl_browser_create",
  "arguments": {
    "ttl": 600
  }
}
```

**Returns:**

- Session ID, CDP URL, and live view URL
### 12. Browser Execute (`firecrawl_browser_execute`)

Execute code in a browser session. Supports agent-browser commands (bash), Python, or JavaScript.

**Recommended: Use bash with agent-browser commands** (pre-installed in every sandbox):

```json
{
  "name": "firecrawl_browser_execute",
  "arguments": {
    "sessionId": "session-id-here",
    "code": "agent-browser open https://example.com",
    "language": "bash"
  }
}
```

**Common agent-browser commands:**

| Command                         | Description                            |
| ------------------------------- | -------------------------------------- |
| `agent-browser open <url>`      | Navigate to URL                        |
| `agent-browser snapshot`        | Accessibility tree with clickable refs |
| `agent-browser click @e5`       | Click element by ref from snapshot     |
| `agent-browser type @e3 "text"` | Type into element                      |
| `agent-browser get title`       | Get page title                         |
| `agent-browser screenshot`      | Take screenshot                        |
| `agent-browser --help`          | Full command reference                 |

**For Playwright scripting, use Python:**

```json
{
  "name": "firecrawl_browser_execute",
  "arguments": {
    "sessionId": "session-id-here",
    "code": "await page.goto('https://example.com')\ntitle = await page.title()\nprint(title)",
    "language": "python"
  }
}
```

### 13. Browser List (`firecrawl_browser_list`)

List browser sessions, optionally filtered by status.

```json
{
  "name": "firecrawl_browser_list",
  "arguments": {
    "status": "active"
  }
}
```
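Putting the browser tools together, a typical session lifecycle is create, then one or more execute calls, then delete. Sketched as an ordered sequence of tool calls (the session ID shown is illustrative; use the one returned by the create call):

```json
[
  { "name": "firecrawl_browser_create", "arguments": { "ttl": 600 } },
  {
    "name": "firecrawl_browser_execute",
    "arguments": {
      "sessionId": "session-id-from-create",
      "code": "agent-browser open https://example.com",
      "language": "bash"
    }
  },
  {
    "name": "firecrawl_browser_execute",
    "arguments": {
      "sessionId": "session-id-from-create",
      "code": "agent-browser snapshot",
      "language": "bash"
    }
  },
  {
    "name": "firecrawl_browser_delete",
    "arguments": { "sessionId": "session-id-from-create" }
  }
]
```

Deleting the session when finished frees the sandbox rather than waiting for the `ttl` to expire.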
### 14. Browser Delete (`firecrawl_browser_delete`)

Destroy a browser session.

```json
{
  "name": "firecrawl_browser_delete",
  "arguments": {
    "sessionId": "session-id-here"
  }
}
```

## Logging System

The server includes comprehensive logging:

- Operation status and progress
- Performance metrics
- Credit usage monitoring
- Rate limit tracking
- Error conditions

Example log messages:

```
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
```

## Error Handling

The server provides robust error handling:

- Automatic retries for transient errors
- Rate limit handling with backoff
- Detailed error messages
- Credit usage warnings
- Network resilience

Example error response:

```json
{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}
```

## Development

```bash
# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test
```

### Contributing

1. Fork the repository
2. Create your feature branch
3. Run tests: `npm test`
4. Submit a pull request

### Thanks to contributors

Thanks to [@vrknetha](https://github.com/vrknetha) and [@cawstudios](https://caw.tech) for the initial implementation!

Thanks to MCP.so and Klavis AI for hosting, and to [@gstarwd](https://github.com/gstarwd), [@xiangkaiz](https://github.com/xiangkaiz), and [@zihaolin96](https://github.com/zihaolin96) for integrating our server.

## License

MIT License - see LICENSE file for details