# Macrocosmos MCP

<p align="center">
  Official Macrocosmos <a href="https://github.com/modelcontextprotocol">Model Context Protocol (MCP)</a> server that enables interaction with X (Twitter) and Reddit, powered by Data Universe (SN13) on Bittensor. This server allows MCP clients like <a href="https://www.anthropic.com/claude">Claude Desktop</a>, <a href="https://www.cursor.so">Cursor</a>, <a href="https://codeium.com/windsurf">Windsurf</a>, <a href="https://github.com/openai/openai-agents-python">OpenAI Agents</a>, and others to fetch real-time social media data.
</p>

---

## Quickstart with Claude Desktop

1. Get your API key from [Macrocosmos](https://app.macrocosmos.ai/account?tab=api-keys). There is a free tier with $5 of credits to start.
2. Install `uv` (Python package manager) with `curl -LsSf https://astral.sh/uv/install.sh | sh`, or see the `uv` [repo](https://github.com/astral-sh/uv) for additional install methods.
3. Go to Claude > Settings > Developer > Edit Config > `claude_desktop_config.json` and add the following:

```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uvx",
      "args": ["macrocosmos-mcp"],
      "env": {
        "MC_API": "<insert-your-api-key-here>"
      }
    }
  }
}
```

---

## Available Tools

### 1. `query_on_demand_data` - Real-time Social Media Queries

Fetch real-time data from X (Twitter) and Reddit. Best for quick queries up to 1000 results.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `source` | string | **REQUIRED**. Platform: `'X'` or `'REDDIT'` (case-sensitive) |
| `usernames` | list | Up to 5 usernames. For X: `@` is optional. Not available for Reddit |
| `keywords` | list | Up to 5 keywords. For Reddit: first item is the subreddit (e.g., `'r/MachineLearning'`) |
| `start_date` | string | ISO format (e.g., `'2024-01-01T00:00:00Z'`). Defaults to 24h ago |
| `end_date` | string | ISO format. Defaults to now |
| `limit` | int | Max results, 1-1000. Default: 10 |
| `keyword_mode` | string | `'any'` (default) or `'all'` |

**Example prompts:**
- "What has @elonmusk been posting about today?"
- "Get me the latest posts from r/bittensor about dTAO"
- "Fetch 50 tweets about #AI from the last week"

---

### 2. `create_gravity_task` - Large-Scale Data Collection

Create a Gravity task for collecting large datasets over 7 days. Use this when you need more than 1000 results.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `tasks` | list | **REQUIRED**. List of task objects (see below) |
| `name` | string | Optional name for the task |
| `email` | string | Email for notification when complete |

**Task object structure:**
```json
{
  "platform": "x",        // 'x' or 'reddit'
  "topic": "#Bittensor",  // For X: MUST start with '#' or '$'
  "keyword": "dTAO"       // Optional: filter within topic
}
```

**Important:** For X (Twitter), topics MUST start with `#` or `$` (e.g., `#ai`, `$BTC`). Plain keywords are rejected.

**Example prompts:**
- "Create a gravity task to collect #Bittensor tweets for the next 7 days"
- "Start collecting data from r/MachineLearning about neural networks"

---

### 3. `get_gravity_task_status` - Check Collection Progress

Monitor your Gravity task and see how much data has been collected.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `gravity_task_id` | string | **REQUIRED**. The task ID from `create_gravity_task` |
| `include_crawlers` | bool | Include detailed stats. Default: `True` |

**Returns:** Task status, crawler IDs, `records_collected`, `bytes_collected`

**Example prompts:**
- "Check the status of my Bittensor data collection task"
- "How many records have been collected so far?"

---

### 4. `build_dataset` - Build & Download Dataset

Build a dataset from collected data before the 7-day completion.

**Warning:** This will STOP the crawler and de-register it from the network.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `crawler_id` | string | **REQUIRED**. Get from `get_gravity_task_status` |
| `max_rows` | int | Max rows to include. Default: 10000 |
| `email` | string | Email for notification when ready |

**Example prompts:**
- "Build a dataset from my Bittensor crawler with 5000 rows"
- "I have enough data, build the dataset now"

---

### 5. `get_dataset_status` - Check Build Progress & Download

Check dataset build progress and get download links when ready.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `dataset_id` | string | **REQUIRED**. The dataset ID from `build_dataset` |

**Returns:** Build status (10 steps) and, when complete, download URLs for Parquet files

**Example prompts:**
- "Is my dataset ready to download?"
- "Get the download link for my Bittensor dataset"

---

### 6. `cancel_gravity_task` - Stop Data Collection

Cancel a running Gravity task.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `gravity_task_id` | string | **REQUIRED**. The task ID to cancel |

---

### 7. `cancel_dataset` - Cancel Build or Purge Dataset

Cancel a dataset build or purge a completed dataset.

**Parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `dataset_id` | string | **REQUIRED**. The dataset ID to cancel/purge |
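The `query_on_demand_data` parameter rules lend themselves to client-side validation before a call is made. Below is a minimal sketch of a helper that assembles an argument dict with the documented defaults and constraints — `build_on_demand_args` is a hypothetical function written for illustration, not part of any Macrocosmos package:

```python
from datetime import datetime, timedelta, timezone

def build_on_demand_args(source, usernames=None, keywords=None,
                         start_date=None, end_date=None,
                         limit=10, keyword_mode="any"):
    """Assemble arguments for query_on_demand_data, applying the
    documented defaults and constraints (hypothetical client-side helper)."""
    if source not in ("X", "REDDIT"):  # case-sensitive, per the parameter table
        raise ValueError("source must be 'X' or 'REDDIT'")
    if source == "REDDIT" and usernames:
        raise ValueError("usernames are not available for Reddit")
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError("keyword_mode must be 'any' or 'all'")

    now = datetime.now(timezone.utc)
    args = {
        "source": source,
        # Documented defaults: start_date = 24h ago, end_date = now
        "start_date": start_date or (now - timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "end_date": end_date or now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "limit": limit,
        "keyword_mode": keyword_mode,
    }
    if usernames:
        args["usernames"] = usernames
    if keywords:
        args["keywords"] = keywords
    return args
```

Catching these errors locally gives faster feedback than waiting for the API to reject a malformed request.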
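The X topic rule for `create_gravity_task` (topics must start with `#` or `$`; plain keywords are rejected) can likewise be checked before submitting a task list. A sketch, again using a hypothetical helper:

```python
def validate_gravity_tasks(tasks):
    """Check a list of Gravity task objects against the documented rules:
    platform is 'x' or 'reddit', and X topics must start with '#' or '$'.
    Hypothetical pre-flight check, not part of any Macrocosmos package."""
    for task in tasks:
        platform = task.get("platform")
        topic = task.get("topic", "")
        if platform not in ("x", "reddit"):
            raise ValueError(f"unknown platform: {platform!r}")
        if platform == "x" and not topic.startswith(("#", "$")):
            # The API rejects plain keywords as X topics
            raise ValueError(f"X topic must start with '#' or '$': {topic!r}")
    return tasks
```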
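Taken together, the Gravity tools above compose into a create → poll → build → download loop. The sketch below drives that loop through an abstract `call_tool(name, args)` callable — a stand-in for however your MCP client invokes tools. The response field names (`gravity_task_id`, `crawler_ids`, `dataset_id`) are illustrative assumptions, not a documented schema:

```python
import time

def run_gravity_workflow(call_tool, task, poll_seconds=30, max_polls=10):
    """Drive the four-step Gravity workflow via an abstract call_tool(name, args)
    callable. All response field names here are assumptions for illustration."""
    created = call_tool("create_gravity_task", {"tasks": [task]})
    task_id = created["gravity_task_id"]  # assumed field name

    # Poll collection progress until at least one crawler is registered
    crawler_id = None
    for _ in range(max_polls):
        status = call_tool("get_gravity_task_status",
                           {"gravity_task_id": task_id, "include_crawlers": True})
        if status.get("crawler_ids"):  # assumed field name
            crawler_id = status["crawler_ids"][0]
            break
        time.sleep(poll_seconds)
    if crawler_id is None:
        raise TimeoutError("no crawler registered yet")

    # Note: build_dataset STOPS the crawler and de-registers it
    built = call_tool("build_dataset", {"crawler_id": crawler_id, "max_rows": 10000})
    return call_tool("get_dataset_status", {"dataset_id": built["dataset_id"]})
```

In practice you would also poll `get_dataset_status` until the 10-step build completes and the Parquet download URLs appear.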
---

## Example Workflows

### Quick Query (On-Demand)
```
User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1000 results instantly
```

### Large Dataset Collection (Gravity)
```
User: "I need to collect a week's worth of #AI tweets for analysis"

1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset
4. get_dataset_status → Get download URL for Parquet file
```

---

## Example Prompts

### On-Demand Queries
- "What has the president of the U.S. been saying over the past week on X?"
- "Fetch me information about what people are posting on r/politics today."
- "Please analyze posts from @elonmusk for the last week."
- "Get me 100 tweets about #Bittensor and analyze the sentiment"

### Large-Scale Collection
- "Create a gravity task to collect data about #AI from Twitter and r/MachineLearning from Reddit"
- "Start a 7-day collection of $BTC tweets with keyword 'ETF'"
- "Check how many records my gravity task has collected"
- "Build a dataset with 10,000 rows from my crawler"

---

MIT License
Made with love by the Macrocosmos team