Official Macrocosmos Model Context Protocol (MCP) server that enables interaction with X (Twitter) and Reddit, powered by Data Universe (SN13) on Bittensor. This server allows MCP clients like Claude Desktop, Cursor, Windsurf, OpenAI Agents and others to fetch real-time social media data.

1. Get your API key from Macrocosmos. A free tier with $5 of credits is available to start.
2. Install uv (Python package manager) with `curl -LsSf https://astral.sh/uv/install.sh | sh`, or see the uv repo for additional install methods.
3. Add the server to your MCP client configuration:

```json
{
  "mcpServers": {
    "macrocosmos": {
      "command": "uvx",
      "args": ["macrocosmos-mcp"],
      "env": {
        "MC_API": ""
      }
    }
  }
}
```
### query_on_demand_data - Real-time Social Media Queries

Fetch real-time data from X (Twitter) and Reddit. Best for quick queries up to 1000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| source | string | REQUIRED. Platform: 'X' or 'REDDIT' (case-sensitive) |
| usernames | list | Up to 5 usernames. For X: @ is optional. Not available for Reddit |
| keywords | list | Up to 5 keywords. For Reddit: first item is the subreddit (e.g., 'r/MachineLearning') |
| start_date | string | ISO format (e.g., '2024-01-01T00:00:00Z'). Defaults to 24h ago |
| end_date | string | ISO format. Defaults to now |
| limit | int | Max results, 1-1000. Default: 10 |
| keyword_mode | string | 'any' (default) or 'all' |
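The parameter constraints above can be checked client-side before calling the tool. The sketch below builds a parameter dict for query_on_demand_data and enforces the documented limits; the helper name `build_query_payload` is illustrative and not part of the MCP API.

```python
from datetime import datetime, timedelta, timezone

def build_query_payload(source, usernames=None, keywords=None,
                        start_date=None, end_date=None,
                        limit=10, keyword_mode="any"):
    """Validate arguments against the documented limits and return a
    parameter dict for the query_on_demand_data tool (illustrative helper)."""
    if source not in ("X", "REDDIT"):  # source is case-sensitive
        raise ValueError("source must be 'X' or 'REDDIT'")
    if source == "REDDIT" and usernames:
        raise ValueError("usernames are not available for Reddit")
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError("keyword_mode must be 'any' or 'all'")

    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    payload = {
        "source": source,
        # Defaults mirror the table: 24h ago through now
        "start_date": start_date or (now - timedelta(hours=24)).strftime(fmt),
        "end_date": end_date or now.strftime(fmt),
        "limit": limit,
        "keyword_mode": keyword_mode,
    }
    if usernames:
        payload["usernames"] = usernames
    if keywords:
        payload["keywords"] = keywords
    return payload

# Reddit query: the first keyword names the subreddit
payload = build_query_payload(
    "REDDIT", keywords=["r/MachineLearning", "transformers"], limit=50)
```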
### create_gravity_task - Large-Scale Data Collection

Create a Gravity task for collecting large datasets over 7 days. Use this when you need more than 1000 results.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| tasks | list | REQUIRED. List of task objects (see below) |
| name | string | Optional name for the task |
| email | string | Email for notification when complete |
Task object structure:
```json
{
  "platform": "x",       // 'x' or 'reddit'
  "topic": "#Bittensor", // For X: MUST start with '#' or '$'
  "keyword": "dTAO"      // Optional: filter within topic
}
```
Important: For X (Twitter), topics MUST start with # or $ (e.g., #ai, $BTC). Plain keywords are rejected.
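Since plain X keywords are rejected server-side, it can help to validate task objects before submitting them. A minimal sketch of that check, assuming the rules stated above (`validate_task` is an illustrative helper, not part of the MCP API):

```python
def validate_task(task):
    """Check one Gravity task object against the documented rules."""
    platform = task.get("platform")
    if platform not in ("x", "reddit"):
        raise ValueError("platform must be 'x' or 'reddit'")
    topic = task.get("topic", "")
    # X topics must be a hashtag or cashtag; plain keywords are rejected
    if platform == "x" and not topic.startswith(("#", "$")):
        raise ValueError("X topics must start with '#' or '$' (e.g., #ai, $BTC)")
    return task

validate_task({"platform": "x", "topic": "#Bittensor", "keyword": "dTAO"})
```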
### get_gravity_task_status - Check Collection Progress

Monitor your Gravity task and see how much data has been collected.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| gravity_task_id | string | REQUIRED. The task ID from create_gravity_task |
| include_crawlers | bool | Include detailed stats. Default: True |
Returns: Task status, crawler IDs, records_collected, bytes_collected
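A common pattern is to poll this tool until enough records have accumulated, then move on to build_dataset. A sketch of that loop, where `get_status` stands in for the actual tool call and the status dict keys are assumed to match the Returns list above:

```python
import time

def wait_for_records(get_status, gravity_task_id, min_records, poll_seconds=60):
    """Poll get_gravity_task_status (via the get_status callable) until
    the task reports at least min_records collected, then return the status."""
    while True:
        status = get_status(gravity_task_id, include_crawlers=True)
        if status.get("records_collected", 0) >= min_records:
            return status
        time.sleep(poll_seconds)  # avoid hammering the API
```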
### build_dataset - Build & Download Dataset

Build a dataset from the collected data before the 7-day collection period completes.
Warning: This will STOP the crawler and de-register it from the network.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| crawler_id | string | REQUIRED. Get from get_gravity_task_status |
| max_rows | int | Max rows to include. Default: 10000 |
| email | string | Email for notification when ready |
### get_dataset_status - Check Build Progress & Download

Check dataset build progress and get download links when ready.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| dataset_id | string | REQUIRED. The dataset ID from build_dataset |
Returns: Build status (10 steps), and when complete: download URLs for Parquet files
### cancel_gravity_task - Stop Data Collection

Cancel a running Gravity task.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| gravity_task_id | string | REQUIRED. The task ID to cancel |
### cancel_dataset - Cancel Build or Purge Dataset

Cancel an in-progress dataset build, or purge a completed dataset.
Parameters:
| Parameter | Type | Description |
|---|---|---|
| dataset_id | string | REQUIRED. The dataset ID to cancel/purge |
User: "What's the sentiment about $TAO on Twitter today?"
→ Uses query_on_demand_data to fetch recent tweets
→ Returns up to 1000 results instantly
User: "I need to collect a week's worth of #AI tweets for analysis"
1. create_gravity_task → Returns gravity_task_id
2. get_gravity_task_status → Monitor progress, get crawler_ids
3. build_dataset → When ready, build the dataset
4. get_dataset_status → Get download URL for Parquet file
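The four-step workflow above can be sketched as a single function. Here `client` is a hypothetical wrapper exposing each MCP tool as a method; the field names (`gravity_task_id`, `crawler_ids`, `dataset_id`) follow the tool descriptions above but may differ in the live API.

```python
def collect_dataset(client, topic, email=None, max_rows=10_000):
    """Sketch of the large-scale collection workflow:
    create task -> check status -> build dataset -> fetch download status."""
    # 1. Start a 7-day Gravity collection for an X hashtag/cashtag topic
    task = client.create_gravity_task(
        tasks=[{"platform": "x", "topic": topic}], email=email)

    # 2. Check progress and pick up the crawler IDs
    status = client.get_gravity_task_status(task["gravity_task_id"])
    # ...in practice, poll here until enough records_collected...
    crawler_id = status["crawler_ids"][0]

    # 3. Build the dataset (note: this stops and de-registers the crawler)
    dataset = client.build_dataset(crawler_id=crawler_id, max_rows=max_rows)

    # 4. Check build status; when complete this includes Parquet download URLs
    return client.get_dataset_status(dataset["dataset_id"])
```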
MIT License

Made with love by the Macrocosmos team