AI Distiller: a comprehensive tool for extracting public APIs from codebases to improve AI context understanding.
Note: This is the very first version of this tool. We would be very grateful for any feedback in the form of a discussion or by creating an issue on GitHub. Thank you!
🚀 MCP Server Available: Install the Model Context Protocol server for AI Distiller from NPM: @janreges/ai-distiller-mcp - seamlessly integrate with Claude, Cursor, and other MCP-compatible AI tools!

💡 New to dependency analysis? This feature traces which functions actually call each other in your code, creating a minimal context that includes only the relevant parts. Perfect for AI tools that need to understand code relationships without processing entire files.
Instead of including entire files, dependency-aware distillation:
# Basic dependency analysis
aid main.py --dependency-aware
# Control analysis depth
aid main.py --dependency-aware --max-depth=2
# Include implementations for deeper analysis
aid main.py --dependency-aware --implementation=1 --max-depth=3
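The idea behind `--max-depth` can be sketched with a tiny call-graph traversal. This is an illustration only, not aid's actual implementation; the function names in the graph are hypothetical:

```python
from collections import deque

# Hypothetical call graph: which function calls which. In the real tool,
# these edges come from parsing the source; these names are made up.
CALLS = {
    "main": ["load_config", "run_server"],
    "run_server": ["handle_request"],
    "handle_request": ["auth_check", "render"],
    "auth_check": ["load_config"],
}

def reachable(entry: str, max_depth: int) -> set:
    """Collect every function reachable from `entry` within max_depth calls,
    mirroring the idea behind --dependency-aware --max-depth=N."""
    seen, queue = {entry}, deque([(entry, 0)])
    while queue:
        fn, depth = queue.popleft()
        if depth == max_depth:
            continue  # depth budget exhausted along this path
        for callee in CALLS.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append((callee, depth + 1))
    return seen

print(sorted(reachable("main", 1)))  # direct callees only
print(sorted(reachable("main", 3)))  # deeper transitive closure
```

A small `--max-depth` keeps the context minimal; increasing it pulls in transitively called code at the cost of more tokens.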
We've worked extensively to make dependency-aware distillation as reliable as possible across different programming languages. However, the complexity varies significantly between languages, and we want to be transparent about the current state:
| Language | Support Level | Cross-File Analysis | Intra-File Calls | Performance | Notes |
|---|---|---|---|---|---|
| Python | 🟢 Very Good | ✅ Full | ✅ Complete | ~37ms | Package imports, all call patterns |
| JavaScript | 🟢 Very Good | ✅ Full | ✅ Complete | ~38ms | CommonJS & ES6 modules |
| Go | 🟢 Very Good | ✅ Full | ✅ Complete | ~37ms | Package system integration |
| Rust | 🟢 Very Good | ✅ Full | ✅ Complete | ~36ms | Crate system, proper filtering |
| Java | 🟢 Very Good | ✅ Full | ✅ Complete | ~41ms | Package imports, static methods |
| Swift | 🟢 Very Good | ✅ Full | ✅ Complete | ~37ms | Class and static method detection |
| PHP | 🟢 Very Good | ✅ Full | ✅ Complete | ~37ms | Include/require resolution |
| Ruby | 🟢 Very Good | ✅ Full | ✅ Complete | ~40ms | Module system, all call patterns |
| TypeScript | 🟡 Limited | ❌ Issues | ❌ Issues | N/A | Language processor limitations |
| C# | 🟡 Limited | ❌ Issues | ❌ Issues | N/A | Language processor limitations |
| C++ | 🟡 Limited | ❌ Issues | ❌ Issues | N/A | Language processor limitations |
| Kotlin | 🟠 Good | ✅ Partial | ⚠️ Basic | ~45ms | Companion objects, some edge cases |
Legend:
Very Good Performance (8 languages):
Areas for Enhancement:
Perfect for:
Best Practices:
# Start with small depth for quick overview
aid main.py --dependency-aware --max-depth=1
# Increase depth for comprehensive analysis
aid main.py --dependency-aware --max-depth=2 --implementation=1
# Use with specific languages known to work well
aid src/ --dependency-aware --include="*.py,*.js,*.go"
The Problem: Modern codebases contain thousands of files with millions of lines. But for AI to understand your code architecture, suggest improvements, or help with development, it doesn't need to see every implementation detail - it needs the structure and public interfaces.
The Solution: AI Distiller extracts only what matters - public APIs, types, and signatures - reducing codebase size by 90-98% while preserving all essential information for AI comprehension.
| Project | Files | Original Tokens | Distilled Tokens | Fits in Context¹ | Speed² |
|---|---|---|---|---|---|
| react | 1,781 | ~5.5M | 250K (-95%) | ✅ Gemini³ | 2,875 files/s |
| vscode | 4,768 | ~22.5M | 2M (-91%) | ⚠️ Needs chunking | 5,072 files/s |
| django | 970 | ~10M | 256K (-97%) | ✅ Gemini³ | 4,199 files/s |
| prometheus | 685 | ~8.5M | 154K (-98%) | ✅ Claude/Gemini | 3,071 files/s |
| rust-analyzer | 1,275 | ~5.5M | 172K (-97%) | ✅ Claude/Gemini | 10,451 files/s |
| astro | 1,058 | ~10.5M | 149K (-99%) | ✅ Claude/Gemini | 5,212 files/s |
| rails | 394 | ~1M | 104K (-90%) | ✅ ChatGPT-4o | 4,864 files/s |
| laravel | 1,443 | ~3M | 238K (-92%) | ✅ Gemini³ | 4,613 files/s |
| nestjs | 802 | ~1.5M | 107K (-93%) | ✅ ChatGPT-4o | 8,813 files/s |
| ghost | 2,184 | ~8M | 235K (-97%) | ✅ Gemini³ | 4,719 files/s |

¹ Context windows: ChatGPT-4o (128K), Claude (200K), Gemini (1M). ✅ = fits completely, ⚠️ = needs splitting
² Processing speed with 12 parallel workers on AMD Ryzen 7945HX. Use -w 1 for serial mode or -w N for custom workers.
³ These frameworks exceed 200K tokens and work only with Gemini due to its larger 1M token context window.
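As a rough sketch of the fit check behind footnote 1, a small helper can compare a distilled token count against each model's context window. The window sizes and model names come from the table above; the helper itself is illustrative, not part of aid:

```python
# Context window sizes from footnote 1 above (tokens).
CONTEXT_WINDOWS = {
    "ChatGPT-4o": 128_000,
    "Claude": 200_000,
    "Gemini": 1_000_000,
}

def fitting_models(distilled_tokens: int) -> list:
    """Return the models whose context window can hold the distilled output."""
    return [model for model, window in CONTEXT_WINDOWS.items()
            if distilled_tokens <= window]

print(fitting_models(154_000))    # prometheus at 154K -> ['Claude', 'Gemini']
print(fitting_models(2_000_000))  # vscode at 2M -> [] (needs chunking)
```

An empty result corresponds to the ⚠️ "needs chunking" entries in the table.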
Large codebases are overwhelming for AI models. A typical web framework like Django has ~10 million tokens of source code. Even with Claude's 200K context window, you'd need to split it into 50+ chunks, losing coherence and relationships between components.
But here's the good news: Most real-world projects that teams have invested hundreds to thousands of hours developing are much smaller. Thanks to AI Distiller, the vast majority of typical business applications, SaaS products, and internal tools can fit entirely within AI context windows, enabling unprecedented AI assistance quality.
Most AI agents and IDEs are "context misers" - they try to save tokens at the expense of actual codebase knowledge. They rely on:
This is why AI-generated code often fails on first attempts - the AI is literally guessing method signatures, parameter types, and return values because it can't see the full picture.
AI Distiller changes the game by giving AI complete knowledge of:
Instead of playing "code roulette", AI can now write correct code from the start.
Result: Django's 10M tokens compress to just 256K tokens - suddenly the entire framework fits in a single AI conversation, leading to:
# Process entire codebase (default: public APIs only)
aid ./my-project
# Process specific directory or module
aid ./my-project/src/auth
aid ./my-project/src/api
# Process a directory
aid ./my-project/core/
# Process individual file
aid src/main.py
# Include protected/private for deeper analysis
aid ./my-project --private=1 --protected=1
# Include implementations for small projects
aid ./my-small-lib --implementation=1
# Everything for complete understanding
aid ./micro-service --private=1 --protected=1 --implementation=1
Granular Control: Process your entire codebase, specific modules, directories, or even individual files. Perfect for focusing AI on the exact context it needs - whether that's understanding the whole system architecture or diving deep into a specific authentication module.
Full benchmark details | Reproduce these results
macOS / Linux / WSL:
# Install to ~/.aid/bin (recommended, no sudo required)
curl -sSL https://raw.githubusercontent.com/janreges/ai-distiller/main/install.sh | bash
# Install to /usr/local/bin (requires sudo)
curl -sSL https://raw.githubusercontent.com/janreges/ai-distiller/main/install.sh | bash -s -- --sudo
Windows PowerShell:
iwr https://raw.githubusercontent.com/janreges/ai-distiller/main/install.ps1 -useb | iex
The installer will:
- Install to ~/.aid/bin/aid by default (no sudo required)
- Install to /usr/local/bin/aid with the --sudo flag

# Basic usage
aid . # Current directory, output is saved to a file in ./.aid
aid . --stdout # Current directory, output is printed to STDOUT
aid src/ # Specific directory
aid main.py # Specific file
Python Class Example
Input (car.py):
class Car:
"""A car with basic attributes and methods."""
def __init__(self, make: str, model: str):
self.make = make
self.model = model
self._mileage = 0 # Private
def drive(self, distance: int) -> None:
"""Drive the car."""
if distance > 0:
self._mileage += distance
Output (aid car.py --format text --implementation=0):
class Car:
+def __init__(self, make: str, model: str)
+def drive(self, distance: int) -> None
TypeScript Interface Example
Input (api.ts):
export interface User {
id: number;
name: string;
email?: string;
}
export class UserService {
private cache = new Map();
async getUser(id: number): Promise<User | null> {
return this.cache.get(id) || null;
}
}
Output (aid api.ts --format text --implementation=0):
export interface User {
id: number;
name: string;
email?: string;
}
export class UserService {
+async getUser(id: number): Promise<User | null>
}
AI Distiller generates sophisticated analysis prompts that AI assistants can execute for comprehensive codebase understanding:
aid internal \
--private=1 --protected=1 --implementation=1 \
--ai-action=flow-for-deep-file-to-file-analysis
✅ AI Analysis Task List generated successfully!
Task List: .aid/ANALYSIS-TASK-LIST.internal.2025-06-20.md
Summary File: .aid/ANALYSIS-SUMMARY.internal.2025-06-20.md
Analysis Reports Directory: .aid/analysis.internal/2025-06-20
Ready for AI-driven analysis workflow!
Files to analyze: 158
If you are an AI agent, please read the Task List above and carefully follow all instructions to systematically analyze each file.
What AI Distiller generates:
- A task list file (`.aid/ANALYSIS-TASK-LIST.PROJECT.DATE.md`)

How to use the generated prompts:
- Use `--stdout` to get the prompt directly without saving it to a file

Note: The analysis dimensions (Security, Performance, Maintainability, Readability) are part of the prompts that guide the AI - AI Distiller itself doesn't perform any analysis.
AI Distiller now integrates seamlessly with Claude Code/Desktop through the Model Context Protocol (MCP), enabling AI agents to analyze and understand codebases directly within conversations.
# One-line installation
claude mcp add aid -- npx -y @janreges/ai-distiller-mcp
📦 NPM Package: @janreges/ai-distiller-mcp - Full documentation and examples available
Code Structure Tools:
- `distill_file` - Extract structure from a single file
- `distill_directory` - Extract structure from entire directories
- `list_files` - Browse directories with file statistics
- `get_capabilities` - Get info about AI Distiller capabilities

Specialized AI Analysis Tools:
- `aid_hunt_bugs` - Generate bug-hunting prompts with distilled code
- `aid_suggest_refactoring` - Create refactoring analysis prompts
- `aid_generate_diagram` - Produce diagram generation prompts (Mermaid)
- `aid_analyze_security` - Generate security audit prompts (OWASP Top 10)
- `aid_generate_docs` - Create documentation generation prompts
- `aid_deep_file_analysis` - Systematic file-by-file analysis workflow
- `aid_multi_file_docs` - Multi-file documentation workflow
- `aid_complex_analysis` - Enterprise-grade analysis prompts
- `aid_performance_analysis` - Performance optimization prompts
- `aid_best_practices` - Code quality and best practices prompts

Core Analysis Engine:
- `aid_analyze` - Direct access to all AI actions for custom workflows

Important: AI Distiller generates analysis prompts with distilled code - it does NOT perform the actual analysis! The output is a specialized prompt + distilled code that AI agents (like Claude) then execute. For large codebases, you can copy the output to tools like Gemini 2.0 with 1M context window.
Smart Context Management: AI agents can analyze your entire project for understanding the big picture, then zoom into specific modules (auth, API, database) for detailed work. No more overwhelming AI with irrelevant code!
aid [OPTIONS] <path>
| Argument | Type | Default | Description |
|---|---|---|---|
| `<path>` | String | (required) | Path to source file or directory to analyze. Use .git for git history mode, - (or empty) for stdin input |
| Option | Type | Default | Description |
|---|---|---|---|
-o, --output | String | .aid/<basename>.[options].txt | Output file path. Auto-generated based on input directory basename and options if not specified |
--stdout | Flag | false | Print output to stdout in addition to file. When used alone, no file is created |
--format | String | text | Output format: text (ultra-compact), md (clean Markdown), jsonl (one JSON per file), json-structured (rich semantic data), xml (structured XML) |
| Option | Type | Default | Description |
|---|---|---|---|
--ai-action | String | (none) | Generate pre-configured prompts with distilled code for AI analysis. See AI Actions section below |
--ai-output | String | ./.aid/...md | Custom output path for generated AI prompt files |
| Option | Type | Default | Description |
|---|---|---|---|
--public | 0|1 | 1 | Include public members (methods, functions, classes) |
--protected | 0|1 | 0 | Include protected members |
--internal | 0|1 | 0 | Include internal/package-private members |
--private | 0|1 | 0 | Include private members |
| Option | Type | Default | Description |
|---|---|---|---|
--comments | 0|1 | 0 | Include inline and block comments |
--docstrings | 0|1 | 1 | Include documentation comments (docstrings, JSDoc, etc.) |
--implementation | 0|1 | 0 | Include function/method bodies (implementation details) |
--imports | 0|1 | 1 | Include import/require statements |
--annotations | 0|1 | 1 | Include decorators and annotations |
--fields | 0|1 | 1 | Include class fields and properties |
--methods | 0|1 | 1 | Include methods and functions |
| Option | Type | Default | Description |
|---|---|---|---|
--include-only | String | (none) | Include ONLY these categories (comma-separated: public,protected,imports) |
--exclude-items | String | (none) | Exclude these categories (comma-separated: private,comments,implementation) |
| Option | Type | Default | Description |
|---|---|---|---|
--include | String | (all files) | Include file patterns (comma-separated: *.go,*.py or multiple: --include "*.go" --include "*.py") |
--exclude | String | (none) | Exclude file patterns (comma-separated: *test*,*.json or multiple: --exclude "*test*" --exclude "vendor/**") |
-r, --recursive | 0|1 | 1 | Process directories recursively. Set to 0 to process only immediate directory contents |
| Option | Type | Default | Description |
|---|---|---|---|
--raw | Flag | false | Process all text files without language parsing. Overrides all content filters |
--lang | String | auto | Force language detection: auto, python, typescript, javascript, go, rust, java, csharp, kotlin, cpp, php, ruby, swift |
| Option | Type | Default | Description |
|---|---|---|---|
--file-path-type | String | relative | Path format in output: relative or absolute |
--relative-path-prefix | String | (empty) | Custom prefix for relative paths (e.g., module/ → module/src/file.go) |
| Option | Type | Default | Description |
|---|---|---|---|
-w, --workers | Integer | 0 | Number of parallel workers. 0 = auto (80% of CPU cores), 1 = serial processing, 2+ = specific worker count |
| Option | Type | Default | Description |
|---|---|---|---|
--summary-type | String | visual-progress-bar | Summary format after processing. See Summary Types below |
--no-emoji | Flag | false | Disable emojis in summary output for plain text terminals |
Git history mode (`.git`):

| Option | Type | Default | Description |
|---|---|---|---|
--git-limit | Integer | 200 | Number of commits to analyze. Use 0 for all commits |
--with-analysis-prompt | Flag | false | Add comprehensive AI analysis prompt for commit quality, patterns, and insights |
| Option | Type | Default | Description |
|---|---|---|---|
-v, --verbose | Count | 0 | Verbose output. Use -vv for detailed info, -vvv for full trace with data dumps |
--version | Flag | false | Show version information and exit |
--help | Flag | false | Show help message |
--help-extended | Flag | false | Show complete documentation (man page style) |
--cheat | Flag | false | Show quick reference card |
AI actions generate pre-configured prompts combined with distilled code that AI agents can then execute for specific analysis tasks:
| Action | Generated Prompt Type | AI Agent Will |
|---|---|---|
prompt-for-refactoring-suggestion | Refactoring analysis prompt with distilled code | Analyze code for improvements, technical debt, effort sizing |
prompt-for-complex-codebase-analysis | Enterprise-grade analysis prompt with full codebase | Generate architecture diagrams, compliance checks, findings |
prompt-for-security-analysis | Security audit prompt with OWASP Top 10 guidelines | Detect vulnerabilities, suggest remediation steps |
prompt-for-performance-analysis | Performance optimization prompt with complexity focus | Identify bottlenecks, analyze scalability issues |
prompt-for-best-practices-analysis | Code quality prompt with industry standards | Assess code quality, suggest improvements |
prompt-for-bug-hunting | Bug detection prompt with pattern analysis | Find bugs, analyze quality metrics |
prompt-for-single-file-docs | Documentation generation prompt for single file | Create comprehensive API documentation |
prompt-for-diagrams | Diagram generation prompt with Mermaid syntax | Generate 10+ architecture and process diagrams |
flow-for-deep-file-to-file-analysis | Systematic analysis task list with directory structure | Perform file-by-file deep analysis |
flow-for-multi-file-docs | Documentation workflow with file relationships | Create interconnected documentation |
| Type | Description | Example Output |
|---|---|---|
visual-progress-bar | Default. Shows compression progress bar with colors | `✅ Distilled 150 files [██████████] 85% (5MB → 750KB)` |
stock-ticker | Compact stock market style | `AID 97.5% ▲ \| 5MB→128KB \| ~1.2M tokens saved` |
speedometer-dashboard | Multi-line dashboard with detailed metrics | Shows files, size, tokens, processing time in box format |
minimalist-sparkline | Single line with sparkline visualization | `▁▃▅ 150 files \| 97.5% reduction (750KB)` |
ci-friendly | Clean format for CI/CD pipelines | `[aid] ✓ 85.9% saved \| 21 kB → 2.9 kB \| 4ms` |
json | Machine-readable JSON output | {"original_bytes":5242880,"distilled_bytes":131072,...} |
off | Disable summary output | No summary displayed |
| Code | Meaning |
|---|---|
0 | Success |
1 | General error (file not found, parse error, etc.) |
2 | Invalid arguments or conflicting options |
# Basic usage - distill with default settings (public APIs only)
aid ./src
# Include all visibility levels and implementation
aid ./src --private=1 --protected=1 --internal=1 --implementation=1
# Generate security analysis prompt (AI agent will execute the analysis)
aid --ai-action prompt-for-security-analysis ./api --private=1
# Process only Python and Go files, exclude tests
aid --include "*.py,*.go" --exclude "*test*,*spec*" ./
# Git history analysis with AI insights
aid .git --with-analysis-prompt --git-limit=500
# Raw text processing for documentation
aid ./docs --raw
# Force single-threaded processing for debugging (-v, -vv, -vvv)
aid ./complex-code -w 1 -vv
# Custom output with absolute paths
aid ./lib --output=/tmp/analysis.txt --file-path-type=absolute
# CI/CD integration with clean output
aid ./internal --summary-type=ci-friendly --no-emoji
- Use `.aidignore` to skip generated files

⚠️ Important: AI Distiller extracts code structure, which may include:
- Function and method names (e.g., `processPayment`, `calculateTaxEvasion`)
- API endpoint paths (e.g., `/api/v1/internal/user-data`)

Recommendations:
- Use `--comments=0` to remove potentially sensitive documentation
- Use the `--obfuscate` flag to anonymize sensitive identifiers

AI Distiller now supports parallel processing for significantly faster analysis of large codebases:
# Use default parallel processing (80% of CPU cores)
aid ./src
# Force serial processing (original behavior)
aid ./src -w 1
# Use specific number of workers
aid ./src -w 16
# Check performance with verbose output
aid ./src -v # Shows: "Using 25 parallel workers (32 CPU cores available)"
Performance Benefits:
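The auto worker count shown in the verbose output above (80% of CPU cores) can be sketched as a small heuristic. This mirrors the documented `-w` semantics but is an illustration, not aid's actual source:

```python
import os

def resolve_workers(requested: int = 0, cores: int = 0) -> int:
    """Sketch of the -w option semantics: 0 = auto (80% of CPU cores),
    1 = serial processing, 2+ = explicit worker count."""
    if requested >= 1:
        return requested  # user asked for a specific count
    cores = cores or os.cpu_count() or 1
    return max(1, int(cores * 0.8))  # auto mode: 80% of available cores

print(resolve_workers(0, cores=32))  # auto on a 32-core machine -> 25 workers
print(resolve_workers(1))            # forced serial processing -> 1
```

On a 32-core machine this yields 25 workers, matching the example verbose output above.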
AI Distiller can process code directly from stdin, perfect for:
# Auto-detect language from stdin
echo 'class User { getName() { return this.name; } }' | aid --format text
# Explicit language specification
cat mycode.php | aid --lang php --private=0 --protected=0
# Use "-" to explicitly read from stdin
aid - --lang python < context.txt
# Generate a codebase summary for RAG systems
aid . --format json-structured | jq -r '.files[].symbols[].name' > symbols.txt
# Extract API surface for documentation
aid ./api --comments=0 --implementation=0 --format md > api-ref.md
# Extract only method signatures (no fields/properties) - great for large codebases
aid ./src --fields=0 --implementation=0 > methods-only.txt
# Extract only data structures (no method noise)
aid ./models --methods=0 > data-structures.txt
# Focus on public API methods only
aid ./services --fields=0 --private=0 --protected=0 --internal=0
AI Distiller respects .aidignore files for excluding files and directories from processing. The syntax is similar to .gitignore.
AI Distiller only processes source code files with these extensions:
- Python: .py, .pyw, .pyi
- JavaScript: .js, .mjs, .cjs, .jsx
- TypeScript: .ts, .tsx, .d.ts
- Go: .go
- Rust: .rs
- Ruby: .rb, .rake, .gemspec
- Java: .java
- C#: .cs
- Kotlin: .kt, .kts
- C++: .cpp, .cc, .cxx, .c++, .h, .hpp, .hh, .hxx, .h++
- PHP: .php, .phtml, .php3, .php4, .php5, .php7, .phps, .inc
- Swift: .swift

Note: Files like .log, .txt, .md, images, PDFs, and other non-source files are automatically ignored by AI Distiller, so you don't need to add them to .aidignore.
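The extension gate can be illustrated with a small filter. The set below is a hypothetical subset of the table above, not aid's actual implementation:

```python
from pathlib import Path

# Illustrative subset of the extension table above; the real tool
# recognizes the full list, this sketch keeps only a few per language.
SOURCE_EXTENSIONS = {
    ".py", ".pyw", ".pyi",
    ".js", ".mjs", ".cjs", ".jsx",
    ".ts", ".tsx",
    ".go", ".rs", ".rb", ".java", ".cs",
    ".kt", ".kts", ".cpp", ".hpp", ".php", ".swift",
}

def is_source_file(path: str) -> bool:
    """True if the extension filter would pick this file up."""
    return Path(path).suffix.lower() in SOURCE_EXTENSIONS

print(is_source_file("src/main.py"))    # True
print(is_source_file("docs/guide.md"))  # False: skipped automatically
```

Anything outside the set is silently skipped, which is why non-source files never need `.aidignore` entries.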
AI Distiller automatically ignores these common dependency and build directories:
- node_modules/ - npm packages
- vendor/ - Go and PHP dependencies
- target/ - Rust build output
- build/, dist/ - Common build directories
- __pycache__/, .pytest_cache/, venv/, .venv/, env/, .env/ - Python
- .gradle/, gradle/ - Java/Kotlin
- Pods/ - Swift/iOS dependencies
- .bundle/ - Ruby bundler
- bin/, obj/ - Compiled binaries
- .vs/, .idea/, .vscode/ - IDE directories
- coverage/, .nyc_output/ - Test coverage
- bower_components/ - Legacy JavaScript
- .terraform/ - Terraform
- .git/, .svn/, .hg/ - Version control

You can override these defaults using ! patterns in .aidignore (see Advanced Usage below).
Create a .aidignore file in your project root or any subdirectory:
# Comments start with hash
*.test.js # Ignore test files
*.spec.ts # Ignore spec files
temp/ # Ignore temp directory
build/ # Ignore build directory
/secrets.py # Ignore secrets.py only in root
node_modules/ # Ignore node_modules everywhere
**/*.bak # Ignore .bak files in any directory
src/test_* # Ignore test_* files in src/
!important.test.js # Don't ignore important.test.js (negation)
- `.aidignore` files work recursively - place them in any directory
- Each directory can have its own `.aidignore` file
- `/` prefix for patterns relative to the `.aidignore` location
- `**` for recursive matching
- `!` prefix to negate a pattern (re-include previously ignored files)

# .aidignore in project root
node_modules/ # Excludes all node_modules directories
*.test.js # Excludes all test files
*.spec.ts # Excludes all spec files
dist/ # Excludes dist directory
.env.py # Excludes environment config files
vendor/ # Excludes vendor directory
# More specific patterns
src/**/test_*.py # Test files in src subdirectories
!src/test_utils.py # But include this specific test file
/config/*.local.py # Local config files in root config dir
**/*_generated.go # Generated Go files anywhere
Use ! patterns to include directories that are ignored by default:
# Include vendor directory for analysis
!vendor/
# Include specific node_modules package
!node_modules/my-local-package/
# Include Python virtual environment
!venv/
You can also include files that AI Distiller normally doesn't process:
# Include all markdown files
!*.md
!**/*.md
# Include configuration files
!*.yaml
!*.json
!.env
# Include specific documentation
!docs/**/*.txt
!README.md
!CHANGELOG.md
When you include non-source files with ! patterns, AI Distiller will include their raw content in the output.
You can place .aidignore files in subdirectories for more specific control:
# project/.aidignore
*.test.py
!vendor/ # Include vendor in this project
# project/src/.aidignore
test_*.go
*.mock.ts
!test_helpers.ts # Exception: include test_helpers.ts
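A simplified model of the `!`-negation semantics (last matching pattern wins, as in `.gitignore`) might look like the sketch below. It handles only bare filename globs; the real matcher also supports `**`, anchored `/` patterns, and per-directory files:

```python
import fnmatch

def matches_aidignore(path: str, patterns: list) -> bool:
    """Sketch of gitignore-style matching with `!` negation:
    the last pattern that matches decides whether the path is ignored."""
    ignored = False
    for pattern in patterns:
        negated = pattern.startswith("!")
        raw = pattern.lstrip("!").rstrip("/")
        # Match the full path or its basename against the glob.
        if fnmatch.fnmatch(path, raw) or fnmatch.fnmatch(path.split("/")[-1], raw):
            ignored = not negated
    return ignored

rules = ["*.test.js", "!important.test.js"]
print(matches_aidignore("app.test.js", rules))        # True  (ignored)
print(matches_aidignore("important.test.js", rules))  # False (re-included)
```

This is why ordering matters in the examples above: the `!` exception must come after the broader pattern it overrides.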
AI Distiller includes a special mode for analyzing git repositories. When you pass a .git directory, it switches to git log mode:
# View formatted git history
aid .git
# Limit to recent commits (default is 200)
aid .git --git-limit=500
# Include AI analysis prompt for comprehensive insights
aid .git --git-limit=1000 --with-analysis-prompt
The --with-analysis-prompt flag adds a sophisticated prompt combined with git history that AI agents can use to generate:
The output file contains both the analysis prompt and formatted git history, ready for AI agents to process. Perfect for understanding project history, identifying knowledge silos, or generating impressive development reports.
How accurate are the token counts?
Token counts are estimated using OpenAI's cl100k_base tokenizer (1 token ≈ 4 characters). Actual token usage varies by model - Claude and GPT-4 use similar tokenizers, while others may differ by ±10%.
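The 4-characters-per-token rule of thumb can be applied locally as a quick sanity check before pasting output into a model. This is the heuristic from the answer above, not the real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: 1 token ~ 4 characters, matching the
    cl100k_base rule of thumb above. Real counts vary by around +/-10%."""
    return max(1, len(text) // 4)

signature = "def drive(self, distance: int) -> None:"
print(estimate_tokens(signature), "tokens (approx.)")
# By the same rule, a 250K-token distilled output is roughly 1 MB of text.
```

For exact counts, run the distilled output through the target model's own tokenizer instead.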
Can AI Distiller handle very large repositories?
Yes! We've tested on repositories with 50,000+ files. The parallel processing mode (-w flag) scales linearly with CPU cores. Memory usage is bounded - large files are processed in streaming chunks.
What about generated code and vendor directories?
Create a .aidignore file (same syntax as .gitignore) to exclude generated files, vendor directories, or any paths you don't want processed.
What happens with unsupported file types?
Files with unknown or unsupported extensions are automatically skipped - no errors, no interruption. AI Distiller only processes files it has parsers for, ensuring clean and relevant output. This means you can safely run it on mixed repositories containing documentation, images, configs, etc.
Is my code sent anywhere?
No! AI Distiller runs 100% locally. It only extracts and formats your code structure - you decide what to do with the output. The tool itself makes no network connections.
Which programming languages are supported?
Currently 12+ languages via tree-sitter: Python, TypeScript, JavaScript, Go, Java, C#, Rust, Ruby, Swift, Kotlin, PHP, C++. All parsers are bundled in the binary - no external dependencies needed.
We welcome contributions! See CONTRIBUTING.md for guidelines.
# Clone and setup
git clone https://github.com/janreges/ai-distiller
cd ai-distiller
make dev-init # Initialize development environment
# Run tests
make test # Unit tests
make test-integration # Integration tests
# Build binary
make build # Build for current platform
AI Distiller requires CGO for full language support via tree-sitter parsers. To build release binaries for all supported platforms:
Ubuntu/Debian:
# Install cross-compilation toolchains
sudo apt-get update
sudo apt-get install -y gcc-aarch64-linux-gnu gcc-mingw-w64-x86-64
# For macOS cross-compilation, you need osxcross:
# 1. Clone osxcross: git clone https://github.com/tpoechtrager/osxcross tools/osxcross
# 2. Obtain macOS SDK (see https://github.com/tpoechtrager/osxcross#packaging-the-sdk)
# 3. Place SDK in tools/osxcross/tarballs/
# 4. Build osxcross: cd tools/osxcross && ./build.sh
# Build release archives for all platforms
./scripts/build-releases.sh
# This creates:
# - aid-linux-amd64.tar.gz (Linux 64-bit)
# - aid-linux-arm64.tar.gz (Linux ARM64)
# - aid-darwin-amd64.tar.gz (macOS Intel)
# - aid-darwin-arm64.tar.gz (macOS Apple Silicon)
# - aid-windows-amd64.zip (Windows 64-bit)
The script automatically detects available toolchains and builds for all possible platforms. Each archive contains the aid binary (or aid.exe for Windows) with full language support.
Note: Without proper toolchains, only the native platform will be built.
MIT License - see LICENSE for details.
Install via CLI
AI Distiller (aid) is a free, open-source AI agent skill.
Install AI Distiller (aid) with a single command:
npx mdskills install janreges/ai-distiller

This downloads the skill files into your project, and your AI agent picks them up automatically.
AI Distiller (aid) works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Codex, Gemini CLI, Amp, Roo Code, Goose, OpenCode, Trae, Qodo, Command Code, and ChatGPT. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.