Add this skill:

```bash
npx mdskills install cyberchitta/llm-context-py
```
# LLM Context

[License](https://opensource.org/licenses/Apache-2.0)
[PyPI](https://pypi.org/project/llm-context/)
[Downloads](https://pepy.tech/project/llm-context)

**Smart context management for LLM development workflows.** Share relevant project files instantly through intelligent selection and rule-based filtering.

## The Problem

Getting the right context into LLM conversations is friction-heavy:

- Manually finding and copying relevant files wastes time
- Too much context hits token limits; too little misses important details
- AI requests for additional files require manual fetching
- It is hard to track what changed during development sessions

## The Solution

llm-context provides focused, task-specific project context through composable rules.

**For humans using chat interfaces:**

```bash
lc-select     # Smart file selection
lc-context    # Copy formatted context to clipboard
# Paste and work - the AI can access additional files via MCP
```

**For AI agents with CLI access:**

```bash
lc-preview tmp-prm-auth    # Validate that the rule selects the right files
lc-context tmp-prm-auth    # Get focused context for a sub-agent
```

**For AI agents in chat (MCP tools):**

- `lc_outlines` - Generate excerpted context from the current rule
- `lc_preview` - Validate rule effectiveness before use
- `lc_missing` - Fetch specific files/implementations on demand

> **Note**: This project was developed in collaboration with several Claude Sonnets (3.5, 3.6, 3.7, 4.0) and Groks (3, 4), using LLM Context itself to share code during development.
> All code is heavily human-curated by @restlessronin.

## Installation

```bash
uv tool install "llm-context>=0.6.0"
```

## Quick Start

### Human Workflow (Clipboard)

```bash
# One-time setup
cd your-project
lc-init

# Daily usage
lc-select
lc-context
# Paste into your LLM chat
```

### MCP Integration (Recommended)

Add to the Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```jsonc
{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
```

Restart Claude Desktop. The AI can now access additional files during conversations without manual copying.

### Agent Workflow (CLI)

AI agents with shell access use llm-context to create focused contexts:

```bash
# Agent explores the codebase
lc-outlines

# Agent creates a focused rule for a specific task
# (via the Skill or lc-rule-instructions)

# Agent validates the rule
lc-preview tmp-prm-oauth-task

# Agent uses the context for a sub-task
lc-context tmp-prm-oauth-task
```

### Agent Workflow (MCP)

AI agents in chat environments use MCP tools:

```python
# Explore codebase structure
lc_outlines(root_path, rule_name)

# Validate rule effectiveness
lc_preview(root_path, rule_name)

# Fetch specific files/implementations
lc_missing(root_path, param_type, data, timestamp)
```

## Core Concepts

### Rules: Task-Specific Context Descriptors

Rules are YAML+Markdown files that describe what context to provide for a task:

```yaml
---
description: "Debug API authentication"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/tests/auth/**"]
---
Focus on the authentication system and related tests.
```

### Five Rule Categories

- **Prompt Rules (`prm-`)**: Generate project contexts (e.g., `lc/prm-developer`)
- **Filter Rules (`flt-`)**: Control file inclusion (e.g., `lc/flt-base`,
  `lc/flt-no-files`)
- **Instruction Rules (`ins-`)**: Provide guidelines (e.g., `lc/ins-developer`)
- **Style Rules (`sty-`)**: Enforce coding standards (e.g., `lc/sty-python`)
- **Excerpt Rules (`exc-`)**: Configure content extraction (e.g., `lc/exc-base`)

### Rule Composition

Build complex rules from simpler ones:

```yaml
---
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [lc/flt-base, project-filters]
  excerpters: [lc/exc-base]
---
```

## Essential Commands

| Command              | Purpose                                  |
| -------------------- | ---------------------------------------- |
| `lc-init`            | Initialize project configuration         |
| `lc-select`          | Select files based on the current rule   |
| `lc-context`         | Generate and copy context                |
| `lc-context -p`      | Include prompt instructions              |
| `lc-context -m`      | Format as a separate message             |
| `lc-context -nt`     | No tools (manual workflow)               |
| `lc-set-rule <name>` | Switch the active rule                   |
| `lc-preview <rule>`  | Validate rule selection and size         |
| `lc-outlines`        | Get code structure excerpts              |
| `lc-missing`         | Fetch files/implementations (manual MCP) |

## AI-Assisted Rule Creation

Let AI help create focused, task-specific rules. There are two approaches, depending on your environment:

### Claude Skill (Interactive, Claude Desktop/Code)

**How it works**: A global skill guides you through creating rules interactively, examining your codebase as needed using MCP tools.

**Setup**:

```bash
lc-init    # Installs the skill to ~/.claude/skills/
# Restart Claude Desktop or Claude Code
```

**Usage**:

```bash
# 1. Share project context
lc-context    # Any rule - the overview is included

# 2. Paste into Claude, then ask:
# "Create a rule for refactoring authentication to JWT"
# "I need a rule to debug the payment processing"
```

Claude will:

1. Use the project overview already in context
2. Examine specific files via `lc-missing` as needed
3. Ask clarifying questions about scope
4. Generate an optimized rule (`tmp-prm-<task>.md`)
5. Provide validation instructions

**Skill documentation** (progressively disclosed):

- `Skill.md` - Quick workflow, decision patterns
- `PATTERNS.md` - Common rule patterns
- `SYNTAX.md` - Detailed reference
- `EXAMPLES.md` - Complete walkthroughs
- `TROUBLESHOOTING.md` - Problem solving

### Instruction Rules (Works Anywhere)

**How it works**: Load comprehensive rule-creation documentation into context and work with any LLM.

**Usage**:

```bash
# 1. Load the framework
lc-set-rule lc/prm-rule-create
lc-select
lc-context -nt

# 2. Paste into any LLM
# "I need a rule for adding OAuth integration"

# 3. The LLM generates a focused rule using the framework

# 4. Use the new rule
lc-set-rule tmp-prm-oauth
lc-select
lc-context
```

**Included documentation**:

- `lc/ins-rule-intro` - Introduction and overview
- `lc/ins-rule-framework` - Complete decision framework

### Comparison

| Aspect               | Skill                           | Instruction Rules        |
| -------------------- | ------------------------------- | ------------------------ |
| **Setup**            | Automatic with `lc-init`        | Already available        |
| **Interaction**      | Interactive, uses `lc-missing`  | Static documentation     |
| **File examination** | Automatic via MCP               | Manual or via AI         |
| **Best for**         | Claude Desktop/Code             | Any LLM, any environment |
| **Updates**          | Automatic with version upgrades | Built into rules         |

Both approaches require sharing project context first.
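Either way, the result is an ordinary rule file in `.llm-context/rules/` (e.g., `tmp-prm-oauth.md`). For the OAuth task above, the generated rule might look roughly like this; the description, globs, and body shown here are hypothetical and depend on your project layout:

```yaml
---
description: "Add OAuth integration"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/src/config/**"]
---
Focus on the authentication layer and its configuration.
```

Run `lc-preview tmp-prm-oauth` to confirm the rule selects the intended files before using it.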
Both produce equivalent results.

## Project Customization

### Create Base Filters

```bash
cat > .llm-context/rules/flt-repo-base.md << 'EOF'
---
description: "Repository-specific exclusions"
compose:
  filters: [lc/flt-base]
gitignores:
  full-files: ["*.md", "/tests", "/node_modules"]
  excerpted-files: ["*.md", "/tests"]
---
EOF
```

### Create Development Rule

```bash
cat > .llm-context/rules/prm-code.md << 'EOF'
---
description: "Main development rule"
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [flt-repo-base]
  excerpters: [lc/exc-base]
---
Additional project-specific guidelines and context.
EOF

lc-set-rule prm-code
```

## Deployment Patterns

Choose a format based on your LLM environment:

| Pattern                    | Command                       | Use Case              |
| -------------------------- | ----------------------------- | --------------------- |
| System Message             | `lc-context -p`               | AI Studio, etc.       |
| Single User Message        | `lc-context -p -m`            | Grok, etc.            |
| Separate Messages          | `lc-prompt` + `lc-context -m` | Flexible placement    |
| Project Files (included)   | `lc-context`                  | Claude Projects, etc. |
| Project Files (searchable) | `lc-context -m`               | Force into context    |

See [Deployment Patterns](docs/user-guide.md#deployment-patterns) for details.

## Key Features

- **Intelligent Selection**: Rules automatically include/exclude the appropriate files
- **Context Validation**: Preview size and selection before generation
- **Code Excerpting**: Extract structure while reducing tokens (15+ languages)
- **MCP Integration**: The AI accesses additional files without manual intervention
- **Composable Rules**: Build complex contexts from reusable patterns
- **AI-Assisted Creation**: Interactive skill or documentation-based approaches
- **Agent-Friendly**: CLI and MCP interfaces for autonomous operation

## Common Workflows

### Daily Development (Human)

```bash
lc-set-rule prm-code
lc-select
lc-context
# Paste into chat - the AI accesses more files via MCP if needed
```

### Focused Task (Human or Agent)

```bash
# Share project context first
lc-context

# Then create a focused rule:
# Via Skill: "Create a rule for [task]"
# Via Instructions: lc-set-rule lc/prm-rule-create && lc-context -nt

# Validate and use
lc-preview tmp-prm-task
lc-context tmp-prm-task
```

### Agent Context Provisioning (CLI)

```bash
# Agent validates rule effectiveness
lc-preview tmp-prm-refactor-auth

# Agent generates context for a sub-agent
lc-context tmp-prm-refactor-auth > /tmp/context.md
# The sub-agent reads the context and executes the task
```

### Agent Context Provisioning (MCP)

```python
# Agent validates the rule
preview = lc_preview(root_path="/path/to/project", rule_name="tmp-prm-task")

# Agent generates context
context = lc_outlines(root_path="/path/to/project")

# Agent fetches additional files as needed
files = lc_missing(root_path, "f", "['/proj/src/auth.py']", timestamp)
```

## Path Format

All paths use a project-relative format with a project-name
prefix:

```
/{project-name}/src/module/file.py
/{project-name}/tests/test_module.py
```

This enables multi-project context composition without path conflicts.

**In rules**, patterns are project-relative, without the prefix:

```yaml
also-include:
  full-files:
    - "/src/auth/**"         # ✓ Correct
    - "/myproject/src/**"    # ✗ Wrong - don't include the project name
```

## Learn More

- **[User Guide](docs/user-guide.md)** - Complete documentation with examples
- **[Design Philosophy](https://www.cyberchitta.cc/articles/llm-ctx-why.html)** - Why llm-context exists
- **[Real-world Examples](https://www.cyberchitta.cc/articles/full-context-magic.html)** - Using full context effectively

## License

Apache License, Version 2.0. See [LICENSE](LICENSE) for details.