Add this skill:

npx mdskills install cyberchitta/llm-context-py

Comprehensive context management tool with strong docs, multi-workflow support, and a thoughtful rule system.
Smart context management for LLM development workflows. Share relevant project files instantly through intelligent selection and rule-based filtering.
Getting the right context into LLM conversations is friction-heavy:

- Manually finding and copying relevant files wastes time
- Too much context hits token limits; too little misses important details
- AI requests for additional files require manual copying
llm-context provides focused, task-specific project context through composable rules.
For humans using chat interfaces:
lc-select # Smart file selection
lc-context # Copy formatted context to clipboard
# Paste and work - AI can access additional files via MCP
For AI agents with CLI access:
lc-preview tmp-prm-auth # Validate rule selects right files
lc-context tmp-prm-auth # Get focused context for sub-agent
For AI agents in chat (MCP tools):
- lc_outlines - Generate excerpted context from the current rule
- lc_preview - Validate rule effectiveness before use
- lc_missing - Fetch specific files/implementations on demand

Note: This project was developed in collaboration with several Claude Sonnets (3.5, 3.6, 3.7, 4.0) and Groks (3, 4), using LLM Context itself to share code during development. All code is heavily human-curated by @restlessronin.
uv tool install "llm-context>=0.6.0"
# One-time setup
cd your-project
lc-init
# Daily usage
lc-select
lc-context
# Paste into your LLM chat
Add to Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"llm-context": {
"command": "uvx",
"args": ["--from", "llm-context", "lc-mcp"]
}
}
}
Restart Claude Desktop. Now AI can access additional files during conversations without manual copying.
AI agents with shell access use llm-context to create focused contexts:
# Agent explores codebase
lc-outlines
# Agent creates focused rule for specific task
# (via Skill or lc-rule-instructions)
# Agent validates rule
lc-preview tmp-prm-oauth-task
# Agent uses context for sub-task
lc-context tmp-prm-oauth-task
AI agents in chat environments use MCP tools:
# Explore codebase structure
lc_outlines(root_path, rule_name)
# Validate rule effectiveness
lc_preview(root_path, rule_name)
# Fetch specific files/implementations
lc_missing(root_path, param_type, data, timestamp)
Rules are YAML+Markdown files that describe what context to provide for a task:
---
description: "Debug API authentication"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/tests/auth/**"]
---
Focus on authentication system and related tests.
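Conceptually, a rule file like the one above is just YAML frontmatter plus a Markdown body. A minimal Python sketch of splitting the two (illustrative only, not llm-context's actual parser):

```python
# Illustrative sketch: separate a rule file's YAML frontmatter from its
# Markdown body. NOT the tool's real parser, just the file-format concept.
def split_rule(text: str) -> tuple[str, str]:
    parts = text.split("---\n")
    # parts[0] is the empty prefix before the first ---,
    # parts[1] is the YAML block, the rest is the Markdown body.
    front = parts[1].strip()
    body = "---\n".join(parts[2:]).strip()
    return front, body

rule = """---
description: "Debug API authentication"
---
Focus on authentication system and related tests.
"""
front, body = split_rule(rule)
print(front)  # description: "Debug API authentication"
print(body)   # Focus on authentication system and related tests.
```

The frontmatter tells llm-context what to select; the body becomes task guidance in the generated context.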
Rule types, by filename prefix:

- Prompt rules (prm-): Generate project contexts (e.g., lc/prm-developer)
- Filter rules (flt-): Control file inclusion (e.g., lc/flt-base, lc/flt-no-files)
- Instruction rules (ins-): Provide guidelines (e.g., lc/ins-developer)
- Style rules (sty-): Enforce coding standards (e.g., lc/sty-python)
- Excerpt rules (exc-): Configure content extraction (e.g., lc/exc-base)

Build complex rules from simpler ones:
---
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [lc/flt-base, project-filters]
  excerpters: [lc/exc-base]
---
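The composition idea can be pictured as concatenating the lists each component rule contributes, in order. A conceptual sketch with hypothetical data (not the tool's real merge logic):

```python
# Conceptual sketch of rule composition: each composed rule contributes
# lists (filters, excerpters) that are concatenated in declaration order.
# Hypothetical data structures, not llm-context's implementation.
def compose(*rules: dict) -> dict:
    merged: dict = {}
    for rule in rules:
        for key, values in rule.items():
            merged.setdefault(key, []).extend(values)
    return merged

base = {"filters": ["lc/flt-base"], "excerpters": ["lc/exc-base"]}
project = {"filters": ["project-filters"]}
print(compose(base, project))
# {'filters': ['lc/flt-base', 'project-filters'], 'excerpters': ['lc/exc-base']}
```

This is why simple building blocks like lc/flt-base can be reused across many task-specific rules.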
| Command | Purpose |
|---|---|
| lc-init | Initialize project configuration |
| lc-select | Select files based on current rule |
| lc-context | Generate and copy context |
| lc-context -p | Include prompt instructions |
| lc-context -m | Format as separate message |
| lc-context -nt | No tools (manual workflow) |
| lc-set-rule | Switch active rule |
| lc-preview | Validate rule selection and size |
| lc-outlines | Get code structure excerpts |
| lc-missing | Fetch files/implementations (manual MCP) |
Let AI help create focused, task-specific rules. Two approaches depending on your environment:
How it works: Global skill guides you through creating rules interactively. Examines your codebase as needed using MCP tools.
Setup:
lc-init # Installs skill to ~/.claude/skills/
# Restart Claude Desktop or Claude Code
Usage:
# 1. Share project context
lc-context # Any rule - overview included
# 2. Paste into Claude, then ask:
# "Create a rule for refactoring authentication to JWT"
# "I need a rule to debug the payment processing"
Claude will:
- Examine files via lc-missing as needed
- Generate a focused rule file (tmp-prm-*.md)

Skill documentation (progressively disclosed):
- Skill.md - Quick workflow, decision patterns
- PATTERNS.md - Common rule patterns
- SYNTAX.md - Detailed reference
- EXAMPLES.md - Complete walkthroughs
- TROUBLESHOOTING.md - Problem solving

How it works: Load comprehensive rule-creation documentation into context and work with any LLM.
Usage:
# 1. Load framework
lc-set-rule lc/prm-rule-create
lc-select
lc-context -nt
# 2. Paste into any LLM
# "I need a rule for adding OAuth integration"
# 3. LLM generates focused rule using framework
# 4. Use the new rule
lc-set-rule tmp-prm-oauth
lc-select
lc-context
Included documentation:
- lc/ins-rule-intro - Introduction and overview
- lc/ins-rule-framework - Complete decision framework

| Aspect | Skill | Instruction Rules |
|---|---|---|
| Setup | Automatic with lc-init | Already available |
| Interaction | Interactive, uses lc-missing | Static documentation |
| File examination | Automatic via MCP | Manual or via AI |
| Best for | Claude Desktop/Code | Any LLM, any environment |
| Updates | Automatic with version upgrades | Built-in to rules |
Both require sharing project context first. Both produce equivalent results.
# Create custom rules (rule contents elided)
cat > .llm-context/rules/flt-repo-base.md << 'EOF'
...
EOF
cat > .llm-context/rules/prm-code.md << 'EOF'
...
EOF
# Generate context to a file for the sub-agent
lc-context > /tmp/context.md
# Sub-agent reads context and executes task
# Agent validates rule
preview = lc_preview(root_path="/path/to/project", rule_name="tmp-prm-task")
# Agent generates context
context = lc_outlines(root_path="/path/to/project")
# Agent fetches additional files as needed
files = lc_missing(root_path, "f", "['/proj/src/auth.py']", timestamp)
All paths use project-relative format with project name prefix:
/{project-name}/src/module/file.py
/{project-name}/tests/test_module.py
This enables multi-project context composition without path conflicts.
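The effect of the prefix can be sketched as follows (illustrative Python with hypothetical project names, not the tool's code):

```python
# Illustrative: prefixing each project's relative paths with its project
# name keeps identically-named files distinct when contexts are combined.
def qualify(project: str, rel_path: str) -> str:
    return f"/{project}{rel_path}"

a = qualify("backend", "/src/utils.py")
b = qualify("frontend", "/src/utils.py")
print(a, b)  # /backend/src/utils.py /frontend/src/utils.py
assert a != b  # no collision despite identical relative paths
```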
In rules, patterns are project-relative without the prefix:
also-include:
  full-files:
    - "/src/auth/**"       # ✓ Correct
    - "/myproject/src/**"  # ✗ Wrong - don't include project name
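As a rough illustration of how such patterns select files, Python's fnmatch can serve as a stand-in matcher (llm-context's actual glob semantics may differ):

```python
from fnmatch import fnmatch

# fnmatch is only a stand-in here; the tool's real matcher may treat
# "**" and path separators differently.
pattern = "/src/auth/**"
print(fnmatch("/src/auth/login.py", pattern))       # True
print(fnmatch("/src/auth/tokens/jwt.py", pattern))  # True
print(fnmatch("/src/payments/stripe.py", pattern))  # False
```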
Apache License, Version 2.0. See LICENSE for details.
Install via CLI:

npx mdskills install cyberchitta/llm-context-py

This downloads the skill files into your project, and your AI agent picks them up automatically.
LLM Context works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue Dev, Codex, Gemini CLI, Amp, Roo Code, Goose, opencode, Trae, Qodo, Command Code, Grok. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.