AI Agent Skills Not Triggering? How the Description Field Works
Your skill works perfectly in testing but agents ignore it completely. The culprit is usually a poorly written description in your SKILL.md frontmatter. This single field determines when your AI agent skills trigger, yet most developers treat it as an afterthought.
The description acts as the matching layer between user requests and available skills. When someone asks "convert this image to a PDF," the agent scans all skill descriptions to find relevant matches. A vague description like "handles file operations" won't trigger. A specific one like "converts images to PDF using ImageMagick" will.
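For reference, here is a minimal sketch of that frontmatter, assuming the typical name-plus-description layout (the skill and its name are hypothetical):

---
name: image-to-pdf
description: "Converts PNG and JPEG images to PDF using ImageMagick"
---

The body below the frontmatter documents how the skill works; the description above is what the agent reads when deciding whether to use it.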
How agents parse descriptions for matching
Agents use semantic similarity to match user requests against skill descriptions. They're looking for concept overlap, not exact keyword matches. When you write "processes documents," the agent connects this to requests about "converting files" or "handling PDFs."
The matching happens in two phases. First, the agent identifies intent from the user's message. Then it scores each available skill description against that intent. Skills with higher semantic similarity scores get considered for execution.
This means your description needs semantic density. Words like "utility" or "helper" carry no matching weight. Specific verbs and nouns do. Compare these two descriptions:
description: "Utility for working with files"
versus:
description: "Converts images to PDF, extracts text from documents, resizes photos"
The second version triggers on image conversion requests, document text extraction, and photo resizing. The first triggers on almost nothing.
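You can see the mechanics with a toy scorer in Python. Real agents typically score with embedding models or similar semantic methods, so related words like "convert" and "converts" would also match; the plain word overlap below is only a stand-in to show the shape of the comparison:

import re
from collections import Counter

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def similarity(request: str, description: str) -> float:
    # Cosine similarity over word counts: a crude stand-in for the
    # embedding-based semantic scoring a real agent performs.
    a, b = Counter(tokens(request)), Counter(tokens(description))
    dot = sum(a[w] * b[w] for w in a)
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

skills = {
    "file-helper": "Utility for working with files",
    "image-to-pdf": "Converts images to PDF, extracts text from documents, resizes photos",
}
request = "convert this image to a PDF"
best = max(skills, key=lambda name: similarity(request, skills[name]))
print(best)  # image-to-pdf: it shares "pdf" with the request; the vague skill shares nothing

Even this crude scorer picks the specific description. An embedding model widens the gap further, because "convert"/"converts" and "image"/"images" count as matches too.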
Writing descriptions that actually trigger
Start with action verbs. Agents match strongly on "converts," "extracts," "generates," "analyzes," "downloads." These words signal clear capabilities to the matching algorithm.
Include the input and output types. "Converts Markdown to HTML" tells the agent exactly when to trigger. "Processes text" leaves too much ambiguity. The agent needs to know what goes in and what comes out.
Mention specific formats when relevant. If your skill handles CSV files, say "CSV" in the description. If it works with images, list the formats: "PNG, JPEG, WebP." Format names carry strong semantic weight for matching.
description: "Downloads YouTube videos as MP4 files, extracts audio as MP3"
This triggers on YouTube download requests, video conversion requests, and audio extraction requests. Three clear trigger patterns from one description.
Common description mistakes that break triggering
Generic business language kills matching. Descriptions like "streamlines workflow" or "enables productivity" tell the agent nothing actionable. Users don't ask to "streamline workflows." They ask to "convert this spreadsheet" or "download that video."
Overuse of technical jargon backfires too. If your description says it "performs ETL operations on structured datasets," most users will still ask to "import CSV data" or "clean this spreadsheet." Your description has to bridge the gap between everyday language and technical capability.
Don't list technologies instead of functions. "Uses pandas and numpy" means nothing for triggering. "Analyzes data, creates charts, calculates statistics" tells the agent when to activate your skill. Focus on what users want done, not how you do it.
Single-purpose descriptions often work better than multi-purpose ones. Instead of "handles various file operations," try separate skills for "converts PDFs to text" and "merges multiple PDFs." Narrow descriptions trigger more reliably.
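For instance, a hypothetical catch-all "pdf-tools" skill splits cleanly into two narrow ones, each with a description that maps to one clear request type:

description: "Converts PDF files to plain text"

and:

description: "Merges multiple PDFs into a single document"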
Testing your description effectiveness
Create realistic user requests and check if your description would match. If someone says "make this image smaller," would your description "resizes and compresses images" trigger? Probably yes. Would "image utility tool" trigger? Unlikely.
Try the opposite direction too. Given your description, what requests should trigger it? If you can't think of specific user phrases that would match, rewrite the description.
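A rough lint pass can catch the worst offenders before you test against real requests. This sketch flags vague filler words and missing action verbs; both word lists are illustrative, not exhaustive:

import re

VAGUE = {"utility", "helper", "tool", "handles", "various",
         "streamlines", "enables", "productivity", "workflow"}
ACTION_VERBS = {"converts", "extracts", "generates", "analyzes", "downloads",
                "resizes", "compresses", "merges", "creates", "calculates"}

def audit(description: str) -> list[str]:
    words = set(re.findall(r"[a-z]+", description.lower()))
    problems = []
    if words & VAGUE:
        problems.append("vague filler: " + ", ".join(sorted(words & VAGUE)))
    if not words & ACTION_VERBS:
        problems.append("no concrete action verb")
    return problems

print(audit("Utility for working with files"))  # flags "utility" and the missing action verb
print(audit("Resizes and compresses images"))   # returns an empty list

Passing this check doesn't guarantee triggering, but failing it almost guarantees the description needs work.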
Look at successful skills in the Browse skills section. Notice how their descriptions use concrete action words and specific formats. "Extracts metadata from images" beats "image processing tool" every time.
The SKILL.md spec shows the required description format but doesn't explain the matching behavior behind it. Understanding how agents judge similarity helps you write better descriptions.
Multiple skills competing for the same request
When several skills have similar descriptions, agents pick based on semantic closeness and skill quality signals. A description that exactly matches the user's language wins over a close match.
If your skill keeps losing to competitors, check their descriptions. Are they more specific? Do they use language closer to how users actually talk? Sometimes the fix is simply rewording your description to match common user phrasing.
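For example, if users typically type "shrink this PDF" or "make this PDF smaller," the second of these two hypothetical descriptions sits closer to that phrasing and should win the tie:

description: "Optimizes PDF file size through compression algorithms"

versus:

description: "Shrinks and compresses PDF files to make them smaller"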
Quality signals matter too. Skills with better documentation, clearer examples, and positive usage patterns get preference when descriptions tie. This is why the Best practices guide recommends comprehensive SKILL.md files that go beyond the description.
Advanced description patterns
Some descriptions work better with context clues. Instead of just "generates QR codes," try "generates QR codes from URLs, text, and contact information." This triggers on broader request types while staying specific.
Conditional descriptions can help: "converts Word documents to PDF if pandoc is installed." This sets user expectations up front while keeping the core trigger phrase, "converts Word documents to PDF," intact.
Action chains work well too: "downloads web pages, extracts main content, converts to Markdown." Each step in the chain creates a potential trigger point.
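Putting these patterns together, a hypothetical web-clipping skill might combine an action chain with a dependency caveat in its frontmatter:

---
name: web-to-markdown
description: "Downloads web pages, extracts the main article content, converts the HTML to Markdown; requires pandoc"
---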
When descriptions aren't the problem
Sometimes skills don't trigger because of dependency issues or installation problems. Check the Install skills guide if your description seems right but the skill never activates.
Runtime errors can also prevent triggering. If your skill fails during execution, agents may stop trying to use it. Good error handling and clear dependency documentation prevent this.
The skill might also conflict with built-in agent capabilities. If you write a calculator skill but the agent already has math capabilities, your skill may never get chosen. Focus on capabilities that extend beyond the agent's base functions.
Skills with poor examples or unclear documentation get deprioritized even with good descriptions. The description gets the agent's attention, but the rest of the SKILL.md file determines if it actually gets used.
Your description is the front door to skill activation. Make it specific, action-oriented, and semantically rich. The difference between "file utility" and "converts PDFs to Word documents" is often the difference between a skill that works and one that sits unused.