---
name: evaluation
description: This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.
---

# Evaluation Methods for Agent Systems

Evaluation of agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack a single correct answer. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, and validates that context engineering choices achieve their intended effects.

## When to Activate

Activate this skill when:
- Testing agent performance systematically
- Validating context engineering choices
- Measuring improvements over time
- Catching regressions before deployment
- Building quality gates for agent pipelines
- Comparing different agent configurations
- Evaluating production systems continuously

## Core Concepts

Agent evaluation requires outcome-focused approaches that account for non-determinism and multiple valid paths. Multi-dimensional rubrics capture the various aspects of quality: factual accuracy, completeness, citation accuracy, source quality, and tool efficiency. LLM-as-judge provides scalable evaluation, while human evaluation catches edge cases.

The key insight is that agents may find alternative paths to the same goal; evaluation should judge whether they achieve the right outcomes while following reasonable processes.

**Performance Drivers: The 95% Finding**
Research on the BrowseComp evaluation (which tests browsing agents' ability to locate hard-to-find information) found that three factors explain 95% of performance variance:

| Factor | Variance Explained | Implication |
|--------|--------------------|-------------|
| Token usage | 80% | More tokens = better performance |
| Number of tool calls | ~10% | More exploration helps |
| Model choice | ~5% | Better models multiply efficiency |

This finding has significant implications for evaluation design:
- **Token budgets matter**: Evaluate agents with realistic token budgets, not unlimited resources
- **Model upgrades beat token increases**: Upgrading to Claude Sonnet 4.5 or GPT-5.2 provides larger gains than doubling token budgets on previous versions
- **Multi-agent validation**: The finding validates architectures that distribute work across agents with separate context windows

## Detailed Topics

### Evaluation Challenges

**Non-Determinism and Multiple Valid Paths**
Agents may take completely different but equally valid paths to reach a goal. One agent might search three sources while another searches ten. They might use different tools to find the same answer. Traditional evaluations that check for specific steps fail in this context.

The solution is outcome-focused evaluation that judges whether agents achieve the right outcomes while following reasonable processes.

**Context-Dependent Failures**
Agent failures often depend on context in subtle ways. An agent might succeed on simple queries but fail on complex ones. It might work well with one tool set but fail with another. Failures may emerge only after extended interaction, once context accumulates.

Evaluation must cover a range of complexity levels and test extended interactions, not just isolated queries.

**Composite Quality Dimensions**
Agent quality is not a single dimension. It includes factual accuracy, completeness, coherence, tool efficiency, and process quality. An agent might score high on accuracy but low on efficiency, or vice versa.

Evaluation rubrics must capture multiple dimensions with appropriate weighting for the use case.

### Evaluation Rubric Design

**Multi-Dimensional Rubric**
Effective rubrics cover key dimensions with descriptive levels:

- Factual accuracy: Claims match ground truth (excellent to failed)
- Completeness: Output covers requested aspects (excellent to failed)
- Citation accuracy: Citations match claimed sources (excellent to failed)
- Source quality: Uses appropriate primary sources (excellent to failed)
- Tool efficiency: Uses the right tools a reasonable number of times (excellent to failed)

**Rubric Scoring**
Convert dimension assessments to numeric scores (0.0 to 1.0) with appropriate weighting, calculate a weighted overall score, and determine the passing threshold based on use case requirements.
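The sketch below shows one way a rubric and its scoring could be represented; the dimension weights, the level-to-score mapping, and the 0.7 threshold are illustrative assumptions, not prescribed values.

```python
# Illustrative rubric weights and level-to-score mapping (assumptions, adjust per use case).
RUBRIC_WEIGHTS = {
    "factual_accuracy": 0.35,
    "completeness": 0.25,
    "citation_accuracy": 0.15,
    "source_quality": 0.15,
    "tool_efficiency": 0.10,
}

LEVEL_SCORES = {"excellent": 1.0, "good": 0.75, "adequate": 0.5, "poor": 0.25, "failed": 0.0}


def score_rubric(assessments: dict, passing_threshold: float = 0.7) -> dict:
    """Convert per-dimension level labels (e.g. {"completeness": "good"}) into a weighted score."""
    scores = {dim: LEVEL_SCORES[level] for dim, level in assessments.items()}
    total_weight = sum(RUBRIC_WEIGHTS[dim] for dim in scores)
    overall = sum(scores[dim] * RUBRIC_WEIGHTS[dim] for dim in scores) / total_weight
    return {"scores": scores, "overall": overall, "passed": overall >= passing_threshold}
```

Keeping the weights normalized (or dividing by the total weight, as above) keeps the overall score on the same 0.0–1.0 scale as the per-dimension scores.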
### Evaluation Methodologies

**LLM-as-Judge**
LLM-based evaluation scales to large test sets and provides consistent judgments. The key is designing effective evaluation prompts that capture the dimensions of interest.

Provide a clear task description, the agent output, ground truth (if available), and the evaluation scale with level descriptions, then request a structured judgment.
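As a rough illustration of that prompt structure, the sketch below assembles a judge prompt and parses a JSON verdict. The prompt wording and the `call_llm` parameter are placeholders for whatever model client the project uses, not a specific API.

```python
import json


def build_judge_prompt(task: str, output: str, ground_truth, rubric: dict) -> str:
    """Assemble an LLM-as-judge prompt: task, agent output, optional ground truth, and scale."""
    scale = "\n".join(f"- {dim}: rate from excellent to failed ({desc})"
                      for dim, desc in rubric.items())
    ground_truth_block = f"Ground truth:\n{ground_truth}\n\n" if ground_truth else ""
    return (
        "You are evaluating an agent's output.\n\n"
        f"Task:\n{task}\n\n"
        f"Agent output:\n{output}\n\n"
        f"{ground_truth_block}"
        f"Evaluation dimensions:\n{scale}\n\n"
        'Respond with JSON: {"scores": {<dimension>: <level>}, "reasoning": <string>}.'
    )


def judge_output(task, output, ground_truth, rubric, call_llm):
    """Assumes call_llm(prompt) returns the model's text and that the model replies with bare JSON."""
    raw = call_llm(build_judge_prompt(task, output, ground_truth, rubric))
    return json.loads(raw)
```

Requesting structured JSON makes the judgment easy to aggregate with the rubric scoring shown earlier.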
**Human Evaluation**
Human evaluation catches what automation misses. Humans notice hallucinated answers on unusual queries, system failures, and subtle biases that automated evaluation misses.

Effective human evaluation covers edge cases, samples systematically, tracks patterns, and provides contextual understanding.

**End-State Evaluation**
For agents that mutate persistent state, end-state evaluation focuses on whether the final state matches expectations rather than how the agent got there.

### Test Set Design

**Sample Selection**
Start with small samples during development. Early in agent development, changes have dramatic impacts because there is abundant low-hanging fruit. Small test sets reveal large effects.

Sample from real usage patterns. Add known edge cases. Ensure coverage across complexity levels.

**Complexity Stratification**
Test sets should span complexity levels: simple (single tool call), medium (multiple tool calls), complex (many tool calls, significant ambiguity), and very complex (extended interaction, deep reasoning).

### Context Engineering Evaluation

**Testing Context Strategies**
Context engineering choices should be validated through systematic evaluation. Run agents with different context strategies on the same test set. Compare quality scores, token usage, and efficiency metrics.

**Degradation Testing**
Test how context degradation affects performance by running agents at different context sizes. Identify performance cliffs where context becomes problematic. Establish safe operating limits.
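A minimal sketch of such a sweep, assuming a `run_agent` callable that accepts a context-size limit and an `evaluate` function like the rubric scoring above; both names are placeholders for the project's own runner and scorer.

```python
def degradation_sweep(run_agent, evaluate, test_set,
                      context_sizes=(8_000, 32_000, 64_000, 128_000)):
    """Run the same test set at increasing context-size limits and report average quality."""
    results = {}
    for size in context_sizes:
        scores = []
        for case in test_set:
            response = run_agent(case, max_context_tokens=size)
            scores.append(evaluate(response, case)["overall"])
        results[size] = sum(scores) / len(scores)
    # A sharp drop between adjacent sizes suggests a performance cliff.
    return results
```

Plotting the returned averages against context size makes performance cliffs visible and suggests a safe operating limit.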
### Continuous Evaluation

**Evaluation Pipeline**
Build evaluation pipelines that run automatically on agent changes. Track results over time. Compare versions to identify improvements or regressions.

**Monitoring Production**
Track evaluation metrics in production by randomly sampling interactions and evaluating them. Set alerts for quality drops. Maintain dashboards for trend analysis.

## Practical Guidance

### Building Evaluation Frameworks

1. Define quality dimensions relevant to your use case
2. Create rubrics with clear, actionable level descriptions
3. Build test sets from real usage patterns and edge cases
4. Implement automated evaluation pipelines
5. Establish baseline metrics before making changes
6. Run evaluations on all significant changes
7. Track metrics over time for trend analysis
8. Supplement automated evaluation with human review

### Avoiding Evaluation Pitfalls

- Overfitting to specific paths: Evaluate outcomes, not specific steps.
- Ignoring edge cases: Include diverse test scenarios.
- Single-metric obsession: Use multi-dimensional rubrics.
- Neglecting context effects: Test with realistic context sizes.
- Skipping human evaluation: Automated evaluation misses subtle issues.

## Examples

**Example 1: Simple Evaluation**
```python
def evaluate_agent_response(response, expected):
    rubric = load_rubric()  # maps each dimension to its config, including a weight
    scores = {}
    for dimension, config in rubric.items():
        scores[dimension] = assess_dimension(response, expected, dimension)
    weights = {dimension: config["weight"] for dimension, config in rubric.items()}
    overall = weighted_average(scores, weights)
    return {"passed": overall >= 0.7, "scores": scores}
```

**Example 2: Test Set Structure**

Test sets should span multiple complexity levels to ensure comprehensive evaluation:

```python
test_set = [
    {
        "name": "simple_lookup",
        "input": "What is the capital of France?",
        "expected": {"type": "fact", "answer": "Paris"},
        "complexity": "simple",
        "description": "Single tool call, factual lookup"
    },
    {
        "name": "medium_query",
        "input": "Compare the revenue of Apple and Microsoft last quarter",
        "complexity": "medium",
        "description": "Multiple tool calls, comparison logic"
    },
    {
        "name": "multi_step_reasoning",
        "input": "Analyze sales data from Q1-Q4 and create a summary report with trends",
        "complexity": "complex",
        "description": "Many tool calls, aggregation, analysis"
    },
    {
        "name": "research_synthesis",
        "input": "Research emerging AI technologies, evaluate their potential impact, and recommend adoption strategy",
        "complexity": "very_complex",
        "description": "Extended interaction, deep reasoning, synthesis"
    }
]
```

## Guidelines

1. Use multi-dimensional rubrics, not single metrics
2. Evaluate outcomes, not specific execution paths
3. Cover complexity levels from simple to complex
4. Test with realistic context sizes and histories
5. Run evaluations continuously, not just before release
6. Supplement LLM evaluation with human review
7. Track metrics over time for trend detection
8. Set clear pass/fail thresholds based on use case

## Integration

This skill connects to all other skills as a cross-cutting concern:

- context-fundamentals - Evaluating context usage
- context-degradation - Detecting degradation
- context-optimization - Measuring optimization effectiveness
- multi-agent-patterns - Evaluating coordination
- tool-design - Evaluating tool effectiveness
- memory-systems - Evaluating memory quality

## References

Internal reference:
- [Metrics Reference](./references/metrics.md) - Detailed evaluation metrics and implementation

Internal skills:
- All other skills connect to evaluation for quality measurement

External resources:
- LLM evaluation benchmarks
- Agent evaluation research papers
- Production monitoring practices

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2025-12-20
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.0.0