LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
Add this skill:

    npx mdskills install applied-artificial-intelligence/llm-evaluation

Comprehensive evaluation framework with metrics, testing patterns, and statistical validation.
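To make the "systematic evaluation" idea concrete, here is a minimal sketch of a prompt-evaluation harness with an exact-match quality metric. Everything in it (`exact_match`, `evaluate`, the toy model) is illustrative and hypothetical, not the skill's actual API; substitute your real LLM call for `model_fn`.

```python
# Minimal sketch of a prompt-evaluation loop with an exact-match metric.
# All names here are hypothetical illustrations, not this skill's API.

def exact_match(prediction: str, reference: str) -> bool:
    """Compare after normalizing surrounding whitespace and case."""
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(model_fn, cases):
    """Run each (prompt, expected) pair through model_fn; return accuracy."""
    results = [exact_match(model_fn(prompt), expected)
               for prompt, expected in cases]
    return sum(results) / len(results)

if __name__ == "__main__":
    # Toy stand-in model: returns canned answers so the harness is runnable.
    canned = {"2+2=": "4", "Capital of France?": "Paris"}
    model = lambda prompt: canned.get(prompt, "")
    cases = [("2+2=", "4"), ("Capital of France?", "paris")]
    print(evaluate(model, cases))  # 1.0
```

Real harnesses layer more on top of this skeleton (semantic-similarity or LLM-as-judge metrics, statistical significance over repeated runs), but the loop structure — a fixed benchmark of cases, a metric per case, an aggregate score — stays the same.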