Caching strategies for LLM prompts, including Anthropic prompt caching, response caching, and CAG (Cache-Augmented Generation). Use when: prompt caching, cache prompt, response cache, cag, cache augmented.
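As an illustration of the first strategy, Anthropic prompt caching marks a long, stable prompt prefix (system instructions, reference documents) so repeat requests can reuse it instead of reprocessing it from scratch. A minimal sketch, assuming the official `anthropic` Python SDK; the model name and `LONG_REFERENCE_DOC` are placeholders:

```python
# A minimal prompt-caching sketch, assuming the `anthropic` Python SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_REFERENCE_DOC = "..." * 1000  # stand-in for a large, stable context block

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use any cache-capable model
    max_tokens=1024,
    system=[
        {"type": "text", "text": "You answer questions about the reference document."},
        {
            "type": "text",
            "text": LONG_REFERENCE_DOC,
            # Marks the prefix up to this block as cacheable; later calls with
            # an identical prefix read it back at a reduced token cost.
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[{"role": "user", "content": "Summarize section 2."}],
)

# Usage fields report how much of the prompt was written to vs. read from cache.
print(response.usage.cache_creation_input_tokens,
      response.usage.cache_read_input_tokens)
```

Response caching is simpler still: memoize the full completion keyed on the exact request, so identical prompts skip the API call entirely. A sketch under the same SDK assumption, with an in-memory dict standing in for a real cache store:

```python
# A minimal response-cache sketch: key on (model, prompt), reuse completions.
import hashlib
import json

_response_cache: dict[str, str] = {}

def cached_completion(client, model: str, prompt: str) -> str:
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _response_cache:
        msg = client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        _response_cache[key] = msg.content[0].text
    return _response_cache[key]
```

Prompt caching saves cost on requests that share a prefix but differ in the final question; response caching only helps when the entire request repeats verbatim. CAG extends the first idea by caching an entire knowledge corpus as the prefix.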
Add this skill
npx mdskills install sickn33/prompt-caching

Strong caching framework with anti-patterns and edge cases, but lacks actionable implementation steps.