Repository-grounded threat modeling that enumerates trust boundaries, assets, attacker capabilities, abuse paths, and mitigations, and writes a concise Markdown threat model. Trigger only when the user explicitly asks to threat model a codebase or path, enumerate threats/abuse paths, or perform AppSec threat modeling. Do not trigger for general architecture summaries, code review, or non-security design work.
Comprehensive, evidence-grounded threat modeling workflow with strong methodology and quality gates.

Add this skill:

npx mdskills install openai/security-threat-model