Use when working with incident response
Add this skill
`npx mdskills install sickn33/incident-response-incident-response`

Comprehensive multi-phase incident response workflow with specialized agents and SRE practices
---
name: incident-response-incident-response
description: "Use when working with incident response"
---

## Use this skill when

- Working on incident response tasks or workflows
- Needing guidance, best practices, or checklists for incident response

## Do not use this skill when

- The task is unrelated to incident response
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

Orchestrate multi-agent incident response with modern SRE practices for rapid resolution and learning:

[Extended thinking: This workflow implements a comprehensive incident command system (ICS) following modern SRE principles. Multiple specialized agents collaborate through defined phases: detection/triage, investigation/mitigation, communication/coordination, and resolution/postmortem. The workflow emphasizes speed without sacrificing accuracy, maintains clear communication channels, and ensures every incident becomes a learning opportunity through blameless postmortems and systematic improvements.]

## Configuration

### Severity Levels
- **P0/SEV-1**: Complete outage, security breach, or data loss; immediate all-hands response
- **P1/SEV-2**: Major degradation, significant user impact; rapid response required
- **P2/SEV-3**: Minor degradation, limited impact; standard response
- **P3/SEV-4**: Cosmetic issues, no user impact; scheduled resolution

### Incident Types
- Performance degradation
- Service outage
- Security incident
- Data integrity issue
- Infrastructure failure
- Third-party service disruption
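The severity table drives paging and communication cadence throughout the workflow. A minimal sketch of how that policy might be encoded for automation: the labels mirror the table above, and the 5-minute classification target and 15-30 minute update cadence come from the Success Criteria section below, while the remaining values are illustrative assumptions.

```python
# Illustrative severity policy. Labels match the table above; the P0/P1
# update cadences and 5-minute ack target come from this skill's Success
# Criteria, and the other values are assumptions for the sketch.
SEVERITY_POLICY = {
    "P0": {"page": "all-hands", "ack_minutes": 5,  "update_minutes": 15},
    "P1": {"page": "on-call",   "ack_minutes": 15, "update_minutes": 30},
    "P2": {"page": "on-call",   "ack_minutes": 60, "update_minutes": 60},
    "P3": {"page": None,        "ack_minutes": None, "update_minutes": None},
}

def response_policy(severity: str) -> dict:
    """Look up the response policy for a classified incident (P0-P3)."""
    try:
        return SEVERITY_POLICY[severity]
    except KeyError:
        raise ValueError(f"unknown severity {severity!r}; expected P0-P3")
```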
## Phase 1: Detection & Triage

### 1. Incident Detection and Classification
- Use Task tool with subagent_type="incident-responder"
- Prompt: "URGENT: Detect and classify incident: $ARGUMENTS. Analyze alerts from PagerDuty/Opsgenie/monitoring. Determine: 1) Incident severity (P0-P3), 2) Affected services and dependencies, 3) User impact and business risk, 4) Initial incident command structure needed. Check error budgets and SLO violations."
- Output: Severity classification, impact assessment, incident command assignments, SLO status
- Context: Initial alerts, monitoring dashboards, recent changes

### 2. Observability Analysis
- Use Task tool with subagent_type="observability-monitoring::observability-engineer"
- Prompt: "Perform rapid observability sweep for incident: $ARGUMENTS. Query: 1) Distributed tracing (OpenTelemetry/Jaeger), 2) Metrics correlation (Prometheus/Grafana/DataDog), 3) Log aggregation (ELK/Splunk), 4) APM data, 5) Real User Monitoring. Identify anomalies, error patterns, and service degradation points."
- Output: Observability findings, anomaly detection, service health matrix, trace analysis
- Context: Severity level from step 1, affected services

### 3. Initial Mitigation
- Use Task tool with subagent_type="incident-responder"
- Prompt: "Implement immediate mitigation for P$SEVERITY incident: $ARGUMENTS. Actions: 1) Traffic throttling/rerouting if needed, 2) Feature flag disabling for affected features, 3) Circuit breaker activation, 4) Rollback assessment for recent deployments, 5) Scale resources if capacity-related. Prioritize user experience restoration."
- Output: Mitigation actions taken, temporary fixes applied, rollback decisions
- Context: Observability findings, severity classification

## Phase 2: Investigation & Root Cause Analysis

### 4. Deep System Debugging
- Use Task tool with subagent_type="error-debugging::debugger"
- Prompt: "Conduct deep debugging for incident: $ARGUMENTS using observability data. Investigate: 1) Stack traces and error logs, 2) Database query performance and locks, 3) Network latency and timeouts, 4) Memory leaks and CPU spikes, 5) Dependency failures and cascading errors. Apply Five Whys analysis."
- Output: Root cause identification, contributing factors, dependency impact map
- Context: Observability analysis, mitigation status

### 5. Security Assessment
- Use Task tool with subagent_type="security-scanning::security-auditor"
- Prompt: "Assess security implications of incident: $ARGUMENTS. Check: 1) DDoS attack indicators, 2) Authentication/authorization failures, 3) Data exposure risks, 4) Certificate issues, 5) Suspicious access patterns. Review WAF logs, security groups, and audit trails."
- Output: Security assessment, breach analysis, vulnerability identification
- Context: Root cause findings, system logs

### 6. Performance Engineering Analysis
- Use Task tool with subagent_type="application-performance::performance-engineer"
- Prompt: "Analyze performance aspects of incident: $ARGUMENTS. Examine: 1) Resource utilization patterns, 2) Query optimization opportunities, 3) Caching effectiveness, 4) Load balancer health, 5) CDN performance, 6) Autoscaling triggers. Identify bottlenecks and capacity issues."
- Output: Performance bottlenecks, resource recommendations, optimization opportunities
- Context: Debug findings, current mitigation state

## Phase 3: Resolution & Recovery

### 7. Fix Implementation
- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Design and implement production fix for incident: $ARGUMENTS based on root cause. Requirements: 1) Minimal viable fix for rapid deployment, 2) Risk assessment and rollback capability, 3) Staged rollout plan with monitoring, 4) Validation criteria and health checks. Consider both immediate fix and long-term solution."
- Output: Fix implementation, deployment strategy, validation plan, rollback procedures
- Context: Root cause analysis, performance findings, security assessment

### 8. Deployment and Validation
- Use Task tool with subagent_type="deployment-strategies::deployment-engineer"
- Prompt: "Execute emergency deployment for incident fix: $ARGUMENTS. Process: 1) Blue-green or canary deployment, 2) Progressive rollout with monitoring, 3) Health check validation at each stage, 4) Rollback triggers configured, 5) Real-time monitoring during deployment. Coordinate with incident command."
- Output: Deployment status, validation results, monitoring dashboard, rollback readiness
- Context: Fix implementation, current system state
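Step 8's progressive rollout follows the standard canary pattern: shift a slice of traffic, observe, then either advance or roll back. A minimal sketch of that gating loop follows; the stage percentages, soak time, and error-rate threshold are illustrative, and `deploy`, `error_rate`, and `rollback` are placeholders for whatever hooks the actual deployment platform provides.

```python
import time

STAGES = [5, 25, 50, 100]   # percent of traffic per rollout stage (assumed)
SOAK_SECONDS = 300          # observation window per stage (assumed)
MAX_ERROR_RATE = 0.01       # rollback trigger threshold (assumed)

def progressive_rollout(deploy, error_rate, rollback) -> bool:
    """Advance through canary stages, rolling back if a health check fails."""
    for percent in STAGES:
        deploy(percent)            # shift `percent` of traffic to the new version
        time.sleep(SOAK_SECONDS)   # let metrics accumulate before judging
        if error_rate() > MAX_ERROR_RATE:
            rollback()             # trigger fires; incident command decides next steps
            return False
    return True                    # fully rolled out and validated
```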
## Phase 4: Communication & Coordination

### 9. Stakeholder Communication
- Use Task tool with subagent_type="content-marketing::content-marketer"
- Prompt: "Manage incident communication for: $ARGUMENTS. Create: 1) Status page updates (public-facing), 2) Internal engineering updates (technical details), 3) Executive summary (business impact/ETA), 4) Customer support briefing (talking points), 5) Timeline documentation with key decisions. Update every 15-30 minutes based on severity."
- Output: Communication artifacts, status updates, stakeholder briefings, timeline log
- Context: All previous phases, current resolution status

### 10. Customer Impact Assessment
- Use Task tool with subagent_type="incident-responder"
- Prompt: "Assess and document customer impact for incident: $ARGUMENTS. Analyze: 1) Affected user segments and geography, 2) Failed transactions or data loss, 3) SLA violations and contractual implications, 4) Customer support ticket volume, 5) Revenue impact estimation. Prepare proactive customer outreach list."
- Output: Customer impact report, SLA analysis, outreach recommendations
- Context: Resolution progress, communication status

## Phase 5: Postmortem & Prevention

### 11. Blameless Postmortem
- Use Task tool with subagent_type="documentation-generation::docs-architect"
- Prompt: "Conduct blameless postmortem for incident: $ARGUMENTS. Document: 1) Complete incident timeline with decisions, 2) Root cause and contributing factors (systems focus), 3) What went well in response, 4) What could improve, 5) Action items with owners and deadlines, 6) Lessons learned for team education. Follow SRE postmortem best practices."
- Output: Postmortem document, action items list, process improvements, training needs
- Context: Complete incident history, all agent outputs

### 12. Monitoring and Alert Enhancement
- Use Task tool with subagent_type="observability-monitoring::observability-engineer"
- Prompt: "Enhance monitoring to prevent recurrence of: $ARGUMENTS. Implement: 1) New alerts for early detection, 2) SLI/SLO adjustments if needed, 3) Dashboard improvements for visibility, 4) Runbook automation opportunities, 5) Chaos engineering scenarios for testing. Ensure alerts are actionable and reduce noise."
- Output: New monitoring configuration, alert rules, dashboard updates, runbook automation
- Context: Postmortem findings, root cause analysis

### 13. System Hardening
- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Design system improvements to prevent incident: $ARGUMENTS. Propose: 1) Architecture changes for resilience (circuit breakers, bulkheads), 2) Graceful degradation strategies, 3) Capacity planning adjustments, 4) Technical debt prioritization, 5) Dependency reduction opportunities. Create implementation roadmap."
- Output: Architecture improvements, resilience patterns, technical debt items, roadmap
- Context: Postmortem action items, performance analysis
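Step 13 names circuit breakers and bulkheads as resilience patterns. For reference, a minimal circuit breaker looks like the sketch below: fail fast once a dependency has failed repeatedly, then allow a trial call after a cooldown. The threshold and cooldown are placeholder values; a production implementation would add metrics and per-dependency tuning.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after repeated failures, retry after a cooldown."""

    def __init__(self, max_failures: int = 5, reset_seconds: float = 30.0):
        self.max_failures = max_failures    # placeholder threshold
        self.reset_seconds = reset_seconds  # placeholder cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open: failing fast to protect the dependency")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                  # success closes the circuit
        return result
```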
## Success Criteria

### Immediate Success (During Incident)
- Service restoration within SLA targets
- Accurate severity classification within 5 minutes
- Stakeholder communication every 15-30 minutes
- No cascading failures or incident escalation
- Clear incident command structure maintained

### Long-term Success (Post-Incident)
- Comprehensive postmortem within 48 hours
- All action items assigned with deadlines
- Monitoring improvements deployed within 1 week
- Runbook updates completed
- Team training conducted on lessons learned
- Error budget impact assessed and communicated

## Coordination Protocols

### Incident Command Structure
- **Incident Commander**: Decision authority, coordination
- **Technical Lead**: Technical investigation and resolution
- **Communications Lead**: Stakeholder updates
- **Subject Matter Experts**: Specific system expertise

### Communication Channels
- War room (Slack/Teams channel or Zoom)
- Status page updates (StatusPage, Statusly)
- PagerDuty/Opsgenie for alerting
- Confluence/Notion for documentation

### Handoff Requirements
- Each phase provides clear context to the next
- All findings documented in shared incident doc
- Decision rationale recorded for postmortem
- Timestamp all significant events

Production incident requiring immediate response: $ARGUMENTS
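The final line above is the invocation template: when the workflow is triggered, $ARGUMENTS is replaced with the incident description. Assuming a simple string-template substitution on the host side (an assumption; this skill does not specify the runtime mechanism), the expansion amounts to:

```python
from string import Template

# Hypothetical expansion of the invocation template; the runtime performing
# this substitution is assumed, not specified by the skill. The incident
# description below is an invented example.
INVOCATION = Template("Production incident requiring immediate response: $ARGUMENTS")

prompt = INVOCATION.substitute(ARGUMENTS="checkout API p99 latency above 5s in us-east-1")
print(prompt)
```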
Full transparency — inspect the skill content before installing.