Use when working with error debugging and multi-agent review.
Add this skill
npx mdskills install sickn33/error-debugging-multi-agent-review

Note: this skill provides a conceptual framework for multi-agent reviews but does not ship actionable, step-by-step agent instructions for direct use.

Bundled resource: resources/implementation-playbook.md

A sophisticated AI-powered code review system that provides comprehensive, multi-perspective analysis of software artifacts by coordinating agents with specialized domain expertise.

The Multi-Agent Review Tool coordinates a network of specialized agents to perform holistic code assessments that go beyond single-perspective review. By combining agents with distinct expertise, it produces one consolidated evaluation that captures insights across several critical dimensions (security, architecture, performance, and code quality).
$ARGUMENTS: Target code/project for review
def route_agents(code_context):
    # Select reviewer agents based on what the target looks like.
    # is_web_application and is_performance_critical are routing predicates
    # the skill leaves undefined (see the illustrative stubs below).
    agents = []
    if is_web_application(code_context):
        agents.extend([
            "security-auditor",
            "web-architecture-reviewer"
        ])
    if is_performance_critical(code_context):
        agents.append("performance-analyst")
    return agents
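The routing predicates above, is_web_application and is_performance_critical, are not defined by the skill. A minimal sketch, assuming code_context is a dict with a "files" list and optional flags (an assumption, not part of the skill), could be:

def is_web_application(code_context):
    # Illustrative heuristic: look for common web/framework entry-point files.
    markers = ("package.json", "manage.py", "app.py", "index.html")
    return any(name in code_context.get("files", []) for name in markers)

def is_performance_critical(code_context):
    # Illustrative heuristic: rely on an explicit flag set by the caller.
    return code_context.get("performance_critical", False)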
class ReviewContext:
    def __init__(self, target, metadata):
        self.target = target
        self.metadata = metadata
        self.agent_insights = {}

    def update_insights(self, agent_type, insights):
        self.agent_insights[agent_type] = insights
def execute_review(review_context):
    # Independent agents: safe to run in parallel
    parallel_agents = [
        "code-quality-reviewer",
        "security-auditor"
    ]
    # Dependent agents: run in order, each seeing the insights gathered so far
    sequential_agents = [
        "architecture-reviewer",
        "performance-optimizer"
    ]
    # run_agent stands in for dispatching to the named sub-agent; the parallel
    # group is shown serially here but can be fanned out concurrently
    # (see the hybrid-strategy sketch further below).
    for agent in parallel_agents + sequential_agents:
        review_context.update_insights(agent, run_agent(agent, review_context))
    return review_context
def synthesize_review_insights(agent_results):
    consolidated_report = {
        "critical_issues": [],
        "important_issues": [],
        "improvement_suggestions": []
    }
    # Merge per-agent findings into a single report, bucketed by severity.
    buckets = {"critical": "critical_issues", "important": "important_issues"}
    for findings in agent_results.values():
        for finding in findings:
            key = buckets.get(finding.get("severity"), "improvement_suggestions")
            consolidated_report[key].append(finding)
    return consolidated_report
def resolve_conflicts(agent_insights):
    # ConflictResolutionEngine reconciles contradictory findings between agents;
    # the skill does not define it (an illustrative sketch follows below).
    conflict_resolver = ConflictResolutionEngine()
    return conflict_resolver.process(agent_insights)
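ConflictResolutionEngine is not defined anywhere in the skill. A minimal sketch, assuming each finding is a dict with severity, file, and line keys (hypothetical field names), might keep the highest-severity finding when two agents flag the same location:

class ConflictResolutionEngine:
    # Hypothetical resolver: deduplicate findings that target the same
    # location, preferring the higher severity.
    SEVERITY_RANK = {"critical": 2, "important": 1, "suggestion": 0}

    def process(self, agent_insights):
        resolved = {}
        for findings in agent_insights.values():
            for finding in findings:
                key = (finding.get("file"), finding.get("line"))
                rank = self.SEVERITY_RANK.get(finding.get("severity"), 0)
                current = resolved.get(key)
                if current is None or rank > current[0]:
                    resolved[key] = (rank, finding)
        return [finding for _, finding in resolved.values()]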
def optimize_review_process(review_context):
    # ReviewOptimizer (undefined here) would budget agent time and tokens per review.
    return ReviewOptimizer.allocate_resources(review_context)
def validate_review_quality(review_results):
    # QualityScoreCalculator and QUALITY_THRESHOLD are placeholders (sketched below).
    quality_score = QualityScoreCalculator.compute(review_results)
    return quality_score > QUALITY_THRESHOLD
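Neither QualityScoreCalculator nor QUALITY_THRESHOLD exists in the skill itself; the sketch below is one purely illustrative way to score a consolidated report, by checking that findings carry the fields a downstream consumer needs:

QUALITY_THRESHOLD = 0.75  # illustrative cut-off, not defined by the skill

class QualityScoreCalculator:
    # Hypothetical scorer: fraction of findings that include severity,
    # a description, and a file location.
    REQUIRED_FIELDS = ("severity", "description", "file")

    @staticmethod
    def compute(review_results):
        findings = [f for section in review_results.values() for f in section]
        if not findings:
            return 0.0
        complete = sum(
            1 for f in findings
            if all(field in f for field in QualityScoreCalculator.REQUIRED_FIELDS)
        )
        return complete / len(findings)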
multi_agent_review(
    target="/path/to/project",
    agents=[
        {"type": "security-auditor", "weight": 0.3},
        {"type": "architecture-reviewer", "weight": 0.3},
        {"type": "performance-analyst", "weight": 0.2}
    ]
)
sequential_review_workflow = [
    {"phase": "design-review", "agent": "architect-reviewer"},
    {"phase": "implementation-review", "agent": "code-quality-reviewer"},
    {"phase": "testing-review", "agent": "test-coverage-analyst"},
    {"phase": "deployment-readiness", "agent": "devops-validator"}
]
hybrid_review_strategy = {
    "parallel_agents": ["security", "performance"],
    "sequential_agents": ["architecture", "compliance"]
}
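As a sketch of how such a strategy could be executed (run_agent is a stand-in for invoking the named sub-agent; nothing here is prescribed by the skill), the parallel group can be fanned out on a thread pool before the sequential group runs in order:

from concurrent.futures import ThreadPoolExecutor

def run_hybrid_strategy(strategy, review_context, run_agent):
    # Fan out the independent agents, then run the order-dependent ones serially.
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {
            agent: pool.submit(run_agent, agent, review_context)
            for agent in strategy["parallel_agents"]
        }
        for agent, future in futures.items():
            results[agent] = future.result()
    for agent in strategy["sequential_agents"]:
        results[agent] = run_agent(agent, review_context)
    return results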
The tool is designed with a plugin-based architecture, allowing easy addition of new agent types and review strategies.
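A minimal sketch of what such a plugin registration might look like (register_agent and the registry are illustrative names, not a published API):

AGENT_REGISTRY = {}

def register_agent(agent_type):
    # Illustrative decorator: new agent classes register themselves so routing
    # and execution code can discover them without modification.
    def decorator(cls):
        AGENT_REGISTRY[agent_type] = cls
        return cls
    return decorator

@register_agent("accessibility-reviewer")
class AccessibilityReviewer:
    def review(self, review_context):
        return []  # return a list of findings for the review target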
Target for review: $ARGUMENTS
Install via CLI

Error Debugging Multi Agent Review is a free, open-source AI agent skill. Install it with a single command:

npx mdskills install sickn33/error-debugging-multi-agent-review

This downloads the skill files into your project, and your AI agent picks them up automatically.

Error Debugging Multi Agent Review works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue.dev, Codex, Gemini CLI, Amp, Roo Code, Goose, Opencode, Trae, Qodo, and Command Code. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.