Multi-agent orchestration platform for enterprise AI development. Run 12+ AI agents concurrently with full governance, Gemini-powered code review, and enforced quality gates. Solves coordination, verification, and metrics challenges that stall enterprise AI adoption.
Add this skill:

```bash
npx mdskills install martymcenroe/assemblyzero
```
Run 12+ AI agents concurrently. One identity. Full governance. Measurable ROI.
This isn't theoretical. AssemblyZero has processed 207 issues (159 closed) in 27 days:
Issues closed per day (Central Time):

```text
2026-01-21: 12 ############
2026-02-02: 23 #######################
2026-02-03: 55 #######################################################
2026-02-04: 31 ###############################
```

Average velocity: 5.9 issues/day | Peak: 55 issues in one day | Full Metrics →
```mermaid
graph TD
    subgraph Intent["HUMAN ORCHESTRATOR"]
        O["Human Intent<br/>& Oversight"]
    end
    subgraph LG["LANGGRAPH WORKFLOWS"]
        W["5 State Machines<br/>SQLite Checkpointing"]
    end
    subgraph Agents["CLAUDE AGENTS (12+)"]
        A["Feature | Bug Fix<br/>Docs | Review"]
    end
    subgraph Gemini["GEMINI VERIFICATION"]
        G["LLD Review | Code Review<br/>Security | Quality"]
    end
    subgraph Gov["GOVERNANCE GATES"]
        M["Requirements | Implementation<br/>Reports | Audit Trail"]
    end
    O --> LG
    LG --> Agents
    Agents --> Gemini
    Gemini --> Gov
```
What makes AssemblyZero different:
| Capability | What It Means |
|---|---|
| 12+ Concurrent Agents | Multiple Claude agents work in parallel on features, bugs, docs - all under one user identity |
| Gemini Reviews Claude | Every design doc and code change is reviewed by Gemini 3 Pro before humans see it |
| Enforced Gates | LLD review, implementation review, report generation - gates that can't be skipped |
| 34 Governance Audits | OWASP, GDPR, NIST AI Safety - adversarial audits that find violations |
AI coding assistants like Claude Code and GitHub Copilot are transforming development. But enterprise adoption stalls because:
| Challenge | Reality |
|---|---|
| No coordination | Multiple agents conflict and duplicate work |
| No governance | Security teams can't approve ungoverned AI |
| No verification | AI-generated code goes unreviewed |
| No metrics | Leadership can't prove ROI |
| Permission friction | Constant approval prompts destroy flow state |
Organizations run pilots. Developers love the tools. Then adoption plateaus at 10-20% because the infrastructure layer is missing.
AssemblyZero is that infrastructure layer.
The headline feature: run 12+ AI agents concurrently under single-user identity with full coordination.
| Component | Function |
|---|---|
| Single-User Identity | All agents share API credentials, git identity, permission patterns |
| Worktree Isolation | Each agent gets its own git worktree - no conflicts, clean PRs |
| Credential Rotation | Automatic rotation across API keys when quota exhausted |
| Session Coordination | Agents can see what others are working on via session logs |
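The worktree-isolation idea above can be sketched in a few lines: each agent gets its own branch and working directory via `git worktree add`, so parallel agents never touch the same checkout. This is an illustration of the technique, not AssemblyZero's actual code; the function name and branch naming scheme are assumptions.

```python
import subprocess
import tempfile
from pathlib import Path

def run(cmd, cwd):
    """Run a git command, failing loudly on error."""
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

def create_agent_worktree(repo: Path, agent_id: str) -> Path:
    """Give one agent an isolated worktree on its own branch."""
    branch = f"agent/{agent_id}"                     # hypothetical naming scheme
    worktree = repo.parent / f"{repo.name}-{agent_id}"
    run(["git", "worktree", "add", "-b", branch, str(worktree)], cwd=repo)
    return worktree

# Demo on a throwaway repository.
base = Path(tempfile.mkdtemp())
repo = base / "project"
repo.mkdir()
run(["git", "init"], cwd=repo)
run(["git", "-c", "user.email=demo@example.com", "-c", "user.name=demo",
     "commit", "--allow-empty", "-m", "init"], cwd=repo)

wt = create_agent_worktree(repo, "feature-1")
print(wt)  # each agent now works in its own directory, on its own branch
```

Because every agent commits on its own branch in its own directory, merge conflicts between concurrent agents are impossible by construction, and each branch produces a clean, reviewable PR.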
| Role | Context | Tasks |
|---|---|---|
| Feature Agent | Full codebase | New functionality, refactors |
| Bug Fix Agent | Issue-focused | Specific bug investigation |
| Documentation Agent | Docs + code | README, wiki, API docs |
| Review Agent | PR diff | Code review assistance |
| Audit Agent | Compliance | Security, privacy audits |
Result: One engineer orchestrating 12+ agents can accomplish what previously required a team.
Full Architecture Documentation
The key differentiator: Claude builds, Gemini reviews.
This isn't just "two models" - it's adversarial verification where one AI checks another's work before humans approve.
| Gate | When | What Gemini Checks |
|---|---|---|
| Issue Review | Before work starts | Requirements clarity, scope, risks |
| LLD Review | Before coding | Design completeness, security, testability |
| Code Review | Before PR | Quality, patterns, vulnerabilities |
| Security Audit | Before merge | OWASP Top 10, dependency risks |
| Single Model | Multi-Model (AssemblyZero) |
|---|---|
| Claude reviews Claude's work | Gemini reviews Claude's work |
| Same blind spots | Different model catches different mistakes |
| Trust the output | Verify the output |
| "It looks good to me" | Structured JSON verdicts: APPROVE/BLOCK |
AssemblyZero detects silent model downgrades:
| Provider | Class | Use Case | Cost Model |
|---|---|---|---|
| Claude CLI (`claude -p`) | ClaudeCLIProvider | Drafting, implementation | Free (Max subscription) |
| Anthropic API | AnthropicProvider | Automatic fallback | Per-token |
| Fallback | FallbackProvider | CLI (180s) → API (300s) | Free first, paid if needed |
| Gemini | GeminiProvider | Adversarial review only | Free (API quota) |
Claude is invoked via `claude -p` with `--tools ""` and `--strict-mcp-config`: no tools, no MCP, deterministic, side-effect-free calls. The Anthropic API exists only as a paid fallback for resilience.
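Assembled as a command line, that invocation looks like the sketch below. The flags come from the description above; the helper function name is illustrative, and a real call would pass the list to `subprocess.run`.

```python
def build_claude_command(prompt: str) -> list[str]:
    """Assemble the side-effect-free Claude CLI invocation described above.
    Tools and MCP are disabled so the call is pure text-in, text-out."""
    return [
        "claude", "-p", prompt,
        "--tools", "",            # no tool use
        "--strict-mcp-config",    # no MCP servers beyond explicit config
    ]

cmd = build_claude_command("Summarize this diff")
print(cmd)
# A real invocation: subprocess.run(cmd, capture_output=True, text=True)
```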
All workflows are LangGraph StateGraph instances with typed state and SQLite checkpointing:
| Workflow | Nodes | Purpose |
|---|---|---|
| Issue | 7 | Idea → structured GitHub issue |
| Requirements | 10 | Issue → approved LLD (design) |
| Implementation Spec | 7 | LLD → concrete implementation instructions |
| TDD Implementation | 13 | Spec → code + tests + PR |
| Scout | Variable | External intelligence gathering |
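The value of checkpointing is that a node can persist state after each step and a resumed run picks up exactly where it stopped. The sketch below illustrates that idea with stdlib `sqlite3` only; it is not the LangGraph `SqliteSaver` API, and the table schema and class name are assumptions for illustration.

```python
import json
import sqlite3

class CheckpointStore:
    """Persist workflow state after every node so a long run survives interruption."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints (thread TEXT, node TEXT, state TEXT)"
        )

    def save(self, thread: str, node: str, state: dict) -> None:
        self.db.execute(
            "INSERT INTO checkpoints VALUES (?, ?, ?)",
            (thread, node, json.dumps(state)),
        )
        self.db.commit()

    def latest(self, thread: str) -> dict:
        row = self.db.execute(
            "SELECT state FROM checkpoints WHERE thread = ? ORDER BY rowid DESC LIMIT 1",
            (thread,),
        ).fetchone()
        return json.loads(row[0]) if row else {}

store = CheckpointStore()
store.save("issue-42", "draft_lld", {"step": 2, "lld": "draft text"})
resumed = store.latest("issue-42")  # pick up where the workflow left off
print(resumed["step"])  # prints 2
```

LangGraph's checkpointer does the same thing per graph node with typed state, which is what lets a 13-node TDD workflow survive a crash or interruption mid-run.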
AssemblyZero uses deterministic RAG-like techniques — not vector embeddings — for codebase understanding:
Three mandatory checkpoints that cannot be bypassed:
```text
Idea → Issue → LLD Review → Coding → Implementation Review → PR → Report Generation → Merge
                   ↑                          ↑                         ↑
              Gemini Gate                Gemini Gate              Auto-Generated
```
Before writing ANY code:
| Fix Stage | Cost |
|---|---|
| Design | 1 hour |
| Code | 8 hours |
| Production | 80 hours |
Before creating ANY PR:
Before merge, auto-generate:
- `implementation-report.md` - What changed and why
- `test-report.md` - Full test output, coverage metrics

Permission prompts are the #1 adoption killer.
Every "Allow this command?" prompt breaks flow state. Developers either:
AssemblyZero identifies:
| Metric | Before | After |
|---|---|---|
| Prompts per session | 15-20 | 2-3 |
| Time lost to prompts | 10+ min | < 1 min |
| Developer frustration | High | Low |
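Reducing prompts from 15-20 to 2-3 starts with measuring which commands trigger them. A minimal sketch of that friction analysis is below; the log-line format (`PERMISSION_PROMPT: <command>`) is hypothetical, invented here for illustration.

```python
from collections import Counter

def prompt_friction(log_lines: list[str]) -> Counter:
    """Count which commands trigger 'Allow this command?' prompts,
    so the noisiest patterns can be pre-approved. Log format is hypothetical."""
    prompts = Counter()
    for line in log_lines:
        if line.startswith("PERMISSION_PROMPT:"):
            prompts[line.split(":", 1)[1].strip()] += 1
    return prompts

session = [
    "PERMISSION_PROMPT: git push",
    "PERMISSION_PROMPT: pytest",
    "PERMISSION_PROMPT: git push",
    "INFO: task complete",
]
print(prompt_friction(session).most_common(1))  # [('git push', 2)]
```

Once the highest-frequency safe commands are identified, they can be added to the pre-approved permission patterns, which is where most of the 15-to-3 reduction comes from.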
Audits designed with an adversarial philosophy: they exist to find violations, not confirm compliance.
| Category | Count | Focus |
|---|---|---|
| Security & Privacy | 3 | OWASP, GDPR, License compliance |
| AI Governance | 7 | Bias, Explainability, Safety, Agentic risks |
| Code Quality | 4 | Standards, Accessibility, Capabilities |
| Permission Management | 3 | Friction, Permissiveness, Self-audit |
| Documentation Health | 6 | Reports, LLD alignment, Terminology |
| Extended | 10 | Cost, Structure, References, Wiki |
| Meta | 1 | Audit system governance |
| Audit | Standard | What It Checks |
|---|---|---|
| 0808 | OWASP LLM 2025 | AI-specific vulnerabilities |
| 0809 | OWASP Agentic 2026 | Agent autonomy risks |
| 0810 | ISO/IEC 42001 | AI management system |
| 0815 | Internal | Permission friction patterns |
"How do I prove ROI to leadership?"
| Metric | What It Shows |
|---|---|
| Active users / Total engineers | Adoption rate |
| Sessions per user per week | Engagement depth |
| Features shipped with AI assist | Productivity impact |
| Metric | Target |
|---|---|
| Permission prompts per session | < 3 |
| Time to first productive action | < 30 seconds |
| Session abandonment rate | < 5% |
| Metric | What It Shows |
|---|---|
| Gemini first-pass approval rate | Design quality |
| PR revision count | Code quality |
| Post-merge defects | Overall quality |
| Metric | Calculation |
|---|---|
| Cost per feature | Total API spend / Features shipped |
| Cost per agent-hour | API spend / Active agent hours |
| ROI | (Time saved × Engineer cost) / Platform cost |
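The cost formulas in the table translate directly into code. The sketch below computes them with made-up inputs, purely to show the arithmetic; none of these numbers come from AssemblyZero's production data.

```python
def cost_per_feature(total_api_spend: float, features_shipped: int) -> float:
    """Total API spend / features shipped, per the table above."""
    return total_api_spend / features_shipped

def roi(time_saved_hours: float, engineer_hourly_cost: float,
        platform_cost: float) -> float:
    """(Time saved x engineer cost) / platform cost, per the table above."""
    return (time_saved_hours * engineer_hourly_cost) / platform_cost

# Illustrative inputs only: $1,200 spend, 40 features, 160 hours saved,
# $90/hr engineer cost, $2,000 platform cost.
print(cost_per_feature(1200.0, 40))  # 30.0 dollars per feature
print(roi(160.0, 90.0, 2000.0))      # 7.2x return
```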
AssemblyZero workflows run as LangGraph state machines with SQLite checkpointing:
| Phase | Status | Capability | Impact |
|---|---|---|---|
| 1 | COMPLETE | LangGraph state machines (5 workflows) | Gates structurally enforced |
| 2 | COMPLETE | SQLite checkpointing (SqliteSaver) | Long tasks survive interruptions |
| 3 | Q2 2026 | Supervisor pattern | Autonomous task decomposition |
| 4 | Q2 2026 | LangSmith observability | Full dashboards, traces, cost attribution |
| 5 | Q3 2026 | Dynamic tool graphs | Context-aware tool selection |
```bash
git clone https://github.com/martymcenroe/AssemblyZero.git
cd AssemblyZero
poetry install

mkdir -p YourProject/.claude
cp AssemblyZero/.claude/project.json.example YourProject/.claude/project.json
# Edit project.json with your project details

poetry run python tools/assemblyzero-generate.py --project YourProject
```
The generated configs include:
Full documentation at AssemblyZero Wiki (32 pages):
| Page | Description |
|---|---|
| Technical Architecture | LLM invocation, LangGraph workflows, codebase intelligence |
| Metrics Dashboard | Velocity charts, production numbers |
| Multi-Agent Orchestration | The headline feature - 12+ concurrent agents |
| Requirements Workflow | LLD → Gemini → Approval flow |
| Implementation Workflow | Worktree → Code → Reports → PR |
| Governance Gates | LLD, implementation, report gates |
| How AssemblyZero Learns | Self-improving governance from verdicts |
| LangGraph Evolution | Roadmap to enterprise state machines |
| Gemini Verification | Multi-model review architecture |
| Quick Start | 5-minute setup guide |
| Audience | Start Here |
|---|---|
| Engineering Leaders | Why AssemblyZero? |
| Architects | Technical Architecture |
| Security Teams | Security & Compliance |
| Developers | Quick Start |
AssemblyZero was built by Martin McEnroe, applying 29 years of enterprise technology leadership to the emerging challenge of scaling AI coding assistants across engineering organizations.
| Role | Organization | Relevance |
|---|---|---|
| Director, Data Science & AI | AT&T | Led 45-person team, $10M+ annual savings from production AI |
| VP Product | Afiniti | AI-powered platform at scale |
| AI Strategic Consultant | TX DOT | 76-page enterprise AI strategy |
Having led enterprise AI adoption, I know the blockers:
| Blocker | AssemblyZero Solution |
|---|---|
| "Security won't approve ungoverned AI" | 34 audits, Gemini gates, enforced checkpoints |
| "We can't measure productivity" | KPI framework, friction tracking, cost attribution |
| "Agents conflict with each other" | Worktree isolation, single-user identity model |
| "Developers hate the permission prompts" | Pattern detection, friction analysis, auto-remediation |
| "It's just pilots, not real adoption" | Infrastructure that scales to organization-wide |
This isn't theoretical. It's production infrastructure I use daily to orchestrate 12+ AI agents with full governance.
The code in this repo is the same code that:
If you're scaling AI coding assistants across an engineering organization, this is the infrastructure layer you need.
Workflows are named after Terry Pratchett's Discworld characters — intuitive metaphors that make system behavior memorable:
| Persona | Function |
|---|---|
| Vimes | Regression guard — deep suspicion of everything |
| Hex | Codebase intelligence — AST parsing, pattern matching |
| Ponder | Mechanical validation — auto-fix before review |
| Lu-Tze | Maintenance — constant sweeping prevents disasters |
| DEATH | Documentation reconciliation — INEVITABLE. THOROUGH. |
"A man is not dead while his name is still spoken." GNU Terry Pratchett
PolyForm Noncommercial 1.0.0
Install via CLI

Install AssemblyZero with a single command:

```bash
npx mdskills install martymcenroe/assemblyzero
```

This downloads the skill files into your project, and your AI agent picks them up automatically.
AssemblyZero works with Claude Code, Claude Desktop, Cursor, VS Code Copilot, Windsurf, Continue, Codex, Gemini CLI, Amp, Roo Code, Goose, OpenCode, Trae, Qodo, and Command Code. Skills use the open SKILL.md format, which is compatible with any AI coding agent that reads markdown instructions.