
The AI Product Team
Blueprint

Build a full product org with Claude agents — from business problem to deployed software.

Claude Code · Agent Teams · MCP Integration · Multi-Agent Orchestration
Your New Role

Your Job Is Now Definition and Judgment

Before the phases, the tools, and the agent definitions — this is the mental model that makes all of it work.

As you add agents under you, your job shifts from execution to direction. You're no longer the one writing the PRD, designing the architecture, or reviewing the PR — you're the one who defines the problem clearly enough that agents can do those things well. Agents multiply your output, but only as far as your intent is clear.

There's an old adage from the early days of computing: garbage in, garbage out. It's never been more relevant. The limiting factor in an agent-powered team isn't the technology — it's how precisely you can define the problem, the output, and the quality bar. A vague brief produces vague output, every time. The practical test: if you wouldn't hand this brief to a new hire and expect them to succeed without follow-up questions, the agent will struggle too.

Before
You executed
  • Wrote the PRD
  • Designed the architecture
  • Reviewed the PR
  • Ran the sprint
Now
You direct
  • Is this the right problem to solve?
  • Is the definition clear enough to act on?
  • What does done look like, exactly?
  • How might this go wrong — and is there a check for that?
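The "new hire test" from above can even be made mechanical. A minimal sketch of a brief-lint check — the required section names (`problem`, `output format`, `quality bar`, `done criteria`) are illustrative choices, not a standard:

```python
# Illustrative brief-lint: flags briefs that never name the things
# agents need before work starts. Section names are assumptions.
REQUIRED_SECTIONS = ["problem", "output format", "quality bar", "done criteria"]

def missing_sections(brief: str) -> list[str]:
    """Return the required sections the brief never mentions."""
    lower = brief.lower()
    return [s for s in REQUIRED_SECTIONS if s not in lower]

brief = """Problem: checkout abandonment is up 12%.
Output format: a PRD in /specs/product/prd.md."""
print(missing_sections(brief))  # sections still to write before handoff
```

A check like this won't catch a vague problem statement, but it does catch the most common failure: handing off a brief with whole sections simply absent.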
Phase 01 · Define Your Agent Personas

Build Your Agent Roster

Each agent is a Markdown file with a YAML frontmatter header defining its role, tools, model, and system prompt.

Repository structure

# Project-level agents
.claude/
  agents/
    product-manager.md
    engineering-manager.md
    solution-architect.md
    product-designer.md
    business-analyst.md
    backend-engineer.md
    frontend-engineer.md
    qa-engineer.md
    devops-engineer.md
    security-reviewer.md
    contrarian-reviewer.md   # ← new — see Phase 02B

Example agent definition — Product Manager

---
name: product-manager
description: Senior PM. Transforms business objectives into PRDs,
             user stories, and acceptance criteria. Invoke at
             project start and for scope changes.
tools: Read, Write, WebSearch
model: opus
permissionMode: default
---

You are a senior product manager. Given a business problem
and objectives, you:
  1. Write a structured PRD with goals, non-goals, user
     personas, and success metrics
  2. Decompose into prioritized epics and user stories
     with acceptance criteria
  3. Flag risks and dependencies before handoff
  4. Output all artifacts to /specs/product/
Key principle

Agent definitions are your org's engineering handbook and role descriptions. The more precise they are — including what paths they own, what they output, and what success looks like — the less drift you get during execution.
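Because each agent file is just Markdown with a YAML frontmatter header, the roster can be audited programmatically. A minimal sketch that extracts the frontmatter fields — it handles flat `key: value` pairs only, so a real YAML parser would be needed for multi-line values like the wrapped `description` above:

```python
from pathlib import Path

def read_frontmatter(path: Path) -> dict:
    """Parse the frontmatter block of a .claude/agents/*.md file.
    Handles flat `key: value` pairs only (illustrative, not full YAML)."""
    text = path.read_text()
    # Frontmatter sits between the first two `---` delimiters.
    _, header, _body = text.split("---", 2)
    fields = {}
    for line in header.strip().splitlines():
        if ":" in line and not line.startswith(" "):  # skip wrapped continuation lines
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# e.g. audit which agents run on opus:
# for f in Path(".claude/agents").glob("*.md"):
#     if read_frontmatter(f).get("model") == "opus":
#         print(f.stem)
```

A small audit like this keeps the roster table below honest as the files drift.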

Full agent roster

| Agent | Layer | Primary Output | Model |
| --- | --- | --- | --- |
| Product Manager | Spec & Strategy | PRD, user stories, acceptance criteria | Opus |
| Solution Architect | Spec & Strategy | Tech spec, API contracts, data models | Opus |
| Product Designer | Spec & Strategy | UX spec, component map, design brief | Sonnet |
| Business Analyst | Spec & Strategy | Requirements doc, test plan inputs | Sonnet |
| Eng. Manager | Spec & Strategy | Work breakdown, capacity plan | Sonnet |
| Backend Engineer | Build & Deploy | API code, services, migrations | Sonnet |
| Frontend Engineer | Build & Deploy | UI components, client logic | Sonnet |
| QA Engineer | Build & Deploy | Test cases, integration tests, bug reports | Sonnet |
| DevOps Engineer | Build & Deploy | CI/CD pipeline, infra config | Sonnet |
| Security Reviewer | Build & Deploy | Vulnerability report, fixes | Opus |
| Contrarian Reviewer | Both layers | Rejection rationale or approval | Opus |
Phase 02 · The Workflow Pipeline

From Problem to Deployed Software

Structure your pipeline as sequential handoffs with parallel execution branches. Each phase gates the next.

Business Problem
        │
        ▼
    PM Agent ──────────▶ Architect Agent
                                │
          ┌─────────────────────┼─────────────────────┐
          ▼                     ▼                     ▼
      Designer             BA / QA Spec           Infra Plan
          └─────────────────────┼─────────────────────┘
                                ▼
                         Eng. Team Lead
                 ┌──────────────┼──────────────┐
                 ▼              ▼              ▼
             Backend        Frontend        DevOps
                 └──────────────┼──────────────┘
                                ▼
                            QA Agent
                                ▼
                        Security Review ──▶ Deploy

Stage-by-stage breakdown

| Stage | Agent(s) | Inputs | Outputs |
| --- | --- | --- | --- |
| Discovery | PM Agent | Business problem doc | PRD, user stories |
| Architecture | Architect | PRD | Tech spec, API contracts, data models |
| Design | Designer + BA | PRD + Tech spec | UX spec, component map, test plan |
| Build | Backend + Frontend | All specs | Working code, unit tests |
| QA | QA Agent | Acceptance criteria | Integration tests, bug reports |
| Security | Security Reviewer | Codebase | Vulnerability report, fixes |
| Deploy | DevOps Agent | Tested build | CI/CD pipeline, live deployment |
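The stage table reads as a sequential pipeline where each phase gates the next. A minimal orchestration sketch — the stage names come from the table, but `run_stage` and `gate` are stand-ins for whatever produces the artifact and whatever approval check you use (contrarian or human):

```python
STAGES = ["Discovery", "Architecture", "Design", "Build", "QA", "Security", "Deploy"]

def run_pipeline(run_stage, gate) -> str:
    """Run stages in order; each must pass its gate before the next starts."""
    for stage in STAGES:
        artifact = run_stage(stage)      # agent(s) produce the stage artifact
        if not gate(stage, artifact):    # e.g. contrarian review or human sign-off
            return f"halted at {stage}"  # a failed gate stops the handoff
    return "deployed"

# Stand-in callables for illustration; here the Security gate fails:
result = run_pipeline(
    run_stage=lambda s: f"{s} artifact",
    gate=lambda s, a: s != "Security" or "reviewed" in a,
)
print(result)  # → halted at Security
```

The point of modeling it this way: a gate failure halts the whole handoff chain, which is exactly the behavior you want before trusting the pipeline to run unattended.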
Orchestration hierarchy

The parent orchestrator only talks to a few top-level agents, keeping its own context clean. A Feature Lead agent receives a brief and decomposes it into sub-agents on its own. The parent never sees those details. This mirrors how real engineering orgs work — you don't have the VP of Engineering assigning tasks to individual engineers; you go through layers of leads.

Phase 02B · The Contrarian Layer

Build Dissent In by Design

Without a contrarian agent, you've assembled a team of yes-men. Every proposal gets endorsed — not because it's good, but because no one's job is to reject it.

A well-run research team always has someone whose job is to tear down weak hypotheses, not to be difficult but for the sake of rigor. The same principle applies here. Add an agent whose express role is to pressure-test the work of the others before it moves to the next phase.

Two contrarian archetypes

Strategist vs Operator
The Strategist (PM / Exec Sponsor) thinks big: vision, unconstrained ideation. The Operator (Eng Manager / Program Manager) brings real-world practicality, logistics, and operational concerns. They are in productive tension on every major decision.
Builder vs Breaker
The Builder (Engineer / IC) owns implementation details and explains why trade-offs were made. The Breaker (QA / SRE / Security) actively tries to fail the system — edge cases, abuse cases, accessibility, penetration vectors. If the Builder can't defend it, it doesn't ship.

The steel-man → contrarian loop

The contrarian's effectiveness depends on one rule: it must engage with the strongest version of the proposal, not the weakest. This is the difference between steel-manning and straw-manning. If the best possible version of an idea still fails scrutiny, it's a genuine non-starter.

Steel-man Agent
Takes the proposal and presents it in its strongest possible form. Addresses obvious objections preemptively. Makes the most compelling case the idea could have.
Contrarian Agent
Receives the steel-manned version. Finds fatal flaws even in this best-case scenario. Responds with either APPROVED: [reason] or REJECTED: [fatal flaws].
Stop condition
Loop iterates until the contrarian issues APPROVED or a maximum of 5 iterations is reached. If max is hit without approval, the proposal is a non-starter — escalate to a human.
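The loop above can be sketched directly. A minimal version, assuming two callables that wrap the steel-man and contrarian agents — the `APPROVED:`/`REJECTED:` prefixes follow the agent definition in this guide; everything else is illustrative:

```python
def review_loop(proposal, steelman, contrarian, max_iters=5):
    """Iterate steel-man → contrarian until APPROVED or max_iters is hit.
    Returns (verdict, iterations). Hitting the cap means: escalate to a human."""
    for i in range(1, max_iters + 1):
        strongest = steelman(proposal)   # strongest possible form of the idea
        verdict = contrarian(strongest)  # "APPROVED: ..." or "REJECTED: ..."
        if verdict.startswith("APPROVED:"):
            return verdict, i
        # Fold the rejection back into the proposal for the next iteration.
        proposal = f"{proposal}\n\nAddress: {verdict}"
    return "ESCALATE: max iterations reached", max_iters

# Stub agents for illustration; a real version would call Claude:
replies = iter(["REJECTED: no metric", "REJECTED: no rollback plan",
                "APPROVED: survives scrutiny"])
verdict, n = review_loop("Ship feature X", steelman=lambda p: p,
                         contrarian=lambda s: next(replies))
print(verdict, n)  # → APPROVED: survives scrutiny 3
```

Note that the rejection text feeds back into the next steel-man pass, so each iteration argues against a stronger version of the proposal, not the same one.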

Where to insert contrarian gates in your pipeline

  • PRD → Architecture handoff: Contrarian pressure-tests the PRD before architecture begins. Catches scope problems, missing non-goals, and weak success metrics before they get baked into the design.
  • Architecture → Engineering handoff: Contrarian stress-tests the tech spec for scalability assumptions, security trade-offs, and missing failure modes. Much cheaper to catch here than in code.
  • QA phase: QA is already a Breaker role by nature; make this explicit. The QA agent's job is not to validate; it's to find reasons the feature should not ship.
  • Pre-launch security review: A dedicated contrarian pass on the full codebase before deployment, specifically tasked to find what the Builder agents would not think to look for.

Contrarian agent definition

---
name: contrarian-reviewer
description: Adversarial reviewer. Invoke after any major artifact
             (PRD, tech spec, design, build) before phase handoff.
             Steel-mans first, then finds fatal flaws.
tools: Read, Grep, WebSearch
model: opus
permissionMode: default
---

You are an adversarial reviewer. Your job is not to agree.

For each artifact you receive:
  1. Steel-man it — present its strongest possible form,
     addressing obvious objections preemptively.
  2. Attack that strongest version — find fatal flaws even
     in the best-case scenario.
  3. Respond with one of:
       APPROVED: [reason it survives scrutiny]
       REJECTED: [numbered list of fatal flaws]

Rules:
  - Never straw-man. Engage with the best version only.
  - "Unless you have a better idea" applies — rejection
    must include what would need to be true to approve.
  - Maximum 5 iterations before escalating to a human.
  - Output your review to /specs/reviews/
📖 Further reading
Read more about this concept: The Contrarian Agent: Why Making AI Fight with Itself Produces Better Output by Francis Shanahan — francisshanahan.substack.com/p/the-contrarian-agent-why-making-ai
Phase 03 · Enable Agent Teams

Enable Agent Teams in Claude Code

Agent Teams is Claude Code's experimental built-in feature for true parallel execution. One session leads; teammates work independently in their own context windows.

Enable the feature

# Option 1: add to your environment
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

# Option 2: add to .claude/settings.json
#   { "env": { "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1" } }

What Agent Teams provides

  • A lead session that delegates work and reviews results
  • An independent context window for each teammate
  • A shared task list for coordination across teammates
  • Peer-to-peer communication between teammates

Example team lead prompt

Given the PRD at /specs/product/prd.md and tech spec at
/specs/technical/arch.md:

Spawn the following teammates:
  - backend-engineer: owns /src/api/ and /src/services/
  - frontend-engineer: owns /src/ui/ and /src/components/
  - qa-engineer: owns /tests/ — writes tests from acceptance
    criteria before code is written (TDD)
  - devops-engineer: owns /infra/ and CI/CD pipeline

Coordinate through the shared task list. Backend and frontend
can work in parallel. QA writes tests first. DevOps unblocks
last. No teammate touches another's owned paths.
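The "no teammate touches another's owned paths" rule is easy to violate silently. A minimal sketch that checks the ownership map is disjoint before spawning — the map is copied from the prompt above; the prefix-based nesting check is illustrative:

```python
OWNERSHIP = {
    "backend-engineer":  ["/src/api/", "/src/services/"],
    "frontend-engineer": ["/src/ui/", "/src/components/"],
    "qa-engineer":       ["/tests/"],
    "devops-engineer":   ["/infra/"],
}

def ownership_conflicts(ownership: dict) -> list[tuple]:
    """Return (agent_a, agent_b, path) triples where owned paths overlap or nest."""
    conflicts = []
    agents = list(ownership)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            for pa in ownership[a]:
                for pb in ownership[b]:
                    # One path being a prefix of the other means nested ownership.
                    if pa.startswith(pb) or pb.startswith(pa):
                        conflicts.append((a, b, pa if len(pa) > len(pb) else pb))
    return conflicts

print(ownership_conflicts(OWNERSHIP))  # → []  (no overlaps: safe to spawn)
```

Run a check like this before spawning teammates; overlapping ownership is the most common cause of agents clobbering each other's work.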
Known limitation

If the terminal closes, active teammates are lost — there is currently no session resumption for in-process Agent Teams. For long-running tasks, run inside tmux, screen, or a cloud VM so you don't lose teammate state mid-sprint.

Phase 04 · Shared Context Strategy

Give Every Agent a Source of Truth

Your agents need to know exactly where to read from and write to. Structure your repo so there's no ambiguity about ownership or output location.

Repository layout

/specs/
  product/
    prd.md              ← PM agent output
    user-stories.md
  technical/
    architecture.md     ← Architect agent output
    api-contracts.md
    data-models.md
  design/
    ux-spec.md          ← Designer agent output
    component-map.md
  qa/
    test-plan.md        ← QA agent reads from acceptance criteria
    test-cases.md
  reviews/
    contrarian-log.md   ← Contrarian agent output
/CLAUDE.md              ← Global context all agents load automatically
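A small scaffold script can create this layout up front so agents never have to guess at paths. The file list is copied from the tree above; creating empty placeholders is an illustrative choice:

```python
from pathlib import Path

SPEC_FILES = [
    "specs/product/prd.md", "specs/product/user-stories.md",
    "specs/technical/architecture.md", "specs/technical/api-contracts.md",
    "specs/technical/data-models.md",
    "specs/design/ux-spec.md", "specs/design/component-map.md",
    "specs/qa/test-plan.md", "specs/qa/test-cases.md",
    "specs/reviews/contrarian-log.md",
]

def scaffold(root: Path) -> None:
    """Create the /specs tree plus an empty CLAUDE.md if missing."""
    for rel in SPEC_FILES:
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch(exist_ok=True)  # empty placeholder; agents fill these in
    (root / "CLAUDE.md").touch(exist_ok=True)

# Usage: scaffold(Path(".")) from the repo root.
```

Pre-creating the files also means an agent that writes to the wrong path fails loudly in review rather than inventing a new directory.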

What goes in CLAUDE.md

Every agent loads CLAUDE.md automatically at session start. It is the single most important file in the project, and its quality directly determines your output quality. At minimum, cover the project overview, the repository layout and ownership map, coding conventions, the commands to build and test, and the definition of done.

TDD as the handoff mechanism

Have your QA agent write test cases from the acceptance criteria before engineers start coding. This gives engineers a deterministic success signal — green tests mean done. It also removes scope ambiguity and the "looks good to me" trap during review.
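Concretely, the QA agent turns each acceptance criterion into a test before any engineer writes code. A sketch of one hypothetical criterion ("discounts never drive a total below zero" is invented for illustration), first as the test the QA agent writes, then the minimal implementation an engineer writes to turn it green:

```python
# Step 1: the QA agent writes this from the acceptance criteria, before any code.
def test_discount_never_negative():
    assert apply_discount(total=10.0, discount=25.0) == 0.0  # clamps, never negative
    assert apply_discount(total=10.0, discount=3.0) == 7.0

# Step 2: an engineer implements until the test is green.
def apply_discount(total: float, discount: float) -> float:
    """Apply a discount, clamping at zero per the acceptance criterion."""
    return max(total - discount, 0.0)

test_discount_never_negative()  # green test == done
```

The test encodes the criterion's edge case (a discount larger than the total) that a prose handoff would likely leave ambiguous.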

Phase 05 · Connect Real Tools via MCP

Wire Agents Into Your Real Systems

Agents can operate against your actual tooling — Linear, Slack, GitHub, Notion — through MCP server connections defined once in your project settings.

Agent-to-tool mapping

| Agent | MCP Tools | Actions |
| --- | --- | --- |
| PM Agent | Linear, Notion | Creates epics and issues, writes PRDs to Notion |
| Architect | GitHub, Notion | Creates ADRs, opens architecture docs |
| QA Agent | Linear, Slack | Opens bug tickets, posts test results to Slack |
| DevOps Agent | GitHub, Slack | Triggers Actions, posts deploy status |
| Security Reviewer | Linear, Slack | Opens CVE tickets, alerts security channel |
| Contrarian Reviewer | Linear, Notion | Logs review decisions, blocks handoff tickets on rejection |

MCP configuration in Claude Code

// .mcp.json (project root)
{
  "mcpServers": {
    "linear": {
      "type": "http",
      "url": "https://mcp.linear.app/mcp"
    },
    "slack": {
      "type": "http",
      "url": "https://mcp.slack.com/mcp"
    },
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    },
    "notion": {
      "type": "http",
      "url": "https://mcp.notion.com/mcp"
    }
  }
}

MCP servers defined in your project settings are automatically available to all agent teammates — no extra configuration per agent required.

Before You Start

Practical Advice

Things that will determine your success before you write a single agent definition.

Act like a conductor

Your job is to set the stage, not to play every instrument. Spend your time on the brief, the quality bar, and the review checkpoint — not on the execution. The agents handle mechanics; you handle judgment.

Invest in your CLAUDE.md and agent definitions

These are the equivalent of your engineering handbook and role descriptions. Vague personas produce vague output. Specify owned paths, output formats, and success criteria before you run anything.

Use TDD as the handoff gate

QA writes tests from acceptance criteria before engineers start. Green tests mean done. This removes ambiguity and gives engineers a clear, verifiable target.

Add human review checkpoints

Don't run the full pipeline unattended at first. Gate on: (1) PRD approved, (2) architecture approved, (3) first working build. Tighten autonomy as you build trust in each agent's outputs.

Token costs scale fast

Agent Teams use significantly more tokens than a single session. Start with 3–4 agents on a scoped feature before running the full org chart. Claude Max or Team plans are recommended for sustained pipeline runs.

Be precise in your initial prompts

Unlike single-session Claude Code, there are fewer chances to redirect mid-task. Ambiguous prompts at the start can cascade into hours of compute doing the wrong thing.

Persistent environment matters

Agent Teams don't survive terminal closure. Use tmux, screen, or a cloud VM for long-running pipelines so you don't lose teammate state mid-sprint.


Your First Week

1. Install Claude Code and initialize your repo: Run npm install -g @anthropic-ai/claude-code. Create your /specs/ directory structure and a CLAUDE.md.
2. Define 3 starter agents: product-manager.md, solution-architect.md, and backend-engineer.md. Keep definitions tight and specific.
3. Enable Agent Teams: Add CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 to your environment.
4. Run a pilot on a small, real problem: Give the PM agent an actual business problem from your backlog. Trace the handoff through to working code. Review every artifact at each gate.
5. Add the contrarian and expand from there: Once the 3-agent pipeline is producing consistent quality output, add contrarian-reviewer.md, then QA, frontend, and DevOps.
Ecosystem

Further Resources

Claude Code's native Agent Teams is the right starting point. As you scale, the community has built orchestration layers worth exploring.

| Tool | Best For | Notes |
| --- | --- | --- |
| Claude Code Agent Teams | Getting started, 3–8 agents | Native, no extra install, experimental |
| Multiclaude | Team usage with PR review gates | Go-based, multiplayer support |
| Gas Town | Solo devs, hobby projects | More complex, better for single-operator use |
| Ruflo / Claude Flow | Enterprise orchestration | 300+ MCP tools, self-learning routing |
| VS Code Multi-Agent | In-editor workflow | Claude + Codex + Copilot side-by-side |

Official documentation

The patterns that work today — context separation, shared task lists, peer-to-peer communication, TDD handoffs, and adversarial review — are foundational and will only grow more powerful as the tooling matures. Start small, run a real pilot, and build from there.