Subagent definitions — model, tools, persona, memory. How system prompts create specialists, and design patterns for effective agents.
Agents are markdown files that define subagent configurations – the model to use, which tools they have access to, and their system prompt (persona). They're stored in .claude/agents/ directories.
When the main Claude session needs to delegate work, it spawns these as focused, constrained workers via the Task() tool.
Every agent is defined by three layers:

- **tools** = what the agent can do (a hard boundary, enforced by the system)
- **persona** = how the agent approaches the work (the system prompt)
- **model** = how well the agent does it

The constraint is what makes agents effective. A security scanner with every tool available has no focus. A security scanner limited to Read, Glob, Grep with a sharp persona produces much better results.
| Parameter | What It Controls | Example |
|---|---|---|
name | Identity – how it's referenced in Task(name) | code-fix-scout |
description | What the agent does | "Scans for real bugs" |
tools | Hard permission boundary | Read, Glob, Grep |
model | Which Claude model runs it | opus, sonnet, haiku |
maxTurns | Maximum number of turns (agent steps) before it must stop | 15
memory | Persistent learning across sessions | project |
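Putting the table's parameters together, a full frontmatter block might look like this (a sketch; the agent name and values are illustrative, not taken from a real file):

```markdown
---
name: code-fix-scout
description: Scans for real bugs
model: sonnet
tools: Read, Glob, Grep
maxTurns: 15
memory: project
---
```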
The markdown body below the YAML acts as the system prompt – the persona. This fundamentally changes what the model pays attention to, what tools it reaches for, and how it structures its analysis.
Give the same codebase to a "security auditor" persona and a "UX copywriter" persona, and they'll produce meaningfully different output. Same model, same code, completely different results.
```markdown
---
name: security-scanner
description: Scans code for security vulnerabilities
model: opus
tools: Read, Glob, Grep
---
You are a senior security engineer reviewing code for vulnerabilities.
Focus on: XSS, injection, exposed secrets, insecure protocols.
Rate each finding 0-100 confidence. Only report findings ≥80.
```

**The power of constraints**: A well-defined persona is as much about what the agent doesn't do as what it does. A security scanner that also suggests refactoring produces noisier results. Narrower scope → higher quality output.
Focused on one domain. Ignores everything else.

```markdown
You are an accessibility specialist. Your only job is to find
WCAG 2.1 AA violations. Ignore code quality, performance, and
architecture. Report only accessibility issues.
```

Produces structured output in a specific format.
```markdown
You are a code reviewer. For each finding, output exactly:
- **File**: path
- **Line**: number
- **Severity**: low/medium/high/critical
- **Confidence**: 0-100
Do not include explanations. Do not suggest fixes.
```

Has strong preferences and justifies them.
```markdown
You are a senior React developer who strongly prefers:
- Server Components over client components
- Composition over inheritance
When reviewing, explain WHY your recommendation is better.
```

```yaml
tools: Read, Glob, Grep
```
Can examine code but can't change anything. Safe to run in parallel – no conflict risk.
```yaml
tools: Read, Glob, Grep, Edit, Write
maxTurns: 10
```
Has write access but limited turns. Gets a precise brief, makes one change, reports back.
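As a sketch, a write-capable fixer following this pattern might be defined like so (the name, description, and prompt are illustrative assumptions, not from a real file):

```markdown
---
name: focused-fixer
description: Applies a single, precisely-briefed code change
model: sonnet
tools: Read, Glob, Grep, Edit, Write
maxTurns: 10
---
You receive one precise brief. Make exactly that change and
report back. Do not refactor anything outside the brief.
```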
```yaml
model: haiku
tools: Read, Glob, Grep, Bash
maxTurns: 5
```
Uses the cheapest, fastest model. Good for mechanical tasks like running tests or format checks – not deep reasoning.
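A minimal test-runner built on this cheap-and-fast pattern could look like the following (name and prompt are illustrative):

```markdown
---
name: test-runner
description: Runs the test suite and reports failures
model: haiku
tools: Read, Glob, Grep, Bash
maxTurns: 5
---
Run the project's test command. Report pass/fail counts and the
first failing test's output. Do not attempt fixes.
```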
The memory: project field means the agent builds from previous runs:
```
.claude/agent-memory/code-fix-scout/
└── memory.md   ← "Last review found missing error handling on Apollo queries"
```
This memory gets injected into the agent's system prompt on future runs. Over time, agents become more precise and focused on your specific codebase patterns.
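In the agent file itself, enabling this is a single frontmatter field (a sketch; the surrounding fields are illustrative):

```markdown
---
name: code-fix-scout
description: Scans for real bugs
tools: Read, Glob, Grep
memory: project
---
```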
Agents are dispatched by commands or the main session using the Task() tool:
```
Launch 4 scouts in parallel:
- Scout 1 focuses on logic errors
- Scout 2 focuses on async issues
- Scout 3 focuses on security
- Scout 4 focuses on error handling
```

Each scout gets its own context window, runs independently, and returns a report to the parent.
Beyond standard subagents, Claude Code has an experimental feature called Agent Teams – full independent Claude instances that run simultaneously and communicate directly with each other:
```
~/.claude/tasks/{team-name}/
```

```mermaid
graph TD
    A1[Agent 1] <-->|message| A2[Agent 2]
    A2 <-->|message| A3[Agent 3]
    A1 <-->|message| A3
    TL[Shared Task List] --- A1
    TL --- A2
    TL --- A3
```
This is different from standard subagents (which only report to a parent). Agent Teams can discuss, debate, and divide work amongst themselves – useful for complex work where inter-agent collaboration genuinely adds value.
| | Subagents | Agent Teams |
|---|---|---|
| Communication | Report to parent only | Message each other directly |
| Cost | Lower (focused tasks) | Higher (each is a full instance) |
| Coordination | Parent orchestrates | Self-coordinating |
| Maturity | Stable | Experimental |
Start with subagents. For most workflows, standard subagents are sufficient, more reliable, and cheaper. Agent Teams are for cases where inter-agent discussion genuinely adds value – like multiple reviewers debating findings or architects arguing about approaches.
Agents are where costs can escalate quickly:
maxTurns is your cost cap. An agent with maxTurns: 50 can make 50 API calls. Set it to the minimum needed – a focused fixer rarely needs more than 10.

**Cost rule of thumb**: Start with Haiku agents and low maxTurns. Upgrade to Sonnet or Opus only when the quality isn't sufficient. Most routine tasks (linting, formatting checks, simple reviews) work fine with Haiku at 5 turns.
Read, Glob, Grep – that's it. More tools = more ways to go off-track.

Agents are the "hands" of the system. Their power comes from the combination of tools (capability), persona (focus), and model (quality). Commands orchestrate; agents execute. Memory makes them learn over time. Start simple and constrained – you can always add capability later.