The previous modules covered the five building blocks individually. This module is about how they compose — the patterns that expert users converge on, and the workflows that produce consistently good results.
The Expert Loop
Across practitioners, teams, and case studies, the same core loop keeps appearing:
```mermaid
graph TD
    A[Discuss & Plan] --> B[Execute]
    B --> C[Verify]
    C --> D[Compound]
    D --> A
```
- Discuss & Plan: Talk through the approach with Claude. Use Plan Mode for complex tasks. Challenge assumptions before writing code.
- Execute: Switch to auto-accept and let Claude implement. Commit after each logical unit.
- Verify: Run tests, check the build, review diffs. Hooks can automate much of this.
- Compound: If anything went wrong, update CLAUDE.md so it doesn't happen again. Each cycle makes the next one better.
This loop scales from a single developer to a full team. The difference between a beginner and an expert isn't the loop itself — it's how much of it they've automated.
Parallel Sessions
One of the biggest productivity shifts is running multiple Claude sessions simultaneously, each working on a different task.
The approach:
- Open several terminal tabs (or use a multiplexer like tmux)
- Each session works on a separate piece of work
- Use git worktrees to isolate each session's file changes
```bash
# Create a worktree for a feature branch
git worktree add ../my-project-feature-auth feature/auth

# Run Claude in that worktree
cd ../my-project-feature-auth
claude
```

Each worktree is a full checkout of your repo on a different branch. Claude sessions in different worktrees can't conflict with each other — they're editing different copies of the files.
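Worktrees are cheap to create and just as cheap to clean up once a branch is merged:

```bash
# List all worktrees attached to the current repo
git worktree list

# Remove one when its branch is done
git worktree remove ../my-project-feature-auth
```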
Session teleportation: Use `/teleport` (or `/tp`) to move a session from your terminal to the web interface (or vice versa). Useful for handing off a long-running task to your phone while you step away.
incident.io's Fleet Strategy
The team at incident.io runs 4–5 concurrent Claude sessions using git worktrees. They built a simple `w` bash function:

```bash
# Create a new worktree and start Claude
w myproject new-feature claude

# Run a command in an existing worktree
w myproject new-feature git status
```

They reported an 18% speed improvement in a repeated API generation workflow — and the ability to tackle UI improvements that previously wouldn't have been prioritised, because the cost of context-switching was too high for a human developer.
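incident.io haven't published the function itself, but the idea is easy to sketch. A minimal version might look like this (the directory layout and repo paths are assumptions, not their implementation):

```bash
# Hypothetical sketch of a `w` helper: run a command inside a
# per-branch worktree, creating the worktree on first use
w() {
  local repo="$1" branch="$2"; shift 2
  local dir="$HOME/worktrees/$repo/$branch"   # assumed layout
  if [ ! -d "$dir" ]; then
    git -C "$HOME/src/$repo" worktree add -b "$branch" "$dir"
  fi
  (cd "$dir" && "$@")
}
```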
Session Handoff Documents
For complex, multi-session tasks, ask Claude to write a handoff document before you end the session. This is a short file summarising what was done, what's left, key decisions made, and any gotchas the next session should know about.
```
Write a handoff document to handoff.md covering:
- What we've done so far
- What's still outstanding
- Any decisions or trade-offs we made
- Known issues or things to watch out for
```
The next session starts by reading the handoff file instead of trying to reconstruct context from git history. This is especially valuable when context compaction has dropped earlier conversation, or when handing work to a teammate's session.
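Starting the next session is then a one-liner, passing the instruction as the initial prompt:

```bash
# Point a fresh session at the handoff instead of raw git history
claude "Read handoff.md, then continue where the last session left off"
```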
Front-Loading Approach Validation
Your top source of friction won't be bugs or test failures — it'll be Claude taking the wrong initial approach to a task. This is especially costly for visual or UI-heavy work, where course-correcting after implementation is expensive.
The fix: spend longer in the discussion phase than feels necessary. For UI work in particular, describe the desired outcome in detail, ask Claude to explain its planned approach, and push back before any code gets written. The upfront time investment is almost always less than the cost of undoing a wrong direction.
Inner-Loop Automation
The "inner loop" is the sequence you repeat most often. For many developers, it's: make a change → test → commit → push → PR. Expert users wrap these in custom commands:
| Command | What It Does |
|---|---|
| `/commit-push-pr` | Format → commit → push → open PR |
| `/test` | Run test suite, summarise failures |
| `/review` | Review current changes against conventions |
| `/deploy` | Build → deploy → smoke test → report URL |
Boris Cherny (creator of Claude Code) uses `/commit-push-pr` "dozens of times every day." The key insight: every repeated sequence of more than two steps is a candidate for a command.
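Defining one is just a markdown prompt file. A sketch (the prompt wording here is illustrative):

```bash
# Project-scoped slash commands are markdown files in .claude/commands/;
# this creates a /test command whose file contents become the prompt
mkdir -p .claude/commands
cat > .claude/commands/test.md <<'EOF'
Run the test suite. Summarise each failure in one line, then propose
a fix for the most likely root cause before editing any code.
EOF
```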
The Scout-Triage-Fix Pattern
For code review, bug hunting, and quality improvement, a powerful pattern emerges:
```mermaid
graph LR
    A[Launch parallel scouts] --> B[Collect findings]
    B --> C[Triage: verify & score]
    C --> D[Dispatch focused fixers]
    D --> E[Verify fixes]
```
- Scout: Launch 3–4 read-only agents in parallel, each with a different focus (security, performance, error handling, accessibility)
- Triage: The parent session collects all findings, re-reads the relevant code, and scores each finding for confidence
- Fix: For high-confidence findings, dispatch focused fixer agents with precise briefs
- Verify: Run tests to confirm fixes don't break anything
This pattern works because scouts are cheap (read-only, can use Haiku or Sonnet) and parallel (no conflicts). The expensive work (fixing) only happens for verified, high-confidence findings.
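A scout is just a subagent with a read-only toolset and a cheap model. A sketch of a definition file (the description and prompt are illustrative):

```bash
# A read-only "security scout" subagent, defined in .claude/agents/
mkdir -p .claude/agents
cat > .claude/agents/security-scout.md <<'EOF'
---
name: security-scout
description: Read-only security review of recent changes
tools: Read, Grep, Glob
model: haiku
---
Scan the changed files for injection risks, unsafe deserialisation,
and missing input validation. Report each finding with a file:line
reference and a confidence score. Never edit files.
EOF
```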
Verification as a Force Multiplier
Every expert source emphasises the same thing: the quality of Claude's output is proportional to the quality of your verification.
Verification takes several forms:
- Automated hooks: PostToolUse hooks that lint/format/test after every edit (cheapest, fastest)
- Test suites: Claude running your test suite and iterating on failures
- Manual review: You reviewing diffs before committing (highest quality, most expensive in time)
- CI/CD: GitHub Actions or similar pipelines that validate PRs automatically
The best workflows combine all four: hooks catch trivial issues instantly, tests catch logic errors during execution, you review the final diff, and CI validates everything before merge.
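The first layer is a one-off config. A sketch of a PostToolUse hook that formats after every edit (swap the prettier command for your own formatter, and merge this into any existing settings rather than overwriting them):

```bash
# .claude/settings.json: run a formatter after every Edit or Write
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
EOF
```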
Always start from a clean git state and commit checkpoints regularly. If Claude goes off track, you can revert to the last good commit. Teams that do this consistently report much less time spent on recovery.
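Concretely, the habit is cheap:

```bash
# Checkpoint before handing Claude a risky task
git add -A && git commit -m "checkpoint: before refactor"

# If the session goes off track
git reset --hard HEAD          # drop uncommitted changes
git revert <bad-commit-sha>    # or undo an already-committed wrong turn
```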
Model Routing
Different tasks need different models. Routing intelligently is a significant cost optimiser:
| Model | Best For | Relative Cost |
|---|---|---|
| Haiku | Mechanical tasks: formatting, linting, simple tests, file operations | 1× (cheapest) |
| Sonnet | Most development work: features, refactoring, debugging | ~5× |
| Opus | Deep reasoning: security audits, architectural decisions, complex debugging | ~15× |
Switch models mid-session with `/model`. For agents, set the model in the agent definition to match the task's complexity.
A typical routing strategy: use Sonnet as the default, drop to Haiku for mechanical agent tasks (scouts, format checks), and escalate to Opus only for tasks requiring genuine deep reasoning.
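The model can also be set when launching a session (assuming your CLI version supports these aliases):

```bash
# Start a session on a specific model; switchable later with /model
claude --model haiku    # cheap mechanical work
claude --model opus     # deep-reasoning session
```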
When to Use Claude Code vs Other Tools
Claude Code isn't always the right tool. Understanding when to switch is itself a workflow skill:
| Task | Best Tool | Why |
|---|---|---|
| Quick inline edits | IDE extension (Copilot/Cursor) | Stays in editor flow, no context switch |
| Understanding unfamiliar code | Either | Both work; Claude Code has deeper codebase access |
| Multi-file features | Claude Code | Autonomous multi-step execution across files |
| Refactoring across codebase | Claude Code | Can search, plan, and execute at scale |
| CI/CD automation | Claude Code SDK | Programmatic access, headless execution |
| Rapid single-file prototyping | IDE extension | Tighter feedback loop |
Many developers use both daily — IDE extension for moment-to-moment coding, Claude Code for larger tasks.
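For the CI/CD row, headless mode is the building block. A sketch of what a PR-check step might run, using the CLI's non-interactive print mode (the prompt is illustrative):

```bash
# Headless invocation for a CI pipeline: print mode with
# machine-readable output
claude -p "Review this branch's diff against our conventions and \
print a pass/fail verdict with reasons" --output-format json
```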
Common Pitfalls
- Not automating your inner loop. If you type the same sequence of commands more than twice a day, it should be a slash command.
- Running expensive agents when cheap ones suffice. A Haiku scout checking for lint errors is roughly 15× cheaper than an Opus one (per the cost table above), with identical results for mechanical tasks.
- Skipping verification. Claude's output looks plausible but can be subtly wrong. Tests and linting hooks catch errors that visual review misses.
- Never committing checkpoints. Without frequent commits, a wrong turn means starting over. With them, you just `git revert`.
Key Takeaway
Expert users don't write better prompts — they design better systems. The environment you build (CLAUDE.md, commands, hooks, permissions, MCPs) determines output quality more than any individual prompt. Invest in the system: it compounds.
