Module 2: Delegating to Agents
The original PDA framework included a module on delegation that was mostly about working with virtual assistants and deciding which tasks to hand off to humans versus automated systems. That advice was solid. But it was written for a world where AI assistance meant a chat window you typed into, not an agent that could open files, run commands, and work through multi-step tasks while you did something else.
Delegation has a different meaning now.
The Agent Spectrum
Not all agents are the same, and treating them as interchangeable leads to either underusing them (asking Claude to “help” with things you could fully hand off) or overusing them (delegating things that genuinely need human judgement).
Here’s a rough spectrum, from least to most autonomous:
Skills are the simplest form. A skill is a markdown file in your ~/.claude/skills/ directory that gives Claude instructions for a specific task. When you say “run the draft-reviewer skill on this document,” Claude reads the skill file and follows it. You’re still in control — you invoke it explicitly, you see the output, you decide what to do next. The skill is just a reusable set of instructions so you don’t have to re-explain your review process every time.
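To make that concrete, here is a sketch of what a skill file might contain. The filename, headings, and checklist items below are assumptions for illustration; any markdown instructions Claude can follow will work, and the exact layout of your skills directory may differ.

```markdown
<!-- ~/.claude/skills/draft-reviewer/SKILL.md (illustrative sketch) -->
# Draft Reviewer

When asked to review a draft:

1. Check the prose against the voice guidelines in VOICE.md.
2. Flag filler phrases and generic AI-sounding constructions.
3. Estimate the reading level and note if it drifts above target.
4. List any claims that lack support, quoting the relevant sentence.

Output a short report with one section per check, most serious issues first.
```

The point is that the instructions live in one place: you refine the checklist once, and every invocation benefits.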
Agents with tools are more capable. You can give Claude access to MCP servers — calendar, email, file systems, web search — and it can use those tools to gather information and act. An agent checking your email and flagging action items isn’t just reading what you paste in; it’s actually fetching the messages, processing them, and returning structured results. You still review and approve anything consequential, but the agent is doing real work.
Subagents go further. Claude Code can spawn subagents — separate instances with their own context and instructions — to work on parallel tasks. A content pipeline might have one subagent researching, another drafting, another reviewing, all running in sequence. You set it in motion and review the final output. The intermediate steps happen without you watching each one.
Autonomous agents are the far end. A cron job that runs your morning-brief skill every day at 7am, compiles your briefing, and writes it to a daily note — that’s fully autonomous. You set it up once and it runs indefinitely. No prompting required.
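The simplest version of that setup is a single crontab entry. This is a sketch under assumptions: the working directory, output redirection, and the exact way you invoke the skill will depend on your own setup.

```shell
# Illustrative crontab entry: run the morning-brief skill at 7am daily.
# The paths and the exact invocation are assumptions, not a fixed recipe.
0 7 * * * cd ~/notes && claude -p "run the morning-brief skill" >> ~/notes/cron.log 2>&1
```

Logging the output somewhere you can inspect matters more here than anywhere else on the spectrum: when no human triggers the run, the log is your only window into whether it worked.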
Where you deploy on this spectrum depends on the stakes and the reversibility of the task.
What Delegation Actually Requires
The original PDA 1.0 framed delegation around a delegation matrix: which tasks to hand off versus keep, based on importance and whether you're the only one who can do them. That framing still holds, but there's a new dimension: whether the task is reversible if the agent gets it wrong.
Reading and summarising? Fully reversible — worst case is a bad summary and you re-read the source. Sending an email? Irreversible once it leaves your outbox. Editing code? Reversible if you’re using git, less so if you’re not.
I use a simple test before delegating anything non-trivial:
Can I review the output before it has any effect? If yes, the stakes are lower. If the agent drafts an email that I have to approve before sending, that’s very different from an agent that sends emails on my behalf. The first is delegation with review. The second requires more trust in the agent’s judgement and more careful testing before you give it that authority.
Is the task well-defined enough that the agent can succeed? Agents fail at vague tasks the same way humans do — they produce something, but it’s not what you wanted. The better you can specify the inputs, outputs, and constraints, the more reliably the agent performs. This is where your context files pay off again: an agent working inside your project already understands the conventions, which means your task descriptions can be shorter and still precise.
Have I seen the agent do this correctly before? The first time you delegate anything consequential, watch it closely. The second time, watch a bit less. By the tenth time, you’ve established whether it’s reliable for that task. Trust is earned through observation, not assumed.
Skills Worth Building
The most useful skills I’ve built are the ones that compress a multi-step review or creation process into a single invocation. A few examples from what I actually use:
draft-reviewer — takes a piece of writing, checks it against voice guidelines (from VOICE.md), flags AI slop patterns, checks reading level, identifies weak reasoning. What used to be a manual checklist across four separate concerns is now one command. The skill applies the same standards consistently every time.
morning-brief — gathers calendar events, pending tasks, emails requiring action, and recent progress from daily notes, then produces a formatted briefing. I run this each morning. The underlying skill knows which calendars to check, which task lists to query, how to format the output for my Obsidian daily note.
content-pipeline — takes a raw idea or source material and moves it through research, drafting, and review stages, with subagents handling each stage in sequence. I don’t touch it until there’s a reviewable draft sitting in my inbox folder.
mail-triage — categorises incoming email by type (action needed, FYI, invoice, noise), extracts any invoices, flags things that need a response, and writes a summary to the daily note. What used to take twenty minutes of inbox management takes about ninety seconds of review.
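The categorisation step in a triage like this can be sketched as plain rules. Everything below is illustrative: the function name, the four buckets, and the keyword matching are assumptions for demonstration, not the actual skill's logic (which relies on the model's judgement rather than keywords).

```python
# Illustrative sketch of rule-based email triage. The bucket names and
# keyword rules are assumptions, not the real skill's implementation.

def triage(subject: str, body: str) -> str:
    """Assign an email to one of four buckets: invoice, action, noise, fyi."""
    text = f"{subject} {body}".lower()
    if "invoice" in text or "payment due" in text:
        return "invoice"
    if any(kw in text for kw in ("please", "can you", "deadline", "reply")):
        return "action"
    if any(kw in text for kw in ("unsubscribe", "newsletter", "digest")):
        return "noise"
    return "fyi"  # default: informational, no action required

inbox = [
    ("Invoice #1042", "Payment due by Friday."),
    ("Quick question", "Can you review the draft today?"),
    ("Weekly digest", "Click unsubscribe to stop these emails."),
    ("Office closed Monday", "No action needed."),
]
for subject, body in inbox:
    print(f"{triage(subject, body):8} {subject}")
```

A real agent replaces the keyword rules with the model's own reading of each message, but the shape is the same: every email lands in exactly one bucket, and the summary written to the daily note is just the buckets laid out for review.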
These skills didn’t spring into existence fully formed. Each one started as a manual process I was doing repeatedly, got codified into written instructions, then got refined over several runs until the output was consistently useful.
The Mental Model Shift
The hardest part of delegation — with AI or with human assistants — is letting go of the feeling that you need to be doing the work yourself to be sure it’s done right.
That feeling is worth examining. Sometimes it’s correct: there are tasks where your specific knowledge or judgement is genuinely irreplaceable, and delegating them produces worse outcomes. But often it’s just habit, or an implicit belief that value comes from effort rather than results.
Agents that actually work change the calculus. When the draft-reviewer catches problems your eye would have missed, when the morning-brief surfaces the right information before you’ve had time to look for it, when the mail-triage flags the urgent thing buried in a thread of noise — the evidence accumulates that the delegation is producing better outcomes, not worse ones.
The goal isn’t to automate everything. It’s to keep your attention on the decisions and creative work that genuinely require it, and let agents handle the rest.
The next module is about the automation layer: how to take delegated work and turn it into systems that run without you having to invoke them.
Check Your Understanding
Answer all questions correctly to complete this module.
1. On the agent spectrum described in this module, which type runs without any invocation from you?
2. What is the key test suggested before delegating any non-trivial task?
3. According to the module, where do the best skills come from?