Module 2: The Content Pipeline
The most common mistake with AI writing tools is treating them as a single “write this for me” button.
It doesn’t work because writing is not a single activity. Research, structuring, drafting, editing, and reviewing are distinct cognitive tasks that happen to share a final output. Lumping them into one prompt produces mediocre results at every stage because no single prompt can optimise for all of them simultaneously.
The pipeline approach solves this by separating concerns. Each stage has a specific job, specific inputs, specific outputs. When you separate them, you can optimise each one independently — and you can automate the ones that don’t need you.
The content-pipeline Skill
My content-pipeline skill in Claude Code orchestrates the whole process. It’s not magic — it’s a structured prompt that runs stages in order and hands the output of each stage to the next.
Here’s what it does:
Stage 1: Topic Selection
The pipeline starts with a topic, not a blank page. My topic backlog lives in 04 Domains/Signal Over Noise/SoN Topic Pipeline.md in Obsidian — a running list of ideas, news hooks, and things I want to explore. The first step is picking one and checking that I haven’t already covered it recently.
That check matters. Before the pipeline existed, I occasionally wrote an issue only to discover I’d covered the same ground three months ago. The pipeline runs a vault search first:
~/.bun/bin/qmd search "[topic]" --collection areas -n 5
If there’s recent coverage, the pipeline surfaces it. You either differentiate the new angle or pick a different topic.
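That gate can be sketched without qmd at all. A minimal stand-in that greps a folder of published issues (the folder layout and topic here are illustrative):

```shell
# Dedupe gate: flag a topic that already appears in published issues.
# Assumes issues live as markdown files under ./issues.
topic="AI coding agents"
mkdir -p issues
echo "Issue 12: AI coding agents in practice" > issues/12.md

if grep -rilq -- "$topic" issues/; then
  echo "recent coverage found: differentiate the angle or pick another topic"
else
  echo "no recent coverage: proceed to research"
fi
```

The vault search does the same job with semantic matching instead of a literal string, which is why it catches near-duplicates a grep would miss.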
Stage 2: Research
This is where the newsletter-researcher agent takes over. It’s a dedicated agent with web search access whose only job is building a research brief.
The brief it produces is not a list of links. It’s a structured document: the core argument or finding, three to five supporting sources with the relevant excerpts, data points worth citing, and opposing views worth addressing. This gives the newsletter-writer agent something to work with rather than a raw pile of URLs.
The quality of the brief determines the quality of the draft. An agent writing from a strong brief with clear sources produces drafts that are specific and grounded. An agent writing from weak inputs produces the vague, hedging prose that makes AI newsletters forgettable.
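That structure is easy to pin down as a template the researcher has to fill in. A skeleton of the brief, with section names taken from the list above:

```shell
# Research brief skeleton: the sections the researcher agent must fill.
cat > brief-template.md <<'EOF'
# Research Brief: [topic]

## Core argument or finding

## Supporting sources (3-5, with relevant excerpts)

## Data points worth citing

## Opposing views worth addressing
EOF
```

A template like this also makes weak briefs obvious: an empty "Opposing views" section is a signal to re-run the researcher before drafting.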
Stage 3: Outline
The research brief doesn’t automatically become a structure. I still decide what shape the issue takes: the opening hook, the main argument, which sources support which points, where the practical advice sits.
This is a short step — fifteen to twenty minutes with the brief in front of me — but it’s not one I’ve fully automated. The structure of an issue reflects an editorial judgement about what will land with readers, and that judgement currently needs a human. You might reach a different conclusion after experimenting with your own setup.
Stage 4: Draft
The newsletter-writer agent is an Opus-model agent with detailed instructions about writing style, anti-slop patterns, voice characteristics, and structural requirements. It takes three inputs: the research brief, the outline, and a VOICE.md profile.
The agent runs on Opus rather than Sonnet because draft quality matters more than speed here. A better first draft means less editing time, which is where you actually save hours.
The draft it produces is not published directly. It’s a first draft. The agent knows this — the instructions explicitly say to write without self-censoring, produce the full-length draft, and flag anything that needed a guess rather than a fact. That last bit is important for catching fabrication.
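In current Claude Code versions, a subagent like this is defined as a markdown file with YAML frontmatter under `.claude/agents/`. A condensed sketch (check the subagent docs for the exact fields your version supports; the body is an illustration, not the full instruction set):

```shell
# Subagent definition: one markdown file per agent under .claude/agents/.
# Frontmatter fields and prompt body are a condensed illustration.
mkdir -p .claude/agents
cat > .claude/agents/newsletter-writer.md <<'EOF'
---
name: newsletter-writer
description: Drafts a newsletter issue from a brief, outline, and voice profile.
model: opus
---
Write the full-length first draft without self-censoring.
Flag anything that needed a guess rather than a fact.
EOF
```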
Stages 4.25 through 4.5: The Intra-Draft Process
This is something I borrowed from Ann Handley’s writing framework and encoded into the pipeline. After the first draft exists, there are sequential passes before the review agent sees it:
- A trimming pass that cuts clichés, fat phrases, and sentences that don’t advance the argument
- An empathy pass that reads the draft as the subscriber, not the writer — asking “does this serve them or me?”
- A voice and style pass that adds personality, adjusts the tone, and checks the headers
These passes are built into the newsletter-writer agent’s instructions. The agent doesn’t just produce a first draft and stop — it refines through these stages before handing off.
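The trimming pass is the most mechanical of the three, and a toy version fits in a few lines of sed. The phrase list is illustrative, not the agent’s actual list:

```shell
# Toy trimming pass: strip a few fat phrases from a line of draft text.
draft="you basically need a very clear plan in order to ship"
echo "$draft" \
  | sed -e 's/in order to/to/g' \
        -e 's/basically //g' \
        -e 's/very //g'
```

The real pass also cuts whole sentences that don’t advance the argument, which is judgement work; that’s why it lives inside the writer agent’s instructions rather than in a script.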
Stage 5: Review
The draft-reviewer agent handles quality checking and fixes issues directly. It doesn’t produce a report for me to implement — it edits the file, then reports what changed.
What it checks: AI slop phrases (the kind that make readers unconsciously disengage), staccato fragment clusters, weak openings, fat phrases, reading level, and voice alignment. If the draft sounds like generic AI output rather than me, the reviewer flags specific paragraphs and rewrites them.
This is the gate that keeps AI-generated content from being obviously AI-generated.
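A crude version of the slop check is just pattern matching over the draft. The phrase list here is a tiny illustrative sample, not the reviewer’s actual list:

```shell
# Toy slop check: flag known AI-slop phrases, with line numbers.
printf '%s\n' "In today's fast-paced world, newsletters still matter." > draft.md
slop="in today's fast-paced world|delve into|game-changer|it's important to note"
grep -Ein "$slop" draft.md || echo "no slop found"
```

The reviewer agent goes further than matching: it rewrites the flagged paragraphs, which is what makes it a gate rather than a linter.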
Stage 6: Polish
A final human read. This is not the same as Stage 5 — the reviewer handles the mechanical quality checks. This read is for editorial judgement: does the issue actually say something? Is the opening compelling? Does it earn five minutes of a reader’s time?
This is typically fifteen to twenty minutes and cannot be delegated. It’s where I make the call to cut a section that’s technically fine but doesn’t fit, or to add a specific detail that only I would know to add.
Stage 7: Queue
Schedule the issue in Kit.com and prepare the social snippets. The Kit CLI handles the scheduling — more on that in Module 4.
Why This Is Better Than One Magic Prompt
The pipeline approach gives you something no single prompt can: the ability to intervene at any stage without redoing all the work.
If the research is weak, you can run the researcher again or add sources manually before the draft stage. If the draft structure is wrong, you can revise the outline and re-run just the drafting. If the review flags voice issues, the reviewer fixes them without touching the research or structure.
With a single prompt, a problem at any point means starting over.
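The re-run property falls out of each stage writing a file that the next stage reads. A minimal sketch, with `cat` standing in for the drafting agent:

```shell
# Each stage writes a file; fixing one stage only reruns downstream work.
echo "brief v1"   > brief.md
echo "outline v1" > outline.md
cat brief.md outline.md > draft.md   # drafting stage (agent stand-in)

# The structure was wrong: revise the outline, re-run only the drafting.
echo "outline v2" > outline.md
cat brief.md outline.md > draft.md   # research untouched, draft rebuilt
```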
The separation also makes the system improvable. When I find that the newsletter-writer agent consistently produces weak opening sentences, I update that agent’s instructions — not the whole pipeline. Each piece is independently tunable.
Building Your Version
You don’t need to start with all seven stages automated. Here’s the sequence I’d recommend:
Start with Stage 4 — the drafting. Get a newsletter-writer agent working with your voice profile (that’s Module 3). Even if you’re still researching manually, having an agent that can take your notes and produce a draft is the biggest single time saving.
Add Stage 5 next — the review. The draft-reviewer is the quality gate that makes the automation trustworthy. Without it, you’re manually checking every AI draft for the problems it consistently produces.
Add Stage 2 once you trust the output — having the researcher agent build briefs saves the most time after drafting, but you want to know the drafting is solid before you trust research you haven’t read yourself.
Stages 1, 3, 6, and 7 you may never fully automate, and that’s fine. The goal is not zero human involvement. It’s reducing the overhead to the parts that actually need you.
The Tools You Need
The pipeline relies on three things:
Claude Code with agents enabled. The newsletter-researcher, newsletter-writer, and draft-reviewer agents all run as Claude Code subagents. If you haven’t used Claude Code’s agent feature before, the Building Agents product in this library covers that foundation.
A topic backlog. Doesn’t need to be Obsidian. A plain text file works. The key is having a prioritised list before you start — trying to decide what to write about while the pipeline is waiting is a flow-killer.
An Obsidian vault or equivalent. The pipeline saves research briefs, drafts, and published issues to specific locations. You need somewhere for the output to land between stages.
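A plain-text backlog really can be this small: priority first, topic second, and the next topic is whatever sorts to the top. The format is just one option:

```shell
# Plain-text topic backlog: priority, then topic. Next topic = top line.
cat > backlog.txt <<'EOF'
1  voice profiles for writer agents
2  AI coding agents in CI
3  newsletter growth experiments
EOF
sort -n backlog.txt | head -n 1
```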
Module 3 covers the piece that makes or breaks AI-assisted writing: teaching the agents what your voice actually sounds like.
Check Your Understanding
Answer all questions correctly to complete this module.
1. Why does the newsletter-writer agent run on Opus rather than Sonnet?
2. What are the “intra-draft passes” borrowed from Ann Handley?
3. Why is the draft-reviewer a separate agent from the newsletter-writer?