How to build a content pipeline with AI agents
Five agents. One pillar. A weekly cadence that compounds for years. Ten minutes of human review.
TL;DR
Research agent sources early Monday. Content agent drafts Monday morning. Editor agent polishes Tuesday. You review for 10 minutes Wednesday. Publisher ships Thursday/Friday. Analytics agent feeds back Sunday. Weekly output: 1 blog post + 1 newsletter + 3 LinkedIn posts + 3 tweets.
What you'll learn
- Why 5 specialized agents beat 1 generalist model for content
- The weekly cadence that keeps output consistent
- How to enforce voice so everything sounds like you
- A measurement loop so the pipeline gets smarter each week
What you need
- A single content pillar (one sentence)
- 20+ of your past posts (voice training samples)
- A Notion workspace for the pipeline database
- Publish channels: blog, newsletter, LinkedIn, Twitter/X
- An agent runtime (Black Box, or DIY on Claude API)
Step 1: Define one content pillar
A pillar is a topic you own. Not "marketing" — "positioning for SaaS founders who can't afford a marketer." Write it in one sentence. Every piece must map back. Pillar > topics. Topics are what you write about. Pillar is what you're known for.
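The "must map back" rule can be made mechanical as a pre-draft sanity check. A minimal sketch in Python — the keyword-overlap heuristic, the stopword list, and the threshold are all illustrative assumptions, not part of the pipeline spec:

```python
def maps_to_pillar(topic: str, pillar: str, min_overlap: int = 2) -> bool:
    """Crude check: does a proposed topic share enough key terms with the pillar?"""
    stopwords = {"for", "the", "a", "an", "who", "to", "of", "and", "without"}
    pillar_terms = {w.strip(".,").lower() for w in pillar.split()} - stopwords
    topic_terms = {w.strip(".,").lower() for w in topic.split()} - stopwords
    return len(pillar_terms & topic_terms) >= min_overlap

pillar = "Positioning for SaaS founders who can't afford a marketer"
print(maps_to_pillar("Positioning mistakes SaaS founders make", pillar))  # True
print(maps_to_pillar("Top 10 productivity apps", pillar))                 # False
```

Anything that fails the check gets bounced back to the research queue instead of drafted.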
Step 2: Build a research agent
Every Monday, a Research specialist pulls 10 new pieces on your pillar from Hacker News, industry subreddits, Twitter, and competitor blogs. It summarizes each to 3 bullets, writes them into a Notion database, and tags each with an angle: novel, known, or contrarian. The research agent is the top of the funnel. Bad research = bad everything.
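The Notion write boils down to building the request body for Notion's create-page endpoint. A sketch — the property names ("Name", "Summary", "Angle") are assumptions and must match the columns in your own research database:

```python
def research_item_to_notion_page(database_id: str, title: str,
                                 bullets: list[str], angle: str) -> dict:
    """Build the JSON body for Notion's POST /v1/pages endpoint."""
    assert angle in {"novel", "known", "contrarian"}
    summary = "\n".join(f"- {b}" for b in bullets)
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Name": {"title": [{"text": {"content": title}}]},
            "Summary": {"rich_text": [{"text": {"content": summary}}]},
            "Angle": {"select": {"name": angle}},
        },
    }
```

The agent then POSTs this body with its integration token; the angle tag lands as a Notion select property the Content agent can filter on.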
Step 3: Have a Content agent draft a weekly piece
Monday 8am: a Content specialist reads the research database, picks the 3 strongest angles, drafts a 1,200-word post + 1 newsletter issue + 3 LinkedIn posts + 3 tweets. All into a Notion "Draft" column. One pillar, multi-format output. Repurposing is baked in.
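"Picks the 3 strongest angles" needs a ranking rule. A minimal sketch, assuming contrarian outranks novel outranks known, with newer items winning ties — both assumptions, tune to taste:

```python
ANGLE_WEIGHT = {"contrarian": 3, "novel": 2, "known": 1}  # assumed priority order

def pick_top_angles(items: list[dict], n: int = 3) -> list[dict]:
    """Rank research items by angle weight, then by week (newest first)."""
    return sorted(items,
                  key=lambda it: (ANGLE_WEIGHT[it["angle"]], it["week"]),
                  reverse=True)[:n]
```

The winners become the briefs for the blog post; the runner-ups seed the social posts.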
Step 4: Add an Editor agent for voice + fact-check
Tuesday 9am: an Editor agent re-reads each draft. Voice pass: compares against 20 of your past posts. Fact-check pass: flags any claim that isn't in the research database. Writes comments inline. The Editor is what makes it sound like you and not like GPT.
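The fact-check pass reduces to one question per claim: is it covered by something in the research database? In practice the Editor agent would ask an LLM to judge that; this sketch substitutes a token-overlap heuristic just to show the shape of the check (the threshold is an assumption):

```python
def flag_unsupported_claims(draft_claims: list[str], research_bullets: list[str],
                            min_overlap: float = 0.5) -> list[str]:
    """Return claims whose key terms aren't covered by any research bullet."""
    def terms(s: str) -> set[str]:
        return {w.strip(".,%$").lower() for w in s.split() if len(w) > 3}
    flagged = []
    for claim in draft_claims:
        ct = terms(claim)
        if not ct:
            continue
        # Best coverage of this claim's terms by any single research bullet.
        best = max((len(ct & terms(b)) / len(ct) for b in research_bullets),
                   default=0.0)
        if best < min_overlap:
            flagged.append(claim)
    return flagged
```

Flagged claims become inline comments for your Wednesday review rather than silent deletions.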
Step 5: Ship a 10-minute human review slot
Wednesday 10am: you spend 10 minutes reading + editing the drafts. Approve or rewrite the headline. Move to "Scheduled" column. That's the only human step in the pipeline. 10 minutes. Not 30. If you're spending more, the Editor agent needs better voice samples.
Step 6: Auto-publish on Thursday + Friday
Thursday 11am: blog post publishes via your CMS (Ghost, Webflow, or a Cloudflare Pages push). Newsletter goes out at 4pm via Beehiiv/ConvertKit API. Friday 9am: LinkedIn posts via Buffer. 11am + 2pm: tweets via Typefully. The pipeline runs without you past Wednesday. That's the point.
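The publish schedule above is just a lookup table that a scheduler (cron, or your agent runtime) polls. A sketch with the times and channels taken from this step — the helper name and table layout are hypothetical:

```python
# Weekday, local time, channel, publish mechanism — mirrors the Thu/Fri plan above.
SCHEDULE = [
    ("Thu", "11:00", "blog", "CMS publish"),
    ("Thu", "16:00", "newsletter", "Beehiiv/ConvertKit API"),
    ("Fri", "09:00", "linkedin", "Buffer"),
    ("Fri", "11:00", "tweet", "Typefully"),
    ("Fri", "14:00", "tweet", "Typefully"),
]

def due_now(day: str, hhmm: str) -> list[tuple[str, str]]:
    """Return every (channel, mechanism) job scheduled for this weekday + time."""
    return [(ch, via) for d, t, ch, via in SCHEDULE if (d, t) == (day, hhmm)]
```

The runtime calls `due_now` on each tick and hands any matches to the corresponding publisher integration.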
Step 7: Measure + feed back
Every Sunday: an analytics agent pulls performance for the week's content (views, clicks, subscribers, replies). Writes 3 "what worked" notes and 3 "what didn't" notes into a Lessons database. Research agent reads these next Monday before sourcing. The feedback loop is what makes the pipeline smarter week-over-week. Without it, you're just publishing, not learning.
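The Sunday rollup can be sketched as a sort over the week's metrics. The engagement score here (clicks plus weighted replies and new subscribers) is an assumption — weight whatever you actually optimize for:

```python
def weekly_lessons(metrics: list[dict], n: int = 3) -> dict:
    """Rank the week's pieces by an engagement score and emit lesson notes."""
    def score(m: dict) -> int:
        return m["clicks"] + 5 * m["replies"] + 10 * m["new_subs"]
    ranked = sorted(metrics, key=score, reverse=True)
    return {
        "what_worked": [f"{m['title']} (score {score(m)})" for m in ranked[:n]],
        "what_didnt": [f"{m['title']} (score {score(m)})" for m in ranked[-n:]],
    }
```

The two lists get written into the Lessons database, where the research agent reads them before Monday's sourcing run.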
Step 8: Scale volume only after consistency is locked
Hit 8 consecutive weeks of on-time publishing before you add a second pillar. Consistency compounds. Sporadic volume doesn't. 8 weeks is the test. Most operators fail here. The ones who pass own their category in 18 months.
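The 8-week gate is easy to automate: track one boolean per week (published on time or not) and count the current streak. A minimal sketch:

```python
def consecutive_on_time(weeks: list[bool]) -> int:
    """Length of the current streak of on-time weeks (most recent week last)."""
    streak = 0
    for on_time in reversed(weeks):
        if not on_time:
            break
        streak += 1
    return streak

def ready_for_second_pillar(weeks: list[bool]) -> bool:
    return consecutive_on_time(weeks) >= 8
```

One missed week resets the streak to zero, which is the point: the gate measures consistency, not cumulative volume.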
Concrete example: a category-of-one newsletter
A positioning consultant we work with runs this exact 5-agent pipeline. Pillar: "Positioning for SaaS founders without a marketer." Week 1-8: 1 post + 1 newsletter + social. Week 9-52: same cadence, never missed. Result: 9,400 newsletter subs, 2,100 LinkedIn followers, 6 inbound consulting calls/week at $500/hr. Pipeline cost: $500/mo (Black Box). Revenue attributable: ~$180K/yr.
Common pitfalls + how to avoid them
- No pillar. "I write about startups" = invisible. Narrow.
- Skipping the Editor pass. AI writing without voice enforcement = forgettable. Every time.
- Human review turns into human rewrite. If you're rewriting, fix the brief, not the draft.
- Adding pillars before consistency is proven. 8 weeks on-time first. Then consider.
- No measurement. The pipeline can't learn without the Sunday feedback agent.
Key takeaways
- 5 agents > 1 model. Specialization > generality.
- Weekly cadence compounds. Sporadic volume doesn't.
- 10-minute human review is the only human step.
- Sunday feedback loop is what makes the pipeline learn.
- 8 weeks consistent before you scale to a second pillar.
FAQ
How is this different from GPT writing my blog?
A single model writing one blog post = a task. A pipeline = a system. Research, Content, Editor, Publisher, Analytics — five agents with five distinct jobs, coordinated over a week. Output quality scales; one-shot doesn't.
How much content does this produce?
Per week: 1 blog post (~1,200 words), 1 newsletter, 3 LinkedIn posts, 3 tweets. Per year: 52 blogs, 52 newsletters, 156 LI posts, 156 tweets. Enough to own a category.
Won't it all sound the same?
Only if you skip the Editor voice pass and the "contrarian angle" tag. The research agent picks for variety; the editor enforces your voice. Together, outputs feel diverse and consistent.
How long before traffic shows up?
6-12 months for SEO to compound. Months 1-2: social drives early wins. Months 3-6: newsletter subscribers start replying. Month 7+: organic search starts to carry.
What does this cost DIY vs managed?
DIY: ~$120/mo (Claude API + Buffer + Notion + Beehiiv). Managed via Black Box: $500/mo and the 5 agents are built in as 5 of the 18 specialists.
Black Box does this automatically
Research, Content, Editor, Publisher, Analytics — 5 of the 18 specialists. $500/mo runs your weekly pipeline end-to-end.
Web4Guru is the team behind Black Box. We build AI companies for solo operators and small teams. Published April 23, 2026.