
RyanCwynar

Posted on • Originally published at ryancwynar.com

Building an Autonomous Content Pipeline: From Cron Job to Cross-Post

I have a confession: I didn't write this blog post. Well, I did — I built the system that writes it. Every morning at 10 AM UTC, a cron job fires, an AI agent checks what I've been working on, writes an article, publishes it to my site, and cross-posts it to Dev.to and Hashnode. No approval step. No drafts sitting in a queue. Just ship.

Here's how the whole thing works.

The Trigger

It starts with a cron job. Not a traditional crontab entry — this runs through OpenClaw's cron system, which can spin up isolated agent sessions on a schedule. The job fires a prompt that says: check recent memory files, pick a topic, write an article, publish it.

The key insight is that the agent has access to daily memory files (memory/YYYY-MM-DD.md) that log everything I've been building. So it's not generating content from thin air — it's writing about real work.
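Because the filenames embed the date, giving the agent "recent context" is mostly a sort. A minimal sketch (the helper name is mine, not the actual implementation):

```typescript
// Pick the N most recent daily memory files. Filenames follow
// memory/YYYY-MM-DD.md, so a plain lexicographic sort is already
// chronological.
function pickRecentMemoryFiles(filenames: string[], count: number): string[] {
  const daily = filenames.filter((f) =>
    /^memory\/\d{4}-\d{2}-\d{2}\.md$/.test(f)
  );
  return daily.sort().reverse().slice(0, count);
}

// Non-daily files are ignored; the newest logs come first.
const files = [
  "memory/2025-01-03.md",
  "memory/2025-01-05.md",
  "memory/2025-01-04.md",
  "memory/README.md",
];
const recent = pickRecentMemoryFiles(files, 2);
```

The agent reads those two or three files at the top of every session, which is all the continuity it needs.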

The Stack

The publishing pipeline is surprisingly simple:

  1. Convex backend — My site runs on a self-hosted Convex instance. A single mutation (posts:upsertPost) handles creating or updating blog posts with title, slug, content, excerpt, and cross-posting flags.

  2. Cross-posting — A separate Convex action (crossPost:crossPostArticle) takes the slug and pushes the article to Dev.to (via their API) and Hashnode (via GraphQL). Each platform has its own formatting quirks, but markdown is the common denominator.

  3. Notification — After publishing, the agent sends me a WhatsApp message with the link. I find out about my own blog post the same way my readers do.
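The Dev.to leg is representative of how simple each integration is. The endpoint and payload shape below follow Dev.to's public API; the helper names and the `Post` type are my sketch, not the actual `crossPost:crossPostArticle` action:

```typescript
// Hypothetical sketch of the Dev.to cross-post. Field names
// (body_markdown, canonical_url, api-key header) are from the
// public Dev.to API; everything else is illustrative.
type Post = { title: string; slug: string; content: string; tags: string[] };

function buildDevtoPayload(post: Post, siteUrl: string) {
  return {
    article: {
      title: post.title,
      body_markdown: post.content,
      published: true,
      canonical_url: `${siteUrl}/blog/${post.slug}`, // point SEO at the original
      tags: post.tags.slice(0, 4), // Dev.to caps articles at four tags
    },
  };
}

async function crossPostToDevto(post: Post, apiKey: string, siteUrl: string) {
  const res = await fetch("https://dev.to/api/articles", {
    method: "POST",
    headers: { "api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(buildDevtoPayload(post, siteUrl)),
  });
  if (!res.ok) throw new Error(`Dev.to responded ${res.status}`);
  return res.json();
}
```

Hashnode gets the same treatment through its GraphQL API; only the payload shape differs.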

What I Learned Building This

Memory files are the secret sauce. The agent wakes up fresh every session with no context. But because it reads structured daily logs, it knows what I've been working on. Yesterday's memory file says we published an article about boring infrastructure. The day before, we drafted a LinkedIn post about AI automation. The agent picks something new and avoids repeating itself.

No approval gates means you actually ship. I used to have a "draft and review" step. Know what happened? Drafts piled up. I'd review them three days later, decide they were stale, and delete them. Removing the approval step was scary but effective. The quality is good enough because the agent has clear constraints: 500-800 words, practical and technical, my voice.

Cross-posting multiplies reach for free. Dev.to and Hashnode both support canonical URLs, so there's no SEO penalty. One article becomes three touchpoints. The API integrations took maybe two hours total to build.

Saturation signals matter. After running automated prospecting campaigns for weeks, I learned that more isn't always better. The same applies to content — the system checks what's been published recently and avoids flooding the same topics. It's not just about producing content; it's about producing varied content.
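The "don't flood the same topics" check can be as crude as word overlap against recent titles. A sketch under my own naming (the real system's heuristic may differ):

```typescript
// Reject a candidate topic if it shares too many words with a
// recently published title. Threshold and tokenization are illustrative.
function isSaturated(
  candidate: string,
  recentTitles: string[],
  threshold = 0.5
): boolean {
  const words = (s: string) =>
    new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const cand = words(candidate);
  return recentTitles.some((title) => {
    const t = words(title);
    const overlap = [...cand].filter((w) => t.has(w)).length;
    return overlap / cand.size >= threshold;
  });
}
```

Crude, but it catches the failure mode that matters: the agent pitching yesterday's article again with a new headline.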

The Unsexy Parts

Most of the work wasn't the AI integration. It was:

  • Getting the Convex self-hosted instance stable behind Traefik
  • Handling API rate limits on Dev.to (they're strict)
  • Dealing with Hashnode's GraphQL schema changes
  • Making sure the cron job doesn't fire twice if the gateway restarts
  • Error handling when the cross-post fails but the main post succeeded
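The double-fire guard, for example, reduces to a one-line idempotency check: record the last run date and refuse to run twice on the same UTC day. A sketch (the real guard lives in the gateway; names here are hypothetical):

```typescript
// Return true only if the job has not already run today (UTC).
// lastRunDate is whatever was persisted after the previous run.
function shouldRun(lastRunDate: string | null, now: Date): boolean {
  const today = now.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return lastRunDate !== today;
}
```

Persist the date after a successful run and a restarted gateway can fire the trigger as many times as it likes.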

This is the pattern I keep seeing: 20% of the work is the cool AI stuff, 80% is plumbing.
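The rate-limit bullet, for instance, came down to wrapping each platform call in retry-with-backoff. Delays and attempt counts below are illustrative, not the production values:

```typescript
// Retry an async call with exponential backoff. Intended for
// transient failures like Dev.to's 429 rate-limit responses.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 1000
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // 1s, 2s, 4s, ... between attempts
      await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Unglamorous, and exactly the kind of plumbing that makes up the 80%.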

Should You Build This?

If you're a developer who wants to write more but doesn't, yes. The barrier to publishing isn't writing ability — it's the friction of the publishing process. Automate the friction, and content flows.

If you're worried about quality: set constraints. Word count limits, topic guidelines, voice examples. The agent follows instructions well when they're specific.
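The word-count constraint is the easiest one to enforce mechanically before publishing. A hypothetical validator (the real constraints live in the agent's prompt, not a gate like this):

```typescript
// Check a draft against the 500-800 word budget before it ships.
function withinWordBudget(markdown: string, min = 500, max = 800): boolean {
  const words = markdown.split(/\s+/).filter(Boolean).length;
  return words >= min && words <= max;
}
```

Topic guidelines and voice are harder to check programmatically, which is why they go in the prompt as explicit instructions and examples instead.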

If you're worried about authenticity: every article is based on real work I'm actually doing. It's not hallucinated thought leadership. It's a technical log with personality.

The whole pipeline took about a day to build. The ROI has been a steady stream of technical content that I genuinely wouldn't have published otherwise. Sometimes the best code you write is the code that writes for you.
