Nex Tools

Posted on • Originally published at nextools.hashnode.dev

Building a Content Factory with Claude Code + Remotion

Originally published on Hashnode. Cross-posted for the DEV.to community.



TL;DR

I shipped 47 Instagram posts, 50 blog articles, and 11 free tools in 30 days as a solo operator. Not because I'm prolific. Because the pipeline produces content while I sleep.

This post walks through the architecture: a queue-first content pipeline built on Claude Code (as the orchestrator), Node.js (the glue), Remotion (video), and Puppeteer (carousels). Zero dependencies on SaaS schedulers like Later or Buffer.

If you're a solo founder or a 2-person team trying to keep a brand alive on Instagram + blog + SEO without burning out, this stack gets you there.


The problem: manual content math doesn't close

My brand requires:

  • 2 IG posts per day (reels + carousels)
  • 1 blog post per day (SEO targets)
  • 1 free calculator tool per week (lead magnet)

With a team? Easy. Solo? The math is brutal:

  • Even 30 min per post = 60 min/day just publishing
  • Plus generation (2-4 hours per piece if done manually)
  • Plus scheduling logic (which pillar, which day, which time)
  • Plus catching errors (Error 4 on IG API = silent duplicate publishes)

SaaS tools like Later and Buffer solve scheduling, not generation. That's the wrong half of the problem.

The architecture: queue-first, generator-agnostic

The contract is simple: nothing publishes until it has a ready slot in the queue. The queue is the source of truth. The generators can over-produce freely because the Gantt builder enforces rotation at schedule time.

```
[Content Generators]  →  Backlog (queue.json status=backlog)
         ↓
[Weekly Gantt Builder]  →  ready + date + time (Sunday 23:00, rotation-aware)
         ↓
[Publisher]  →  Preflight check → publish → log (14:00 + 19:00 + 00:00 daily)
         ↓
[Reconciler]  →  Orphan detection + auto-backlog (08:00 daily)
```
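
For orientation, here is what a slot can look like on disk. Field names are inferred from the code in this post; the values are illustrative, not real data:

```json
{
  "slots": [
    {
      "slot": 47,
      "brand": "nex",
      "status": "published",
      "pillar": "angel-number",
      "files": {
        "primary": "קריאייטיב/nex/444/slide-1.png",
        "slides_pattern": "קריאייטיב/nex/444/slide-*.png"
      },
      "caption": "...",
      "date": "2025-01-12",
      "time": "14:00",
      "media_id": "...",
      "created_at": "2025-01-05T10:00:00Z",
      "published_at": "2025-01-12T14:00:04Z"
    }
  ]
}
```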

Five moving parts. Each with a single responsibility. Let's look at each.

Part 1: The queue manager

A 400-LOC Node.js library + CLI. Single source of truth.

```js
// queue-manager.js (shared library)
const fs = require('fs');

const loadQueue = (brand) => {
  const path = `קריאייטיב/${brand}-ig-queue.json`;
  return JSON.parse(fs.readFileSync(path, 'utf8'));
};

const appendSlot = (brand, slot) => {
  const queue = loadQueue(brand);
  // Next slot number; an empty queue starts at 1
  // (Math.max over an empty list would be -Infinity).
  const nextSlot = queue.slots.length
    ? Math.max(...queue.slots.map(s => s.slot)) + 1
    : 1;
  queue.slots.push({
    slot: nextSlot,
    brand,
    status: 'backlog',
    created_at: new Date().toISOString(),
    ...slot
  });
  saveQueue(brand, queue); // write-side counterpart of loadQueue
  return nextSlot;
};

const resolveFiles = (slot) => {
  // Handles both legacy string paths and new {primary, slides_pattern} objects
  if (typeof slot.files === 'string') return [slot.files];
  const { primary, slides_pattern } = slot.files;
  if (slides_pattern) return expandGlob(slides_pattern); // glob expander, elided
  return [primary];
};
```

The CLI wraps this for humans:

```bash
$ queue-manager-cli append --brand nex --type carousel \
    --files "קריאייטיב/nex/001/slide-*.png" --caption "$(cat caption.md)"
# → Appended as slot 48

$ queue-manager-cli stats --brand nex
# backlog: 30, ready: 14, scheduled: 8, published: 47
```

Part 2: The Gantt builder (autonomous curator)

The hardest part of a content pipeline is not generation. It's deciding when each piece goes out so pillars stay balanced and no topic repeats within 5 days.

My Gantt builder:

  1. Reads all backlog slots
  2. Tags each by pillar (6 pillars for this brand: angel numbers, mirror hours, ritual, contrarian, HD truth, tool CTA)
  3. Assigns them to the next 7 days across 2 daily slots (14:00 + 19:00)
  4. Enforces 5-day dedup window (no same pillar twice within 5 days)
  5. Updates slot status to ready with date and time

```js
const buildWeeklyGantt = (brand) => {
  const queue = loadQueue(brand);
  const backlog = queue.slots.filter(s => s.status === 'backlog');
  const rotation = getPillarRotation(brand); // e.g., Monday = mirror + angel

  const nextWeek = getNextWeekDates();
  const schedule = [];

  for (const date of nextWeek) {
    const dayPillars = rotation[dayOfWeek(date)];
    const timeSlots = ['14:00', '19:00'];

    for (let i = 0; i < timeSlots.length; i++) {
      const pillar = dayPillars[i];
      const candidate = pickCandidate(backlog, pillar, schedule, 5);
      if (!candidate) continue;

      candidate.status = 'ready';
      candidate.date = date;
      candidate.time = timeSlots[i];
      schedule.push(candidate);
    }
  }

  saveQueue(brand, queue);
  return schedule;
};

Runs as a scheduled task every Sunday at 23:00. 14 slots assigned per week, autonomously.
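pickCandidate is the piece that does the dedup, and it isn't shown above. One plausible shape, assuming two slots per day so the last windowDays * 2 schedule entries approximate the window (the naming and logic here are my sketch, not the author's exact implementation):

```js
// Hypothetical pickCandidate: return the first backlog slot matching the
// pillar, unless that pillar already appears inside the dedup window.
// At 2 slots/day, the last windowDays * 2 entries cover windowDays days.
const pickCandidate = (backlog, pillar, schedule, windowDays) => {
  const recent = schedule.slice(-windowDays * 2);
  if (recent.some(s => s.pillar === pillar)) return null;
  return backlog.find(s => s.status === 'backlog' && s.pillar === pillar) ?? null;
};
```

Because buildWeeklyGantt flips a picked slot's status to ready, the status check also prevents the same slot being picked twice in one run.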

Part 3: The publisher (preflight matters)

An early version of the publisher had a silent bug: Instagram's API returns Error 4 on transient network issues, the Node.js client retries, and you get duplicate publishes.

The fix: preflight check against the live API before every upload.

```js
const preflightCheck = async (slot) => {
  const recent = await fetch(
    `https://graph.instagram.com/v22.0/me/media?limit=25&access_token=${TOKEN}`
  ).then(r => r.json());

  const fingerprint = hashCaption(slot.caption) + '|' + hashFile(slot.files);

  for (const media of recent.data) {
    const liveFingerprint = hashCaption(media.caption) + '|' + hashFile(media.media_url);
    if (fingerprint === liveFingerprint) {
      throw new Error(`DUPLICATE_DETECTED: slot ${slot.slot} already published as ${media.id}`);
    }
  }

  return true;
};

const publish = async (slot) => {
  await preflightCheck(slot); // throws if duplicate
  const mediaId = await uploadToInstagram(slot);
  // Persist through the queue file: re-load, mutate the matching slot, save.
  const queue = loadQueue(slot.brand);
  const entry = queue.slots.find(s => s.slot === slot.slot);
  entry.status = 'published';
  entry.media_id = mediaId;
  entry.published_at = new Date().toISOString();
  saveQueue(slot.brand, queue);
  appendToLog(entry);
};
```

Zero duplicate publishes in 47 attempts since this preflight was added.

Part 4: The generators

Each generator is a Claude Code slash-command. The contract: produce a tool-package.md or content artifact + append to queue.

Example: carousel generator for angel numbers.

```
/קרוסלה-nex
brand: nex
pillar: angel-number
number: 444
input: data/angel-meanings/444.md
output:
  - קריאייטיב/nex/444/slide-{1..8}.png (via Puppeteer)
  - קריאייטיב/nex/444/caption.md
queue: append to nex-ig-queue.json as backlog
```

Behind the scenes, Puppeteer opens an HTML template, injects brand tokens from DESIGN.md, renders each slide to 1080x1350 PNG. Reels use Remotion (React video framework) with a similar approach.
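The token-injection step can be sketched without Puppeteer itself. Assuming DESIGN.md tokens end up as a flat name→value map, they can be spliced into the template as CSS variables before the screenshot (injectBrandTokens is my illustrative name, not the author's):

```js
// Hypothetical injectBrandTokens: render design tokens as a :root CSS block
// and splice it into the slide template's <head> before capture, so every
// slide picks up brand colors/fonts via var(--token-name).
const injectBrandTokens = (html, tokens) => {
  const css = Object.entries(tokens)
    .map(([name, value]) => `  --${name}: ${value};`)
    .join('\n');
  return html.replace('</head>', `<style>:root {\n${css}\n}</style></head>`);
};
```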

Part 5: The reconciler

Every night at 08:00, a script scans for orphans:

  • Files on disk not referenced by any queue slot
  • Queue slots pointing to non-existent files
  • Published slots with missing media_id

Orphaned files get auto-appended to backlog with a flag. Invalid slots get flagged for manual review. This is how you catch drift before it becomes technical debt.

What Claude Code brings

Why use Claude Code as the orchestrator instead of just Node.js scripts?

  1. Slash-commands as contracts. Each generator is a documented skill with clear inputs and outputs. New pillars = new skills, not new spaghetti in a monorepo.
  2. Hooks for lifecycle events. Session start = load queue state. Session end = run reconciler. No cron ceremony for these.
  3. Memory system. "Remember that this brand's preflight must match on caption + first slide hash, not just caption." Claude remembers across sessions.
  4. MCP integration. My Meta Ads MCP and Obsidian MCP servers plug into the same orchestrator. One language for content + ads + CRM.

You could replicate most of this without Claude Code. But every skill you'd otherwise build from scratch (a report formatter, a decision framework, a publishing playbook) now reuses a known pattern.

Results after 30 days

| Metric | Goal | Actual |
| --- | --- | --- |
| IG posts published | 60 | 47 |
| Blog articles | 30 | 50 |
| Free tools live | 4 | 11 |
| Duplicate publishes | 0 | 0 |
| Operator time/day | <60 min | ~45 min |

The delta on IG posts (47 vs 60) was intentional. I tightened the rotation dedup from 3 days to 5 to preserve quality over volume.

What I'd do differently

  1. Build the reconciler FIRST, not last. Orphans accumulated for two weeks before I noticed. The reconciler is the ultimate truth check.
  2. Make preflight non-negotiable from day one. Two duplicate publishes in week 1 damaged engagement. Preflight should be day-1 infrastructure, not a day-15 bug fix.
  3. Fewer pillars to start. I launched with 6. Three would've been enough for month 1.

Takeaways if you're building something similar

  1. Queue is the source of truth. Not generation, not scheduling, not publishing. The queue.
  2. Preflight against the live API. Not your queue, not your logs. The live platform.
  3. Single responsibility per component. Generators generate. Gantt schedules. Publisher publishes. Reconciler audits. No shared state.
  4. Reconcile nightly. Orphans and drift are inevitable. Catch them within 24 hours.
  5. Use a file as the source of truth. meta-decision-rules.md or queue.json. Not a Notion doc, not a Slack message. Versioned, grep-able, git-diffable.

Repo / code

Specific implementations are private (business-sensitive), but happy to share the architecture and a sanitized version on request. Hit me up on Hashnode.

About

I run mynextools.com (free calculator tools) and a Shopify brand on the side. I build content and automation infrastructure for solo founders who don't have time to build it themselves. Available for consulting on Upwork.

