Aviral Bhardwaj

Building ContentPilot: A Visual AI Pipeline with RocketRide, Node.js, and React

Tags: #showdev #javascript #ai #webdev


I just shipped ContentPilot, a full-stack tool that uses an AI pipeline to draft and schedule LinkedIn and Medium posts. This post walks through the architecture in detail: the pipeline, the WebSocket client I wrote, the orchestration layer with graceful fallback, and the live UI that animates the pipeline as it runs.

Code snippets are real. Repo link at the bottom.

The stack

  • Backend: Node.js + Express, SQLite for persistence, node-cron for scheduling
  • Frontend: React + Vite
  • AI pipeline: RocketRide engine (running locally via VS Code extension or Docker)
  • LLM: OpenAI GPT-4o (via RocketRide pipeline node, or direct as fallback)
  • Publishing: LinkedIn and Medium APIs

The architecture in one picture

┌─────────────┐      HTTP       ┌──────────────┐    WebSocket    ┌──────────────────┐
│   React UI  │ ──────────────▶ │  Express API │ ──────────────▶ │ RocketRide Engine│
│             │                 │              │                 │  (5-node graph)  │
└─────────────┘                 └──────┬───────┘                 └──────────────────┘
                                       │
                                       │ fallback if engine offline
                                       ▼
                                ┌──────────────┐
                                │  OpenAI API  │
                                └──────────────┘
                                       │
                                       │ on schedule
                                       ▼
                                ┌──────────────┐
                                │ LinkedIn /   │
                                │ Medium APIs  │
                                └──────────────┘

Three deliberate boundaries:

  1. The API server doesn't know how AI works. It hands inputs to a pipeline and gets a post back (see the route sketch after this list).
  2. The pipeline doesn't know about scheduling or publishing. It does one thing.
  3. The scheduler doesn't know about AI. It picks up finished posts and publishes them.
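
To make boundary 1 concrete, the generation route ends up being little more than glue. Here's a rough sketch; the route path, request fields, posts-table columns, and db helper are my assumptions, not necessarily what's in the repo:

// backend/routes/posts.js (illustrative)
const express = require('express');
const { generate } = require('../services/rocketride');
const db = require('../db');

const router = express.Router();

router.post('/generate', async (req, res) => {
  const { topic, platform, tone } = req.body;

  // The route knows nothing about prompts, models, or pipelines:
  // it hands the input to the orchestration layer and stores the result.
  const post = await generate({ topic, platform, tone });

  const { lastID } = await db.run(
    'INSERT INTO posts (title, body, platform, status) VALUES (?, ?, ?, ?)',
    [post.title, post.body, platform, 'draft']
  );

  res.json({ id: lastID, ...post });
});

module.exports = router;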

The pipeline

The RocketRide pipeline is a five-node directed graph:

[Webhook] → [Prompt Builder] → [OpenAI GPT-4o] → [Response Parser] → [Output]

It lives at backend/pipelines/content_generator.pipe. The pipeline file is JSON-ish but you don't really edit it as text — you open it in VS Code with the RocketRide extension installed and the graph renders inline. Each node has typed inputs and outputs, so the parser node knows it's getting an LLM response and the output node knows it's getting structured JSON.

The big architectural win: changing the prompt doesn't touch my application code. I open the pipeline, click the prompt builder node, edit the template, and save. My server doesn't redeploy. My tests don't break. The change is isolated to the layer that should own it.

The RocketRide client

The official RocketRide TypeScript SDK is great, but I wanted a plain Node.js client with full control over the WebSocket lifecycle (reconnect behavior, custom logging, request correlation). So I wrote my own client that mirrors the official SDK's API exactly:

// backend/services/rocketride-client.js

const WebSocket = require('ws');
const crypto = require('crypto');

class RocketRideClient {
  constructor({ uri, apiKey }) {
    this.uri = uri;
    this.apiKey = apiKey;
    this.ws = null;
    this.pending = new Map();
  }

  async connect() {
    this.ws = new WebSocket(this.uri, {
      headers: { 'x-api-key': this.apiKey }
    });
    await this._waitForOpen();
    this.ws.on('message', (data) => this._handleMessage(data));
  }

  async use({ filepath }) {
    return this._request('use', { filepath });
  }

  async send(token, payload, options = {}, contentType = 'application/json') {
    return this._request('send', { token, payload, options, contentType });
  }

  async terminate(token) {
    return this._request('terminate', { token });
  }

  async disconnect() {
    this.ws?.close();
  }

  _request(method, params) {
    const id = crypto.randomUUID();
    return new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
      this.ws.send(JSON.stringify({ id, method, params }));
    });
  }

  _handleMessage(data) {
    const msg = JSON.parse(data);
    const pending = this.pending.get(msg.id);
    if (!pending) return;
    this.pending.delete(msg.id);
    msg.error ? pending.reject(msg.error) : pending.resolve(msg.result);
  }
}

module.exports = RocketRideClient;

The full file is about 150 lines with reconnect logic and error handling. The point is that the RocketRide protocol is small enough that reimplementing the client is a reasonable choice. That's a property I value in infrastructure. Most AI tooling fails this test badly.
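
One detail worth showing: the `_waitForOpen` helper used in connect() is just a promise wrapper around the ws events. Something like this (a sketch, not the exact code from the repo):

_waitForOpen() {
  return new Promise((resolve, reject) => {
    this.ws.once('open', resolve);
    this.ws.once('error', reject);
  });
}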

Usage is exactly what you'd expect:

const client = new RocketRideClient({
  uri: 'ws://localhost:5565',
  apiKey: process.env.ROCKETRIDE_APIKEY
});

await client.connect();
const { token } = await client.use({ filepath: './pipelines/content_generator.pipe' });
const result   = await client.send(token, JSON.stringify(input), {}, 'application/json');
await client.terminate(token);
await client.disconnect();

The orchestration layer (with fallback)

Here's where the architecture pays off. My orchestration service has a single public method:

// backend/services/rocketride.js

async function generate(input) {
  if (await isEngineHealthy()) {
    try {
      return await runViaRocketRide(input);
    } catch (err) {
      logger.warn('RocketRide pipeline failed, falling back', { err });
    }
  }
  return await runViaOpenAIDirect(input);
}

The fallback path uses the same prompt template (loaded from the same source as the pipeline's prompt builder node) and parses the response with the same logic. Output shape is identical. Downstream code doesn't care which path ran.
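
For context, the fallback path can be as small as a single function. A sketch of runViaOpenAIDirect, assuming the official openai Node SDK; loadPromptTemplate, renderPromptTemplate, and parseResponse are hypothetical names for the shared helpers described above:

const OpenAI = require('openai');
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function runViaOpenAIDirect(input) {
  // Same template the pipeline's prompt builder node reads.
  const prompt = renderPromptTemplate(await loadPromptTemplate(), input);

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }],
    response_format: { type: 'json_object' }
  });

  // Same parsing logic as the pipeline's response parser node,
  // so both paths return an identical post shape.
  return parseResponse(completion.choices[0].message.content);
}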

This means:

  • Dev environment: RocketRide engine running locally, full pipeline used, run history captured.
  • CI environment: No engine, direct OpenAI call, tests pass.
  • Production failure mode: Engine crashes, traffic seamlessly routes to direct OpenAI.

I didn't design for this. It fell out of having a clean boundary.

The live pipeline animation

The frontend has a /generate page that animates the pipeline as it runs. When you submit a topic, the UI polls a status endpoint and updates each node's state in near real time:

import { useEffect, useState } from 'react';

// NODES is the static description of the five pipeline nodes rendered below.
function PipelineAnimation({ runId }) {
  const [nodeStates, setNodeStates] = useState({});

  useEffect(() => {
    const interval = setInterval(async () => {
      const status = await fetch(`/api/posts/generate/${runId}/status`).then(r => r.json());
      setNodeStates(status.nodes);
      if (status.complete) clearInterval(interval);
    }, 300);
    return () => clearInterval(interval);
  }, [runId]);

  return (
    <div className="pipeline">
      {NODES.map(node => (
        <PipelineNode
          key={node.id}
          {...node}
          state={nodeStates[node.id] || 'pending'}
        />
      ))}
    </div>
  );
}

Each PipelineNode renders with a CSS class based on its state (pending, running, complete, error), and a small lucide-react icon pulses while it's running. The whole effect is ~80 lines of React and looks genuinely cool. The data comes free from RocketRide's per-node execution tracking — I'm just polling for it.
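
For completeness, PipelineNode itself can be as small as this (the icon mapping and class names are my own choices, not lifted from the repo):

import { Circle, Loader2, CheckCircle2, XCircle } from 'lucide-react';

const ICONS = {
  pending: Circle,
  running: Loader2,      // pulses via CSS while the node is executing
  complete: CheckCircle2,
  error: XCircle
};

function PipelineNode({ label, state }) {
  const Icon = ICONS[state] ?? Circle;
  return (
    <div className={`pipeline-node pipeline-node--${state}`}>
      <Icon className={state === 'running' ? 'icon icon--pulse' : 'icon'} size={18} />
      <span>{label}</span>
    </div>
  );
}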

Run history as zero-cost observability

The /api/pipeline/runs endpoint is fifteen lines:

router.get('/runs', async (req, res) => {
  const runs = await rocketrideClient.listRuns({ limit: 50 });
  res.json(runs.map(r => ({
    id: r.id,
    timestamp: r.startedAt,
    duration: r.completedAt - r.startedAt,
    status: r.status,
    nodeTimings: r.nodes.map(n => ({
      name: n.name,
      duration: n.duration
    }))
  })));
});

The frontend's /pipeline page renders this as a table with sparklines for per-node timing. When generation feels slow, I can look at the dashboard and immediately see whether it's the LLM node (almost always) or the parser (occasionally, when the model returns weird formatting).

I didn't write an APM. I didn't add Datadog. The observability is a side effect of the architecture.

The scheduler and publisher

Once a post is generated, it lives in SQLite with a scheduled_for timestamp (nullable). A node-cron job runs every minute:

const cron = require('node-cron');

cron.schedule('* * * * *', async () => {
  const due = await db.all(`
    SELECT * FROM posts
    WHERE scheduled_for <= ? AND status = 'scheduled'
  `, [Date.now()]);

  for (const post of due) {
    try {
      await publisher.publish(post);
      await db.run('UPDATE posts SET status = ? WHERE id = ?', ['published', post.id]);
    } catch (err) {
      await db.run('UPDATE posts SET status = ?, error = ? WHERE id = ?',
        ['failed', err.message, post.id]);
    }
  }
});
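
The publisher is a thin switch on the post's target platform. As one example, the Medium path is a single authenticated POST against Medium's v1 API. This is a sketch: the env var names and the field names on post are assumptions:

async function publishToMedium(post) {
  // Create a post under the authenticated Medium user.
  const res = await fetch(
    `https://api.medium.com/v1/users/${process.env.MEDIUM_AUTHOR_ID}/posts`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.MEDIUM_TOKEN}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        title: post.title,
        contentFormat: 'markdown',
        content: post.body,
        publishStatus: 'public'
      })
    }
  );
  if (!res.ok) throw new Error(`Medium publish failed: ${res.status}`);
  return res.json();
}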


What I'd do differently

A few things I'd change if I were starting over:

  • Use server-sent events instead of polling for the live pipeline animation (see the sketch after this list). Polling every 300ms was easy, but it's the wrong primitive.
  • Load the prompt template from a single source of truth. Right now it's loaded separately by the pipeline's prompt builder node and by the fallback path, and the two copies drift if I'm not careful.
  • Put the scheduler in a separate process. Running it in the same Node.js process as the API is fine for a side project, terrible for anything you actually rely on.
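
For reference, the SSE version of the status feed isn't much code. A sketch, where runEvents is a hypothetical per-run EventEmitter the generation service would publish to:

// Server: stream node-state updates as they happen instead of being polled.
router.get('/generate/:runId/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.flushHeaders();

  const onUpdate = (nodes) => res.write(`data: ${JSON.stringify(nodes)}\n\n`);
  runEvents.on(req.params.runId, onUpdate);
  req.on('close', () => runEvents.off(req.params.runId, onUpdate));
});

// Client: an EventSource subscription replaces the 300ms polling loop.
const source = new EventSource(`/api/posts/generate/${runId}/events`);
source.onmessage = (e) => setNodeStates(JSON.parse(e.data));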
