
Arvind Parekh

How We Built a Real-Time AI Thought Feed in 48 Hours

When we set out to build Ship and Tell, we wanted developers to actually see the AI working — not just wait for a result.

We tapped into the Subconscious stream API and built a live thought feed that surfaces each reasoning step as it arrives. The result is a UI that feels alive: amber cards pulse in as new thoughts land, older ones dim, and the final answer snaps into place when the agent finishes.

Here's how we pulled it off in a single hackathon weekend.

The Problem

Most AI tools are black boxes. You click a button, wait, and eventually get output. There's no feedback loop, no sense of progress, and no way to understand why the AI produced what it did.

We wanted to change that.

The Architecture

Ship and Tell listens for GitHub webhook events. When a PR merges, it spawns 5 parallel research agents via the Subconscious SDK:

  1. Problem Hunter — identifies the pain point the PR addresses
  2. Prior Art — finds similar solutions and prior work
  3. Community Finder — maps target developer communities
  4. Technical Explainer — breaks down the implementation
  5. Timing Analyst — evaluates market timing
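The fan-out itself is just parallel promises. Here's a minimal sketch of the pattern, with the Subconscious SDK call stubbed out (the `runAgent` helper and the `AgentResult` shape are our illustration, not the SDK's real API):

```typescript
// Hypothetical result shape; the real SDK returns a stream handle instead.
type AgentResult = { agent: string; findings: string };

// Stub standing in for a Subconscious SDK call (assumption, not the real API).
async function runAgent(name: string, prTitle: string): Promise<AgentResult> {
  return { agent: name, findings: `research on "${prTitle}" by ${name}` };
}

const AGENTS = [
  "Problem Hunter",
  "Prior Art",
  "Community Finder",
  "Technical Explainer",
  "Timing Analyst",
];

// Fan out all five agents in parallel and wait for every result.
async function researchPR(prTitle: string): Promise<AgentResult[]> {
  return Promise.all(AGENTS.map((name) => runAgent(name, prTitle)));
}
```

Because the agents are independent, `Promise.all` gives the full fan-out with no coordination logic beyond the final join.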

Each agent streams its reasoning in real time. The frontend polls every 1.5 seconds and renders each thought as a discrete card.
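Polling keeps the transport dead simple: no WebSocket infrastructure, just an interval and a callback. A minimal sketch of the loop (the `Thought` shape and fetcher are assumptions; in the real app the callback feeds React state):

```typescript
// Hypothetical thought shape; the real payload may differ.
type Thought = { id: string; text: string };

// Poll a fetcher on an interval and hand each batch to the UI.
// Returns a stop function so the caller can cancel when the run finishes.
function pollThoughts(
  fetchThoughts: () => Promise<Thought[]>,
  onUpdate: (thoughts: Thought[]) => void,
  intervalMs = 1500,
): () => void {
  const timer = setInterval(async () => {
    try {
      onUpdate(await fetchThoughts());
    } catch {
      // Swallow transient fetch errors; the next tick retries.
    }
  }, intervalMs);
  return () => clearInterval(timer);
}
```

In a React component this would live in a `useEffect`, returning the stop function as the cleanup so polling ends on unmount.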

The Streaming Challenge

The Subconscious stream API emits raw JSON deltas — not clean text. We wrote a regex extractor that parses thought strings from the accumulating JSON payload and pushes them into React state on every chunk.
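A sketch of that extractor, assuming thoughts appear in the accumulating buffer as `"thought": "…"` fields (the field name is our guess at the payload shape, not a documented contract):

```typescript
// Match every complete "thought": "…" string in the buffer, allowing
// escaped characters (\" , \\n, …) inside the string body.
const THOUGHT_RE = /"thought"\s*:\s*"((?:[^"\\]|\\.)*)"/g;

// Pull all complete thought strings out of the raw, possibly-truncated
// JSON buffer. Incomplete trailing strings simply don't match yet and
// get picked up on a later chunk.
function extractThoughts(buffer: string): string[] {
  const thoughts: string[] = [];
  for (const match of buffer.matchAll(THOUGHT_RE)) {
    // Re-wrap the captured body in quotes and let JSON.parse unescape it.
    thoughts.push(JSON.parse(`"${match[1]}"`));
  }
  return thoughts;
}
```

Running this over the whole buffer on every chunk and diffing against what's already rendered avoids having to parse the (still-incomplete) JSON document itself.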

The Result

After all 5 agents finish, a synthesizer combines their research into a blog post, Twitter thread, and HN submission. A Slack message arrives with one-click publish buttons — no copy-paste required.
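The Slack side uses standard Block Kit interactive buttons; here's a sketch of what that message payload could look like (the action IDs and button labels are our assumptions about Ship and Tell's setup, though the block shapes follow Slack's real Block Kit format):

```typescript
// Hedged sketch: real Slack Block Kit structure, hypothetical wiring.
const publishMessage = {
  text: "Your PR research is ready to publish",
  blocks: [
    {
      type: "section",
      text: { type: "mrkdwn", text: "*Research complete.* Pick a target to publish:" },
    },
    {
      type: "actions",
      // One button per output format; action_id routes the click handler.
      elements: ["Blog post", "Twitter thread", "HN submission"].map((label) => ({
        type: "button",
        text: { type: "plain_text", text: `Publish ${label}` },
        action_id: `publish_${label.toLowerCase().replace(/\s/g, "_")}`,
      })),
    },
  ],
};
```

Posting that payload to a Slack app's channel gives the one-click flow: each button click hits the app's interactivity endpoint, which publishes the corresponding draft.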

Built in 48 hours with Next.js 16, React 19, Tailwind CSS 4, and the Subconscious SDK.
