DEV Community

Programming Central

Posted on • Edited on • Originally published at programmingcentral.hashnode.dev

Generative UI: Building a Real-Time Weather & Stock Assistant with Streaming RSCs

The traditional web request-response cycle is dead. At least, it's dying for the next generation of AI-native applications. We are moving away from fetching a static HTML bundle and waiting for the full page to load. Instead, we are entering the era of Generative UI, where interfaces are rendered in real time, chunk by chunk, streamed directly from the server.

In this deep dive, we explore how to build a "Weather & Stock Assistant"—an application that doesn't just return data, but constructs the user interface itself on the fly. We will dissect the architecture behind Streaming UI, Tool Invocation via JSON Schema, and the Stateful Checkpointer, using the Vercel AI SDK and Next.js.

The Paradigm Shift: From Static to Generative

In previous chapters, we explored React Server Components (RSC) and their ability to decouple the component tree from the client-side bundle. We established that RSCs allow us to render complex, data-heavy components exclusively on the server, sending only the necessary UI payload to the client.

In this chapter, we extend that concept. We aren't simply fetching data and rendering a static chart; we are instructing an LLM to act as a UI designer and data fetcher, orchestrating a stream of UI chunks that assemble themselves in the browser. To understand this, we must dissect the three pillars of this architecture: Streaming UI, Tool Invocation via JSON Schema, and the Stateful Checkpointer.

The Streaming UI Paradigm: From Packets to Pixels

Imagine a traditional web application as a postal service. When a user requests a page, the server packages the entire response (HTML, CSS, JS) into a single, large box and ships it. The user sees nothing until the entire box arrives and is unpacked.

Generative UI with streaming transforms this into a live video feed. Instead of sending a finished box, the server sends a continuous stream of frames. Each frame represents a discrete unit of UI—a paragraph, a chart component, a button. The client renders these frames as they arrive, creating a perceived instantaneous experience even while the backend is still processing complex data.

In the context of the Vercel AI SDK, this is handled by the useChat hook. Under the hood, this utilizes the Web Streams API. The server generates a ReadableStream of text. However, in our specific application, we are not just streaming text; we are streaming React Server Components.

When an LLM decides to render a weather chart, it doesn't send a description of the chart. It sends the serialized RSC payload. The client receives this stream, parses the JSON-like structure, and progressively renders the React tree. This is akin to a printer that prints line-by-line rather than waiting for the whole page to be composed.
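The progressive-rendering loop can be sketched without any framework at all. In the toy sketch below (all names illustrative, not SDK APIs), a generator stands in for the `ReadableStream` the server actually produces: the server emits discrete UI "frames," and the client paints each one the moment it arrives instead of waiting for the whole response.

```typescript
// A "frame" is one discrete unit of UI, e.g. a serialized component.
type UIFrame = { type: 'text' | 'component'; payload: string };

// Server side: emit frames one at a time instead of one finished page.
// (A generator stands in for the ReadableStream the AI SDK uses.)
function* serverStream(): Generator<UIFrame> {
  yield { type: 'text', payload: 'Fetching weather for New York…' };
  yield { type: 'component', payload: '<WeatherCard city="New York" />' };
  yield { type: 'text', payload: 'Here is the current forecast.' };
}

// Client side: render progressively — the user sees each frame
// immediately, even while the backend is still producing the rest.
function renderProgressively(stream: Generator<UIFrame>): string[] {
  const painted: string[] = [];
  for (const frame of stream) {
    painted.push(`[${frame.type}] ${frame.payload}`); // paint this frame now
  }
  return painted;
}

const frames = renderProgressively(serverStream());
```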

Tool Invocation: Giving the LLM "Hands"

To make this streaming useful, the LLM needs to interact with the real world—fetching weather data or stock prices. We cannot rely on the LLM's internal knowledge, which is static and prone to hallucination. We need to give the LLM "hands" in the form of tools.

When we ask an LLM to perform a task, we often need its response to conform to a strict structure so our code can parse it reliably. This is where JSON Schema Output comes in.

The Analogy: The Restaurant Order Form

Think of an LLM as a waiter taking an order. If you ask, "What do you recommend?", the waiter might give a free-form paragraph describing dishes. This is unstructured text. If you hand the waiter a specific order form with checkboxes for "Appetizer," "Main Course," and "Dessert," you force the waiter to structure their response into predictable fields.

In our app, the JSON Schema is that order form. We define a schema that requires a tool field (e.g., "getWeather") and an input field (e.g., "city: New York"). The LLM, acting as the waiter, fills out this form. Our code then reads the form and executes the action.

This is critical for reliability. Without JSON Schema, we would have to parse natural language to extract parameters, which is error-prone. With the schema, we can use a validation library like Zod to parse the LLM's output instantly.
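Here is a minimal sketch of that validate-then-dispatch step. The schema check is hand-rolled for self-containment (in the real app Zod's `safeParse` plays this role), and the tool names, handlers, and return values are illustrative:

```typescript
// The "order form": a tool call must name a known tool and supply its input.
type ToolCall = { tool: string; input: Record<string, string> };

// Illustrative tool handlers — the LLM's "hands".
const tools: Record<string, (input: Record<string, string>) => string> = {
  getWeather: (input) => `Weather in ${input.city}: 18°C, cloudy`,
  getStockPrice: (input) => `${input.symbol}: $212.44`,
};

// Hand-rolled stand-in for Zod's safeParse: reject anything that doesn't
// match the schema instead of trying to parse free-form natural language.
function parseToolCall(raw: unknown): ToolCall | null {
  if (typeof raw !== 'object' || raw === null) return null;
  const obj = raw as Record<string, unknown>;
  if (typeof obj.tool !== 'string' || !(obj.tool in tools)) return null;
  if (typeof obj.input !== 'object' || obj.input === null) return null;
  return { tool: obj.tool, input: obj.input as Record<string, string> };
}

// The LLM's structured output arrives as JSON text ("the filled-out form").
const llmOutput = '{"tool": "getWeather", "input": {"city": "New York"}}';
const call = parseToolCall(JSON.parse(llmOutput));
const result = call ? tools[call.tool](call.input) : 'invalid tool call';
```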

The Checkpointer: State as a First-Class Citizen

In a complex agent that performs multi-step reasoning (e.g., "First, get the weather. Second, analyze the stock trend. Third, generate a chart"), the state becomes a graph of execution. This is where the Checkpointer becomes vital.

Analogy: The Video Game Save System
Imagine playing a complex RPG. You defeat a boss (Node 1), then solve a puzzle (Node 2). If the game crashes after Node 2, you don't want to restart from the beginning. You reload your "save file." The Checkpointer is that save file. It captures the exact state of the world (variables, memory, previous decisions) at specific intervals.

In our Weather & Stock Assistant, the Checkpointer allows us to:

  1. Resume: If a user closes the browser during a long data fetch, we can reopen the chat and the agent resumes exactly where it left off.
  2. Debug: We can inspect the state at any point in the graph to see why the agent made a specific decision.
  3. Branch: We can theoretically fork a state to try a different path without affecting the original execution.
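A checkpointer can be sketched as little more than a keyed store of state snapshots. This toy in-memory version (all names illustrative; real implementations, such as LangGraph's checkpointers, persist to a database) demonstrates all three capabilities:

```typescript
// One snapshot of the agent's world at a point in the execution graph.
type AgentState = { step: number; memory: Record<string, string> };

class Checkpointer {
  private saves = new Map<string, AgentState[]>();

  // Save a snapshot after each node completes.
  save(threadId: string, state: AgentState): void {
    const history = this.saves.get(threadId) ?? [];
    history.push(structuredClone(state)); // copy: later mutations can't corrupt the save
    this.saves.set(threadId, history);
  }

  // Resume: reload the latest snapshot for a thread.
  latest(threadId: string): AgentState | undefined {
    const history = this.saves.get(threadId) ?? [];
    return history[history.length - 1];
  }

  // Branch: fork a thread's history to try a different path.
  fork(threadId: string, newThreadId: string): void {
    const history = this.saves.get(threadId) ?? [];
    this.saves.set(newThreadId, history.map((s) => structuredClone(s)));
  }
}

const cp = new Checkpointer();
cp.save('chat-1', { step: 1, memory: { weather: 'cloudy' } });
cp.save('chat-1', { step: 2, memory: { weather: 'cloudy', stock: 'AAPL up' } });

// Browser closed mid-run? Reload and continue from step 2.
const resumed = cp.latest('chat-1');
cp.fork('chat-1', 'chat-1-alt'); // experiment without touching the original
```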

Visualizing the Data Flow

To visualize how these concepts interact, consider the flow of data from user input to rendered UI. The user sends a message, which is passed to the LLM. The LLM, constrained by a JSON Schema, outputs a structured tool call. The system executes this tool (fetching data), persists the result via a Checkpointer, and then prompts the LLM again to generate the UI (RSC) based on that data. Finally, the RSC payload is streamed to the client and hydrated into the DOM.
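Stubbed end to end, that flow reads almost exactly like its description. Every function below is a placeholder for the real stage it names (the LLM call, the tool executor, the checkpointer, the RSC renderer) — only the sequencing is the point:

```typescript
// Placeholders for the real pipeline stages (illustrative only).
const llmToolCall = (msg: string) => ({ tool: 'getWeather', input: { city: msg } });
const executeTool = (call: { tool: string; input: { city: string } }) =>
  `{"tempC": 18, "city": "${call.input.city}"}`;
const checkpoints: string[] = [];
const saveCheckpoint = (state: string) => { checkpoints.push(state); };
const llmRenderUI = (data: string) => `<WeatherCard data='${data}' />`;

// User input → structured tool call → tool result → checkpoint →
// generated UI payload, which is then streamed to the client.
function handleUserMessage(message: string): string {
  const call = llmToolCall(message);  // LLM fills out the "order form"
  const data = executeTool(call);     // system executes the tool
  saveCheckpoint(data);               // persist state before rendering
  return llmRenderUI(data);           // LLM generates the RSC payload
}

const payload = handleUserMessage('New York');
```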

Building the Basic Example: Structured Output

Let's look at a practical implementation. We will build a "User Profile Generator" that takes a free-text description and outputs a structured JSON object. This demonstrates the core concept of JSON Schema Output.

The Schema Definition

We use Zod to define the shape. This serves two purposes: runtime validation and generating the JSON Schema for the LLM.

// app/actions.ts
import { z } from 'zod';

export const userProfileSchema = z.object({
  name: z.string().describe("The user's first name"),
  age: z.number().min(0).max(120).describe("The user's age in years"),
  interests: z.array(z.string()).describe("List of hobbies or interests"),
});

The API Route (Backend)

Here we use streamObject to enforce the schema. Under the hood, the AI SDK converts the Zod schema into JSON Schema, passes it to the provider's structured-output mode, and validates the streamed result against it. (Note: passing a raw Zod object into OpenAI's response_format directly does not work — the provider expects plain JSON Schema, which is exactly the conversion the SDK handles for us.)

// app/api/generate-profile/route.ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export const runtime = 'edge';

// In a real app, move this to a shared module so the client can import it too.
const userProfileSchema = z.object({
  name: z.string(),
  age: z.number(),
  interests: z.array(z.string()),
});

export async function POST(req: Request) {
  // useObject sends its `submit` argument as the raw JSON body.
  const { description } = await req.json();

  const result = await streamObject({
    model: openai('gpt-4o-mini'),
    // CRITICAL: the schema is enforced on the model's output
    schema: userProfileSchema,
    prompt: `Generate a user profile based on this description: ${description}`,
  });

  return result.toTextStreamResponse();
}

The Client Component (Frontend)

The client consumes the stream with experimental_useObject, the structured-output counterpart of useChat (as of AI SDK 3.x; newer releases export it from @ai-sdk/react). It re-validates each partial object against the same Zod schema as chunks arrive.

// app/page.tsx
'use client';

import { useState } from 'react';
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';

// Must match the schema used on the server.
const userProfileSchema = z.object({
  name: z.string(),
  age: z.number(),
  interests: z.array(z.string()),
});

export default function UserProfileGenerator() {
  const [input, setInput] = useState('');
  const { object, submit, isLoading } = useObject({
    api: '/api/generate-profile',
    schema: userProfileSchema,
  });

  return (
    <div style={{ padding: '2rem' }}>
      <h1>AI User Profile Generator</h1>
      <form
        onSubmit={(e) => {
          e.preventDefault();
          submit({ description: input });
        }}
      >
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Describe the user (e.g., '25 year old dev from NYC')"
          disabled={isLoading}
        />
        <button type="submit">Generate</button>
      </form>

      {object && (
        <pre style={{ background: '#f0f0f0', padding: '1rem' }}>
          {JSON.stringify(object, null, 2)}
        </pre>
      )}
    </div>
  );
}

Common Pitfalls with JSON Schema

When implementing this, developers often hit specific snags:

  1. Hallucinated Keys: Even with a schema, an LLM might return user_age instead of age.
    • Fix: Request strict schema adherence (strict: true in OpenAI's json_schema mode) and always re-validate the result with Zod on the server before using it.
  2. Edge Timeouts: Edge functions must begin responding within a fixed window — on Vercel, the first byte must be sent within 25 seconds, though a stream may continue beyond that.
    • Fix: Start streaming as early as possible, or switch the runtime to 'nodejs' and raise maxDuration for slow, non-streaming generations.
  3. Async/Await in RSCs: Awaiting the entire stream inside a Server Component's render blocks the page until generation finishes, throwing away the benefit of streaming.
    • Fix: Consume the stream incrementally in a Client Component hook (useChat or useObject), and wrap slow Server Components in Suspense boundaries.
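Pitfall #1 deserves a defensive pattern of its own: normalize known near-miss keys on the server, then reject anything still missing rather than passing bad data downstream. A minimal sketch (hand-rolled in place of Zod, with an illustrative alias table):

```typescript
// Keys the model is known to hallucinate, mapped to the canonical ones.
const keyAliases: Record<string, string> = { user_age: 'age', user_name: 'name' };

// Normalize near-miss keys, then verify the required shape is present.
function repairProfile(
  raw: Record<string, unknown>
): { name: string; age: number } | null {
  const fixed: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(raw)) {
    fixed[keyAliases[key] ?? key] = value;
  }
  if (typeof fixed.name !== 'string' || typeof fixed.age !== 'number') {
    return null; // reject instead of passing bad data downstream
  }
  return { name: fixed.name, age: fixed.age };
}

// The model returned user_age instead of age — repairable:
const repaired = repairProfile({ name: 'Ada', user_age: 36 });
// age missing entirely — rejected:
const rejected = repairProfile({ name: 'Ada' });
```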

Conclusion: The Future is Generated

We are not merely fetching data anymore. We are orchestrating a conversation between the user, the LLM, and external APIs, mediated by a strict structural contract (JSON Schema) and persisted via a state management system (Checkpointer).

The result is a UI that is not built, but generated—streaming live from the server to the client, component by component. This architecture leverages the strengths of the modern stack: the safety of TypeScript, the component model of React, and the intelligence of LLMs, all glued together by the streaming capabilities of Next.js and Vercel.

The concepts and code demonstrated here are drawn directly from the comprehensive roadmap laid out in the book The Modern Stack: Building Generative UI with Next.js, Vercel AI SDK, and React Server Components, available on Amazon.

Here are the volumes in the series:

  • Volume 1: Building Intelligent Apps with JavaScript & TypeScript. Foundations, OpenAI API, Zod, and LangChain.js.
  • Volume 2: The Modern Stack. Building Generative UI with Next.js, Vercel AI SDK, and React Server Components.
  • Volume 3: Master Your Data. Production RAG, Vector Databases, and Enterprise Search with JavaScript.
  • Volume 4: Autonomous Agents. Building Multi-Agent Systems and Workflows with LangGraph.js.
  • Volume 5: The Edge of AI. Local LLMs (Ollama), Transformers.js, WebGPU, and Performance Optimization.
  • Volume 6: The AI-Ready SaaS Boilerplate. Auth, Database with Vector Support, and Payment Stack.

You can find them on Leanpub or Amazon.
