DEV Community

Md. Maruf Rahman

Posted on • Originally published at marufrahman.live

Build an AI-Powered Data Management Assistant with TanStack React and Claude

Instead of clicking through a CRUD admin panel, imagine asking a chatbot to "add a new post, mark my first todo as done, and show me the updated tables". In this tutorial, we'll build that exact experience with TanStack Start, TanStack AI, Anthropic Claude, and a simple json-server backend.

Traditional admin dashboards do the job, but they rarely feel delightful. For simple operations—like checking a couple of posts or updating a todo status—you often have to dig through several screens and forms. With an AI-powered assistant, you flip that around: users describe what they want in natural language, and the assistant decides which tools to call behind the scenes.

📖 Want the complete guide with more examples and advanced patterns? Check out the full article on my blog for an in-depth tutorial with additional code examples, troubleshooting tips, and real-world use cases.

What the Assistant Can Do

The demo uses a json-server backend backed by a simple db.json file. The assistant can:

  • List, search, create, update, and delete posts (title + views)
  • Manage comments for each post
  • Manage todos with completion status
  • Read and update a small profile document
  • Set a browser-side counter stored in localStorage

All of that is accessible from a single chat box. The user doesn't need to know anything about endpoints or payload shapes—the AI agent takes care of calling the right tools with the right arguments.

Architecture Overview

The architecture is surprisingly simple:

  1. Backend: A single /api/chat route powered by @tanstack/ai and the anthropicText adapter
  2. Tools: Tool definitions that wrap json-server endpoints using Zod schemas
  3. Frontend: A React chat UI using @tanstack/ai-react with Server-Sent Events
  4. LLM: Claude Haiku handles natural language understanding and tool selection

Backend: Tools Over a json-server API

On the backend side we don't need a giant framework. A single /api/chat route, powered by @tanstack/ai and the anthropicText adapter, is enough. All the real work happens in tool definitions that wrap the json-server endpoints.

Installation

npm install @tanstack/ai @tanstack/ai-anthropic @tanstack/ai-react @tanstack/ai-client @tanstack/react-router zod

Chat API Route

Here's the chat API route handler; the tool definitions it references are shown in the next section:

import { chat, toServerSentEventsResponse, toolDefinition } from "@tanstack/ai";
import { anthropicText } from "@tanstack/ai-anthropic";
import { createFileRoute } from "@tanstack/react-router";
import z from "zod";

const API_BASE_URL = "http://localhost:4000";

export const Route = createFileRoute("/api/chat")({
  server: { handlers: { POST } },
});

export async function POST({ request }: { request: Request }) {
  // 1. Ensure Anthropic key is configured
  if (!process.env.ANTHROPIC_API_KEY) {
    return new Response(
      JSON.stringify({ error: "ANTHROPIC_API_KEY not configured" }),
      { status: 500, headers: { "Content-Type": "application/json" } }
    );
  }

  // 2. Parse and clean incoming messages from the client
  const body = await request.json();
  const rawMessages = Array.isArray(body.messages) ? body.messages : [];
  const messages = rawMessages.map(cleanMessage).filter(Boolean);

  // 3. System prompt that explains how the assistant should behave
  const systemMessage = {
    role: "system" as const,
    content:
      "You are a precise, professional assistant embedded in a demo dashboard. " +
      "Use the available tools to read and update posts, comments, todos, the profile, and the counter. " +
      "After using tools, always send a clear natural-language summary for the user. " +
      "Format tabular data as Markdown tables and keep answers concise and copy-friendly.",
  };

  // 4. Start the streaming chat with tools enabled
  const stream = chat({
    adapter: anthropicText("claude-haiku-4-5"),
    messages: [systemMessage, ...messages],
    tools: [
      listPostsTool,
      addPostTool,
      editPostTool,
      deletePostTool,
      listTodosTool,
      addTodoTool,
      editTodoTool,
      deleteTodoTool,
      getProfileTool,
      updateProfileTool,
      updateCounterToolDef,
    ],
  });

  // 5. Return an SSE response the React client can subscribe to
  return toServerSentEventsResponse(stream);
}

// Utility to strip unknown fields and keep messages in a safe shape
function cleanMessage(input: any) {
  if (!input || typeof input !== "object") return null;
  if (!input.role) return null;

  const msg: any = { role: input.role };

  if (input.content !== null && input.content !== undefined) {
    msg.content = input.content;
  }

  if (Array.isArray(input.parts) && input.parts.length > 0) {
    msg.parts = input.parts;
  }

  if (Array.isArray(input.toolCalls) && input.toolCalls.length > 0) {
    msg.toolCalls = input.toolCalls;
  }

  if (typeof input.toolCallId === "string") {
    msg.toolCallId = input.toolCallId;
  }

  return msg;
}

Tool Definitions

Tools for posts, todos, and profile follow the same pattern: describe input and output with Zod, then implement a small server function that talks to json-server. Here's an example for posts:

// Posts tools
const listPostsToolDef = toolDefinition({
  name: "list_posts",
  description: "Fetch all posts from json-server. Can optionally filter by search query.",
  inputSchema: z.object({
    query: z.string().optional(),
  }),
  outputSchema: z.array(
    z.object({
      id: z.string(),
      title: z.string(),
      views: z.number(),
    })
  ),
});

const listPostsTool = listPostsToolDef.server(async (args: any) => {
  const { query } = args as { query?: string };
  const url = new URL(API_BASE_URL + "/posts");
  if (query) url.searchParams.set("q", query);
  const response = await fetch(url.toString());
  if (!response.ok) {
    throw new Error("Failed to fetch posts: " + response.statusText);
  }
  return response.json();
});

const addPostToolDef = toolDefinition({
  name: "add_post",
  description: "Create a new post with a title and optional views count.",
  inputSchema: z.object({
    title: z.string(),
    views: z.number().optional(),
  }),
  outputSchema: z.object({
    id: z.string(),
    title: z.string(),
    views: z.number(),
  }),
});

const addPostTool = addPostToolDef.server(async (args: any) => {
  const { title, views = 0 } = args as { title: string; views?: number };
  const response = await fetch(API_BASE_URL + "/posts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title, views }),
  });
  if (!response.ok) {
    throw new Error("Failed to add post: " + response.statusText);
  }
  return response.json();
});

const editPostToolDef = toolDefinition({
  name: "edit_post",
  description: "Update an existing post. ID can be string or number.",
  inputSchema: z.object({
    id: z.union([z.string(), z.number()]),
    title: z.string().optional(),
    views: z.number().optional(),
  }),
  outputSchema: z.object({
    id: z.string(),
    title: z.string(),
    views: z.number(),
  }),
});

const editPostTool = editPostToolDef.server(async (args: any) => {
  const { id, title, views } = args as {
    id: string | number;
    title?: string;
    views?: number;
  };

  const update: any = {};
  if (title !== undefined) update.title = title;
  if (views !== undefined) update.views = views;

  const response = await fetch(API_BASE_URL + "/posts/" + String(id), {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(update),
  });

  if (!response.ok) {
    throw new Error("Failed to update post: " + response.statusText);
  }

  return response.json();
});

const deletePostToolDef = toolDefinition({
  name: "delete_post",
  description: "Delete a post by ID.",
  inputSchema: z.object({
    id: z.union([z.string(), z.number()]),
  }),
  outputSchema: z.object({ success: z.boolean() }),
});

const deletePostTool = deletePostToolDef.server(async (args: any) => {
  const { id } = args as { id: string | number };
  const response = await fetch(API_BASE_URL + "/posts/" + String(id), {
    method: "DELETE",
  });
  if (!response.ok) {
    throw new Error("Failed to delete post: " + response.statusText);
  }
  return { success: true };
});

The model now has a clear contract for what it can do with your data. The todo and profile tools repeat this shape with their own schemas and endpoints.
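As one concrete illustration, here's a sketch of what the server-side body of a todo-editing tool might look like, mirroring `editPostTool` above. The fetch function is injected so the helper is easy to unit-test; `toggleTodo`, `FetchLike`, and `FetchInit` are illustrative names of my own, not TanStack AI or json-server APIs:

```typescript
// Sketch: the server-side body of a hypothetical edit_todo tool.
// It PATCHes /todos/:id on json-server, the same shape as editPostTool above.
type Todo = { id: string; title: string; completed: boolean };
type FetchInit = { method?: string; headers?: Record<string, string>; body?: string };
type FetchResult = { ok: boolean; statusText: string; json(): Promise<unknown> };
type FetchLike = (url: string, init?: FetchInit) => Promise<FetchResult>;

async function toggleTodo(
  baseUrl: string,
  id: string | number,
  completed: boolean,
  fetchFn: FetchLike // pass the global fetch in real code
): Promise<Todo> {
  const response = await fetchFn(`${baseUrl}/todos/${String(id)}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ completed }),
  });
  if (!response.ok) {
    throw new Error("Failed to update todo: " + response.statusText);
  }
  return (await response.json()) as Todo;
}
```

Injecting `fetchFn` is optional, but it lets you test tool bodies against a stub without a running json-server.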

Frontend: Chat UI with TanStack AI React

On the client we use @tanstack/ai-react to manage the streaming chat state and connect to /api/chat over Server-Sent Events. The main chat component uses the useChat hook to handle messages, streaming, and tool calls.
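One thing to note before the component: it imports `updateCounterToolDef` from the route file, whose definition isn't shown in the backend snippet above. Following the same `toolDefinition` pattern, it would look roughly like this — the exact tool name and schema here are assumptions on my part, inferred from how the client implementation reads its arguments:

```typescript
// Assumed shape of the counter tool definition, exported from the chat route
// so the client can attach its browser-side implementation to it.
export const updateCounterToolDef = toolDefinition({
  name: "set_count",
  description:
    "Set the browser-side counter stored in localStorage, optionally reloading the page.",
  inputSchema: z.object({
    count: z.number(),
    reloadPage: z.boolean().optional(),
  }),
  outputSchema: z.object({ success: z.boolean() }),
});
```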

Chat Component

"use client";

import { useEffect, useRef, useState } from "react";
import { fetchServerSentEvents, useChat } from "@tanstack/ai-react";
import { clientTools } from "@tanstack/ai-client";
import { updateCounterToolDef } from "@/routes/api/chat";

// Client-side implementation for the `set_count` tool.
const updateCounterTool = updateCounterToolDef.client(((args: unknown) => {
  const payload = (args as { count?: number; reloadPage?: boolean }) || {};
  const count = typeof payload.count === "number" ? payload.count : 0;
  const reloadPage = payload.reloadPage;

  window.localStorage.setItem("counter", String(count));

  if (reloadPage !== false) {
    setTimeout(() => {
      window.location.reload();
    }, 2000);
  }

  return { success: true };
}) as any);

export function Chat() {
  const [input, setInput] = useState("");
  const bottomRef = useRef<HTMLDivElement | null>(null);

  // Hydrate messages from localStorage on first render
  const [initialMessages] = useState(() => {
    if (typeof window === "undefined") return [];
    try {
      const raw = window.localStorage.getItem("chatMessages");
      return raw ? JSON.parse(raw) : [];
    } catch {
      return [];
    }
  });

  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents("/api/chat"),
    tools: clientTools(updateCounterTool),
    initialMessages,
  });

  // Persist messages whenever they change
  useEffect(() => {
    if (!messages || messages.length === 0) return;
    window.localStorage.setItem("chatMessages", JSON.stringify(messages));
  }, [messages]);

  // Auto-scroll when a new message arrives
  useEffect(() => {
    if (!bottomRef.current) return;
    bottomRef.current.scrollIntoView({ behavior: "smooth", block: "end" });
  }, [messages, isLoading]);

  const handleSubmit = (event: React.FormEvent) => {
    event.preventDefault();
    if (!input.trim() || isLoading) return;
    sendMessage(input.trim());
    setInput("");
  };

  return (
    <div className="flex flex-col h-screen bg-gradient-to-br from-slate-50 via-white to-indigo-50">
      <header className="border-b border-slate-200 bg-white/80 backdrop-blur-sm">
        <div className="max-w-4xl mx-auto px-4 py-3 flex items-center justify-between gap-3">
          <div className="flex items-center gap-3">
            <div className="h-9 w-9 rounded-xl bg-gradient-to-br from-indigo-500 to-purple-500 flex items-center justify-center shadow-md">
              <span className="text-xs font-semibold text-white">AI</span>
            </div>
            <div>
              <div className="flex items-center gap-2">
                <h1 className="text-sm sm:text-base font-semibold text-slate-900">
                  TanStack AI Assistant
                </h1>
                <span className="hidden sm:inline-flex items-center gap-1 rounded-full bg-emerald-500/10 px-2 py-0.5 text-[10px] font-medium text-emerald-700 border border-emerald-500/30">
                  <span className="h-1.5 w-1.5 rounded-full bg-emerald-500 animate-pulse" />
                  Online
                </span>
              </div>
              <p className="text-[11px] sm:text-xs text-slate-600">
                Ask questions, run tools, and manage your demo data in a single chat.
              </p>
            </div>
          </div>
        </div>
      </header>

      <div className="flex-1 overflow-y-auto px-3 sm:px-4 md:px-6 py-4">
        <div className="max-w-4xl mx-auto space-y-4">
          {messages.map((message: any, index: number) => (
            <div
              key={message.id ?? index}
              className={
                message.role === "assistant"
                  ? "flex items-start gap-3"
                  : "flex items-start gap-3 justify-end"
              }
            >
              {message.role === "assistant" && (
                <div className="h-8 w-8 rounded-2xl bg-gradient-to-br from-indigo-500 to-purple-500 flex items-center justify-center shadow-md text-xs text-white font-semibold">
                  AI
                </div>
              )}
              <div
                className={
                  "max-w-[80%] rounded-2xl px-4 py-3 shadow-lg border text-sm leading-relaxed " +
                  (message.role === "assistant"
                    ? "bg-white border-slate-200 text-slate-900"
                    : "bg-gradient-to-r from-indigo-600 to-purple-600 border-indigo-200 text-white")
                }
              >
                <p className="whitespace-pre-wrap break-words">
                  {typeof message.content === "string"
                    ? message.content
                    : JSON.stringify(message.content)}
                </p>
              </div>
            </div>
          ))}

          {isLoading && (
            <div className="flex items-center gap-2 text-xs text-slate-600">
              <span className="h-2 w-2 rounded-full bg-indigo-500 animate-pulse" />
              <span>Assistant is typing</span>
            </div>
          )}

          <div ref={bottomRef} />
        </div>
      </div>

      <form
        onSubmit={handleSubmit}
        className="border-t border-slate-200 bg-white/80 backdrop-blur-sm p-3 sm:p-4"
      >
        <div className="max-w-4xl mx-auto flex gap-2 sm:gap-3 items-end">
          <input
            type="text"
            value={input}
            onChange={(event) => setInput(event.target.value)}
            placeholder="Ask the assistant to manage your posts, todos, or profile…"
            className="flex-1 px-4 py-2.5 sm:py-3 rounded-xl bg-white text-slate-900 placeholder:text-slate-500 border border-slate-300 focus:outline-none focus:ring-2 focus:ring-indigo-500 focus:border-indigo-500 transition-all text-sm sm:text-base shadow-sm"
            disabled={isLoading}
          />
          <button
            type="submit"
            disabled={!input.trim() || isLoading}
            className="px-6 sm:px-8 py-2.5 sm:py-3 bg-gradient-to-r from-indigo-500 to-purple-500 text-white rounded-xl font-medium shadow-lg hover:shadow-xl disabled:opacity-50 disabled:cursor-not-allowed transition-all text-sm sm:text-base hover:scale-[1.02] active:scale-[0.99]"
          >
            Send
          </button>
        </div>
      </form>
    </div>
  );
}

Key Features

  • Message persistence: Messages are stored in localStorage and rehydrated on page reload
  • Auto-scrolling: Automatically scrolls to the latest message
  • Streaming responses: Real-time streaming via Server-Sent Events
  • Tool integration: Client-side tools (like counter) work seamlessly with server-side tools
  • Modern UI: Beautiful gradient design with responsive layout

How It Works

  1. User sends a message → Frontend sends message to /api/chat
  2. Backend processes → Claude analyzes the message and decides which tools to call
  3. Tools execute → Server-side tools make API calls to json-server
  4. Response streams → Results stream back to the client via SSE
  5. UI updates → Chat component displays the assistant's response

The model can call multiple tools in one response, handle complex queries, and format results as Markdown tables.
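Step 4 is worth a quick illustration. Server-Sent Events is a plain-text protocol: each event arrives as lines prefixed with `data: `, separated by blank lines. `fetchServerSentEvents` handles the exact payloads @tanstack/ai emits for you, but a minimal parser for the transport format looks like this (`parseSseChunk` is an illustrative helper, not part of the library):

```typescript
// Minimal SSE parser: extract the payload of each `data:` line in a chunk.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}
```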

Setting Up json-server

Create a db.json file:

{
  "posts": [
    { "id": "1", "title": "First Post", "views": 100 },
    { "id": "2", "title": "Second Post", "views": 200 }
  ],
  "todos": [
    { "id": "1", "title": "Learn TanStack AI", "completed": false },
    { "id": "2", "title": "Build AI Assistant", "completed": true }
  ],
  "profile": {
    "name": "Demo User"
  }
}

Start json-server:

npx json-server --watch db.json --port 4000

Environment Variables

Create a .env file:

ANTHROPIC_API_KEY=your_anthropic_api_key_here

Get your API key from Anthropic Console.

Best Practices

  1. Clear tool descriptions - Write descriptive tool descriptions so the LLM understands when to use them
  2. Zod schemas - Use Zod for input/output validation and type safety
  3. Error handling - Handle errors gracefully in tool implementations
  4. System prompts - Craft clear system prompts that guide the assistant's behavior
  5. Message persistence - Store chat history for better user experience
  6. Streaming - Use Server-Sent Events for real-time responses
  7. Client tools - Use client-side tools for browser-specific operations (like localStorage)

Extending the Pattern

To adapt this assistant to your own project, you can:

  • Replace json-server with endpoints that talk to your real database
  • Add new tools for resources like invoices, projects, or tickets
  • Tune the system prompt to match your brand voice and safety requirements
  • Add authentication to secure tool calls
  • Implement rate limiting to prevent abuse
  • Add logging to track tool usage and errors

Once you've done that, you can ship an AI-powered operations dashboard where users manage data by describing outcomes instead of hunting for the right buttons. That's the real power of combining TanStack React with a modern LLM like Claude.

Conclusion

The end result of this setup is a small but powerful pattern: expose your data through tools, let the LLM choose which tools to call, and keep the UI focused on a great chat experience. json-server makes it easy to prototype, but you can swap it for real microservices or a production API layer without changing the core architecture.

Key Takeaways:

  • TanStack AI provides a powerful framework for building AI-powered applications
  • Tool definitions with Zod schemas create a clear contract for the LLM
  • Server-Sent Events enable real-time streaming responses
  • Client-side tools extend functionality to browser-specific operations
  • Message persistence improves user experience
  • Natural language interface makes data management more intuitive

Whether you're building a simple demo or a production AI assistant, this pattern provides the foundation you need. The combination of TanStack React, Claude, and tool calling creates a powerful and flexible system for AI-powered data management.


What's your experience with AI-powered interfaces? Share your tips and tricks in the comments below! 🚀


💡 Looking for more details? This is a condensed version of my comprehensive guide. Read the full article on my blog for additional examples, advanced patterns, troubleshooting tips, and more in-depth explanations.

If you found this guide helpful, consider checking out my other articles on React development and AI integration patterns.
