Introduction
Building a real-time collaborative canvas is harder than it looks. The interface seems simple enough: boxes, lines, and cursors moving around. But underneath it is a constant stream of concurrent edits, conflicting updates, and shared state that has to stay consistent for everyone at the same time. The hard part is not drawing on the canvas. It is making sure multiple people can work on the same board simultaneously without things breaking quietly in the background.
In this article, we will build a collaborative whiteboard using Velt that supports shared editing, live presence, comments, and AI-assisted interactions all inside the same canvas. The focus is not just on rendering nodes but on wiring the system in a way that keeps every user in sync without adding friction to the experience.
Here is what the final result looks like.
Now let's look at the stack behind it and how each piece fits together.
The Stack
- Next.js for the app framework
- ReactFlow for the infinite canvas and custom node types
- Velt for CRDT sync, live cursors, presence, comments, and notifications
- MiniMax M2.5 for the AI features
- Tailwind CSS & Shadcn for styling and dark mode
- Zustand for state management
I used the Velt Docs MCP and Skills throughout the build. The MCP server pulls Velt's API references directly into your editor context, and Skills are pre-built prompt templates that tell the agent exactly how to set up features like presence, comments, and notifications. Collaborative features that would have taken a couple of days to wire up by hand ended up taking a few prompts.
For the editor side, I paired with the GitHub Copilot Agent inside VS Code, which cut down the back-and-forth that usually comes with integrating a new SDK. If you haven't tried the Copilot Agent yet, it's worth a look: it takes multi-step actions in the editor rather than just completing code at the cursor, which works well for integration-heavy projects like this one.
How CRDT Makes This Work
The stack above gives you the canvas and the AI, but collaboration only works if multiple users can edit the same board without stepping on each other. This is handled through Velt's CRDT-based sync.
CRDT records every change as an operation rather than a replacement, so edits from different users merge naturally, and the document keeps moving forward. Velt builds on Yjs and manages this layer for you. Wrap the app with VeltProvider, call client.setDocument(), and the shared state starts syncing immediately. Node positions, text, cursors, and comments all stay inside the same document.
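To make the op-based idea concrete, here is a toy sketch of how two replicas can converge by exchanging operations. This is an illustration of the concept only, with invented names; Velt and Yjs use a far more sophisticated and efficient scheme under the hood.

```typescript
// Toy op-based merge: every edit is an operation with a globally unique
// id and a logical clock. Two replicas converge by taking the union of
// their op logs and replaying them in a deterministic order.
// Illustrative only -- not Velt's or Yjs's actual implementation.
type Op = {
  id: string;      // globally unique operation id (e.g. "user1-op3")
  nodeId: string;  // which canvas node the edit targets
  field: string;   // e.g. "text" or "position"
  value: unknown;
  clock: number;   // logical timestamp for deterministic ordering
};

function mergeOps(a: Op[], b: Op[]): Op[] {
  // Union by op id: duplicates collapse, nothing is lost.
  const byId = new Map<string, Op>();
  for (const op of [...a, ...b]) byId.set(op.id, op);
  // Sort by clock, breaking ties by id, so every replica ends up
  // with the same sequence regardless of arrival order.
  return [...byId.values()].sort(
    (x, y) => x.clock - y.clock || x.id.localeCompare(y.id)
  );
}
```

The key property is that mergeOps(a, b) and mergeOps(b, a) produce the same result, which is what lets concurrent edits merge without a central arbiter.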
That sync layer is already running the moment you call setDocument(). Everything else in this build sits on top of it. Before getting into the implementation, here is how to get the project running.
Before You Start
Clone the repo and add your environment variables to get it running locally.
git clone https://github.com/Studio1HQ/claude-velt-mcp-app
cd claude-velt-mcp-app
npm install
Add these keys to your .env:
NEXT_PUBLIC_VELT_API_KEY=your_velt_key
MINIMAX_API_KEY=your_minimax_key
Then run npm run dev and open localhost:3000.
You can grab your Velt API key from the Velt dashboard and your MiniMax key from the MiniMax platform.
The rest of this post walks through how the project is structured and how each piece is built, so you have the full context of what you are looking at.
Setting Up Velt
We did not write any of the setup code manually. While building this project, we used Velt Agent Skills and the Velt MCP server. One prompt handled the entire setup: the agent scanned the project, picked the right provider location, and generated the files.
If you are on Cursor or Claude Code, Velt also has an AI Plugin that bundles the MCP server, skills, rules, and an expert agent in one install, so you can skip even these three steps.
We used VS Code, so we set up the skills and MCP server separately. Here is how:
Step 1: Install Agent Skills
Run this in your project root:
npx skills add velt-js/agent-skills
This pulls four skill sets into your project: setup best practices, comments, CRDT, and notifications. Your agent picks the right one automatically based on what you ask it to do.
Step 2: Add the Velt MCP server to your editor
npx -y @velt-js/mcp-installer
Restart your editor after running this.
Step 3: Start the installation
In your AI agent chat (Copilot, Claude, Cursor, whichever you use), type:
install velt
The agent walks through the setup one step at a time: your project path, API key, which features you want, and where to put VeltProvider. It scans the codebase to detect your auth pattern and document ID source, shows you its findings, and generates an implementation plan before touching anything. You approve the plan, it applies the changes, and then runs a QA pass automatically.
The prompt we used for this project:
Set up Velt collaboration in my Next.js app. I need comments, presence, cursors, and CRDT for ReactFlow. My API key is in .env as NEXT_PUBLIC_VELT_API_KEY.
What would have taken a couple of hours of reading docs came down to that one prompt. The folder structure and code below are exactly what came out of it.
components/
  providers/
    velt-provider-wrapper.tsx - wraps the app with VeltProvider
    velt-authenticator.tsx - identifies the user and sets the document
  whiteboard/
    whiteboard.tsx - main canvas
    nodes/ - StickyNote, TextNode, ShapeNode
  layout/
    header.tsx - where all Velt UI components live
lib/
  ai/
    ai-helpers.ts
    canvas-actions.ts
  store/
    whiteboard-store.ts
There are two separate provider files here. VeltProviderWrapper just wraps the app with VeltProvider and passes the API key. VeltAuthenticator sits inside it and handles the actual user identification and document setup.
// velt-authenticator.tsx
await client.identify({
userId: currentUser.userId,
name: currentUser.name,
email: currentUser.email,
photoUrl: currentUser.photoUrl,
organizationId: currentUser.organizationId,
});
// runs after identify resolves
client.setDocument(documentId);
The order matters. setDocument only runs after isUserIdentified flips to true, which is tracked with a useState flag. If you call setDocument before the user is identified, Velt won't associate the session correctly with the document; that's why the agent generated the calls in this order.
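The ordering can be captured in one small helper. This is a sketch with a simplified, assumed client interface, not the exact authenticator code from the repo:

```typescript
// Simplified shape of the collaboration client for illustration;
// the real Velt client exposes identify() and setDocument() with
// these semantics but a richer API surface.
interface CollabClient {
  identify(user: { userId: string; name: string }): Promise<void>;
  setDocument(documentId: string): void;
}

// Hypothetical helper mirroring velt-authenticator.tsx: setDocument
// is only called once identify has resolved.
async function connectToDocument(
  client: CollabClient,
  user: { userId: string; name: string },
  documentId: string
): Promise<void> {
  await client.identify(user);     // must finish first
  client.setDocument(documentId);  // then the session attaches correctly
}
```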
The demo uses hardcoded users with a dropdown switcher. If you want to plug in real auth, swap the currentUser object in the store with whatever your auth provider returns, and the rest of the setup stays the same.
The Canvas
The canvas layer is built on two packages: @xyflow/react, which handles the canvas, nodes, edges, and interactions, and zustand, which holds the global state shared across the app.
Having everything in a single store makes it much easier to wire up features like AI later, since any part of the app can read or write the same state without prop drilling or extra context layers.
State Management
The store is in lib/store/whiteboard-store.ts. It keeps track of the active tool, selected shape, selected template, a node ID counter, and a few other things. One thing worth noting is how setSelectedTool works. When you switch to "shapes" or "templates", it also flips the corresponding panel open automatically.
setSelectedTool: (tool: ToolType) => {
set({ selectedTool: tool });
if (tool === "shapes") set({ isShapesPanelOpen: true });
if (tool === "templates") set({ isTemplatesPanelOpen: true });
}
The node ID counter is just an incrementing number prefixed with "node-". Simple, but it prevents collisions when multiple nodes are dropped quickly.
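The counter logic can be sketched as a small closure. This mirrors the idea rather than the store's exact code, and the factory name is illustrative:

```typescript
// Minimal counter mirroring the store's getNextNodeId behavior:
// an incrementing number behind a "node-" prefix, so rapid drops
// never collide.
function createNodeIdCounter(prefix = "node-") {
  let counter = 0;
  return function getNextNodeId(): string {
    counter += 1;
    return `${prefix}${counter}`;
  };
}

const getNextNodeId = createNodeIdCounter();
// getNextNodeId() yields "node-1", then "node-2", and so on.
```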
Setting Up ReactFlow
In whiteboard.tsx, you register all custom node types before passing them to <ReactFlow>. This maps a string like "sticky" to the actual React component.
const nodeTypes = useMemo(() => ({
sticky: StickyNote,
text: TextNode,
shape: ShapeNode,
}), []);
<ReactFlow
nodes={nodes}
edges={edges}
nodeTypes={nodeTypes}
onNodesChange={onNodesChange}
onEdgesChange={onEdgesChange}
onConnect={onConnect}
onPaneClick={handlePaneClick}
>
<Background />
<Controls />
</ReactFlow>
onPaneClick is where click-to-place logic sits. When a tool or shape is selected in the store, clicking the canvas converts that canvas coordinate to a flow position and creates a new node there.
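In the component you would call ReactFlow's screenToFlowPosition on the instance; the sketch below shows the underlying transform, assuming a standard pan/zoom viewport, so the click-to-place math is visible:

```typescript
// ReactFlow's viewport is a pan offset plus a zoom factor. Converting
// a screen point to a flow position undoes the pan, then the zoom.
// (In app code you'd use reactFlowInstance.screenToFlowPosition instead.)
interface Viewport {
  x: number;    // pan offset x
  y: number;    // pan offset y
  zoom: number; // zoom factor
}

function screenToFlow(
  screen: { x: number; y: number },
  viewport: Viewport
): { x: number; y: number } {
  return {
    x: (screen.x - viewport.x) / viewport.zoom,
    y: (screen.y - viewport.y) / viewport.zoom,
  };
}
```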
Building a Node
StickyNote.tsx is a good starting point for understanding how all nodes work. Every node receives data, selected, and id as props via NodeProps. The component keeps its own local state for text, color, and isEditing. On double-click, it switches to a textarea, and on blur, it commits back.
function StickyNote({ data, selected, id }: NodeProps) {
const [isEditing, setIsEditing] = useState(false);
return (
<>
<NodeResizeControl>...</NodeResizeControl>
<NodeToolbar>{/* color picker */}</NodeToolbar>
<Handle type="source" position={Position.Right} />
{isEditing ? <textarea /> : <div>{text}</div>}
</>
);
}
export default memo(StickyNote);
Notice memo() at the bottom. Without it, every canvas interaction re-renders all nodes; ReactFlow recommends memoizing any non-trivial node. The Handle components on all four sides are what allow edges to connect from any direction. ShapeNode follows the same pattern but uses a switch on shapeType to render different SVG shapes: rectangles, circles, diamonds, and so on.
Templates
Templates are in lib/constants/templates.ts as a plain array of TemplateType objects. Each template has an id, name, description, and a nodes array. That nodes array is just ReactFlow node definitions with preset positions, types, colors, and labels. No runtime logic.
{
id: "kanban",
name: "Kanban Board",
nodes: [
{ type: "text", position: { x: 50, y: 50 }, data: { text: "Backlog" }, style: { width: 250 } },
{ type: "text", position: { x: 330, y: 50 }, data: { text: "Up next" }, style: { width: 250 } },
// ...
]
}
When you click a template in the sidebar, it sets selectedTemplate in the store. The next click on the canvas calls getNextNodeId() for each node in the template, offsets all positions relative to where you clicked, and adds them to the ReactFlow nodes array in one go. The template disappears from the selection state immediately after.
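The placement step can be sketched as a pure function. Names and exact types here are illustrative, and getNextNodeId is assumed to come from the store:

```typescript
interface TemplateNode {
  type: string;
  position: { x: number; y: number };
  data: Record<string, unknown>;
}

// Sketch of template instantiation: every template node gets a fresh
// id, and its preset position is offset by the clicked canvas point,
// so the whole layout drops in relative to the click.
function instantiateTemplate(
  templateNodes: TemplateNode[],
  clickPos: { x: number; y: number },
  getNextNodeId: () => string
) {
  return templateNodes.map((n) => ({
    ...n,
    id: getNextNodeId(),
    position: {
      x: clickPos.x + n.position.x,
      y: clickPos.y + n.position.y,
    },
  }));
}
```

Because the function only maps over its input, the resulting array can be appended to the ReactFlow nodes array in a single setNodes call.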
To add your own template, you define the layout once by hand and commit it to the array. If the layout is repetitive (like the brainstorm grid), you can use Array.from with a generator to compute positions instead of writing them all out. The existing templates show both approaches.
With the canvas layer in place, the next piece is layering Velt on top of it for real-time collaboration.
Real-Time Collaboration with Velt
Once VeltProvider wraps the app and the document is set, the collaboration layer is essentially ready. The only thing left is placing the right components in the right spots.
Cursors and Presence
VeltCursor and VeltPresence are both mounted inside VeltProviderWrapper in velt-provider-wrapper.tsx, sitting alongside the app children. This means they are active across the entire canvas, not scoped to a specific component. Every connected user gets a labeled cursor that moves in real time, and their avatar shows up in the "Online" section of the header.
The Header Tools
All four collaboration controls sit in TopBar.tsx. The component imports VeltCommentTool, VeltPresence, VeltNotificationsTool, and VeltSidebarButton from @veltdev/react and drops them into the header alongside the user switcher dropdown.
import {
VeltPresence,
VeltNotificationsTool,
VeltCommentTool,
VeltSidebarButton,
} from "@veltdev/react";
// Inside the header JSX:
<VeltCommentTool />
<VeltSidebarButton />
<VeltNotificationsTool />
<VeltPresence />
VeltCommentTool activates the comment pin mode so users can click anywhere on the canvas to leave a comment. VeltSidebarButton toggles a panel that lists every comment on the document in one place. VeltNotificationsTool shows a bell icon with a badge that updates when someone mentions you or replies to your thread. VeltPresence renders the avatar stack at the right side of the header.
Comments
The comment editor Velt ships uses SlateJS under the hood, so it supports rich text out of the box, including @mentions. When a user clicks anywhere on the canvas with comment mode active, a pin drops at that position, and a comment thread opens. The pin stays anchored there, visible to everyone on the document.
Dark Mode
Velt's components use Shadow DOM by default, which means your global CSS won't reach them. To apply custom theming, you first disable Shadow DOM on the components you want to style:
<VeltComments shadowDom={false} />
<VeltCommentsSidebar shadowDom={false} />
Then you use Velt's theme playground to generate your CSS token set. It gives you a full palette for both light and dark mode that you can copy directly. Paste it inside the body selector in globals.css, and it covers both themes automatically.
body {
/* Border Radius */
--velt-border-radius-xs: 0.5rem;
--velt-border-radius-sm: 1rem;
/* Light Mode */
--velt-light-mode-accent: #e31646;
--velt-light-mode-background-0: #ffffff;
--velt-light-mode-text-0: #0a0a0a;
/* Dark Mode */
--velt-dark-mode-accent: #e31646;
--velt-dark-mode-background-0: #000000;
--velt-dark-mode-text-0: #ffffff;
/* and so on... */
}
The theme playground is the fastest way to get this right. You pick your accent color, adjust the radius and spacing, and it generates the full token set.
AI on the Canvas
With the canvas rendering and state managed globally, the last layer is AI.
To add AI to the canvas, you need three things:
- an API route that calls the model,
- a set of structured action types that the model can return, and
- a function that converts those actions into ReactFlow nodes.
The AI sidebar in this project connects all three.
The sidebar has four modes.
- Ask AI for free-form chat,
- Brainstorm to generate ideas as sticky notes,
- Summarize to get a read of what's on the board, and
- A Template namer that looks at the current layout and suggests what to call it.
You type a prompt, it reads the live canvas state, and either responds with text or drops new nodes directly onto the canvas.
Add Model
The model powering this is MiniMax M2.5, a large-context model that's fast and works well for structured-output tasks like this one. What made it easy to drop in is that it exposes an Anthropic-compatible API: you get the API key from MiniMax's platform, point the Anthropic SDK at their base URL, and swap the model name. No new SDK, no adapter layer.
const client = new Anthropic({
apiKey: process.env.MINIMAX_API_KEY,
baseURL: "https://api.minimax.io/anthropic",
});
const message = await client.messages.create({
model: "MiniMax-M2.5",
max_tokens: 2000,
system: CANVAS_SYSTEM_PROMPT + contextSummary,
messages: [{ role: "user", content: prompt }],
});
The context window is large enough that you can serialize the entire node list and send it with every request. The current canvas state gets appended to the system prompt as a JSON summary of all nodes, their IDs, types, text, and colors, so the model always knows what's already on the board before it responds.
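A sketch of that serialization step might look like this. The field names and helper name are assumptions for illustration; the repo's ai-helpers.ts may shape the summary differently:

```typescript
interface CanvasNode {
  id: string;
  type: string;
  data: { text?: string; color?: string };
}

// Hypothetical helper: condense the live node list into a compact JSON
// summary and append it to the system prompt, so the model knows what
// is already on the board before it responds.
function buildContextSummary(nodes: CanvasNode[]): string {
  const summary = nodes.map((n) => ({
    id: n.id,
    type: n.type,
    text: n.data.text ?? "",
    color: n.data.color ?? "",
  }));
  return `\n\nCurrent canvas state:\n${JSON.stringify(summary)}`;
}
```

Sending the full summary on every request is viable precisely because the model's context window is large; a smaller-context model would need truncation or retrieval.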
MiniMax M2.5 in the Editor
Since the same API is Anthropic-compatible, you can also wire it into GitHub Copilot Chat inside VS Code, not just inside the project. During the build, we added MiniMax M2.5 as the active model in Copilot and used it to ask questions and build features directly from the editor.
It works the same way as any other model in Copilot Chat, except it is your own key and your own model.
Canvas Actions
Back to the app, the model needs to do more than just respond with text. It needs to place nodes, update colors, and interact with the canvas in a predictable way. Rather than letting the model return free-form text and trying to parse intent from it, the system prompt instructs it to always return structured JSON with two fields: message and actions. The actions array is a typed list of operations the model wants to perform on the canvas.
{
message: "Added 5 brainstorming sticky notes",
actions: [
{
type: "add_sticky",
items: [
{ text: "Reduce onboarding steps", color: "#fef08a" },
{ text: "Add social login", color: "#fef08a" },
]
}
]
}
buildNodesFromActions() in canvas-actions.ts takes that actions array plus a canvas anchor point and converts it into ReactFlow nodes, laid out in a grid starting from that position. If the action is update_color, applyColorUpdates() maps over existing nodes and patches their color in place. Both functions are pure: they take nodes in and return nodes out, with no side effects.
The AI does not directly touch the ReactFlow state. It just returns data. The sidebar calls buildNodesFromActions, gets the result, and passes it to setNodes. That separation keeps the AI logic completely independent of how the canvas renders.
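The conversion for an add_sticky action can be sketched as follows. The function name, column count, and spacing values are assumptions, not the exact constants from canvas-actions.ts:

```typescript
interface StickyItem {
  text: string;
  color: string;
}

interface FlowNode {
  id: string;
  type: string;
  position: { x: number; y: number };
  data: { text: string; color: string };
}

// Sketch of the actions-to-nodes step: lay the model's items out in a
// grid starting from an anchor point on the canvas. Pure function:
// data in, nodes out, no canvas state touched.
function stickiesFromAction(
  items: StickyItem[],
  anchor: { x: number; y: number },
  getNextNodeId: () => string,
  cols = 3,
  spacing = { x: 220, y: 180 }
): FlowNode[] {
  return items.map((item, i) => ({
    id: getNextNodeId(),
    type: "sticky",
    position: {
      x: anchor.x + (i % cols) * spacing.x,          // column within the row
      y: anchor.y + Math.floor(i / cols) * spacing.y, // wrap to a new row
    },
    data: { text: item.text, color: item.color },
  }));
}
```

The sidebar would then pass this result straight to setNodes, which is the only place the canvas state actually changes.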
With everything wired up, run npm run dev and open localhost:3000. Velt takes a second to initialize on the first load; after that, the canvas is live.
Switch between the two demo users using the dropdown in the header to simulate two people on the same board. Both cursors show up, comments are shared, and the AI sidebar works independently from either user session.
Demo
You can check out the deployed version of the app here: https://mural-velt.vercel.app/
Bottom Line
This article covered building a collaborative whiteboard from the canvas layer up, real-time sync with Velt's CRDT, live cursors and comments, and an AI sidebar that reads and writes to the board. The pieces are modular enough that extending it is mostly additive.
A few things worth building on top of this:
- Lottie reactions on nodes so collaborators can leave emoji responses without opening a comment thread
- Export to PNG or PDF so teams can take the board outside the browser
- More node types like embeds, images, or code blocks
The CRDT layer is already there, so most of these would be new node types or toolbar additions rather than architectural changes.
- GitHub repo
- README
- Velt Docs MCP, worth setting up in your IDE before you start a Velt integration
- Velt Skills