<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Studio1</title>
    <description>The latest articles on DEV Community by Studio1 (@studio1hq).</description>
    <link>https://dev.to/studio1hq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F9405%2Ff91309c4-f670-4501-9882-79e1e70e2e96.png</url>
      <title>DEV Community: Studio1</title>
      <link>https://dev.to/studio1hq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/studio1hq"/>
    <language>en</language>
    <item>
      <title>Building a Zulip Style Collaborative Chat App with Next.js and Velt</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Tue, 05 May 2026 17:43:33 +0000</pubDate>
      <link>https://dev.to/studio1hq/building-a-zulip-style-collaborative-chat-app-with-nextjs-and-velt-2mcp</link>
      <guid>https://dev.to/studio1hq/building-a-zulip-style-collaborative-chat-app-with-nextjs-and-velt-2mcp</guid>
      <description>&lt;p&gt;Zulip is known for keeping conversations organized. Topic-based threads, clear context, and async-friendly discussions make it a favorite for technical and distributed teams. Unlike traditional chat apps, conversations in Zulip stay readable even as teams scale.&lt;/p&gt;

&lt;p&gt;Recreating this experience is harder than it looks. Real-time messaging, user presence, inline comments, and notifications usually require a complex backend and real-time infrastructure.&lt;/p&gt;

&lt;p&gt;In this tutorial, we will build a Zulip-style collaborative chat application using Next.js, Tailwind CSS, and Velt. Next.js powers the UI and application structure, while Velt adds collaboration features like presence, comments, and notifications without writing backend code.&lt;/p&gt;

&lt;p&gt;By the end, you will have a working chat interface inspired by Zulip, with real-time collaboration built in and ready to extend.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Zulip’s Approach Works&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://zulip.com/" rel="noopener noreferrer"&gt;Zulip’s design&lt;/a&gt; solves a problem most chat tools struggle with: conversations getting messy over time.&lt;/p&gt;

&lt;p&gt;Instead of long, linear message streams, Zulip organizes discussions into topics. Each message belongs to a clear context, so conversations stay focused and easy to follow even days or weeks later.&lt;/p&gt;
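&lt;p&gt;To make this concrete, here is a minimal sketch (not taken from the app’s code; all names are illustrative) of how topic-scoped messages could be modeled and grouped so each thread reads as one focused conversation:&lt;/p&gt;

```typescript
// Hypothetical data model: every message belongs to a topic,
// and every topic belongs to a channel.
type Message = { id: string; topicId: string; author: string; text: string };

// Group a flat message list by topic, the way Zulip presents it.
function groupByTopic(messages: Message[]): { [topicId: string]: Message[] } {
  const byTopic: { [topicId: string]: Message[] } = {};
  for (const m of messages) {
    (byTopic[m.topicId] = byTopic[m.topicId] ?? []).push(m);
  }
  return byTopic;
}
```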

&lt;p&gt;Three ideas make this work especially well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time collaboration:&lt;/strong&gt; Messages, comments, and updates appear instantly for everyone in the channel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context-rich discussions:&lt;/strong&gt; Replies stay tied to a specific topic or message, so feedback does not get lost in the noise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visible presence:&lt;/strong&gt; You always know who is online and actively participating, which makes collaboration feel immediate and shared.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes this challenging to build in a SaaS product is not the UI, but the infrastructure behind it. Real-time messaging, presence tracking, comments, and notifications typically require WebSockets, backend event systems, data synchronization, and careful handling of concurrent users. Building and maintaining this reliably can take significant engineering effort.&lt;/p&gt;

&lt;p&gt;In this tutorial, we recreate Zulip’s collaborative behavior without building that infrastructure ourselves, by using Velt as the collaboration layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwuxo9h5r3v74v6r8ilz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwuxo9h5r3v74v6r8ilz.png" alt="Zulip Chat" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What We’re Building With
&lt;/h2&gt;

&lt;p&gt;Before we dive into the code, let’s quickly look at the tools we’ll use and why they fit this project well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js:&lt;/strong&gt; Next.js gives us a solid foundation for building interactive applications. With the App Router, we get a clean layout structure and client-side interactivity that works well for chat-style interfaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React:&lt;/strong&gt; React handles the UI composition and state updates. The chat interface, message list, and user interactions all rely on simple, predictable React patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind CSS:&lt;/strong&gt; Tailwind helps us build a clean and modern UI quickly. It keeps styling close to the components and makes it easy to adjust layouts, spacing, and themes without writing custom CSS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;shadcn/ui (powered by Radix UI):&lt;/strong&gt; These provide accessible, reusable UI primitives like buttons, dropdowns, and avatars. They give us a polished look without locking us into heavy component libraries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zustand:&lt;/strong&gt; Zustand is used for lightweight state management. In this project, it manages demo users and allows us to switch between them to test real-time collaboration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Velt:&lt;/strong&gt; &lt;a href="https://velt.dev/" rel="noopener noreferrer"&gt;Velt&lt;/a&gt; is the key piece. Instead of building real-time infrastructure ourselves, Velt provides presence, comments, notifications, and collaboration out of the box. Once integrated, features like reactions, read status, and threaded comments work automatically without writing extra backend or real-time code.&lt;/li&gt;
&lt;/ul&gt;
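&lt;p&gt;The Zustand piece is small. As a rough sketch of the idea (the real app uses Zustand’s &lt;code&gt;create&lt;/code&gt;; this plain-TypeScript version with hypothetical names only mimics the shape of such a store):&lt;/p&gt;

```typescript
// Plain-TypeScript sketch of a demo-user store, mimicking what the
// app's Zustand store provides. Names here are illustrative.
type User = { userId: string; name: string };

function createUserStore(initial: User) {
  let current = initial;
  const listeners: ((u: User) => void)[] = [];
  return {
    getUser: () => current,
    // Switching the active user lets us test real-time collaboration
    // locally by acting as two different demo users.
    setUser: (next: User) => {
      current = next;
      listeners.forEach((fn) => fn(next));
    },
    subscribe: (fn: (u: User) => void) => { listeners.push(fn); },
  };
}
```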

&lt;p&gt;Together, this stack lets us focus on the chat experience and UI, while Velt handles the collaboration layer behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To follow along with this tutorial, you’ll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js (v18 or later)&lt;/strong&gt; installed on your machine&lt;/li&gt;
&lt;li&gt;Basic familiarity with Next.js, React, and TypeScript&lt;/li&gt;
&lt;li&gt;A Velt account (you can sign up for free at &lt;a href="https://velt.dev/" rel="noopener noreferrer"&gt;velt.dev&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Working knowledge of &lt;strong&gt;Tailwind CSS&lt;/strong&gt; fundamentals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You do not need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prior experience with Velt&lt;/li&gt;
&lt;li&gt;Any backend or database setup&lt;/li&gt;
&lt;li&gt;Experience building real-time systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll walk through the collaboration setup step by step, and the app runs entirely on the frontend.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Project Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Instead of scaffolding a new project from scratch, we’ll start from an existing Zulip-style chat app that already has the UI and collaboration logic wired up. This lets us focus on understanding how the pieces fit together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clone the Repository
&lt;/h3&gt;

&lt;p&gt;Begin by cloning the project and moving into the directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gitclone https://github.com/Studio1HQ/zulip-velt
&lt;span class="nb"&gt;cd &lt;/span&gt;zulip-velt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install Dependencies
&lt;/h3&gt;

&lt;p&gt;Install all required dependencies using npm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install Next.js, Tailwind CSS, shadcn/ui components, Zustand for state management, and the Velt SDK used for collaboration features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the Velt API Key
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://console.velt.dev/" rel="noopener noreferrer"&gt;Velt requires a public API key&lt;/a&gt; to enable collaboration features like presence, comments, and notifications.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;.env.local&lt;/code&gt; file in the root of the project and add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NEXT_PUBLIC_VELT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_velt_api_key_here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can generate this key from the Velt dashboard after creating a free account.&lt;/p&gt;

&lt;p&gt;Once the key is added, restart the development server if it is already running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv90vi7o2ow5ixcqlmzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv90vi7o2ow5ixcqlmzv.png" alt="Velt dashboard" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run the App Locally
&lt;/h3&gt;

&lt;p&gt;Start the development server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your browser and visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see a Zulip-style chat interface with channels on the left, a message area in the center, and collaboration controls in the header.&lt;/p&gt;

&lt;p&gt;At this point, the UI is already functional. In the next sections, we’ll break down how the app is structured and how Velt is integrated to power real-time collaboration without any backend setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsas11ltx3o5lolidq3m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsas11ltx3o5lolidq3m.png" alt="Zulip chat2" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;At a high level, the project is divided into five main parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;app&lt;/strong&gt;: Handles routing, layouts, and global configuration using the Next.js App Router&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;components&lt;/strong&gt;: Contains all reusable UI and chat-related components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helper&lt;/strong&gt;: Manages demo user data and user switching logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hooks&lt;/strong&gt;: Stores custom React hooks such as theme management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;lib&lt;/strong&gt;: Holds small utility functions used across the app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s take a closer look at the most important folders we’ll work with in this tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding the App Router and Layout Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This project uses the Next.js App Router, which organizes the application using layouts and pages instead of traditional route files. If you are new to the App Router, don’t worry. We’ll focus only on what matters for this app.&lt;/p&gt;

&lt;p&gt;The key idea is simple: layouts wrap pages, and this is where we place Velt and theme handling.&lt;/p&gt;

&lt;p&gt;In this project, there are two layout files, each with a different responsibility.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;layout.tsx&lt;/code&gt; is the root layout for the entire application. It defines the global shell and wraps every page with the &lt;code&gt;VeltProvider&lt;/code&gt; and &lt;code&gt;ThemeProvider&lt;/code&gt;, so collaboration features and theming are available throughout the app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ThemeProvider&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@/hooks/use-theme&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VeltProvider&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@veltdev/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ReactNode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltProvider&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NEXT_PUBLIC_VELT_ID&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ThemeProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;ThemeProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;VeltProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;page.tsx&lt;/code&gt; - This file renders the main chat interface. You’ll see it importing layout and chat components and placing them on the page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Header&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/components/layout/header&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Sidebar&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/components/layout/sidebar&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ChatArea&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@/components/chat/chat-area&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Home&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setIsSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setIsSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setIsMobile&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;checkMobile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mobile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;innerWidth&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;768&lt;/span&gt;
      &lt;span class="nf"&gt;setIsMobile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setIsSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;setIsSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;checkMobile&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;checkMobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;checkMobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toggleSidebar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setIsSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setIsSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setIsSidebarCollapsed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setIsSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;closeSidebar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setIsSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"h-screen flex flex-col bg-background"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Header&lt;/span&gt; 
        &lt;span class="na"&gt;onToggleSidebar&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;toggleSidebar&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"flex-1 flex overflow-hidden"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Sidebar&lt;/span&gt; 
          &lt;span class="na"&gt;isOpen&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;isCollapsed&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isSidebarCollapsed&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="na"&gt;onClose&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;closeSidebar&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ChatArea&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app/layout.tsx&lt;/code&gt; is the global shell&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app/(app)/layout.tsx&lt;/code&gt; is where Velt is initialized&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;page.tsx&lt;/code&gt; is where the chat UI begins&lt;/li&gt;
&lt;li&gt;Providers are placed at layout level so all components can access them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Building the Chat Interface with Components&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most of the Zulip-style experience in this app is built using reusable React components. Each part of the interface has a clear responsibility, which makes the code easier to read and extend.&lt;/p&gt;

&lt;p&gt;We’ll focus on three areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The chat area where messages appear&lt;/li&gt;
&lt;li&gt;The message component itself&lt;/li&gt;
&lt;li&gt;The layout components that shape the overall UI&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Chat Area and Messages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;chat-area.tsx&lt;/code&gt; - This component is responsible for rendering the list of messages and attaching collaboration features to them.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;message.tsx&lt;/code&gt; - Each message in the chat is rendered as its own component. This keeps the UI modular and makes it easy to attach collaboration features at the message level.&lt;/p&gt;
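&lt;p&gt;One small pattern worth noting: attaching collaboration at the message level requires each rendered message to expose a stable identifier that the collaboration layer can target. A hypothetical helper (not from the repo) for deriving such an anchor id might look like:&lt;/p&gt;

```typescript
// Hypothetical helper: derive a stable, DOM-safe anchor id for a
// message so per-message collaboration features can target it.
function messageAnchorId(channelId: string, messageId: string): string {
  // Keep only characters that are safe inside an HTML id attribute.
  const clean = (s: string) => s.replace(/[^a-zA-Z0-9_-]/g, "-");
  return "msg-" + clean(channelId) + "-" + clean(messageId);
}
```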

&lt;h3&gt;
  
  
  &lt;strong&gt;Layout Components&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The layout of the app is built using two main components: a header and a sidebar.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;sidebar&lt;/strong&gt; represents Zulip-style channels and navigation.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;header&lt;/strong&gt; is where collaboration controls live.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inside the header, you’ll see Velt components like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltPresence&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltNotificationsTool&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltCommentsSidebar&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltSidebarButton&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What these do:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;VeltPresence&lt;/code&gt; shows who is currently online&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VeltNotificationsTool&lt;/code&gt; displays collaboration notifications&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VeltCommentsSidebar&lt;/code&gt; opens a panel with all comments&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VeltSidebarButton&lt;/code&gt; toggles the comments sidebar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these work automatically once Velt is initialized at the layout level.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;VeltPresence&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;VeltNotificationsTool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;VeltSidebarButton&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;VeltCommentsSidebar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;useVeltClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@veltdev/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;names&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userIds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useUserStore&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@/helper/userdb&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;HeaderProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;onToggleSidebar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Header&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;onToggleSidebar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;isSidebarOpen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;isMobile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;HeaderProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;theme&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useTheme&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setUser&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useUserStore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useVeltClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prevUserRef&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useRef&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isInitializingRef&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useRef&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Prevent overlapping initialization calls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8ppbpvxlz56kxhsmhl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8ppbpvxlz56kxhsmhl6.png" alt="Zulip dashboard" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Client Initialization, User Identification, and Document Setting&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt; &lt;span class="c1"&gt;// Handle Velt client initialization, user identification, and document setting&lt;/span&gt;
  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;isInitializingRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Velt init skipped:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;initializing&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;isInitializingRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;initializeVelt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;isInitializingRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Detect user switch&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isUserSwitch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;prevUserRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;uid&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;prevUserRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Starting Velt init for user:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isUserSwitch&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// Re-identify the user (handles initial and switches)&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;veltUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;organizationId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;organization_id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;displayName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;photoUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;photoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;identify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;veltUser&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Velt user identified:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;veltUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setDocuments&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zulip-velt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;documentName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zulip-velt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Velt documents set: zulip-velt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error initializing Velt:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;isInitializingRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;initializeVelt&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt; &lt;span class="c1"&gt;// Re-run on client or user change&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Reusable UI Components&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To keep the chat interface clean and consistent, this project uses a set of reusable UI components. These components handle common UI patterns like buttons, inputs, avatars, and scroll areas.&lt;/p&gt;

&lt;p&gt;Instead of building these from scratch, the project uses &lt;strong&gt;shadcn/ui&lt;/strong&gt;, a collection of accessible components built on top of unstyled Radix UI primitives. This approach gives us flexibility while keeping the UI consistent.&lt;/p&gt;

&lt;p&gt;All reusable UI components live inside the &lt;code&gt;components/ui&lt;/code&gt; folder.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reusable UI primitives&lt;/strong&gt;: Shared building blocks like buttons, inputs, avatars, and dropdowns used across the app to keep styling consistent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built with shadcn/ui (powered by Radix UI)&lt;/strong&gt;: Accessible components whose unstyled Radix primitives provide behavior and keyboard support without locking in the design.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scroll handling for chat&lt;/strong&gt;: The &lt;code&gt;ScrollArea&lt;/code&gt; component manages smooth scrolling for long message lists in the chat view.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separation of concerns&lt;/strong&gt;: UI components handle appearance and interaction, while chat and layout components focus on structure and logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy to extend and customize&lt;/strong&gt;: Updating a UI component in this folder automatically updates its usage across the entire app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv7m3tkii8oqsvnj9w82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv7m3tkii8oqsvnj9w82.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;User Management for Collaboration Testing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To test real-time collaboration features like presence, comments, and notifications, we need a way to simulate multiple users. Instead of setting up a full authentication system, this project uses a simple and beginner-friendly approach with predefined users.&lt;/p&gt;

&lt;p&gt;User state is managed using &lt;a href="https://zustand.docs.pmnd.rs/" rel="noopener noreferrer"&gt;Zustand&lt;/a&gt;, a lightweight state management library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zustand&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;persist&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zustand/middleware&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;displayName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;photoUrl&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;UserStore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;setUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user001&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user002&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Nany&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Mary&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useUserStore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UserStore&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;()(&lt;/span&gt;
  &lt;span class="nf"&gt;persist&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;setUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user-storage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Theme Management with &lt;code&gt;useTheme&lt;/code&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Modern apps usually support both light and dark themes, and this project follows the same pattern. Theme state is managed using a custom React hook, which keeps the logic reusable and easy to maintain.&lt;/p&gt;

&lt;p&gt;The theme logic lives inside a single file.&lt;/p&gt;

&lt;p&gt;The theme is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stored in React state&lt;/li&gt;
&lt;li&gt;Loaded from &lt;code&gt;localStorage&lt;/code&gt; on page load&lt;/li&gt;
&lt;li&gt;Saved back to &lt;code&gt;localStorage&lt;/code&gt; when changed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures the user’s theme preference persists across page refreshes.&lt;/p&gt;
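&lt;p&gt;The load/save behavior described above can be sketched as two small helpers. This is a simplified, hypothetical version: the storage key &lt;code&gt;"theme"&lt;/code&gt; and the &lt;code&gt;"light"&lt;/code&gt; fallback are assumptions, not taken from the project code.&lt;/p&gt;

```typescript
// Hypothetical sketch of the theme persistence described above.
// The storage key "theme" and the "light" fallback are assumptions.
type Theme = "light" | "dark";

// A minimal storage interface; injecting it (instead of touching
// window.localStorage directly) keeps the logic safe during
// server-side rendering, where localStorage is unavailable.
interface ThemeStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export function loadTheme(storage: ThemeStorage | null): Theme {
  const saved = storage?.getItem("theme");
  return saved === "dark" ? "dark" : "light";
}

export function saveTheme(storage: ThemeStorage, theme: Theme): void {
  storage.setItem("theme", theme);
}
```

&lt;p&gt;Inside the real hook, these helpers would run in a &lt;code&gt;useEffect&lt;/code&gt; on mount (load) and whenever the theme state changes (save).&lt;/p&gt;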

&lt;p&gt;&lt;strong&gt;Syncing Theme with Velt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The theme value is also passed to Velt components, allowing them to automatically switch between light and dark modes.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The app UI and Velt UI stay visually consistent&lt;/li&gt;
&lt;li&gt;No extra styling logic is required for collaboration components&lt;/li&gt;
&lt;li&gt;Once the hook is set up, theme changes propagate everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Utility Functions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;lib&lt;/code&gt; folder contains small helper utilities that are shared across the application. In this project, it mainly exists to support reusable UI components and keep common logic out of individual files. The utilities in this file are primarily related to class name handling. When working with Tailwind CSS and reusable components, it’s common to conditionally apply multiple class names.&lt;/p&gt;
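&lt;p&gt;As a rough illustration, the core of such a class name helper boils down to joining truthy values. Projects scaffolded with shadcn/ui typically combine &lt;code&gt;clsx&lt;/code&gt; with &lt;code&gt;tailwind-merge&lt;/code&gt; for this; the simplified &lt;code&gt;cn&lt;/code&gt; below is a sketch of the idea, not the project's actual implementation.&lt;/p&gt;

```typescript
// Simplified sketch of a class name helper. It omits the Tailwind
// conflict resolution (e.g. "px-2" vs "px-4") that tailwind-merge
// would normally handle.
type ClassValue = string | false | null | undefined;

export function cn(...inputs: ClassValue[]): string {
  // Falsy entries (from conditionals like `isActive && "bg-blue-500"`)
  // are dropped, so call sites stay readable.
  return inputs.filter(Boolean).join(" ");
}
```

&lt;p&gt;A typical call site looks like &lt;code&gt;cn("px-4", isActive &amp;&amp; "bg-blue-500")&lt;/code&gt;, which yields only the classes whose conditions hold.&lt;/p&gt;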

&lt;h2&gt;
  
  
  &lt;strong&gt;Running the App and Testing the Output&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we’ve walked through the project structure and key files, it’s time to run the app and verify that everything works as expected.&lt;/p&gt;

&lt;p&gt;Seeing the app in action makes the collaboration features easier to understand in the next sections.&lt;/p&gt;

&lt;p&gt;From the project root, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the server starts, open your browser and navigate to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a Zulip-style chat interface with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sidebar for channels&lt;/li&gt;
&lt;li&gt;A header with user controls&lt;/li&gt;
&lt;li&gt;A central chat area for messages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To test collaboration features, open the app in two browser windows, or in one normal window and one incognito window.&lt;/p&gt;

&lt;p&gt;Use the user switcher in the header to switch between demo users such as Nany and Mary.&lt;/p&gt;

&lt;p&gt;Each window should represent a different user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08w6t0lx9imn4xpn2j6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08w6t0lx9imn4xpn2j6u.png" alt="Chat Window" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Presence:&lt;/strong&gt; Notice user avatars appear in the header when multiple users are online.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comments:&lt;/strong&gt; Hover over a message and add a comment. The comment should appear instantly in both windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications:&lt;/strong&gt; Add a comment in one window and check the notification bell in the other.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Theme Switching:&lt;/strong&gt; Toggle between light and dark mode and observe that both the app and Velt components update correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How It Works&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a simple view of what happens when the app runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The app loads and initializes Velt using the API key in the layout&lt;/li&gt;
&lt;li&gt;A demo user is selected using the Zustand store&lt;/li&gt;
&lt;li&gt;Velt identifies the active user and associates them with the app&lt;/li&gt;
&lt;li&gt;A shared document context is set for collaboration&lt;/li&gt;
&lt;li&gt;Messages render in the chat area&lt;/li&gt;
&lt;li&gt;Comments, presence, and notifications are automatically tracked by Velt&lt;/li&gt;
&lt;li&gt;Updates sync instantly across all connected clients&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key idea is that the app focuses on UI and user flow, while Velt handles real-time synchronization and collaboration logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Collaborative Features You Get Automatically&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once Velt is integrated, collaboration features start working without writing additional backend or real-time code.&lt;/p&gt;

&lt;p&gt;These features are available out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inline comments:&lt;/strong&gt; Add comments directly on chat messages and view them in context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User presence:&lt;/strong&gt; See who is currently online and active in the chat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications:&lt;/strong&gt; Get notified when someone comments or interacts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reactions and comment status:&lt;/strong&gt; React to comments and mark them as resolved or active.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read awareness:&lt;/strong&gt; See when comments have been viewed by other users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these features work automatically once Velt is set up.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Things to Note Before Shipping to Production&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before using this setup in a real product, keep the following in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace demo users with a real authentication system&lt;/li&gt;
&lt;li&gt;Identify users using real user IDs and metadata&lt;/li&gt;
&lt;li&gt;Use dynamic document IDs instead of a hardcoded value&lt;/li&gt;
&lt;li&gt;Add permissions to control who can view or comment&lt;/li&gt;
&lt;li&gt;Handle error states when collaboration services are unavailable&lt;/li&gt;
&lt;li&gt;Review Velt customization options to match your product UI&lt;/li&gt;
&lt;li&gt;Test performance with multiple concurrent users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These steps ensure the app scales safely beyond a demo environment.&lt;/p&gt;
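&lt;p&gt;For instance, a production version might derive one Velt document per channel instead of the single hardcoded &lt;code&gt;zulip-velt&lt;/code&gt; ID used in the demo. The naming scheme below is a hypothetical sketch:&lt;/p&gt;

```typescript
// Hypothetical sketch: derive a per-channel Velt document ID rather
// than the hardcoded "zulip-velt" used in the demo code.
export function channelDocumentId(workspaceId: string, channelId: string): string {
  // Scope the ID by workspace so channels with the same name in
  // different workspaces never share a document (and thus never
  // share comments or notifications).
  return `workspace-${workspaceId}-channel-${channelId}`;
}

// Usage with the client from useVeltClient, mirroring the demo's
// setDocuments call:
// await client.setDocuments([
//   {
//     id: channelDocumentId("acme", "general"),
//     metadata: { documentName: "general" },
//   },
// ]);
```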

&lt;h2&gt;
  
  
  &lt;strong&gt;Demo Video&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can see the completed Zulip-style chat app with real-time collaboration in action here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live Demo:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zulip-velt-chat.vercel.app/" rel="noopener noreferrer"&gt;https://zulip-velt-chat.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/ppyzsybdw2w"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This demo shows user switching, comments, presence, notifications, and theme support working together.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Zulip demonstrates how powerful chat can be when collaboration is built into the experience. Organized conversations, clear context, and real-time awareness turn chat from simple messaging into a system teams can rely on.&lt;/p&gt;

&lt;p&gt;For builders, the real challenge is not the UI. It is the collaboration layer behind it. Features like real-time sync, presence tracking, inline comments, notifications, and reactions usually require complex infrastructure and long development cycles. This is especially true when building a chat-based SaaS product.&lt;/p&gt;

&lt;p&gt;In this tutorial, we focused on that exact problem. Instead of rebuilding collaboration from scratch, we showed how to add Zulip-style collaborative features inside a chat application using Next.js and a modern frontend stack, while delegating the hard real-time problems to Velt.&lt;/p&gt;

&lt;p&gt;The result is a working chat app where collaboration feels native, but the code stays simple and frontend-focused.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean component structure makes complex apps easier to understand&lt;/li&gt;
&lt;li&gt;Collaboration does not need custom real-time infrastructure&lt;/li&gt;
&lt;li&gt;Velt enables comments, presence, notifications, and more out of the box&lt;/li&gt;
&lt;li&gt;Frontend-focused development speeds up experimentation and iteration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building a chat app, internal tool, or collaborative product, Velt lets you focus on your core experience instead of infrastructure.&lt;/p&gt;

&lt;p&gt;Try building with &lt;a href="https://velt.dev/" rel="noopener noreferrer"&gt;Velt&lt;/a&gt; and add collaboration to your app in minutes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Scaling Karpathy’s AutoResearch Using Nebius Token Factory</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Sun, 03 May 2026 13:50:44 +0000</pubDate>
      <link>https://dev.to/studio1hq/scaling-karpathys-autoresearch-using-nebius-token-factory-81a</link>
      <guid>https://dev.to/studio1hq/scaling-karpathys-autoresearch-using-nebius-token-factory-81a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We ran about 250 prompt optimization experiments overnight using a small AI agent loop. The idea was simple: let an AI system propose an experiment, run it, evaluate the result, and then try again with a better idea. Instead of manually testing prompts one by one, the system keeps improving its own attempts over multiple iterations.&lt;/p&gt;

&lt;p&gt;This idea comes from &lt;a href="https://github.com/karpathy/autoresearch" rel="noopener noreferrer"&gt;Andrej Karpathy’s AutoResearch&lt;/a&gt;, where an AI agent can automate the typical machine learning research cycle. In a normal workflow, researchers adjust parameters, run experiments, observe the results, and repeat the process many times before reaching a good configuration. AutoResearch shows that this repetitive process can be handled by an intelligent agent.&lt;/p&gt;

&lt;p&gt;In this article, we will walk through how we built a cloud-native AutoResearch loop using &lt;a href="https://tokenfactory.nebius.com/" rel="noopener noreferrer"&gt;Nebius Token Factory for LLM inference&lt;/a&gt;, allowing the agent to run hundreds of experiments automatically while keeping structured records of every attempt. &lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls in the Original AutoResearch Implementation
&lt;/h2&gt;

&lt;p&gt;Karpathy’s AutoResearch project is a very interesting idea and a great starting point to understand how AI agents can run experiments automatically. &lt;/p&gt;

&lt;p&gt;The repository mainly consists of simple files such as &lt;strong&gt;&lt;code&gt;program.md&lt;/code&gt;&lt;/strong&gt; (which defines the research goal), &lt;strong&gt;&lt;code&gt;train.py&lt;/code&gt;&lt;/strong&gt; (which runs the experiment), and a lightweight results log that records experiment outcomes. The agent reads the goal, modifies the experiment code, runs it, and stores the results, demonstrating how an AI system can iterate through experiments automatically.&lt;/p&gt;

&lt;p&gt;It was built as a research prototype to demonstrate the concept, not as a full system for running large-scale experiments in real workflows. &lt;/p&gt;

&lt;p&gt;As a result, a few limitations arise when we try to scale the idea.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Experiment tracking using TSV files:&lt;/strong&gt; In the original implementation, experiment results are stored in a simple TSV (tab-separated values) file. This file usually contains basic fields like experiment ID, score, and parameters. While this is easy to implement, it becomes difficult to manage as the number of experiments grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited structure for experiment metadata:&lt;/strong&gt; A flat TSV file does not provide structured storage for experiment details such as prompts, responses, timestamps, or reasoning steps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local model dependency:&lt;/strong&gt; The original workflow usually depends on running models locally. This means developers often need access to local GPUs or preconfigured environments to run inference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Difficulty scaling experiment loops:&lt;/strong&gt; Because of the local infrastructure and simple logging system, running hundreds of experiments in a controlled way becomes harder.&lt;/li&gt;
&lt;/ul&gt;
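&lt;p&gt;To make the first two limitations concrete, compare a flat TSV row with a structured JSON record for the same experiment. This is a minimal illustrative sketch; the field names are not taken from the original repository.&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

# One experiment outcome, as the agent would record it.
experiment = {
    "id": 42,
    "prompt": "Explain vector databases to a new engineer.",
    "response": "A vector database stores embeddings...",
    "score": 7.5,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Flat TSV row: order-dependent, no nesting, fragile if fields change.
tsv_row = "\t".join(str(experiment[k]) for k in ("id", "score", "prompt"))

# Structured JSON record: self-describing and easy to extend.
json_record = json.dumps(experiment, indent=2)

print(tsv_row)
print(json_record)
```

The TSV row silently depends on column order, while the JSON record carries its own field names, which is what makes it easier to analyze later.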

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-2030371219518931079-421" src="https://platform.twitter.com/embed/Tweet.html?id=2030371219518931079"&gt;
&lt;/iframe&gt;

  // Detect dark theme
  var iframe = document.getElementById('tweet-2030371219518931079-421');
  if (document.body.className.includes('dark-theme')) {
    iframe.src = "https://platform.twitter.com/embed/Tweet.html?id=2030371219518931079&amp;amp;theme=dark"
  }



&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Are Going to Build
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The use case for this project is prompt optimization. In many real workflows, developers often try multiple prompts manually to get the best response from a language model. This usually involves repeated trial-and-error, where prompts are slightly modified, tested, and evaluated before arriving at a good result. When the number of experiments increases, managing and tracking these attempts becomes difficult.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this tutorial, we will build a small system that keeps the same AutoResearch idea while addressing a few of those practical limitations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AutoResearch-style experiment loop:&lt;/strong&gt; We implement the same research cycle where an agent proposes an experiment, runs it, evaluates the result, stores the outcome, and then proposes the next attempt based on previous results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud-based reasoning using Nebius Token Factory:&lt;/strong&gt; Instead of relying on local models, the agent uses &lt;a href="https://nebius.com/services/token-factory" rel="noopener noreferrer"&gt;Nebius Token Factory&lt;/a&gt; to generate experiment ideas and responses. This removes the dependency on local GPUs and makes the research loop easier to run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured experiment tracking using JSON:&lt;/strong&gt; Instead of logging experiments in a flat TSV file, each experiment is stored as a structured JSON record. This allows us to track prompts, responses, scores, and timestamps more clearly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt optimization as the experiment domain:&lt;/strong&gt; For this tutorial, the system will try to generate the best explanation for vector databases. Each experiment proposes a different prompt and evaluates how well the response matches our criteria.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running a large experiment loop:&lt;/strong&gt; The system runs around 250 iterations, allowing the agent to gradually improve prompts by learning from previous experiment results.&lt;/li&gt;
&lt;/ul&gt;
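&lt;p&gt;The experiment cycle described above can be sketched as a minimal loop. The stub functions below stand in for the real Nebius-backed agent, runner, and scorer; only the control flow reflects the actual system.&lt;/p&gt;

```python
import json

def propose(history):
    # Stub: the real agent asks the model for a new prompt based on history.
    return f"Prompt variant {len(history) + 1}: explain vector databases clearly."

def run(prompt):
    # Stub: the real experiment sends the prompt to the inference API.
    return f"Response to: {prompt}"

def score(response):
    # Stub: the real scorer checks response length and keywords.
    keywords = ("vector", "databases")
    return sum(1 for k in keywords if k in response.lower())

history = []
for iteration in range(3):  # the article runs around 250 iterations
    prompt = propose(history)
    response = run(prompt)
    history.append({"prompt": prompt, "response": response, "score": score(response)})

print(json.dumps(history, indent=2))
```

Each iteration appends a structured record, so later proposals can condition on earlier scores.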

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0wg1ee8ap9foj7bz73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw0wg1ee8ap9foj7bz73.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Nebius Token Factory
&lt;/h2&gt;

&lt;p&gt;To run the AutoResearch loop properly, the agent needs a reliable way to generate experiment ideas and responses. Instead of running models locally, we use Nebius Token Factory as the inference layer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Managed model inference:&lt;/strong&gt; Nebius Token Factory lets us run open models through an API. We do not need to download models or manage GPUs locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access to open models:&lt;/strong&gt; The platform provides models such as &lt;strong&gt;Llama, DeepSeek, and Qwen&lt;/strong&gt;, which can be used for reasoning, prompt generation, and experiment responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI-compatible API:&lt;/strong&gt; The API follows the same structure used in the OpenAI SDK. This makes integration simple in Python applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent reasoning backend:&lt;/strong&gt; In our system, the AutoResearch agent calls Nebius Token Factory to analyze previous experiments and propose the next prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supports large experiment loops:&lt;/strong&gt; Because inference runs in the cloud, we can run &lt;strong&gt;hundreds of experiment iterations&lt;/strong&gt; without worrying about local compute limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No local GPU requirement:&lt;/strong&gt; Developers can run the experiment loop directly from their machine while the model inference happens in Nebius infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more demos, &lt;a href="https://github.com/nebius/token-factory-cookbook" rel="noopener noreferrer"&gt;refer to the cookbooks here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulamywlxs830ejq8z7jz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulamywlxs830ejq8z7jz.png" alt="Image2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tutorial: Building an AutoResearch-like System with Nebius Token Factory
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 — Clone the Project Repository
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repository.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/studio1hq/Nebius_autoresearch.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install required dependencies.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;requests python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set up a working environment.&lt;/li&gt;
&lt;li&gt;Review the project structure.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;What it Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;program.md&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Defines the research goal&lt;/td&gt;
&lt;td&gt;Contains the task the agent is trying to optimize. In this project, the goal is to generate the best prompt for explaining vector databases.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;agent.py&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Generates the next experiment&lt;/td&gt;
&lt;td&gt;Uses &lt;strong&gt;Nebius Token Factory&lt;/strong&gt; to analyze previous experiment results and propose the next prompt to test.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;experiment.py&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runs the experiment&lt;/td&gt;
&lt;td&gt;Sends the generated prompt to &lt;strong&gt;Nebius Token Factory&lt;/strong&gt; and retrieves the model’s response.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;scorer.py&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Evaluates experiment output&lt;/td&gt;
&lt;td&gt;Scores the response based on simple rules such as response length and presence of relevant keywords. Returns a numeric score.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;main.py&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Controls the AutoResearch loop&lt;/td&gt;
&lt;td&gt;Loads previous experiment history, runs the experiment cycle, logs results, and repeats the process for multiple iterations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;results.json&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stores experiment history&lt;/td&gt;
&lt;td&gt;Saves structured experiment records including prompt, response, score, and timestamp. Easier to analyze compared to flat TSV logs.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2 — Configure Nebius Token Factory
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create a Nebius Token Factory API key&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the Token Factory dashboard:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow50jbq72jik3e8s4qtk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow50jbq72jik3e8s4qtk.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the API Keys section in the left sidebar and click Get API key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56f26yl8o3d1q49p70ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56f26yl8o3d1q49p70ev.png" alt="Image6" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new key for your project. This key will be used by the application to authenticate API requests sent to &lt;a href="https://docs.tokenfactory.nebius.com/quickstart" rel="noopener noreferrer"&gt;Nebius Token Factory&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Store the API key as an environment variable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file in the root of the project and store the key as an environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;NEBIUS_API_KEY&lt;/span&gt;=&lt;span class="n"&gt;your_api_key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
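&lt;p&gt;The key can then be read from the environment in Python. The &lt;code&gt;python-dotenv&lt;/code&gt; package installed earlier does this for you; the stdlib-only sketch below shows roughly what that loading amounts to, for illustration.&lt;/p&gt;

```python
import os
from pathlib import Path

def load_env(path=".env"):
    # Minimal .env parser: KEY=value lines, ignoring blanks and comments.
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_env()
api_key = os.environ.get("NEBIUS_API_KEY", "")
```

In the project itself, calling `load_dotenv()` from `python-dotenv` replaces the `load_env` helper above.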



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x13035f1ohr3vqhe76v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x13035f1ohr3vqhe76v.png" alt="Image9" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure the Token Factory client&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.tokenfactory.nebius.com/cookbook/overview" rel="noopener noreferrer"&gt;Nebius Token Factory&lt;/a&gt; provides an OpenAI-compatible API, so we can use the same client structure used for OpenAI integrations.&lt;/p&gt;

&lt;p&gt;Initialize the client using the Nebius API base URL and the API key stored in the environment variable.&lt;/p&gt;

&lt;p&gt;The request sends the prompt to the selected model (for example, &lt;code&gt;meta-llama/Llama-3.3-70B-Instruct&lt;/code&gt;) and receives the generated response.&lt;/p&gt;

&lt;p&gt;In this system, these API calls power two parts of the AutoResearch loop:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent reasoning:&lt;/strong&gt; where the agent proposes the next experiment prompt (&lt;a href="https://github.com/Studio1HQ/Nebius_autoresearch/blob/main/nebius_autoresearch/agent.py" rel="noopener noreferrer"&gt;agent.py&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;typing&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;List&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Any&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;generate_response&lt;/span&gt;

&lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;propose_experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt; &lt;span class="nx"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="dl"&gt;"""&lt;/span&gt;&lt;span class="s2"&gt;
    Proposes a new system prompt based on the research goal and experiment history.

    Args:
        history: List of previous experiment results (dicts with 'prompt' and 'score').
        goal: The research goal string.

    Returns:
        str: The proposed system prompt for the next experiment.
    &lt;/span&gt;&lt;span class="dl"&gt;"""&lt;/span&gt;
    &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Format&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;not&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nx"&gt;history_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;No previous experiments.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="err"&gt;#&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;last&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="nx"&gt;experiments&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;provide&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="nx"&gt;without&lt;/span&gt; &lt;span class="nx"&gt;overflowing&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;
        &lt;span class="nx"&gt;recent_history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;
        &lt;span class="nx"&gt;history_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
            &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Attempt {i+1}:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Prompt: {r.get('prompt', 'Unknown')}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
            &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Score: {r.get('score', 0)}/10&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; 
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;recent_history&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="nx"&gt;system_instruction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;You are an expert AI researcher optimizing a system prompt to achieve a specific goal.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Analyze the previous attempts and their scores. Identify what worked and what didn't.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Then, generate a NEW, improved system prompt that is likely to achieve a higher score.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Do not repeat previous prompts. Be creative and precise.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Format your response exactly as follows:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;THOUGHT: &amp;lt;your analysis and plan&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PROMPT: &amp;lt;the actual system prompt text&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nx"&gt;user_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Research Goal:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;{goal}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Experiment History:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;{history_text}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Based on the above, generate the next system prompt to test.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Ensure you include your thought process before the prompt.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The agent builds a context using the &lt;strong&gt;goal + past experiments&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This prompt is sent to Nebius using &lt;code&gt;generate_response()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The model suggests the &lt;strong&gt;next experiment (new prompt)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This is where the &lt;strong&gt;“thinking” of the agent happens&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
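&lt;p&gt;Because the agent is instructed to answer in a fixed THOUGHT/PROMPT format, the caller has to split those two sections out of the raw model reply. The parser below is a minimal sketch of that step; the repository may implement it differently.&lt;/p&gt;

```python
def parse_agent_reply(text):
    # Split the agent's reply into its THOUGHT and PROMPT sections.
    thought, prompt = "", ""
    current = None
    for line in text.splitlines():
        if line.startswith("THOUGHT:"):
            current = "thought"
            thought = line[len("THOUGHT:"):].strip()
        elif line.startswith("PROMPT:"):
            current = "prompt"
            prompt = line[len("PROMPT:"):].strip()
        elif current == "prompt":
            # Keep multi-line prompts intact.
            prompt += "\n" + line
    return thought, prompt

reply = "THOUGHT: Shorter prompts scored higher.\nPROMPT: Explain vector databases simply."
thought, prompt = parse_agent_reply(reply)
```

Only the extracted PROMPT text is passed on to the experiment step; the THOUGHT text is useful for logging why the agent chose it.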

&lt;p&gt;&lt;strong&gt;Experiment execution:&lt;/strong&gt; where the prompt is sent to the model and the response is evaluated (&lt;a href="https://github.com/Studio1HQ/Nebius_autoresearch/blob/main/nebius_autoresearch/experiment.py" rel="noopener noreferrer"&gt;experiment.py&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;generate_response&lt;/span&gt;

&lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_experiment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="dl"&gt;"""&lt;/span&gt;&lt;span class="s2"&gt;
    Runs the experiment by using the proposed system prompt to explain vector databases.
    &lt;/span&gt;&lt;span class="dl"&gt;"""&lt;/span&gt;
    &lt;span class="nx"&gt;test_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;{system_prompt}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Explain vector databases. Respond in under 120 words.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;generate_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;test_prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The generated prompt is sent again to Nebius.&lt;/li&gt;
&lt;li&gt;The model produces the &lt;strong&gt;actual output&lt;/strong&gt; for evaluation.&lt;/li&gt;
&lt;li&gt;This response is later scored and stored.&lt;/li&gt;
&lt;li&gt;This is the &lt;strong&gt;execution step of the experiment&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
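&lt;p&gt;The scoring step that follows execution can be as simple as the rule-based check described for &lt;code&gt;scorer.py&lt;/code&gt; earlier: response length plus keyword presence. The sketch below follows that description, but the exact keywords and weights are illustrative assumptions, not the repository's values.&lt;/p&gt;

```python
def score_response(response):
    # Rule-based score out of 10: keyword coverage plus a length bonus.
    keywords = ("vector", "embedding", "similarity", "index")
    hits = sum(1 for k in keywords if k in response.lower())
    # Reward responses in a readable 50-120 word band.
    length_bonus = 2 if len(response.split()) in range(50, 121) else 0
    return min(10, hits * 2 + length_bonus)

sample = "A vector database stores embedding data and supports similarity search."
print(score_response(sample))
```

A deterministic scorer like this keeps the loop cheap and reproducible, at the cost of only approximating answer quality.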

&lt;p&gt;With this configuration in place, the AutoResearch loop can use Nebius Token Factory as its inference backend. &lt;/p&gt;

&lt;p&gt;For example, refer to &lt;a href="https://github.com/Studio1HQ/Nebius_autoresearch/blob/main/nebius_autoresearch/client.py" rel="noopener noreferrer"&gt;client.py&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NEBIUS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NEBIUS_API_KEY environment variable not set&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Nebius Token Factory OpenAI-compatible endpoint
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.tokenfactory.nebius.com/v1/chat/completions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;meta-llama/Llama-3.3-70B-Instruct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="c1"&gt;# Optional parameters for generation control
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4096&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can use the OpenAI SDK directly. The same client patterns used in OpenAI-based applications work here with minimal changes, making integration into existing workflows straightforward.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.tokenfactory.nebius.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;NEBIUS_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;completion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama-3-70b-instruct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the answer to all questions?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3 — Understanding the AutoResearch Loop
&lt;/h2&gt;

&lt;p&gt;The system follows a simple research loop where the agent proposes experiments, evaluates the results, and improves the next attempt based on previous runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7m0hqwbqifrqje6w2e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7m0hqwbqifrqje6w2e1.png" alt="Image7" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Load the research goal:&lt;/strong&gt; The system reads the task defined in &lt;code&gt;program.md&lt;/code&gt;. In this case, the goal is to generate the best prompt for explaining vector databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load previous experiment history:&lt;/strong&gt; The program reads &lt;code&gt;results.json&lt;/code&gt; to understand what prompts were already tested and how they performed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The agent proposes the next experiment:&lt;/strong&gt; The agent uses Nebius Token Factory to analyze the previous results and suggest a new prompt to try.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the experiment:&lt;/strong&gt; The new prompt is sent to the model through the Token Factory API, and the response is generated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Score the result:&lt;/strong&gt; The response is evaluated using the scoring function defined in &lt;code&gt;scorer.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store the experiment record:&lt;/strong&gt; The prompt, response, score, and timestamp are saved in &lt;code&gt;results.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeat the loop:&lt;/strong&gt; The system continues this cycle for around &lt;strong&gt;250 experiment iterations&lt;/strong&gt;, allowing the agent to gradually improve the prompts.&lt;/li&gt;
&lt;/ol&gt;
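&lt;p&gt;Put together, the loop above can be sketched in plain Python. The file names (&lt;code&gt;program.md&lt;/code&gt;, &lt;code&gt;results.json&lt;/code&gt;) come from the project, but the &lt;code&gt;propose_prompt&lt;/code&gt;, &lt;code&gt;run_experiment&lt;/code&gt;, and &lt;code&gt;score&lt;/code&gt; callables are hypothetical stand-ins for the Token Factory calls and the &lt;code&gt;scorer.py&lt;/code&gt; logic:&lt;/p&gt;

```python
import json
import time
from pathlib import Path

RESULTS = Path("results.json")

def load_history():
    # Step 2: read previously stored experiment records, if any
    return json.loads(RESULTS.read_text()) if RESULTS.exists() else []

def research_loop(propose_prompt, run_experiment, score, iterations=250):
    """Run the AutoResearch cycle: propose, run, score, store, repeat."""
    goal = Path("program.md").read_text()        # Step 1: load the research goal
    history = load_history()                     # Step 2: load past experiments
    for i in range(len(history), iterations):
        prompt = propose_prompt(goal, history)   # Step 3: agent proposes next prompt
        response = run_experiment(prompt)        # Step 4: model generates a response
        history.append({                         # Steps 5-6: score and store the record
            "iteration": i,
            "prompt": prompt,
            "response": response,
            "score": score(response),
            "timestamp": time.time(),
        })
        RESULTS.write_text(json.dumps(history, indent=2))
    return history                               # Step 7: repeat until done
```

&lt;p&gt;Because the loop starts at &lt;code&gt;len(history)&lt;/code&gt;, re-running the script naturally picks up where the last run stopped.&lt;/p&gt;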

&lt;h2&gt;
  
  
  Step 4 — Run the System
&lt;/h2&gt;

&lt;p&gt;Run the system with a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eootoe4iotf9g8ccvsb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5eootoe4iotf9g8ccvsb.png" alt="Image8" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each iteration, the terminal output shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the iteration number&lt;/li&gt;
&lt;li&gt;the proposed prompt&lt;/li&gt;
&lt;li&gt;the generated response&lt;/li&gt;
&lt;li&gt;the score&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every 25 experiments, the system prints a short report in the terminal showing the number of experiments completed, the best score so far, and the prompt that produced the best result. This helps track how the agent is improving over time.&lt;/p&gt;
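&lt;p&gt;A periodic report like this can be produced with a small helper. The record fields match the ones stored in &lt;code&gt;results.json&lt;/code&gt;; the function name is illustrative:&lt;/p&gt;

```python
def progress_report(history, every=25):
    """Print a short summary after every fixed number of experiments."""
    if history and len(history) % every == 0:
        best = max(history, key=lambda r: r["score"])
        print(f"completed: {len(history)} experiments")
        print(f"best score: {best['score']:.3f}")
        print(f"best prompt: {best['prompt']}")
```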

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaryev8vat5we4yqxjbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaryev8vat5we4yqxjbb.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each experiment is stored as a structured record in &lt;code&gt;results.json&lt;/code&gt;. The record contains the prompt used, the response generated by the model, the score assigned by the evaluation function, and the timestamp of the run. This structured format makes it easier to inspect and analyze experiment history later.&lt;/p&gt;
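&lt;p&gt;With records in that shape, the history can be inspected directly. For example, a small helper (hypothetical, but using the stored fields) can pull out the best-scoring experiment:&lt;/p&gt;

```python
import json
from pathlib import Path

def best_experiment(path="results.json"):
    """Return the highest-scoring record from the experiment history."""
    records = json.loads(Path(path).read_text())
    return max(records, key=lambda r: r["score"])
```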

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup5ejurslni6g2hfc0r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fup5ejurslni6g2hfc0r8.png" alt="Image11" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of starting from scratch every time, the system loads the existing &lt;code&gt;results.json&lt;/code&gt; file on startup, detects the last completed experiment, and continues from the next iteration. This allows the experiment loop to resume without losing previous results.&lt;/p&gt;
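&lt;p&gt;The resume behavior comes down to reading the file and computing where to pick up. A minimal sketch (the function name is assumed; the &lt;code&gt;iteration&lt;/code&gt; field is part of each stored record):&lt;/p&gt;

```python
import json
from pathlib import Path

def next_iteration(path="results.json"):
    """Find the iteration to resume from: one past the last completed run."""
    p = Path(path)
    if not p.exists():
        return 0                  # fresh start, nothing to resume
    records = json.loads(p.read_text())
    if not records:
        return 0
    return max(r["iteration"] for r in records) + 1
```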

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;AutoResearch shows how an AI agent can run experiments automatically by proposing ideas, testing them, and learning from the results. &lt;/p&gt;

&lt;p&gt;In this tutorial, we extended that idea by using Nebius Token Factory for cloud-based model inference and structured experiment logging for better tracking. This makes the research loop easier to run, easier to observe, and more practical for developers experimenting with AI workflows.&lt;/p&gt;

&lt;p&gt;If you want to try similar experimentation workflows, you can start with &lt;a href="https://tokenfactory.nebius.com/" rel="noopener noreferrer"&gt;Nebius Token Factory&lt;/a&gt; to run open models through a simple API without managing GPUs. The broader &lt;a href="https://nebius.com/" rel="noopener noreferrer"&gt;Nebius Cloud platform&lt;/a&gt; also provides GPU infrastructure, scalable inference, and tools for building AI applications. &lt;/p&gt;

&lt;p&gt;Explore the available services, experiment with different models, and build your own AI-driven systems using the Nebius ecosystem.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nebius</category>
      <category>machinelearning</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Build a HackMD-Style Collaborative Markdown Editor with React, Antigravity IDE &amp; Velt</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Mon, 27 Apr 2026 07:21:30 +0000</pubDate>
      <link>https://dev.to/studio1hq/build-a-hackmd-style-collaborative-markdown-editor-with-react-antigravity-ide-velt-5gfj</link>
      <guid>https://dev.to/studio1hq/build-a-hackmd-style-collaborative-markdown-editor-with-react-antigravity-ide-velt-5gfj</guid>
      <description>&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;Building real-time collaboration from scratch takes significant effort. You need sync logic, presence, comments, and infrastructure before you even ship the feature.&lt;/p&gt;

&lt;p&gt;In this guide, we generate a pixel-perfect, HackMD-style editor UI using Antigravity, connect a live markdown preview in React, and then use Velt to add presence, live sync, and comments in just a few steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We’re Building
&lt;/h2&gt;

&lt;p&gt;We are building a HackMD-style markdown editor with a clean two-pane layout. On the left, users write markdown. On the right, they see a live rendered preview. The interface follows a dark theme and closely mirrors the structure and layout of HackMD.&lt;/p&gt;

&lt;p&gt;This is not just a static clone. The final result will support real-time collaboration, allowing multiple users to edit, comment, and stay aware of each other inside the same document.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/TAVQnzl7YOk"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;We use a focused, minimal stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;React with Vite and TypeScript:&lt;/strong&gt; Provides a fast development setup and a clean component-based architecture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;:&lt;/strong&gt; Used to generate a pixel-accurate editor UI directly from a reference image. This allows us to replicate the layout precisely without manual design iteration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://velt.dev/" rel="noopener noreferrer"&gt;Velt React SDK&lt;/a&gt;:&lt;/strong&gt; Adds the collaboration layer. We use it to enable presence, live state sync, and contextual comments without building real-time infrastructure from scratch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y75lx2zb0ozmfw66d02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7y75lx2zb0ozmfw66d02.png" alt="Image1" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Generating a Pixel-Perfect UI with Antigravity
&lt;/h2&gt;

&lt;p&gt;Antigravity is an AI-powered development platform and “agent-first” IDE. Its AI agents assist with coding tasks across your editor, terminal, and browser, moving beyond simple code completion toward autonomous execution of complex software workflows.&lt;/p&gt;

&lt;p&gt;It lets you generate and modify real code based on high-level instructions, orchestrating planning, editing, and validation with minimal manual effort.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use Antigravity?
&lt;/h3&gt;

&lt;p&gt;Cloning an interface like HackMD manually is time-consuming. Matching spacing, typography, layout, and dark mode details takes careful iteration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtl1dxi959wckm4y742a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtl1dxi959wckm4y742a.png" alt="Image2" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We used Antigravity to generate the editor UI directly from the reference image. The prompt enforced strict visual fidelity. No redesign. No interpretation.&lt;/p&gt;

&lt;p&gt;This gave us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rapid UI cloning:&lt;/strong&gt; Full split layout with header, editor, preview, and status bar in minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pixel-accurate output:&lt;/strong&gt; Layout and styling matched the reference closely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No design drift:&lt;/strong&gt; The UI stayed consistent with the original.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the UI ready, we could move straight to functionality and collaboration.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Prompt Strategy
&lt;/h3&gt;

&lt;p&gt;The prompt was written with strict visual constraints. Every layout detail, spacing rule, and styling decision had to follow the reference image exactly.&lt;/p&gt;

&lt;p&gt;We enforced a simple rule: the image always wins. If there was any conflict between best practice and the screenshot, the screenshot was treated as the authority.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;You&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;an&lt;/span&gt; &lt;span class="nx"&gt;expert&lt;/span&gt; &lt;span class="nx"&gt;frontend&lt;/span&gt; &lt;span class="nx"&gt;engineer&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="nx"&gt;pixel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;perfect&lt;/span&gt; &lt;span class="nx"&gt;implementer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nx"&gt;Your&lt;/span&gt; &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;only&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;editor&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;HackMD&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;style&lt;/span&gt; &lt;span class="nx"&gt;markdown&lt;/span&gt; &lt;span class="nx"&gt;editor&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;exactly&lt;/span&gt; &lt;span class="nx"&gt;matching&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;provided&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;This&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;not&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;redesign&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;interpretation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;approximation&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;This&lt;/span&gt; &lt;span class="nx"&gt;must&lt;/span&gt; &lt;span class="nx"&gt;be&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;visual&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;behavioral&lt;/span&gt; &lt;span class="nx"&gt;clone&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span 
class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;CRITICAL&lt;/span&gt; &lt;span class="nc"&gt;INSTRUCTIONS &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;IGNORE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;make&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt; &lt;span class="nx"&gt;assumptions&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;about&lt;/span&gt; &lt;span class="nx"&gt;layout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;spacing&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;colors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;typography&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sizing&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;behavior&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;invent&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="nx"&gt;elements&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;not&lt;/span&gt; &lt;span class="nx"&gt;visible&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;omit&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="nx"&gt;elements&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;visible&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;restyle&lt;/span&gt; &lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="err"&gt;“&lt;/span&gt;&lt;span class="nx"&gt;improve&lt;/span&gt;&lt;span class="err"&gt;”&lt;/span&gt; &lt;span class="nx"&gt;anything&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt; &lt;span class="nx"&gt;colors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;icons&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fonts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;alignment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;DO&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;guess&lt;/span&gt; &lt;span class="nx"&gt;breakpoints&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="err"&gt;—&lt;/span&gt; &lt;span class="nx"&gt;infer&lt;/span&gt; &lt;span class="nx"&gt;responsiveness&lt;/span&gt; &lt;span class="nx"&gt;strictly&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;standard&lt;/span&gt; &lt;span class="nx"&gt;proportional&lt;/span&gt; &lt;span class="nx"&gt;scaling&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;Follow&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="nx"&gt;exactly&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;If&lt;/span&gt; &lt;span class="nx"&gt;something&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;unclear&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;replicate&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;faithfully&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;possible&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;visual&lt;/span&gt; &lt;span class="nx"&gt;evidence&lt;/span&gt; &lt;span class="nx"&gt;alone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;INPUT&lt;/span&gt; &lt;span class="nx"&gt;CONTEXT&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;You&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;working&lt;/span&gt; &lt;span class="nx"&gt;inside&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;basic&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;You&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;building&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;only&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;editor&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;authentication&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;real&lt;/span&gt; &lt;span class="nx"&gt;GitHub&lt;/span&gt; &lt;span class="nx"&gt;integration&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;The&lt;/span&gt; &lt;span class="nx"&gt;editor&lt;/span&gt; &lt;span class="nx"&gt;consists&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;Left&lt;/span&gt; &lt;span class="nx"&gt;pane&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Markdown&lt;/span&gt; &lt;span class="nx"&gt;editor&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;Right&lt;/span&gt; &lt;span class="nx"&gt;pane&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Live&lt;/span&gt; &lt;span class="nx"&gt;markdown&lt;/span&gt; &lt;span class="nx"&gt;preview&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;The&lt;/span&gt; &lt;span class="nx"&gt;provided&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;single&lt;/span&gt; &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;truth&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;REQUIRED&lt;/span&gt; &lt;span class="nx"&gt;OUTPUT&lt;/span&gt;

&lt;span class="nx"&gt;Produce&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;production&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ready&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;that&lt;/span&gt; &lt;span class="nx"&gt;recreates&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;pixel&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;perfectly&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;

&lt;span class="nx"&gt;You&lt;/span&gt; &lt;span class="nx"&gt;must&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="nx"&gt;functional&lt;/span&gt; &lt;span class="nx"&gt;components&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nc"&gt;CSS &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;CSS&lt;/span&gt; &lt;span class="nx"&gt;Modules&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;styled&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;components&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;precisely&lt;/span&gt; &lt;span class="nx"&gt;match&lt;/span&gt; &lt;span class="nx"&gt;styles&lt;/span&gt;
&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Ensure&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;layout&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;fully&lt;/span&gt; &lt;span class="nx"&gt;responsive&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;all&lt;/span&gt; &lt;span class="nx"&gt;screen&lt;/span&gt; &lt;span class="nx"&gt;sizes&lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nx"&gt;preserving&lt;/span&gt; &lt;span class="nx"&gt;proportions&lt;/span&gt;
&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Background&lt;/span&gt; &lt;span class="nx"&gt;colors&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Pane&lt;/span&gt; &lt;span class="nx"&gt;widths&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Divider&lt;/span&gt; &lt;span class="nx"&gt;behavior&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Toolbar&lt;/span&gt; &lt;span class="nx"&gt;icons&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;placement&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Font&lt;/span&gt; &lt;span class="nx"&gt;family&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;weight&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Line&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="nx"&gt;styles&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Hover&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;focus&lt;/span&gt; &lt;span class="nf"&gt;states &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;only&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nx"&gt;visible&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;implied&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Spacing&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;margins&lt;/span&gt;
&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Implement&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Markdown&lt;/span&gt; &lt;span class="nx"&gt;input&lt;/span&gt; &lt;span class="nx"&gt;on&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;left&lt;/span&gt;
   &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Live&lt;/span&gt; &lt;span class="nx"&gt;preview&lt;/span&gt; &lt;span class="nx"&gt;rendering&lt;/span&gt; &lt;span class="nx"&gt;on&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;right&lt;/span&gt;
&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Match&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;dark&lt;/span&gt; &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="nx"&gt;styling&lt;/span&gt; &lt;span class="nx"&gt;exactly&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;shown&lt;/span&gt;
&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Match&lt;/span&gt; &lt;span class="nx"&gt;scrollbar&lt;/span&gt; &lt;span class="nx"&gt;appearance&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;closely&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;possible&lt;/span&gt;
&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;external&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="nx"&gt;libraries&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;unless&lt;/span&gt; &lt;span class="nx"&gt;strictly&lt;/span&gt; &lt;span class="nx"&gt;necessary&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;markdown&lt;/span&gt; &lt;span class="nx"&gt;parsing&lt;/span&gt;
&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt; &lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="nx"&gt;semantic&lt;/span&gt; &lt;span class="nx"&gt;HTML&lt;/span&gt; &lt;span class="nx"&gt;where&lt;/span&gt; &lt;span class="nx"&gt;applicable&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;LAYOUT&lt;/span&gt; &lt;span class="nx"&gt;REQUIREMENTS&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Two&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;column&lt;/span&gt; &lt;span class="nx"&gt;split&lt;/span&gt; &lt;span class="nx"&gt;layout&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;editable&lt;/span&gt; &lt;span class="nx"&gt;markdown&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="nx"&gt;area&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Right&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;rendered&lt;/span&gt; &lt;span class="nx"&gt;markdown&lt;/span&gt; &lt;span class="nx"&gt;preview&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Divider&lt;/span&gt; &lt;span class="nx"&gt;exactly&lt;/span&gt; &lt;span class="nx"&gt;positioned&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;shown&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Toolbar&lt;/span&gt; &lt;span class="nx"&gt;at&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;top&lt;/span&gt; &lt;span class="nx"&gt;exactly&lt;/span&gt; &lt;span class="nx"&gt;matching&lt;/span&gt; &lt;span class="nx"&gt;icon&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;spacing&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;alignment&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Bottom&lt;/span&gt; &lt;span class="nx"&gt;GitHub&lt;/span&gt; &lt;span class="nx"&gt;buttons&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="nx"&gt;buttons&lt;/span&gt; &lt;span class="nx"&gt;must&lt;/span&gt; &lt;span class="nx"&gt;appear&lt;/span&gt; &lt;span class="nx"&gt;exactly&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nf"&gt;shown &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;visual&lt;/span&gt; &lt;span class="nx"&gt;only&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;RESPONSIVENESS&lt;/span&gt; &lt;span class="nx"&gt;REQUIREMENTS&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;On&lt;/span&gt; &lt;span class="nx"&gt;smaller&lt;/span&gt; &lt;span class="nx"&gt;screens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Maintain&lt;/span&gt; &lt;span class="nx"&gt;proportional&lt;/span&gt; &lt;span class="nx"&gt;scaling&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Preserve&lt;/span&gt; &lt;span class="nx"&gt;visual&lt;/span&gt; &lt;span class="nx"&gt;hierarchy&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Do&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;collapse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;or&lt;/span&gt; &lt;span class="nx"&gt;redesign&lt;/span&gt; &lt;span class="nx"&gt;panes&lt;/span&gt; &lt;span class="nx"&gt;unless&lt;/span&gt; &lt;span class="nx"&gt;explicitly&lt;/span&gt; &lt;span class="nx"&gt;shown&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;No&lt;/span&gt; &lt;span class="nx"&gt;mobile&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;specific&lt;/span&gt; &lt;span class="nx"&gt;UI&lt;/span&gt; &lt;span class="nx"&gt;unless&lt;/span&gt; &lt;span class="nx"&gt;clearly&lt;/span&gt; &lt;span class="nx"&gt;implied&lt;/span&gt; &lt;span class="nx"&gt;by&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;FUNCTIONAL&lt;/span&gt; &lt;span class="nx"&gt;REQUIREMENTS&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Markdown&lt;/span&gt; &lt;span class="nx"&gt;typing&lt;/span&gt; &lt;span class="nx"&gt;updates&lt;/span&gt; &lt;span class="nx"&gt;preview&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;real&lt;/span&gt; &lt;span class="nx"&gt;time&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Toolbar&lt;/span&gt; &lt;span class="nx"&gt;buttons&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="nx"&gt;NOT&lt;/span&gt; &lt;span class="nx"&gt;need&lt;/span&gt; &lt;span class="nx"&gt;real&lt;/span&gt; &lt;span class="nx"&gt;functionality&lt;/span&gt; &lt;span class="nx"&gt;unless&lt;/span&gt; &lt;span class="nx"&gt;explicitly&lt;/span&gt; &lt;span class="nx"&gt;visible&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;GitHub&lt;/span&gt; &lt;span class="nx"&gt;buttons&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;visual&lt;/span&gt; &lt;span class="nx"&gt;only&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;No&lt;/span&gt; &lt;span class="nx"&gt;routing&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;persistence&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;API&lt;/span&gt; &lt;span class="nx"&gt;calls&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;DELIVERY&lt;/span&gt; &lt;span class="nx"&gt;FORMAT&lt;/span&gt;

&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Return&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="nf"&gt;component&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;CSS&lt;/span&gt;
  &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Brief&lt;/span&gt; &lt;span class="nx"&gt;explanation&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;structure&lt;/span&gt;
&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;Code&lt;/span&gt; &lt;span class="nx"&gt;must&lt;/span&gt; &lt;span class="nx"&gt;be&lt;/span&gt; &lt;span class="nx"&gt;clean&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;readable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;copy&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;paste&lt;/span&gt; &lt;span class="nx"&gt;ready&lt;/span&gt;

&lt;span class="o"&gt;---&lt;/span&gt;

&lt;span class="err"&gt;###&lt;/span&gt; &lt;span class="nx"&gt;FINAL&lt;/span&gt; &lt;span class="nx"&gt;RULE&lt;/span&gt;

&lt;span class="nx"&gt;If&lt;/span&gt; &lt;span class="nx"&gt;there&lt;/span&gt; &lt;span class="nx"&gt;is&lt;/span&gt; &lt;span class="nx"&gt;ever&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;conflict&lt;/span&gt; &lt;span class="nx"&gt;between&lt;/span&gt; &lt;span class="nx"&gt;best&lt;/span&gt; &lt;span class="nx"&gt;practices&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;THE&lt;/span&gt; &lt;span class="nx"&gt;IMAGE&lt;/span&gt; &lt;span class="nx"&gt;ALWAYS&lt;/span&gt; &lt;span class="nx"&gt;WINS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;

&lt;span class="nx"&gt;Use&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;provided&lt;/span&gt; &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;absolute&lt;/span&gt; &lt;span class="nx"&gt;authority&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;and&lt;/span&gt; &lt;span class="nx"&gt;replicate&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="nx"&gt;exactly&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generated Component Structure
&lt;/h3&gt;

&lt;p&gt;Antigravity generated a clean, modular React structure instead of a single large file. The UI was split into focused components, each responsible for one section of the editor. This made the layout easy to reason about and ready for collaboration features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Header.tsx&lt;/code&gt; — Top navigation and toolbar&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Editor.tsx&lt;/code&gt; — Left pane markdown input&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Preview.tsx&lt;/code&gt; — Right pane rendered markdown&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;StatusBar.tsx&lt;/code&gt; — Bottom metadata bar&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Layout.tsx&lt;/code&gt; — Structural wrapper composing the editor layout&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure matches the actual &lt;code&gt;src/components&lt;/code&gt; breakdown and reflects how the UI is assembled in &lt;code&gt;App.tsx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qe6e34wsi5ysd4xhmuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qe6e34wsi5ysd4xhmuu.png" alt="Image3" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing Markdown + Live Preview
&lt;/h3&gt;

&lt;p&gt;The editor follows a simple two-pane model. The left pane is a controlled textarea where users write markdown. The right pane renders the parsed markdown in real time.&lt;/p&gt;

&lt;p&gt;State is lifted to a shared parent component so that every keystroke updates both the editor and the preview instantly. This keeps the UI predictable and ensures the preview always reflects the latest content.&lt;/p&gt;
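&lt;p&gt;As a rough illustration of that render step, here is a deliberately naive markdown-to-HTML function. It is not the parser the project uses (a real editor would reach for a library such as &lt;code&gt;marked&lt;/code&gt; or &lt;code&gt;react-markdown&lt;/code&gt;); it only sketches what the preview pane does with the shared editor state on each keystroke.&lt;/p&gt;

```typescript
// Naive markdown renderer, for illustration only. It handles just
// headings (levels 1-3) and **bold**; a production preview would use
// a real markdown library with sanitization.
function renderMarkdown(src: string): string {
  return src
    .split('\n')
    .map((line) => {
      // "# Title" / "## Title" / "### Title" become heading tags.
      const heading = line.match(/^(#{1,3})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length;
        return `<h${level}>${heading[2]}</h${level}>`;
      }
      if (line.trim() === '') return ''; // drop blank lines
      // Inline bold: **text** -> <strong>text</strong>
      return `<p>${line.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')}</p>`;
    })
    .filter(Boolean)
    .join('\n');
}

console.log(renderMarkdown('# Hello\n\nThis is **bold** text.'));
// <h1>Hello</h1>
// <p>This is <strong>bold</strong> text.</p>
```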

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98ce85j4tfyflully52f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98ce85j4tfyflully52f.png" alt="Image4" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Making the Editor Collaborative with Velt
&lt;/h2&gt;

&lt;p&gt;Once the local markdown editor was working, the next step was to make it collaborative. Instead of building real-time infrastructure from scratch, we integrated Velt to handle sync, presence, and comments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use Velt?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://velt.dev/" rel="noopener noreferrer"&gt;&lt;strong&gt;Velt&lt;/strong&gt; is a collaboration SDK&lt;/a&gt; that lets developers embed real-time collaboration features into web products quickly and efficiently. It provides fully managed components and backend support so you can add multiplayer-style experiences without building real-time infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Velt:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live Sync&lt;/strong&gt; – Real-time shared state across users so everyone sees updates instantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comments&lt;/strong&gt; – Contextual commenting components like those in Figma, Google Docs, and spreadsheet tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Presence &amp;amp; Cursors&lt;/strong&gt; – Shows active users and cursor positions in shared sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiplayer Editing&lt;/strong&gt; – Multiple users can edit content concurrently with conflict resolution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications&lt;/strong&gt; – Built-in support for alerts and updates (mentions, replies).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recording &amp;amp; Huddles&lt;/strong&gt; – Audio, video, and screen recording plus in-app collaborative sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizable SDK&lt;/strong&gt; – Components and behavior can be styled and extended to match your product.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Agent Skills and MCP Integration&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Velt recently introduced &lt;a href="https://docs.velt.dev/get-started/skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt; and an &lt;a href="https://docs.velt.dev/get-started/mcp-installer" rel="noopener noreferrer"&gt;implementation MCP&lt;/a&gt; that allow collaboration features to be integrated using AI agents. Instead of manually wiring presence, comments, and live sync, agents can now orchestrate much of the integration flow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Installing &amp;amp; Setting Up Velt
&lt;/h3&gt;

&lt;p&gt;We started by &lt;a href="https://docs.velt.dev/get-started/quickstart" rel="noopener noreferrer"&gt;installing the Velt React SDK&lt;/a&gt; and adding it to the project. This gives us access to collaboration primitives such as live state, presence, and comments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @veltdev/react
&lt;span class="c"&gt;# Optional: npm install --save-dev @veltdev/types&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we wrapped the root of the application with the Velt provider. This initializes the collaboration layer and connects the app to Velt using an API key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l6s9gg51bhki5pgd6j5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l6s9gg51bhki5pgd6j5.png" alt="Image5" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this point on, collaboration features can be layered into existing components without restructuring the entire application.&lt;/p&gt;

&lt;p&gt;Since Velt also supports &lt;a href="https://docs.velt.dev/get-started/skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt; and &lt;a href="https://docs.velt.dev/get-started/mcp-installer" rel="noopener noreferrer"&gt;MCP-based implementations&lt;/a&gt;, in an agent-enabled environment collaboration features can be scaffolded automatically, without manually wiring every component. The agent can configure the provider setup, inject components, and connect live state with minimal manual steps.&lt;/p&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual SDK setup → Explicit integration in code&lt;/li&gt;
&lt;li&gt;Agent Skills / MCP → AI-assisted integration with reduced setup effort&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this project, we used the manual SDK approach, but teams using agent-driven workflows can accelerate collaboration integration even further.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Text Comments
&lt;/h3&gt;

&lt;p&gt;With Velt initialized, the next step was &lt;a href="https://docs.velt.dev/async-collaboration/comments/overview" rel="noopener noreferrer"&gt;enabling inline comments&lt;/a&gt; inside the document.&lt;/p&gt;

&lt;p&gt;We wrapped the editor layout with the &lt;code&gt;VeltComments&lt;/code&gt; component in text mode. This attaches a collaborative comment layer directly to the markdown content without changing the editor’s internal logic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contextual inline comments:&lt;/strong&gt; Users can select text and leave feedback directly within the document.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anchored collaboration:&lt;/strong&gt; Comments stay attached to specific sections even as content evolves.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-user discussion:&lt;/strong&gt; Multiple users can comment and reply in the same document in real time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, the editor moves from being a single-user tool to a shared workspace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VITE_VELT_API_KEY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;switchUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hackmd-current-user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="c1"&gt;// Load user preference on app start&lt;/span&gt;
  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;storedUserId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hackmd-current-user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;storedUserId&lt;/span&gt;
      &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;storedUserId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="nf"&gt;setCurrentUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltProvider&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;apiKey&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;VeltComments&lt;/span&gt; &lt;span class="na"&gt;textMode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;darkMode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;AppContent&lt;/span&gt;
        &lt;span class="na"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="na"&gt;onSwitchUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;switchUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;VeltProvider&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enabling Live Sync
&lt;/h3&gt;

&lt;p&gt;Comments make the document collaborative, but the content itself is still local. To enable true multi-user editing, we replaced local React state with &lt;a href="https://docs.velt.dev/realtime-collaboration/live-state-sync/overview" rel="noopener noreferrer"&gt;Velt’s shared live state&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Instead of managing markdown with &lt;code&gt;useState&lt;/code&gt;, we switched to &lt;code&gt;useLiveState&lt;/code&gt;. This hook stores the document content in a shared real-time layer managed by Velt.&lt;/p&gt;

&lt;p&gt;Every update to the markdown now propagates instantly across connected users. No WebSockets, no manual sync logic, no conflict resolution setup.&lt;/p&gt;

&lt;p&gt;The rest of the component structure remains unchanged. Only the state source is replaced.&lt;/p&gt;
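&lt;p&gt;Conceptually, the swap works because the hook keeps the same &lt;code&gt;[value, setValue]&lt;/code&gt; shape while moving the value into a store that notifies every subscriber. The framework-free sketch below is my own simplification of that idea, not Velt’s implementation; Velt syncs the same shape over its managed real-time backend.&lt;/p&gt;

```typescript
// Toy shared-state store illustrating the idea behind live state:
// every client bound to the same key sees writes from any other
// client. A simplification for illustration only -- the real thing
// synchronizes over a managed backend, not in-process.
type Listener<T> = (value: T) => void;

class LiveStore {
  private values = new Map<string, unknown>();
  private listeners = new Map<string, Set<Listener<any>>>();

  // Returns a [get, set] pair bound to a shared key.
  bind<T>(key: string, initial: T): [() => T, (v: T) => void] {
    if (!this.values.has(key)) this.values.set(key, initial);
    const get = () => this.values.get(key) as T;
    const set = (v: T) => {
      this.values.set(key, v);
      this.listeners.get(key)?.forEach((fn) => fn(v)); // fan out to subscribers
    };
    return [get, set];
  }

  subscribe<T>(key: string, fn: Listener<T>): void {
    if (!this.listeners.has(key)) this.listeners.set(key, new Set());
    (this.listeners.get(key) as Set<Listener<T>>).add(fn);
  }
}

// Two "users" bound to the same document key.
const store = new LiveStore();
const [, writeA] = store.bind<string>('doc', '# Draft');
const [readB] = store.bind<string>('doc', '# Draft');

let lastSeenByB = readB();
store.subscribe<string>('doc', (v) => { lastSeenByB = v; });

writeA('# Draft\n\nHello from user A');
console.log(lastSeenByB); // user B sees A's edit immediately
```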

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-user editing&lt;/strong&gt; — Multiple users can type in the same document simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instant shared updates&lt;/strong&gt; — Changes appear in real time across all active sessions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the moment where the editor becomes fully collaborative.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useLiveState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@veltdev/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Header&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Header&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Editor&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Editor&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Preview&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Preview&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;StatusBar&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./StatusBar&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../types/veltUser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;defaultMarkdown&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../constants/defaultTemplate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;LayoutProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="nl"&gt;onSwitchUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Layout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FC&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;LayoutProps&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onSwitchUser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;markdown&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setMarkdown&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useLiveState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hackmd-clone-markdown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defaultMarkdown&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;flexDirection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;column&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;100vh&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;100vw&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;overflow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hidden&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Header&lt;/span&gt; &lt;span class="na"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;onSwitchUser&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;onSwitchUser&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;flex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;overflow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hidden&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;relative&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Editor&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;markdown&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;setMarkdown&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;backgroundColor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;opacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;zIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Preview&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;markdown&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;StatusBar&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;Layout&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Presence Awareness
&lt;/h3&gt;

&lt;p&gt;Editing and commenting are core collaboration features, but &lt;a href="https://docs.velt.dev/realtime-collaboration/presence/overview" rel="noopener noreferrer"&gt;presence&lt;/a&gt; adds awareness. It lets users see who else is currently active inside the document.&lt;/p&gt;

&lt;p&gt;With Velt, presence is tracked automatically once the provider is configured. Velt exposes the active users in the current session, so the UI can render visual indicators such as avatars or a live participant count.&lt;/p&gt;

&lt;p&gt;This creates a collaborative awareness layer. Users know when others are viewing or editing the same document, which reduces overlap and improves coordination.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VeltPresence&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@veltdev/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../types/veltUser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;HeaderProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="nl"&gt;onSwitchUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;VeltUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Header&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FC&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;HeaderProps&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;currentUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;staticUsers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onSwitchUser&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;showUserMenu&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setShowUserMenu&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;var(--toolbar-height)&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;backgroundColor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#2f3136&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Darker gray for toolbar&lt;/span&gt;
            &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;alignItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;justifyContent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;space-between&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;0 16px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;borderBottom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1px solid #111&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;fontSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;14px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#b9bbbe&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Left Section */&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;alignItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;8px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; 
                    &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                    &lt;span class="na"&gt;alignItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                    &lt;span class="na"&gt;gap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;8px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                    &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#fff&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                    &lt;span class="na"&gt;fontWeight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="na"&gt;marginRight&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;12px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;24px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;24px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;borderRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;50%&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#3370b7&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;alignItems&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
                        &lt;span class="na"&gt;justifyContent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Power&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"white"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;My workspace&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;span&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;20px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#4f545c&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;0 4px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

                &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Editor Mode Buttons */&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#333&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;borderRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px 8px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;background&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#444&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;borderRadius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;3px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#fff&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Pencil&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px 8px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#888&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Columns&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px 8px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#888&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Eye&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
                    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Plus&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HelpCircle&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
                &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt; &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4px&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Search&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

...

export default Header;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What We Didn’t Have to Build
&lt;/h2&gt;

&lt;p&gt;Using Velt removed the need to build and maintain a complex collaboration infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No WebSocket layer for managing real-time connections&lt;/li&gt;
&lt;li&gt;No CRDT or conflict resolution system for concurrent edits&lt;/li&gt;
&lt;li&gt;No custom backend service for syncing document state&lt;/li&gt;
&lt;li&gt;No notification engine for mentions and updates&lt;/li&gt;
&lt;li&gt;No database layer for storing and anchoring comments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allowed us to ship faster, reduce engineering overhead, and keep the codebase focused on core product functionality rather than infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;You can &lt;a href="https://github.com/Studio1HQ/hackmd-clone/" rel="noopener noreferrer"&gt;run the full demo&lt;/a&gt; locally and explore the collaborative features in action.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repository&lt;/li&gt;
&lt;li&gt;Install dependencies&lt;/li&gt;
&lt;li&gt;Add your Velt API key&lt;/li&gt;
&lt;li&gt;Start the development server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once running, &lt;a href="https://hackmd-velt.vercel.app/" rel="noopener noreferrer"&gt;open the app&lt;/a&gt; in two different browsers or devices. You will see live sync, comments, and presence working in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kiq98fa3lve4he7uuby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kiq98fa3lve4he7uuby.png" alt="Image5" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Modern tooling changes how fast we can ship collaborative software. AI can drastically accelerate UI replication, allowing you to move from design to production-ready components in minutes. At the same time, collaboration infrastructure no longer needs to be built from scratch. By layering Velt on top of a clean React architecture, you can enable live sync, comments, and presence without managing real-time systems yourself.&lt;/p&gt;

&lt;p&gt;If you’re building collaborative features into your product, explore Velt and see how quickly you can turn a single user interface into a shared workspace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.velt.dev/" rel="noopener noreferrer"&gt;&lt;strong&gt;Velt Documentation&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Studio1HQ/coda.io-velt" rel="noopener noreferrer"&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://hackmd-velt.vercel.app/" rel="noopener noreferrer"&gt;&lt;strong&gt;Live Demo&lt;/strong&gt;&lt;/a&gt;: Try the application yourself&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>react</category>
      <category>antigravity</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Production-Aware AI: Giving LLMs Real Debugging Context</title>
      <dc:creator>Arindam Majumder</dc:creator>
      <pubDate>Thu, 09 Apr 2026 05:22:32 +0000</pubDate>
      <link>https://dev.to/studio1hq/production-aware-ai-giving-llms-real-debugging-context-187g</link>
      <guid>https://dev.to/studio1hq/production-aware-ai-giving-llms-real-debugging-context-187g</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Large language models struggle with production debugging because they do not have visibility into how code actually executes at runtime.&lt;/li&gt;
&lt;li&gt;Inputs such as logs, stack traces, and metrics provide incomplete signals, which often cause confident but incorrect conclusions about root causes.&lt;/li&gt;
&lt;li&gt;When AI reasoning is grounded in function-level runtime data collected from production systems, debugging becomes accurate, explainable, and reliable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Large language models are increasingly used by developers to understand code, analyze failures, and assist during incident response. In controlled environments, they are effective at explaining logic and suggesting fixes. In production systems, however, their usefulness often drops sharply.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://lokalise.com/blog/blog-the-developer-delay-report/" rel="noopener noreferrer"&gt;recent survey&lt;/a&gt; found that a quarter of developers spend more time debugging than writing code each week. The same survey reported that bugs and tooling failures cost teams nearly 20 working days per year in lost productivity. These numbers reflect a reality most engineering teams already experience.&lt;/p&gt;

&lt;p&gt;Production debugging takes time because failures depend on runtime factors such as traffic patterns, concurrency, queue depth, and system state that are absent in non-production environments. Most AI systems do not observe these execution conditions. They analyze code structure and reported symptoms, rather than the runtime behavior that caused the failure.&lt;/p&gt;

&lt;p&gt;In this article, we will discuss why production context is critical for AI debugging, what production-aware AI really means, and how runtime intelligence enables more accurate and trustworthy debugging outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Production Issues Cannot Be Understood from Code Alone
&lt;/h2&gt;

&lt;p&gt;Code defines control flow and data handling, but production behavior is determined by runtime conditions such as traffic volume, concurrency, and system state.&lt;/p&gt;

&lt;p&gt;In production, requests arrive concurrently and compete for shared resources. As traffic increases, queues begin to accumulate work, caches evolve, and external dependencies respond with variable latency or partial failures. Together, these factors influence execution order, timing, and resource contention in ways that are not visible when reading code or running isolated tests.&lt;/p&gt;

&lt;p&gt;Many production failures arise only when specific runtime conditions are met. Race conditions appear under concurrent access. Performance regressions surface under sustained or uneven load. Retry mechanisms can magnify transient upstream failures into system-wide impact. In each case, the logic itself may be correct, while the observed failure is a result of how that logic behaves under real execution pressure.&lt;/p&gt;
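&lt;p&gt;The lost-update pattern behind many such race conditions can be shown deterministically by writing out one unlucky interleaving by hand (a minimal sketch; in production the interleaving depends on scheduler timing and load):&lt;/p&gt;

```python
# Two "concurrent" request handlers each perform a read-modify-write on a
# shared counter. One bad interleaving is spelled out by hand, so the lost
# update is deterministic here; in production it depends on timing.
counter = 0

read_a = counter       # handler A reads 0
read_b = counter       # handler B reads 0 before A has written
counter = read_a + 1   # A writes back 1
counter = read_b + 1   # B writes back 1, silently discarding A's update

# Serial execution would leave counter at 2; this interleaving leaves it at 1.
print(counter)
```

Each line is individually correct, which is exactly why reading the code in isolation does not reveal the failure.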

&lt;p&gt;This leads to a common outcome during incident response. The code appears correct because the failure is not caused by a logical error. The root cause exists in how the code executes under real production conditions, not in how it reads in isolation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47gqpvmdldj288p0zzox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47gqpvmdldj288p0zzox.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How LLMs Debug Today: Strengths and Structural Limits
&lt;/h2&gt;

&lt;p&gt;Large language models assist debugging by analyzing text. They infer intent, recognize common patterns, and map symptoms to known classes of problems. This makes them effective for code review, error explanation, and reasoning about familiar failure modes.&lt;/p&gt;

&lt;p&gt;However, their understanding is entirely constrained by the inputs they receive. Without access to runtime execution data, their conclusions are based on probability rather than evidence.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;What LLMs Do Well&lt;/th&gt;
&lt;th&gt;Structural Limitation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code understanding&lt;/td&gt;
&lt;td&gt;Explain logic, control flow, and common anti-patterns&lt;/td&gt;
&lt;td&gt;Cannot observe how code executes under real load&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input analysis&lt;/td&gt;
&lt;td&gt;Reason over logs, stack traces, and snippets&lt;/td&gt;
&lt;td&gt;Inputs represent symptoms, not full execution context&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pattern matching&lt;/td&gt;
&lt;td&gt;Identify known bug patterns and typical fixes&lt;/td&gt;
&lt;td&gt;Fails when failures are novel or environment specific&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Root cause analysis&lt;/td&gt;
&lt;td&gt;Propose plausible explanations&lt;/td&gt;
&lt;td&gt;Cannot validate causality without runtime signals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decision making&lt;/td&gt;
&lt;td&gt;Rank likely fixes based on training data&lt;/td&gt;
&lt;td&gt;Relies on probabilistic inference when facts are missing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without visibility into execution order, timing, frequency, and state, LLMs are forced to guess. The results may sound correct, but they are not grounded in how the system actually behaved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hallucinations Are Caused by Missing Runtime Evidence
&lt;/h2&gt;

&lt;p&gt;Hallucinations in AI-assisted debugging usually appear when the system does not have enough information about what actually happened during execution. This is common in production, where AI is asked to explain failures using logs, stack traces, or small pieces of code that describe symptoms but not runtime behavior.&lt;/p&gt;

&lt;p&gt;Recent research on AI reliability shows that incorrect answers increase when important contextual details are missing. In debugging scenarios, these details include execution order, timing, system state, and how frequently specific code paths were executed. Without this information, AI systems infer causes based on likelihood rather than evidence.&lt;/p&gt;

&lt;p&gt;The same pattern appears in &lt;a href="https://arxiv.org/pdf/2505.04441" rel="noopener noreferrer"&gt;studies on AI-driven debugging and code repair&lt;/a&gt;. When models are given execution traces or feedback from real runs, fault localization and fix accuracy improve. When this runtime information is absent, models often produce explanations and fixes that appear reasonable but fail to address the real cause of the issue.&lt;/p&gt;

&lt;p&gt;Prompt refinement does not address this limitation. Clearer prompts help structure responses, but they do not introduce new facts. If execution data is missing, the model still reasons without evidence about how the system behaved.&lt;/p&gt;

&lt;p&gt;In production debugging, hallucinations are therefore expected. They occur when AI systems are asked to explain failures they cannot observe, not because the reasoning process is flawed, but because the necessary runtime evidence is absent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Missing Context in AI Debugging Workflows
&lt;/h2&gt;

&lt;p&gt;Most AI debugging workflows rely on the same signals engineers have used for years. These signals are useful, but they describe outcomes, not execution, which creates a gap between what failed and why it failed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What AI usually receives today&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logs:&lt;/strong&gt; Logs capture messages emitted by code paths that were explicitly instrumented. They are selective, often incomplete, and rarely reflect execution order, frequency, or timing across concurrent requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stack traces:&lt;/strong&gt; Stack traces show where an error surfaced, not how the system reached that state. They lack information about prior execution paths, state changes, and interactions with other components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics:&lt;/strong&gt; Metrics summarize system behavior at an aggregate level. They indicate that something is slow or failing, but they do not identify which functions caused the issue or how behavior changed over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is missing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function level execution behavior:&lt;/strong&gt; Which functions ran, how often they executed, and how long they took under real load conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime performance characteristics:&lt;/strong&gt; Execution timing, concurrency effects, retries, and resource contention that emerge only during live operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection between user impact and code:&lt;/strong&gt; Clear linkage between affected endpoints or workflows and the exact functions responsible for the observed behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When AI reasons over incomplete signals, it cannot establish causality. Proposed fixes are derived from statistical patterns rather than observed execution, which often results in changes that compile or deploy successfully but do not resolve the underlying issue. Effective debugging requires visibility into execution behavior, not only error reports or surface-level symptoms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh59kapx7jr4l0k42ond.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh59kapx7jr4l0k42ond.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining Production-Aware AI
&lt;/h2&gt;

&lt;p&gt;Consider a common production incident. An API endpoint becomes slow after a deployment. Logs show no errors. Metrics show increased latency. The code itself looks unchanged or correct. An AI system reviewing this information can suggest several possible causes, such as a database query, a cache miss, or an external dependency. Each suggestion sounds reasonable, but none is confirmed.&lt;/p&gt;

&lt;p&gt;This is where production awareness matters. A production-aware AI does not rely only on aggregated metrics or isolated log lines. It reasons using information about how the system actually executed under real traffic. It can see which functions ran more often than before, where execution time increased, and which code paths were exercised during the slowdown.&lt;/p&gt;

&lt;p&gt;Production-aware AI is defined by the context it uses. It grounds reasoning in runtime behavior rather than static structure. It focuses on how functions are executed, how often they ran, and how their performance changes over time, instead of relying only on what the code looks like or what developers expect it to do.&lt;/p&gt;

&lt;p&gt;This approach changes the quality of debugging. Instead of proposing likely explanations, the AI reasons from observed execution evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Function-Level Runtime Intelligence Changes AI Debugging
&lt;/h2&gt;

&lt;p&gt;Function-level runtime intelligence gives AI direct visibility into how software behaves while it is running. This visibility changes debugging from interpreting symptoms to analyzing execution.&lt;/p&gt;

&lt;p&gt;Instead of inferring behavior from secondary signals, AI can reason using execution facts collected in real time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Function-level data as the missing signal:&lt;/strong&gt; Function-level data shows which functions executed, how frequently they ran, and how long they took under real load. This information allows AI to identify abnormal behavior at the exact point where performance or correctness changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linking endpoints to execution paths:&lt;/strong&gt; Runtime intelligence connects external symptoms to internal execution. When an HTTP endpoint slows down, or a queue backs up, AI can trace the issue to the specific functions involved, rather than reasoning only at the service or request level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal awareness across deployments:&lt;/strong&gt; By comparing runtime behavior before and after a deployment, AI can identify which functions changed execution characteristics. This makes regressions visible without relying on alerts or manual comparison.&lt;/li&gt;
&lt;/ul&gt;
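&lt;p&gt;As a rough illustration of the deployment-comparison idea, flagging a regression reduces to a ratio check over function-level timing samples. The function names and numbers below are invented for illustration and are not Hud's actual data format:&lt;/p&gt;

```python
# Hypothetical mean duration (ms) per function, before and after a deploy.
before = {"parse_order": 2.1, "check_inventory": 5.0, "charge_card": 40.0}
after = {"parse_order": 2.2, "check_inventory": 55.0, "charge_card": 41.0}

def regressions(before, after, threshold=2.0):
    """Return functions whose mean duration grew more than threshold-fold."""
    flagged = {}
    for name, new_ms in after.items():
        old_ms = before.get(name)
        if old_ms and new_ms / old_ms > threshold:
            flagged[name] = round(new_ms / old_ms, 1)
    return flagged

# Only check_inventory crossed the threshold in this made-up data.
print(regressions(before, after))
```

A runtime intelligence layer does this continuously and at scale, but the underlying signal is the same: per-function execution behavior compared across versions.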

&lt;h2&gt;
  
  
  How Hud Enables Production-Aware AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1layxsapduf33orzdqxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1layxsapduf33orzdqxh.png" alt="Image3" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.hud.io/" rel="noopener noreferrer"&gt;Hud&lt;/a&gt; captures function-level execution behavior directly from production systems. Instead of relying on aggregated metrics, sampled traces, or predefined alert rules, it observes how individual functions execute under real traffic, including errors and performance changes. &lt;/p&gt;

&lt;p&gt;This execution data can be consumed directly by engineers and AI systems to reason about production behavior based on observed runtime evidence.&lt;/p&gt;

&lt;p&gt;Below are the core capabilities that allow Hud to provide production-aware runtime context for AI-assisted debugging.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runtime code sensing at the function level:&lt;/strong&gt; &lt;a href="https://docs.hud.io/docs/installation-guide" rel="noopener noreferrer"&gt;Hud acts as a runtime code sensor&lt;/a&gt;. You get continuous function-level execution data from production, without manual instrumentation or ongoing maintenance. This data reflects how code actually runs under real traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic detection of errors and slowdowns:&lt;/strong&gt; Hud automatically detects errors and performance degradations based on changes in runtime behavior, not static rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linking user impact to code:&lt;/strong&gt; When an endpoint slows down, or a queue backs up, Hud connects that business-level symptom directly to the functions responsible. You can see which parts of the code caused the impact, not just where it surfaced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-deployment behavior comparison:&lt;/strong&gt; Hud automatically detects deployments and compares function behavior across versions. You can see what changed in production after a release and identify regressions without manual diffing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime context for AI debugging:&lt;/strong&gt; Hud provides a full forensic runtime context that you can use inside the IDE or pass to &lt;a href="https://docs.hud.io/docs/hud-mcp-server" rel="noopener noreferrer"&gt;AI agents through its MCP server&lt;/a&gt;. This allows AI to reason from execution evidence instead of guessing from partial signals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/JoOhI6QF6Zs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Without visibility into how code actually ran in production, AI systems reason over symptoms instead of causes, which leads to incorrect or incomplete fixes. Production systems demand runtime grounded reasoning, where function-level behavior, execution timing, and real traffic conditions are first-class inputs. &lt;/p&gt;

&lt;p&gt;When AI is given this level of visibility, hallucination decreases, and confidence aligns with correctness. Production-aware AI is therefore not an optimization, but a requirement for reliable debugging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.hud.io/docs/what-you-can-do-with-hud" rel="noopener noreferrer"&gt;Hud&lt;/a&gt; gives you function-level runtime visibility directly from production, with no configuration and no maintenance. Explore &lt;a href="https://www.hud.io/" rel="noopener noreferrer"&gt;how Hud works&lt;/a&gt;, &lt;a href="https://docs.hud.io/" rel="noopener noreferrer"&gt;read the documentation&lt;/a&gt;, or &lt;a href="https://www.hud.io/book-a-demo/" rel="noopener noreferrer"&gt;book a demo&lt;/a&gt; to see how production-aware debugging changes the way you and your AI systems understand failures.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>mcp</category>
      <category>llm</category>
    </item>
    <item>
      <title>Build a Semantic Movie Discovery App with Claude Code and Weaviate Agent Skills</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Fri, 27 Mar 2026 20:45:45 +0000</pubDate>
      <link>https://dev.to/studio1hq/build-a-semantic-movie-discovery-app-with-claude-code-and-weaviate-agent-skills-30gd</link>
      <guid>https://dev.to/studio1hq/build-a-semantic-movie-discovery-app-with-claude-code-and-weaviate-agent-skills-30gd</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Agentic coding is becoming more versatile as new tools such as Model Context Protocol (MCP) servers and Agent Skills become more common. At the same time, many developers ask the same question when building AI applications: should they use MCP servers or Agent Skills? The answer starts with understanding what each approach does well and choosing the one that fits your use case.&lt;/p&gt;

&lt;p&gt;In this post, we’ll explain what MCP servers and Agent Skills are and how they differ, including architecture diagrams and technical details. In the later sections, we’ll also walk through how to use &lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt; with &lt;a href="https://code.claude.com/docs/en/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; to build a “Semantic Movie Discovery” application with several useful features.&lt;/p&gt;

&lt;p&gt;Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding MCP
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; (MCP) is an open standard introduced by Anthropic that enables Large Language Models (LLMs) to interact with external systems such as data sources, APIs and services. MCP provides a structured way for an &lt;a href="https://weaviate.io/agentic-ai" rel="noopener noreferrer"&gt;AI agent&lt;/a&gt; to connect to compliant tools through a single interface instead of requiring custom integrations for each service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqfus3ya7jofj8kchzml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqfus3ya7jofj8kchzml.png" alt="MCP Architecture " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Architecture
&lt;/h3&gt;

&lt;p&gt;The MCP system operates on a client–server model and consists of three main components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Host:&lt;/strong&gt; the application that runs the AI model and provides the environment where the agent operates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client:&lt;/strong&gt; the protocol connector inside the host that handles communication between the model and MCP servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server:&lt;/strong&gt; an external service that exposes tools, resources, or prompts that the agent can access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  MCP and Agentic Coding
&lt;/h3&gt;

&lt;p&gt;Before MCP, each AI tool required custom integrations for every external service it wanted to connect to. MCP simplifies this process by introducing a shared protocol that multiple agents and tools can use.&lt;/p&gt;

&lt;p&gt;Developers can now expose capabilities through an MCP server once and allow any compatible agent to access them without building separate integrations for each system.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Agent Skills&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt;, also introduced by Anthropic, give developers a simple way to extend AI coding agents without running MCP servers. An Agent Skill is a structured configuration file, usually a Markdown file with YAML metadata, that defines capabilities, parameter schemas and natural-language instructions describing how the agent should use those capabilities.&lt;/p&gt;

&lt;p&gt;AI tools such as Claude Code read these files at session start and load the skills directly into the agent's working context without requiring an additional runtime.&lt;/p&gt;
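&lt;p&gt;A skill file is small enough to show in full. The following is an illustrative sketch, not one of Weaviate's published skills; the YAML frontmatter fields and the &lt;code&gt;.claude/skills/&lt;/code&gt; location follow the Agent Skills convention described above:&lt;/p&gt;

```markdown
---
name: weaviate-search-example
description: Guidance for querying a Weaviate collection from Python.
---

# Weaviate search (illustrative)

- Prefer hybrid search for short, keyword-like queries.
- Prefer near_text for descriptive natural-language queries.
- Always close the client connection when the task is done.
```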

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn13awyixqnmfnllmjlld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn13awyixqnmfnllmjlld.png" alt="Agent Skills with an AI tool (Claude Code)" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How Agent Skills Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When Claude Code detects a skill file in the project directory (typically under &lt;code&gt;.claude/skills/&lt;/code&gt;), it loads the manifest into the agent's context at the beginning of the session.&lt;/li&gt;
&lt;li&gt;The skill definition describes available capabilities, how to invoke them correctly and when to prefer one approach over another. Because the instructions are written in natural language alongside parameter schemas, the agent can reason about how to use the skill.&lt;/li&gt;
&lt;li&gt;Skills are portable across repositories. If a developer commits a skill file to a repository, any collaborator who clones the project and opens it in Claude Code automatically gains access to the same capabilities without additional setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCP and Agent Skills solve different problems in agent systems. MCP provides a standardized way for AI agents to connect to external tools, APIs, databases and services through a client–server architecture with structured schemas. Agent Skills extend the agent’s capabilities through configuration files that define workflows, instructions and parameter schemas without requiring a running server.&lt;/p&gt;

&lt;p&gt;In simple terms, &lt;strong&gt;MCP enables agents to access external systems, while Agent Skills define how agents perform tasks or workflows within their environment.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Weaviate Agent Skills
&lt;/h2&gt;

&lt;p&gt;Weaviate has released an official set of &lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt; designed for use with Claude Code and other compatible agent-based development environments like Cursor, Antigravity, Windsurf and more. These skills provide structured access to Weaviate vector databases, allowing agents to perform common operations such as search, querying, schema inspection, data exploration and collection management.&lt;/p&gt;

&lt;p&gt;The repository includes ready-to-use skill definitions for tasks like semantic, hybrid and keyword search, along with natural language querying through the Query Agent. It also supports workflows such as creating collections, importing data and fetching filtered results, and ships with cookbooks of end-to-end examples. This enables agents to build on Weaviate and perform multi-step retrieval and agentic tasks more effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgiqyrgy3vpbq0xxz5ej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgiqyrgy3vpbq0xxz5ej.png" alt="Weaviate Ecosystem Tools and Features" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Skills and Vector Databases
&lt;/h2&gt;

&lt;p&gt;AI coding agents face difficulties when working with vector databases. Vector database APIs provide extensive capabilities, including basic “key–value” retrieval, single-vector near-text searches, multimodal near-image searches, hybrid BM25-plus-vector search, generative modules and multi-tenant system support. Without structured guidance, even a capable coding agent may produce suboptimal queries: correct syntax but the wrong search strategy, missing parameters or failure to use powerful features like the Weaviate Query Agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://weaviate.io/blog/weaviate-agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt; address this by providing correct usage patterns, parameter recommendations and decision logic, enabling coding agents to generate production-ready code from their initial attempts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Weaviate Agent Skills repository is organized into two main parts&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facdcuqk3n68wemqdz6hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facdcuqk3n68wemqdz6hj.png" alt="Overview of Weaviate Agent Skills" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Skill&lt;/strong&gt; (skills/weaviate): Focused scripts for tasks such as schema inspection, data ingestion and vector search. Agents use these while writing application logic or backend code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cookbooks Skill&lt;/strong&gt; (skills/weaviate-cookbooks): End-to-end project examples that combine tools such as FastAPI, Next.js and Weaviate to demonstrate full application workflows.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Weaviate Agent Skills work with several development environments, including Claude Code, Cursor, GitHub Copilot, VS Code and Gemini CLI. When connected to a Weaviate Cloud instance, agents can directly interact with database modules and perform search, data management and retrieval tasks.&lt;/p&gt;

&lt;p&gt;To evaluate how effective Weaviate Agent Skills really are, let’s build a small project and see how they accelerate RAG and agentic application development with Claude Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Semantic Movie Discovery Application
&lt;/h2&gt;

&lt;p&gt;We will build a &lt;strong&gt;Movie Discovery App&lt;/strong&gt; that takes a natural-language description and returns the most semantically similar movies from a Weaviate collection. In the process, we will explore Weaviate capabilities such as multimodal storage, named vector search, generative AI (RAG) and the Query Agent in action with Claude Code, showing how these Agentic tools help you build applications faster.&lt;/p&gt;
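&lt;p&gt;The core retrieval step of the app reduces to a single near-text query. A minimal sketch with the v4 Python client, where the &lt;code&gt;Movie&lt;/code&gt; collection name and &lt;code&gt;title&lt;/code&gt; property are assumptions for illustration and the client is assumed to be already connected:&lt;/p&gt;

```python
def find_similar_movies(client, description, limit=5):
    """Semantic search over an assumed 'Movie' collection (weaviate-client v4).

    client: an already-connected weaviate.WeaviateClient instance.
    Returns the titles of the most semantically similar movies.
    """
    movies = client.collections.get("Movie")
    response = movies.query.near_text(query=description, limit=limit)
    return [obj.properties.get("title") for obj in response.objects]
```

Everything else in the tutorial (ingestion, RAG, the Query Agent) builds around this retrieval primitive.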

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;Python 3.10&lt;/a&gt; or higher&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.weaviate.io/weaviate/quickstart" rel="noopener noreferrer"&gt;Weaviate Cloud&lt;/a&gt; – Create a free cluster and obtain an API key.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.themoviedb.org/" rel="noopener noreferrer"&gt;TMDB API key&lt;/a&gt; – Used to fetch movie metadata&lt;/li&gt;
&lt;li&gt;OpenAI API key – Required for &lt;a href="https://weaviate.io/rag" rel="noopener noreferrer"&gt;RAG&lt;/a&gt; features.&lt;/li&gt;
&lt;li&gt;Access to &lt;a href="https://code.claude.com/docs/en/quickstart" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/download" rel="noopener noreferrer"&gt;Node.js 18+&lt;/a&gt; and npm – Required to run the Next.js frontend&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Project Setup
&lt;/h3&gt;

&lt;p&gt;Create a &lt;strong&gt;movie-discovery-app&lt;/strong&gt; folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;mkdir&lt;/span&gt; &lt;span class="n"&gt;movie&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;discovery&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create and activate a &lt;strong&gt;Python virtual environment&lt;/strong&gt; in the folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;movie-discovery-app py &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;source &lt;/span&gt;venv&lt;span class="se"&gt;\S&lt;/span&gt;cripts&lt;span class="se"&gt;\a&lt;/span&gt;ctivate.bat 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Python dependencies&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;weaviate-client&lt;span class="o"&gt;==&lt;/span&gt;4.20.1 fastapi uvicorn[standard] openai weaviate-agents&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;1.3.0 requests python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Node.js dependencies for the frontend&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;frontend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a &lt;code&gt;.env&lt;/code&gt; file at the project root. Add the following parameters for your &lt;strong&gt;Weaviate connection&lt;/strong&gt;, along with your &lt;strong&gt;OpenAI API key&lt;/strong&gt; and &lt;strong&gt;TMDB API key&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;WEAVIATE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;without&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;
&lt;span class="n"&gt;WEAVIATE_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;span class="n"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;span class="n"&gt;TMDB&lt;/span&gt; &lt;span class="n"&gt;API&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;your&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tmdb&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After signing up for Weaviate, click the &lt;strong&gt;Create Cluster&lt;/strong&gt; button to start a new cluster for your use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo4cx6bxr7o7xkbqyu1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo4cx6bxr7o7xkbqyu1j.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;“How to Connect”&lt;/strong&gt; to view the required Weaviate connection parameters.&lt;/p&gt;

&lt;p&gt;Now that everything is set up, we can connect Weaviate Cloud with &lt;strong&gt;Claude Code&lt;/strong&gt; by running &lt;code&gt;claude&lt;/code&gt; in your project terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9y0xh1tmthf9gp5hilm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9y0xh1tmthf9gp5hilm.png" alt="Claude Code screnshot" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the following prompt in your Claude terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Write and run &lt;span class="sb"&gt;`check_modules.py`&lt;/span&gt; that connects using &lt;span class="sb"&gt;`weaviate.connect_to_weaviate_cloud`&lt;/span&gt;with &lt;span class="sb"&gt;`skip_init_checks=True`&lt;/span&gt;, loads credentials from &lt;span class="sb"&gt;`.env`&lt;/span&gt; with &lt;span class="sb"&gt;`python-dotenv`&lt;/span&gt;,
and prints the full JSON list of enabled Weaviate modules.
Run it with &lt;span class="sb"&gt;`venv/Scripts/python check_modules.py`&lt;/span&gt;."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
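&lt;p&gt;For reference, the generated &lt;code&gt;check_modules.py&lt;/code&gt; ends up looking roughly like this. This is a sketch, not the exact output: it assumes the v4 &lt;code&gt;weaviate-client&lt;/code&gt; and reads the environment directly instead of via &lt;code&gt;python-dotenv&lt;/code&gt; to stay dependency-light:&lt;/p&gt;

```python
import json
import os

def load_config():
    # Credentials come from the .env file in the article; here we read the
    # environment directly so the helper stays stdlib-only.
    return {
        "url": os.environ["WEAVIATE_URL"],
        "api_key": os.environ["WEAVIATE_API_KEY"],
    }

def print_enabled_modules():
    # Requires weaviate-client v4 and live credentials, so it is imported
    # lazily and not invoked in this sketch.
    import weaviate
    from weaviate.classes.init import Auth

    cfg = load_config()
    client = weaviate.connect_to_weaviate_cloud(
        cluster_url=cfg["url"],
        auth_credentials=Auth.api_key(cfg["api_key"]),
        skip_init_checks=True,
    )
    # get_meta() includes the modules enabled on the cluster.
    print(json.dumps(client.get_meta().get("modules", {}), indent=2))
    client.close()
```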



&lt;h3&gt;
  
  
  Step 2: Create a Weaviate Collection and Import Sample Movie Data
&lt;/h3&gt;

&lt;p&gt;In this step, we create a Weaviate collection and import the movie dataset into Weaviate. The dataset contains movie metadata sourced from the TMDB API. Each entry includes: &lt;em&gt;title, overview, release_date, poster_url, popularity, and other important movie fields&lt;/em&gt;. You can import a JSON or CSV dataset directly into Weaviate.&lt;/p&gt;

&lt;p&gt;Run this prompt to retrieve the dataset from the TMDB API and save it to a file named &lt;em&gt;movies.json&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Create a TMDB dataset JSON file, movies.json, that contains 100 movie metadata and poster URLs directly from the TMDB API. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
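&lt;p&gt;Each record in the resulting &lt;em&gt;movies.json&lt;/em&gt; follows roughly this shape (field names from the dataset description above; the values here are illustrative, not real TMDB output):&lt;/p&gt;

```python
import json

# One illustrative record; the real file holds 100 of these from the TMDB API.
movies = [
    {
        "title": "Inception",
        "overview": "A thief steals corporate secrets through dream-sharing.",
        "release_date": "2010-07-16",
        "poster_url": "https://image.tmdb.org/t/p/w500/poster.jpg",
        "popularity": 83.7,
    },
]

# Write the dataset the same way the generated script would.
with open("movies.json", "w", encoding="utf-8") as f:
    json.dump(movies, f, indent=2)
```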



&lt;p&gt;Afterwards, the &lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate/references/import_data.md" rel="noopener noreferrer"&gt;Weaviate import skill&lt;/a&gt; creates a Weaviate collection and imports the data from &lt;em&gt;movies.json&lt;/em&gt; into the Weaviate database. Claude Code activates the skill when prompted with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Import &lt;span class="sb"&gt;`movie.json`&lt;/span&gt; into a new Weaviate collection called Movie
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
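&lt;p&gt;Under the hood, the import amounts to something along these lines. This is a v4 &lt;code&gt;weaviate-client&lt;/code&gt; sketch relying on auto-schema to infer properties from the JSON records; the skill's actual generated code will differ:&lt;/p&gt;

```python
import json

def load_movie_objects(path="movies.json"):
    # Records exported in Step 2 are already plain dicts ready for insertion.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def import_movies():
    # Needs weaviate-client v4 and live credentials, so nothing runs here.
    import os
    import weaviate
    from weaviate.classes.init import Auth

    client = weaviate.connect_to_weaviate_cloud(
        cluster_url=os.environ["WEAVIATE_URL"],
        auth_credentials=Auth.api_key(os.environ["WEAVIATE_API_KEY"]),
        skip_init_checks=True,
    )
    movies = client.collections.create("Movie")  # auto-schema infers fields
    movies.data.insert_many(load_movie_objects())
    client.close()
```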



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbeb2l8quvgqtbfmbzt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbeb2l8quvgqtbfmbzt7.png" alt="Claude Code" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then the data is imported:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihrumms8ofngypte6vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihrumms8ofngypte6vi.png" alt="Terminal Output" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Building the FastAPI Backend and Next.js Frontend with Weaviate Cookbooks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate-cookbooks/references/frontend_interface.md" rel="noopener noreferrer"&gt;Weaviate cookbooks&lt;/a&gt; enable the app to use a two-layer architecture: a FastAPI backend that exposes REST endpoints and a Next.js frontend that renders the UI. The backend connects directly to Weaviate Cloud and the Weaviate Query Agent. Weaviate cookbooks also include some frontend guidelines to communicate with the &lt;a href="https://github.com/weaviate/agent-skills/blob/main/skills/weaviate-cookbooks/references/frontend_interface.md" rel="noopener noreferrer"&gt;Weaviate backend&lt;/a&gt; over HTTP.&lt;/p&gt;

&lt;p&gt;The app is organized into two views accessed via a collapsible sidebar:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Search view&lt;/strong&gt;: performs semantic search and RAG using Weaviate named vectors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat view&lt;/strong&gt;: handles multi-turn conversations through the Weaviate Query Agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our app includes the following features:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Layer&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Role&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;backend.py (FastAPI) - REST API on port 8000, docs at /docs&lt;/td&gt;
&lt;td&gt;Routes: GET /health, GET /search, POST /ai/explain, POST /ai/plan, POST /chat&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Next.js + TypeScript (port 3000)&lt;/td&gt;
&lt;td&gt;Single-page app with sidebar navigation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;SearchView.tsx&lt;/td&gt;
&lt;td&gt;Semantic search (near_text), AI explanations (single_prompt), Movie Night Planner (grouped_task)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MovieCard.tsx&lt;/td&gt;
&lt;td&gt;Renders base64 poster inline, watchlist add/remove button&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;ChatView.tsx&lt;/td&gt;
&lt;td&gt;Multi-turn Query AI Agent chat&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;AppSidebar.tsx&lt;/td&gt;
&lt;td&gt;Navigation (Search/Chat), Weaviate logo + feature summary, watchlist manager with ‘.txt’ export&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Use the following prompts with Claude Code to generate the backend and frontend:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;/weaviate cookbooks 

Create &lt;span class="sb"&gt;`backend.py`&lt;/span&gt;: a FastAPI app with CORS enabled for localhost:3000.
Connect to Weaviate Cloud using credentials from .env with skip_init_checks=True.
The /search endpoint should return genre and vote_average alongside title, description, release_year, and poster.
Implement these routes:  
&lt;span class="p"&gt;
-&lt;/span&gt; GET  /health                  → {"status": "ok"}  
&lt;span class="p"&gt;-&lt;/span&gt; GET  /search?q=...&amp;amp;limit=3    → near_text on text_vector, return title/description/release_year/poster  
&lt;span class="p"&gt;-&lt;/span&gt; POST /ai/explain              → generate.near_text with single_prompt  
&lt;span class="p"&gt;-&lt;/span&gt; POST /ai/plan                 → generate.near_text with grouped_task  
&lt;span class="p"&gt;-&lt;/span&gt; POST /chat                    → QueryAgent.ask() with full message history

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
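&lt;p&gt;Stripped of the Weaviate wiring, the route surface described in this prompt reduces to handlers like these. This is a stdlib-only sketch with canned data so the response shapes are visible; the generated &lt;code&gt;backend.py&lt;/code&gt; uses FastAPI decorators and live &lt;code&gt;near_text&lt;/code&gt; queries instead:&lt;/p&gt;

```python
def health():
    # GET /health
    return {"status": "ok"}

def search(q, limit=3):
    # GET /search?q=...&limit=3 -- backend.py runs near_text on the Movie
    # collection here; this canned hit just shows the fields the frontend
    # expects, including the genre and vote_average extras from the prompt.
    hits = [{
        "title": "Example Movie",
        "description": "Placeholder overview.",
        "release_year": 2010,
        "poster": None,
        "genre": "Drama",
        "vote_average": 7.8,
    }]
    return {"query": q, "results": hits[:limit]}
```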



&lt;p&gt;&lt;strong&gt;Frontend Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Using Weaviate cookbooks frontend reference, create a Next.js TypeScript app in the frontend/ folder.
MovieCard.tsx should display a star rating (vote_average) and genre tag beneath the movie title. 

Components needed:  
&lt;span class="p"&gt;
-&lt;/span&gt; page.tsx        — SidebarProvider layout, view state (search | chat)  
&lt;span class="p"&gt;-&lt;/span&gt; SearchView.tsx  — search input, MovieCard grid, AI explain and plan buttons  
&lt;span class="p"&gt;-&lt;/span&gt; MovieCard.tsx   — poster image, title, year, description, watchlist button  
&lt;span class="p"&gt;-&lt;/span&gt; ChatView.tsx    — message bubbles, source citations, clear chat  
&lt;span class="p"&gt;-&lt;/span&gt; AppSidebar.tsx  — navigation, Weaviate logo + feature list, watchlist + exportBackend base URL from NEXT_PUBLIC_BACKEND_HOST env var (default localhost:8000)

Run backend and frontend servers with: uvicorn backend:app --reload --port 8000 and npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, Claude Code automatically builds the app, adding the relevant files, and starts both servers. You can start using the application immediately.&lt;/p&gt;

&lt;p&gt;The FastAPI backend runs at &lt;code&gt;http://localhost:8000/docs&lt;/code&gt;, while the frontend app is available at &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can also manually start both processes in separate terminals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Terminal 1 — Backend &lt;/span&gt;
uvicorn backend:app &lt;span class="nt"&gt;--reload&lt;/span&gt; &lt;span class="nt"&gt;--port&lt;/span&gt; 8000
&lt;span class="c"&gt;# Terminal 2 — Frontend&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;frontend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Congratulations! You’ve completed the project without needing to do much manual configuration or coding.&lt;/strong&gt; 🔥&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;So far, we have used Weaviate Agent Skills with Claude Code to build a Semantic Movie Discovery Application backed by Weaviate, the OpenAI API, and the TMDB API.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/4udXaqI0PaQ"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Movie Discovery app we built includes the following features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search:&lt;/strong&gt; Describe a mood or theme and retrieve matching movies using vector-based search (&lt;code&gt;near_text&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI explanations:&lt;/strong&gt; Generate per-movie summaries using RAG with &lt;code&gt;single_prompt&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Movie Night Planner:&lt;/strong&gt; Create a viewing order, snack pairings and a theme summary using &lt;code&gt;grouped_task&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversational chat:&lt;/strong&gt; Ask questions about the movie collection through a chat interface powered by the Weaviate Query Agent, with source citations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watchlist:&lt;/strong&gt; Save movies during your session and export the list as a &lt;code&gt;.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;
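&lt;p&gt;The watchlist export is simple enough to sketch. The app does this client-side in &lt;code&gt;AppSidebar.tsx&lt;/code&gt;, but the logic amounts to joining saved titles into a text blob, one per line (Python used here for illustration; the function name is hypothetical):&lt;/p&gt;

```python
def export_watchlist(titles, path="watchlist.txt"):
    # One movie title per line, matching the .txt export in the sidebar.
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(titles) + "\n")
    return path

export_watchlist(["Inception", "Arrival"])
```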

&lt;h3&gt;
  
  
  What’s Next?
&lt;/h3&gt;

&lt;p&gt;You could add image-based search to find visually similar movies and better match viewer preferences. You could also add hybrid search, which combines keyword matching with vector search to handle keyword-heavy queries.&lt;/p&gt;

&lt;p&gt;You can take your app even further by getting up to speed with Weaviate’s latest &lt;a href="https://weaviate.io/blog" rel="noopener noreferrer"&gt;releases&lt;/a&gt; and becoming familiar with features such as server-side batching, async replication improvements, Object TTL and many more.&lt;/p&gt;

&lt;p&gt;To explore further, join the discussion on the &lt;a href="https://forum.weaviate.io/" rel="noopener noreferrer"&gt;community forum&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Weaviate Agent Skills in Action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The following Weaviate modules and skills were used in the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text2vec-weaviate:&lt;/strong&gt; Responsible for text embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi2multivec-weaviate:&lt;/strong&gt; Responsible for embedding images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generative-openai:&lt;/strong&gt; Integrates GPT directly into the query workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Skill:&lt;/strong&gt; Creates a collection and imports data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Cookbooks Skill:&lt;/strong&gt; For defining the app’s logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weaviate Query Agent:&lt;/strong&gt; A higher-level abstraction that accepts natural language queries, decides the best query method, executes queries, synthesizes results and returns answers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Weaviate Agent Skills help in shipping faster and more accurate RAG applications. Backend development tasks such as schema inspection, data ingestion and search operations are automated and optimized. Ultimately, this helps developers save valuable development time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both MCP servers and Agent Skills provide useful patterns for building AI-powered applications. MCP servers are well-suited for exposing external tools and services through a standardized interface, while Agent Skills focus on guiding coding agents with structured workflows and best practices.&lt;/p&gt;

&lt;p&gt;In this tutorial, we demonstrated how Weaviate Agent Skills can simplify development by helping Claude Code generate correct database queries, ingestion pipelines and search logic. By combining vector search, multimodal storage and generative capabilities, we built a semantic movie discovery application with minimal manual setup.&lt;/p&gt;

&lt;p&gt;As agentic development environments continue to evolve, tools like MCP servers and Agent Skills will likely be used together. The key is understanding where each approach fits and selecting the one that best supports your application architecture.&lt;/p&gt;

&lt;p&gt;Happy building.&lt;/p&gt;




&lt;h3&gt;
  
  
  Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io/docs/getting-started/intro" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/weaviate/agent-skills" rel="noopener noreferrer"&gt;Weaviate Agent Skills&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Studio1HQ/movie-discovery-app" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt; for the Movie Discovery App&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>rag</category>
      <category>webdev</category>
    </item>
    <item>
      <title>We Cut Our MCP Token Spend in Half. Here's the Architecture</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Wed, 25 Mar 2026 19:04:52 +0000</pubDate>
      <link>https://dev.to/studio1hq/we-cut-our-mcp-token-spend-in-half-heres-the-architecture-1jic</link>
      <guid>https://dev.to/studio1hq/we-cut-our-mcp-token-spend-in-half-heres-the-architecture-1jic</guid>
      <description>&lt;p&gt;When we started scaling our MCP workflows, token usage was something we barely tracked. The system worked well, responses were accurate, and adding more tools felt like the right next step. Over time, the cost began rising in ways that did not align with how much the system was actually used.&lt;/p&gt;

&lt;p&gt;At first, we assumed this was due to higher usage or more complex queries. The data showed something else. Even simple requests were using more tokens than expected. This led us to ask a basic question. What exactly are we sending to the LLM on every call?&lt;/p&gt;

&lt;p&gt;A closer look made things clearer. The issue came from how the system was built. We handled context, tool definitions, and execution flow by adding extra tokens at every step.&lt;/p&gt;

&lt;p&gt;This article explains how we found the root cause and redesigned the architecture to fix it. The changes cut our MCP token usage by nearly half and gave us better control over how the system behaves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Token Usage in MCP Systems
&lt;/h2&gt;

&lt;p&gt;Once we started examining token usage, a clear pattern showed up. The LLM was receiving far more context than most requests actually needed. A large part of this came from tool definitions being sent repeatedly on every call.&lt;/p&gt;

&lt;p&gt;Each request included the full list of tools, even when only one or two were needed. On top of that, earlier outputs and intermediate results were passed back into the model. The context kept growing, even for simple queries.&lt;/p&gt;

&lt;p&gt;The execution flow added to the problem. The LLM would choose a tool, call it, process the result, and then repeat the same cycle if another step was needed. Each step added more tokens, and the same data often appeared many times across calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraya207lc4ie4r2yqsd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraya207lc4ie4r2yqsd2.png" alt="Image1" width="800" height="1422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup worked at a small scale. As the number of tools increased, the cost grew quickly. More tools meant more context. More steps meant repeated processing. The system was doing extra work without adding real value. At this point, the cause was clear. Token usage came from how the system handled context and execution. The design itself was driving the overhead.&lt;/p&gt;
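&lt;p&gt;A back-of-the-envelope model makes the growth concrete. The per-item token counts below are made-up round numbers, not measurements, but the shape of the cost is the point:&lt;/p&gt;

```python
# Illustrative numbers only: compare total prompt tokens when the full tool
# catalog and a growing history are resent on every step (classic loop)
# versus one planning call that reads only the tools it needs.
TOOL_SCHEMA_TOKENS = 300   # per tool definition
HISTORY_STEP_TOKENS = 400  # tokens each intermediate result adds to context

def classic_loop_tokens(num_tools, steps):
    total = 0
    for step in range(steps):
        catalog = num_tools * TOOL_SCHEMA_TOKENS  # all tools, every call
        history = step * HISTORY_STEP_TOKENS      # context keeps growing
        total += catalog + history
    return total

def single_pass_tokens(tools_used, planning_calls=1):
    # Only the needed tool definitions, read once at planning time.
    return planning_calls * tools_used * TOOL_SCHEMA_TOKENS

print(classic_loop_tokens(num_tools=20, steps=4))  # 26400
print(single_pass_tokens(tools_used=2))            # 600
```

With 20 registered tools and a four-step workflow, the loop pays for the catalog four times over; the single-pass plan pays for two tool definitions once.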

&lt;h2&gt;
  
  
  Introducing Bifrost
&lt;/h2&gt;

&lt;p&gt;We started looking for a way to change how the system handled tool execution. The goal was simple. Reduce the amount of context sent to the LLM and avoid repeated processing across steps.&lt;/p&gt;

&lt;p&gt;During this process, we came across &lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt;, an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open source&lt;/a&gt; MCP gateway. It works between the application, the model, and the tools. It brings structure for how tools are discovered and executed, so the LLM receives only what is needed on each call.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhnphaglsh5ymggy61oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhnphaglsh5ymggy61oe.png" alt="Image" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This changed how we thought about the system. Tool access became more controlled. Context stayed limited to what was required for each request. The overall flow of execution became easier to follow and reason about.&lt;/p&gt;

&lt;p&gt;These changes directly addressed the issues we were seeing. Tool definitions were sent only when required. Repeated decision loops were reduced. The system handled execution in a more controlled and predictable way.&lt;/p&gt;

&lt;p&gt;From here, the focus moved away from adjusting prompts and toward changing how the system runs end-to-end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural Changes with Bifrost Code Mode
&lt;/h2&gt;

&lt;p&gt;The main change came from how execution was handled inside Bifrost. &lt;a href="https://docs.getbifrost.ai/mcp/code-mode" rel="noopener noreferrer"&gt;Code Mode&lt;/a&gt; is a Bifrost feature that changes how the LLM interacts with MCP tools. Earlier, the LLM handled both planning and step-by-step tool interaction. Each step required another call, and each call carried a growing context.&lt;/p&gt;

&lt;p&gt;Code Mode separates these responsibilities. The LLM focuses on planning. It generates executable code that defines the full workflow for a task. &lt;/p&gt;

&lt;p&gt;Code Mode works best when multiple MCP servers are involved, workflows have several steps, or tools need to share data. For simpler setups with one or two tools, Classic MCP works well.&lt;/p&gt;

&lt;p&gt;A mixed setup also works. Use Code Mode for heavier workflows like search or databases, and keep simple tools as direct calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz78lp878cwfdmchwomm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz78lp878cwfdmchwomm.png" alt="Image2" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The generated plan includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selecting the right tools&lt;/li&gt;
&lt;li&gt;Passing data between tools&lt;/li&gt;
&lt;li&gt;Defining how the final output is produced&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system exposes a minimal interface to the LLM. It can list available tools, read tool details, and, when required, understand how each tool works. Tool definitions are accessed on demand, which keeps the initial context small.&lt;/p&gt;

&lt;p&gt;Once the plan is generated, execution moves to a runtime environment. The code runs in a sandbox and interacts directly with tools. All intermediate steps, tool responses, and data transformations stay within this layer.&lt;/p&gt;

&lt;p&gt;This removes the need for repeated LLM calls during execution. The workflow runs in one pass, guided by the generated code. The LLM is involved mainly at the planning stage and for producing the final response if required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawpurvuv48ogzbgr1rdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawpurvuv48ogzbgr1rdu.png" alt="Image" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow becomes more structured. A request comes in, relevant tools are identified, code is generated, and execution happens in a controlled environment. The system handles state and intermediate data outside the LLM.&lt;/p&gt;

&lt;p&gt;This approach improves clarity in how tasks are executed. The generated code can be inspected, debugged, and understood directly. Each request follows a defined path, which makes behavior easier to track and reason about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Bifrost CLI in Our Workflow
&lt;/h2&gt;

&lt;p&gt;Getting started required two commands. First, start the gateway:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then launch the CLI from a separate terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MCP servers are registered once through the API. The key flag is &lt;code&gt;is_code_mode_client&lt;/code&gt;, which tells Bifrost to handle that server through Code Mode instead of sending its tool definitions on every request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8080/api/mcp/client &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "name": "youtube",
    "connection_type": "http",
    "connection_string": "http://localhost:3001/mcp",
    "tools_to_execute": ["*"],
    "is_code_mode_client": true
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once registered, the LLM discovers tools on demand using &lt;code&gt;listToolFiles&lt;/code&gt; and &lt;code&gt;readToolFile&lt;/code&gt;, then submits a full execution plan through &lt;code&gt;executeToolCode&lt;/code&gt;. A workflow that previously took six LLM turns now completes in three to four.&lt;/p&gt;

&lt;p&gt;Bifrost organizes tool definitions using two binding levels. Server-level (default) groups all tools from a server into one &lt;code&gt;.pyi&lt;/code&gt; file. Tool-level gives each tool its own file — better for servers with 30+ tools. Set it once in &lt;code&gt;config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tool_manager_config"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"code_mode_binding_level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"server"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Debugging became simpler because the generated code is the execution plan. When something went wrong, the issue was visible directly in the code rather than buried in prompt chains. This setup also made execution easier to inspect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;youtube&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI infrastructure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;maxResults&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;titles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;snippet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;items&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;titles&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;titles&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;count&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;titles&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The execution runs in a Starlark interpreter, a restricted subset of Python. A few constraints to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No import statements, file I/O, or network access&lt;/li&gt;
&lt;li&gt;Classes are not supported, use dictionaries&lt;/li&gt;
&lt;li&gt;Tool calls run synchronously; async handling is not required&lt;/li&gt;
&lt;li&gt;Each tool call has a default timeout of 30 seconds&lt;/li&gt;
&lt;/ul&gt;
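&lt;p&gt;A useful sanity check: a plan that fits these constraints is also valid plain Python with no imports. For example, with illustrative data standing in for a tool response:&lt;/p&gt;

```python
# No imports, no classes, no I/O: dictionaries and loops only, mirroring
# what the Starlark sandbox accepts. In a real plan, `response` would be
# the return value of a tool call rather than a literal.
response = {"items": [{"id": "a1", "views": 1200}, {"id": "b2", "views": 400}]}

popular = []
for item in response["items"]:
    if item["views"] >= 1000:
        popular.append(item["id"])

result = {"popular": popular, "count": len(popular)}
```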

&lt;p&gt;Code Mode also works with &lt;a href="https://docs.getbifrost.ai/mcp/agent-mode" rel="noopener noreferrer"&gt;Agent Mode&lt;/a&gt; for automated workflows. The &lt;code&gt;listToolFiles&lt;/code&gt; and &lt;code&gt;readToolFile&lt;/code&gt; tools are always auto-executable since they are read-only. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;executeToolCode&lt;/code&gt; tool only auto-executes if every tool call within the generated code is on the approved list. If any call falls outside that list, Bifrost returns it to the user for approval before running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on Token Usage and System Efficiency
&lt;/h2&gt;

&lt;p&gt;The reduction in token usage came from four specific changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tool schemas were sent only when required&lt;/li&gt;
&lt;li&gt;Intermediate outputs stayed within the execution layer&lt;/li&gt;
&lt;li&gt;Repeated context across steps was removed&lt;/li&gt;
&lt;li&gt;Fewer LLM calls were needed, since execution moved to a sandbox and ran in a single flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes had a clear effect. Token usage dropped by nearly half. Latency reduced along with it. Execution became more predictable, since each request followed a defined path with fewer moving parts.&lt;/p&gt;

&lt;p&gt;The broader takeaway is clear. Token cost comes from system design. Small changes in prompts or outputs help at the edges. The main overhead comes from the system's structure.&lt;/p&gt;

&lt;p&gt;LLMs work best when they focus on planning. Managing execution through repeated loops adds cost and introduces variability. A separate execution layer keeps the flow stable and easier to understand. Context also needs careful control. It should be built for each request with only the required information. Letting it grow across steps results in unnecessary overhead and increased token usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Token inefficiency in MCP workflows comes from system design. Bifrost and Code Mode introduced a clear separation between planning and execution. The LLM handles planning, and the runtime handles execution. This brought immediate and measurable improvements in both cost and system behavior.&lt;/p&gt;

&lt;p&gt;If you are working with MCP workflows at scale, &lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is worth exploring. The &lt;a href="https://docs.getbifrost.ai/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; provides a good starting point to set up the gateway, connect servers, and run workflows using Code Mode.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Managing Multi Provider AI Workflows in the Terminal with Bifrost CLI</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Sat, 21 Mar 2026 10:52:18 +0000</pubDate>
      <link>https://dev.to/studio1hq/managing-multi-provider-ai-workflows-in-the-terminal-with-bifrost-cli-ece</link>
      <guid>https://dev.to/studio1hq/managing-multi-provider-ai-workflows-in-the-terminal-with-bifrost-cli-ece</guid>
      <description>&lt;p&gt;Command-line tools are still a common way to work with AI. They give better control and fit naturally into everyday workflows, which is why many people continue to use them.&lt;/p&gt;

&lt;p&gt;A common issue with CLI-based tools is that they are often tied to a single provider. Switching between options usually means updating configs and handling multiple API keys. In some cases, it may even involve changing tools. This can slow things down and make everyday work feel a bit frustrating.&lt;/p&gt;

&lt;p&gt;Bifrost CLI aims to simplify this setup. It provides a single way to connect CLI tools to multiple providers, without changing how the tools are used. In this article, we will look at how it works and how to get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.getmaxim.ai/bifrost" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open-source AI gateway&lt;/a&gt; that works between applications and model providers. It offers provider-compatible endpoints such as OpenAI, Anthropic, and Gemini formats. It manages request routing, API keys, and response formatting in one place, so separate setups for each provider are not required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.getbifrost.ai/quickstart/cli/getting-started" rel="noopener noreferrer"&gt;Bifrost CLI&lt;/a&gt; was recently released to extend this setup to command-line workflows. It allows existing CLI tools to connect through the Bifrost gateway in place of calling providers directly. The CLI tool continues to work in the same way, with only the endpoint updated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq1juk7uh66enws74o00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq1juk7uh66enws74o00.png" alt="Bitfrost CLI" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CLI tool is configured with Bifrost as the base URL. After this, all requests go through the gateway. Bifrost routes each request to the selected provider, converts it into the required API format, and returns a compatible response. The CLI workflow stays the same, with support for multiple providers through a single endpoint.&lt;/p&gt;
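&lt;p&gt;In practice, "configured with Bifrost as the base URL" usually comes down to overriding one environment variable before launching the tool. The variable name varies by tool; &lt;code&gt;OPENAI_BASE_URL&lt;/code&gt; below is a common convention for OpenAI-compatible tools, not something specific to Bifrost CLI:&lt;/p&gt;

```shell
# Route an OpenAI-compatible CLI tool through the local Bifrost gateway.
# The exact variable name depends on the tool; check its documentation.
export OPENAI_BASE_URL="http://localhost:8080/v1"
```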

&lt;h2&gt;
  
  
  Key Features of Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI brings several practical features that improve how CLI-based workflows are set up and managed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Setup for CLI Tools:&lt;/strong&gt; Configures base URLs, API keys, and model settings for each agent. This reduces manual steps and keeps the environment ready to use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Discovery from Gateway:&lt;/strong&gt; Fetches available models directly from the Bifrost gateway using the &lt;code&gt;/v1/models&lt;/code&gt; endpoint. This ensures the CLI always reflects the current set of available options.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Integration for Tool Access:&lt;/strong&gt; Attaches Bifrost’s MCP server to tools like Claude Code. This allows access to external tools and extended capabilities from within the CLI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Activity Indicators:&lt;/strong&gt; Displays an activity badge for each tab, making it easy to see whether a session is running, idle, or has triggered an alert.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Credential Storage:&lt;/strong&gt; Stores selections and keys securely. Virtual keys are saved in the OS keyring and are not written in plain text on disk.&lt;/li&gt;
&lt;/ul&gt;
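&lt;p&gt;Model discovery relies on the OpenAI-style &lt;code&gt;/v1/models&lt;/code&gt; list format. Parsing such a response takes only a few lines; the payload below is a hard-coded illustration, not real gateway output:&lt;/p&gt;

```python
import json

# OpenAI-style /v1/models response (sample data, not a live call)
payload = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4o", "object": "model"},
    {"id": "claude-sonnet-4", "object": "model"}
  ]
}
""")

# Collect the model ids to show in a selectable list
model_ids = [m["id"] for m in payload["data"]]
```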

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is quick to set up and runs directly from the terminal. The flow includes starting the gateway, launching the CLI, and selecting the agent and model through a guided setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start the Bifrost Gateway
&lt;/h3&gt;

&lt;p&gt;Make sure the gateway is running locally (default: &lt;code&gt;http://localhost:8080&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install and Launch Bifrost CLI
&lt;/h3&gt;

&lt;p&gt;In a new terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8d7qf1ycaa6vmnnnu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhk8d7qf1ycaa6vmnnnu1.png" alt="Terminal" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have already installed the CLI, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Enter Gateway Details
&lt;/h3&gt;

&lt;p&gt;Provide the Bifrost endpoint URL.&lt;/p&gt;

&lt;p&gt;For local setup, this is usually: &lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If authentication is enabled, you can also enter a virtual key at this stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Choose a CLI Agent
&lt;/h3&gt;

&lt;p&gt;Select the CLI agent you want to use, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codex CLI&lt;/li&gt;
&lt;li&gt;Claude Code&lt;/li&gt;
&lt;li&gt;Gemini CLI&lt;/li&gt;
&lt;li&gt;Opencode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CLI shows which agents are available and can install missing ones during setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg89ikeylge3qwp6wpyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg89ikeylge3qwp6wpyc.png" alt="CLI UI" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Select a Model
&lt;/h3&gt;

&lt;p&gt;The CLI fetches available models from the gateway and shows them in a searchable list.&lt;/p&gt;

&lt;p&gt;You can choose one directly or enter a model name manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsicwoqe9jep2tpd8jt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsicwoqe9jep2tpd8jt9.png" alt="Choose model name" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Launch the Session
&lt;/h3&gt;

&lt;p&gt;Review the configuration and start the session. The selected agent runs with the chosen model and setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Work with Sessions
&lt;/h3&gt;

&lt;p&gt;After launch, the CLI stays open in a tabbed interface.&lt;/p&gt;

&lt;p&gt;You can open new sessions, switch between them, or close them without restarting the CLI. Each tab shows the current activity state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Bifrost CLI Session Flow
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is built for repeated, session-based use in the terminal. You can switch between runs, update settings, and continue your work without having to go through the full setup again each time. &lt;/p&gt;

&lt;p&gt;Here are the key steps in the session flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3226ybyduzng3dmsakr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3226ybyduzng3dmsakr0.png" alt="Session flow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Launch:&lt;/strong&gt; Select the agent and model, then start the session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work:&lt;/strong&gt; Use the agent as usual. All requests go through Bifrost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Switch Sessions:&lt;/strong&gt; Press &lt;code&gt;Ctrl + B&lt;/code&gt; to open the tab bar, switch between sessions, or start a new one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Return:&lt;/strong&gt; When a session ends, the CLI returns to the setup screen with the previous configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relaunch:&lt;/strong&gt; Change the agent or model, or rerun the same setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence:&lt;/strong&gt; The last configuration is saved and shown the next time the CLI starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Working with Multiple Models
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI makes it easy to work with different models from the same setup. You do not need to change configurations or restart the tool each time you want to try a different option.&lt;/p&gt;

&lt;p&gt;During setup, the CLI fetches available models from the Bifrost gateway and shows them in a list. You can select one directly or enter a model name if you already know what you want to use.&lt;/p&gt;

&lt;p&gt;If you want to try another model, you can start a new session and choose a different one. Each session runs separately, so you can compare outputs or test different setups side by side.&lt;/p&gt;

&lt;p&gt;All requests go through Bifrost, so differences between providers are handled in the background. The CLI experience stays the same across models.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Bifrost CLI
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI is useful when working with multiple providers or running repeated sessions from the terminal. Since it is built on top of Bifrost, it also brings the benefits of a central gateway into CLI workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testing Different Models:&lt;/strong&gt; Try different models across providers from the same setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running Iterative Sessions:&lt;/strong&gt; Start, stop, and relaunch sessions with minor configuration changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Working from the Terminal:&lt;/strong&gt; Keep the entire workflow inside the CLI, with Bifrost handling routing in the background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comparing Outputs:&lt;/strong&gt; Run multiple sessions side by side and observe how different models respond.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managing Multiple Providers:&lt;/strong&gt; Use Bifrost as a single entry point to work across providers in one place.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralized Control with Bifrost:&lt;/strong&gt; Route all requests through Bifrost for consistent handling of API keys, requests, and responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup helps keep workflows consistent and organized across different providers and sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Bifrost CLI brings multi-provider access into the terminal through a single setup. It keeps existing workflows intact and reduces the need to manage separate configurations.&lt;/p&gt;

&lt;p&gt;You can run sessions, switch agents, and try different models from the same interface, with Bifrost handling routing and integration in the background.&lt;/p&gt;

&lt;p&gt;To get started or explore more details, check the &lt;a href="https://docs.getbifrost.ai/quickstart/cli/getting-started" rel="noopener noreferrer"&gt;Bifrost CLI documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why Your OpenClaw Agent Gets Slower and More Expensive Over Time</title>
      <dc:creator>Astrodevil</dc:creator>
      <pubDate>Fri, 20 Mar 2026 21:00:34 +0000</pubDate>
      <link>https://dev.to/studio1hq/why-your-openclaw-agent-gets-slower-and-more-expensive-over-time-5c5e</link>
      <guid>https://dev.to/studio1hq/why-your-openclaw-agent-gets-slower-and-more-expensive-over-time-5c5e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;OpenClaw feels fast in the first week. You send a message, the agent responds, and the workflow makes sense. Then gradually, without any obvious change, responses take a little longer, and the API bill at the end of the month is higher than it was two weeks ago, with no single thing you can point to as the cause.&lt;/p&gt;

&lt;p&gt;That is not a coincidence, and it is not bad luck. It is what happens when three separate problems compound on each other quietly, over time, without any of them being obvious on its own.&lt;/p&gt;

&lt;p&gt;Context bloating, static content being reprocessed on every call, and every request hitting the same model regardless of what it actually needs: these are not dramatic failures. They are the kind of inefficiencies that feel invisible until they are not, and by the time the invoice makes them obvious, they have been running for weeks.&lt;/p&gt;

&lt;p&gt;In this post, we will break down what is driving each of them and why routing, not prompt tuning or model switching, is the fix that addresses all three at the layer where they actually live.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why the Default Setup Works Against You Over Time&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenClaw's default configuration is built to get you started. It is not designed to remain efficient as your usage grows, and the gap between the two becomes apparent faster than most people expect. Three things are responsible for most of it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Context grows faster than you think&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before you type a single message, your agent has already loaded a significant amount into the context window: &lt;code&gt;SOUL.md&lt;/code&gt;, &lt;code&gt;AGENTS.md&lt;/code&gt;, bootstrap files, and the results of a memory search against everything you have accumulated. All of it lands in the prompt before your request even starts.&lt;/p&gt;

&lt;p&gt;That base footprint is manageable in week one. By week three, the memory graph has grown, the search results are broader, and the conversation history from your previous sessions is traveling with every new request. The agent is not selectively pulling relevant data; it loads everything it has access to every time.&lt;/p&gt;

&lt;p&gt;The result is a base token cost per request that is meaningfully higher than it was when you started, without any deliberate change on your part.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Static tokens are processed fresh every time&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A large portion of what is loaded into every request consists of content that has not changed since last week: system instructions, bootstrap files, and agent configuration. Provider-side caching exists specifically to avoid paying full price for static content on repeat calls, but the default OpenClaw setup does not use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zje4v0xidwlrqxrneqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zje4v0xidwlrqxrneqi.png" alt="Every call. Same cost. No cache" width="800" height="446"&gt;&lt;/a&gt; The same unchanged content, reprocessed from scratch on every heartbeat call.&lt;/p&gt;

&lt;p&gt;Every call processes that unchanged content from scratch. For a setup running a 30-minute heartbeat, that means a full API call with no caching, hitting the configured model, every half hour, regardless of whether anything meaningful is happening in the session. Most users never think of the heartbeat as a cost source, but over a full month, it adds up to a figure worth paying attention to.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Every request hits the same model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenClaw routes all requests to a single globally configured model. There is no built-in distinction among task types: a status check, a memory lookup, a formatting task, and a multi-step reasoning problem all map to the same endpoint at the same price.&lt;/p&gt;

&lt;p&gt;In practice, the majority of what an agent handles day-to-day is simple work. Summaries, lookups, structured output, short responses. None of it requires a frontier model, but all of it gets one anyway. That is not a usage problem; it is a configuration gap, and it is the highest-leverage thing to fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Structural Fix: Routing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The problem with the approaches people try first (switching to a cheaper model, trimming prompts, reducing heartbeat frequency) is that each addresses one variable at a time. The bill declines slightly, then rises again. What is needed is a layer that sits between OpenClaw and the provider, evaluates each request before it is sent, and determines which model should handle it. That is what routing is, and that is why it is a structural fix rather than a configuration tweak.&lt;/p&gt;

&lt;p&gt;That layer is &lt;a href="https://manifest.build/" rel="noopener noreferrer"&gt;Manifest&lt;/a&gt;, an open-source OpenClaw plugin built specifically to solve this. It sits between your agent and the provider, and the original OpenClaw configuration remains unchanged.&lt;/p&gt;

&lt;p&gt;Manifest intercepts every request before it reaches the LLM. The routing decision takes under 2 ms with zero external calls, after which the request is forwarded to the appropriate model. During that interval, five distinct mechanisms run before the request moves anywhere, starting with how the scoring algorithm decides which tier a request belongs to.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How the scoring algorithm works&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before any request leaves your setup, Manifest runs a scoring pass across 23 dimensions. These dimensions fall into two groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;13 keyword-based checks that scan the prompt for patterns like "prove", "write function", or "what is", and&lt;/li&gt;
&lt;li&gt;10 structural checks that evaluate token count, nesting depth, code-to-prose ratio, tool count, and conversation depth, among others.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each dimension carries a weight. The weighted sum maps to one of four tiers through threshold boundaries. Alongside the tier assignment, Manifest produces a confidence score between 0 and 1 that reflects how clearly the request fits that tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe211zoxmig3h19y1wzrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe211zoxmig3h19y1wzrg.png" alt="Manifest scores" width="800" height="450"&gt;&lt;/a&gt; How Manifest scores a request across 23 dimensions and assigns it a tier in under 2 ms.&lt;/p&gt;

&lt;p&gt;One edge case worth knowing: short follow-up messages like "yes" or "do it" do not get scored in isolation. Manifest tracks the last 5 tier assignments within a 30-minute window and uses that session momentum to keep follow-ups at the right tier, rather than dropping them to simple because they contain almost no content.&lt;/p&gt;

&lt;p&gt;Certain signals also force a minimum tier regardless of score. Detected tool use pushes the floor to standard. Context above 50,000 tokens forces the complex tier. Formal logic keywords move the request directly to reasoning.&lt;/p&gt;
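&lt;p&gt;The mechanism described above amounts to a weighted-sum classifier with hard tier floors. The weights, thresholds, and keywords below are invented for illustration; Manifest's actual 23 dimensions and boundaries are documented in its open-source repo:&lt;/p&gt;

```python
TIERS = ["simple", "standard", "complex", "reasoning"]
BOUNDARIES = [0.25, 0.5, 0.75]  # invented thresholds between the four tiers

def assign_tier(prompt, context_tokens=0, has_tools=False):
    # Toy keyword and structural dimensions (the real router uses
    # 13 weighted keyword checks and 10 structural checks).
    score = 0.0
    if "prove" in prompt:
        score += 0.8
    if "write function" in prompt:
        score += 0.4
    score += min(context_tokens / 100_000, 0.3)  # structural: context size

    # The weighted sum maps to a tier through threshold boundaries
    tier = TIERS[sum(score > b for b in BOUNDARIES)]

    # Hard floors: certain signals force a minimum tier
    if "prove" in prompt:
        tier = "reasoning"
    elif context_tokens > 50_000 and TIERS.index(tier) in (0, 1):
        tier = "complex"
    elif has_tools and tier == "simple":
        tier = "standard"
    return tier
```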

&lt;h3&gt;
  
  
  &lt;strong&gt;The four tiers and what they route&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The tier system is where the cost reduction actually happens. Manifest defines four tiers, each mapped to a different class of model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple:&lt;/strong&gt; greetings, definitions, short factual questions. Routed to the cheapest model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard:&lt;/strong&gt; general coding help, moderate questions. Good quality at low cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex:&lt;/strong&gt; multi-step tasks, large context, code generation. Best quality models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning:&lt;/strong&gt; formal logic, proofs, math, multi-constraint problems. Reasoning-capable models only.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a typical active session, most requests fall into the simple or standard category. Routing those away from frontier models, while sending only what genuinely needs them to the complex or reasoning tiers, is where the cost reduction of up to 70% reported by users comes from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtsw09jblpxao5q4zcco.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtsw09jblpxao5q4zcco.png" alt="Manifest maps each request type" width="800" height="450"&gt;&lt;/a&gt; How Manifest maps each request type to the cheapest model that can handle it.&lt;/p&gt;

&lt;p&gt;Every routed response returns three headers you can inspect: &lt;code&gt;X-Manifest-Tier&lt;/code&gt;, &lt;code&gt;X-Manifest-Model&lt;/code&gt;, and &lt;code&gt;X-Manifest-Confidence&lt;/code&gt;. If a request was routed differently than you expected, those headers tell you exactly what the algorithm saw.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OAuth and provider auth&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manifest lets users authenticate with their own Anthropic or OpenAI credentials directly through OAuth. If OAuth is unavailable or a session is inactive, it falls back to an API key. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre8deeyllgjry2faztj6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre8deeyllgjry2faztj6.gif" alt="Manifest Auth" width="600" height="337"&gt;&lt;/a&gt; Manifest lets users authenticate with their own Anthropic or OpenAI credentials&lt;/p&gt;

&lt;p&gt;This keeps your model access under your own account, which matters for rate limits, spend visibility, and not routing your traffic through a third-party proxy. More providers are being added.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Fallbacks and what they protect&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each tier supports up to 5 fallback models. If the primary model for a tier is unavailable or rate-limited, Manifest automatically moves to the fallback chain. The request still resolves, just against the next available model in that tier's list. This is particularly relevant for the reasoning tier, where model availability can be less predictable during high-traffic periods, and losing a request entirely is more costly than a slight capability downgrade.&lt;/p&gt;
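&lt;p&gt;The fallback chain is an ordered walk: try the tier's primary model, then each fallback in turn until one responds. A minimal sketch (the simulated provider and model names are invented):&lt;/p&gt;

```python
def resolve(models, call_model):
    # models: the tier's ordered list, primary first, then up to 5 fallbacks.
    last_error = None
    for model in models:
        try:
            return call_model(model)
        except RuntimeError as err:  # unavailable or rate-limited
            last_error = err
    raise RuntimeError(f"tier exhausted: {last_error}")

# Simulated provider: the primary is rate-limited, the fallback answers
def fake_call(model):
    if model == "primary-reasoning-model":
        raise RuntimeError("rate limited")
    return f"answered by {model}"

result = resolve(["primary-reasoning-model", "fallback-reasoning-model"], fake_call)
```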

&lt;h3&gt;
  
  
  &lt;strong&gt;Spend limits without manual monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Manifest lets you set rules per agent against two metrics: tokens and cost. Each rule has a period (hourly, daily, weekly, or monthly), a threshold, and an action. Notify sends an email alert when the threshold is crossed. Block returns HTTP 429 and stops requests until the period resets.&lt;/p&gt;

&lt;p&gt;Rules that block are evaluated on every ingest, while rules that notify run on an hourly cron and fire once per rule per period to avoid repeated alerts for the same breach. For a setup with a 30-minute heartbeat running continuously, a daily cost block is the most direct way to prevent a runaway spend event from compounding overnight without any manual check.&lt;/p&gt;
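&lt;p&gt;A rule is effectively a (metric, period, threshold, action) tuple evaluated against the current period's usage. A toy version of that evaluation (the rule shape is illustrative, not Manifest's actual config format):&lt;/p&gt;

```python
def evaluate_rule(rule, usage):
    # Return the action to take, or None if usage is under the threshold.
    # "notify" sends an email alert; "block" returns HTTP 429 until reset.
    if usage[rule["metric"]] > rule["threshold"]:
        return rule["action"]
    return None

daily_cost_block = {"metric": "cost_usd", "period": "daily",
                    "threshold": 5.0, "action": "block"}
```

A daily block rule like this one stops a runaway heartbeat loop the same day it starts, rather than at the end of the billing cycle.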

&lt;h2&gt;
  
  
  The Rest Is Worth Knowing
&lt;/h2&gt;

&lt;p&gt;Routing is the core of what Manifest does, but it ships with a few other things that are worth understanding before you use it in production.&lt;/p&gt;

&lt;p&gt;Manifest provides a dashboard that gives a full view of each call: input tokens, output tokens, cache-read tokens, cost, latency, model, and routing tier. Cost is calculated against a live pricing table covering 600+ models, so nothing is estimated. The message log stores all requests and is filterable by agent, model, and time range.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfc9ccmp7jszkhuo2ik6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfc9ccmp7jszkhuo2ik6.png" alt="Manifest dashboard" width="800" height="450"&gt;&lt;/a&gt; Manifest dashboard&lt;/p&gt;

&lt;p&gt;In local mode, nothing leaves your machine. In cloud mode, only OpenTelemetry metadata is sent: model name, token counts, and latency. Message content never moves. The full codebase is open source and self-hostable at &lt;a href="https://github.com/mnfst/manifest" rel="noopener noreferrer"&gt;github.com/mnfst/manifest&lt;/a&gt;, and the routing logic is fully documented.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A quick note before we move on.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Everything in this post reflects how Manifest works at the time of writing, and the space is moving fast enough that some details may already look different by the time you read it. The OAuth providers, the supported models, and the scoring thresholds were all changing even while this article was being written. For anything that has moved since, the &lt;a href="https://manifest.build/docs/introduction" rel="noopener noreferrer"&gt;docs&lt;/a&gt; are the right place to check.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With that said, back to the article. Here is how all of it fits together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting It Together
&lt;/h2&gt;

&lt;p&gt;The three problems do not take turns. They compound on the same request, every time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8605ni77vqje298aahx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8605ni77vqje298aahx.png" alt="Three problems" width="800" height="450"&gt;&lt;/a&gt; Three problems converging into every single request, all at once.&lt;/p&gt;

&lt;p&gt;A heartbeat call on a 30-minute cycle loads accumulated context, reprocesses unchanged system files, and hits a frontier model for a task that needed none of that. In week one, that is a small number. By week three, it is a pattern you cannot see until the invoice lands.&lt;/p&gt;

&lt;p&gt;Routing is the layer that addresses all three at once, not because it solves context or caching directly, but because it changes the cost of every request before it leaves your setup, and once that layer is in place, the three problems no longer have room to compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Where to Start&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The order matters here. Do not start by switching models or trimming prompts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Manifest and let it run for a few days without changing anything else. The dashboard will show you where the cost is actually coming from.&lt;/li&gt;
&lt;li&gt;Check the model distribution. If simple and standard requests are hitting your highest-tier model, routing is the first thing to configure.&lt;/li&gt;
&lt;li&gt;Set a daily cost block rule to prevent a runaway session from compounding overnight.&lt;/li&gt;
&lt;li&gt;Once routing is active, the cache read token metric indicates how much static content was served from cache versus processed fresh. That number is worth watching.&lt;/li&gt;
&lt;li&gt;Add per-tier fallbacks to prevent availability gaps from interrupting the session.&lt;/li&gt;
&lt;/ol&gt;
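&lt;p&gt;As a rough sketch of what tier-based routing means conceptually (this is not Manifest's code or configuration; the tier names and the length heuristic are invented purely for illustration):&lt;/p&gt;

```shell
# Conceptual sketch only, NOT Manifest code or config: route a request to a
# model tier using a crude prompt-length heuristic. The tier names and
# thresholds are invented; a real router weighs far more signals than length.
route() {
  len=${#1}
  if [ "$len" -lt 40 ]; then
    echo "small-model"        # heartbeat-style calls never need a frontier model
  elif [ "$len" -lt 200 ]; then
    echo "mid-model"
  else
    echo "frontier-model"
  fi
}

route "ping"
route "summarize the last five commits and draft a changelog entry for the release notes"
```

&lt;p&gt;The point is the shape, not the heuristic: once a routing layer decides the tier before the request leaves your setup, the simple calls stop paying frontier-model prices.&lt;/p&gt;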

&lt;p&gt;The &lt;a href="https://manifest.build/docs/introduction" rel="noopener noreferrer"&gt;&lt;strong&gt;Manifest docs&lt;/strong&gt;&lt;/a&gt; cover installation, routing configuration, and limit setup in full. If you want the broader context on what makes OpenClaw production-ready, &lt;a href="https://dev.to/arindam_1729/5-openclaw-plugins-that-actually-make-it-production-ready-14kn"&gt;this post&lt;/a&gt; is a good place to start.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>Running LLM Applications Across Providers with Bifrost</title>
      <dc:creator>Arindam Majumder </dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:15:23 +0000</pubDate>
      <link>https://dev.to/studio1hq/running-llm-applications-across-providers-with-bifrost-313h</link>
      <guid>https://dev.to/studio1hq/running-llm-applications-across-providers-with-bifrost-313h</guid>
      <description>&lt;p&gt;Many modern applications include AI features that rely on large language models accessed through APIs. When an application sends a prompt to a model and receives a response, that request usually goes through an external service.&lt;/p&gt;

&lt;p&gt;Getting access to different LLM models is easier today. Providers such as &lt;a href="https://platform.openai.com/api-keys" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; and &lt;a href="https://platform.claude.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt; provide model APIs, and platforms like &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; and &lt;a href="https://cloud.google.com/vertex-ai" rel="noopener noreferrer"&gt;Google Vertex&lt;/a&gt; AI give access to several models from one place. Because of this, many applications connect to more than one provider to compare models, manage cost, or keep a backup option if one service fails.&lt;/p&gt;

&lt;p&gt;But each provider works a little differently. Authentication methods, rate limits, and request formats are not the same. Managing these differences inside an application can slowly add complexity to the system. In this article, we will explore Bifrost, an open-source LLM gateway that provides a single layer to route requests and manage interactions with multiple model providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost of Provider Integrations
&lt;/h2&gt;

&lt;p&gt;Connecting to several LLM providers may look simple at the start. Adding another provider can feel like just integrating one more API.&lt;/p&gt;

&lt;p&gt;That situation changes once the application runs in production. Requests may need to go to different models based on cost, response quality, or latency. If a provider slows down or becomes unavailable, the system must redirect requests to another provider and keep the service running.&lt;/p&gt;

&lt;p&gt;Handling these situations introduces additional logic into the codebase. The application needs to manage how requests are routed between models. It must also include retry logic for failed calls, fallback providers during outages, and tracking for how requests are distributed across models.&lt;/p&gt;

&lt;p&gt;Each of these responsibilities adds extra work to the system. Over time, operational logic becomes part of the application and increases maintenance effort. This overhead becomes the hidden cost of working directly with multiple model providers.&lt;/p&gt;
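&lt;p&gt;To make that concrete, here is a conceptual sketch (not Bifrost's code) of the kind of fallback boilerplate that accumulates when an application talks to several providers directly. The provider names are placeholders, and the outage is simulated:&lt;/p&gt;

```shell
# Conceptual sketch, NOT Bifrost's code: the fallback boilerplate that creeps
# into an application when it talks to several providers directly.
# call_provider stands in for a real API call; "primary" is hard-coded to
# fail so the loop can demonstrate a fallback.
call_provider() {
  if [ "$1" = "primary" ]; then
    return 1                      # simulate an outage at the primary provider
  fi
  echo "response from $1"
}

for provider in primary secondary; do
  if response=$(call_provider "$provider"); then
    echo "served by $provider"
    break
  fi
  echo "provider $provider failed, trying next"
done
```

&lt;p&gt;Every application that integrates providers directly ends up writing some version of this loop, plus retries, plus routing rules. A gateway moves that logic out of the codebase.&lt;/p&gt;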

&lt;h2&gt;
  
  
  Introducing Bifrost: A Gateway for LLM Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.getbifrost.ai/overview" rel="noopener noreferrer"&gt;Bifrost&lt;/a&gt; is an &lt;a href="https://github.com/maximhq/bifrost" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; LLM and MCP gateway designed to manage interactions between applications and model providers. It sits between the application and the LLM services and acts as a central layer that controls how requests move between systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyseg3iy2fg1v6h6yhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsyseg3iy2fg1v6h6yhe.png" alt="Image1" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Applications often connect directly to each provider they use. Bifrost adds a gateway layer between the application and the providers, so requests pass through a single entry point before reaching the model services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffygdaoyre598cw4i7cdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffygdaoyre598cw4i7cdw.png" alt="Image2" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This structure separates provider management from the application. The application sends requests to one endpoint, and the gateway manages communication with different model providers. Provider configuration and request handling stay inside the gateway layer, reducing provider-specific logic in the application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Infrastructure Capabilities
&lt;/h2&gt;

&lt;p&gt;Bifrost provides several infrastructure capabilities for managing LLM interactions across providers. These capabilities move provider-specific handling out of the application and into the gateway layer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-provider routing:&lt;/strong&gt; Bifrost supports multiple AI providers through a single API interface. Applications send requests to one endpoint, and the gateway routes each request to the configured provider or model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load balancing:&lt;/strong&gt; When multiple providers or API keys are configured, Bifrost distributes requests across them based on defined rules. Traffic spreads across providers and reduces the chance of hitting rate limits on a single service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic fallback:&lt;/strong&gt; When a provider returns an error or becomes unavailable, Bifrost sends the request to another configured provider.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic caching:&lt;/strong&gt; Bifrost stores responses and returns them for similar prompts. Prompt comparison uses semantic similarity. This reduces repeated API calls and improves response time.&lt;/li&gt;
&lt;/ul&gt;
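&lt;p&gt;The caching idea can be sketched in a few lines. Note that this toy version only matches prompts after trivial normalization; Bifrost's semantic cache matches on meaning via similarity, which this sketch does not attempt:&lt;/p&gt;

```shell
# Toy response cache, NOT Bifrost's implementation: prompts are matched only
# after lowercasing and whitespace-squeezing. A semantic cache would instead
# compare embeddings so differently worded prompts can share an entry.
CACHE_DIR=$(mktemp -d)

ask() {
  key=$(printf '%s' "$1" | tr 'A-Z' 'a-z' | tr -s ' ' | md5sum | cut -d' ' -f1)
  if [ -f "$CACHE_DIR/$key" ]; then
    echo "cache hit: $(cat "$CACHE_DIR/$key")"
  else
    echo "response to '$1'" > "$CACHE_DIR/$key"   # stand-in for a real provider call
    echo "cache miss, called provider"
  fi
}

ask "What is Bifrost?"
ask "what is  bifrost?"    # normalizes to the same key, so this is a cache hit
```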

&lt;h2&gt;
  
  
  Platform Support and Integrations
&lt;/h2&gt;

&lt;p&gt;Bifrost fits environments where applications use multiple models and providers. The gateway exposes an OpenAI-compatible API, so applications that already use OpenAI SDKs can connect with minimal changes and send requests through a single endpoint.&lt;/p&gt;

&lt;p&gt;Bifrost works with several &lt;a href="https://docs.getbifrost.ai/providers/supported-providers/overview" rel="noopener noreferrer"&gt;LLM providers&lt;/a&gt;, such as OpenAI, Anthropic, Amazon Bedrock, Google Vertex AI, Cohere, and Mistral. Applications can reach these providers through the same gateway interface.&lt;/p&gt;

&lt;p&gt;The gateway also supports the &lt;a href="https://docs.getbifrost.ai/mcp/overview" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt;. Systems that use MCP can connect tools and external services through the same layer used for model requests. Bifrost also includes a &lt;a href="https://docs.getbifrost.ai/plugins/getting-started" rel="noopener noreferrer"&gt;plugin system&lt;/a&gt; for adding custom behavior such as request validation, logging, or request transformation.&lt;/p&gt;

&lt;p&gt;Bifrost can run using tools such as NPX or Docker and can operate in local setups or production environments. The project is open source under the MIT license and can run across different infrastructure environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Gateway Performance and Benchmark&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A gateway processes every request sent to a model provider. The performance of this layer becomes important in systems that handle a large number of AI requests.&lt;/p&gt;

&lt;p&gt;Bifrost is written in Go, a language often used for backend services that process many requests simultaneously. The system focuses on keeping the extra processing time very small.&lt;/p&gt;

&lt;p&gt;Benchmark tests show that Bifrost adds about 11 microseconds of latency at 5,000 requests per second. One microsecond equals 0.001 milliseconds, so 11 microseconds equals 0.011 milliseconds, which means the delay introduced by the gateway remains extremely small.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.getbifrost.ai/benchmarking/getting-started" rel="noopener noreferrer"&gt;published benchmarks&lt;/a&gt; were executed on AWS EC2 t3.medium and t3.large instances. These are cloud virtual machines with moderate CPU and memory resources that are commonly used to run backend services and APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqud1pe1ewno7lns871w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqud1pe1ewno7lns871w.png" alt="Image3" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bifrost also provides a &lt;a href="https://github.com/maximhq/bifrost-benchmarking" rel="noopener noreferrer"&gt;public benchmarking repository&lt;/a&gt; with the scripts and setup used in the tests. Anyone can run the same tests or perform custom benchmarking based on their own infrastructure, traffic patterns, or model providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Bifrost
&lt;/h2&gt;

&lt;p&gt;Bifrost is designed for quick setup and can run locally or in a server environment. The gateway can start in a few steps and begin routing LLM requests through a single endpoint.&lt;/p&gt;

&lt;p&gt;One way to start Bifrost is by using &lt;strong&gt;NPX&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bifrost can also run using &lt;strong&gt;Docker&lt;/strong&gt;, which allows the gateway to start inside a container environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 maximhq/bifrost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the gateway starts, applications can send LLM requests to the Bifrost endpoint. The gateway then routes the requests to the configured model providers.&lt;/p&gt;
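&lt;p&gt;Because the interface is OpenAI-compatible, the request body an application sends to the gateway keeps the familiar chat completions shape. The sketch below only validates that payload shape locally; the model identifier is a placeholder, and the exact routes and model naming conventions are described in the Bifrost docs:&lt;/p&gt;

```shell
# Sketch of the OpenAI-style chat completions payload an application would
# send to the gateway endpoint. The model name is a placeholder; check the
# Bifrost docs for actual model identifiers and routes.
PAYLOAD='{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Summarize this ticket in one line."}
  ]
}'

# json.tool pretty-prints the payload and exits nonzero if the JSON is invalid.
printf '%s' "$PAYLOAD" | python3 -m json.tool
```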

&lt;p&gt;Configuration options allow the gateway to define providers, API keys, routing rules, caching behavior, and fallback settings. These configurations control how requests move between different LLM providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;Managing several LLM providers inside an application can introduce extra operational logic and maintenance effort. A gateway layer offers a cleaner structure for handling these interactions.&lt;/p&gt;

&lt;p&gt;Bifrost provides this layer by placing a gateway between applications and model providers. Requests go through one endpoint, and the gateway manages routing and provider communication.&lt;/p&gt;

&lt;p&gt;This approach keeps provider integrations outside the core application code and places request management in a separate infrastructure layer.&lt;/p&gt;

&lt;p&gt;To explore configuration options, deployment steps, and additional features, &lt;a href="https://docs.getbifrost.ai/overview" rel="noopener noreferrer"&gt;refer to the official Bifrost documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>proxy</category>
      <category>litellm</category>
    </item>
    <item>
      <title>Create Your Custom WSL from Any Linux Distribution (Part - 2)</title>
      <dc:creator>Debajyati Dey</dc:creator>
      <pubDate>Tue, 10 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-2-1h2j</link>
      <guid>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-2-1h2j</guid>
<description>&lt;p&gt;In the previous part of this two-part blog series, we discussed how to install and set up Void Linux in WSL. In this article, we'll cover how to do the same for Arch Linux! Hell Yeah!!&lt;/p&gt;

&lt;p&gt;In the previous blog, we went through how to obtain a tar of the desired distro using a Docker container. Here, we will see how to obtain the tar when we don't have access to a working Docker container.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Summary of the Content Prior to Reading the Article&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;In this article, I'll guide you through installing Arch Linux on WSL without using Docker, instead opting for a VirtualBox VM to create the necessary tar file. We'll cover generating the tar archive, transferring it to your host machine, and importing it into WSL. Additionally, we'll discuss fixing the common automounting error post-installation. This method ensures you can enjoy Arch Linux on WSL, leveraging the flexibility of VM-based installation.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Installing Arch Linux in WSL
&lt;/h2&gt;

&lt;p&gt;Unfortunately, the official Docker image of Arch Linux doesn't seem to be usable right now (at least in my experience). I faced a lot of issues inside the running container: commands never actually executed, and pressing Enter just gave a new line every time. Very weird. Whatever...&lt;/p&gt;

&lt;p&gt;So Docker is NOT going to get our job done. Instead, we can use VirtualBox to create a VM instance of Arch Linux. I am not going to give a complete Arch Linux installation tutorial here; that would be too much for this article. There are plenty of tutorials on YouTube for installing Arch Linux on VirtualBox. And if you are a &lt;strong&gt;REAL&lt;/strong&gt; NERD Linux &lt;strong&gt;fanboy&lt;/strong&gt; (like me!!!) you may want to install Arch "&lt;strong&gt;The Arch Way&lt;/strong&gt;" (without the &lt;code&gt;archinstall&lt;/code&gt; script!).&lt;/p&gt;

&lt;h3&gt;
  
  
  Assuming You Have Already Done A Base Installation inside VBox
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Getting the Tar File
&lt;/h4&gt;

&lt;p&gt;Run this command (assuming you are the root user and currently in the &lt;code&gt;/root&lt;/code&gt; directory) inside the virtual machine to generate an archive of the whole system -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-cpvzf&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/proc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/sys&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;/root/archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--one-file-system&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break down the command into understandable parts.&lt;/p&gt;

&lt;p&gt;here the &lt;code&gt;--one-file-system&lt;/code&gt; flag means -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao9dns3vuqhetpzay3pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao9dns3vuqhetpzay3pt.png" alt="meaning of the --one-file-system flag" width="773" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have excluded the &lt;code&gt;/proc&lt;/code&gt; and &lt;code&gt;/sys&lt;/code&gt; directories to make the tar file less bulky. This is safe because they are virtual filesystems that get recreated at runtime anyway once the distro is imported into WSL.&lt;/p&gt;

&lt;p&gt;And finally, you may already understand why we excluded the tar file itself.&lt;br&gt;&lt;br&gt;
If we did include it, there's a very good chance that 2 &lt;strong&gt;TERRIFYING things&lt;/strong&gt; could happen!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-Referencing File&lt;/strong&gt;: The tar command would attempt to include the &lt;code&gt;archlinux.tar.gz&lt;/code&gt; file in the archive. This can lead to recursive inclusion, where the tar process continually adds the same file over and over, causing an infinite loop of inclusion until disk space runs out or the process is forcibly terminated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import Issues&lt;/strong&gt;: Even if the tar command does manage to execute without crashing, including the &lt;code&gt;archlinux.tar.gz&lt;/code&gt; file can cause confusion when you attempt to import the archive on WSL. The import process (which involves extraction) might attempt to re-extract the tar file recursively, complicating the extraction and potentially leading to errors or other unexpected results.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
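&lt;p&gt;If you want to see the effect of &lt;code&gt;--exclude&lt;/code&gt; without touching a real system, here is a tiny throwaway demonstration (the paths are made up for the demo):&lt;/p&gt;

```shell
# Throwaway demo of tar's --exclude flag: the excluded path never enters the
# archive, which is exactly why excluding archlinux.tar.gz itself avoids the
# self-referencing problem described above.
mkdir -p demo/keep demo/skip
echo data > demo/keep/file.txt
echo junk > demo/skip/file.txt
tar -czf demo.tar.gz --exclude='demo/skip' demo
tar -tzf demo.tar.gz    # lists demo/ and demo/keep/, but nothing under demo/skip
```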

&lt;p&gt;Creating the full system archive will take some time.&lt;/p&gt;

&lt;p&gt;So now you have the tar file in your current directory. Check that with the &lt;code&gt;ls&lt;/code&gt; command.&lt;/p&gt;
&lt;h4&gt;
  
  
  Transferring the tar file
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To get the tar file out of the VM, you have many options. The most preferable one is to install an advanced DE (Desktop Environment) like XFCE or GNOME, and then install the VirtualBox Guest Additions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After you have the Guest Additions installed, shut down the VM and start it again. With Guest Additions installed in the system, we can use the shared folders feature of VirtualBox.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuerk95r455xladcd8ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuerk95r455xladcd8ad.png" alt="Using Shared Folders in VBox" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quickly set up a shared folder and then transfer the tar to the host machine from the guest through the shared folder.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hurray! Now you have the tar file in your host machine ready to be imported.&lt;/p&gt;
&lt;h4&gt;
  
  
  Importing it into WSL
&lt;/h4&gt;

&lt;p&gt;Provide the directory where the virtual hard disk image (vhdx) file should be created, in the command below (in place of &lt;code&gt;E:\VMs\WSLs\Arch\&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Arch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;E:\VMs\WSLs\Arch\&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;\archlinux.tar.gz&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I keep all the custom WSL installations (vhdx files) inside this directory &lt;code&gt;E:\VMs\WSLs\&lt;/code&gt;. This is the way I keep them organised.&lt;/p&gt;

&lt;p&gt;For reference you can also watch this YouTube Tutorial by AgileDevArt -&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/CFWZqe5bkAE"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Post Installation
&lt;/h2&gt;

&lt;p&gt;From the previous part, you already know how to create user accounts and set passwords on a Linux command line, so there is no need to fill this section with the same instructions. Knowledge is easily transferable among similar operating systems.&lt;/p&gt;

&lt;p&gt;But there is still one problem you will face which must be fixed!&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixing the Automounting Error
&lt;/h3&gt;

&lt;p&gt;When entering your Arch Linux &lt;strong&gt;WSL&lt;/strong&gt; with the command &lt;code&gt;wsl -d Arch&lt;/code&gt; after a fresh install, it will first print &lt;strong&gt;'Processing fstab with mount -a failed.'&lt;/strong&gt; in the console and then drop you into the bash shell of the Arch distribution.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is fstab?
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;/etc/fstab&lt;/code&gt; is the configuration file which contains the information about all available partitions and indicates how and where they are mounted.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is the reason behind this problem?
&lt;/h4&gt;

&lt;p&gt;From your base installation method (in the VM, of course), you should know that the original base install in VirtualBox had separate filesystem partitions. Those partitions either don't exist in WSL or now have different filesystem UUIDs.&lt;/p&gt;

&lt;h4&gt;
  
  
  What to do now?
&lt;/h4&gt;

&lt;p&gt;Comment out or delete all the uncommented lines in &lt;code&gt;/etc/fstab&lt;/code&gt;, because their corresponding filesystem partitions no longer exist in WSL.&lt;br&gt;&lt;br&gt;
After a full system reboot (I mean reboot your Windows machine), the errors should disappear.&lt;/p&gt;
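&lt;p&gt;If you'd rather comment the lines out than delete them, a single &lt;code&gt;sed&lt;/code&gt; command can do it. The sketch below runs on a sample file for safety; on the real WSL instance the target would be &lt;code&gt;/etc/fstab&lt;/code&gt;, and you should back it up first:&lt;/p&gt;

```shell
# Sketch: comment out every active (non-comment, non-blank) line of an
# fstab-style file. Shown here on a sample copy; on the real WSL instance
# you would target /etc/fstab after making a backup.
printf '%s\n' \
  '# Static information about the filesystems.' \
  'UUID=1111-2222 /     ext4 rw,relatime 0 1' \
  'UUID=3333-4444 swap  swap defaults   0 0' > fstab.sample
sed -i 's/^[^#[:space:]]/# &/' fstab.sample
cat fstab.sample
```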

&lt;p&gt;Read More at &lt;a href="https://unix.stackexchange.com/a/780166/605989" rel="noopener noreferrer"&gt;unix.stackexchange.com&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;So at last, you have the complete picture.&lt;/p&gt;

&lt;p&gt;From now on you can have any Linux distro you want inside WSL.&lt;/p&gt;

&lt;p&gt;You basically just need a tar archive, which you can obtain using a container or a VM, depending on the situation.&lt;/p&gt;

&lt;p&gt;If you found this useful, please consider sharing this article with your other developer friends.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me :)&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thanks for reading! 🙏🏻 &lt;br&gt; Written with 💚 by &lt;a href="https://dev.to/ddebajyati"&gt;Debajyati Dey&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;&lt;a href="https://github.com/Debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tu7kfqhw7z1yzmng4ah.png" alt="My GitHub" width="40" height="39"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://www.linkedin.com/in/debajyati-dey/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femp5sh8d4fq0g89lqsia.png" alt="My LinkedIn" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://app.daily.dev/debajyatidey" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20akag0pdeq95u76k9e8.png" alt="My Daily.dev" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://peerlist.io/debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscfsnjdwyhm803f7mlv.png" alt="My Peerlist" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://x.com/ddebajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0265bz6hmdfybuw0a605.png" alt="My Twitter" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Linux users are hackers! Happy Hacking! 🐱‍💻&lt;/p&gt;

</description>
      <category>linux</category>
      <category>archlinux</category>
      <category>tutorial</category>
      <category>bash</category>
    </item>
    <item>
      <title>Create Your Custom WSL from any Linux Distribution (Part-1)</title>
      <dc:creator>Debajyati Dey</dc:creator>
      <pubDate>Sun, 08 Dec 2024 14:11:29 +0000</pubDate>
      <link>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-1-51k1</link>
      <guid>https://dev.to/studio1hq/create-your-custom-wsl-from-any-linux-distribution-part-1-51k1</guid>
      <description>&lt;h2&gt;
  
  
  Summary of the Content Prior to Reading the Article
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Ever wanted Arch or Void Linux as your WSL distro for Windows? Do you know that you can actually (YES ACTUALLY!!!) install any Linux distribution as your WSL distro? This guide covers how to import any Linux distro to WSL2 using a tar file. We'll use a Docker container to get the tar file, import it to WSL, and set up Void Linux as an example. Follow the steps to download the Docker image, export it to a tar file, and import it to WSL. We'll walk through post-installation configurations like creating user accounts, setting up default shell and user, updating the system, and making WSL accessible as a Windows desktop app. By the end, you'll have a fully functional, custom Linux distro on your Windows machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Initial Discussion
&lt;/h2&gt;

&lt;p&gt;If you run &lt;code&gt;wsl -l -o&lt;/code&gt; in your Windows terminal (cmd or PowerShell), you'll see an output like this -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fits0lzxvpf0uh4asaabr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fits0lzxvpf0uh4asaabr.png" alt="List of valid distributions that can be installed using" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This list is really disappointing. I mean, there are many more distros out there with different package management systems and different useful features, like Fedora, Arch, Void, Artix, Alpine, etc.&lt;/p&gt;

&lt;p&gt;You may also think that the options are very limited in the case of WSL. That is not actually true.&lt;/p&gt;

&lt;p&gt;If you search the &lt;strong&gt;MS Store&lt;/strong&gt;, you'll see some third-party Linux distributions that are specifically developed for WSL (&lt;strong&gt;WSL-only Linux distributions&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;ArchWSL&lt;/strong&gt; and &lt;strong&gt;Fedora WSL&lt;/strong&gt; on the &lt;strong&gt;MS Store&lt;/strong&gt; may seem great before installing, these distros have often shown compatibility issues and sometimes very weird bugs, even conflicts with &lt;a href="https://scoop.sh" rel="noopener noreferrer"&gt;scoop&lt;/a&gt; or &lt;a href="https://chocolatey.org/" rel="noopener noreferrer"&gt;chocolatey&lt;/a&gt; apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install other distros then?
&lt;/h2&gt;

&lt;p&gt;WSL2 provides a way to import any Linux distro as a WSL instance from a tar file (essentially a backup) of the Linux OS on a machine.&lt;/p&gt;

&lt;p&gt;For example, say you have a laptop with some Linux distribution fully installed as the operating system. You can use the tar command to make a compressed tar file replicating your whole OS, starting from the &lt;code&gt;/&lt;/code&gt; (root) directory, as one file system.&lt;/p&gt;

&lt;p&gt;Now you can transfer the tar file to your Windows machine using a USB drive. Next, you use the &lt;code&gt;--import&lt;/code&gt; flag of the wsl command, and a new WSL instance with its own filesystem (virtual hard drive) and the name you provide gets registered within the subsystem.&lt;/p&gt;

&lt;p&gt;Let's walk through a complete tutorial to get you covered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Void Linux in WSL
&lt;/h2&gt;

&lt;p&gt;Well by far the easiest way to get a tar file of an OS is to use a docker container. Follow the steps I describe below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Obtaining The TAR file (Archive)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;First of all, pull the official Docker image of Void Linux from the GitHub Container Registry. Make sure you already have a WSL instance (Ubuntu, openSUSE, or any other) installed and set up for Docker.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/void-linux/void-glibc-full
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;After pulling, it should show up in the images list -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoffap3etkjbqd0d767y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoffap3etkjbqd0d767y.png" alt="installed docker images" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now run the container using -&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; ghcr.io/void-linux/void-glibc-full sh
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You should now be inside the Void Linux container. Run any command to check that the shell is actually working, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o29aojgyhfzem9s9hs9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o29aojgyhfzem9s9hs9.png" alt="Running the void linux docker container in interactive mode" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, keeping this terminal instance alive without exiting the container, open another WSL terminal instance and list the running containers with &lt;code&gt;docker ps&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82ncey2sqx686ugckent.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82ncey2sqx686ugckent.png" alt="List of running containers" width="800" height="104"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see the container included in the list of running containers, as in the image.&lt;/p&gt;

&lt;p&gt;Next, grab the ID of the running container and store it in a shell variable (as shown in the screenshot), since the export command in the next step uses it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyd3rpblx8ebowloqicu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyd3rpblx8ebowloqicu.png" alt="Getting The Running Container ID" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've got our running container ID. Yoohooo!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now run this final command to obtain the tar file. Change the output path to whatever you prefer; in my case it is the VMs folder on my E: drive. (Always read a command and understand it before running it.)&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="nv"&gt;$dockerContainerID&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /mnt/e/VMs/voidlinux.tar
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You can stop the container and exit the WSL terminal afterwards.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using the TAR file
&lt;/h3&gt;

&lt;p&gt;If you open the path where the tar file was created in Windows Explorer, you will see it there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5lcxgzfo2n49rl52abk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5lcxgzfo2n49rl52abk.png" alt="seeing the voidlinux tar file" width="781" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's create a new folder named 'WSLs' and move the tar file into it. Inside it, create another folder named 'Void'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpexzzz9s5wg2ujfpa8y1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpexzzz9s5wg2ujfpa8y1.png" alt="Moved the tar file in a specific directory" width="759" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open the 'WSLs' folder in your terminal (cmd or PowerShell) and run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--import&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;E:\VMs\WSLs\Void\&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;\voidlinux.tar&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The first argument after &lt;code&gt;wsl --import&lt;/code&gt; is the name you choose for the distribution being imported.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second argument is the absolute path of the directory where the virtual hard disk image file (.vhdx) is going to be created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third argument is the path of the tar file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After this command is successfully executed, you'll see the filesystem of Void in the Linux subsystem (open file explorer to see).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff343oe46tb74d2gs5v9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff343oe46tb74d2gs5v9u.png" alt="Void Linux successfully registered" width="759" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah! Cool!&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;💡Caution!&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Don't delete the .vhdx file that was just created. If you think you can now delete the virtual hard disk image because Void is imported and currently exists in your Linux subsystem, you are totally wrong. The filesystem you see (as in the image above) is only accessible as long as the .vhdx file exists at that same path.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You've successfully installed Void Linux in your Linux Subsystem. Huge Congratulations! ✨✨✨&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Do After Installing a Custom Distribution
&lt;/h2&gt;

&lt;p&gt;Now, if you are thinking you're all done, you're again making a mistake.&lt;/p&gt;

&lt;p&gt;How?&lt;/p&gt;

&lt;p&gt;Because the way we installed the OS in the subsystem is nothing like the automated setup you get with the officially available WSL distros. Generally, when you install Ubuntu, Kali, or openSUSE via the command line or the Microsoft Store, the installer automatically creates a user account for you (and makes it the default), prompts for a password, and applies a bunch of configuration behind the scenes.&lt;/p&gt;

&lt;p&gt;Because we installed the OS into our subsystem with a bare import, we get none of that setup out of the box. There's only one user account: root.&lt;/p&gt;

&lt;p&gt;We will create a user account, give it a password, and add it to the &lt;code&gt;sudoers&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Open the Void Linux shell with &lt;code&gt;wsl -d Void&lt;/code&gt; in your PowerShell/cmd terminal. (Don't worry, we will set up a more convenient way to launch the shell at the end.)&lt;/p&gt;

&lt;p&gt;Once it opens, you'll run the following commands.&lt;/p&gt;

&lt;p&gt;But before creating a user account, let's take care of some prerequisites.&lt;/p&gt;

&lt;p&gt;First, run -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clear
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you'll be shocked!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldrj8av6n6zx3peoj8yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldrj8av6n6zx3peoj8yf.png" alt="clear command NOT found" width="234" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because our OS is imported from a base install of a Void Linux container, we don't have all the important tools yet. The clear command can't work because we don't have &lt;strong&gt;ncurses&lt;/strong&gt; installed: ncurses is a library that provides terminal handling and user interface functions for C programs. Void Linux uses the &lt;strong&gt;xbps&lt;/strong&gt; package manager to install, update, and remove software, so install ncurses with it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzhn0plhmb3xw0r7dw8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzhn0plhmb3xw0r7dw8i.png" alt="Info about the XBPS Package Manager" width="800" height="515"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; ncurses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga68sueub7id6ghm2ef1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga68sueub7id6ghm2ef1.png" alt="installing ncurses" width="800" height="982"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the clear command should work successfully.&lt;/p&gt;

&lt;p&gt;The second step is updating the system. As Void Linux is a rolling release distribution, it gets frequent updates. You should update the system often (daily, if possible). Update it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-Syu&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-S&lt;/code&gt; flag means sync, &lt;code&gt;-y&lt;/code&gt; means yes, and &lt;code&gt;-u&lt;/code&gt; means update; combined, they are written as &lt;code&gt;-Syu&lt;/code&gt;. The system update may take some time on a poor internet connection.&lt;/p&gt;

&lt;p&gt;After the full system upgrade, install &lt;code&gt;less&lt;/code&gt; and &lt;code&gt;bash&lt;/code&gt;. Bash (the Bourne Again Shell) is not provided by default; the shell you have been running commands in is &lt;code&gt;sh&lt;/code&gt; (the Bourne Shell), the predecessor of Bash.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; less bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;💯 😎 Pro Tip&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pipe any command whose output is taller than your terminal into less. You can then scroll through the output with the arrow keys and search for words with a forward slash (/). Particularly useful when viewing a program's help text. No need to touch the mouse!&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now change the default shell from sh to bash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;chsh &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;which bash&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you exit the shell and re-enter it, you'll see that the command-line prompt string has changed, indicating that the default shell has switched from sh to bash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjdmvnyq43san89m0x3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjdmvnyq43san89m0x3z.png" alt="default $PS1 of bash" width="141" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it should look something like above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a user account
&lt;/h3&gt;

&lt;p&gt;Now it is time to create the user account that will become the default user. First, install sudo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; &lt;span class="nb"&gt;sudo&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install sudo in your voidlinux system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9jmp23dx60xl339poy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr9jmp23dx60xl339poy.png" alt="Installing sudo" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now run the command below to create a user account with a home directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &amp;lt;your-username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your-username&amp;gt;&lt;/code&gt; with the username you want.&lt;/p&gt;

&lt;p&gt;Run the command below to list all available groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxh1yfp335aj5ytf3sys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxh1yfp335aj5ytf3sys.png" alt="Listing all the currently available groups in the system" width="257" height="764"&gt;&lt;/a&gt;&lt;/p&gt;
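&lt;p&gt;Each line of /etc/group has four colon-separated fields (name:password:GID:members), so if you only want the group names, a small sketch like this trims the output:&lt;/p&gt;

```shell
# Print only the first colon-separated field of each line: the group name
cut -d: -f1 /etc/group
```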

&lt;p&gt;Now add your user to the groups you want using the &lt;code&gt;usermod&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;This is the syntax -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &amp;lt;group_name&amp;gt; &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, you can place multiple groups in place of &lt;code&gt;&amp;lt;group_name&amp;gt;&lt;/code&gt; by separating them with commas. For example, I would do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; wheel,audio,video,kvm,tty,storage,plugdev,lp,dialout,users &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To double-check, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;groups&lt;/span&gt; &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to confirm that the user was actually added to the groups we specified in the command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z8oyi54ju0hgwyzprnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z8oyi54ju0hgwyzprnx.png" alt="Displaying all the groups, the newly created user has access to" width="675" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In case you are not familiar with these groups:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Group&lt;/th&gt;
&lt;th&gt;Meaning (Use case)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;wheel&lt;/td&gt;
&lt;td&gt;Grants users the ability to execute commands as the superuser (root) using &lt;code&gt;sudo&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tty&lt;/td&gt;
&lt;td&gt;Grants access to terminal devices, if needed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;storage&lt;/td&gt;
&lt;td&gt;This group is typically used for users who need access to storage devices, like - external drives.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;audio&lt;/td&gt;
&lt;td&gt;Grants access to audio devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;video&lt;/td&gt;
&lt;td&gt;Grants access to video devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dialout&lt;/td&gt;
&lt;td&gt;Provides access to serial ports.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lp&lt;/td&gt;
&lt;td&gt;Grants access to printer devices.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kvm&lt;/td&gt;
&lt;td&gt;For users who need to manage virtual machines using KVM (Kernel-based Virtual Machine).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;plugdev&lt;/td&gt;
&lt;td&gt;Allows access to removable devices like USB drives.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;users&lt;/td&gt;
&lt;td&gt;This is a general group for regular users.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Neat! We're almost done creating a regular user account. But the last crucial step is not done yet!&lt;/p&gt;

&lt;p&gt;We need to add the user to the sudoers file so that it can gain superuser access (admin privileges) using the sudo command. For that, we should first set a password for both the regular user and root. You can set the same password for both if this WSL instance won't be touched by anyone you don't trust, and you tend to forget things.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting A Password
&lt;/h3&gt;

&lt;p&gt;To set or change a user's password, we need the &lt;code&gt;passwd&lt;/code&gt; utility installed in the system. If it is not already available, install it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; passwd &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Type &lt;code&gt;passwd&lt;/code&gt; and you'll be prompted to set the password for the root user.&lt;/p&gt;

&lt;p&gt;Next, -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;passwd &amp;lt;username&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;username&amp;gt;&lt;/code&gt; with the user we just created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding user to Sudoers File
&lt;/h3&gt;

&lt;p&gt;Now we need to edit the &lt;code&gt;sudoers&lt;/code&gt; file to properly grant superuser access to our user. For that we need a text editor. If you are comfortable with nvim, install it; otherwise install nano.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; neovim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xbps-install &lt;span class="nt"&gt;-S&lt;/span&gt; nano
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run visudo to edit the sudoers file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;EDITOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nvim &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; visudo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace nvim with nano if you want to use nano.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x42atgrs05wmz2lal3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x42atgrs05wmz2lal3x.png" alt="user privilege specification in the sudoers file" width="712" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Find the uncommented line shown in the image in your file, and add the line below underneath it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&amp;lt;username&amp;gt; &lt;span class="nv"&gt;ALL&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;ALL&lt;span class="o"&gt;)&lt;/span&gt; ALL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It would look like this -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop2pziayaplr0fjubczr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop2pziayaplr0fjubczr.png" alt="Specifying the user privilege of the new user in the sudoers file" width="385" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now save the file and exit.&lt;/p&gt;
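&lt;p&gt;For reference, the fields after the username in that line mean: the hosts the rule applies to, the users it may run commands as (in parentheses), and the commands it may run. Annotated, with a hypothetical username:&lt;/p&gt;

```
yourname  ALL=(ALL)  ALL
#  |       |    |     |
#  user   host run-as commands: yourname may run any command,
#                               as any user, from any host
```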

&lt;p&gt;Finally, set the new user as the default user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;myUsername&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;username&amp;gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"[user]&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;default=&lt;/span&gt;&lt;span class="nv"&gt;$myUsername&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/wsl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



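&lt;p&gt;After that append, /etc/wsl.conf should contain something like the following (an INI-style file WSL reads each time the distribution starts; "yourname" stands in for the username you chose):&lt;/p&gt;

```ini
# /etc/wsl.conf - read by WSL when the distribution starts
[user]
# the account WSL logs you into by default
default=yourname
```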
&lt;p&gt;Now we are all set. Exit the Linux shell and terminate the distro by running -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--terminate&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open it again with -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;wsl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Void&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, yeah! We did it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Optional Extra Configurations
&lt;/h2&gt;

&lt;p&gt;As of now, without any configuration, the prompt string (PS1) looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7pel1t6f3g3mvn3zzc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7pel1t6f3g3mvn3zzc7.png" alt="Default prompt string of the commandline in the session of the current user" width="152" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;which is really, really ugly to me. I would need to run pwd every time I wanted to check which directory I'm currently in, which absolutely sucks!&lt;/p&gt;

&lt;p&gt;So, let's change the prompt string.&lt;/p&gt;

&lt;p&gt;Open your .bashrc file (assuming you are using nvim):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nvim ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to the end of the file and add this line to change the prompt string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;PS1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;[&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[31m&lt;/span&gt;&lt;span class="se"&gt;\]\u\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[33m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]\h\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[36m&lt;/span&gt;&lt;span class="se"&gt;\]\w\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[32m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt;]&lt;/span&gt;&lt;span class="se"&gt;\[\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]\[\e&lt;/span&gt;&lt;span class="s2"&gt;[30;46m&lt;/span&gt;&lt;span class="se"&gt;\]\\&lt;/span&gt;&lt;span class="nv"&gt;$\&lt;/span&gt;&lt;span class="s2"&gt;[&lt;/span&gt;&lt;span class="se"&gt;\e&lt;/span&gt;&lt;span class="s2"&gt;[m&lt;/span&gt;&lt;span class="se"&gt;\]&lt;/span&gt;&lt;span class="s2"&gt; "&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I know the string may look like unreadable, obfuscated gibberish, but that is only because of the heavy use of ANSI escape codes here.&lt;/p&gt;
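&lt;p&gt;As a sketch, the same kind of prompt can be written more readably by naming the color fragments first (slightly simplified; it drops the colored background on the final &lt;code&gt;$&lt;/code&gt;):&lt;/p&gt;

```shell
# ANSI color fragments, wrapped in \[ \] so bash excludes them
# from its prompt-width calculation
RED='\[\e[31m\]'
GREEN='\[\e[32m\]'
YELLOW='\[\e[33m\]'
CYAN='\[\e[36m\]'
RESET='\[\e[m\]'

# \u = user, \h = host, \w = current working directory
PS1="${GREEN}[${RESET}${RED}\u${RESET}${YELLOW}@${RESET}${GREEN}\h${RESET}:${CYAN}\w${RESET}${GREEN}]${RESET}"'\$ '
```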

&lt;p&gt;Save the file, exit and do&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to make the changes take effect, and hurray! Now you have an elegant and useful prompt string before the command line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7ged46xrdroge6mk8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftj7ged46xrdroge6mk8p.png" alt="Voidlinux WSL set up with a fancy colorful prompt string" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, if you read man pages a lot, you will want the MANPATH environment variable set at startup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lwqjr7yzti3iqeqa4eh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lwqjr7yzti3iqeqa4eh.png" alt="Setting the MANPATH env variable in the .bash_profile file" width="564" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add this line (as shown in the image above)-&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;MANPATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/share/man
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you have &lt;strong&gt;man&lt;/strong&gt; and &lt;strong&gt;man-db&lt;/strong&gt; installed, you can access man pages from the command line with the &lt;strong&gt;man&lt;/strong&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keeping this WSL as a Desktop App
&lt;/h2&gt;

&lt;p&gt;As we all clearly understand, launching the WSL instance by opening any cmd shell and running &lt;code&gt;wsl -d Void&lt;/code&gt; is not a very convenient approach.&lt;/p&gt;

&lt;p&gt;Most probably, after a reboot of your PC, you'll see that a new terminal profile has been automatically added in which our Void Linux shell resides. If not, create a new profile for Void in Windows Terminal.&lt;/p&gt;

&lt;p&gt;Now go through the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Right-click on your desktop background and select the option to create a new desktop shortcut. You'll see a popup like the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx6l6e2lfdqw6tulxthm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx6l6e2lfdqw6tulxthm.png" alt="Typing in the given input area, the command/location of the file/process we are creating a desktop shortcut for" width="614" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have to enter the appropriate command in the empty input area provided; it will open the newly created Void Linux terminal profile in the default user's $HOME directory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, type &lt;code&gt;C:\Users\user\AppData\Local\Microsoft\WindowsApps\wt.exe nt -p Void --tabColor #27e336&lt;/code&gt; in there and click &lt;strong&gt;Next&lt;/strong&gt;. (Make sure the path of &lt;code&gt;wt.exe&lt;/code&gt; is correct; if &lt;code&gt;wt&lt;/code&gt;, the Windows Terminal executable, lives at a different path on your system, use that one instead.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now type a name for the shortcut. I'm giving it the name &lt;strong&gt;void&lt;/strong&gt;. Now click &lt;strong&gt;Finish&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congrats! Another milestone achieved! Now you can launch this WSL directly from your Windows search bar by typing "void". How amazing!&lt;/p&gt;

&lt;p&gt;I suggest changing the app's icon to something that's easier to spot (you may need to download an image and convert it to an .ico file if you want the Void Linux logo as the icon).&lt;/p&gt;

&lt;p&gt;This is what my desktop app looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft89g3d00keaneka33c3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft89g3d00keaneka33c3s.png" alt="Void Linux Desktop Shortcut" width="760" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's all! Really! You finally have a custom Linux distribution in your subsystem, one that isn't readily available in the MS Store or online WSL registries, configured and ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;So, how are you feeling? Tell me in the comments!&lt;/p&gt;

&lt;p&gt;It must feel chaotically good to have access to most of the cutting-edge development software and tools you need right within the terminal. One package manager to rule them all. If you are tired of getting old or outdated packages on Debian/Ubuntu, this is going to be a refreshing experience.&lt;/p&gt;

&lt;p&gt;In case you couldn't follow the steps to produce the tar file, or ran into any kind of trouble and ended up without the archive, don't worry.&lt;/p&gt;

&lt;p&gt;I am attaching the MEGA link to the Void Linux tar file I created so that you can at least try it out! ;)&lt;/p&gt;

&lt;p&gt;This is the decryption key for the MEGA file - &lt;code&gt;3uMXrmDWP6WUb6kKjzb5B0Zc-Qh1w5oLE2LbZ4lOzhA&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Thank you for giving this article a read!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mega.nz/file/XoYDyA7B#3uMXrmDWP6WUb6kKjzb5B0Zc-Qh1w5oLE2LbZ4lOzhA" rel="noopener noreferrer"&gt;Mega Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you found it useful, please consider sharing this article with your developer friends.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me :)&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Thanks for reading! 🙏🏻 &lt;br&gt; Written with 💚 by &lt;a href="https://dev.to/ddebajyati"&gt;Debajyati Dey&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;&lt;a href="https://github.com/Debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tu7kfqhw7z1yzmng4ah.png" alt="My GitHub" width="40" height="39"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://www.linkedin.com/in/debajyati-dey/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femp5sh8d4fq0g89lqsia.png" alt="My LinkedIn" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://app.daily.dev/debajyatidey" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20akag0pdeq95u76k9e8.png" alt="My Daily.dev" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://peerlist.io/debajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscfsnjdwyhm803f7mlv.png" alt="My Peerlist" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th&gt;&lt;a href="https://x.com/ddebajyati" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0265bz6hmdfybuw0a605.png" alt="My Twitter" width="40" height="40"&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Linux users are hackers! Happy Hacking! 🐱‍💻&lt;/p&gt;

</description>
      <category>linux</category>
      <category>docker</category>
      <category>bash</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>How to Create a Number-Guessing Game in Python</title>
      <dc:creator>Sophia Iroegbu</dc:creator>
      <pubDate>Tue, 26 Nov 2024 12:59:21 +0000</pubDate>
      <link>https://dev.to/studio1hq/how-to-create-a-number-guessing-game-in-python-3kbd</link>
      <guid>https://dev.to/studio1hq/how-to-create-a-number-guessing-game-in-python-3kbd</guid>
      <description>&lt;p&gt;Hello there! 👋&lt;/p&gt;

&lt;p&gt;In this guide, you will learn how to build a number-guessing game using basic Python concepts, such as loops, if-else statements, handling inputs, and more. This is inspired by the &lt;a href="https://roadmap.sh/projects/number-guessing-game" rel="noopener noreferrer"&gt;Number guessing game project&lt;/a&gt; in the Roadmap projects section.&lt;/p&gt;

&lt;p&gt;Let’s get started! 😎&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the game’s function
&lt;/h2&gt;

&lt;p&gt;We need to create a function that generates the random number for the user to guess, using Python’s random module.&lt;/p&gt;

&lt;p&gt;Start by importing the module, then create the function. Use &lt;code&gt;random.randint()&lt;/code&gt; to generate a random number between 1 and 100; this is the number the player has to guess, and it is stored in the variable &lt;code&gt;number_to_guess&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, set the &lt;code&gt;guessed_correctly&lt;/code&gt; variable to &lt;code&gt;False&lt;/code&gt; (it will be flipped to &lt;code&gt;True&lt;/code&gt; to stop the game once the player guesses the right number), and add an &lt;code&gt;attempts_limit&lt;/code&gt; parameter to make the game more challenging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;number_guessing_game&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attempts_limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;number_to_guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
  &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
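
&lt;p&gt;A quick note on &lt;code&gt;random.randint()&lt;/code&gt;: unlike &lt;code&gt;range()&lt;/code&gt; or &lt;code&gt;random.randrange()&lt;/code&gt;, both bounds are inclusive, so 1 and 100 are both possible targets. A small sketch to verify this (the seed value here is an arbitrary choice of mine, just to make the draws reproducible):&lt;/p&gt;

```python
import random

random.seed(42)  # fix the seed so the draws are reproducible

# Draw many samples from a tiny range; randint includes BOTH endpoints,
# so every value from 1 to 3 shows up.
values = {random.randint(1, 3) for _ in range(1000)}
print(sorted(values))
```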



&lt;p&gt;Next, use a print statement to welcome your player with some messages and instructions on how to play the game. You can customize this to your preference.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NB: This should be within the function.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Welcome to Number guessing game&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I have selected a number from 1-100, can you guess it?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the guessing loop
&lt;/h2&gt;

&lt;p&gt;Next, we need to create the core of the game. This will manage the loop that continues until the player guesses the number correctly or reaches their guess limit.&lt;/p&gt;

&lt;p&gt;Start with a while loop that reads the player's guess and increases the attempt count on each try. If the player doesn't guess correctly, they get another turn as long as they haven't exceeded their limit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;attempts_limit&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
     &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please add your guess: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
     &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, use an if-else statement to compare the player's guess with the random number and provide feedback. If the guess is lower than the correct number, print "too low." If it's too high, print "too high." If it matches the correct number, set &lt;code&gt;guessed_correctly&lt;/code&gt; to True, break the loop, and print a congratulations message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too low!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Congratulations, you guessed the number in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;attempts&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; attempts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
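
&lt;p&gt;The feedback logic above is a pure comparison, which makes it easy to unit-test if you factor it into a small helper. This refactoring is my own suggestion, not part of the article's code:&lt;/p&gt;

```python
def evaluate_guess(guess, number_to_guess):
    # Mirrors the article's if/elif/else feedback branches as a pure function
    if guess < number_to_guess:
        return "Too low!"
    if guess > number_to_guess:
        return "Too high"
    return "Correct"

print(evaluate_guess(10, 50))
print(evaluate_guess(80, 50))
print(evaluate_guess(50, 50))
```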



&lt;p&gt;Next, let's add an extra layer of error handling. Users can be unpredictable, and many might try to break your program. For example, if a player decides to use a letter or a decimal number to guess, the program will stop unexpectedly. That's why we need this extra layer.&lt;/p&gt;

&lt;p&gt;Using a try-except block, we can catch such an error. The game should only accept whole numbers; if the player enters anything else, it prints an error message and asks again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;attempts_limit&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Please add your guess: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="n"&gt;attempts&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;

      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too low!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;guess&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Too high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;guessed_correctly&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Congratulations, you guessed the number in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;attempts&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; attempts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Oops! This is not a valid number, please a whole number&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
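
&lt;p&gt;To see why the try-except is needed: &lt;code&gt;int()&lt;/code&gt; raises &lt;code&gt;ValueError&lt;/code&gt; for anything that isn't a whole number, including letters and decimal strings. A small sketch (&lt;code&gt;parse_guess&lt;/code&gt; is a hypothetical helper of mine mirroring the game's input handling, not part of the article's code):&lt;/p&gt;

```python
def parse_guess(text):
    # Hypothetical helper: returns an int, or None when the text
    # isn't a whole number (the same case the game's except branch catches).
    try:
        return int(text)
    except ValueError:
        return None

print(parse_guess("42"))   # a whole number parses fine
print(parse_guess("abc"))  # letters raise ValueError inside int()
print(parse_guess("3.5"))  # even decimal strings raise ValueError
```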



&lt;p&gt;Once that's done, we move on to the final step. If the player runs out of guesses and hasn't guessed the correct number, display a message saying "Game over" and inform them that they are out of guesses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;guessed_correctly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are out of guesses, the correct guess was &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;number_to_guess&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Game over, Thanks for playing!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing your game
&lt;/h2&gt;

&lt;p&gt;Now that’s all done! Let’s test our game and see if it works. Also, remember to call your function at the bottom of the file to run the program.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;number_guessing_game&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
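
&lt;p&gt;For reference, here is the complete program assembled from the fragments above. Two small additions of my own, not in the article, so the game can also be driven programmatically instead of interactively: an optional &lt;code&gt;input_fn&lt;/code&gt; parameter that defaults to the built-in &lt;code&gt;input&lt;/code&gt;, and a &lt;code&gt;return guessed_correctly&lt;/code&gt; at the end:&lt;/p&gt;

```python
import random

def number_guessing_game(attempts_limit=7, input_fn=input):
    number_to_guess = random.randint(1, 100)
    guessed_correctly = False
    attempts = 0

    print("Welcome to Number guessing game")
    print("I have selected a number from 1-100, can you guess it?")

    while attempts < attempts_limit and not guessed_correctly:
        try:
            guess = int(input_fn("Please add your guess: "))
            attempts += 1

            if guess < number_to_guess:
                print("Too low!")
            elif guess > number_to_guess:
                print("Too high")
            else:
                guessed_correctly = True
                print(f"Congratulations, you guessed the number in {attempts} attempts")
        except ValueError:
            print("Oops! This is not a valid number, please enter a whole number")

    if not guessed_correctly:
        print(f"You are out of guesses, the correct number was {number_to_guess}")

    print("Game over, Thanks for playing!")
    return guessed_correctly  # my addition: report the outcome to the caller

# Drive the game with scripted guesses instead of interactive input:
# trying every number from 1 to 100 is guaranteed to hit the target.
scripted = iter(map(str, range(1, 101)))
won = number_guessing_game(attempts_limit=100, input_fn=lambda prompt: next(scripted))
```

&lt;p&gt;When playing normally, you would simply call &lt;code&gt;number_guessing_game()&lt;/code&gt; at the bottom of the file, exactly as in the article.&lt;/p&gt;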



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytvfo0x8gpl5yf7kuua3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytvfo0x8gpl5yf7kuua3.png" alt=" " width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! A simple yet fun game in Python that you can create as a beginner. I hope this helps you become more comfortable with key programming concepts like loops, conditionals, and random numbers. &lt;/p&gt;

&lt;p&gt;The source code can be found &lt;a href="https://github.com/Sophyia7/Python-Tutorials" rel="noopener noreferrer"&gt;here&lt;/a&gt;. If you prefer the video version of this guide, check it out:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MrTWan2td28"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tutorial</category>
      <category>python</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
