<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rohan Sharma</title>
    <description>The latest articles on DEV Community by Rohan Sharma (@rohan_sharma).</description>
    <link>https://dev.to/rohan_sharma</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1936949%2Fa1fd5434-8c99-4531-9491-2d117d2e6996.jpg</url>
      <title>DEV Community: Rohan Sharma</title>
      <link>https://dev.to/rohan_sharma</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rohan_sharma"/>
    <language>en</language>
    <item>
      <title>My Docs Are Safer Than My Search History 😎</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Sun, 15 Feb 2026 10:20:01 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/my-docs-are-safer-than-my-search-history-5e4f</link>
      <guid>https://dev.to/rohan_sharma/my-docs-are-safer-than-my-search-history-5e4f</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-copilot-cli"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;: Build with AI&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Hey there! Welcome back. This is my latest project that I'm super excited to share with you!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Remember that one time your friend asked you to "just quickly check" their Google Doc, and you spent 20 minutes figuring out if you had view-only or editing access? Or when your team's important document got accidentally shared with the entire internet? Yeah, me too. 😅&lt;/p&gt;

&lt;p&gt;That's why I built &lt;strong&gt;Radhika's AI DocManager&lt;/strong&gt;, a document management system that doesn't mess around when it comes to security, roles, and AI-powered features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kp6onz4p0im0d9de97o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kp6onz4p0im0d9de97o.png" alt="banner"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Live Demo&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/" rel="noopener noreferrer"&gt;radhika-docmanager.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  What's This All About? 🤔
&lt;/h2&gt;

&lt;p&gt;Imagine you and your partner have a shared notebook. But here's the twist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You&lt;/strong&gt; can write anything you want in your sections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your partner&lt;/strong&gt; can only read some sections (View Only)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your partner&lt;/strong&gt; can comment on other sections but not edit them (Comment)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Some pages&lt;/strong&gt; are locked with a password because they contain surprise party plans 🎉&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's basically what this project does, but for teams and organizations!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question for you:&lt;/strong&gt; &lt;em&gt;Have you ever accidentally deleted someone else's important document or had your document deleted by someone? How did that go? 😬&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  The Cool Features That'll Make You Go "Woah!" 🚀
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1️⃣ Four Roles, Four Levels of Trust
&lt;/h3&gt;

&lt;p&gt;Think of roles like relationship stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;User&lt;/strong&gt; (Dating Stage): You can only see and manage your own stuff. Can't touch anyone else's documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt; (Committed Relationship): You can see everything in your organization and manage your team members. You're the responsible one now!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Super Admin&lt;/strong&gt; (Marriage Level): Like Admin but with superpowers! You approve who joins your org, can promote people to Admin, and have elevated privileges. But you're still tied to your organization!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;God&lt;/strong&gt; (The Parent): Full control over the ENTIRE platform across ALL organizations. Can post documents to ALL organizations at once. The ONLY role with cross-org access! Ultimate power! 💪

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Important Note&lt;/strong&gt;: God has read access to all documents across organizations for platform management, but the primary focus is on public documents and cross-org coordination. Organizations still maintain their privacy for internal operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each role has a "weight" number. Higher weight = more authority. Just like how your mom outranks you when deciding what's for dinner! 😄&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Only God can see across organizations. Super Admin, Admin, and User are all scoped to their own organization!&lt;/p&gt;
&lt;h3&gt;
  
  
  2️⃣ Document Security That Actually Makes Sense
&lt;/h3&gt;

&lt;p&gt;Your documents can have different classification levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public&lt;/strong&gt;: Everyone can see it (like your Instagram story)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organization&lt;/strong&gt;: Only your team can see it (like your company Slack)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal&lt;/strong&gt;: More restricted (like your team's strategy docs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidential&lt;/strong&gt;: Top secret stuff (like your salary slip)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General&lt;/strong&gt;: The default, casual classification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus, you can set &lt;strong&gt;access levels&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;View Only&lt;/strong&gt;: Read-only, no comments allowed (like when your partner says "just look, don't touch")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comment&lt;/strong&gt;: Can read and comment but not edit (like leaving sticky notes on a physical document)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edit&lt;/strong&gt;: Can make changes to the content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full Access&lt;/strong&gt;: Complete control (the relationship goals)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Want to add extra protection?&lt;/strong&gt; Lock any document with a 9-digit password! 🔐&lt;/p&gt;
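&lt;p&gt;A quick sketch of what that lock's format check could look like (the helper name is illustrative, not the app's actual code; per the Security Layers section below, the accepted password is then bcrypt-hashed, never stored as-is):&lt;/p&gt;

```typescript
// Hypothetical pre-flight check for the 9-digit document lock; the helper name
// is illustrative. The accepted password would then be bcrypt-hashed before it
// is ever stored.
function isValidDocPassword(input: string): boolean {
  return /^\d{9}$/.test(input); // exactly nine digits, nothing else
}
```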
&lt;h3&gt;
  
  
  3️⃣ AI That Works For YOU (Not For Big Tech)
&lt;/h3&gt;

&lt;p&gt;Here's the thing: I hate vendor lock-in. You know what's worse than a bad breakup? Being forced to stay with a service because you can't leave!&lt;/p&gt;

&lt;p&gt;That's why Radhika's AI DocManager lets you &lt;strong&gt;bring your own API keys&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Groq&lt;/strong&gt; (FREE tier available! Fast and perfect for getting started)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI&lt;/strong&gt; (Premium quality)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic&lt;/strong&gt; (Great for long documents)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your keys are encrypted with &lt;strong&gt;AES-256-GCM&lt;/strong&gt;, an authenticated encryption mode trusted industry-wide. Even if someone breaks into the database, your keys are safer than your ex's secrets in your DMs. 🤫&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The AI can:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarize documents (TL;DR generator)&lt;/li&gt;
&lt;li&gt;Analyze sentiment (is this doc angry or happy?)&lt;/li&gt;
&lt;li&gt;Extract key points (bullet points anyone?)&lt;/li&gt;
&lt;li&gt;Improve writing (make it sound professional)&lt;/li&gt;
&lt;li&gt;Translate content (hola, bonjour, namaste!)&lt;/li&gt;
&lt;li&gt;Generate Q&amp;amp;A (instant study guide)&lt;/li&gt;
&lt;li&gt;Custom prompts (ask it anything!)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; You also get FREE local tools that work without any API key — Word Count, Structure Analysis, and Text Preview. All running in your browser!&lt;/p&gt;
&lt;h3&gt;
  
  
  4️⃣ Organizations That Don't Mix Like Oil and Water
&lt;/h3&gt;

&lt;p&gt;Multiple organizations, complete data isolation. Think of it like this:&lt;/p&gt;

&lt;p&gt;You have three friend groups:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;College friends (Acme Corp)&lt;/li&gt;
&lt;li&gt;Work friends (Globex Inc)&lt;/li&gt;
&lt;li&gt;Gym friends (Initech LLC)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each group has its own private WhatsApp group. Nobody from college friends can see what's happening in your work friends group. That's exactly how organizations work here!&lt;/p&gt;

&lt;p&gt;To join an organization, you need an &lt;strong&gt;Organization Code&lt;/strong&gt; (like a secret club password). A Super Admin must approve your membership request. No random people crashing your party! 🎊&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  How GitHub Copilot CLI Supercharged My Development 🚀
&lt;/h2&gt;

&lt;p&gt;Okay, confession time: Building this project would have taken me MUCH longer without GitHub Copilot CLI. Let me tell you how it became my coding partner!&lt;/p&gt;
&lt;h3&gt;
  
  
  What is GitHub Copilot CLI?
&lt;/h3&gt;

&lt;p&gt;Think of it as having a really smart friend who sits in your terminal and helps you with commands, debugging, and understanding code. You just talk to it in natural language!&lt;/p&gt;
&lt;h3&gt;
  
  
  How I Used It in This Project
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Generating the Complete Supabase Schema&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was HUGE. Instead of manually writing hundreds of lines of SQL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"generate supabase schema for document management system with organizations, users, documents, comments, and audit logs with proper foreign keys and indexes"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It gave me a complete schema structure! I just had to customize it for my needs. Saved hours of work!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Setting Up Row Level Security (RLS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Supabase RLS policies are tricky. I asked:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"create row level security policy for organization isolation in supabase"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It generated the exact SQL I needed to ensure users can only see data from their organization!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Database Schema Debugging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When my foreign key constraints weren't working:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot explain &lt;span class="s2"&gt;"Why is my foreign key constraint failing between documents and profiles?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Boom! Got the answer instantly and fixed the relationship properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Creating Storage Buckets with Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Setting up Supabase storage:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"create supabase storage bucket for documents with 50MB limit and access policies"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Got the complete SQL for buckets AND storage policies. No more digging through docs!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Complex Git Operations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Had to rebase multiple commits with conflicting changes:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"rebase last 5 commits and squash them into one"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Got the exact git commands I needed. No more Stack Overflow!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. TypeScript Type Errors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When dealing with complex Supabase types:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot explain &lt;span class="s2"&gt;"Cannot find name 'UserRole' in this scope"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Instantly told me I needed to import from &lt;code&gt;@/lib/supabase/types&lt;/code&gt;. No more hunting through files!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Debugging Permission Logic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When my role-based access control wasn't working:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot explain &lt;span class="s2"&gt;"why is my outranks function returning false for admin checking user role"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Helped me understand the weight comparison logic and fix the bug in minutes!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. File Upload Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Struggled with Supabase storage upload with progress:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"upload file to supabase storage bucket with progress tracking and error handling"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Got complete working code with progress bars and proper error handling!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Understanding bcrypt Hashing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When implementing password security:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot explain &lt;span class="s2"&gt;"difference between bcrypt compare and hash and when to use each"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Clear explanation that helped me implement secure authentication correctly!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Deployment to Vercel&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deploying with all environment variables:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot suggest &lt;span class="s2"&gt;"deploy next.js app to vercel with environment variables from .env file"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Got the proper CLI commands with all the flags needed!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Testing Database Queries&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When testing complex SQL with multiple JOINs:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh copilot explain &lt;span class="s2"&gt;"how to test row level security policies in Supabase without deploying"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It explained the whole local testing process step by step!&lt;/p&gt;

&lt;p&gt;And here's the kicker: I even implemented most of the frontend with it as well. 😎&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Copilot CLI is a Game Changer
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Context Switching&lt;/strong&gt;: Stay in your terminal; no need to open a browser&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Natural Language&lt;/strong&gt;: Ask questions like you'd ask a friend&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project-Aware&lt;/strong&gt;: It understands your codebase context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instant Answers&lt;/strong&gt;: Faster than googling and reading 10 different answers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real Talk&lt;/strong&gt;: I probably saved 10-15 hours of googling, debugging, and trial-and-error just by having Copilot CLI help me with terminal commands, git operations, and understanding error messages.&lt;/p&gt;

&lt;p&gt;If you're not using it yet, you're missing out! It's like having a senior developer on speed dial. 🎯&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  The Tech Magic Behind the Curtain 🎩✨
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;(Don't worry, I'll keep it light!)&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Built With Love Using:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js 14&lt;/strong&gt; (App Router)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt; (PostgreSQL database + storage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript&lt;/strong&gt; (because typos are for noobs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind CSS + shadcn/ui&lt;/strong&gt; (for that crispy dark mode 🌙)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bun runtime&lt;/strong&gt; (faster than your morning coffee hitting your system)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Security Layers:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Passwords&lt;/strong&gt;: bcrypt hashing (can't crack it even if you try)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Keys&lt;/strong&gt;: AES-256-GCM encryption (Fort Knox level)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Passwords&lt;/strong&gt;: Another layer of bcrypt (double protection!)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organization Isolation&lt;/strong&gt;: Complete data separation (no mixing allowed)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Document Support:
&lt;/h3&gt;

&lt;p&gt;Upload pretty much anything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PDF (with embedded viewer)&lt;/li&gt;
&lt;li&gt;Word docs (auto text extraction!)&lt;/li&gt;
&lt;li&gt;Plain text, CSV, Markdown&lt;/li&gt;
&lt;li&gt;HTML, JSON, RTF, ODT&lt;/li&gt;
&lt;li&gt;Even Excel and PowerPoint (why not?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;File size limit:&lt;/strong&gt; 50 MB per document. That's like... a LOT of cat pictures! 🐱&lt;/p&gt;
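&lt;p&gt;An upload gate for the list above might look roughly like this (the extension list is assembled from the formats mentioned, and the return codes are assumptions, not the app's actual code):&lt;/p&gt;

```typescript
// Illustrative upload gate for the 50 MB limit and the formats listed above.
// The extension list and return codes are assumptions, not the app's real code.
function checkUpload(fileName: string, sizeBytes: number): string {
  const MAX_BYTES = 50 * 1024 * 1024; // 50 MB per document
  const allowed = ["pdf", "docx", "txt", "csv", "md", "html", "json", "rtf", "odt", "xlsx", "pptx"];
  const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
  if (sizeBytes > MAX_BYTES) return "too-large";
  if (!allowed.includes(ext)) return "unsupported-type";
  return "ok";
}
```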

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  The Boyfriend-Girlfriend Analogy That'll Make You Understand Permissions 💑
&lt;/h2&gt;

&lt;p&gt;Let's say you and your partner are working on planning a surprise party:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Role&lt;/strong&gt; (You):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can create your own to-do list&lt;/li&gt;
&lt;li&gt;You can only see your own tasks&lt;/li&gt;
&lt;li&gt;You can't see or touch your partner's secret guest list&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Admin Role&lt;/strong&gt; (Your Partner who's more organized):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can see both your lists (within your organization)&lt;/li&gt;
&lt;li&gt;Can delete tasks from User-level people&lt;/li&gt;
&lt;li&gt;Can manage who's invited to the planning team&lt;/li&gt;
&lt;li&gt;But can only see YOUR organization's party, not other orgs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Super Admin Role&lt;/strong&gt; (The Senior Party Planner):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Like Admin but can also approve new members joining YOUR organization&lt;/li&gt;
&lt;li&gt;Can promote people to Admin within your org&lt;/li&gt;
&lt;li&gt;Has elevated privileges for your organization&lt;/li&gt;
&lt;li&gt;But still can't see OTHER organizations' parties (that's God's job!)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;God Role&lt;/strong&gt; (The Person Whose Birthday It Is):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knows about ALL surprise parties EVERYWHERE (cross-org access!)&lt;/li&gt;
&lt;li&gt;Can access any planning doc in any organization for platform management&lt;/li&gt;
&lt;li&gt;Can post announcements to all party groups at once&lt;/li&gt;
&lt;li&gt;Bypasses all password locks (it's their birthday, after all!)&lt;/li&gt;
&lt;li&gt;The ONLY role that can see across all organizations!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;But remember&lt;/strong&gt;: God is the platform administrator, not Big Brother watching everything. The focus is on managing public documents and cross-org coordination, while respecting organizational privacy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See? Not so complicated! 😊&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;
&lt;h3&gt;
  
  
  For Teams:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Store company policies (Organization classification)&lt;/li&gt;
&lt;li&gt;Share meeting notes (Comment access)&lt;/li&gt;
&lt;li&gt;Collaborate on proposals (Edit access)&lt;/li&gt;
&lt;li&gt;Lock sensitive HR docs (Password protection)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  For Content Creators:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Draft blog posts (Draft status)&lt;/li&gt;
&lt;li&gt;Get feedback from editors (Assign reviewers)&lt;/li&gt;
&lt;li&gt;Publish final versions (Published status)&lt;/li&gt;
&lt;li&gt;Archive old content (Archived status)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  For Students:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Store class notes&lt;/li&gt;
&lt;li&gt;Collaborate on group projects&lt;/li&gt;
&lt;li&gt;Share study guides&lt;/li&gt;
&lt;li&gt;Keep research papers organized&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Question for you:&lt;/strong&gt; &lt;em&gt;What would YOU use this for? I'd love to hear your use case! Drop it in the comments! 💭&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  The Journey (AKA The Fun Part) 🎢
&lt;/h2&gt;

&lt;p&gt;Building this was like assembling IKEA furniture while blindfolded. Here's what I learned:&lt;/p&gt;
&lt;h3&gt;
  
  
  Challenge #1: Role-Based Access Control
&lt;/h3&gt;

&lt;p&gt;Creating a system where User &amp;lt; Admin &amp;lt; Super Admin &amp;lt; God without breaking everything? HARD. I used a "weight" system (User = 10, Admin = 50, Super Admin = 75, God = 100). Simple math, complex implications!&lt;/p&gt;
&lt;h3&gt;
  
  
  Challenge #2: Organization Isolation
&lt;/h3&gt;

&lt;p&gt;Making sure Acme Corp never accidentally sees Globex Inc's documents? I had to filter EVERYTHING by organization. Every. Single. Query.&lt;/p&gt;
&lt;h3&gt;
  
  
  Challenge #3: Encryption That Doesn't Break
&lt;/h3&gt;

&lt;p&gt;Encrypting API keys is easy. Making sure you can decrypt them later? That's the trick! Used AES-256-GCM with unique IVs for each key. Sounds fancy, works perfectly!&lt;/p&gt;
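&lt;p&gt;Here's a minimal sketch of that encrypt-then-decrypt roundtrip using Node's built-in crypto. The function names, the 12-byte IV size, and the hex encoding are illustrative choices, not the project's actual code:&lt;/p&gt;

```typescript
// AES-256-GCM with a unique IV per key, via Node's built-in crypto module.
// Names, IV size, and encodings are illustrative, not the project's real code.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

interface EncryptedKey { iv: string; data: string; tag: string; }

function encryptKey(plaintext: string, masterKey: Buffer): EncryptedKey {
  const iv = randomBytes(12); // fresh IV every time: never reuse an IV with GCM
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const data = cipher.update(plaintext, "utf8", "hex") + cipher.final("hex");
  const tag = cipher.getAuthTag().toString("hex"); // tamper-detection tag
  return { iv: iv.toString("hex"), data, tag };
}

function decryptKey(box: EncryptedKey, masterKey: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", masterKey, Buffer.from(box.iv, "hex"));
  decipher.setAuthTag(Buffer.from(box.tag, "hex")); // decryption fails if tampered
  return decipher.update(box.data, "hex", "utf8") + decipher.final("utf8");
}
```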
&lt;h3&gt;
  
  
  Challenge #4: God's Multi-Org Publishing
&lt;/h3&gt;

&lt;p&gt;When God creates a document for "All Orgs", the system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Uploads the file ONCE to storage&lt;/li&gt;
&lt;li&gt;Creates a document record for EACH organization&lt;/li&gt;
&lt;li&gt;All records point to the same file&lt;/li&gt;
&lt;li&gt;When God changes the status, ALL copies update together&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's like posting on all your social media at once, but harder!&lt;/p&gt;
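&lt;p&gt;The four steps above can be sketched as a pair of pure functions (the &lt;code&gt;DocRecord&lt;/code&gt; shape and the function names here are illustrative assumptions, not the app's actual code):&lt;/p&gt;

```typescript
// Sketch of the multi-org publish: one stored file, one record per org, and a
// status change that touches every record sharing that file. The DocRecord
// shape and function names are illustrative assumptions.
interface DocRecord { orgId: string; title: string; filePath: string; status: string; }

function fanOutDocument(orgIds: string[], title: string, filePath: string): DocRecord[] {
  // The file was uploaded once; every record points at the same storage path.
  return orgIds.map((orgId) => ({ orgId, title, filePath, status: "published" }));
}

function updateStatusEverywhere(records: DocRecord[], filePath: string, status: string): DocRecord[] {
  // When the status changes, all copies that share the file update together.
  return records.map((r) => (r.filePath === filePath ? { ...r, status } : r));
}
```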

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  The Tech Implementation (For My Developer Friends)
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Permission Checking:
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;outranks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleA&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;roleB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getRoleWeight&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleA&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getRoleWeight&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleB&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;isAtLeast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleA&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;roleB&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getRoleWeight&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleA&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="nf"&gt;getRoleWeight&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;roleB&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Simple, elegant, effective!&lt;/p&gt;
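&lt;p&gt;Those helpers lean on &lt;code&gt;getRoleWeight&lt;/code&gt;, which isn't shown. A minimal sketch using the weights quoted in the post (User = 10, Admin = 50, Super Admin = 75, God = 100); the role-key spellings here are assumptions:&lt;/p&gt;

```typescript
// Minimal sketch of getRoleWeight using the weights stated in the post.
// The role-key spellings are assumptions, not the app's actual identifiers.
function getRoleWeight(role: string): number {
  const weights: { [key: string]: number } = {
    user: 10,
    admin: 50,
    super_admin: 75,
    god: 100,
  };
  return weights[role] ?? 0; // unknown roles outrank nothing
}
```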
&lt;h3&gt;
  
  
  Document Deletion Logic:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;User can delete: Own docs only&lt;/li&gt;
&lt;li&gt;Admin can delete: Own + User docs (Admin outranks User)&lt;/li&gt;
&lt;li&gt;Super Admin can delete: Own + User + Admin docs&lt;/li&gt;
&lt;li&gt;God can delete: Own + any public document&lt;/li&gt;
&lt;/ul&gt;
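&lt;p&gt;Those deletion rules boil down to one check, sketched here with inline weights so it stands alone (function name, role keys, and the document shape are illustrative assumptions):&lt;/p&gt;

```typescript
// Sketch of the deletion rules above; names and role keys are assumptions.
// Note God is special-cased: own docs plus any PUBLIC doc, per the post.
function canDelete(
  actorRole: string,
  actorId: string,
  doc: { ownerId: string; ownerRole: string; isPublic: boolean },
): boolean {
  const weight: { [key: string]: number } = { user: 10, admin: 50, super_admin: 75, god: 100 };
  if (doc.ownerId === actorId) return true; // everyone can delete their own docs
  if (actorRole === "god") return doc.isPublic; // God: own docs plus any public doc
  return (weight[actorRole] ?? 0) > (weight[doc.ownerRole] ?? 0); // outranking within the org
}
```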
&lt;h3&gt;
  
  
  AI Action Flow:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;User clicks an AI action&lt;/li&gt;
&lt;li&gt;System decrypts their API key in memory (never stored decrypted!)&lt;/li&gt;
&lt;li&gt;Sends document content to AI provider&lt;/li&gt;
&lt;li&gt;Returns result to user&lt;/li&gt;
&lt;li&gt;Result NOT stored (privacy first!)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Organization Membership:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;New user registers with Organization Code&lt;/li&gt;
&lt;li&gt;Account created with "pending" status&lt;/li&gt;
&lt;li&gt;Super Admin approves or rejects&lt;/li&gt;
&lt;li&gt;If approved, user gets full access&lt;/li&gt;
&lt;li&gt;If rejected, user can't log in&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Clean workflow, no confusion!&lt;/p&gt;
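&lt;p&gt;That five-step workflow is essentially a tiny state machine; a sketch (type and function names here are illustrative, not the app's actual code):&lt;/p&gt;

```typescript
// The membership approval workflow as a tiny state machine; type and function
// names are illustrative, not the app's actual code.
type MemberStatus = "pending" | "approved" | "rejected";

function reviewMembership(current: MemberStatus, decision: "approve" | "reject"): MemberStatus {
  if (current !== "pending") return current; // only pending requests get reviewed
  return decision === "approve" ? "approved" : "rejected";
}

function canLogIn(status: MemberStatus): boolean {
  return status === "approved"; // rejected (and still-pending) users can't log in
}
```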

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Try It Yourself!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live Demo&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/" rel="noopener noreferrer"&gt;https://radhika-docmanager.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/docs" rel="noopener noreferrer"&gt;https://radhika-docmanager.vercel.app/docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to test it locally? Here's the speed run:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo from &lt;a href="https://github.com/RS-labhub/AI-DocManager" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Set up Supabase project (free tier)&lt;/li&gt;
&lt;li&gt;Copy &lt;code&gt;.env.example&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; and fill in your credentials&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;bun install &amp;amp;&amp;amp; bun dev&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Visit &lt;code&gt;http://localhost:3000/api/seed&lt;/code&gt; to get demo accounts&lt;/li&gt;
&lt;li&gt;Log in and start creating documents!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Demo accounts&lt;/strong&gt; (all use password &lt;code&gt;Password123!&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;god@system.local&lt;/code&gt; - God role (platform-wide access!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;superadmin@acme.com&lt;/code&gt; - Super Admin (approve memberships!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;admin@acme.com&lt;/code&gt; - Admin role (manage your team!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;user1@acme.com&lt;/code&gt; - Regular user (the everyday experience)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Or try the live app&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/" rel="noopener noreferrer"&gt;radhika-docmanager.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://radhika-docmanager.vercel.app/docs" rel="noopener noreferrer"&gt;full documentation&lt;/a&gt; for setup details!&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  What's Special About This Project?
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Security First
&lt;/h3&gt;

&lt;p&gt;Most projects add security as an afterthought. Here it was built in from day one. Encryption, hashing, isolation — the works!&lt;/p&gt;
&lt;h3&gt;
  
  
  2. No Vendor Lock-In
&lt;/h3&gt;

&lt;p&gt;Your API keys, your choice. Switch providers anytime without losing data.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Real Enterprise Features
&lt;/h3&gt;

&lt;p&gt;Multi-org support, approval workflows, audit logs, reviewer assignments — this isn't a toy project!&lt;/p&gt;
&lt;h3&gt;
  
  
  4. Actually Good UX
&lt;/h3&gt;

&lt;p&gt;Dark mode that doesn't hurt your eyes. Clean interface. Logical workflows. I actually USED it while building it!&lt;/p&gt;
&lt;h3&gt;
  
  
  5. Named After Someone Special
&lt;/h3&gt;

&lt;p&gt;Radhika's DocManager is named after Radhika Sharma. Built by Rohan Sharma (yes, it's me). Want to know more about Radhika? Find the secret page in my portfolio. ❤️&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://rohansrma.vercel.app/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frohansrma.vercel.app%2Fog-image.jpg" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://rohansrma.vercel.app/" rel="noopener noreferrer" class="c-link"&gt;
            Rohan Sharma - Software Developer, Professional Blog Writer and UI/UX Designer
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Explore the portfolio of Rohan Sharma, featuring cutting-edge software projects, insightful blogs, and creative UI/UX work.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frohansrma.vercel.app%2Ffavicon.ico"&gt;
          rohansrma.vercel.app
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;h3&gt;
  
  
  6. Built with GitHub Copilot CLI
&lt;/h3&gt;

&lt;p&gt;The entire development process was supercharged by GitHub Copilot CLI, from debugging complex database queries to writing deployment scripts. It's like pair programming with an AI! 🤖&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Project for the GitHub Copilot CLI Challenge? 🏆
&lt;/h2&gt;

&lt;p&gt;This challenge is all about showcasing how GitHub Copilot CLI enhances the development process, and boy, did it ever!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;: Build an AI-powered, multi-tenant document management system with enterprise-grade security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Reality&lt;/strong&gt;: That's A LOT of complexity: database schemas, encryption, role hierarchies, file uploads, organization isolation, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;: GitHub Copilot CLI became my development companion, helping me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debug complex SQL queries and foreign key constraints&lt;/li&gt;
&lt;li&gt;Generate secure encryption keys and understand crypto operations&lt;/li&gt;
&lt;li&gt;Navigate git operations when managing multiple feature branches&lt;/li&gt;
&lt;li&gt;Understand error messages and fix bugs faster&lt;/li&gt;
&lt;li&gt;Write deployment scripts and environment configurations&lt;/li&gt;
&lt;/ul&gt;
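&lt;p&gt;As a taste of the encryption-key task, here's a generic sketch (not DocManager's actual code) of generating a 256-bit key with Node's built-in crypto module, ready to drop into an environment variable:&lt;/p&gt;

```typescript
// Generate a 256-bit encryption key as a hex string (a generic sketch,
// not DocManager's actual code). 32 random bytes -> 64 hex characters.
import { randomBytes } from "node:crypto";

const key = randomBytes(32).toString("hex");
console.log(`ENCRYPTION_KEY=${key}`);
```

&lt;p&gt;Paste the printed value into your &lt;code&gt;.env&lt;/code&gt; file and never commit it.&lt;/p&gt;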

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;: A production-ready application deployed at &lt;a href="https://radhika-docmanager.vercel.app/" rel="noopener noreferrer"&gt;radhika-docmanager.vercel.app&lt;/a&gt; with features that would normally take months to build!&lt;/p&gt;

&lt;p&gt;Without Copilot CLI, I would have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Written the entire Supabase schema by hand instead of generating it from a single prompt describing my data model&lt;/li&gt;
&lt;li&gt;Created Row Level Security policies for organization isolation manually&lt;/li&gt;
&lt;li&gt;Spent hours googling obscure error messages&lt;/li&gt;
&lt;li&gt;Made security mistakes in the encryption implementation&lt;/li&gt;
&lt;li&gt;Struggled with git conflicts during feature merges&lt;/li&gt;
&lt;li&gt;Wasted time reading documentation for every command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've even implemented most of the frontend with it. Ehehe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead&lt;/strong&gt;, I focused on building features and solving real problems while Copilot CLI handled the "how do I do this?" questions instantly.&lt;/p&gt;

&lt;p&gt;That's the power of AI meeting the command line! 🚀&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Building Radhika's AI DocManager was a rollercoaster. There were moments of "WHY ISN'T THIS WORKING?!" and moments of "OMG IT ACTUALLY WORKS!"&lt;/p&gt;

&lt;p&gt;But you know what? Creating something that helps teams manage documents securely while leveraging AI (without selling their soul to Big Tech) feels pretty amazing.&lt;/p&gt;

&lt;p&gt;If you've made it this far, thank you for reading! You're awesome! 🌟&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RS-labhub/AI-DocManager" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Star the Github Repo 🌠&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the project, break it, suggest features, report bugs; I want to hear it all!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And if you're wondering why it's called "Radhika's DocManager", it's named after Radhika Sharma, someone special whose memory inspired this project. Sometimes the best projects come from the heart. ❤️&lt;/p&gt;

&lt;h2&gt;
  
  
  Links &amp;amp; Contact
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live App&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/" rel="noopener noreferrer"&gt;https://radhika-docmanager.vercel.app/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://radhika-docmanager.vercel.app/docs" rel="noopener noreferrer"&gt;https://radhika-docmanager.vercel.app/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repo&lt;/strong&gt;: &lt;a href="https://github.com/RS-labhub/AI-DocManager" rel="noopener noreferrer"&gt;RS-labhub/AI-DocManager&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Portfolio&lt;/strong&gt;: &lt;a href="https://rohansrma.vercel.app" rel="noopener noreferrer"&gt;rohansrma.vercel.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email&lt;/strong&gt;: &lt;a href="mailto:rs4101976@gmail.com"&gt;rs4101976@gmail.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/rohan-sharma-9386rs/" rel="noopener noreferrer"&gt;Rohan Sharma&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;X/Twitter&lt;/strong&gt;: &lt;a href="https://twitter.com/rrs00179" rel="noopener noreferrer"&gt;@rrs00179&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the live app and let me know what you think!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank youuuuuuuuuuuuuuuu for reading! ❣️&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Creating a Chatbot that actually Stands Out! (vibe coded version)🦖</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Thu, 29 Jan 2026 05:04:47 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/creating-a-chatbot-that-actually-stands-out-vibe-coded-version-draft-1ake</link>
      <guid>https://dev.to/rohan_sharma/creating-a-chatbot-that-actually-stands-out-vibe-coded-version-draft-1ake</guid>
      <description>&lt;p&gt;Hi there,&lt;/p&gt;

&lt;p&gt;As I promised at the start of this year, I'm fulfilling my commitment. 😆&lt;/p&gt;

&lt;p&gt;In this blog, I will discuss how you can create a chatbot far better than the rest of the chatbots out there. This blog is mostly for learning, so keep an eye on every section. I will be referencing the chatbot I've created. By the way, my chatbot is called "Radhika", so I will refer to it by name.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://radhika-sharma.vercel.app/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fradhika-sharma.vercel.app%2Fog-image.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer" class="c-link"&gt;
            Radhika
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Radhika is a versatile AI chatbot designed to assist with a wide range of tasks, from answering questions to providing recommendations and engaging in casual conversation.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fradhika-sharma.vercel.app%2Ffavicon.ico"&gt;
          radhika-sharma.vercel.app
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;




&lt;p&gt;Try it out, give feedback and suggestions, request changes, etc.&lt;/p&gt;

&lt;p&gt;It's &lt;a href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; as well, so leave a GitHub star if you loved it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;br&gt;
Switch to GROQ in case Gemini encounters an error. Currently, I'm using the free plan, so the tokens might get exhausted.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Let's start building our Chatbot!
&lt;/h2&gt;

&lt;p&gt;Before doing anything else, decide on your tech stack.&lt;/p&gt;

&lt;p&gt;There are many ways to build a chatbot, and plenty of stacks you can choose from. Pick the one you are most comfortable with.&lt;/p&gt;

&lt;p&gt;For Radhika, I used TypeScript, which is a typed version of JavaScript. I built it using Next.js so both the frontend and backend live in the same project. This also makes deployment simpler since everything is deployed together. Simple and efficient.&lt;/p&gt;

&lt;p&gt;(To be honest, vibe coding Next.js apps is easier, and the AI tools do a great job.)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Design it.
&lt;/h2&gt;

&lt;p&gt;If you're a designer, then design the bot in applications like &lt;a href="https://canva.com" rel="noopener noreferrer"&gt;Canva&lt;/a&gt; or &lt;a href="https://figma.com" rel="noopener noreferrer"&gt;Figma&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to see Radhika's design, please check this &lt;a href="https://www.figma.com/design/4L2393IapDT1r5yuZ2RpbK/RADHIKA?node-id=0-1&amp;amp;t=LdKYKSk4oXyFix9k-1" rel="noopener noreferrer"&gt;radhika_figma_design&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zksi7lq96nl6o3giv48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zksi7lq96nl6o3giv48.png" alt="radhika-figma"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're not a designer, write a prompt that is as descriptive as possible, with all the details you have in mind. Don't think about the features; think about the design.&lt;/p&gt;

&lt;p&gt;Remember, our first target should be either the backend or the frontend. You can start with either one, but I usually suggest creating the frontend first and then working on the backend.&lt;/p&gt;

&lt;p&gt;To get you started, I'm adding a sample prompt, but before that I need to answer an important question: &lt;strong&gt;WHERE TO ADD THIS PROMPT?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can add the prompts to generate the frontend in these platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://v0.app/" rel="noopener noreferrer"&gt;V0 by Vercel&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://base44.com/" rel="noopener noreferrer"&gt;Base44&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But since we are going to create a full-stack chatbot, I suggest using the &lt;code&gt;claude-opus-4.5&lt;/code&gt; model in your IDE.&lt;/p&gt;

&lt;p&gt;If you're a student, try to claim &lt;a href="https://github.com/settings/education/benefits" rel="noopener noreferrer"&gt;GitHub's Student Pack&lt;/a&gt; and then use your IDE + GitHub Copilot. (You'll get a whole lot of free credits for most of the models, including &lt;code&gt;claude-opus-4.5&lt;/code&gt;, to create anything good.)&lt;/p&gt;

&lt;p&gt;Here's your GO TO PROMPT to build your first Chatbot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### Agentic UI Replication Prompt (Design + Three.js + Maintainable Code)&lt;/span&gt;

You are a senior product designer and frontend engineer with strong experience in &lt;span class="gs"&gt;**scalable UI systems**&lt;/span&gt;, &lt;span class="gs"&gt;**Three.js**&lt;/span&gt;, and &lt;span class="gs"&gt;**long-term maintainable codebases**&lt;/span&gt;.

Your task is to replicate an &lt;span class="gs"&gt;**AI assistant ChatBot**&lt;/span&gt; with a &lt;span class="gs"&gt;**three-column layout**&lt;/span&gt;, glassmorphism, and subtle real-time visualizations, while keeping the implementation &lt;span class="gs"&gt;**clean, readable, and modular**&lt;/span&gt;.

&lt;span class="gu"&gt;## Core Principles&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Prioritize &lt;span class="gs"&gt;**readability over cleverness**&lt;/span&gt;
&lt;span class="p"&gt;*&lt;/span&gt; Small, focused files
&lt;span class="p"&gt;*&lt;/span&gt; Clear separation of concerns
&lt;span class="p"&gt;*&lt;/span&gt; No duplicated logic
&lt;span class="p"&gt;*&lt;/span&gt; All visuals, logic, and data flows should be easy to reason about

&lt;span class="gu"&gt;## Project Structure Guidelines&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Organize the codebase by &lt;span class="gs"&gt;**feature, not by file type**&lt;/span&gt;
&lt;span class="p"&gt;*&lt;/span&gt; Each major UI section should live in its own folder
&lt;span class="p"&gt;*&lt;/span&gt; Each folder should contain:
&lt;span class="p"&gt;
  *&lt;/span&gt; One main component
&lt;span class="p"&gt;  *&lt;/span&gt; One styles file
&lt;span class="p"&gt;  *&lt;/span&gt; Optional subcomponents
&lt;span class="p"&gt;  *&lt;/span&gt; A clear entry point
&lt;span class="p"&gt;*&lt;/span&gt; Three.js logic must be isolated from UI layout code
&lt;span class="p"&gt;*&lt;/span&gt; Avoid large monolithic components

&lt;span class="gu"&gt;## Overall Visual Style&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Dark navy to near-black gradient background
&lt;span class="p"&gt;*&lt;/span&gt; Glassmorphic cards with subtle blur and soft inner glow
&lt;span class="p"&gt;*&lt;/span&gt; Rounded corners throughout
&lt;span class="p"&gt;*&lt;/span&gt; Accent color: cyan / electric blue
&lt;span class="p"&gt;*&lt;/span&gt; Premium, calm, futuristic AI aesthetic
&lt;span class="p"&gt;*&lt;/span&gt; No harsh borders, no visual noise

&lt;span class="gu"&gt;## Left Sidebar (Assistant Identity + Modes)&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Fixed vertical glass panel
&lt;span class="p"&gt;*&lt;/span&gt; Sections split into small components:
&lt;span class="p"&gt;
  *&lt;/span&gt; Assistant identity header
&lt;span class="p"&gt;  *&lt;/span&gt; Mode selector list
&lt;span class="p"&gt;  *&lt;/span&gt; Unlock CTA card
&lt;span class="p"&gt;  *&lt;/span&gt; Quick actions list
&lt;span class="p"&gt;*&lt;/span&gt; Active mode visually highlighted with soft cyan glow
&lt;span class="p"&gt;*&lt;/span&gt; All icons and labels driven from a single config file

&lt;span class="gu"&gt;## Center Panel (Primary Interaction Area)&lt;/span&gt;

&lt;span class="gu"&gt;### Header&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Mode title
&lt;span class="p"&gt;*&lt;/span&gt; Model badge
&lt;span class="p"&gt;*&lt;/span&gt; Minimal action icons
&lt;span class="p"&gt;*&lt;/span&gt; Primary CTA button
&lt;span class="p"&gt;*&lt;/span&gt; Header logic isolated from content logic

&lt;span class="gu"&gt;### Core Visualization (Three.js)&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Use a dedicated Three.js scene module
&lt;span class="p"&gt;*&lt;/span&gt; Render an abstract &lt;span class="gs"&gt;**circular particle system**&lt;/span&gt;
&lt;span class="p"&gt;*&lt;/span&gt; Particles form a softly rotating sphere or neural cluster
&lt;span class="p"&gt;*&lt;/span&gt; Color palette limited to cyan, blue, and soft white
&lt;span class="p"&gt;*&lt;/span&gt; Animation:
&lt;span class="p"&gt;
  *&lt;/span&gt; Slow rotation
&lt;span class="p"&gt;  *&lt;/span&gt; Subtle breathing or noise-based movement
&lt;span class="p"&gt;*&lt;/span&gt; Interaction:
&lt;span class="p"&gt;
  *&lt;/span&gt; Light cursor-based parallax
&lt;span class="p"&gt;*&lt;/span&gt; The canvas must be:
&lt;span class="p"&gt;
  *&lt;/span&gt; Self-contained
&lt;span class="p"&gt;  *&lt;/span&gt; Easily removable or replaceable
&lt;span class="p"&gt;  *&lt;/span&gt; Not tightly coupled to UI state

&lt;span class="gu"&gt;### Main Content&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Headline and subtext in a simple content component
&lt;span class="p"&gt;*&lt;/span&gt; Quick action pills rendered from a data array
&lt;span class="p"&gt;*&lt;/span&gt; No hardcoded UI strings inside logic

&lt;span class="gu"&gt;### Input Section&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Input bar broken into:
&lt;span class="p"&gt;
  *&lt;/span&gt; Text input component
&lt;span class="p"&gt;  *&lt;/span&gt; Left action icons
&lt;span class="p"&gt;  *&lt;/span&gt; Right action icons
&lt;span class="p"&gt;*&lt;/span&gt; Icons reusable across the app
&lt;span class="p"&gt;*&lt;/span&gt; Keyboard behavior handled in a dedicated hook or utility

&lt;span class="gu"&gt;## Right Sidebar (Analytics &amp;amp; Activity)&lt;/span&gt;

&lt;span class="gu"&gt;### Activity Matrix (Three.js)&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Separate Three.js scene from layout code
&lt;span class="p"&gt;*&lt;/span&gt; Visualize activity using:
&lt;span class="p"&gt;
  *&lt;/span&gt; Nodes and connecting lines
&lt;span class="p"&gt;  *&lt;/span&gt; Temporal motion paths
&lt;span class="p"&gt;*&lt;/span&gt; Minimal neon wireframe aesthetic
&lt;span class="p"&gt;*&lt;/span&gt; Motion should feel ambient, not informational
&lt;span class="p"&gt;*&lt;/span&gt; Scene must support pausing when off-screen

&lt;span class="gu"&gt;### Stats &amp;amp; Mode Usage&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Each stat card as its own component
&lt;span class="p"&gt;*&lt;/span&gt; Mode usage bars driven by data, not hardcoded values
&lt;span class="p"&gt;*&lt;/span&gt; Visual styles shared via common tokens

&lt;span class="gu"&gt;### AI Status Section&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Reusable status row component
&lt;span class="p"&gt;*&lt;/span&gt; Indicator colors derived from state mapping
&lt;span class="p"&gt;*&lt;/span&gt; No inline conditional styling

&lt;span class="gu"&gt;## Three.js &amp;amp; Performance Constraints&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Keep particle counts low and configurable
&lt;span class="p"&gt;*&lt;/span&gt; Centralized animation loop management
&lt;span class="p"&gt;*&lt;/span&gt; Clean disposal of geometries and materials
&lt;span class="p"&gt;*&lt;/span&gt; requestAnimationFrame usage should be controlled and predictable
&lt;span class="p"&gt;*&lt;/span&gt; WebGL failure should gracefully fall back to static UI&lt;span class="sb"&gt;


&lt;/span&gt;&lt;span class="gu"&gt;## Maintainability Constraints&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; No inline styles for complex layouts
&lt;span class="p"&gt;*&lt;/span&gt; No deeply nested components without justification
&lt;span class="p"&gt;*&lt;/span&gt; Use clear naming conventions
&lt;span class="p"&gt;*&lt;/span&gt; Comments should explain intent, not obvious behavior
&lt;span class="p"&gt;*&lt;/span&gt; Any complex logic must be documented at the top of the file

&lt;span class="gu"&gt;## Final Goal&lt;/span&gt;

The result should feel like a &lt;span class="gs"&gt;**production-ready AI dashboard**&lt;/span&gt; that is:
&lt;span class="p"&gt;
*&lt;/span&gt; Visually calm
&lt;span class="p"&gt;*&lt;/span&gt; Technically elegant
&lt;span class="p"&gt;*&lt;/span&gt; Easy to extend
&lt;span class="p"&gt;*&lt;/span&gt; Easy for another engineer to understand within minutes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Make some iterations until you reach a satisfactory level. (Satisfaction doesn't mean you need the best of the best; it just needs to be something you think is good enough.)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Start implementing the Core Features
&lt;/h2&gt;

&lt;p&gt;And since we are creating it through vibe coding, we'll only go through a little bit of code. Want no code? Skip this section.&lt;/p&gt;

&lt;p&gt;In your chatbot, you can add multi-provider support or you can choose only a single provider. In my case, I love to have multi-provider support because this allows the user to switch to any model they want.&lt;/p&gt;

&lt;p&gt;The following diagram represents the implementation I've done in Radhika:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz746i5xhmjc6o48zpce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz746i5xhmjc6o48zpce.png" alt="flowchart"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But even more important than that is parsing the request sent by the user.&lt;/p&gt;

&lt;p&gt;The request handler begins by parsing the JSON body from the incoming POST request:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;general&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;groq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiKey&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;messages&lt;/code&gt;&lt;/strong&gt;: The conversation history sent by the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;mode&lt;/code&gt;&lt;/strong&gt;: Determines which system prompt to use (e.g., bff, learning, etc.). Whether to add multiple modes is totally up to you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;provider&lt;/code&gt;&lt;/strong&gt;: Specifies the AI backend to use (&lt;code&gt;groq&lt;/code&gt;, &lt;code&gt;openai&lt;/code&gt;, &lt;code&gt;claude&lt;/code&gt;, &lt;code&gt;gemini&lt;/code&gt;, or whatever provider you're using).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;apiKey&lt;/code&gt;&lt;/strong&gt;: Required for OpenAI and Claude if a user key is needed.&lt;/li&gt;
&lt;/ul&gt;
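&lt;p&gt;Right after parsing, it's worth validating the body and failing early with a clear error. A minimal sketch (the field names follow the snippet above; the checks themselves are illustrative):&lt;/p&gt;

```typescript
// Fail-early validation for the parsed chat request (illustrative checks).
type ParsedBody = { messages?: unknown; mode?: string; provider?: string; apiKey?: string };

const ALLOWED_PROVIDERS = ["groq", "openai", "claude", "gemini"];

// Returns an error message, or null when the body is valid.
function validateBody(body: ParsedBody): string | null {
  if (!Array.isArray(body.messages) || body.messages.length === 0) {
    return "messages must be a non-empty array";
  }
  if (body.provider && !ALLOWED_PROVIDERS.includes(body.provider)) {
    return `unsupported provider: ${body.provider}`;
  }
  return null;
}
```

&lt;p&gt;The request handler can then return a 400 response with that message before any provider work happens.&lt;/p&gt;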

&lt;p&gt;&lt;strong&gt;Assign the Prompt&lt;/strong&gt;&lt;br&gt;
Add a system prompt describing how you want your chatbot to behave and what it should talk about. You can also link it to a KB (knowledge base), or create your own mini knowledge base to make your bot more specific to a domain.&lt;/p&gt;

&lt;p&gt;I haven't implemented this, as I wanted my Radhika to be generic and to serve as a good template for someone like you.&lt;/p&gt;
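&lt;p&gt;One simple way to keep mode-specific prompts organized is a central map keyed by mode name, with a fallback to the generic prompt (the mode names and wording below are illustrative, not Radhika's actual prompts):&lt;/p&gt;

```typescript
// Centralized system prompts, one per chat mode (illustrative wording).
const SYSTEM_PROMPTS: Record<string, string> = {
  general: "You are a helpful, friendly assistant.",
  learning: "You are a patient tutor who explains concepts step by step.",
  bff: "You are a warm, supportive best friend who chats casually.",
};

// Unknown modes fall back to the generic prompt instead of failing.
function systemPromptFor(mode: string): string {
  return SYSTEM_PROMPTS[mode] ?? SYSTEM_PROMPTS.general;
}
```

&lt;p&gt;Keeping prompts in one module like this makes it trivial to add a new mode later without touching the request handler.&lt;/p&gt;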

&lt;p&gt;If you're also adding multi-provider support, route each request to the correct provider. Each provider has custom logic to instantiate the model, handle errors, and stream the response using:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;streamText&lt;/span&gt;&lt;span class="p"&gt;({...})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
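&lt;p&gt;The routing itself can stay tiny: a lookup table from provider name to handler, so the request handler never touches provider internals. A sketch with stub handlers standing in for the real SDK calls:&lt;/p&gt;

```typescript
// Provider router sketch: each handler would live in its own module and
// wrap the real SDK (Groq, Gemini, OpenAI, Claude); stubs are used here.
type ChatMessage = { role: string; content: string };
type ProviderHandler = (messages: ChatMessage[], apiKey?: string) => Promise<string>;

const providers: Record<string, ProviderHandler> = {
  groq: async (msgs) => `groq handled ${msgs.length} message(s)`,
  gemini: async (msgs) => `gemini handled ${msgs.length} message(s)`,
};

// Adding a provider = one new module + one entry here; nothing else changes.
function routeProvider(name: string): ProviderHandler {
  const handler = providers[name];
  if (!handler) throw new Error(`Unknown provider: ${name}`);
  return handler;
}
```

&lt;p&gt;With this shape, an unsupported provider fails fast with a clear error instead of leaking a raw SDK exception to the client.&lt;/p&gt;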


&lt;p&gt;Looking for a prompt? Here it is (add this in the same chat session, or create a new one):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### Agentic Backend Architecture Prompt&lt;/span&gt;

You are a senior backend engineer designing a &lt;span class="gs"&gt;**production-ready AI chat backend**&lt;/span&gt; with &lt;span class="gs"&gt;**multi-provider model support**&lt;/span&gt;, clean architecture, and long-term maintainability.

Your goal is to build a backend that powers an AI assistant similar to &lt;span class="gs"&gt;**Radhika**&lt;/span&gt;, where users can switch models dynamically while keeping the system simple, extensible, and readable.

&lt;span class="gu"&gt;## Core Backend Principles&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Clear separation of responsibilities
&lt;span class="p"&gt;*&lt;/span&gt; No provider-specific logic inside request handlers
&lt;span class="p"&gt;*&lt;/span&gt; Easy to add or remove AI providers
&lt;span class="p"&gt;*&lt;/span&gt; Streaming responses by default
&lt;span class="p"&gt;*&lt;/span&gt; Minimal abstractions, no over-engineering
&lt;span class="p"&gt;*&lt;/span&gt; Code should read like documentation

&lt;span class="gu"&gt;## Request Handling Flow&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Use a single POST endpoint for chat interactions
&lt;span class="p"&gt;*&lt;/span&gt; Parse the incoming JSON request at the boundary

&lt;span class="sb"&gt;`'``ts
const body = await req.json();
const { messages, mode = "general", provider = "groq", apiKey } = body;
`&lt;/span&gt;'&lt;span class="sb"&gt;``

* `messages`: full conversation history from the client
* `mode`: determines which system prompt to apply
* `provider`: selected AI backend
* `apiKey`: optional user-supplied key for providers that require it

Validation should happen immediately after parsing and fail early with clear errors.

## Prompt Management

* System prompts must be centralized
* Store prompts in a dedicated module or folder
* Map prompts by mode name
* Avoid hardcoded prompt strings inside logic
* Allow future expansion into:

  * Knowledge base injection
  * Context enrichment
  * Mode-specific behavior tuning

If no custom knowledge base is attached, default to a generic assistant prompt suitable for broad usage.

## Provider Routing

* Use a provider router layer
* Route requests based on the `provider` field
* Each provider must live in its own isolated module
* No shared state between providers

Each provider module should:

* Instantiate its own client
* Normalize input messages
* Handle provider-specific errors
* Support streaming responses using:

`'``&lt;/span&gt;ts
await streamText({ ... })
&lt;span class="sb"&gt;`'``

The request handler should never know how a provider works internally.

## Streaming Architecture

* Streaming must be first-class, not optional
* Keep streaming logic abstracted behind a common interface
* Providers should emit tokens in a unified format
* Handle disconnects and stream cleanup gracefully

## File Organization Guidelines

* Organize by responsibility, not file type
* Suggested structure:

  * `&lt;/span&gt;/routes&lt;span class="sb"&gt;` for request handlers
  * `&lt;/span&gt;/providers&lt;span class="sb"&gt;` for AI provider implementations
  * `&lt;/span&gt;/prompts&lt;span class="sb"&gt;` for system and mode prompts
  * `&lt;/span&gt;/utils&lt;span class="sb"&gt;` for shared helpers
  * `&lt;/span&gt;/types&lt;span class="err"&gt;`&lt;/span&gt; for request and provider contracts
&lt;span class="p"&gt;*&lt;/span&gt; One provider per file
&lt;span class="p"&gt;*&lt;/span&gt; One responsibility per file
&lt;span class="p"&gt;*&lt;/span&gt; No oversized files

&lt;span class="gu"&gt;## Error Handling&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Normalize all provider errors
&lt;span class="p"&gt;*&lt;/span&gt; Never leak raw provider error messages to clients
&lt;span class="p"&gt;*&lt;/span&gt; Return consistent error shapes
&lt;span class="p"&gt;*&lt;/span&gt; Log provider-specific details internally

&lt;span class="gu"&gt;## Extensibility Rules&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Adding a new provider should require:
&lt;span class="p"&gt;
  *&lt;/span&gt; One new provider file
&lt;span class="p"&gt;  *&lt;/span&gt; One registration entry in the provider router
&lt;span class="p"&gt;*&lt;/span&gt; No changes to the core request handler
&lt;span class="p"&gt;*&lt;/span&gt; No breaking changes to existing providers

&lt;span class="gu"&gt;## Maintainability Constraints&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Avoid deeply nested conditionals
&lt;span class="p"&gt;*&lt;/span&gt; Prefer early returns
&lt;span class="p"&gt;*&lt;/span&gt; Use explicit naming over clever abstractions
&lt;span class="p"&gt;*&lt;/span&gt; Add comments only where intent is non-obvious
&lt;span class="p"&gt;*&lt;/span&gt; Keep configuration and logic separate

&lt;span class="gu"&gt;## Final Goal&lt;/span&gt;

The backend should feel like a &lt;span class="gs"&gt;**reference implementation**&lt;/span&gt; that:
&lt;span class="p"&gt;
*&lt;/span&gt; Supports multiple AI providers cleanly
&lt;span class="p"&gt;*&lt;/span&gt; Streams responses efficiently
&lt;span class="p"&gt;*&lt;/span&gt; Is easy for other developers to fork
&lt;span class="p"&gt;*&lt;/span&gt; Serves as a strong template for building AI chat systems
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Adding a Database (optional)
&lt;/h2&gt;

&lt;p&gt;This is totally optional, but if you want persistent storage for user chats so they can be retrieved later, add a database. The same goes for tracking your bot's users.&lt;/p&gt;

&lt;p&gt;I used it to implement both.&lt;/p&gt;

&lt;p&gt;Initially, I was using the &lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; free tier, but I was hitting its limits and my app was becoming stale. Then I switched to &lt;a href="https://appwrite.io/" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt;. The two are totally different: one is SQL, while the other is NoSQL. Also, use the &lt;a href="https://www.npmjs.com/package/node-appwrite?activeTab=versions" rel="noopener noreferrer"&gt;&lt;code&gt;node-appwrite&lt;/code&gt;&lt;/a&gt; package to skip the manual schema setup.&lt;/p&gt;

&lt;p&gt;If you want to create something similar to what I've created, then modify or replicate &lt;a href="https://github.com/RS-labhub/Radhika/blob/master/scripts/setup-appwrite-schema.ts" rel="noopener noreferrer"&gt;setup_appwrite_schema&lt;/a&gt;.&lt;/p&gt;
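&lt;p&gt;The idea behind that script is to declare the schema as data and have a setup loop create each collection and attribute through the SDK. A stripped-down sketch of the data half (the collection and attribute names here are hypothetical, not Radhika's exact schema):&lt;/p&gt;

```typescript
// Schema-as-data: a setup script iterates over this to create each
// collection and attribute via the SDK (names here are hypothetical).
type Attribute = { key: string; type: "string" | "integer" | "boolean"; required: boolean };

const schema: Record<string, { attributes: Attribute[] }> = {
  chats: {
    attributes: [
      { key: "userId", type: "string", required: true },
      { key: "title", type: "string", required: false },
    ],
  },
  messages: {
    attributes: [
      { key: "chatId", type: "string", required: true },
      { key: "role", type: "string", required: true },
      { key: "content", type: "string", required: true },
    ],
  },
};

// Helper the setup script can reuse, e.g. to validate documents before insert.
function requiredKeys(collection: string): string[] {
  return schema[collection].attributes.filter((a) => a.required).map((a) => a.key);
}
```

&lt;p&gt;Declaring the schema once like this means adding a collection is a data change, not new imperative setup code.&lt;/p&gt;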

&lt;p&gt;Looking for a prompt? Think through your logic first and write it down, then ask ChatGPT to convert it into an agentic prompt. You will enjoy this!&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  You're so done!
&lt;/h2&gt;

&lt;p&gt;That's how simple it is to create a full-fledged chatbot with a professional codebase.&lt;/p&gt;

&lt;p&gt;Once done, you can upload it to &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and then host it on serverless platforms like &lt;a href="https://vercel.com/" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If that's not enough for you, then start with Radhika and modify it as much as you want.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;
        RS-labhub
      &lt;/a&gt; / &lt;a href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;
        Radhika
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Radhika is a multi-model AI assistant built for your mood and friendship 💞
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/RS-labhub/Radhika/master/public/banner.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fbanner.png" alt="banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Radhika&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;A modern AI assistant that adapts to how you work and think. Multiple modes, multiple models, one seamless chat experience. Features multiple LLM providers, image generation, voice interaction, and persistent chat history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try now&lt;/strong&gt;: &lt;a href="https://radhika-sharma.vercel.app" rel="nofollow noopener noreferrer"&gt;https://radhika-sharma.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Detailed Explanation&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;br&gt;
&lt;thead&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;th&gt;Preview&lt;/th&gt;
&lt;br&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/thead&gt;
&lt;br&gt;
&lt;tbody&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://dev.to/rohan_sharma/creating-a-chatbot-that-actually-stands-out-vibe-coded-version-draft-1ake" rel="nofollow"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fcover-image.png" alt="Blog Post"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;
&lt;br&gt;
&lt;strong&gt;Blog Post&lt;/strong&gt;&lt;br&gt;Read the blog for in-depth explanation.&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://radhika-sharma.vercel.app" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fyoutube-thumbnail.png" alt="YouTube Demo"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;
&lt;br&gt;
&lt;strong&gt;YouTube Demo&lt;/strong&gt;&lt;br&gt;Coming soon.&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/tbody&gt;
&lt;br&gt;
&lt;/table&gt;&lt;/div&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;6 Chat Modes&lt;/strong&gt;: General, Productivity, Wellness, Learning, Creative, BFF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Provider LLM&lt;/strong&gt;: Groq, Gemini, OpenAI, Claude&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Generation&lt;/strong&gt;: Pollinations, DALL·E 3, Hugging Face, Free alternatives&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice&lt;/strong&gt;: Speech-to-text input &amp;amp; text-to-speech output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth &amp;amp; Persistence&lt;/strong&gt;: Appwrite auth with chat history &amp;amp; favorites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI&lt;/strong&gt;: Light/dark themes, modern &amp;amp; pixel UI styles&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick Start&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/RS-labhub/radhika.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; radhika
bun install   &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; or npm install&lt;/span&gt;
bun run dev   &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; or npm run dev&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Open: &lt;a href="http://localhost:3000" rel="nofollow noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;License&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;MIT License - see &lt;a href="https://github.com/RS-labhub/Radhika/blob/master/LICENSE" rel="noopener noreferrer"&gt;LICENSE&lt;/a&gt;&lt;/p&gt;




&lt;div&gt;
&lt;p&gt;&lt;strong&gt;Built with ❤️ by &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;Rohan Sharma&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/RS-labhub/Radhika/public/Author.jpg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FRS-labhub%2FRadhika%2Fpublic%2FAuthor.jpg" alt="author"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/RS-labhub/radhika" rel="noopener noreferrer"&gt;⭐ Star&lt;/a&gt; • &lt;a href="https://github.com/RS-labhub/radhika/issues" rel="noopener noreferrer"&gt;🐛 Issues&lt;/a&gt; • &lt;a href="https://github.com/RS-labhub/radhika/discussions" rel="noopener noreferrer"&gt;🗣️&lt;/a&gt;…&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;I have implemented a lot of other features as well, like voice recognition and synthesis using WebKit and &lt;a href="https://elevenlabs.io/" rel="noopener noreferrer"&gt;ElevenLabs&lt;/a&gt;, and image generation using &lt;a href="https://pollinations.ai/" rel="noopener noreferrer"&gt;Pollinations.ai&lt;/a&gt;, OpenAI, Gemini, and Hugging Face models. (Pollinations.ai and the Hugging Face models are free to use for generating images, videos, text, and more.)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Creating a chatbot is super easy and doesn't require much prior knowledge. It's just like texting with agents.&lt;/p&gt;

&lt;p&gt;However, you do need prompting skills to ship the product in less time.&lt;/p&gt;

&lt;p&gt;Want some prompting tips? Comment or reach out to me!&lt;/p&gt;

&lt;p&gt;Before sharing the ways to reach me, I'd love for you to star Radhika's GitHub repo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/RS-labhub/radhika" class="crayons-btn crayons-btn--primary" rel="noopener noreferrer"&gt;Star Radhika on Github 🌠&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Find the Live Demo here: &lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer"&gt;https://radhika-sharma.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to connect with me? Visit the contact page on my &lt;a href="https://rohan-sharma-portfolio.vercel.app" rel="noopener noreferrer"&gt;Portfolio&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>ai</category>
    </item>
    <item>
      <title>2025 in Review: Growth, Grief, and the Cost of Momentum</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Wed, 31 Dec 2025 04:15:22 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/2025-in-review-growth-grief-and-the-cost-of-momentum-f8b</link>
      <guid>https://dev.to/rohan_sharma/2025-in-review-growth-grief-and-the-cost-of-momentum-f8b</guid>
      <description>&lt;p&gt;Hey there,&lt;/p&gt;

&lt;p&gt;I know, I know, it's been a long time since I last posted or wrote anything. You might be pissed off at me, or you might have literally forgotten me. In both cases, &lt;strong&gt;I apologize&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So, &lt;em&gt;why was I dead?&lt;/em&gt; Possible reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;being very very very busy at work&lt;/li&gt;
&lt;li&gt;being depressed&lt;/li&gt;
&lt;li&gt;lost willingness to write&lt;/li&gt;
&lt;li&gt;lost interest in everything&lt;/li&gt;
&lt;li&gt;no reason, just trying to cover myself up with lies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before you go ahead and read the rest of the blog, please select an option from above and write it down in the comment section. Let's see what you think!&lt;/p&gt;




&lt;p&gt;Let's start this blog by justifying myself.&lt;br&gt;
The actual reason for me being dead was not one but &lt;strong&gt;several&lt;/strong&gt;. I will write my 2025 wrap-up and then sum up the reasons, in case you still haven't figured them out while reading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starting with January 2025.&lt;/strong&gt; The year began with a big change. I joined &lt;strong&gt;&lt;a href="https://llmware.ai/" rel="noopener noreferrer"&gt;&lt;code&gt;LLMWare&lt;/code&gt;&lt;/a&gt;&lt;/strong&gt; as a DevRel Engineer while still serving &lt;a href="https://quira.sh" rel="noopener noreferrer"&gt;&lt;code&gt;Quira&lt;/code&gt;&lt;/a&gt; as a Community Moderator. February was already on my mind because of the GATE exam (a national exam in India for PG admissions), so I started preparing for that too. The weather stayed kind, which helped more than I realized at the time. Oh, by the way, I almost forgot: I went to college every day just to reach the 75% attendance milestone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 2025.&lt;/strong&gt; This was the month things started to feel real. I had the GATE exam in the first week, and once it was over, I shifted fully back to work. I had taken quite a few days off earlier and did not want to slow things down further. On February 8, 2025, I consciously decided to make &lt;code&gt;llmware&lt;/code&gt; my P0 and commit to it properly. A few days later, my university exams began. It was work during the day and exams around it. I managed. In the middle of all that, I lost something I can never get back - a friend. It was 16th Feb, 2025. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After a rough start, 2025 rolled into March.&lt;/strong&gt; Things began to move with more seriousness. I started giving more of my time to work. College was sucking, as usual. Staying busy became a way to escape the matrix of overthinking. Along the way, I missed a few family moments, and by the end of the month, my brothers left home for work. For the first time, the house was quiet. Not for a day, but for DAYS. Professionally, things were on track. Personally, I was starting to fall behind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;April 2025 started.&lt;/strong&gt; &lt;code&gt;llmware&lt;/code&gt; started gaining real momentum on &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Work became my default state. I began keeping my distance from people and spending more time alone. During the month, I met the Founder &amp;amp; CEO of &lt;strong&gt;&lt;a href="https://discord.gg/deveco" rel="noopener noreferrer"&gt;devEco&lt;/a&gt;&lt;/strong&gt;. We had a few meaningful conversations, and I decided to take on part-time work with them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's May 2025 now.&lt;/strong&gt; Things accelerated at &lt;code&gt;llmware&lt;/code&gt;. I was putting in more hours than ever. Alongside that, I officially joined devEco as well. Suddenly, my days were split between &lt;code&gt;college&lt;/code&gt;, &lt;code&gt;llmware&lt;/code&gt;, &lt;code&gt;quira&lt;/code&gt;, and &lt;code&gt;devEco&lt;/code&gt;. Even then, most of the conversations were still with myself. With still some time left to spare, I pushed myself to participate in a &lt;a href="https://dev.to/challenges"&gt;DEV Challenge&lt;/a&gt;, the authorization challenge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now, it's June 2025, the biggest month for me.&lt;/strong&gt; I pushed past every limit and worked relentlessly. On non-college days, I was putting in 14 to 15 hours. On college days, around 8 to 9. I led the documentation for &lt;code&gt;llmware&lt;/code&gt;’s major product announcement, &lt;strong&gt;Model HQ&lt;/strong&gt;. The work spanned two months, but most of it came together in June. I also won the DEV Challenge I had participated in the previous month.&lt;/p&gt;

&lt;p&gt;Professionally, I was at my peak. And it was at that peak that I started forgetting everything else. I barely ate because I simply did not think about it. The intensity was self-imposed. There was no pressure from either company. In fact, I was repeatedly asked to slow down and rest. I did not listen. During this stretch, I also wrote five blogs and scheduled them through early July.&lt;/p&gt;

&lt;p&gt;Sadness followed me through it all. When the Founder &amp;amp; CEO of Quira told me the company was winding down, it shook me deeply. Quira was my first job, and the place where I learned how to build and belong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;July 2025 started with university exams.&lt;/strong&gt; I slowed down at work, not because of the exams, but because I was quietly losing interest, even though I kept showing up and delivering. Exams themselves never bothered me much. What weighed heavier was going to college every day. I am a one-nighter: I usually study for three to four hours and then sit for the exam. Somehow, I cleared everything with zero backlogs.&lt;/p&gt;

&lt;p&gt;Quira was officially closed now. I gifted &lt;a href="https://quira-voices.vercel.app/" rel="noopener noreferrer"&gt;&lt;strong&gt;this&lt;/strong&gt;&lt;/a&gt; to the Founder &amp;amp; CEO. He loved it.&lt;/p&gt;

&lt;p&gt;Around the same time, one of my brothers decided to resign from his job. He was struggling with the food and climate of the region, and it had finally caught up with him.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;August 2025.&lt;/strong&gt; Happiness began to return, slowly. My brothers came back. One for good, to prepare for what was next, and the other just for the holidays. I was still hoping for a comeback from that friend, but it never happened. The hardest part was seeing each other every day in college, like strangers.&lt;/p&gt;

&lt;p&gt;I started working again like a normal human being, and for the first time in months, I had some spare time. Around then, a former DevRel from Quira reached out and asked if I could help at his current company. He was someone who had taught me a lot. I knew he did not really need help. He was trying to help me instead. I did not say no.&lt;/p&gt;

&lt;p&gt;That is how I joined &lt;strong&gt;&lt;a href="https://tessl.io" rel="noopener noreferrer"&gt;Tessl&lt;/a&gt;&lt;/strong&gt; as a part-time DevEx Engineer. Once again, I was working with three companies. This time, the work was different, harder, and more demanding in its own way. I stopped thinking too much about myself and kept moving forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My birthday month, September 2025.&lt;/strong&gt; I was born on September 1. Fun fact, World War II also officially began on that day. Strange coincidence. Overall, it was a good month. I enjoyed working at my new company, even though it was more demanding than anything I had done before. I had never worked with so many people at once. The team at &lt;code&gt;Tessl&lt;/code&gt; is filled with well-known names, and I genuinely loved being part of it. Life felt steady again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;October 2025.&lt;/strong&gt; This was the first month when I didn't work at &lt;code&gt;llmware&lt;/code&gt; at all. Things were moving internally, and we were all waiting. I ended up with a break, but I did not really stop. I started working on something else, still under wraps (yes, a secret). I also updated one of my projects, &lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer"&gt;Radhika&lt;/a&gt;, and launched version 2.0. By the way, version 3.0 is coming in a few days as well. The month was also filled with festivals, and for once, I celebrated them properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And it's November 2025, another exam month.&lt;/strong&gt; I slowly resumed work with &lt;code&gt;llmware&lt;/code&gt;, &lt;code&gt;tessl&lt;/code&gt;, and &lt;code&gt;devEco&lt;/code&gt; continued as usual. But somewhere along the way, I lost interest in work itself. It started to feel boring. I had my 3rd university exam this month. Around this time, something shifted in me. I stopped feeling much of anything. No excitement, no fear. Just a quiet numbness. My circle became smaller than it had ever been. But overall, it was good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lastly, December 2025.&lt;/strong&gt; I was fully back at work, almost like before. This time, though, I had more holidays than usual. I am still on break until January 1, 2026. The time off did something unexpected. It forced me to notice what I had lost this year, what I wanted but never paused to ask for. Loneliness showed up quietly. Real FOMO too. Feelings I had avoided all year finally caught up.&lt;/p&gt;

&lt;p&gt;At this point, I cannot clearly say whether I am happy, sad, exhausted, lost, or simply alone. Maybe it is a mix of everything.&lt;/p&gt;

&lt;p&gt;Have you ever gone through a phase like this?&lt;/p&gt;




&lt;h2&gt;
  
  
  Now let me write what you're most interested in reading
&lt;/h2&gt;

&lt;p&gt;Leaving personal life in a corner, let's talk about the professional growth I've achieved this year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building Community and Trust at &lt;a href="https://quira.sh" rel="noopener noreferrer"&gt;Quira&lt;/a&gt;:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scnz0edx2jy4badeff3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scnz0edx2jy4badeff3.png" alt="rodrigo's recommendation" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drove activity and participation in Quests.&lt;/li&gt;
&lt;li&gt;Handled the community in hard times.&lt;/li&gt;
&lt;li&gt;Helped many members get started and ship their first projects on &lt;code&gt;Quira&lt;/code&gt; through one-on-one sessions.&lt;/li&gt;
&lt;li&gt;Proposed ideas that significantly improved community participation, communication, and retention.&lt;/li&gt;
&lt;li&gt;Built a warm and approachable community space where members felt comfortable reaching out. I was never seen as a distant or formal figure, but as a trusted colleague, which I truly valued.&lt;/li&gt;
&lt;li&gt;Demonstrated strong developer advocacy by supporting both the product and the people using it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Built and Shipped at &lt;a href="https://llmware.ai" rel="noopener noreferrer"&gt;LLMWare&lt;/a&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elevated &lt;code&gt;LLMWare&lt;/code&gt;'s GitHub activity (6,500+ stars and 2,000+ forks, grown organically, in 6 months).&lt;/li&gt;
&lt;li&gt;Increased awareness of SLMs (small language models).&lt;/li&gt;
&lt;li&gt;Created a whole new website. (A newer version is coming soon)&lt;/li&gt;
&lt;li&gt;Wrote documentation for Model HQ (an &lt;code&gt;llmware&lt;/code&gt; product) and hosted it. (Got great feedback from senior developers.)&lt;/li&gt;
&lt;li&gt;Involved in QA and testing of Model HQ.&lt;/li&gt;
&lt;li&gt;Delivered over 15 pieces of high-quality content.&lt;/li&gt;
&lt;li&gt;Contributed to the social presence of &lt;code&gt;llmware&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Growing &lt;a href="https://discord.gg/deveco" rel="noopener noreferrer"&gt;devEco's &lt;/a&gt; Developer Presence:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed socials across multiple platforms for &lt;code&gt;devEco&lt;/code&gt;, &lt;code&gt;devEco products&lt;/code&gt;, and &lt;code&gt;devEco clients&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Achieved a significant increase in engagement: reactions, comments, reposts, and impressions.&lt;/li&gt;
&lt;li&gt;Sparked active conversations through thoughtful content, up from almost none.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengthening the Developer Ecosystem at &lt;a href="https://tessl.io" rel="noopener noreferrer"&gt;Tessl&lt;/a&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strengthened the &lt;a href="https://ainativedev.io/" rel="noopener noreferrer"&gt;AIND&lt;/a&gt; ecosystem by improving the Landscape platform, fixing major bugs, refining categories, and adding new tools consistently.&lt;/li&gt;
&lt;li&gt;Enhanced content operations through automation projects and streamlined publishing workflows.&lt;/li&gt;
&lt;li&gt;Delivered high-quality research, carousels, and weekly trending tool updates that improved visibility and engagement across developer channels.&lt;/li&gt;
&lt;li&gt;Provided cross-functional support to improve UX, data quality, scraping workflows, and overall product reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I am sure I have not captured everything here, and that is okay. What matters is that I know how much I worked, and that I gave my best to everything I was part of. I do not know if I will continue on the same path, but for now, I am not worried about it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Things I loved the most!!!
&lt;/h2&gt;

&lt;p&gt;There were a lot of moments this year that I truly loved. Let's start!&lt;/p&gt;

&lt;p&gt;At &lt;strong&gt;Quira&lt;/strong&gt;, I did not just meet people. I made a real friend circle. And I believe this deeply: when someone reaches out to you as a friend and shares their feelings and pain points, it means you have occupied a special place in their heart. That feeling is rare, and I will always value it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMWare&lt;/strong&gt;. Ahh. I honestly do not have enough words for this one. I met some incredibly sharp senior developers through LLMWare. Since I am under NDA, I cannot share much, but this experience opened doors to many Silicon Valley giants for me. More importantly, this was the first time I worked on a real product at a very early-stage startup. Working closely with the CEO and CTO felt natural and calm. They are kind, thoughtful, and genuinely great people to work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;devEco&lt;/strong&gt; filled me with sessions of laughter, joy, arguments, and whatnot! The founder is insanely cool and welcomes everyone with warmth. Becoming a core devEco member is a different kind of fun. He talks to me, teaches me, appreciates me when I do something right, and corrects me when I make mistakes. Interestingly, I didn't make many mistakes. I have 90% appreciation in my hands. Ehehe. But I must say, meet this human once and you won't regret it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tessl&lt;/strong&gt; was the biggest surprise. I was honestly hesitant at first. There were a lot of people, and in the past, I had only worked with 3 or 4 at a time. Being a newbie in such an environment hits differently. But guess what. It was smooth. Really smooth. A huge thanks to the DevRel at Tessl. This guy taught me a lot and supported me constantly. He is the person I feel most comfortable with. I can even talk to him about my personal life with ease. He speaks, shares ideas, listens to ideas, has zero ego or arrogance, treats everyone equally, always helps others, travels way too much, and still manages everything.&lt;/p&gt;

&lt;p&gt;And not just him. The entire Tessl team is welcoming and amazing. I even got to work with the Father of DevOps himself, and he appreciated my work (ehehe). That one stays with me.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Achieved!!!
&lt;/h2&gt;

&lt;p&gt;Well, I achieved a lot of things. But there are some things beyond work that matter just as much.&lt;/p&gt;

&lt;p&gt;I know most of you want to know &lt;strong&gt;how much money I made&lt;/strong&gt;. Of course, I am not going to tell you that. But you can guess. Let me know in the comments. I will say this, though: I earned a good amount. And now, I want more. I already have plans for 2026. Or maybe not. I am a moody guy. My plans change often.&lt;/p&gt;

&lt;p&gt;Apart from money, I gained a lot. We renovated our house. It is still not fully done. Two more months to go. We also bought a lot of furniture, although that part feels less special since we already have a family business around it.&lt;/p&gt;

&lt;p&gt;I also got my &lt;strong&gt;Aloo&lt;/strong&gt; (Mr. Potato). See below. Cute, right? Thanks to Cloudinary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzil7tvfr2bx0iex852v5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzil7tvfr2bx0iex852v5.jpeg" alt="aloo" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Beyond money and things, I earned something better. Respect. Trust. Value. Most importantly, I unlocked an intense learning mode. Once it switches on, it is hard to turn off.&lt;/p&gt;




&lt;p&gt;Seems like I have told everything I needed to tell. For those who still want to know, the actual reasons for my being dead were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I lost my interest in almost everything.&lt;/li&gt;
&lt;li&gt;Doing work is now boring for me.&lt;/li&gt;
&lt;li&gt;I was a bit sad.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I do not want to write everything again, and I do not want to summarize it either. But somewhere above, these were the reasons.&lt;/p&gt;

&lt;p&gt;Before ending, I want to thank my friends. I will not take names properly, but if you are reading this, you already know who you are.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thanks to my college friend, &lt;strong&gt;SK&lt;/strong&gt;. Without him, I would never have cleared my college exams or survived college life. We spent almost all our time together. Being single, searching for someone, then running away. He can be brainless, but not brain-rotted. A good guy with intense slang specialization.&lt;/li&gt;
&lt;li&gt;Thanks to &lt;strong&gt;HG&lt;/strong&gt;. A person I have never met in real life. But if you are not in a good mood, one single text can turn into World War III. Good, kind-hearted, always listens. Extra thin and weak human. I will not say much, else WW IV will start. But thank you for always being there.&lt;/li&gt;
&lt;li&gt;Thanks to &lt;strong&gt;Sam&lt;/strong&gt; for being Same Sam. Ultimate mad and animal human. I do not know what to say about you. Take care, you sick human. And thanks for being there.&lt;/li&gt;
&lt;li&gt;Extending my gratitude to &lt;strong&gt;Dishu&lt;/strong&gt; and &lt;strong&gt;Piliya&lt;/strong&gt;. Both are placed. Both are talkative. Both get angry for no reason. Both are kind and always text me for anything. Also, neither is a cute human. Thank you, guys.&lt;/li&gt;
&lt;li&gt;Thanks to my elder sister &lt;strong&gt;NG&lt;/strong&gt; as well. Overaged person with professional traumas. Best in arguments and worst at winning them. LOL. Thank you.&lt;/li&gt;
&lt;li&gt;And the last guy, &lt;strong&gt;KOS&lt;/strong&gt;. I worked with him. We talked about work for 1% of the time and movies, series, and heroines for 99%. Thanks for helping me debug bugs that were already fixed.&lt;/li&gt;
&lt;li&gt;Thanks to &lt;strong&gt;SK&lt;/strong&gt; as well, whom I lost this year. You've helped me a lot. Those were the happy days. Happy life ahead!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, do not take my words too seriously. But if you do, you are already a great person 😂&lt;/p&gt;

&lt;p&gt;Thanks for being there.&lt;/p&gt;




&lt;p&gt;Before I wrap this up, one last thing.&lt;/p&gt;

&lt;p&gt;If you ever feel like knowing more about me, or just want to talk about work, life, random ideas, or nothing in particular, you can reach out to me anywhere below. I am usually around, and I genuinely enjoy conversations more than formal networking.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My portfolio: &lt;a href="https://rohan-sharma-portfolio.vercel.app" rel="noopener noreferrer"&gt;https://rohan-sharma-portfolio.vercel.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;X: &lt;a href="https://x.com/rrs00179" rel="noopener noreferrer"&gt;https://x.com/rrs00179&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/rohan-sharma-9386rs/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/rohan-sharma-9386rs/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;https://github.com/RS-labhub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No pressure to follow or connect. Just putting this here.&lt;/p&gt;

&lt;p&gt;That is all from my side. Thank you so much for reading till here. I will be active from next year and keep delivering good content.&lt;/p&gt;

&lt;p&gt;I forgot to ask, &lt;strong&gt;how did your year go?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>career</category>
      <category>devrel</category>
    </item>
    <item>
      <title>How to Create a Local Chatbot Without Coding in Less Than 10 Minutes on AI PCs</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Wed, 02 Jul 2025 04:10:07 +0000</pubDate>
      <link>https://dev.to/llmware/how-to-create-a-local-chatbot-without-coding-in-less-than-10-minutes-on-ai-pcs-2ajl</link>
      <guid>https://dev.to/llmware/how-to-create-a-local-chatbot-without-coding-in-less-than-10-minutes-on-ai-pcs-2ajl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🔖 &lt;em&gt;No cloud. No internet. No coding.&lt;/em&gt; &lt;br&gt;
🔖 &lt;em&gt;Just you, your laptop, and 100+ powerful AI models running locally.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Imagine building your own chatbot that can answer your questions, summarize documents, analyze images, and even understand tables, all without needing an internet connection.&lt;/p&gt;

&lt;p&gt;Sounds futuristic?&lt;/p&gt;

&lt;p&gt;Thanks to &lt;strong&gt;Model HQ&lt;/strong&gt;, this is now a reality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model HQ&lt;/strong&gt; developed by &lt;a href="http://llmware.ai" rel="noopener noreferrer"&gt;LLMWare&lt;/a&gt;, is an innovative application that allows you to create and run a chatbot locally on your PC or laptop &lt;strong&gt;without an internet connection&lt;/strong&gt;. Best of all, this can be done with &lt;strong&gt;NO CODE&lt;/strong&gt; in &lt;strong&gt;less than 10 minutes&lt;/strong&gt;, even on older laptops up to 5 years old, provided they have 16GB or more of RAM.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk you through how to create your own local chatbot using &lt;strong&gt;Model HQ&lt;/strong&gt;, a revolutionary AI desktop app by &lt;a href="https://llmware.ai" rel="noopener noreferrer"&gt;LLMWare.ai&lt;/a&gt;. Whether you’re a student, a developer, or a professional looking for a private, offline AI assistant, this tool puts the power of cutting-edge AI models &lt;strong&gt;directly on your laptop&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let’s break it down.&lt;/p&gt;

&lt;p&gt;If you want to know about &lt;strong&gt;Model HQ in detail&lt;/strong&gt;, then read the blog below:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-story__hidden-navigation-link"&gt;How to Run AI Models Privately on Your AI PC with Model HQ; No Cloud, No Code&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/llmware"&gt;
            &lt;img alt="LLMWare logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F8208%2F4bf5768d-460d-460b-9ccc-a80499ca040e.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/rohan_sharma" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1936949%2Fa1fd5434-8c99-4531-9491-2d117d2e6996.jpg" alt="rohan_sharma profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/rohan_sharma" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Rohan Sharma
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Rohan Sharma
                &lt;a href="/++"&gt;&lt;img alt="Subscriber" class="subscription-icon" src="https://assets.dev.to/assets/subscription-icon-805dfa7ac7dd660f07ed8d654877270825b07a92a03841aa99a1093bd00431b2.png"&gt;&lt;/a&gt;
              
              &lt;div id="story-author-preview-content-2629400" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/rohan_sharma" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1936949%2Fa1fd5434-8c99-4531-9491-2d117d2e6996.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Rohan Sharma&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/llmware" class="crayons-story__secondary fw-medium"&gt;LLMWare&lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jun 27 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" id="article-link-2629400"&gt;
          How to Run AI Models Privately on Your AI PC with Model HQ; No Cloud, No Code
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag crayons-tag--filled  " href="/t/showdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;showdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/security"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;security&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/nocode"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;nocode&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;80&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              17&lt;span class="hidden s:inline"&gt; comments&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            5 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Download Model HQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Model HQ&lt;/strong&gt; is an AI desktop application that lets you interact with &lt;strong&gt;100+ top-performing AI models&lt;/strong&gt;, including large ones with up to &lt;strong&gt;32 billion parameters&lt;/strong&gt; — all running &lt;strong&gt;locally on your PC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unlike cloud-based tools, there’s &lt;strong&gt;no internet required&lt;/strong&gt;, and your data never leaves your machine. That means &lt;strong&gt;more privacy, better speed&lt;/strong&gt;, and zero cost for each query you run.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this blog, we will look at the &lt;strong&gt;CHAT&lt;/strong&gt; feature of Model HQ, which lets us create a chatbot running locally on our machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, get the app.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://llmware-modelhq.checkoutpage.com/modelhq-client-app-for-windows" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;Download or Buy Model HQ for Windows&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not ready to buy? No problem.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://llmware.ai/enterprise#developers-waitlist" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;Join the 90-Day Free Developer Trial&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once installed, you’ll have access to an interface that feels like your own AI control panel.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Choosing the Right AI Model
&lt;/h2&gt;

&lt;p&gt;Once installation is done, open the Model HQ application; you will be prompted to choose a setup method. The setup guide is provided after buying the application.&lt;/p&gt;

&lt;p&gt;After this, you will land in the main menu. Now, click on the Chat button.&lt;/p&gt;

&lt;p&gt;You’ll be prompted to select an AI model. If you’re unsure which model to choose, click “choose for me,” and the application will select a suitable model based on your needs. Model HQ ships with 100+ models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Available Model Options:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Small Model&lt;/strong&gt;:&lt;br&gt;
~1–3 billion parameters. Fastest response time; suitable for basic chat.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Medium Model&lt;/strong&gt;:&lt;br&gt;
~7–8 billion parameters. Balanced performance; ideal for chat, data analysis, and standard RAG tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Large Model&lt;/strong&gt;:&lt;br&gt;
~9–32 billion parameters. Most powerful; best for advanced chat, RAG, and complex analytical workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the way, Model HQ will pick a smart default based on your system and use case.&lt;/p&gt;
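&lt;p&gt;As a purely illustrative sketch (this is not Model HQ's actual selection logic, and the thresholds are assumptions), a RAM-based rule of thumb for picking a tier might look like:&lt;/p&gt;

```python
def pick_model_tier(ram_gb):
    """Map available RAM to a model tier.

    Illustrative heuristic only -- not Model HQ's internal logic,
    and the RAM thresholds are assumptions for demonstration.
    """
    if ram_gb >= 32:
        return "large"   # ~9-32B parameters: most powerful, slowest
    if ram_gb >= 16:
        return "medium"  # ~7-8B parameters: balanced performance
    return "small"       # ~1-3B parameters: fastest, basic chat
```

&lt;p&gt;Under these assumed thresholds, a typical 16 GB laptop would land on the medium tier.&lt;/p&gt;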

&lt;blockquote&gt;
&lt;p&gt;The size of the model you choose can significantly impact both speed and output quality. &lt;strong&gt;Smaller models are faster but may provide less detailed responses&lt;/strong&gt;. Follow this simple rule:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs5p0403z5nb9malw1g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs5p0403z5nb9malw1g1.png" alt="table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Downloading Models
&lt;/h2&gt;

&lt;p&gt;For demonstration purposes, we are selecting the &lt;strong&gt;Small Model&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
If no models have been downloaded previously (e.g., in the &lt;strong&gt;No Setup&lt;/strong&gt;, &lt;strong&gt;Fast Setup&lt;/strong&gt;, or &lt;strong&gt;Full Setup&lt;/strong&gt; paths), the selected model will begin downloading automatically.&lt;br&gt;&lt;br&gt;
This process typically takes &lt;strong&gt;2–7 minutes&lt;/strong&gt;, depending on the model you selected and your internet speed. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is only a &lt;strong&gt;one-time internet requirement&lt;/strong&gt;; once the models are downloaded, you don’t need internet anymore.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Start Chatting
&lt;/h2&gt;

&lt;p&gt;Once you’ve selected a model, you can start a chat by typing in your questions. For example, you might ask a simple question like, “What are the top sites to see in Paris?” The model will generate a response based on its training data.&lt;/p&gt;
&lt;h3&gt;
  
  
  Customizing Your Chat Experience
&lt;/h3&gt;

&lt;p&gt;Model HQ allows you to customize your chat experience further. You can adjust settings such as the maximum output length and the randomness of the responses (known as temperature). By default, the app is set to generate up to 1,000 tokens, which is usually sufficient for smaller models. However, even if you’re using larger models, be cautious about increasing this limit, as it can consume more memory and take longer to generate responses. So, in short, you can adjust &lt;strong&gt;generation settings&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Max Tokens&lt;/strong&gt;: How long should the response be?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Temperature&lt;/strong&gt;: Should the answer be creative or precise?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stop/Restart&lt;/strong&gt;: Hit ❌ to stop a long generation anytime.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
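&lt;p&gt;To see what the temperature setting actually does, here is a generic sketch (not Model HQ internals): the model's raw scores are divided by the temperature before being turned into probabilities, so a low temperature sharpens the distribution (precise) and a high temperature flattens it (creative).&lt;/p&gt;

```python
import math

def temperature_softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities, scaled by temperature.

    Generic illustration of the concept -- not Model HQ's implementation.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(l - peak) for l in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
precise = temperature_softmax(logits, temperature=0.2)   # sharply peaked
creative = temperature_softmax(logits, temperature=2.0)  # much flatter
```

&lt;p&gt;With the example scores above, the low-temperature distribution puts almost all probability on the top choice, while the high-temperature one spreads it nearly evenly.&lt;/p&gt;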

&lt;p&gt;     &lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Integrating Sources for Enhanced Responses
&lt;/h2&gt;

&lt;p&gt;One of the standout features of Model HQ is its ability to integrate sources, such as documents and images, into your chat. To do this, simply click on the “source” button and upload a file, such as a PDF or Word document.&lt;/p&gt;
&lt;h3&gt;
  
  
  Example: Using a Document as a Source
&lt;/h3&gt;

&lt;p&gt;For instance, if you upload an executive employment agreement, you can ask specific questions about the clauses within the document. The model will reference the uploaded document to provide accurate answers. This feature is invaluable for fact-checking and ensuring that you have the right information at your fingertips.&lt;/p&gt;
&lt;h3&gt;
  
  
  Chatting with Images
&lt;/h3&gt;

&lt;p&gt;Model HQ also allows you to chat with images. By uploading an image, the application can analyze the content and answer questions based on what it sees. This capability opens up a world of possibilities for multimedia processing, all done locally on your machine without any additional costs.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: Saving and Downloading Results
&lt;/h2&gt;

&lt;p&gt;After you’ve finished your session, you can save the chat results for future reference. This is particularly useful if you need to compile information for reports or presentations. Simply download the results, and you’ll have everything you need at your fingertips.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Step 7: Exploring Advanced Features
&lt;/h2&gt;

&lt;p&gt;As you become more comfortable with Model HQ, you can explore its advanced features. For example, you can experiment with different models to see how they perform with various types of queries. You can also adjust the generation settings to fine-tune the responses based on your specific needs.&lt;/p&gt;

&lt;p&gt;If you’re a visual learner, then watch this YouTube walkthrough:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/6z3kyUpsGys"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Future Updates and Community Engagement
&lt;/h3&gt;

&lt;p&gt;Stay engaged with the Model HQ community by following their updates and tutorials on platforms like YouTube. The &lt;a href="https://youtube.com/playlist?list=PL1-dn33KwsmBiKZDobr9QT-4xI8bNJvIU&amp;amp;si=dLdhu0kMQWwgBwTE" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;Model HQ YouTube playlist&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; offers valuable insights and tips to help you maximize your experience with the application.&lt;/p&gt;

&lt;p&gt;Join &lt;a href="https://discord.gg/bphreFK4NJ" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;LLMWare’s Official Discord Server&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; to interact with LLMWare’s great community of users and to ask questions or share feedback.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;Most AI apps require you to upload data to a cloud server. That’s slow, often expensive, and puts your privacy at risk.&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;Model HQ&lt;/strong&gt;, everything runs on your own machine with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;✅ No internet needed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ No Coding Required&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ No API keys or credits&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ No data leaves your PC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;✅ Zero cost per query&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s &lt;strong&gt;your personal AI lab&lt;/strong&gt;, fully private and offline.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Get Started with Model HQ Today!
&lt;/h2&gt;

&lt;p&gt;Creating a chatbot that runs locally, without coding or an internet connection, has never been easier. With Model HQ, you have access to a powerful AI tool that can enhance your productivity and streamline your workflow. &lt;/p&gt;

&lt;p&gt;Ready to experience the future of AI? Visit the &lt;a href="https://llmware.ai" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;LLMWare website&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; to learn more about Model HQ and its features. Don’t forget to sign up for the &lt;a href="https://llmware.ai/enterprise#developers-waitlist" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;90-day free trial for developers here&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; and explore the application firsthand. When you’re ready to make the leap, you can &lt;a href="https://llmware-modelhq.checkoutpage.com/modelhq-client-app-for-windows" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;purchase Model HQ directly here&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unlock the full potential of AI on your PC or laptop with Model HQ today, and take the first step towards creating your very own local chatbot!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nocode</category>
      <category>security</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Here's how I created a Real-Time Discord Badge for Github Readme 🌠</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Tue, 01 Jul 2025 03:30:03 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/heres-how-i-created-a-real-time-discord-badge-for-github-readme-4adg</link>
      <guid>https://dev.to/rohan_sharma/heres-how-i-created-a-real-time-discord-badge-for-github-readme-4adg</guid>
      <description>&lt;p&gt;Hello all, I'm back with another project. This time, I built a Real-Time Discord badge that shows the total member count.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yztmdptnhn446z4xeds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yztmdptnhn446z4xeds.png" alt="badge"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, I will share with you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why did I build this?&lt;/li&gt;
&lt;li&gt;Why add a dynamic Discord badge to your GitHub Readme?&lt;/li&gt;
&lt;li&gt;A step-by-step guide to adding this badge to your GitHub README, including how to customize it.&lt;/li&gt;
&lt;li&gt;How I handled real-time Discord data, added Redis caching, managed rate limits, and made it fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, let's start. 3... 2... 1... 🌱&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Why did I build this?
&lt;/h2&gt;

&lt;p&gt;I want to tell you a super short story for this. The company for which I'm working wanted me to add a dynamic Discord badge to their GitHub README that shows the total member count.&lt;/p&gt;

&lt;p&gt;I tried searching on the web. After a lot of searching, I found that Discord already provides a "Server Widget JSON API". To use it, go to your Discord server's Server Settings &amp;gt; Engagement, turn on the widget, and you will find the API link and the Server ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro0h159xjg6nrwpmiqc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro0h159xjg6nrwpmiqc3.png" alt="server widget"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create a badge using this Server ID as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;![demo&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://img.shields.io/discord/YOUR_SERVER_ID?logo=discord&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;](https://discord.gg/INVITE_CODE)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;But the thing is, &lt;strong&gt;it will only show the online members count and not all the members&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F300d01qxln8nn1gd3vfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F300d01qxln8nn1gd3vfo.png" alt="online count badge"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I searched a lot for "how to show the total member count", but I wasn't able to find anything.&lt;/p&gt;

&lt;p&gt;And thus, I decided to create one. It was super easy. When I built it for my company, I realized there may be more people who want this badge as well, so I made it public for the community.&lt;/p&gt;

&lt;p&gt;I don't know whether such badges already exist in the market. I searched hard, and the disappointment made me build it. &lt;strong&gt;If you know any such badges or methods to show dynamic Discord badges that update in real-time, then please let me know in the comment section.&lt;/strong&gt;&lt;/p&gt;
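&lt;p&gt;Conceptually, a badge service like this sits between your README and Discord: it fetches the count, caches it for a short window (the intro mentions Redis), and serves the cached value so repeated README loads don't hit Discord's rate limits. A minimal in-memory sketch of the idea, not the actual implementation:&lt;/p&gt;

```python
import time

class TTLCache:
    """Tiny in-memory TTL cache -- stands in for Redis in this sketch."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]      # still fresh
        return None              # missing or expired

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

def cached_member_count(guild_id, fetch_count, cache):
    """Serve a cached count while fresh; otherwise fetch once and cache it."""
    count = cache.get(guild_id)
    if count is None:
        count = fetch_count(guild_id)  # the only Discord API call per TTL window
        cache.set(guild_id, count)
    return count
```

&lt;p&gt;With, say, a 60-second TTL, even a README viewed thousands of times an hour costs at most one Discord API call per minute per server.&lt;/p&gt;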

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Why add &lt;code&gt;Discord Live Members Count Badge&lt;/code&gt; to your GitHub Readme?
&lt;/h2&gt;

&lt;p&gt;Many open-source projects and communities wanted to showcase their Discord server growth on their GitHub READMEs because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The bigger the community, the more trust it builds.&lt;/strong&gt; Thus, showing your total Discord members count will not only market your product, but also build trust in future users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time metrics make your README dynamic.&lt;/strong&gt; It gives a sense that the project is active and maintained.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Attracts contributors and collaborators.&lt;/strong&gt; A growing community signals engagement, which can draw in more developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helps community managers track growth.&lt;/strong&gt; Badges provide a quick glance at server stats without needing to log into Discord.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Encourages more users to join.&lt;/strong&gt; Seeing a badge with large numbers can trigger FOMO and boost Discord join rates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improves project credibility.&lt;/strong&gt; Just like stars and forks, community size becomes a social proof metric.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easier comparison between similar projects.&lt;/strong&gt; Potential users can quickly evaluate community support through visible numbers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the most important point: "&lt;strong&gt;SNEAK-PEEK YOUR RIVAL'S, ENEMY'S, OR FRIEND'S DISCORD COMMUNITY MEMBERS&lt;/strong&gt;". I don't know about you, but I do it. All you need is their Server ID. Keep reading the blog, and I will tell you how to do this. (I'm not promoting any such activities. It's bad. Anyway, I don't care. 😂)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Step-by-Step Guide: Adding &lt;code&gt;Discord Live Members Count Badge&lt;/code&gt; to Your GitHub README
&lt;/h2&gt;

&lt;p&gt;Let's get your Discord badge up and running in less than 5 minutes!&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Add the &lt;code&gt;Live Count Bot&lt;/code&gt; to Your Server
&lt;/h3&gt;

&lt;p&gt;First, you need to invite my bot to your Discord server:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://discord.com/oauth2/authorize?client_id=1388440480102092860&amp;amp;permissions=1040&amp;amp;integration_type=0&amp;amp;scope=bot" rel="noopener noreferrer"&gt;🤖 Click here to add the bot&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The bot needs minimal permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;View Channels&lt;/strong&gt; (to access your server)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manage Channels&lt;/strong&gt; (for member counting)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 2: Enable Server Widget
&lt;/h3&gt;

&lt;p&gt;This is crucial - without this step, your badges won't work!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Right-click your server name → &lt;strong&gt;Server Settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Engagement&lt;/strong&gt; → &lt;strong&gt;Server Widget&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toggle ON&lt;/strong&gt; the "Enable Server Widget" option&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copy your Server ID&lt;/strong&gt; (you'll need this!)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Still facing difficulty? Follow the &lt;a href="https://discord-live-members-count-badge.vercel.app/documentation" rel="noopener noreferrer"&gt;&lt;strong&gt;Documentation here&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What if you're not a core member of the Discord server? The option above is available only to the server owner, admins, and (sometimes) moderators. What if you're none of them and still want to add this badge? Remember the "&lt;strong&gt;Sneak-Peek&lt;/strong&gt;" thing I told you about above? We will apply it here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you're not a core member&lt;/strong&gt; and still want to add this badge or want to sneak peek, then follow the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your &lt;strong&gt;Users Settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Advanced&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toggle ON&lt;/strong&gt; the "Developer Mode" option&lt;/li&gt;
&lt;li&gt;Now, right-click on any server and you will get an option to &lt;strong&gt;Copy Server ID&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;The Live Count Bot must be present in that server.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Step 3: Customize Your Badge
&lt;/h3&gt;

&lt;p&gt;Let's make it fast and as simple as possible.&lt;/p&gt;

&lt;p&gt;Copy this template, add your Server ID and Invite Link (optional, but important) in the corresponding places, and paste it into your GitHub README.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[![discord members](https://discord-live-members-count-badge.vercel.app/api/discord-members?guildId=YOUR_SERVER_ID_HERE)](https://discord.gg/YOUR_NEVER_EXPIRY_INVITE_CODE_HERE)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It will show the total Discord member count (including bots).&lt;/p&gt;

&lt;p&gt;I have made multiple endpoints; check them out here:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8el1cae3333gjz6mcv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8el1cae3333gjz6mcv2.png" alt="endpoints"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the endpoints using this link:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discord Members&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  https://discord-live-members-count-badge.vercel.app/api/discord-members?guildId=YOUR_SERVER_ID_HERE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;2. &lt;strong&gt;Discord Bots&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  https://discord-live-members-count-badge.vercel.app/api/discord-bots?guildId=YOUR_SERVER_ID_HERE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;3. &lt;strong&gt;Discord Total&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  https://discord-live-members-count-badge.vercel.app/api/discord-total?guildId=YOUR_SERVER_ID_HERE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;But do you want to customize it more?&lt;/strong&gt;&lt;br&gt;
Well, I've got you covered. I have included three extra parameters to make it more customizable for you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;color&lt;/code&gt; (optional) - Hex color without &lt;code&gt;#&lt;/code&gt; (default: 7289DA)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;label&lt;/code&gt; (optional) - Custom text label (default: varies by endpoint)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;scale&lt;/code&gt; (optional) - Size multiplier from 0.5 to 10.0 (default: 1)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use them as:&lt;br&gt;
Badge Code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;![discord members&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://discord-live-members-count-badge.vercel.app/api/discord-members?guildId=YOUR_SERVER_ID&amp;amp;color=27ae60&amp;amp;label=Users&amp;amp;scale=2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;](https://discord.gg/your-invite)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Endpoint:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;https://discord-live-members-count-badge.vercel.app/api/discord-members?guildId=YOUR_SERVER_ID&amp;amp;color=27ae60&amp;amp;label=Users&amp;amp;scale=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykgbh5rr3an78dgx9hw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykgbh5rr3an78dgx9hw9.png" alt="custom"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Same for other endpoints.&lt;/em&gt; Btw, pink is my favorite color. I don't know why I created a green badge. Maybe the devil made me do it.&lt;/p&gt;
&lt;/blockquote&gt;
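&lt;p&gt;Since these are just query-string parameters, you can also assemble a badge URL programmatically. A small helper for that (purely illustrative; it is not part of the project):&lt;/p&gt;

```python
from urllib.parse import urlencode

BASE = "https://discord-live-members-count-badge.vercel.app/api/discord-members"

def badge_url(guild_id, color="7289DA", label=None, scale=1):
    """Build a badge URL from the documented query parameters.

    Defaults mirror the documented ones; `label` is omitted when unset,
    so the endpoint falls back to its own default label.
    """
    params = {"guildId": guild_id, "color": color, "scale": scale}
    if label is not None:
        params["label"] = label
    return BASE + "?" + urlencode(params)
```

&lt;p&gt;For example, &lt;code&gt;badge_url("YOUR_SERVER_ID", color="27ae60", label="Users", scale=2)&lt;/code&gt; reproduces the green endpoint shown above.&lt;/p&gt;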

&lt;p&gt;There's a chance you'll get frustrated, because you have to change the color and scale multiple times to find the best look. Each time, you have to wait for the page to reload or hit &lt;code&gt;enter&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To become your HERO again, I built the website too.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://discord-live-members-count-badge.vercel.app/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdiscord-live-members-count-badge.vercel.app%2Fog-image.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://discord-live-members-count-badge.vercel.app/" rel="noopener noreferrer" class="c-link"&gt;
            Discord Badge
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Add beautiful, real-time Discord member count badges to your GitHub README
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdiscord-live-members-count-badge.vercel.app%2Ffavicon.ico"&gt;
          discord-live-members-count-badge.vercel.app
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;The interface is simple, but there are two things worth noticing:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Badge Generator
&lt;/h3&gt;

&lt;p&gt;This one is simple: add your Server ID and Invite Link, then click Generate, and your badges will be created. Copy and paste them directly into your README.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fcrgftu347kixpwrr2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fcrgftu347kixpwrr2y.png" alt="generator"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Interactive Playground
&lt;/h3&gt;

&lt;p&gt;Don't want to manually customize badges? I built an interactive playground where you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🎨 Pick colors&lt;/strong&gt; from presets or use custom hex codes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📝 Edit labels&lt;/strong&gt; with live preview&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📏 Adjust scaling&lt;/strong&gt; with a visual slider&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📋 Copy ready-to-use&lt;/strong&gt; markdown code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://discord-live-members-count-bot.vercel.app" rel="noopener noreferrer"&gt;🚀 Try the Playground&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jhigmj0gyo2qdbwoy72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jhigmj0gyo2qdbwoy72.png" alt="Playground Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  How did I build it?
&lt;/h2&gt;

&lt;p&gt;In this section, I will try to explain how I built this web application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1. Creating the Discord Bot
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://discord.com/developers/applications" rel="noopener noreferrer"&gt;&lt;strong&gt;Discord Developer Portal&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new application&lt;/li&gt;
&lt;li&gt;Go to the &lt;strong&gt;Bot&lt;/strong&gt; section&lt;/li&gt;
&lt;li&gt;Create a bot and copy the token&lt;/li&gt;
&lt;li&gt;Invite the bot to your server with these permissions:

&lt;ul&gt;
&lt;li&gt;View Channels&lt;/li&gt;
&lt;li&gt;Manage Channels&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;In Privileged Gateway Intents, &lt;code&gt;Presence Intent&lt;/code&gt; and &lt;code&gt;Server Members Intent&lt;/code&gt; must be enabled.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Bot Permissions&lt;/strong&gt;&lt;br&gt;
For this case, the bot needs minimal permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;View Channels&lt;/strong&gt; (1024)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manage Channels&lt;/strong&gt; (16)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total permission integer: &lt;code&gt;1040&lt;/code&gt;&lt;/p&gt;
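&lt;p&gt;Those two numbers combine with a bitwise OR, which is all a Discord permission integer is. A quick sketch (the flag values match Discord's documented permission bits):&lt;/p&gt;

```typescript
// Discord permission flags are powers of two, combined with bitwise OR.
const VIEW_CHANNELS = 1 << 10;  // 1024
const MANAGE_CHANNELS = 1 << 4; // 16

const permissions = VIEW_CHANNELS | MANAGE_CHANNELS;
console.log(permissions); // -> 1040
```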
&lt;h2&gt;
  
  
  Step 2. Routes and API
&lt;/h2&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;discord-members&lt;/code&gt; API
&lt;/h3&gt;

&lt;p&gt;It fetches the approximate member count using Discord's &lt;code&gt;/guilds/{guildId}?with_counts=true&lt;/code&gt; endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://discord.com/api/v10/guilds/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;?with_counts=true`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bot &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_BOT_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;Caching&lt;/em&gt; is not implemented in this route, because the call is fast and the count is only approximate anyway.&lt;/p&gt;
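&lt;p&gt;The response body then exposes the count directly. A minimal sketch of turning it into a badge message (the interface lists only the fields we need from Discord's Get Guild response, and the helper name is mine):&lt;/p&gt;

```typescript
// Only the fields we need from Discord's Get Guild response
// (returned when with_counts=true is set).
interface GuildCounts {
  approximate_member_count: number;
  approximate_presence_count: number;
}

// Turn the raw count into the badge message text.
function memberMessage(data: GuildCounts): string {
  return `${data.approximate_member_count} members`;
}

console.log(memberMessage({ approximate_member_count: 42, approximate_presence_count: 7 }));
// -> "42 members"
```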
&lt;h3&gt;
  
  
  &lt;code&gt;discord-bots&lt;/code&gt; API
&lt;/h3&gt;

&lt;p&gt;It counts bots in a guild using the &lt;code&gt;fetchAllMembers()&lt;/code&gt; utility.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;members&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchAllMembers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;botCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Key Points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calls Discord's &lt;code&gt;/guilds/{guildId}/members&lt;/code&gt; endpoint (paginated).&lt;/li&gt;
&lt;li&gt;Requires member intent (Privileged Intent must be enabled).&lt;/li&gt;
&lt;li&gt;Filters bot users using &lt;code&gt;m.user.bot&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Uses caching (in the utility) to reduce repeated heavy calls.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;code&gt;discord-total&lt;/code&gt; API
&lt;/h3&gt;

&lt;p&gt;It combines both bot and human user counts.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;members&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchAllMembers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;botCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;humanCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;botCount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Key Points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Same logic and fetching as &lt;code&gt;discord-bots&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Shows combined info like &lt;code&gt;22 users, 5 bots&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
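&lt;p&gt;Composing that combined message from the two counts is a one-liner; a sketch (the helper name is mine, not from the repo):&lt;/p&gt;

```typescript
// Build the "22 users, 5 bots" style message for the combined badge.
function totalMessage(humanCount: number, botCount: number): string {
  return `${humanCount} users, ${botCount} bots`;
}

console.log(totalMessage(22, 5)); // -> "22 users, 5 bots"
```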
&lt;h3&gt;
  
  
  &lt;code&gt;fetchAllMembers&lt;/code&gt; Utility
&lt;/h3&gt;

&lt;p&gt;It paginates through all members of a guild using Discord's API.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;keepFetching&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://discord.com/api/v10/guilds/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/members`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bot &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_BOT_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
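&lt;p&gt;The elided part of the loop advances the &lt;code&gt;after&lt;/code&gt; cursor. A sketch of that pagination step (type and function names are mine):&lt;/p&gt;

```typescript
// Discord's /members endpoint pages by ascending user id: each request
// passes `after` = the last user id from the previous page.
type Member = { user: { id: string; bot: boolean } };

function nextCursor(page: Member[]): string | null {
  if (page.length === 0) return null; // empty page: no more members
  return page[page.length - 1].user.id;
}

console.log(nextCursor([{ user: { id: "1", bot: false } }, { user: { id: "9", bot: true } }]));
// -> "9"
```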


&lt;p&gt;&lt;strong&gt;Caching:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tries Redis first using key &lt;code&gt;guild:{guildId}:members&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cacheKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[Cache] HIT for &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[Cache] MISS for &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;guildId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[Cache] Redis error on GET: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;On cache MISS, fetches from Discord and caches the result.&lt;/li&gt;
&lt;li&gt;TTL is controlled via &lt;code&gt;CACHE_TTL&lt;/code&gt; environment variable (default 300 seconds).&lt;/li&gt;
&lt;/ul&gt;
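&lt;p&gt;The write side of the cache mirrors the read. Here's an in-memory stand-in for the SET-with-TTL behavior, so you can see the expiry logic without a live Redis (a sketch, not the repo's exact code):&lt;/p&gt;

```typescript
// In-memory stand-in for Redis SET with a TTL.
const CACHE_TTL = 300; // seconds; the real route reads this from the CACHE_TTL env var

const store = new Map<string, { value: string; expiresAt: number }>();

function cacheSet(key: string, value: string, ttlSeconds: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
}

function cacheGet(key: string): string | null {
  const entry = store.get(key);
  if (!entry || Date.now() > entry.expiresAt) return null; // expired or absent
  return entry.value;
}

cacheSet("guild:123:members", JSON.stringify([{ user: { bot: false } }]), CACHE_TTL);
console.log(cacheGet("guild:123:members") !== null); // -> true
console.log(cacheGet("guild:999:members")); // -> null
```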

&lt;p&gt;&lt;strong&gt;Rate Limiting Handling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the API responds with a 429, it waits for &lt;code&gt;retry_after&lt;/code&gt; seconds and retries.&lt;/li&gt;
&lt;li&gt;Adds &lt;code&gt;setTimeout&lt;/code&gt; between requests to avoid hitting limits.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;429&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;retryAfter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;retry_after&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;retryAfter&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Badge Generation
&lt;/h3&gt;

&lt;p&gt;All routes use the &lt;code&gt;badge-maker&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;svg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;makeBadge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;color&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;style&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;flat&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;logoBase64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`data:image/svg+xml;base64,&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;discordLogoBase64&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Scaling:&lt;/strong&gt; Optional &lt;code&gt;scale&lt;/code&gt; param adjusts badge size.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scale&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;svg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;svg&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&amp;lt;svg&lt;/span&gt;&lt;span class="se"&gt;([^&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;?)&lt;/span&gt;&lt;span class="sr"&gt;width="&lt;/span&gt;&lt;span class="se"&gt;(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;" height="&lt;/span&gt;&lt;span class="se"&gt;(\d&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;"/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&amp;lt;svg&lt;/span&gt;&lt;span class="se"&gt;[^&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;*&amp;gt;/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`$1&amp;lt;g transform="scale(&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;scale&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;)"&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;svg&amp;gt;/&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;/g&amp;gt;&amp;lt;/svg&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Error Handling:&lt;/strong&gt; If any call fails, a red "error" badge is returned.&lt;/p&gt;
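&lt;p&gt;That fallback is a simple try/catch around the fetch. A sketch of the shape (the wrapper name and color values are mine; the real routes pass these fields to &lt;code&gt;makeBadge&lt;/code&gt;):&lt;/p&gt;

```typescript
type BadgeFormat = { label: string; message: string; color: string };

// Degrade any fetch failure into a red "error" badge
// instead of a broken image in the README.
function withErrorBadge(fetchCount: () => number): BadgeFormat {
  try {
    return { label: "discord", message: `${fetchCount()} members`, color: "5865F2" };
  } catch {
    return { label: "discord", message: "error", color: "red" };
  }
}

console.log(withErrorBadge(() => { throw new Error("API down"); }).message); // -> "error"
console.log(withErrorBadge(() => 22).message); // -> "22 members"
```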


&lt;h3&gt;
  
  
  Quick Summary
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;discord-members&lt;/code&gt; = fast estimate, no intents.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;discord-bots&lt;/code&gt; &amp;amp; &lt;code&gt;discord-total&lt;/code&gt; = precise, uses member list and cache.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fetchAllMembers&lt;/code&gt; handles paging, rate limits, and caching.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The setup supports scaling, CDN caching, and proper Discord rate limit compliance.&lt;/p&gt;
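&lt;p&gt;The CDN caching part mostly comes down to response headers on the SVG routes. A hedged sketch of what such a handler might set (the exact header values are my assumption, not copied from the repo):&lt;/p&gt;

```typescript
// Headers that let a CDN cache the SVG badge for ttlSeconds
// while serving a stale copy during revalidation.
function badgeHeaders(ttlSeconds: number): Record<string, string> {
  return {
    "Content-Type": "image/svg+xml",
    "Cache-Control": `public, max-age=0, s-maxage=${ttlSeconds}, stale-while-revalidate=${ttlSeconds}`,
  };
}

console.log(badgeHeaders(300)["Cache-Control"]);
// -> "public, max-age=0, s-maxage=300, stale-while-revalidate=300"
```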
&lt;h2&gt;
  
  
  Step 3. Create the UI and Deploy
&lt;/h2&gt;

&lt;p&gt;You can use any AI tool to create the UI; I used v0 by Vercel. I also deployed the application on Vercel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://discord-live-members-count-badge.vercel.app/" rel="noopener noreferrer"&gt;&lt;strong&gt;See it in Action&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the GitHub repository for the Source Code (liked this project? Star it 🌠)&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;
        RS-labhub
      &lt;/a&gt; / &lt;a href="https://github.com/RS-labhub/Discord-Live-Members-Count-Badge" rel="noopener noreferrer"&gt;
        Discord-Live-Members-Count-Badge
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Add beautiful, real-time Discord member count badges to your GitHub README
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/RS-labhub/Discord-Live-Members-Count-Badge/master/public/og-image.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FDiscord-Live-Members-Count-Badge%2Fmaster%2Fpublic%2Fog-image.png" alt="banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🎯 Discord Live Members Count Badge&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Add beautiful, real-time Discord member count badges to your GitHub README&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensource.org/licenses/MIT" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fdf2982b9f5d7489dcf44570e714e3a15fce6253e0cc6b5aa61a075aac2ff71b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d79656c6c6f772e737667" alt="License: MIT"&gt;&lt;/a&gt;
&lt;a href="https://discord-live-members-count-badge.vercel.app" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/f1ad8e741536392a9cc5f3af1607f02a39008eb552ccde501899c9d6948eac4a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6976652d536974652d627269676874677265656e2e737667" alt="Live Site"&gt;&lt;/a&gt;
&lt;a href="https://rohan-sharma-portfolio.vercel.app" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/9c2042675a01a0cbaedb94325894b9cb3a2eaab934730c6238860f35b8ce572a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f506f7274666f6c696f2d526f68616e5f536861726d612d626c756576696f6c65742e737667" alt="Portfolio"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🔄 Real-time Updates&lt;/strong&gt; - Live member count with smart caching (5-minute intervals)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;👥 Multiple Badge Types&lt;/strong&gt; - Total members, human-only, or bot-only counts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🛡️ Safe &amp;amp; Secure&lt;/strong&gt; - Uses Discord API's &lt;code&gt;with_counts=true&lt;/code&gt; for public access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚡ Serverless Ready&lt;/strong&gt; - Deploy to Vercel, Railway, Render, or any platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🎨 Fully Customizable&lt;/strong&gt; - Custom colors, labels, and scaling options&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📱 Mobile Responsive&lt;/strong&gt; - Optimized playground interface for all devices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📦 Zero Config&lt;/strong&gt; - Just add your bot and start using&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🎬 Project Showcase&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Preview&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://youtu.be/jSLE3u_2vag" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FDiscord-Live-Members-Count-Badge%2Fmaster%2Fpublic%2Fthumbnail.png" alt="YouTube Demo"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;🎬 &lt;strong&gt;YouTube Demo&lt;/strong&gt;&lt;br&gt;Click the image to watch the full demo.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://dev.to/rohan_sharma/heres-how-i-created-a-real-time-discord-badge-for-github-readme-4adg" rel="nofollow"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FDiscord-Live-Members-Count-Badge%2Fmaster%2Fpublic%2FblogCover.png" alt="Blog Post"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;📝 &lt;strong&gt;Blog Post&lt;/strong&gt;&lt;br&gt;Read the blog for in-depth explanation.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🚀 Quick Start&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;1. Add Bot to Your Server&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;First, invite our bot to your Discord server:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://discord.com/oauth2/authorize?client_id=1388440480102092860&amp;amp;permissions=1040&amp;amp;integration_type=0&amp;amp;scope=bot" rel="nofollow noopener noreferrer"&gt;&lt;strong&gt;🤖 Add Bot to Server&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;2. Enable Server Widget&lt;/h3&gt;

&lt;/div&gt;
&lt;ol&gt;
&lt;li&gt;Go to…&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/RS-labhub/Discord-Live-Members-Count-Badge" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wait, wait, wait! Did you just ask for a Video Explanation?&lt;/strong&gt; I have it for you already. Check it out here:&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/jSLE3u_2vag"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  PROMOTION
&lt;/h2&gt;

&lt;p&gt;For the default badges on the website and in the README, I used LLMWare's Discord server with their consent. In case you didn't know, LLMWare just launched "&lt;strong&gt;Model HQ: Your Private AI Companion&lt;/strong&gt;". The interesting part is that it runs locally, without internet.&lt;/p&gt;

&lt;p&gt;Want to know more about it? Read it here&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-story__hidden-navigation-link"&gt;How to Run AI Models Privately on Your AI PC with Model HQ; No Cloud, No Code&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/llmware"&gt;
            &lt;img alt="LLMWare logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F8208%2F4bf5768d-460d-460b-9ccc-a80499ca040e.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/rohan_sharma" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1936949%2Fa1fd5434-8c99-4531-9491-2d117d2e6996.jpg" alt="rohan_sharma profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/rohan_sharma" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Rohan Sharma
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Rohan Sharma
                &lt;a href="/++"&gt;&lt;img alt="Subscriber" class="subscription-icon" src="https://assets.dev.to/assets/subscription-icon-805dfa7ac7dd660f07ed8d654877270825b07a92a03841aa99a1093bd00431b2.png"&gt;&lt;/a&gt;
              
              &lt;div id="story-author-preview-content-2629400" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/rohan_sharma" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1936949%2Fa1fd5434-8c99-4531-9491-2d117d2e6996.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Rohan Sharma&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/llmware" class="crayons-story__secondary fw-medium"&gt;LLMWare&lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jun 27 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" id="article-link-2629400"&gt;
          How to Run AI Models Privately on Your AI PC with Model HQ; No Cloud, No Code
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag crayons-tag--filled  " href="/t/showdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;showdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/security"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;security&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/nocode"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;nocode&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;80&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              17&lt;span class="hidden s:inline"&gt; comments&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            5 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In short, try this once and let me know your feedback. I don't usually say it, but save this blog for the future and star the GitHub Repository.&lt;/p&gt;

&lt;p&gt;All links in one place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://discord-live-members-count-badge.vercel.app/" rel="noopener noreferrer"&gt;Discord Live Members Count Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://discord-live-members-count-badge.vercel.app/documentation" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/RS-labhub/Discord-Live-Members-Count-Badge" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt; (&lt;strong&gt;star it&lt;/strong&gt; 🌠)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/jSLE3u_2vag" rel="noopener noreferrer"&gt;Video Explanation&lt;/a&gt; (&lt;strong&gt;like and subscribe&lt;/strong&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want me to &lt;strong&gt;RAISE A PR for this badge on your GitHub repository&lt;/strong&gt;, then let me know in the comment section. &lt;/p&gt;

&lt;p&gt;Also, in case you want to know more about me, here's my &lt;a href="https://rohan-sharma-portfolio.vercel.app/" rel="noopener noreferrer"&gt;&lt;strong&gt;Portfolio&lt;/strong&gt;&lt;/a&gt; (and yeah, &lt;strong&gt;try to find out the secret page&lt;/strong&gt;, it's still waiting for you.)&lt;/p&gt;

&lt;p&gt;Thanks for reading till here. You're awesomeeeeeeeeeee. Here's a hug from me 🫂. Keep growing. You're the best. 🫀&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>How to Run AI Models Privately on Your AI PC with Model HQ; No Cloud, No Code</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Fri, 27 Jun 2025 04:20:58 +0000</pubDate>
      <link>https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k</link>
      <guid>https://dev.to/llmware/how-to-run-ai-models-privately-on-your-ai-pc-with-model-hq-no-cloud-no-code-3o9k</guid>
      <description>&lt;p&gt;In an era where efficiency and data privacy are paramount, &lt;strong&gt;Model HQ by&lt;/strong&gt; &lt;a href="https://llmware.ai" rel="noopener noreferrer"&gt;&lt;strong&gt;LLMWare&lt;/strong&gt;&lt;/a&gt; emerges as a game-changer for professionals and enthusiasts alike. Model HQ is a groundbreaking desktop application that transforms your own PC or laptop into a fully private, high-performance AI workstation.&lt;/p&gt;

&lt;p&gt;Most AI tools rely on the cloud. &lt;strong&gt;Model HQ&lt;/strong&gt; doesn’t.&lt;/p&gt;

&lt;p&gt;No more cloud latency. No more vendor lock-in. Just &lt;strong&gt;100+ cutting-edge AI models&lt;/strong&gt;, blazing fast document search, and natural language tools; all running &lt;strong&gt;locally&lt;/strong&gt; on your machine.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Model HQ?
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/Dbxb5qfsMaM"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model HQ&lt;/strong&gt; is a powerful, no-code desktop application that enables users to run enterprise-grade AI workflows &lt;strong&gt;locally&lt;/strong&gt;, &lt;strong&gt;securely&lt;/strong&gt;, and &lt;strong&gt;at scale,&lt;/strong&gt; right from their own PC or laptop. Designed for simplicity and performance, it provides point-and-click access to &lt;strong&gt;100+ state-of-the-art AI models&lt;/strong&gt;, ranging from &lt;strong&gt;1B to 32B parameters&lt;/strong&gt;, with built-in optimization for AI PCs and Intel hardware. Whether you’re building AI applications, analyzing documents, or querying data, Model HQ automatically adapts to your device’s specs to ensure &lt;strong&gt;fast, efficient inferencing,&lt;/strong&gt; even for large models that traditionally struggle on standard formats.&lt;/p&gt;

&lt;p&gt;What truly sets Model HQ apart is its &lt;strong&gt;privacy-first, offline capability&lt;/strong&gt;. Once models are downloaded, they can be used without Wi-Fi, keeping &lt;strong&gt;your data and sensitive information 100% on-device&lt;/strong&gt;. This makes it the fastest and most secure way to explore and deploy powerful AI tools without depending on the cloud or external APIs. From developers and researchers to enterprise teams, Model HQ delivers a &lt;strong&gt;seamless, cost-effective, and private AI experience&lt;/strong&gt;; all in one sleek, local platform.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What Can Model HQ Do?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febpj6vk0qze2o2myp8qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febpj6vk0qze2o2myp8qu.png" alt="model hq"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Chat:&lt;/strong&gt;&lt;br&gt;
The Chat feature gives users a fast way to start experimenting with chat models of various sizes: Small (1–3 billion parameters), Medium (7–8 billion parameters), and Large (9 billion and above, up to 32 billion parameters).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Small Model&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
~1–3 billion parameters — Fastest response time, suitable for basic chat.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Medium Model&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
~7–8 billion parameters — Balanced performance, ideal for chat, data analysis and standard RAG tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Large Model&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
~9–32 billion parameters — Most powerful option for chat and RAG; best for advanced and complex analytical workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://youtu.be/6z3kyUpsGys?si=4kYvkPEBUJN81nT6" rel="noopener noreferrer"&gt;Watch Chat in Action&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;2. Agents&lt;/strong&gt;&lt;br&gt;
Agents in Model HQ are pre-configured or custom-built workflows that automate complex tasks using local AI models. They allow users to process files, extract insights, or perform multi-step operations; &lt;strong&gt;all with point-and-click simplicity&lt;/strong&gt; and no coding required.&lt;/p&gt;

&lt;p&gt;Users can &lt;strong&gt;build new agents from scratch&lt;/strong&gt;, &lt;strong&gt;load existing ones&lt;/strong&gt; (either from built-in templates or previously created workflows), and manage them through a simple dropdown interface. From editing or deleting agents to running &lt;strong&gt;batch operations&lt;/strong&gt; on multiple documents, the Agent system provides a flexible way to scale private, on-device AI workflows. Pre-created agents include powerful tools like &lt;strong&gt;Contract Analyzer&lt;/strong&gt;, &lt;strong&gt;Customer Support Bot&lt;/strong&gt;, &lt;strong&gt;Financial Data Extractor&lt;/strong&gt;, &lt;strong&gt;Image Tagger&lt;/strong&gt;, and more — each designed to handle specific tasks efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/UTNQxspDi3I?si=yOaPilNSEqY1xLFy" rel="noopener noreferrer"&gt;Watch Agents in Action&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;3. Bots&lt;/strong&gt;&lt;br&gt;
The Bots feature allows users to create their own custom Chat and RAG bots seamlessly for either the AI PC/edge device use case (Fast Start Chatbot and Model HQ Biz Bot) or via API deployment (Model HQ API Server Biz Bot).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/uy53WKrMOXc?si=TAaS_hYj0AddXu2R" rel="noopener noreferrer"&gt;Watch Bots in Action&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;4. RAG&lt;/strong&gt;&lt;br&gt;
RAG combines retrieval-based techniques with generative AI to allow models to answer questions more accurately by retrieving relevant information from external sources or documents. With RAG in Model HQ, you can create knowledge bases that you can query in the chat section or via a custom bot by uploading documents. The RAG section is used only to create the knowledge base.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/FSjpAgIZnPM?si=5kMR_sXH_pCyNLvg" rel="noopener noreferrer"&gt;Watch Rag in Action&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;5. Models&lt;/strong&gt;&lt;br&gt;
The Models section allows you to explore, manage, and test models within Model HQ. You can discover new models, manage downloaded models, review inference history, and run benchmark tests; all from a single interface.&lt;/p&gt;

&lt;p&gt;And all of this can be done while keeping your &lt;strong&gt;data private, your workflows offline, and your AI performance fully optimized for your device&lt;/strong&gt; — no internet, no cloud, and no compromise. With its powerful features and user-friendly interface, Model HQ empowers you to leverage AI technology without compromising on security. Experience the future of AI today and transform the way you work!&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  System Requirements
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nqbehtpmqis291asyjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nqbehtpmqis291asyjc.png" alt="sys_req"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  Experience Model HQ Risk-Free
&lt;/h3&gt;

&lt;p&gt;We understand that trying new software can be a leap of faith. That’s why we’re offering a &lt;a href="https://llmware.ai/enterprise#developers-waitlist" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;90-day free trial for developers&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;. Experience the full capabilities of Model HQ without any commitment. Sign up for the trial here and discover how it can transform your workflow.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  A Powerful Collaboration with Intel
&lt;/h3&gt;

&lt;p&gt;LLMWare.ai has partnered with Intel to optimize Model HQ for peak performance on your devices. This collaboration ensures that you receive a reliable and efficient AI experience, making your tasks smoother and more productive. Learn more about this exciting partnership &lt;a href="https://llmware.ai/intel" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;here&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Read the Intel Solution Brief here:&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://www.intel.com/content/www/us/en/content-details/854280/local-ai-no-code-more-secure-with-ai-pcs-and-the-private-cloud.html" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.intel.com%2Fetc.clientlibs%2Fsettings%2Fwcm%2Fdesigns%2Fintel%2Fus%2Fen%2Fimages%2Fresources%2Fprintlogo.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://www.intel.com/content/www/us/en/content-details/854280/local-ai-no-code-more-secure-with-ai-pcs-and-the-private-cloud.html" rel="noopener noreferrer" class="c-link"&gt;
            Local AI—No Code, More Secure with AI PCs and the Private Cloud
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Bring secure, no-code GenAI to your enterprise with Intel® AI PCs and LLMWare’s Model HQ—run agents and RAG queries locally without exposing data or incurring cloud costs.
In this brief, learn how to scale private AI simply and affordably.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.intel.com%2Fetc.clientlibs%2Fsettings%2Fwcm%2Fdesigns%2Fintel%2Fdefault%2Fresources%2Ffavicon-32x32.png"&gt;
          intel.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Take the Next Step Towards AI Empowerment
&lt;/h3&gt;

&lt;p&gt;Don’t miss the chance to elevate your productivity with Model HQ. Whether you’re a business professional, a developer, or a student, this application is designed to meet your needs and exceed your expectations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://llmware-modelhq.checkoutpage.com/modelhq-client-app-for-windows" rel="noopener noreferrer"&gt;&lt;strong&gt;Purchase Model HQ Today!&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ready to unlock the full potential of AI on your PC or laptop? Buy Model HQ now using the link above and take the first step towards a smarter, more efficient future.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Learn More About Model HQ
&lt;/h3&gt;

&lt;p&gt;For additional information about Model HQ, including detailed features and user guides, &lt;a href="https://llmware.ai" rel="noopener noreferrer"&gt;&lt;em&gt;visit our website&lt;/em&gt;&lt;/a&gt;. Don’t forget to check out our introductory video and explore our &lt;a href="https://youtube.com/playlist?list=PL1-dn33KwsmBiKZDobr9QT-4xI8bNJvIU&amp;amp;si=dLdhu0kMQWwgBwTE" rel="noopener noreferrer"&gt;&lt;em&gt;YouTube playlist&lt;/em&gt;&lt;/a&gt; for tutorials and tips.&lt;/p&gt;

&lt;p&gt;Join &lt;a href="https://discord.gg/bphreFK4NJ" rel="noopener noreferrer"&gt;&lt;em&gt;LLMWare’s official Discord Server&lt;/em&gt;&lt;/a&gt; to interact with its great community of users and to ask questions or share feedback.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Model HQ&lt;/strong&gt; isn’t just another AI app, it’s a complete, offline-first platform built for &lt;strong&gt;speed, privacy, and control&lt;/strong&gt;. Whether you’re chatting with LLMs, building agents, analyzing documents, or deploying custom bots, everything runs &lt;strong&gt;securely on your own PC or laptop&lt;/strong&gt;. With support for models up to &lt;strong&gt;32B parameters&lt;/strong&gt;, RAG-enabled document search, natural language SQL, and no-code workflows, Model HQ brings enterprise-grade AI directly to your desktop, no cloud required.&lt;/p&gt;

&lt;p&gt;As the world moves toward AI-powered productivity, Model HQ ensures you’re ahead of the curve with a faster, safer, and smarter way to work.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>nocode</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Creating a Chatbot from Scratch and Vibe Coding the UI💃</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Sat, 21 Jun 2025 08:27:35 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/creating-a-chatbot-from-scratch-and-vibe-coding-the-ui-1bij</link>
      <guid>https://dev.to/rohan_sharma/creating-a-chatbot-from-scratch-and-vibe-coding-the-ui-1bij</guid>
      <description>&lt;p&gt;Hey all,&lt;/p&gt;

&lt;p&gt;I hope you remember me. (Yes?? LMK in the comment section.)&lt;/p&gt;

&lt;p&gt;In this blog, I will discuss Radhika: Adaptive Reasoning &amp;amp; Intelligence Assistant. It provides specialized assistance across six distinct modes: General, Productivity, Wellness, Learning, Creative, and BFF.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://radhika-sharma.vercel.app/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fradhika-sharma.vercel.app%2Fog-image.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer" class="c-link"&gt;
            Radhika
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Radhika is a versatile AI chatbot designed to assist with a wide range of tasks, from answering questions to providing recommendations and engaging in casual conversation.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fradhika-sharma.vercel.app%2Ffavicon.ico"&gt;
          radhika-sharma.vercel.app
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;(try it out, give feedback and suggestions, request changes)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Tech Stack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Framework&lt;/strong&gt;: Next.js 14 with App Router and React 18&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Styling&lt;/strong&gt;: Tailwind CSS with custom design system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Components&lt;/strong&gt;: shadcn/ui component library&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Icons&lt;/strong&gt;: Lucide React icon library&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3D Graphics&lt;/strong&gt;: Three.js for particle visualizations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Animations&lt;/strong&gt;: CSS transitions and keyframe animations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AI &amp;amp; Backend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Integration&lt;/strong&gt;: Vercel AI SDK for unified LLM access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Providers&lt;/strong&gt;: Groq, Google Gemini, OpenAI, Claude&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speech&lt;/strong&gt;: WebKit Speech Recognition and Synthesis APIs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: Browser localStorage for chat persistence and settings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API&lt;/strong&gt;: Next.js API routes for secure LLM communication&lt;/li&gt;
&lt;/ul&gt;
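&lt;p&gt;As a rough idea of how the localStorage persistence can work — this is a minimal sketch, not the actual code from the repo; the storage key, helper names, and message shape are all my own placeholders:&lt;/p&gt;

```typescript
// Minimal sketch of localStorage chat persistence.
// STORAGE_KEY, helper names, and the ChatMessage shape are illustrative.
type ChatMessage = { role: "user" | "assistant"; content: string };

const STORAGE_KEY = "radhika-chat-history";

function saveHistory(messages: ChatMessage[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(messages));
}

function loadHistory(): ChatMessage[] {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
  } catch {
    return []; // corrupted storage: start with an empty history
  }
}
```

&lt;p&gt;Settings can be persisted the same way under a separate key.&lt;/p&gt;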

&lt;h3&gt;
  
  
  Development
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language&lt;/strong&gt;: TypeScript for type safety&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt;: Next.js build system with optimizations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt;: Vercel-ready with environment variable support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Optimized bundle splitting and lazy loading&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Implementing Main Logic
&lt;/h2&gt;

&lt;p&gt;This section breaks down how the &lt;a href="https://github.com/RS-labhub/Radhika/blob/master/app/api/chat/route.ts" rel="noopener noreferrer"&gt;&lt;code&gt;app/api/chat/route.ts&lt;/code&gt;&lt;/a&gt; endpoint processes requests, selects models, applies system prompts, and streams responses using different AI providers.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Parse Request
&lt;/h3&gt;

&lt;p&gt;The request handler begins by parsing the JSON body from the incoming POST request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;general&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;groq&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiKey&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;messages&lt;/code&gt;&lt;/strong&gt;: The conversation history sent by the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;mode&lt;/code&gt;&lt;/strong&gt;: Determines which system prompt to use (e.g., &lt;code&gt;bff&lt;/code&gt;, &lt;code&gt;learning&lt;/code&gt;, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;provider&lt;/code&gt;&lt;/strong&gt;: Specifies the AI backend to use (&lt;code&gt;groq&lt;/code&gt;, &lt;code&gt;openai&lt;/code&gt;, &lt;code&gt;claude&lt;/code&gt;, &lt;code&gt;gemini&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;apiKey&lt;/code&gt;&lt;/strong&gt;: Required for OpenAI and Claude if a user key is needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code also validates whether the &lt;code&gt;messages&lt;/code&gt; array exists and is non-empty.&lt;/p&gt;
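&lt;p&gt;The guard itself can be as small as this — a sketch, where the helper name and error message are placeholders of mine rather than the repo's:&lt;/p&gt;

```typescript
// Illustrative request guard; the name and error text are placeholders.
function validateMessages(messages: unknown): string | null {
  if (!Array.isArray(messages) || messages.length === 0) {
    return "messages must be a non-empty array";
  }
  return null; // valid
}
```

&lt;p&gt;The route can then return a 400 response with the error string before any model is invoked.&lt;/p&gt;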
&lt;h3&gt;
  
  
  2. Assign System Prompt
&lt;/h3&gt;

&lt;p&gt;Based on the selected mode, a system prompt is selected to guide the assistant's personality and purpose:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;systemPrompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;SYSTEM_PROMPTS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;SYSTEM_PROMPTS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;general&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Examples of modes include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;productivity&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bff&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;creative&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;wellness&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
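&lt;p&gt;Conceptually, &lt;code&gt;SYSTEM_PROMPTS&lt;/code&gt; is just a lookup table keyed by mode, with a fallback to &lt;code&gt;general&lt;/code&gt;. The prompt texts below are placeholders for illustration, not the actual prompts from the repository:&lt;/p&gt;

```typescript
// Placeholder prompt texts; the real prompts live in the repository.
const SYSTEM_PROMPTS = {
  general: "You are Radhika, a helpful and friendly assistant.",
  productivity: "You help the user plan, prioritize, and focus.",
  wellness: "You offer supportive, non-clinical wellness guidance.",
  learning: "You explain concepts step by step with examples.",
  creative: "You brainstorm and co-write ideas with the user.",
  bff: "You chat casually and warmly, like a close friend.",
};

function pickPrompt(mode: string): string {
  // Unknown modes fall back to the general prompt.
  return SYSTEM_PROMPTS[mode as keyof typeof SYSTEM_PROMPTS] ?? SYSTEM_PROMPTS.general;
}
```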
&lt;h3&gt;
  
  
  3. Route to the Correct Provider
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;provider&lt;/code&gt; field determines which AI model backend to use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt; (&lt;code&gt;gemini&lt;/code&gt;): Uses Google's Gemini 2.0 model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI&lt;/strong&gt;: Uses GPT models (like &lt;code&gt;gpt-4o&lt;/code&gt;, &lt;code&gt;gpt-3.5-turbo&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt;: Uses Anthropic models (like &lt;code&gt;claude-3-sonnet&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Groq&lt;/strong&gt;: Defaults to models like &lt;code&gt;llama-3&lt;/code&gt; and &lt;code&gt;qwen&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each provider has custom logic to instantiate the model, handle errors, and stream the response using:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;streamText&lt;/span&gt;&lt;span class="p"&gt;({...})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
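&lt;p&gt;A simple way to picture the routing is a small mapping from provider to a default model ID before calling &lt;code&gt;streamText&lt;/code&gt;. The function below is my own sketch — the model IDs come from the lists in this section, but the exact defaults in the repo may differ:&lt;/p&gt;

```typescript
// Illustrative provider-to-model mapping; the defaults are assumptions.
function defaultModelFor(provider: string): string {
  switch (provider) {
    case "openai":
      return "gpt-4o";
    case "claude":
      return "claude-3-sonnet";
    case "gemini":
      return "gemini-2.0-flash";
    default:
      return "llama-3.1-8b-instant"; // groq fast model
  }
}
```

&lt;p&gt;Each branch then wraps the chosen model in the matching provider client and streams the response back to the client.&lt;/p&gt;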

&lt;h3&gt;
  
  
  4. Model Selection (Groq Only)
&lt;/h3&gt;

&lt;p&gt;If the provider is &lt;code&gt;groq&lt;/code&gt;, model selection is dynamic. It analyzes the last message to determine the type of task:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lastMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;analyze&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;lastMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;plan&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reasoning&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lastMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;creative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;lastMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;design&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;creative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Radhika automatically selects the best model based on your query's complexity:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Determine which model to use based on conversation context&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// llama-3.1-8b-instant for quick responses&lt;/span&gt;

&lt;span class="c1"&gt;// Use reasoning model for complex analytical tasks&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;analyze&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;compare&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;plan&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;strategy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;decision&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;problem&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reasoning&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// llama-3.3-70b-versatile&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Use creative model for artistic and innovative tasks&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;creative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;brainstorm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;idea&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;write&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;design&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;story&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;modelType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;creative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// qwen/qwen3-32b&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Model Configuration&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Customize model selection in the API route:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;MODELS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;groq&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;fast&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;llama-3.1-8b-instant&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;reasoning&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;llama-3.3-70b-versatile&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="na"&gt;creative&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;qwen/qwen3-32b&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;gemini&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-2.0-flash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;claude&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;claude-3-5-sonnet-20241022&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then the appropriate model (&lt;code&gt;reasoning&lt;/code&gt;, &lt;code&gt;creative&lt;/code&gt;, or &lt;code&gt;fast&lt;/code&gt;) is selected and used for the response.&lt;/p&gt;
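&lt;p&gt;As a hedged, self-contained sketch of that routing (the helper name &lt;code&gt;pickModel&lt;/code&gt; and the keyword constants are illustrative, not the project's actual identifiers):&lt;/p&gt;

```typescript
// Illustrative keyword-based model routing; names here are hypothetical.
const REASONING_KEYWORDS = ["analyze", "compare", "plan", "strategy", "decision", "problem"];
const CREATIVE_KEYWORDS = ["creative", "brainstorm", "idea", "write", "design", "story"];

function pickModel(query: string): string {
  const q = query.toLowerCase();
  // Creative keywords win when both match, mirroring the snippet above,
  // where the creative check runs last and overwrites modelType.
  if (CREATIVE_KEYWORDS.some((k) => q.includes(k))) return "creative";
  if (REASONING_KEYWORDS.some((k) => q.includes(k))) return "reasoning";
  return "fast"; // quick responses by default
}
```

&lt;p&gt;So a prompt like "analyze this report" lands on the reasoning tier, while anything without a trigger keyword stays on the fast tier.&lt;/p&gt;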

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  📄 Multi-Provider Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrcjqqlrevlqhvqh0l63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrcjqqlrevlqhvqh0l63.png" alt="diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach allows a single API route to serve multiple model providers and assistant personalities while maintaining clean, scalable logic.&lt;/p&gt;
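&lt;p&gt;One way to picture that dispatch, as a minimal sketch (&lt;code&gt;resolveModel&lt;/code&gt; is a hypothetical helper; the &lt;code&gt;MODELS&lt;/code&gt; table mirrors the configuration shown earlier):&lt;/p&gt;

```typescript
// Resolve a concrete model id from a provider + model-type pair.
// MODELS mirrors the config shown earlier; resolveModel is illustrative.
const MODELS = {
  groq: {
    fast: "llama-3.1-8b-instant",
    reasoning: "llama-3.3-70b-versatile",
    creative: "qwen/qwen3-32b",
  },
  gemini: { default: "gemini-2.0-flash" },
  openai: { default: "gpt-4o" },
  claude: { default: "claude-3-5-sonnet-20241022" },
};

function resolveModel(provider: string, modelType: string): string {
  const table: { [p: string]: { [t: string]: string } } = MODELS;
  const models = table[provider] || table["groq"]; // unknown provider: fall back to Groq
  // Providers without tiers expose a single default model.
  return models[modelType] || models["default"] || models["fast"];
}
```

&lt;p&gt;This keeps the per-provider knowledge in one table, so adding a provider is a one-line config change rather than new branching logic.&lt;/p&gt;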

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;If you're interested in the other logic, like voice recognition, speech synthesis, and light/dark mode, please head over to the GitHub repo:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;
        RS-labhub
      &lt;/a&gt; / &lt;a href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;
        Radhika
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Radhika is a multi-model AI assistant built for your mood and friendship 💞
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/RS-labhub/Radhika/master/public/banner.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fbanner.png" alt="banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Radhika&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;A modern AI assistant that adapts to how you work and think. Multiple modes, multiple models, one seamless chat experience. Features multiple LLM providers, image generation, voice interaction, and persistent chat history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try now&lt;/strong&gt;: &lt;a href="https://radhika-sharma.vercel.app" rel="nofollow noopener noreferrer"&gt;https://radhika-sharma.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Detailed Explanation&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;br&gt;
&lt;thead&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;th&gt;Preview&lt;/th&gt;
&lt;br&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/thead&gt;
&lt;br&gt;
&lt;tbody&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://dev.to/rohan_sharma/creating-a-chatbot-that-actually-stands-out-vibe-coded-version-draft-1ake" rel="nofollow"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fcover-image.png" alt="Blog Post"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;
&lt;br&gt;
&lt;strong&gt;Blog Post&lt;/strong&gt;&lt;br&gt;Read the blog for in-depth explanation.&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;tr&gt;
&lt;br&gt;
&lt;td&gt;&lt;a href="https://radhika-sharma.vercel.app" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FRS-labhub%2FRadhika%2Fmaster%2Fpublic%2Fyoutube-thumbnail.png" alt="YouTube Demo"&gt;&lt;/a&gt;&lt;/td&gt;
&lt;br&gt;
&lt;td&gt;
&lt;br&gt;
&lt;strong&gt;YouTube Demo&lt;/strong&gt;&lt;br&gt;Coming soon.&lt;/td&gt;
&lt;br&gt;
&lt;/tr&gt;
&lt;br&gt;
&lt;/tbody&gt;
&lt;br&gt;
&lt;/table&gt;&lt;/div&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;6 Chat Modes&lt;/strong&gt;: General, Productivity, Wellness, Learning, Creative, BFF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Provider LLM&lt;/strong&gt;: Groq, Gemini, OpenAI, Claude&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Generation&lt;/strong&gt;: Pollinations, DALL·E 3, Hugging Face, Free alternatives&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice&lt;/strong&gt;: Speech-to-text input &amp;amp; text-to-speech output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth &amp;amp; Persistence&lt;/strong&gt;: Appwrite auth with chat history &amp;amp; favorites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI&lt;/strong&gt;: Light/dark themes, modern &amp;amp; pixel UI styles&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Quick Start&lt;/h2&gt;

&lt;/div&gt;

&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;git clone https://github.com/RS-labhub/radhika.git
&lt;span class="pl-c1"&gt;cd&lt;/span&gt; radhika
bun install   &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; or npm install&lt;/span&gt;
bun run dev   &lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt; or npm run dev&lt;/span&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Open: &lt;a href="http://localhost:3000" rel="nofollow noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;License&lt;/h2&gt;

&lt;/div&gt;

&lt;p&gt;MIT License - see &lt;a href="https://github.com/RS-labhub/Radhika/blob/master/LICENSE" rel="noopener noreferrer"&gt;LICENSE&lt;/a&gt;&lt;/p&gt;




&lt;div&gt;
&lt;p&gt;&lt;strong&gt;Built with ❤️ by &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer"&gt;Rohan Sharma&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/RS-labhub/Radhika/public/Author.jpg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2FRS-labhub%2FRadhika%2Fpublic%2FAuthor.jpg" alt="author"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/RS-labhub/radhika" rel="noopener noreferrer"&gt;⭐ Star&lt;/a&gt; • &lt;a href="https://github.com/RS-labhub/radhika/issues" rel="noopener noreferrer"&gt;🐛 Issues&lt;/a&gt; • &lt;a href="https://github.com/RS-labhub/radhika/discussions" rel="noopener noreferrer"&gt;🗣️&lt;/a&gt;…&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding the UI
&lt;/h2&gt;

&lt;p&gt;Once you have the main logic of your application working, use AI tools like v0, Lovable, or Bolt to create an interface that matches your "thoughts".&lt;/p&gt;

&lt;p&gt;I used v0 and ChatGPT. Prompting... prompting... and never-ending prompting... Check out the video below for a short, simple walkthrough of the project and its features. And you still have &lt;strong&gt;&lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer"&gt;live access&lt;/a&gt;&lt;/strong&gt; to it!&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/2FW6IJeOkzI"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you like it, then please star the repo 🌠 and follow me on GH.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Highlights
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🤖 &lt;strong&gt;Multi-Modal AI&lt;/strong&gt; - Six specialized assistant personalities in one app
&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Multi-Provider Support&lt;/strong&gt; - Groq, Gemini, OpenAI, and Claude integration
&lt;/li&gt;
&lt;li&gt;🎤 &lt;strong&gt;Advanced Voice&lt;/strong&gt; - Speech-to-text input and text-to-speech output
&lt;/li&gt;
&lt;li&gt;🎨 &lt;strong&gt;Dynamic 3D Visuals&lt;/strong&gt; - Interactive particle system with mode-based colors
&lt;/li&gt;
&lt;li&gt;💾 &lt;strong&gt;Smart Persistence&lt;/strong&gt; - Automatic chat history saving per mode
&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;Quick Actions&lt;/strong&gt; - One-click access to common tasks per mode
&lt;/li&gt;
&lt;li&gt;📊 &lt;strong&gt;Real-time Analytics&lt;/strong&gt; - Live usage statistics and AI activity monitoring
&lt;/li&gt;
&lt;li&gt;🌙 &lt;strong&gt;Beautiful UI&lt;/strong&gt; - Responsive design with dark/light themes
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Modes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Productivity&lt;/strong&gt;: Task planning, project management, time optimization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wellness&lt;/strong&gt;: Health guidance, fitness routines, mental well-being support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning&lt;/strong&gt;: Educational assistance, study plans, skill development
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creative&lt;/strong&gt;: Brainstorming, content creation, artistic inspiration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General&lt;/strong&gt;: Problem-solving, decision-making, everyday conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BFF&lt;/strong&gt;: Emotional support, casual chats, GenZ-friendly interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Perfect for users who need a versatile AI assistant that adapts to different contexts, maintains conversation history across specialized domains, and provides an engaging visual experience with advanced voice capabilities.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Radhika is a sophisticated AI-powered assistant built with Next.js and powered by multiple LLM providers including Groq, Gemini, OpenAI, and Claude. RADHIKA adapts to different modes of interaction, providing specialized assistance for productivity, wellness, learning, creative tasks, and even acts as your GenZ bestie!&lt;/p&gt;

&lt;p&gt;I personally suggest you try the "&lt;strong&gt;BFF&lt;/strong&gt;" mode. You will like it for sure.&lt;/p&gt;

&lt;p&gt;Once again, here are the links you don't want to miss out on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live Demo&lt;/strong&gt;: &lt;a href="https://radhika-sharma.vercel.app/" rel="noopener noreferrer"&gt;https://radhika-sharma.vercel.app/&lt;/a&gt; (added modern/pixel UI options)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Github Repo&lt;/strong&gt;: &lt;a href="https://github.com/RS-labhub/Radhika" rel="noopener noreferrer"&gt;https://github.com/RS-labhub/Radhika&lt;/a&gt; (Give it a star 🌠)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YouTube Demo&lt;/strong&gt;: &lt;a href="https://www.youtube.com/watch?v=2FW6IJeOkzI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=2FW6IJeOkzI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you for reading. You're wonderful. And I mean it. Ba-bye, see you in the next blog. (and PLEASE SPAM THE COMMENT SECTION AS ALWAYS)&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>vibecoding</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
    <item>
      <title>Find My Portfolio’s Secret Page; If You Can 🤫</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Fri, 13 Jun 2025 11:15:03 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/find-my-portfolios-secret-page-if-you-can-25j1</link>
      <guid>https://dev.to/rohan_sharma/find-my-portfolios-secret-page-if-you-can-25j1</guid>
      <description>&lt;p&gt;Did you miss me? I guess, yeah, you did. Sorry for that. I was so busy with my DevRel work at llmware. (not flaunting)&lt;/p&gt;

&lt;p&gt;Btw, I've finally completed my portfolio website. And this is what I wanted to share with my dev.to fam.&lt;/p&gt;

&lt;p&gt;Thanks to AI tools that accelerated the process. Vibe coding is not that bad; it saves a lot of time.&lt;/p&gt;

&lt;p&gt;So, let's start this super mini blog.&lt;/p&gt;

&lt;p&gt;3... 2... 1... 🟢&lt;/p&gt;

&lt;h2&gt;
  
  
  Welcome to &lt;code&gt;RS PORTFOLIO&lt;/code&gt; 👋
&lt;/h2&gt;

&lt;p&gt;First of all, don't expect too much. It's just a normal portfolio. &lt;strong&gt;IS IT?&lt;/strong&gt; Who knows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4d0bnmhw7epcyw0te0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4d0bnmhw7epcyw0te0r.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Before discussing more about it, let me tell you the tech stack I have used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js (frontend and backend)&lt;/li&gt;
&lt;li&gt;Radix UI (UI Enhancements)&lt;/li&gt;
&lt;li&gt;FormSpree (for creating free unlimited forms)&lt;/li&gt;
&lt;li&gt;Vercel (hosting)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And boom! These resources were enough to create a good-looking portfolio. Actually, my portfolio DP is the main reason for the clean UI. 🙂 (please say "yes")&lt;/p&gt;

&lt;p&gt;Btw, it's responsive also. BUT I SUGGEST YOU TAKE A LOOK AT A BIGGER SCREEN. Mobile view is not that great. (Of course, I'm NOT going to work on this again)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://rohansrma.vercel.app/" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frohansrma.vercel.app%2Fog-image.jpg" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://rohansrma.vercel.app/" rel="noopener noreferrer" class="c-link"&gt;
            Rohan Sharma - Software Developer, Professional Blog Writer and UI/UX Designer
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Explore the portfolio of Rohan Sharma, featuring cutting-edge software projects, insightful blogs, and creative UI/UX work.
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frohansrma.vercel.app%2Ffavicon.ico"&gt;
          rohansrma.vercel.app
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What makes my PORTFOLIO UNIQUE?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Simple and Professional UI?&lt;/li&gt;
&lt;li&gt;Easy navigation?&lt;/li&gt;
&lt;li&gt;Adding a "not available for work" label on the experience page?&lt;/li&gt;
&lt;li&gt;Printing Resume?&lt;/li&gt;
&lt;li&gt;SECRET BUTTON?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every portfolio is unique, and I can't judge them. (btw, I always judge them, an in-built feature)&lt;/p&gt;

&lt;p&gt;My portfolio is just like any other portfolio. You can judge it as well. I tried to &lt;strong&gt;WRITE AS MANY FAKE THINGS AND SHOWCASE AS MUCH FAKE EXPERTISE AS POSSIBLE&lt;/strong&gt;. What do you mean, everyone does it? 😂&lt;/p&gt;

&lt;p&gt;I don't want to emphasize the &lt;strong&gt;&lt;code&gt;SECRET BUTTON&lt;/code&gt;&lt;/strong&gt; too much, because it is very unprofessional. But those who know me personally must have expected something like this from me. So, I have to live up to their beliefs.&lt;/p&gt;

&lt;p&gt;I, particularly, &lt;strong&gt;WANT YOU TO TAKE A LOOK AT MY PORTFOLIO, EXPLORE EVERY SECTION, AND GIVE ME FEEDBACK&lt;/strong&gt;. (Feedback is important, you know. Doesn't matter whether I will implement them or not. Even if I'm implementing then, I can't say about the timespan. It may take decades.)&lt;/p&gt;

&lt;p&gt;If you're a dark mode lover, I have added dark mode functionality. I was TOTALLY AI-DEPENDENT to implement this feature; I didn't want to spend much time on color changes.&lt;/p&gt;

&lt;p&gt;I mostly used ChatGPT for bug fixes (those F*** hydration errors) and v0 to speed up building and improve the UI.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  The Mystery of SECRET BUTTON
&lt;/h2&gt;

&lt;p&gt;Again, before starting, I want to say, "Please don't go for it. It's just a random off-topic, unprofessional button, and made for fun".&lt;/p&gt;

&lt;p&gt;The secret button, as the name suggests, is very secret and so hidden. There's only one point to access that button. Yes, A SINGLE POINT.&lt;/p&gt;

&lt;p&gt;It can be present anywhere and in any section. If you want to see it, then you have to &lt;strong&gt;FIND IT BY YOURSELF&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;secret page&lt;/code&gt; consists of nothing but my personal space, where I'm trying to change my relationship status from SINGLE TO BEING MINGLE. It's boring as there are long essays and boring paragraphs. &lt;strong&gt;NOT DEVELOPER FRIENDLY&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A little glance of the &lt;code&gt;secret page&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fxd0l6b1591oxmh8etd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fxd0l6b1591oxmh8etd.png" alt="secret page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What?? C'mon, I already said "A LITTLE".&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Jokes Apart, Time to Conclude
&lt;/h2&gt;

&lt;p&gt;Sorry for a short and promotional blog. I just wanted to share this with you guys. &lt;/p&gt;

&lt;p&gt;Upcoming blog, going to be very cool. And you'll like it for sure!&lt;/p&gt;

&lt;p&gt;Sharing again the Portfolio link: &lt;a href="https://rohan-sharma-portfolio.vercel.app/" rel="noopener noreferrer"&gt;https://rohan-sharma-portfolio.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you liked my portfolio, then comment. &lt;/li&gt;
&lt;li&gt;If you found any bugs, then comment.&lt;/li&gt;
&lt;li&gt;If you have any feedback, then comment.&lt;/li&gt;
&lt;li&gt;If you want me to build an MCP (of your choice), then comment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading and your support. You're the best. Keep loving me!&lt;/p&gt;

</description>
      <category>portfolio</category>
      <category>webdev</category>
      <category>vibecoding</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Access Granted!! Here's the recipe behind my AI DMS 🤞</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Sun, 04 May 2025 08:23:49 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/access-granted-heres-the-recipe-behind-my-ai-dms-351b</link>
      <guid>https://dev.to/rohan_sharma/access-granted-heres-the-recipe-behind-my-ai-dms-351b</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/permit_io"&gt;Permit.io Authorization Challenge&lt;/a&gt;: AI Access Control&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Hey there,&lt;br&gt;
Welcome back! This is my &lt;strong&gt;2nd entry&lt;/strong&gt; for the Permit.io Authorization Challenge. (If you want to see the 1st one, here's the link: &lt;a href="https://dev.to/rohan_sharma/access-control-handled-heres-how-i-built-my-dms-212"&gt;https://dev.to/rohan_sharma/access-control-handled-heres-how-i-built-my-dms-212&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This project is not that different from the last one. It's still a document management system, but it now has more powerful features and configurations.&lt;/p&gt;

&lt;p&gt;Welcome to &lt;strong&gt;Radhika's AI DocManager&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubhjtb7j1f8qruwmffep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubhjtb7j1f8qruwmffep.png" alt="logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Radhika's AI DocManager&lt;/strong&gt; is still a DMS, but now it comes with the power of AI, new settings and configurations, a modern UI (best dark mode), and powerful features. Test it on your local machine now!!! 👾&lt;/p&gt;

&lt;p&gt;This project demonstrates how to implement fine-grained authorization for both users and AI agents in a Next.js application using Permit.io. It's a document management system where users can create, view, edit, and delete documents based on their roles and document ownership, and AI agents can assist with document management based on their assigned permissions.&lt;/p&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1️⃣ User Authorization
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Role-Based Access Control (RBAC)&lt;/strong&gt;: Different roles (Admin, Editor, Viewer) have different permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attribute-Based Access Control (ABAC)&lt;/strong&gt;: Document owners have special privileges&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Grained Authorization&lt;/strong&gt;: Using Permit.io to implement complex authorization rules&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  2️⃣ AI Authorization
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Agent Roles&lt;/strong&gt;: Define different AI agent roles with specific capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permission Levels&lt;/strong&gt;: Configure what AI agents can access and modify

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Access&lt;/strong&gt;: AI agent cannot access the resource at all&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read Only&lt;/strong&gt;: AI agent can only read but not modify resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suggest Only&lt;/strong&gt;: AI can suggest changes that require human approval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full Access&lt;/strong&gt;: AI has full access to read and modify resources&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Approval Workflows&lt;/strong&gt;: Require human approval for sensitive AI operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit and Monitoring&lt;/strong&gt;: Track all AI actions and approvals&lt;/li&gt;
&lt;/ul&gt;
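&lt;p&gt;The four permission levels above map naturally onto a small gate function. This is a hedged sketch (the names &lt;code&gt;PermissionLevel&lt;/code&gt; and &lt;code&gt;gateAiAction&lt;/code&gt; are illustrative, not the project's actual identifiers):&lt;/p&gt;

```typescript
// Illustrative gate for AI actions by permission level; names are hypothetical.
type PermissionLevel = "no_access" | "read_only" | "suggest_only" | "full_access";

function gateAiAction(level: PermissionLevel, action: "read" | "suggest" | "write"): boolean {
  if (level === "no_access") return false;                              // blocked entirely
  if (level === "read_only") return action === "read";                  // read, never modify
  if (level === "suggest_only") return action === "read" || action === "suggest"; // suggestions need human approval downstream
  return true; // full_access: read, suggest, and write
}
```

&lt;p&gt;In the suggest-only case, the "write" half of the workflow is what goes through the human-approval queue rather than straight to the document.&lt;/p&gt;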
&lt;h4&gt;
  
  
  3️⃣ Document Intelligence
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Document Analysis&lt;/strong&gt;: AI-powered analysis of document content and structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Summarization&lt;/strong&gt;: Generate concise summaries of documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Improvement&lt;/strong&gt;: AI suggestions for improving document content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/Az8ENPFu4ls"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;

&lt;p&gt;Github Repo: &lt;strong&gt;&lt;a href="https://github.com/RS-labhub/AI-DocManager" rel="noopener noreferrer"&gt;https://github.com/RS-labhub/AI-DocManager&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Documentation: &lt;strong&gt;&lt;a href="https://rs-labhub.github.io/AI-DocManager/" rel="noopener noreferrer"&gt;https://rs-labhub.github.io/AI-DocManager/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;As I said in the last blog, it was quite difficult to create a DMS, or document management system, as there was so much brainstorming behind it.&lt;/p&gt;

&lt;p&gt;Anyway, thanks to &lt;a href="https://www.permit.io/" rel="noopener noreferrer"&gt;Permit.io&lt;/a&gt; for saving a lot of my time while creating policies. It's easy to use, and reason enough to say goodbye to the old methods where we die writing all that code by hand.&lt;/p&gt;

&lt;p&gt;I used Permit.io to achieve these things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role-Based Access Control or RBAC&lt;/li&gt;
&lt;li&gt;Attribute-Based Access Control or ABAC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also implemented the roles the AI should have; both RBAC and ABAC are used here. I used &lt;a href="https://console.groq.com" rel="noopener noreferrer"&gt;GROQ Cloud&lt;/a&gt; for fast LLM inference and its OpenAI compatibility.&lt;/p&gt;

&lt;p&gt;Overall, it was a fun project to build, and I enjoyed every bit of it.&lt;/p&gt;

&lt;p&gt;If you want to see the whole Permit.io implementation, please read the project &lt;a href="https://github.com/RS-labhub/AI-DocManager/blob/main/README.md" rel="noopener noreferrer"&gt;Readme&lt;/a&gt; file!&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Authorization for AI Applications with Permit.io
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ghtyw8i6pe7v38m20d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07ghtyw8i6pe7v38m20d.png" alt="landing page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Authorization Model
&lt;/h3&gt;

&lt;h4&gt;
  
  
  User Authorization
&lt;/h4&gt;

&lt;p&gt;The application implements the following user authorization model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt;: Can create, view, edit, and delete any document, and access the admin panel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editor&lt;/strong&gt;: Can create, view, and edit documents, but can only delete their own documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Viewer&lt;/strong&gt;: Can only view documents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, document owners have full control over their own documents regardless of their role.&lt;/p&gt;
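&lt;p&gt;The role and ownership rules above can be sketched as a single check. This is an illustrative reconstruction (&lt;code&gt;canDelete&lt;/code&gt; and the types are hypothetical names), not the app's actual Permit.io policy:&lt;/p&gt;

```typescript
// Illustrative RBAC + ownership check for the delete action; names are hypothetical.
type Role = "admin" | "editor" | "viewer";

interface Doc {
  ownerId: string;
}

function canDelete(userId: string, role: Role, doc: Doc): boolean {
  if (doc.ownerId === userId) return true; // owners keep full control regardless of role
  return role === "admin";                 // only admins may delete others' documents
}
```

&lt;p&gt;Delete is the interesting action because it's where the roles diverge: view and edit follow the same shape with looser role checks.&lt;/p&gt;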

&lt;h4&gt;
  
  
  AI Authorization
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25tfd48745mi2k6nchhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25tfd48745mi2k6nchhb.png" alt="ai authz"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application implements the following AI authorization model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AI Agent Roles&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Assistant&lt;/strong&gt;: Helps with document organization and basic tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editor&lt;/strong&gt;: Can edit and improve document content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyzer&lt;/strong&gt;: Analyzes document content and provides insights&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;AI Capabilities&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;read_documents&lt;/strong&gt;: Ability to read document content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;suggest_edits&lt;/strong&gt;: Ability to suggest edits to documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;edit_documents&lt;/strong&gt;: Ability to directly edit documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;create_documents&lt;/strong&gt;: Ability to create new documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;delete_documents&lt;/strong&gt;: Ability to delete documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;analyze_content&lt;/strong&gt;: Ability to analyze document content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;summarize_content&lt;/strong&gt;: Ability to summarize documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;translate_content&lt;/strong&gt;: Ability to translate documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;generate_content&lt;/strong&gt;: Ability to generate new content&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Permission Levels&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NO_ACCESS&lt;/strong&gt;: AI agent cannot access the resource at all&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;READ_ONLY&lt;/strong&gt;: AI agent can only read but not modify resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SUGGEST_ONLY&lt;/strong&gt;: AI can suggest changes that require human approval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FULL_ACCESS&lt;/strong&gt;: AI has full access to read and modify resources&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
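&lt;p&gt;The post doesn't spell out which agent role gets which capabilities by default, so here is one illustrative mapping, expressed as data. Treat the exact assignments as an assumption, not the app's configuration:&lt;/p&gt;

```typescript
// Illustrative role-to-capability defaults for the agent roles above.
// The exact assignments are an assumption; only the names come from the post.
const defaultCapabilities: { [role: string]: string[] } = {
  assistant: ["read_documents", "suggest_edits", "summarize_content"],
  editor: ["read_documents", "suggest_edits", "edit_documents", "generate_content"],
  analyzer: ["read_documents", "analyze_content", "summarize_content"],
};

function hasCapability(role: string, capability: string): boolean {
  const caps = defaultCapabilities[role] ?? [];
  return caps.includes(capability);
}
```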

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Details
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AI Authorization Implementation
&lt;/h4&gt;

&lt;p&gt;The application implements AI authorization through several key components:&lt;/p&gt;

&lt;h5&gt;
  
  
  1. AI Agent Management
&lt;/h5&gt;

&lt;p&gt;The &lt;code&gt;AIAgent&lt;/code&gt; interface defines the structure of AI agents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;AIAgent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIAgentRole&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;capabilities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AICapability&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nl"&gt;createdBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;updatedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;isActive&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Administrators can manage AI agents through the admin panel, defining their roles and capabilities.&lt;/p&gt;

&lt;h5&gt;
  
  
  2. Permission Levels
&lt;/h5&gt;

&lt;p&gt;The &lt;code&gt;AIPermissionLevel&lt;/code&gt; enum defines the different levels of access that AI agents can have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;enum&lt;/span&gt; &lt;span class="nx"&gt;AIPermissionLevel&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;NO_ACCESS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;no_access&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;READ_ONLY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;read_only&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;SUGGEST_ONLY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;suggest_only&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;FULL_ACCESS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;full_access&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  3. AI Actions
&lt;/h5&gt;

&lt;p&gt;The &lt;code&gt;AIAction&lt;/code&gt; interface defines the structure of actions that AI agents can perform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;AIAction&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;actionType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;resourceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;resourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIActionStatus&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;requestedAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;completedAt&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;requestedBy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;approvedBy&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;rejectedBy&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;result&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
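&lt;p&gt;For illustration, a pending action matching this shape might look like the following. Every id and value here is made up:&lt;/p&gt;

```typescript
// A hypothetical pending action matching the AIAction shape above;
// all ids and metadata are invented for illustration.
const pendingEdit = {
  id: "action-42",
  agentId: "agent-1",
  actionType: "suggest_edits",
  resourceType: "document",
  resourceId: "doc-7",
  status: "pending",
  requestedAt: new Date().toISOString(),
  requestedBy: "user-3",
  metadata: { reason: "Fix typos in section 2" },
};
```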



&lt;h5&gt;
  
  
  4. Permission Checking
&lt;/h5&gt;

&lt;p&gt;The &lt;code&gt;checkAIPermission&lt;/code&gt; function checks if an AI agent has permission to perform an action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;checkAIPermission&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceId&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;permitted&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;requiresApproval&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;permissionLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AIPermissionLevel&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Implementation details...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
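&lt;p&gt;The body is elided above, so here is one plausible sketch of how the three return fields could fall out of an agent's permission level. The &lt;code&gt;agentLevels&lt;/code&gt; lookup is a hypothetical stand-in for the real agent store:&lt;/p&gt;

```typescript
// Hypothetical in-memory lookup standing in for the real agent store.
const agentLevels: { [agentId: string]: string } = {
  "agent-1": "suggest_only",
  "agent-2": "full_access",
};

// One plausible sketch of the elided body; resourceType and resourceId
// would feed a resource-level check in the real app.
function checkAIPermission(agentId: string, action: string, resourceType: string, resourceId?: string) {
  const level = agentLevels[agentId] ?? "no_access";
  let permitted = true;
  if (level === "no_access") permitted = false;
  if (level === "read_only") {
    // Read-only agents may only read.
    if (action !== "read_documents") permitted = false;
  }
  let requiresApproval = false;
  if (level === "suggest_only") {
    // Suggest-only agents may act, but any non-read action needs human approval.
    if (action !== "read_documents") requiresApproval = true;
  }
  return { permitted, requiresApproval, permissionLevel: level };
}
```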



&lt;h5&gt;
  
  
  5. Approval Workflow
&lt;/h5&gt;

&lt;p&gt;The application implements an approval workflow for AI actions that require human oversight:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;requestAIAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;agentId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;actionType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;documentTitle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;documentContent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;action&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;AIAction&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;message&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Implementation details...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;approveAIAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;actionId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;action&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;AIAction&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;message&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Implementation details...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;rejectAIAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;actionId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;action&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;AIAction&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;message&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Implementation details...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
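&lt;p&gt;With the bodies elided, a minimal in-memory stand-in shows the state machine these functions implement: an action starts as pending, and exactly one approve-or-reject decision can land on it. The real functions are async and persist their state:&lt;/p&gt;

```typescript
// Minimal in-memory stand-in for the approval queue described above.
type Status = "pending" | "approved" | "rejected";

const actionQueue: { [id: string]: { status: Status; decidedBy?: string } } = {};

function requestAction(id: string): void {
  actionQueue[id] = { status: "pending" };
}

function approveAction(id: string, userId: string): boolean {
  const action = actionQueue[id];
  if (!action) return false;
  if (action.status !== "pending") return false; // only one decision per action
  action.status = "approved";
  action.decidedBy = userId;
  return true;
}

function rejectAction(id: string, userId: string): boolean {
  const action = actionQueue[id];
  if (!action) return false;
  if (action.status !== "pending") return false;
  action.status = "rejected";
  action.decidedBy = userId;
  return true;
}
```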



&lt;h4&gt;
  
  
  Integration with Permit.io
&lt;/h4&gt;

&lt;p&gt;The application integrates with Permit.io through the &lt;code&gt;permit.ts&lt;/code&gt; file, which provides functions for checking permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Permit&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;permitio&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize Permit SDK&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;permit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Permit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;pdp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PERMIT_PDP_URL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PERMIT_SDK_TOKEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Check if a user can perform an action on a resource&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;checkPermission&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;resourceAttributes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;permitted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;permit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;check&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;resourceType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;resourceAttributes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;permitted&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Permission check failed:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
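&lt;p&gt;A typical call site would gate a mutation behind this check. The sketch below stubs out &lt;code&gt;checkPermission&lt;/code&gt; with a toy owner-only rule so it runs on its own; the stub's behavior is an assumption, not Permit's:&lt;/p&gt;

```typescript
// Stub standing in for the checkPermission function above, purely so this
// example is self-contained. Its owner-only rule is a toy assumption.
async function checkPermissionStub(userId: string, action: string, resourceType: string, attrs: { [key: string]: string }) {
  return userId === attrs.ownerId;
}

// Hypothetical call site: deny the delete unless the check passes.
async function deleteDocument(userId: string, doc: { id: string; ownerId: string }) {
  const allowed = await checkPermissionStub(userId, "delete", "document", { ownerId: doc.ownerId });
  if (!allowed) throw new Error("Not authorized to delete this document");
  // ...perform the actual delete here
  return true;
}
```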



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how to implement fine-grained authorization for both users and AI agents in a Next.js application using Permit.io. By externalizing authorization, we can create more secure, maintainable, and flexible applications that can safely leverage AI capabilities while maintaining appropriate controls.&lt;/p&gt;

&lt;p&gt;Please try running it locally on your machine and let me know your feedback!&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read this blog. I hope you enjoyed it. Your support means the world to me. Thank youuuuuuuuuuuuuuuuu! ❣️&lt;/p&gt;

</description>
      <category>permitchallenge</category>
      <category>webdev</category>
      <category>security</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>Access Control? Handled. Here's How I Built My DMS 🌠</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Tue, 29 Apr 2025 19:08:04 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/access-control-handled-heres-how-i-built-my-dms-212</link>
      <guid>https://dev.to/rohan_sharma/access-control-handled-heres-how-i-built-my-dms-212</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/permit_io"&gt;Permit.io Authorization Challenge&lt;/a&gt;: Permissions Redefined&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Hi there! 👋&lt;br&gt;
Presenting to you "&lt;strong&gt;Radhika's DocManager&lt;/strong&gt;": A secure Document Management System (or DMS) with fine-grained authorization powered by Permit.io.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbjajnchik2w3vtlmpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbjajnchik2w3vtlmpn.png" alt="logo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the first version of Radhika's DocManager, which lets you create, read, edit, and delete documents based on your role and document ownership.&lt;/p&gt;
&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Role-Based Access Control (RBAC)&lt;/strong&gt;: Different roles (Admin, Editor, Viewer) have different permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attribute-Based Access Control (ABAC)&lt;/strong&gt;: Document owners have special privileges&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Grained Authorization&lt;/strong&gt;: Using Permit.io to implement complex authorization rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Next.js App Router&lt;/strong&gt;: Modern React application with server components and server actions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive UI&lt;/strong&gt;: Using Tailwind CSS and shadcn/ui components&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Authorization Model
&lt;/h3&gt;

&lt;p&gt;The application implements the following authorization model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Admin&lt;/strong&gt;: Can create, view, edit, and delete any document, and access the admin panel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editor&lt;/strong&gt;: Can create, view, and edit documents, but can only delete their own documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Viewer&lt;/strong&gt;: Can only view documents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, document owners have full control over their own documents regardless of their role.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/zWEPYTF0AlQ"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/RS-labhub/Document_Management_System" rel="noopener noreferrer"&gt;https://github.com/RS-labhub/Document_Management_System&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;Building a DMS was hectic work, especially writing the access control. Thanks to Permit, that part became much easier and saved a huge amount of time.&lt;/p&gt;

&lt;p&gt;The main focus of this application is the use of Permit. It also shows how a simple application becomes far more powerful once proper access controls are added.&lt;/p&gt;

&lt;p&gt;Anyway, the project is open-source. If you want to contribute, you're warmly welcome to do so. &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Using Permit.io for Authorization
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxxprkjk8oo9p56drh6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxxprkjk8oo9p56drh6l.png" alt="permit use"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used &lt;strong&gt;Permit&lt;/strong&gt; to achieve two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role-Based Access Control or RBAC&lt;/li&gt;
&lt;li&gt;Attribute-Based Access Control or ABAC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the RBAC policy, 3 roles are created: Admin, Editor, and Viewer. The permissions of each role are shown in the image above.&lt;/p&gt;

&lt;p&gt;In the ABAC policy, access is determined by document attributes and user context.&lt;/p&gt;
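&lt;p&gt;Conceptually, the owner rule behind that ABAC policy boils down to a single attribute comparison. Permit's real condition sets are configured in its dashboard, so the sketch below is only an illustration:&lt;/p&gt;

```typescript
// Sketch of the ABAC owner rule: a user may act on a document they own,
// even when their role alone would not allow the action. Illustrative only;
// Permit's actual condition sets live in its policy dashboard.
function ownerConditionAllows(user: { id: string }, doc: { ownerId: string }): boolean {
  // resource.ownerId equals user.id
  return doc.ownerId === user.id;
}
```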

&lt;p&gt;If you want to see the full Permit implementation, please read the project &lt;a href="https://github.com/RS-labhub/Document_Management_System/blob/main/README.md" rel="noopener noreferrer"&gt;Readme&lt;/a&gt; file!&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A huge thanks to &lt;a class="mentioned-user" href="https://dev.to/jennie_py"&gt;@jennie_py&lt;/a&gt; for their contribution to this project!&lt;/p&gt;

&lt;p&gt;This project demonstrates how to implement fine-grained authorization in a Next.js application using Permit.io. By externalizing authorization, we can create more secure, maintainable, and flexible applications.&lt;/p&gt;

&lt;p&gt;Thank you for reading this so far! Your support means the world to us. ❣️&lt;/p&gt;

</description>
      <category>permitchallenge</category>
      <category>webdev</category>
      <category>security</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>Top 10 Open-Source RAG Frameworks you need!! 🧌</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Wed, 12 Mar 2025 04:31:14 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/top-10-open-source-rag-frameworks-you-need-3fhe</link>
      <guid>https://dev.to/rohan_sharma/top-10-open-source-rag-frameworks-you-need-3fhe</guid>
      <description>&lt;p&gt;The capabilities of Large Language Models (LLMs) are enhanced by &lt;strong&gt;&lt;code&gt;Retrieval-Augmented Generation (RAG)&lt;/code&gt;&lt;/strong&gt;. Thus, RAG comes up with a super powerful technique that distinguishes it from others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;RAG Frameworks&lt;/code&gt;&lt;/strong&gt; are tools and libraries that help developers build AI models that can retrieve relevant information from external sources (like databases or documents) and generate better responses based on that information.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  RAG and its Flowchart 🎴
&lt;/h2&gt;

&lt;p&gt;Imagine you have a big toy box filled with all your favorite toys. But sometimes, when you want to find your favorite teddy bear, it takes a long time because the toys are all mixed up.&lt;/p&gt;

&lt;p&gt;Now, think of RAG (Retrieval-Augmented Generation) as a magical helper. This helper is really smart! When you ask, "&lt;strong&gt;Where is my teddy bear?&lt;/strong&gt;", it quickly looks through the toy box, finds the teddy bear, and gives it to you right away.&lt;/p&gt;

&lt;p&gt;In the same way, when you ask a computer a question, RAG helps it find the right information from a big book before giving you an answer. So instead of just guessing, it finds the best answer from the book and tells you! 😊&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="katex-element"&gt;
  &lt;span class="katex-display"&gt;&lt;span class="katex"&gt;&lt;span class="katex-mathml"&gt;RAG=RetrievalBasedSystem+GenerativeModels
 RAG = RetrievalBasedSystem + GenerativeModels
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Flowchart
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futwcpxp91pbp1ogtan8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futwcpxp91pbp1ogtan8z.png" alt="RAG" width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;
 RAG, oversimplified 






&lt;h3&gt;
  
  
  How RAG Frameworks Work ⚒️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve&lt;/strong&gt; → Search for relevant documents using a vector database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Augment&lt;/strong&gt; → Feed those documents into the LLM as extra context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate&lt;/strong&gt; → The LLM generates an informed response using both retrieved data and its own training knowledge.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;

&lt;p&gt;🔹 &lt;u&gt;Step 1&lt;/u&gt;: User Question&lt;br&gt;
Example: "Who discovered gravity?"&lt;/p&gt;

&lt;p&gt;🔹 &lt;u&gt;Step 2&lt;/u&gt;: Retrieve Relevant Information&lt;br&gt;
Searches a knowledge base (e.g., Wikipedia, company documents)&lt;br&gt;
Finds: "Isaac Newton formulated the law of gravity in 1687."&lt;/p&gt;

&lt;p&gt;🔹 &lt;u&gt;Step 3&lt;/u&gt;: Augment &amp;amp; Generate Answer&lt;br&gt;
The LLM takes the retrieved information + its own knowledge&lt;br&gt;
Generates a complete, well-structured response&lt;/p&gt;

&lt;p&gt;🔹 &lt;u&gt;Step 4&lt;/u&gt;: Final Answer&lt;br&gt;
Example: "Gravity was discovered by Isaac Newton in 1687."&lt;/p&gt;
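&lt;p&gt;The four steps above can be sketched in a few lines of Python. This is a toy illustration only: the keyword-overlap retriever and the &lt;code&gt;generate&lt;/code&gt; stub are stand-ins for a real vector database and a real LLM call.&lt;/p&gt;

```python
import re

# Toy knowledge base standing in for Wikipedia / company documents.
KNOWLEDGE_BASE = [
    "Isaac Newton formulated the law of gravity in 1687.",
    "Albert Einstein published general relativity in 1915.",
    "The Eiffel Tower was completed in 1889.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Step 2 - Retrieve: rank documents by word overlap (a stand-in for vector search)."""
    scored = sorted(docs, key=lambda d: len(tokens(question) & tokens(d)), reverse=True)
    return scored[:top_k]

def augment(question: str, context: list[str]) -> str:
    """Step 3a - Augment: inject the retrieved passages into the prompt as extra context."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"

def generate(prompt: str) -> str:
    """Step 3b - Generate: a real system would call an LLM here; this stub echoes the context."""
    return prompt.split("Context:\n")[1].split("\n\nQuestion:")[0]

question = "Who discovered gravity?"
answer = generate(augment(question, retrieve(question, KNOWLEDGE_BASE)))
print(answer)  # the Newton passage is the best keyword match
```

&lt;p&gt;In a production pipeline, &lt;code&gt;retrieve&lt;/code&gt; would query a vector store (Pinecone, FAISS, ChromaDB, etc.) and &lt;code&gt;generate&lt;/code&gt; would call an LLM with the augmented prompt.&lt;/p&gt;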



&lt;p&gt;I hope the RAG concept is clearer now. In this blog, we'll walk through the top 10 open-source RAG frameworks that can help you boost your project or enterprise.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Top 10 Open-Source RAG Frameworks you need!! 📃
&lt;/h2&gt;

&lt;p&gt;Here's a curated list of popular, widely used RAG frameworks you won't want to miss:&lt;/p&gt;
&lt;h3&gt;
  
  
  1️⃣ &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;LLMWare.ai&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;11K GitHub Stars, 1.8K Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vulsrnemm0xvm039ze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4vulsrnemm0xvm039ze.png" alt="llmware" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;
 LLMWare.ai: &lt;a href="https://llmware.ai"&gt;https://llmware.ai&lt;/a&gt; 



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;LLMWare provides a unified framework for building LLM-based applications (e.g., RAG, Agents), using small, specialized models that can be deployed privately, integrated with enterprise knowledge sources safely and securely, and cost-effectively tuned and adapted for any business process.&lt;/p&gt;
&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAG support&lt;/strong&gt; for enterprise-level AI apps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM Orchestration&lt;/strong&gt; – Connects multiple LLMs (OpenAI, Anthropic, Google, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Processing &amp;amp; Embeddings&lt;/strong&gt; – Enables structured AI-driven search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Database Integration&lt;/strong&gt; – Works with Pinecone, ChromaDB, Weaviate, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Fine-Tuning&lt;/strong&gt; – Train models on private datasets.&lt;/li&gt;
&lt;/ul&gt;
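&lt;p&gt;To make the "Document Processing &amp;amp; Embeddings" idea concrete, here's a minimal chunk → embed → search sketch. It is &lt;em&gt;not&lt;/em&gt; the LLMWare API; the hashing "embeddings" are a deterministic stand-in for a real embedding model.&lt;/p&gt;

```python
import hashlib
import math

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str, dim: int = 64) -> list[float]:
    """Hash each word into a bucket: a cheap, deterministic stand-in for an embedding model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # vectors are already normalized

# "Vector store": every chunk paired with its embedding.
doc = "LLMWare connects enterprise documents to small specialized models for private RAG."
store = [(c, embed(c)) for c in chunk(doc)]

def search(query: str) -> str:
    """Return the chunk whose embedding is closest to the query embedding."""
    q = embed(query)
    return max(store, key=lambda item: cosine(q, item[1]))[0]

hit = search("private specialized models")
```

&lt;p&gt;In a real deployment the store would live in Pinecone, ChromaDB, or Weaviate, and the embedding function would be a trained model.&lt;/p&gt;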
&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Chatbots &amp;amp; virtual assistants&lt;/li&gt;
&lt;li&gt;AI-driven search and retrieval&lt;/li&gt;
&lt;li&gt;Summarization &amp;amp; text analysis&lt;/li&gt;
&lt;li&gt;Enterprise knowledge management&lt;/li&gt;
&lt;li&gt;Financial Analysis&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Why LLMWare.ai?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Faster AI development with pre-built tools&lt;/li&gt;
&lt;li&gt;Scalable &amp;amp; flexible for enterprise applications&lt;/li&gt;
&lt;li&gt;Open-source &amp;amp; extensible&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/llmware-ai" rel="noopener noreferrer"&gt;
        llmware-ai
      &lt;/a&gt; / &lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;
        llmware
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Unified framework for building enterprise RAG pipelines with small, specialized models
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;llmware&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/eeedbfa60b5c3c18653d3308d1475c871d463bbe31f49b2eb46b1cd0404d85bf/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f707974686f6e2d332e31305f2537435f332e31312537435f332e31322537435f332e31332537435f332e31342d626c75653f636f6c6f723d626c7565"&gt;&lt;img src="https://camo.githubusercontent.com/eeedbfa60b5c3c18653d3308d1475c871d463bbe31f49b2eb46b1cd0404d85bf/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f707974686f6e2d332e31305f2537435f332e31312537435f332e31322537435f332e31332537435f332e31342d626c75653f636f6c6f723d626c7565" alt="Static Badge"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/fa97e41f3c599e720fe4a40f40a409d7cf2b88064e7ba6a1dd99d666533ae95b/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f6c6c6d776172653f636f6c6f723d626c7565"&gt;&lt;img src="https://camo.githubusercontent.com/fa97e41f3c599e720fe4a40f40a409d7cf2b88064e7ba6a1dd99d666533ae95b/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f6c6c6d776172653f636f6c6f723d626c7565" alt="PyPI - Version"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/bphreFK4NJ" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/41bc15d88e589622e8f332a0beeeb459df90fa7d0c7a4896a084deda6a2a0192/68747470733a2f2f646973636f72642d6c6976652d6d656d626572732d636f756e742d62616467652e76657263656c2e6170702f6170692f646973636f72642d6d656d626572733f6775696c6449643d31313739323435363432373730353539303637266c6162656c3d646973636f72642532306d656d6265727326636f6c6f723d353836354632" alt="members"&gt;&lt;/a&gt;
&lt;a href="https://github.com/llmware-ai/llmware/actions/workflows/pages.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/llmware-ai/llmware/actions/workflows/pages.yml/badge.svg" alt="Documentation"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🧰🛠️ Unified framework for building knowledge-based local, private, secure LLM-based applications&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;&lt;code&gt;llmware&lt;/code&gt; is optimized for AI PC and local laptop, edge and self-hosted deployment across a wide range of Windows, Mac and Linux platforms, with support for GGUF, OpenVINO, ONNXRuntime, ONNXRuntime-QNN (Qualcomm), WindowsLocalFoundry, and Pytorch, providing a high-level interface that makes it easy to leverage the right inferencing technology optimized for the target platform.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;llmware&lt;/code&gt; has two main components:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Model catalog with 300+ models&lt;/strong&gt; - models prepackaged in quantized, optimized formats, to leverage on device GPU and NPU capabilities, with support for major open source model families and 50+ llmware finetuned SLIM, Bling, Dragon and Industry-Bert models specialized for key tasks in enterprise process automation.  Also supports leading cloud models from OpenAI, Anthropic and Google.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;RAG Pipeline&lt;/strong&gt; - integrated components for the full lifecycle of connecting knowledge sources to generative AI models with wide range of document parsing and…&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https:/github.com/llmware-ai/llmware" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star LLMWare on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.gg/bphreFK4NJ" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;LLMWare Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  2️⃣ &lt;a href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;LlamaIndex (Formerly GPT Index)&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;39.8K GitHub Stars, 5.7K Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6idwt7yjxzlfqenu4gz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6idwt7yjxzlfqenu4gz.png" alt="llamaIndex" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;
 LlamaIndex: &lt;a href="https://www.llamaindex.ai"&gt;https://www.llamaindex.ai&lt;/a&gt; 



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;LlamaIndex (GPT Index) is a data framework for your LLM application. Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins).&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indexing &amp;amp; Retrieval&lt;/strong&gt; – Organizes data efficiently for fast lookups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular Pipelines&lt;/strong&gt; – Customizable components for RAG workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Data Sources&lt;/strong&gt; – Supports PDFs, SQL, APIs, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Store Integrations&lt;/strong&gt; – Works with Pinecone, FAISS, ChromaDB.&lt;/li&gt;
&lt;/ul&gt;
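&lt;p&gt;The "Indexing &amp;amp; Retrieval" idea is easy to see with a toy inverted index: do the organizing work once up front so each lookup becomes a cheap dictionary access. (LlamaIndex indexes embeddings rather than raw keywords, so treat this purely as an illustration.)&lt;/p&gt;

```python
from collections import defaultdict

docs = {
    "doc1": "LlamaIndex loads PDFs, SQL rows and API responses into one index.",
    "doc2": "Vector stores such as FAISS or ChromaDB hold the embeddings.",
}

# Indexing pass: map every token to the set of documents containing it.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().replace(",", " ").replace(".", " ").split():
        inverted[token].add(doc_id)

def lookup(term: str) -> set[str]:
    """Retrieval pass: a single dictionary lookup instead of a full scan."""
    return inverted.get(term.lower(), set())

print(lookup("faiss"))  # {'doc2'}
```

&lt;p&gt;The same build-once, query-many pattern is what makes indexed retrieval fast at scale.&lt;/p&gt;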

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered search engines&lt;/li&gt;
&lt;li&gt;Knowledge retrieval for chatbots&lt;/li&gt;
&lt;li&gt;Code and document understanding&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why LlamaIndex?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Easy to integrate with OpenAI, LangChain, etc.&lt;/li&gt;
&lt;li&gt;Highly flexible &amp;amp; modular for different AI tasks.&lt;/li&gt;
&lt;li&gt;Supports structured &amp;amp; unstructured data.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/run-llama" rel="noopener noreferrer"&gt;
        run-llama
      &lt;/a&gt; / &lt;a href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;
        llama_index
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      LlamaIndex is the leading document agent and OCR platform
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🗂️ LlamaIndex 🦙&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://pypi.org/project/llama-index/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a67de1ee2a66530a0f1b412e06afd6b34174fa691bf417846172cfa5e4ab7485/68747470733a2f2f696d672e736869656c64732e696f2f707970692f646d2f6c6c616d612d696e646578" alt="PyPI - Downloads"&gt;&lt;/a&gt;
&lt;a href="https://github.com/run-llama/llama_index/actions/workflows/build_package.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/run-llama/llama_index/actions/workflows/build_package.yml/badge.svg" alt="Build"&gt;&lt;/a&gt;
&lt;a href="https://github.com/jerryjliu/llama_index/graphs/contributors" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/6fef0b10b05e9e43f79fc04213066b9866b1f9911cb17322d628f5852d788a45/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f636f6e7472696275746f72732f6a657272796a6c69752f6c6c616d615f696e646578" alt="GitHub contributors"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/dGcwcsnxhU" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/dcf4077e2557ac3b54bad76685ab021fcf5cfb62042f46a588866f9c603572e4/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f31303539313939323137343936373732363838" alt="Discord"&gt;&lt;/a&gt;
&lt;a href="https://x.com/llama_index" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/5879f7d796f75b7f7200aeae80a9ceae06de1595dde914c0b3c19dcfaef156bf/68747470733a2f2f696d672e736869656c64732e696f2f747769747465722f666f6c6c6f772f6c6c616d615f696e646578" alt="Twitter"&gt;&lt;/a&gt;
&lt;a href="https://www.reddit.com/r/LlamaIndex/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fe43c5534faae8a6c77e77259f119efe8fd00b93132b7acf666b8d2503c51eb0/68747470733a2f2f696d672e736869656c64732e696f2f7265646469742f7375627265646469742d73756273637269626572732f4c6c616d61496e6465783f7374796c653d706c6173746963266c6f676f3d726564646974266c6162656c3d722532464c6c616d61496e646578266c6162656c436f6c6f723d7768697465" alt="Reddit"&gt;&lt;/a&gt;
&lt;a href="https://www.phorm.ai/query?projectId=c5863b56-6703-4a5d-87b6-7e6031bf16b6" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/8e0d9f765d03e0894bb813b8e93d7524933d373ccc0241000d5c0a7922009223/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f50686f726d2d41736b5f41492d2532334632373737412e7376673f266c6f676f3d646174613a696d6167652f7376672b786d6c3b6261736536342c50484e325a79423361575230614430694e534967614756705a326830505349304969426d6157787350534a756232356c4969423462577875637a30696148523063446f764c336433647935334d793576636d63764d6a41774d43397a646d636950676f67494478775958526f49475139496b30304c6a517a494445754f446779595445754e4451674d5334304e434177494441674d5330754d446b344c6a51794e6d4d744c6a41314c6a45794d7930754d5445314c6a497a4c5334784f5449754d7a49794c5334774e7a55754d446b744c6a45324c6a45324e5330754d6a55314c6a49794e6d45784c6a4d314d7941784c6a4d314d794177494441674d5330754e546b314c6a49784d6d4d744c6a41354f5334774d5449744c6a45354d6934774d5451744c6a49334f5334774d445a734c5445754e546b7a4c5334784e4859744c6a51774e6d67784c6a59314f474d754d446b754d4441784c6a45334c5334784e6a6b754d6a51324c5334784f5446684c6a59774d7934324d444d674d434177494441674c6a49744c6a45774e6934314d6a6b754e544935494441674d434177494334784d7a67744c6a45334c6a59314e4334324e5451674d434177494441674c6a41324e5330754d6a52734c6a41794f4330754d7a4a684c6a6b7a4c6a6b7a494441674d4341774c5334774d7a59744c6a49304f5334314e6a63754e545933494441674d4341774c5334784d444d744c6a49754e5441794c6a55774d694177494441674d4330754d5459344c5334784d7a67754e6a41344c6a59774f434177494441674d4330754d6a51744c6a41324e3077794c6a517a4e7934334d6a6b674d5334324d6a55754e6a63785953347a4d6a49754d7a4979494441674d4341774c5334794d7a49754d4455344c6a4d334e53347a4e7a55674d434177494441744c6a45784e6934794d7a4a734c5334784d5459674d5334304e5330754d4455344c6a59354e7930754d4455344c6a63314e4577754e7a4131494452734c53347a4e5463744c6a41334f5577754e6a41794c6a6b774e6b4d754e6a45334c6a63794e6934324e6a4d754e5463304c6a637a4f5334304e5452684c
6a6b314f4334354e5467674d434177494445674c6a49334e4330754d6a67314c6a6b334d5334354e7a45674d434177494445674c6a4d7a4e7930754d54526a4c6a45784f5330754d4449324c6a49794e7930754d444d304c6a4d794e5330754d44493254444d754d6a4d794c6a4532597934784e546b754d4445304c6a4d7a4e6934774d7934304e546b754d446779595445754d54637a494445754d54637a494441674d434178494334314e4455754e445133597934774e6934774f5451754d5441354c6a45354d6934784e4451754d6a6b7a595445754d7a6b79494445754d7a6b79494441674d434178494334774e7a67754e5468734c5334774d6a6b754d7a4a614969426d615778735053496a526a49334e7a64424969382b4369416750484268644767675a443069545451754d446779494449754d444133595445754e445531494445754e445531494441674d4341784c5334774f5467754e444933597930754d4455754d5449304c5334784d5451754d6a4d794c5334784f5449754d7a4930595445754d544d674d5334784d794177494441674d5330754d6a55304c6a49794e7941784c6a4d314d7941784c6a4d314d794177494441674d5330754e546b314c6a49784e474d744c6a45754d4445794c5334784f544d754d4445304c5334794f4334774d445a734c5445754e5459744c6a45774f4334774d7a51744c6a51774e6934774d7930754d7a5134494445754e5455354c6a45314e474d754d446b674d4341754d54637a4c5334774d5334794e4467744c6a417a4d3245754e6a417a4c6a59774d794177494441674d4341754d6930754d5441324c6a557a4d6934314d7a49674d434177494441674c6a457a4f5330754d5463794c6a59324c6a5932494441674d434177494334774e6a51744c6a49304d5777754d4449354c53347a4d6a46684c6a6b304c6a6b30494441674d4341774c5334774d7a59744c6a49314c6a55334c6a5533494441674d4341774c5334784d444d744c6a49774d6934314d4449754e544179494441674d4341774c5334784e6a67744c6a457a4f4334324d4455754e6a4131494441674d4341774c5334794e4330754d445933544445754d6a637a4c6a67794e324d744c6a41354e4330754d4441344c5334784e6a67754d4445744c6a49794d5334774e5455744c6a41314d7934774e4455744c6a41344e4334784d5451744c6a41354d6934794d445a4d4c6a63774e534130494441674d7934354d7a68734c6a49314e5330794c6a6b784d5545784c6a4178494445754d4445674d434177494445674c6a4d354d7934314e7a49754f5459794c6a6b324d694177494441674d5341754e6a59324c6a49344e6d45754f5463754f5463674d43417749
4445674c6a4d7a4f4330754d5452444d5334784d6a49754d5449674d5334794d7934784d5341784c6a4d794f4334784d546c734d5334314f544d754d54526a4c6a45324c6a41784e43347a4c6a41304e7934304d6a4d754d5745784c6a4533494445754d5463674d434177494445674c6a55304e5334304e44686a4c6a41324d5334774f5455754d5441354c6a45354d7934784e4451754d6a6b31595445754e444132494445754e444132494441674d434178494334774e7a63754e54677a624330754d4449344c6a4d794d6c6f6949475a7062477739496e646f6158526c4969382b4369416750484268644767675a443069545451754d446779494449754d444133595445754e445531494445754e445531494441674d4341784c5334774f5467754e444933597930754d4455754d5449304c5334784d5451754d6a4d794c5334784f5449754d7a4930595445754d544d674d5334784d794177494441674d5330754d6a55304c6a49794e7941784c6a4d314d7941784c6a4d314d794177494441674d5330754e546b314c6a49784e474d744c6a45754d4445794c5334784f544d754d4445304c5334794f4334774d445a734c5445754e5459744c6a45774f4334774d7a51744c6a51774e6934774d7930754d7a5134494445754e5455354c6a45314e474d754d446b674d4341754d54637a4c5334774d5334794e4467744c6a417a4d3245754e6a417a4c6a59774d794177494441674d4341754d6930754d5441324c6a557a4d6934314d7a49674d434177494441674c6a457a4f5330754d5463794c6a59324c6a5932494441674d434177494334774e6a51744c6a49304d5777754d4449354c53347a4d6a46684c6a6b304c6a6b30494441674d4341774c5334774d7a59744c6a49314c6a55334c6a5533494441674d4341774c5334784d444d744c6a49774d6934314d4449754e544179494441674d4341774c5334784e6a67744c6a457a4f4334324d4455754e6a4131494441674d4341774c5334794e4330754d445933544445754d6a637a4c6a67794e324d744c6a41354e4330754d4441344c5334784e6a67754d4445744c6a49794d5334774e5455744c6a41314d7934774e4455744c6a41344e4334784d5451744c6a41354d6934794d445a4d4c6a63774e534130494441674d7934354d7a68734c6a49314e5330794c6a6b784d5545784c6a4178494445754d4445674d434177494445674c6a4d354d7934314e7a49754f5459794c6a6b324d694177494441674d5341754e6a59324c6a49344e6d45754f5463754f5463674d434177494445674c6a4d7a4f4330754d5452444d5334784d6a49754d5449674d5334794d7934784d5341784c6a4d794f4334784d546c734d5334314f
544d754d54526a4c6a45324c6a41784e43347a4c6a41304e7934304d6a4d754d5745784c6a4533494445754d5463674d434177494445674c6a55304e5334304e44686a4c6a41324d5334774f5455754d5441354c6a45354d7934784e4451754d6a6b31595445754e444132494445754e444132494441674d434178494334774e7a63754e54677a624330754d4449344c6a4d794d6c6f6949475a7062477739496e646f6158526c4969382b436a777663335a6e50676f3d" alt="Ask AI"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;LlamaIndex OSS (by &lt;a href="https://llamaindex.ai" rel="nofollow noopener noreferrer"&gt;LlamaIndex&lt;/a&gt;) is an open-source framework to build agentic applications. &lt;strong&gt;&lt;a href="https://cloud.llamaindex.ai" rel="nofollow noopener noreferrer"&gt;Parse&lt;/a&gt;&lt;/strong&gt; is our enterprise platform for agentic OCR, parsing, extraction, indexing and more. You can use LlamaParse with this framework or on its own; see &lt;a href="https://github.com/run-llama/llama_index#llamacloud-document-agent-platform" rel="noopener noreferrer"&gt;LlamaParse&lt;/a&gt; below for signup and product links.&lt;/p&gt;
&lt;blockquote&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;📚 &lt;strong&gt;Documentation:&lt;/strong&gt;
&lt;/h3&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/cloud/llamaparse/" rel="nofollow noopener noreferrer"&gt;LlamaParse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/framework/" rel="nofollow noopener noreferrer"&gt;LlamaIndex OSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/llamaagents/overview/" rel="nofollow noopener noreferrer"&gt;LlamaAgents&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in
Python:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Starter&lt;/strong&gt;: &lt;a href="https://pypi.org/project/llama-index/" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;llama-index&lt;/code&gt;&lt;/a&gt;. A starter Python package that includes core LlamaIndex as well as a selection of integrations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customized&lt;/strong&gt;: &lt;a href="https://pypi.org/project/llama-index-core/" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;llama-index-core&lt;/code&gt;&lt;/a&gt;. Install core LlamaIndex and add your chosen LlamaIndex integration packages on &lt;a href="https://llamahub.ai/" rel="nofollow noopener noreferrer"&gt;LlamaHub&lt;/a&gt;
that are required for your application. There are over 300 LlamaIndex integration
packages that work seamlessly with core, allowing you to build with your preferred
LLM, embedding, and vector store providers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The LlamaIndex…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/run-llama/llama_index" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star LlamaIndex on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/eN6D2HQ4aX" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;LlamaIndex Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  3️⃣ &lt;a href="https://github.com/deepset-ai/haystack" rel="noopener noreferrer"&gt;Haystack (by deepset AI)&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;19.7K GitHub Stars, 2.1K Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld3rh5gulc2s2kjsj7gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld3rh5gulc2s2kjsj7gl.png" alt="Haystack" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;
 Haystack: &lt;a href="https://haystack.deepset.ai"&gt;https://haystack.deepset.ai&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Haystack is an end-to-end LLM framework that allows you to build applications powered by LLMs, Transformer models, vector search and more. Whether you want to perform retrieval-augmented generation (RAG), document search, question answering or answer generation, Haystack can orchestrate state-of-the-art embedding models and LLMs into pipelines to build end-to-end NLP applications and solve your use case.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval &amp;amp; Augmentation&lt;/strong&gt; – Combines document search with LLMs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Search&lt;/strong&gt; – Uses BM25, Dense Vectors, and Neural Retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-built Pipelines&lt;/strong&gt; – Modular approach for rapid development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Support&lt;/strong&gt; – Works with Elasticsearch, OpenSearch, FAISS.&lt;/li&gt;
&lt;/ul&gt;
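&lt;p&gt;Hybrid search is the interesting feature here, so here's a toy sketch of the idea: blend a sparse (BM25-style keyword) score with a dense (embedding similarity) score. The two-dimensional "embeddings" below are hand-made assumptions, not the output of a real model.&lt;/p&gt;

```python
import math

docs = [
    "Haystack builds question answering pipelines.",
    "BM25 is a classic sparse ranking function.",
]
# Pretend dense embeddings (2-d), one per document, plus one for the query.
doc_vecs = [(0.9, 0.1), (0.2, 0.8)]
query = "sparse ranking with BM25"
query_vec = (0.3, 0.7)

def sparse_score(query: str, doc: str) -> float:
    """Keyword overlap, a crude stand-in for BM25."""
    q = set(query.lower().split())
    d = set(doc.lower().rstrip(".").split())
    return len(q & d) / max(len(q), 1)

def dense_score(a, b) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid(alpha: float = 0.5) -> int:
    """Blend both signals and return the index of the best document."""
    scores = [
        alpha * sparse_score(query, doc) + (1 - alpha) * dense_score(query_vec, vec)
        for doc, vec in zip(docs, doc_vecs)
    ]
    return max(range(len(docs)), key=scores.__getitem__)

print(docs[hybrid()])  # the BM25 document wins on both signals
```

&lt;p&gt;The &lt;code&gt;alpha&lt;/code&gt; weight controls the sparse/dense trade-off; real pipelines tune a similar balance when combining retrievers.&lt;/p&gt;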

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered document Q&amp;amp;A&lt;/li&gt;
&lt;li&gt;Context-aware virtual assistants&lt;/li&gt;
&lt;li&gt;Scalable enterprise search&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Haystack?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Optimized for production RAG applications.&lt;/li&gt;
&lt;li&gt;Supports various retrievers &amp;amp; LLMs for flexibility.&lt;/li&gt;
&lt;li&gt;Strong enterprise adoption &amp;amp; community.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/run-llama" rel="noopener noreferrer"&gt;
        run-llama
      &lt;/a&gt; / &lt;a href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;
        llama_index
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      LlamaIndex is the leading document agent and OCR platform
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🗂️ LlamaIndex 🦙&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://pypi.org/project/llama-index/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a67de1ee2a66530a0f1b412e06afd6b34174fa691bf417846172cfa5e4ab7485/68747470733a2f2f696d672e736869656c64732e696f2f707970692f646d2f6c6c616d612d696e646578" alt="PyPI - Downloads"&gt;&lt;/a&gt;
&lt;a href="https://github.com/run-llama/llama_index/actions/workflows/build_package.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/run-llama/llama_index/actions/workflows/build_package.yml/badge.svg" alt="Build"&gt;&lt;/a&gt;
&lt;a href="https://github.com/jerryjliu/llama_index/graphs/contributors" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/6fef0b10b05e9e43f79fc04213066b9866b1f9911cb17322d628f5852d788a45/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f636f6e7472696275746f72732f6a657272796a6c69752f6c6c616d615f696e646578" alt="GitHub contributors"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/dGcwcsnxhU" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/dcf4077e2557ac3b54bad76685ab021fcf5cfb62042f46a588866f9c603572e4/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f31303539313939323137343936373732363838" alt="Discord"&gt;&lt;/a&gt;
&lt;a href="https://x.com/llama_index" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/5879f7d796f75b7f7200aeae80a9ceae06de1595dde914c0b3c19dcfaef156bf/68747470733a2f2f696d672e736869656c64732e696f2f747769747465722f666f6c6c6f772f6c6c616d615f696e646578" alt="Twitter"&gt;&lt;/a&gt;
&lt;a href="https://www.reddit.com/r/LlamaIndex/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fe43c5534faae8a6c77e77259f119efe8fd00b93132b7acf666b8d2503c51eb0/68747470733a2f2f696d672e736869656c64732e696f2f7265646469742f7375627265646469742d73756273637269626572732f4c6c616d61496e6465783f7374796c653d706c6173746963266c6f676f3d726564646974266c6162656c3d722532464c6c616d61496e646578266c6162656c436f6c6f723d7768697465" alt="Reddit"&gt;&lt;/a&gt;
&lt;a href="https://www.phorm.ai/query?projectId=c5863b56-6703-4a5d-87b6-7e6031bf16b6" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/8e0d9f765d03e0894bb813b8e93d7524933d373ccc0241000d5c0a7922009223/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f50686f726d2d41736b5f41492d2532334632373737412e7376673f266c6f676f3d646174613a696d6167652f7376672b786d6c3b6261736536342c50484e325a79423361575230614430694e534967614756705a326830505349304969426d6157787350534a756232356c4969423462577875637a30696148523063446f764c336433647935334d793576636d63764d6a41774d43397a646d636950676f67494478775958526f49475139496b30304c6a517a494445754f446779595445754e4451674d5334304e434177494441674d5330754d446b344c6a51794e6d4d744c6a41314c6a45794d7930754d5445314c6a497a4c5334784f5449754d7a49794c5334774e7a55754d446b744c6a45324c6a45324e5330754d6a55314c6a49794e6d45784c6a4d314d7941784c6a4d314d794177494441674d5330754e546b314c6a49784d6d4d744c6a41354f5334774d5449744c6a45354d6934774d5451744c6a49334f5334774d445a734c5445754e546b7a4c5334784e4859744c6a51774e6d67784c6a59314f474d754d446b754d4441784c6a45334c5334784e6a6b754d6a51324c5334784f5446684c6a59774d7934324d444d674d434177494441674c6a49744c6a45774e6934314d6a6b754e544935494441674d434177494334784d7a67744c6a45334c6a59314e4334324e5451674d434177494441674c6a41324e5330754d6a52734c6a41794f4330754d7a4a684c6a6b7a4c6a6b7a494441674d4341774c5334774d7a59744c6a49304f5334314e6a63754e545933494441674d4341774c5334784d444d744c6a49754e5441794c6a55774d694177494441674d4330754d5459344c5334784d7a67754e6a41344c6a59774f434177494441674d4330754d6a51744c6a41324e3077794c6a517a4e7934334d6a6b674d5334324d6a55754e6a63785953347a4d6a49754d7a4979494441674d4341774c5334794d7a49754d4455344c6a4d334e53347a4e7a55674d434177494441744c6a45784e6934794d7a4a734c5334784d5459674d5334304e5330754d4455344c6a59354e7930754d4455344c6a63314e4577754e7a4131494452734c53347a4e5463744c6a41334f5577754e6a41794c6a6b774e6b4d754e6a45334c6a63794e6934324e6a4d754e5463304c6a637a4f5334304e5452684c
" alt="Ask AI"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;LlamaIndex OSS (by &lt;a href="https://llamaindex.ai" rel="nofollow noopener noreferrer"&gt;LlamaIndex&lt;/a&gt;) is an open-source framework to build agentic applications. &lt;strong&gt;&lt;a href="https://cloud.llamaindex.ai" rel="nofollow noopener noreferrer"&gt;Parse&lt;/a&gt;&lt;/strong&gt; is our enterprise platform for agentic OCR, parsing, extraction, indexing and more. You can use LlamaParse with this framework or on its own; see &lt;a href="https://github.com/run-llama/llama_index#llamacloud-document-agent-platform" rel="noopener noreferrer"&gt;LlamaParse&lt;/a&gt; below for signup and product links.&lt;/p&gt;
&lt;blockquote&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;📚 &lt;strong&gt;Documentation:&lt;/strong&gt;
&lt;/h3&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/cloud/llamaparse/" rel="nofollow noopener noreferrer"&gt;LlamaParse&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/framework/" rel="nofollow noopener noreferrer"&gt;LlamaIndex OSS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.llamaindex.ai/python/llamaagents/overview/" rel="nofollow noopener noreferrer"&gt;LlamaAgents&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins). There are two ways to start building with LlamaIndex in
Python:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Starter&lt;/strong&gt;: &lt;a href="https://pypi.org/project/llama-index/" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;llama-index&lt;/code&gt;&lt;/a&gt;. A starter Python package that includes core LlamaIndex as well as a selection of integrations.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customized&lt;/strong&gt;: &lt;a href="https://pypi.org/project/llama-index-core/" rel="nofollow noopener noreferrer"&gt;&lt;code&gt;llama-index-core&lt;/code&gt;&lt;/a&gt;. Install core LlamaIndex, then add the integration packages from &lt;a href="https://llamahub.ai/" rel="nofollow noopener noreferrer"&gt;LlamaHub&lt;/a&gt; that your application requires. There are over 300 LlamaIndex integration packages that work seamlessly with core, letting you build with your preferred LLM, embedding, and vector store providers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The LlamaIndex…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
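&lt;p&gt;In practice, the two install routes described in the README above look like this (the integration package names on the last line are examples of what's available on LlamaHub, not a fixed list):&lt;/p&gt;

```shell
# Route 1 -- starter bundle: core plus a default selection of integrations
pip install llama-index

# Route 2 -- core only, then hand-pick your integrations (example packages)
pip install llama-index-core
pip install llama-index-llms-openai llama-index-embeddings-huggingface
```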


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/deepset-ai/haystack" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Haystack on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/xYvH6drSmA" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Haystack Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  4️⃣ &lt;a href="https://github.com/jina-ai" rel="noopener noreferrer"&gt;Jina AI&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;21.4K GitHub Stars, 2.2K Forks (jina-ai/serve)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5q8jjn0eesr87os6wlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5q8jjn0eesr87os6wlk.png" alt="jinaAi" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;
 Jina AI: &lt;a href="https://jina.ai/"&gt;https://jina.ai/&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Jina AI is an open-source MLOps and AI framework designed for neural search, generative AI, and multimodal applications. It enables developers to build scalable AI-powered search systems, chatbots, and RAG (Retrieval-Augmented Generation) applications efficiently.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Neural Search&lt;/strong&gt; – Uses deep learning for document retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-modal Data Support&lt;/strong&gt; – Works with text, images, audio.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Database Integration&lt;/strong&gt; – Built-in support for Jina Embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud &amp;amp; On-Premise Support&lt;/strong&gt; – Easily deployable on Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  🔹 Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered semantic search&lt;/li&gt;
&lt;li&gt;Multi-modal search applications&lt;/li&gt;
&lt;li&gt;Video, image, and text retrieval&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Jina AI?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Fast &amp;amp; scalable for AI-driven search.&lt;/li&gt;
&lt;li&gt;Supports multiple LLMs &amp;amp; vector stores.&lt;/li&gt;
&lt;li&gt;Well-suited for both startups &amp;amp; enterprises.&lt;/li&gt;
&lt;/ul&gt;
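&lt;p&gt;To see what ranking by embedding similarity means without any framework, here is a toy sketch: a bag-of-words count vector stands in for a learned embedding, and cosine similarity does the ranking. (Real neural search, e.g. with Jina Embeddings, uses dense vectors from a trained model; this is only the shape of the idea.)&lt;/p&gt;

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real system would
    # call a trained embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "jina builds multimodal ai applications",
    "kubernetes deploys containers at scale",
    "neural search ranks documents by embedding similarity",
]

def search(query: str, corpus: list[str]) -> list[str]:
    # Rank the corpus by similarity to the query, best match first.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)

print(search("neural embedding search", docs)[0])
```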


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/jina-ai" rel="noopener noreferrer"&gt;
        jina-ai
      &lt;/a&gt; / &lt;a href="https://github.com/jina-ai/serve" rel="noopener noreferrer"&gt;
        serve
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      ☁️ Build multimodal AI applications with cloud-native stack
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Jina-Serve&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://pypi.org/project/jina/" rel="nofollow noopener noreferrer"&gt;&lt;img alt="PyPI" src="https://camo.githubusercontent.com/956af16967f3be5a3e34826b6742b0162362a729564c6e13c3202543c141dbbc/68747470733a2f2f696d672e736869656c64732e696f2f707970692f762f6a696e613f6c6162656c3d52656c65617365267374796c653d666c61742d737175617265"&gt;&lt;/a&gt;
&lt;a href="https://discord.jina.ai" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3a6f12b40fd75f6a82e93699dd723ca0022a33ab1ed4812974a7d2087e3da65f/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f313130363534323232303131323330323133303f6c6f676f3d646973636f7264266c6f676f436f6c6f723d7768697465267374796c653d666c61742d737175617265"&gt;&lt;/a&gt;
&lt;a href="https://pypistats.org/packages/jina" rel="nofollow noopener noreferrer"&gt;&lt;img alt="PyPI - Downloads from official pypistats" src="https://camo.githubusercontent.com/82f8a934ccd23109deb737418f30cb377c46abefe417d9299eb019a2198c4242/68747470733a2f2f696d672e736869656c64732e696f2f707970692f646d2f6a696e613f7374796c653d666c61742d737175617265"&gt;&lt;/a&gt;
&lt;a href="https://github.com/jina-ai/jina/actions/workflows/cd.yml" rel="noopener noreferrer"&gt;&lt;img alt="Github CD status" src="https://github.com/jina-ai/jina/actions/workflows/cd.yml/badge.svg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Jina-serve is a framework for building and deploying AI services that communicate via gRPC, HTTP and WebSockets. Scale your services from local development to production while focusing on your core logic.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Key Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Native support for all major ML frameworks and data types&lt;/li&gt;
&lt;li&gt;High-performance service design with scaling, streaming, and dynamic batching&lt;/li&gt;
&lt;li&gt;LLM serving with streaming output&lt;/li&gt;
&lt;li&gt;Built-in Docker integration and Executor Hub&lt;/li&gt;
&lt;li&gt;One-click deployment to Jina AI Cloud&lt;/li&gt;
&lt;li&gt;Enterprise-ready with Kubernetes and Docker Compose support&lt;/li&gt;
&lt;/ul&gt;

&lt;strong&gt;Comparison with FastAPI&lt;/strong&gt;
&lt;p&gt;Key advantages over FastAPI:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DocArray-based data handling with native gRPC support&lt;/li&gt;
&lt;li&gt;Built-in containerization and service orchestration&lt;/li&gt;
&lt;li&gt;Seamless scaling of microservices&lt;/li&gt;
&lt;li&gt;One-command cloud deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Install&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;pip install jina&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;See guides for &lt;a href="https://jina.ai/serve/get-started/install/apple-silicon-m1-m2/" rel="nofollow noopener noreferrer"&gt;Apple Silicon&lt;/a&gt; and &lt;a href="https://jina.ai/serve/get-started/install/windows/" rel="nofollow noopener noreferrer"&gt;Windows&lt;/a&gt;.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Core Concepts&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Three main layers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt;: BaseDoc and DocList for input/output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serving&lt;/strong&gt;: Executors process Documents, Gateway connects services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: Deployments serve Executors, Flows create pipelines&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Build AI Services&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Let's create a gRPC-based…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/jina-ai/serve" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/jina-ai" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Jina AI on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/AWXCCC6G2P" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Jina AI Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  5️⃣ &lt;a href="https://github.com/truefoundry/cognita" rel="noopener noreferrer"&gt;Cognita by truefoundry&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;3.9K GitHub Stars, 322 Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2lharhjmlihtuam9k4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2lharhjmlihtuam9k4c.png" alt="cognita" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;
 Cognita: &lt;a href="https://cognita.truefoundry.com"&gt;https://cognita.truefoundry.com&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Cognita addresses the challenges of deploying complex AI systems by offering a structured framework that balances customization with user-friendliness. Its modular design ensures that applications can evolve alongside technological advancements, providing long-term value and adaptability.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Modular Architecture&lt;/strong&gt; – Seven customizable components (data loaders, parsers, embedders, rerankers, vector databases, metadata store, query controllers).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Database Support&lt;/strong&gt; – Compatible with Qdrant, SingleStore, and other databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizability&lt;/strong&gt; – Easily extend or swap components for different AI applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – Designed for enterprise use, supporting large datasets and real-time retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API-Driven&lt;/strong&gt; – Seamless integration with existing AI pipelines.&lt;/li&gt;
&lt;/ul&gt;
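&lt;p&gt;The modular idea is easy to picture: if every stage is a plain callable with an agreed signature, swapping one chunker (or embedder, or parser) for another never touches the rest of the pipeline. A minimal sketch (illustrative only; Cognita's actual components are API-driven classes, and &lt;code&gt;toy_embed&lt;/code&gt; here is a stand-in for a real embedder):&lt;/p&gt;

```python
from typing import Callable

# Each pipeline stage is just a callable with an agreed signature.
Chunker = Callable[[str], list[str]]
Embedder = Callable[[str], list[float]]

def make_pipeline(chunk: Chunker, embed: Embedder):
    # Compose the stages into an indexer: document -> (chunk, vector) pairs.
    def index(document: str) -> list[tuple[str, list[float]]]:
        return [(c, embed(c)) for c in chunk(document)]
    return index

# Two interchangeable chunkers -- swapping one for the other requires
# no change anywhere else in the pipeline.
by_sentence: Chunker = lambda text: [s.strip() for s in text.split(".") if s.strip()]
by_words: Chunker = lambda text: [
    " ".join(text.split()[i:i + 4]) for i in range(0, len(text.split()), 4)
]

# Toy embedder: (character count, word count) stands in for a real model.
toy_embed: Embedder = lambda chunk: [float(len(chunk)), float(len(chunk.split()))]

index = make_pipeline(by_sentence, toy_embed)
print(index("Cognita is modular. Components are swappable."))
```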

&lt;h4&gt;
  
  
  🔹 Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered Customer Support with real-time retrieval.&lt;/li&gt;
&lt;li&gt;Enterprise Knowledge Management&lt;/li&gt;
&lt;li&gt;Context-Aware AI Assistants&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Cognita?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open-source with modular design for custom RAG workflows.&lt;/li&gt;
&lt;li&gt;Works with LangChain, LlamaIndex, and multiple vector stores.&lt;/li&gt;
&lt;li&gt;Built for scalable and reliable AI solutions.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/truefoundry" rel="noopener noreferrer"&gt;
        truefoundry
      &lt;/a&gt; / &lt;a href="https://github.com/truefoundry/cognita" rel="noopener noreferrer"&gt;
        cognita
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Cognita&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/truefoundry/cognita/./docs/images/readme-banner.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ftruefoundry%2Fcognita%2F.%2Fdocs%2Fimages%2Freadme-banner.png" alt="RAG_TF"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why use Cognita?&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Langchain/LlamaIndex provide easy-to-use abstractions for quick experimentation and prototyping in Jupyter notebooks. But when things move to production, components need to be modular, easily scalable, and extendable. This is where Cognita comes into action: it uses Langchain/LlamaIndex under the hood and organises your codebase so that each RAG component is modular, API-driven, and easily extensible. Cognita works just as well in a &lt;a href="https://github.com/truefoundry/cognita#rocket-quickstart-running-cognita-locally" rel="noopener noreferrer"&gt;local&lt;/a&gt; setup while also offering a production-ready environment with no-code &lt;a href="https://github.com/truefoundry/cognita/./frontend/README.md" rel="noopener noreferrer"&gt;UI&lt;/a&gt; support. Cognita also supports incremental indexing by default.&lt;/p&gt;
&lt;p&gt;You can try out Cognita at: &lt;a href="https://cognita.truefoundry.com" rel="nofollow noopener noreferrer"&gt;https://cognita.truefoundry.com&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/truefoundry/cognita/./docs/images/RAG-TF.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ftruefoundry%2Fcognita%2F.%2Fdocs%2Fimages%2FRAG-TF.gif" alt="RAG_TF"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🎉 What's new in Cognita&lt;/h1&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;[September, 2024] Cognita now has AudioParser (&lt;a href="https://github.com/fedirz/faster-whisper-server" rel="noopener noreferrer"&gt;https://github.com/fedirz/faster-whisper-server&lt;/a&gt;) and VideoParser (AudioParser + MultimodalParser).&lt;/li&gt;
&lt;li&gt;[August, 2024] Cognita has now moved to using pydantic v2.&lt;/li&gt;
&lt;li&gt;[July, 2024] Introducing &lt;code&gt;model gateway&lt;/code&gt; a single file to…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/truefoundry/cognita" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/truefoundry/cognita" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Cognita on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://truefoundry.slack.com/" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Cognita Slack Community 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  6️⃣ &lt;a href="https://github.com/infiniflow/ragflow" rel="noopener noreferrer"&gt;RAGFlow by infiniflow&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;43.9K GitHub Stars, 3.9K Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xdtqcluh4rdnysz3al4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xdtqcluh4rdnysz3al4.png" alt="ragFlow" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;
 RAGFlow: &lt;a href="https://ragflow.io/"&gt;https://ragflow.io/&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;RAGFlow is an open-source Retrieval-Augmented Generation (RAG) engine developed by InfiniFlow, focusing on deep document understanding to enhance AI-driven question-answering systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deep Document Understanding&lt;/strong&gt;: RAGFlow excels in processing complex, unstructured data formats, enabling accurate information extraction and retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Template-Based Chunking&lt;/strong&gt;: It employs intelligent, explainable chunking methods with various templates to optimize data processing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Infinity Database&lt;/strong&gt;: RAGFlow seamlessly integrates with Infinity, an AI-native database optimized for dense and sparse vector searches, enhancing retrieval performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GraphRAG Support&lt;/strong&gt;: The engine incorporates GraphRAG, enabling advanced retrieval-augmented generation capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Designed to handle extensive datasets, RAGFlow is suitable for businesses of all sizes.&lt;/li&gt;
&lt;/ul&gt;
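&lt;p&gt;Template-based chunking boils down to choosing a splitting strategy that matches the document's layout. A toy sketch of two such "templates" (RAGFlow's real templates are far richer, covering layouts like laws, papers, tables, and resumes):&lt;/p&gt;

```python
def chunk_sliding(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    # Generic template: fixed-size character windows with overlap,
    # for when the layout carries no usable structure.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def chunk_paragraphs(text: str) -> list[str]:
    # Layout-aware template: one chunk per paragraph, for
    # well-structured prose such as manuals or papers.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

doc = "First paragraph of a manual.\n\nSecond paragraph, a new topic."
print(chunk_paragraphs(doc))  # one chunk per paragraph
print(chunk_sliding(doc))
```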

&lt;h4&gt;
  
  
  🔹 Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise Knowledge Management&lt;/li&gt;
&lt;li&gt;Legal Document Analysis&lt;/li&gt;
&lt;li&gt;AI-Powered Customer Support&lt;/li&gt;
&lt;li&gt;Medical Research&lt;/li&gt;
&lt;li&gt;Financial Analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why RAGFlow?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Deep Document Processing – Structures unstructured data for complex analysis.&lt;/li&gt;
&lt;li&gt;Graph-Enhanced RAG – Uses graph-based retrieval for smarter responses.&lt;/li&gt;
&lt;li&gt;Hybrid Search – Combines vector and keyword search for accuracy.&lt;/li&gt;
&lt;li&gt;Enterprise Scalability – Handles large-scale AI search applications.&lt;/li&gt;
&lt;/ul&gt;
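&lt;p&gt;Hybrid search is, at its core, a weighted fusion of two signals. In this toy sketch, exact term overlap stands in for keyword (BM25-style) search and character-trigram similarity stands in for dense-vector search; a real engine like RAGFlow would use BM25 and learned embeddings instead:&lt;/p&gt;

```python
def keyword_score(query: str, doc: str) -> float:
    # Exact-term overlap: a stand-in for BM25-style keyword search.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def vector_score(query: str, doc: str) -> float:
    # Character-trigram Jaccard: a stand-in for dense-vector similarity.
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    q, d = grams(query.lower()), grams(doc.lower())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    # Weighted fusion: alpha blends the dense and keyword signals.
    return alpha * vector_score(query, doc) + (1 - alpha) * keyword_score(query, doc)

docs = [
    "contract renewal terms",
    "patient medical history",
    "quarterly financial report",
]
best = max(docs, key=lambda d: hybrid_score("financial quarterly summary", d))
print(best)
```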


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/infiniflow" rel="noopener noreferrer"&gt;
        infiniflow
      &lt;/a&gt; / &lt;a href="https://github.com/infiniflow/ragflow" rel="noopener noreferrer"&gt;
        ragflow
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;a href="https://demo.ragflow.io/" rel="nofollow noopener noreferrer"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Finfiniflow%2Fragflow%2Fweb%2Fsrc%2Fassets%2Flogo-with-text.svg" width="520" alt="ragflow logo"&gt;
&lt;/a&gt;
&lt;/div&gt;
&lt;p&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README.md" rel="noopener noreferrer"&gt;&lt;img alt="README in English" src="https://camo.githubusercontent.com/5c0caf5ced5e85c6f7980d1551aabee006d055bb853baaad9ce9e0f10bfb57bc/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f456e676c6973682d444245444641"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_zh.md" rel="noopener noreferrer"&gt;&lt;img alt="简体中文版自述文件" src="https://camo.githubusercontent.com/9bb50c9cf659024406659a9f9c1a9b09f394f05e6bffb238129fb82bf1750e59/68747470733a2f2f696d672e736869656c64732e696f2f62616467652fe7ae80e4bd93e4b8ade696872d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_tzh.md" rel="noopener noreferrer"&gt;&lt;img alt="繁體版中文自述文件" src="https://camo.githubusercontent.com/ba37b7979c04919d8552fce0f4a0aa306c15891406b935fd2a70996d396ceff9/68747470733a2f2f696d672e736869656c64732e696f2f62616467652fe7b981e9ab94e4b8ade696872d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_ja.md" rel="noopener noreferrer"&gt;&lt;img alt="日本語のREADME" src="https://camo.githubusercontent.com/0b76d9a2057efad8b6a1a4497e345efeba4f10c946faef344dca3d723b643128/68747470733a2f2f696d672e736869656c64732e696f2f62616467652fe697a5e69cace8aa9e2d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_ko.md" rel="noopener noreferrer"&gt;&lt;img alt="한국어" src="https://camo.githubusercontent.com/cdf3e3d23cdf8cef98c2a7235e99edefe69d0c4c0c8deb659fcf8c28d9a9c532/68747470733a2f2f696d672e736869656c64732e696f2f62616467652fed959ceab5adec96b42d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_id.md" rel="noopener noreferrer"&gt;&lt;img alt="Bahasa Indonesia" src="https://camo.githubusercontent.com/59e6536b48def2948702890423c4a4cc0dccf1444d80c6536ec808d4439d4b60/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f42616861736120496e646f6e657369612d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_pt_br.md" rel="noopener noreferrer"&gt;&lt;img alt="Português(Brasil)" src="https://camo.githubusercontent.com/de9c9a8163f0dc7f291f73e2a6d6a585b0e3dd8340ede332f21a4e44fa928320/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f506f7274756775c3aa732842726173696c292d444645304535"&gt;&lt;/a&gt;
  &lt;a href="https://github.com/infiniflow/ragflow/./README_fr.md" rel="noopener noreferrer"&gt;&lt;img alt="README en Français" src="https://camo.githubusercontent.com/e76eb715af358b137100b5f651dc98909f92600f9809a2e530d8d745f94b4ba9/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4672616ec3a76169732d444645304535"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://x.com/intent/follow?screen_name=infiniflowai" rel="nofollow noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/57f1d22b0baf26c27f421a62a2d38859d47a1fff30b98f5fbb68f2c95e09ff5f/68747470733a2f2f696d672e736869656c64732e696f2f747769747465722f666f6c6c6f772f696e66696e69666c6f773f6c6f676f3d5826636f6c6f723d253230253233663566356635" alt="follow on X(Twitter)"&gt;
    &lt;/a&gt;
    &lt;a href="https://demo.ragflow.io" rel="nofollow noopener noreferrer"&gt;
        &lt;img alt="Static Badge" src="https://camo.githubusercontent.com/61de670b565422f07ca3f518efccdf278a758560420b573836d7d6441e944b6e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4f6e6c696e652d44656d6f2d346536623939"&gt;
    &lt;/a&gt;
    &lt;a href="https://hub.docker.com/r/infiniflow/ragflow" rel="nofollow noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/0f0a1753388867bd1b34fd64e0b03de193341367f2e8fb307f92f3c081d86534/68747470733a2f2f696d672e736869656c64732e696f2f646f636b65722f70756c6c732f696e66696e69666c6f772f726167666c6f773f6c6162656c3d446f636b657225323050756c6c7326636f6c6f723d306462376564266c6f676f3d646f636b6572266c6f676f436f6c6f723d7768697465267374796c653d666c61742d737175617265" alt="docker pull infiniflow/ragflow:v0.24.0"&gt;
    &lt;/a&gt;
    &lt;a href="https://github.com/infiniflow/ragflow/releases/latest" rel="noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/3fd28cc4cca55cb2b58911835bcadaa5e91147bb0d537ba20b3715f19c00c485/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f762f72656c656173652f696e66696e69666c6f772f726167666c6f773f636f6c6f723d626c7565266c6162656c3d4c617465737425323052656c65617365" alt="Latest Release"&gt;
    &lt;/a&gt;
    &lt;a href="https://github.com/infiniflow/ragflow/blob/main/LICENSE" rel="noopener noreferrer"&gt;
        &lt;img height="21" src="https://camo.githubusercontent.com/6b58253ba6743f6bf0d276208aa116826b139e2ab3edd8baa990c41a558d90b0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4170616368652d2d322e302d6666666666663f6c6162656c436f6c6f723d64346561663726636f6c6f723d326536636334" alt="license"&gt;
    &lt;/a&gt;
    &lt;a href="https://deepwiki.com/infiniflow/ragflow" rel="nofollow noopener noreferrer"&gt;
        &lt;img alt="Ask DeepWiki" src="https://camo.githubusercontent.com/0f5ae213ac378635adeb5d7f13cef055ad2f7d9a47b36de7b1c67dbe09f609ca/68747470733a2f2f6465657077696b692e636f6d2f62616467652e737667"&gt;
    &lt;/a&gt;
&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h4 class="heading-element"&gt;
  &lt;a href="https://ragflow.io/docs/dev/" rel="nofollow noopener noreferrer"&gt;Document&lt;/a&gt; |
  &lt;a href="https://github.com/infiniflow/ragflow/issues/12241" rel="noopener noreferrer"&gt;Roadmap&lt;/a&gt; |
  &lt;a href="https://twitter.com/infiniflowai" rel="nofollow noopener noreferrer"&gt;Twitter&lt;/a&gt; |
  &lt;a href="https://discord.gg/NjYzJD3GM3" rel="nofollow noopener noreferrer"&gt;Discord&lt;/a&gt; |
  &lt;a href="https://demo.ragflow.io" rel="nofollow noopener noreferrer"&gt;Demo&lt;/a&gt;
&lt;/h4&gt;
&lt;/div&gt;

&lt;div&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/ragflow-octoverse.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Finfiniflow%2Fragflow-docs%2Frefs%2Fheads%2Fimage%2Fimage%2Fragflow-octoverse.png" width="1200"&gt;&lt;/a&gt;
&lt;/div&gt;

&lt;div&gt;
&lt;a href="https://trendshift.io/repositories/9064" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a6e05fb93088900f5d0a2895e9ab9f80c694c6e40caeed8a94685e67f44cc07b/68747470733a2f2f7472656e6473686966742e696f2f6170692f62616467652f7265706f7369746f726965732f39303634" alt="infiniflow%2Fragflow | Trendshift" width="250" height="55"&gt;&lt;/a&gt;
&lt;/div&gt;


&lt;b&gt;📕 Table of Contents&lt;/b&gt;
&lt;ul&gt;
&lt;li&gt;💡 &lt;a href="https://github.com/infiniflow/ragflow#-what-is-ragflow" rel="noopener noreferrer"&gt;What is RAGFlow?&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🎮 &lt;a href="https://github.com/infiniflow/ragflow#-demo" rel="noopener noreferrer"&gt;Demo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📌 &lt;a href="https://github.com/infiniflow/ragflow#-latest-updates" rel="noopener noreferrer"&gt;Latest Updates&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🌟 &lt;a href="https://github.com/infiniflow/ragflow#-key-features" rel="noopener noreferrer"&gt;Key Features&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔎 &lt;a href="https://github.com/infiniflow/ragflow#-system-architecture" rel="noopener noreferrer"&gt;System Architecture&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🎬 &lt;a href="https://github.com/infiniflow/ragflow#-get-started" rel="noopener noreferrer"&gt;Get Started&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔧 &lt;a href="https://github.com/infiniflow/ragflow#-configurations" rel="noopener noreferrer"&gt;Configurations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔧 &lt;a href="https://github.com/infiniflow/ragflow#-build-a-docker-image" rel="noopener noreferrer"&gt;Build a Docker image&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🔨 &lt;a href="https://github.com/infiniflow/ragflow#-launch-service-from-source-for-development" rel="noopener noreferrer"&gt;Launch service from source for development&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📚 &lt;a href="https://github.com/infiniflow/ragflow#-documentation" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📜 &lt;a href="https://github.com/infiniflow/ragflow#-roadmap" rel="noopener noreferrer"&gt;Roadmap&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🏄 &lt;a href="https://github.com/infiniflow/ragflow#-community" rel="noopener noreferrer"&gt;Community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🙌 &lt;a href="https://github.com/infiniflow/ragflow#-contributing" rel="noopener noreferrer"&gt;Contributing&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;💡 What is RAGFlow?&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://ragflow.io/" rel="nofollow noopener noreferrer"&gt;RAGFlow&lt;/a&gt; is a leading open-source Retrieval-Augmented Generation (&lt;a href="https://ragflow.io/basics/what-is-rag" rel="nofollow noopener noreferrer"&gt;RAG&lt;/a&gt;) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs. It offers a streamlined RAG workflow adaptable to enterprises of any scale. Powered by a converged &lt;a href="https://ragflow.io/basics/what-is-agent-context-engine" rel="nofollow noopener noreferrer"&gt;context engine&lt;/a&gt; and pre-built agent templates, RAGFlow enables developers to transform complex data into high-fidelity, production-ready AI systems with exceptional efficiency and precision.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🎮 Demo&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Try our demo at &lt;a href="https://demo.ragflow.io" rel="nofollow noopener noreferrer"&gt;https://demo.ragflow.io&lt;/a&gt;.&lt;/p&gt;

&lt;div&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/chunking.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Finfiniflow%2Fragflow-docs%2Frefs%2Fheads%2Fimage%2Fimage%2Fchunking.gif" width="1200"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/infiniflow/ragflow-docs/refs/heads/image/image/agentic-dark.gif"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Finfiniflow%2Fragflow-docs%2Frefs%2Fheads%2Fimage%2Fimage%2Fagentic-dark.gif" width="1200"&gt;&lt;/a&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;🔥 Latest Updates&lt;/h2&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;2025-12-26 Supports 'Memory' for AI agents.&lt;/li&gt;
&lt;li&gt;2025-11-19 Supports Gemini 3 Pro.&lt;/li&gt;
&lt;li&gt;2025-11-12 Supports data synchronization from Confluence…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/infiniflow/ragflow" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/infiniflow/ragflow" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star RAGFlow on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/4XxujFgUN7" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;RAGFlow Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  7️⃣ &lt;a href="https://github.com/neuml/txtai" rel="noopener noreferrer"&gt;txtai by NeuML&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;10.5K GitHub Stars, 669 Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg40mjmnnr9bylrp4uoi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg40mjmnnr9bylrp4uoi5.png" alt="txtai" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;
 txtai: &lt;a href="https://neuml.github.io/txtai/" rel="nofollow noopener noreferrer"&gt;https://neuml.github.io/txtai/&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;txtai is an open-source AI-powered search engine and embeddings database, designed for semantic search, RAG, and document similarity.&lt;/p&gt;
&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings Indexing&lt;/strong&gt; – Stores and retrieves documents using vector-based search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Integration&lt;/strong&gt; – Enhances LLM responses with retrieval-augmented generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Modal Support&lt;/strong&gt; – Works with text, images, and audio embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable &amp;amp; Lightweight&lt;/strong&gt; – Runs on edge devices, local systems, and cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;APIs &amp;amp; Pipelines&lt;/strong&gt; – Provides an API for text search, similarity, and question answering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite Backend&lt;/strong&gt; – Uses SQLite-based vector storage for fast retrieval.&lt;/li&gt;
&lt;/ul&gt;
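
&lt;p&gt;The retrieval step behind an embeddings index boils down to nearest-neighbor search over vectors. Here is a minimal, dependency-free sketch of that idea (the vectors and document ids are toy stand-ins for model output, not txtai's actual API):&lt;/p&gt;

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings index": document id -> vector (stand-ins for model embeddings)
index = {
    "doc-rag":    [0.9, 0.1, 0.0],
    "doc-audio":  [0.1, 0.8, 0.2],
    "doc-vision": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    # Rank documents by similarity to the query vector, return top-k ids
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

search([1.0, 0.0, 0.1])  # -> ["doc-rag"], the closest vector
```

&lt;p&gt;In txtai, the library handles turning text into those vectors and storing them; the ranking principle is the same.&lt;/p&gt;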

&lt;h4&gt;
  
  
  🔹Use Cases:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-Powered Semantic Search&lt;/li&gt;
&lt;li&gt;Chatbot Augmentation&lt;/li&gt;
&lt;li&gt;Content Recommendation&lt;/li&gt;
&lt;li&gt;Automated Tagging &amp;amp; Classification &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why txtai?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Lightweight &amp;amp; Efficient – Runs on low-resource environments.&lt;/li&gt;
&lt;li&gt;Versatile &amp;amp; Extendable – Works with any embeddings model.&lt;/li&gt;
&lt;li&gt;Fast Retrieval – Optimized for local and cloud-scale deployments.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/neuml" rel="noopener noreferrer"&gt;
        neuml
      &lt;/a&gt; / &lt;a href="https://github.com/neuml/txtai" rel="noopener noreferrer"&gt;
        txtai
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      💡 All-in-one AI framework for semantic search, LLM orchestration and language model workflows
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;
    &lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/neuml/txtai/master/logo.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fneuml%2Ftxtai%2Fmaster%2Flogo.png"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
    &lt;b&gt;All-in-one AI framework&lt;/b&gt;
&lt;/p&gt;

&lt;p&gt;
    &lt;a href="https://github.com/neuml/txtai/releases" rel="noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/db8c9772f7137bb6347a2fbcae0fe9c0ceb6ec6de69005cae1ba620a4810f0f2/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f72656c656173652f6e65756d6c2f74787461692e7376673f7374796c653d666c617426636f6c6f723d73756363657373" alt="Version"&gt;
    &lt;/a&gt;
    &lt;a href="https://github.com/neuml/txtai" rel="noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/b19a5fb1714e7b0c65076c74a713f0347ed713c3eb409ab61b305c86891dfece/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6c6173742d636f6d6d69742f6e65756d6c2f74787461692e7376673f7374796c653d666c617426636f6c6f723d626c7565" alt="GitHub last commit"&gt;
    &lt;/a&gt;
    &lt;a href="https://github.com/neuml/txtai/issues" rel="noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/9966ba82e2f69696e61eb6c2ddb8e67c896bca60eaee4b092f30245cae019e49/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f6973737565732f6e65756d6c2f74787461692e7376673f7374796c653d666c617426636f6c6f723d73756363657373" alt="GitHub issues"&gt;
    &lt;/a&gt;
    &lt;a href="https://join.slack.com/t/txtai/shared_invite/zt-37c1zfijp-Y57wMty6YOx_hyIHEQvQJA" rel="nofollow noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/200e209cd2e7400c01b2b42d2e40ef188e5ee7245e92467121dd214d5e38ef08/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f736c61636b2d6a6f696e2d626c75653f7374796c653d666c6174266c6f676f3d736c61636b266c6f676f636f6c6f723d7768697465" alt="Join Slack"&gt;
    &lt;/a&gt;
    &lt;a href="https://github.com/neuml/txtai/actions?query=workflow%3Abuild" rel="noopener noreferrer"&gt;
        &lt;img src="https://github.com/neuml/txtai/workflows/build/badge.svg" alt="Build Status"&gt;
    &lt;/a&gt;
    &lt;a href="https://coveralls.io/github/neuml/txtai?branch=master" rel="nofollow noopener noreferrer"&gt;
        &lt;img src="https://camo.githubusercontent.com/bb55cb341ebc75e47249d88548dfb17ad6f07b90183bac45dfe06df1b25a45ff/68747470733a2f2f696d672e736869656c64732e696f2f636f766572616c6c73436f7665726167652f6769746875622f6e65756d6c2f7478746169" alt="Coverage Status"&gt;
    &lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;txtai is an all-in-one AI framework for semantic search, LLM orchestration and language model workflows.&lt;/p&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/neuml/txtai/master/docs/images/architecture.png#gh-light-mode-only"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fneuml%2Ftxtai%2Fmaster%2Fdocs%2Fimages%2Farchitecture.png%23gh-light-mode-only" alt="architecture"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/neuml/txtai/master/docs/images/architecture-dark.png#gh-dark-mode-only"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fneuml%2Ftxtai%2Fmaster%2Fdocs%2Fimages%2Farchitecture-dark.png%23gh-dark-mode-only" alt="architecture"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The key component of txtai is an embeddings database, which is a union of vector indexes (sparse and dense), graph networks and relational databases.&lt;/p&gt;
&lt;p&gt;This foundation enables vector search and/or serves as a powerful knowledge source for large language model (LLM) applications.&lt;/p&gt;
&lt;p&gt;Build autonomous agents, retrieval augmented generation (RAG) processes, multi-model workflows and more.&lt;/p&gt;
&lt;p&gt;Summary of txtai features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;🔎 Vector search with SQL, object storage, topic modeling, graph analysis and multimodal indexing&lt;/li&gt;
&lt;li&gt;📄 Create embeddings for text, documents, audio, images and video&lt;/li&gt;
&lt;li&gt;💡 Pipelines powered by language models that run LLM prompts, question-answering, labeling, transcription, translation, summarization and more&lt;/li&gt;
&lt;li&gt;↪️️ Workflows to join pipelines together and aggregate business logic. txtai processes can be simple microservices or multi-model workflows.&lt;/li&gt;
&lt;li&gt;🤖 Agents that intelligently connect embeddings, pipelines, workflows and other agents together to autonomously…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/neuml/txtai" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/neuml/txtai" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star txtAI on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://txtai.slack.com/join/shared_invite/zt-1cagya4yf-DQeuZbd~aMwH5pckBU4vPg#/shared-invite/email" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;txtAI Slack Community 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  8️⃣ &lt;a href="https://github.com/stanford-oval/storm" rel="noopener noreferrer"&gt;STORM by stanford-oval&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;23.2K GitHub Stars, 2K Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkqw9mzokgu12w61s7of.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkqw9mzokgu12w61s7of.png" alt="storm" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;
 STORM: &lt;a href="https://github.com/stanford-oval/storm" rel="noopener noreferrer"&gt;https://github.com/stanford-oval/storm&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;STORM is an AI-powered knowledge curation system developed by the Stanford Open Virtual Assistant Lab (OVAL). It automates the research process by generating comprehensive, citation-backed reports on various topics.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Perspective-Guided Question Asking&lt;/strong&gt;: STORM enhances the depth and breadth of information by generating questions from multiple perspectives, leading to more comprehensive research outcomes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simulated Conversations&lt;/strong&gt;: The system simulates dialogues between a Wikipedia writer and a topic expert, grounded in internet sources, to refine its understanding and generate detailed reports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Collaboration&lt;/strong&gt;: STORM employs a multi-agent system that simulates expert discussions, focusing on structured research and outline creation, and emphasizes proper citation and sourcing.&lt;/li&gt;
&lt;/ul&gt;
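
&lt;p&gt;The core of perspective-guided question asking is fanning one topic out into questions from several vantage points before retrieval begins. A toy sketch of that structure (in real STORM an LLM generates the perspectives and questions; the role names and templates below are invented for illustration):&lt;/p&gt;

```python
# Hypothetical perspectives and question templates, standing in for LLM output
TEMPLATES = {
    "historian": "How did {t} develop over time?",
    "engineer": "How is {t} implemented in practice?",
    "policy analyst": "What regulations affect {t}?",
}

def perspective_questions(topic, templates=TEMPLATES):
    """Generate one seed research question per perspective for a topic."""
    return [tmpl.format(t=topic) for tmpl in templates.values()]

questions = perspective_questions("open-source RAG engines")
# Each question then seeds a simulated writer/expert conversation in STORM
```

&lt;p&gt;Asking from several angles is what gives the final report its breadth: each question drives its own grounded retrieval pass before the outline is synthesized.&lt;/p&gt;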

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Academic Research: Assists researchers in generating comprehensive literature reviews and summaries on specific topics.&lt;/li&gt;
&lt;li&gt;Content Creation: Aids writers and journalists in producing well-researched articles with accurate citations.&lt;/li&gt;
&lt;li&gt;Educational Tools: Serves as a resource for students and educators to quickly gather information on a wide range of subjects.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Choose STORM?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Automated In-Depth Research: STORM streamlines the process of gathering and synthesizing information, saving time and effort.&lt;/li&gt;
&lt;li&gt;Comprehensive Reports: By considering multiple perspectives and simulating expert conversations, STORM delivers well-rounded and detailed reports.&lt;/li&gt;
&lt;li&gt;Open-Source Accessibility: Being open-source, STORM allows for customization and integration into various workflows, making it a versatile tool for different users.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/stanford-oval" rel="noopener noreferrer"&gt;
        stanford-oval
      &lt;/a&gt; / &lt;a href="https://github.com/stanford-oval/storm" rel="noopener noreferrer"&gt;
        storm
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;
  &lt;a rel="noopener noreferrer" href="https://github.com/stanford-oval/storm/assets/logo.svg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fstanford-oval%2Fstorm%2Fassets%2Flogo.svg"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;STORM: Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;
| &lt;a href="http://storm.genie.stanford.edu" rel="nofollow noopener noreferrer"&gt;&lt;b&gt;Research preview&lt;/b&gt;&lt;/a&gt; | &lt;a href="https://arxiv.org/abs/2402.14207" rel="nofollow noopener noreferrer"&gt;&lt;b&gt;STORM Paper&lt;/b&gt;&lt;/a&gt;| &lt;a href="https://www.arxiv.org/abs/2408.15232" rel="nofollow noopener noreferrer"&gt;&lt;b&gt;Co-STORM Paper&lt;/b&gt;&lt;/a&gt;  | &lt;a href="https://storm-project.stanford.edu/" rel="nofollow noopener noreferrer"&gt;&lt;b&gt;Website&lt;/b&gt;&lt;/a&gt; |
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latest News&lt;/strong&gt; 🔥&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;[2025/01] We add &lt;a href="https://github.com/BerriAI/litellm" rel="noopener noreferrer"&gt;litellm&lt;/a&gt; integration for language models and embedding models in &lt;code&gt;knowledge-storm&lt;/code&gt; v1.1.0.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/09] Co-STORM codebase is now released and integrated into &lt;code&gt;knowledge-storm&lt;/code&gt; python package v1.0.0. Run &lt;code&gt;pip install knowledge-storm --upgrade&lt;/code&gt; to check it out.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/09] We introduce collaborative STORM (Co-STORM) to support human-AI collaborative knowledge curation! &lt;a href="https://www.arxiv.org/abs/2408.15232" rel="nofollow noopener noreferrer"&gt;Co-STORM Paper&lt;/a&gt; has been accepted to EMNLP 2024 main conference.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/07] You can now install our package with &lt;code&gt;pip install knowledge-storm&lt;/code&gt;!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/07] We add &lt;code&gt;VectorRM&lt;/code&gt; to support grounding on user-provided documents, complementing existing support of search engines (&lt;code&gt;YouRM&lt;/code&gt;, &lt;code&gt;BingSearch&lt;/code&gt;). (check out &lt;a href="https://github.com/stanford-oval/storm/pull/58" rel="noopener noreferrer"&gt;#58&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/07] We release a demo light for developers: a minimal user interface built with the Streamlit framework in Python, handy for local development and demo hosting (check out &lt;a href="https://github.com/stanford-oval/storm/pull/54" rel="noopener noreferrer"&gt;#54&lt;/a&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[2024/06] We…&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/stanford-oval/storm" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/stanford-oval/storm" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star STORM on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  9️⃣ &lt;a href="https://github.com/pathwaycom/llm-app" rel="noopener noreferrer"&gt;LLM-App by pathwaycom&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;22.5K GitHub Stars, 379 Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgjduxy2btwofyjk9iz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgjduxy2btwofyjk9iz1.png" alt="llmApp" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;
 LLM-App: &lt;a href="https://pathway.com/developers/templates/" rel="nofollow noopener noreferrer"&gt;https://pathway.com/developers/templates/&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;LLM-App is an open-source framework developed by Pathway, designed to integrate Large Language Models (LLMs) into real-time data processing workflows.&lt;/p&gt;
&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless LLM Integration&lt;/strong&gt;: Allows for the incorporation of LLMs into various applications, enhancing data processing capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Data Processing&lt;/strong&gt;: Utilizes Pathway's real-time data processing engine to handle dynamic data streams efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: Designed to be adaptable, enabling users to customize and extend functionalities based on specific requirements.&lt;/li&gt;
&lt;/ul&gt;
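
&lt;p&gt;"Always in sync" means the index must apply additions, updates, and deletions from the source as they arrive, not just bulk-load once. A dependency-free sketch of that incremental-sync idea (this is the concept only, not Pathway's actual API):&lt;/p&gt;

```python
def apply_changes(index, events):
    """Apply a stream of (op, doc_id, text) change events to an in-memory
    index, in order, so the index mirrors the source after each event."""
    for op, doc_id, text in events:
        if op in ("add", "update"):
            index[doc_id] = text          # upsert the latest version
        elif op == "delete":
            index.pop(doc_id, None)       # drop removed documents
    return index

index = {}
events = [
    ("add", "a", "quarterly report"),
    ("add", "b", "meeting notes"),
    ("update", "a", "quarterly report v2"),
    ("delete", "b", None),
]
apply_changes(index, events)
# index now holds only doc "a", at its latest version
```

&lt;p&gt;Pathway's engine does this continuously over connectors like Google Drive, S3, or Kafka, and keeps the vector and full-text indexes consistent with the same change stream.&lt;/p&gt;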

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Data Analysis&lt;/li&gt;
&lt;li&gt;Natural Language Processing (NLP)&lt;/li&gt;
&lt;li&gt;Chatbots and Virtual Assistants&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Choose LLM-App?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Integration with Pathway's Engine: Combines the power of LLMs with Pathway's robust data processing engine for efficient real-time applications.&lt;/li&gt;
&lt;li&gt;Open-Source Flexibility: Being open-source, it allows for community contributions and customization to fit diverse use cases.&lt;/li&gt;
&lt;li&gt;Scalability: Designed to handle large-scale data processing tasks, making it suitable for enterprise applications.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/pathwaycom" rel="noopener noreferrer"&gt;
        pathwaycom
      &lt;/a&gt; / &lt;a href="https://github.com/pathwaycom/llm-app" rel="noopener noreferrer"&gt;
        llm-app
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Pathway AI Pipelines&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://trendshift.io/repositories/4400" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2bba7dea6a723c58817cb59d68b182d682c62159b67c8c6bab98efcc0f2a8fa3/68747470733a2f2f7472656e6473686966742e696f2f6170692f62616467652f7265706f7369746f726965732f34343030" alt="pathwaycom%2Fllm-app | Trendshift" width="250" height="55"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/13ecf8308dd447edcef2bafd36de23b6539b35f24c18be96fd53d12241ec7db0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c696e75782d4643433632343f7374796c653d666f722d7468652d6261646765266c6f676f3d6c696e7578266c6f676f436f6c6f723d626c61636b"&gt;&lt;img src="https://camo.githubusercontent.com/13ecf8308dd447edcef2bafd36de23b6539b35f24c18be96fd53d12241ec7db0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c696e75782d4643433632343f7374796c653d666f722d7468652d6261646765266c6f676f3d6c696e7578266c6f676f436f6c6f723d626c61636b" alt="Linux"&gt;&lt;/a&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://camo.githubusercontent.com/d65285710170006223414d0c131c324f70defb3f550ab52ace4469abfdbf6d96/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6d61632532306f732d3030303030303f7374796c653d666f722d7468652d6261646765266c6f676f3d6170706c65266c6f676f436f6c6f723d7768697465"&gt;&lt;img src="https://camo.githubusercontent.com/d65285710170006223414d0c131c324f70defb3f550ab52ace4469abfdbf6d96/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6d61632532306f732d3030303030303f7374796c653d666f722d7468652d6261646765266c6f676f3d6170706c65266c6f676f436f6c6f723d7768697465" alt="macOS"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/pathway" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/8c0fca73564f21d7a6f235747eb4d739a2e4aaa348b8e074904127baeb944b9e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f446973636f72642d3538363546323f7374796c653d666f722d7468652d6261646765266c6f676f3d646973636f7264266c6f676f436f6c6f723d7768697465" alt="chat on Discord"&gt;&lt;/a&gt;
&lt;a href="https://x.com/intent/follow?screen_name=pathway_com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/ffa4875d833472c2e4ba789ff4d078f9c7ae8e735e5d2efe9534d027c0322b4e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f582d3030303030303f7374796c653d666f722d7468652d6261646765266c6f676f3d78266c6f676f436f6c6f723d7768697465" alt="follow on X"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;p&gt;Pathway's &lt;strong&gt;AI Pipelines&lt;/strong&gt; allow you to quickly put in production AI applications that offer &lt;strong&gt;high-accuracy RAG and AI enterprise search at scale&lt;/strong&gt; using the most &lt;strong&gt;up-to-date knowledge&lt;/strong&gt; available in your data sources. It provides you ready-to-deploy &lt;strong&gt;LLM (Large Language Model) App Templates&lt;/strong&gt;. You can test them on your own machine and deploy on-cloud (GCP, AWS, Azure, Render,...) or on-premises.&lt;/p&gt;
&lt;p&gt;The apps connect and sync (all new data additions, deletions, updates) with data sources on your &lt;strong&gt;file system, Google Drive, Sharepoint, S3, Kafka, PostgreSQL, real-time data APIs&lt;/strong&gt;. They come with no infrastructure dependencies that would need a separate setup. They include &lt;strong&gt;built-in data indexing&lt;/strong&gt; enabling vector search, hybrid search, and full-text search - all done in-memory, with cache.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Application Templates&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;The application templates provided in this repo scale up to &lt;strong&gt;millions of pages of documents&lt;/strong&gt;. Some of them are optimized for simplicity, some are optimized…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/pathwaycom/llm-app" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/pathwaycom/llm-app" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star LLM-App on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/pathway" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;LLM-App Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  🔟 &lt;a href="https://github.com/satellitecomponent/Neurite" rel="noopener noreferrer"&gt;Neurite by satellitecomponent&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;1.4K GitHub Stars, 127 Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yt4wvofxj42i34cmd4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yt4wvofxj42i34cmd4k.png" alt="neurite" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;
 Neurite: &lt;a href="https://neurite.network/" rel="nofollow noopener noreferrer"&gt;https://neurite.network/&lt;/a&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Neurite is an open-source project that offers a fractal graph-of-thought system, enabling rhizomatic mind-mapping for AI agents, web links, notes, and code.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fractal Graph-of-Thought&lt;/strong&gt;: Implements a unique approach to knowledge representation using fractal structures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rhizomatic Mind-Mapping&lt;/strong&gt;: Facilitates non-linear, interconnected mapping of ideas and information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Capabilities&lt;/strong&gt;: Allows integration with AI agents, enhancing their knowledge management and retrieval processes.&lt;/li&gt;
&lt;/ul&gt;
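
&lt;p&gt;"Rhizomatic" is the key structural difference from a classic mind map: instead of a tree with one root and parent/child edges, any node can link to any other, cycles included. A tiny sketch of that data model (illustrative only, not Neurite's code):&lt;/p&gt;

```python
from collections import defaultdict

# Rhizomatic graph: node -> set of linked nodes, no root, no hierarchy
graph = defaultdict(set)

def link(a, b):
    # Undirected edge: mind-map links have no parent/child direction
    graph[a].add(b)
    graph[b].add(a)

link("AI agents", "notes")
link("notes", "code")
link("code", "AI agents")  # a cycle: impossible in a tree, normal here
```

&lt;p&gt;Neurite layers its fractal visualization and AI integrations on top of this kind of freely cross-linked structure.&lt;/p&gt;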

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge Management&lt;/li&gt;
&lt;li&gt;AI Research&lt;/li&gt;
&lt;li&gt;Educational Tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Choose Neurite?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Innovative Knowledge Representation: Offers a novel approach to organizing information, beneficial for complex data analysis.&lt;/li&gt;
&lt;li&gt;Open-Source Accessibility: Allows users to customize and extend functionalities to suit specific needs.&lt;/li&gt;
&lt;li&gt;Community Engagement: Encourages collaboration and sharing of ideas within the knowledge management community.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/satellitecomponent" rel="noopener noreferrer"&gt;
        satellitecomponent
      &lt;/a&gt; / &lt;a href="https://github.com/satellitecomponent/Neurite" rel="noopener noreferrer"&gt;
        Neurite
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Fractal Graph-of-Thought. Rhizomatic Mind-Mapping for Ai-Agents, Web-Links, Notes, and Code.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a href="https://opensource.org/licenses/MIT" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e9f356b8051ecbbe01b65a70ff4c1ad63d9ef2ca3cb465b6f2ad1bafb1080bb4/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d707572706c652e737667" alt="License: MIT"&gt;&lt;/a&gt;
&lt;a href="https://discord.gg/NymeSwK9TH" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/48ad3a5269d5e31c669a047a6fb66159828072b892b5ff61e169d3d4defcc088/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f313039333630333430353630393538323735353f7374796c653d666c6174266c6f676f3d646973636f7264266c6f676f436f6c6f723d7768697465266c6162656c3d446973636f726426636f6c6f723d253233373238396461" alt="Discord"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🌐 &lt;strong&gt;&lt;a href="https://neurite.network/" rel="nofollow noopener noreferrer"&gt;neurite.network&lt;/a&gt;&lt;/strong&gt; 🌐&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;⚠️ &lt;code&gt;Warning:&lt;/code&gt; Contains flashing lights and colors which may affect those with photosensitive epilepsy.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=1BiUblUAd7s&amp;amp;list=PLnwfKwpTq3vDlXDrLParmQ_3waM1g-ehf" rel="nofollow noopener noreferrer"&gt;&lt;strong&gt;Check out our newly released series of demo videos!&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;🌱 This is an open-source project in active development.&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tbody&gt;
&lt;tr&gt;
    &lt;td&gt;
      &lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Table of Contents&lt;/h2&gt;
&lt;/div&gt;
      &lt;ol&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#introduction" rel="noopener noreferrer"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#key-features" rel="noopener noreferrer"&gt;Key Features&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#how-to-use-neurite" rel="noopener noreferrer"&gt;How to Use Neurite&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#synchronized-knowledge-management" rel="noopener noreferrer"&gt;Synchronized Knowledge Management&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#fractalgpt" rel="noopener noreferrer"&gt;FractalGPT&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#multi-agent-ui" rel="noopener noreferrer"&gt;Multi-Agent UI&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#neurite-desktop" rel="noopener noreferrer"&gt;Neurite Desktop&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#neural-api" rel="noopener noreferrer"&gt;Neural API&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#join-the-conversation" rel="noopener noreferrer"&gt;Join the Conversation&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href="https://github.com/satellitecomponent/Neurite#gallery" rel="noopener noreferrer"&gt;Gallery&lt;/a&gt;&lt;/li&gt;
      &lt;/ol&gt;
    &lt;/td&gt;
    &lt;td&gt;
      &lt;div&gt;
        &lt;a href="https://www.youtube.com/watch?v=1BiUblUAd7s" rel="nofollow noopener noreferrer"&gt;&lt;strong&gt;Welcome to Neurite | Getting Started&lt;/strong&gt;&lt;/a&gt;
        &lt;div class="markdown-heading"&gt;
&lt;/div&gt;
        &lt;a href="https://www.youtube.com/watch?v=1BiUblUAd7s" rel="nofollow noopener noreferrer"&gt;
          &lt;img src="https://camo.githubusercontent.com/9b940190674cfef2dc254fd433e0c784158765db62f23b8a63f13d1c255e5edd/68747470733a2f2f696d672e796f75747562652e636f6d2f76692f31426955626c55416437732f6d7164656661756c742e6a7067" width="100%" title="Click to watch" alt="Welcome to Neurite"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;&lt;code&gt;Introduction&lt;/code&gt;&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Bridging Fractals and Thought&lt;/h2&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;💡 &lt;strong&gt;&lt;a href="https://neurite.network/" rel="nofollow noopener noreferrer"&gt;neurite.network&lt;/a&gt; unleashes a new dimension of digital interface...&lt;/strong&gt;
&lt;/h3&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;&lt;em&gt;&lt;strong&gt;...the fractal dimension.&lt;/strong&gt;&lt;/em&gt;&lt;/h3&gt;

&lt;/div&gt;
&lt;blockquote&gt;
&lt;p&gt;🧩 &lt;strong&gt;Drawing from chaos theory and graph theory, Neurite unveils the hidden patterns and intricate connections that shape creative thinking.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For over two years we've been iterating on a virtually limitless workspace that blends the mesmerizing complexity of fractals with contemporary mind-mapping techniques.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;Major Update:&lt;/strong&gt;  &lt;a href="https://github.com/satellitecomponent/Neurite#neurite-desktop" rel="noopener noreferrer"&gt;&lt;strong&gt;Neurite Desktop&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tbody&gt;
&lt;tr&gt;
    &lt;td width="50%"&gt;
      &lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;&lt;code&gt;Why Fractals?&lt;/code&gt;&lt;/h3&gt;

&lt;/div&gt;
      &lt;p&gt;The Mandelbrot Set is not just an aesthetic choice - fractal logic is ingrained into a countless number…&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/satellitecomponent/Neurite" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/satellitecomponent/Neurite" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Neurite on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/NymeSwK9TH" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Neurite Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Bonus 🙂‍↕️: &lt;a href="https://github.com/SciPhi-AI/R2R" rel="noopener noreferrer"&gt;R2R by SciPhi-AI&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;5.4K GitHub Stars, 400 Forks&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0edv5r2nflsfebe9cdmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0edv5r2nflsfebe9cdmd.png" alt="R2R" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;R2R: &lt;a href="https://r2r-docs.sciphi.ai/introduction" rel="noopener noreferrer"&gt;https://r2r-docs.sciphi.ai/introduction&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;R2R is an advanced AI retrieval system that implements agentic Retrieval-Augmented Generation (RAG) with a RESTful API, developed by SciPhi-AI.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agentic RAG System&lt;/strong&gt;: Combines retrieval systems with generation capabilities to provide comprehensive responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RESTful API&lt;/strong&gt;: Offers a standardized API for easy integration into various applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced Retrieval Mechanisms&lt;/strong&gt;: Utilizes sophisticated algorithms to fetch relevant information efficiently.&lt;/li&gt;
&lt;/ul&gt;
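&lt;p&gt;To make the "agentic RAG" idea above concrete, here is a toy retrieve-then-generate loop in plain Python. This is an illustrative stand-in, not R2R's implementation: a real retriever would use vector search, and the generator would call an LLM.&lt;/p&gt;

```python
# Toy sketch of the retrieve-then-generate loop behind agentic RAG.
# Hypothetical stand-in, not R2R's implementation: a real retriever
# would use vector search, and generate() would call an LLM.

DOCS = {
    "doc1": "DeepSeek R1 is a reasoning-focused open-weight language model.",
    "doc2": "RAG combines retrieval with generation for grounded answers.",
}

def tokens(text):
    """Lowercase, punctuation-stripped terms."""
    return {t.strip("?.,!") for t in text.lower().split()}

def retrieve(query, docs):
    """Rank documents by naive keyword overlap with the query."""
    q = tokens(query)
    scored = sorted(
        ((len(q.intersection(tokens(text))), doc_id) for doc_id, text in docs.items()),
        reverse=True,
    )
    return [doc_id for overlap, doc_id in scored if overlap]

def generate(query, context):
    """Stand-in for the LLM call: return grounded text plus citations."""
    return {"answer": " ".join(context.values()), "citations": list(context)}

def rag(query, docs):
    hits = retrieve(query, docs)
    context = {doc_id: docs[doc_id] for doc_id in hits}
    return generate(query, context)

result = rag("What is DeepSeek R1?", DOCS)
```

&lt;p&gt;The point of the sketch: retrieval happens first, and the generation step only sees the retrieved context, which is what lets the answer carry citations.&lt;/p&gt;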

&lt;h4&gt;
  
  
  🔹Use Cases
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Intelligent Search Engines&lt;/li&gt;
&lt;li&gt;Content Generation&lt;/li&gt;
&lt;li&gt;Research Assistance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Choose R2R?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Comprehensive AI Retrieval: Offers advanced retrieval capabilities, making it suitable for complex information retrieval tasks.&lt;/li&gt;
&lt;li&gt;Easy Integration: The RESTful API design allows for seamless integration into existing systems.&lt;/li&gt;
&lt;li&gt;Open-Source Community: Being open-source, it benefits from community contributions and continuous improvements.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/SciPhi-AI" rel="noopener noreferrer"&gt;
        SciPhi-AI
      &lt;/a&gt; / &lt;a href="https://github.com/SciPhi-AI/R2R" rel="noopener noreferrer"&gt;
        R2R
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      SoTA production-ready AI retrieval system. Agentic Retrieval-Augmented Generation (RAG) with a RESTful API.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;a rel="noopener noreferrer" href="https://private-user-images.githubusercontent.com/68796651/427579002-10b530a6-527f-4335-b2e4-ceaa9fc1219f.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzIyMTQ3NzIsIm5iZiI6MTc3MjIxNDQ3MiwicGF0aCI6Ii82ODc5NjY1MS80Mjc1NzkwMDItMTBiNTMwYTYtNTI3Zi00MzM1LWIyZTQtY2VhYTlmYzEyMTlmLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAyMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMjI3VDE3NDc1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTIyNGRhZTM5ZmI1MjlkYmUzN2Q4ZjM3MjMwODFhNDYzNTlmZmMxZDc2YWRjMzk0OTE4OGJlYWJkZTU3ZWMxOTQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0._PZSfbHDznBQAsoa76j-ut6zBxxDyWmCMkD42hNfFbI"&gt;&lt;img width="1217" alt="Screenshot 2025-03-27 at 6 35 02 AM" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprivate-user-images.githubusercontent.com%2F68796651%2F427579002-10b530a6-527f-4335-b2e4-ceaa9fc1219f.png%3Fjwt%3DeyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzIyMTQ3NzIsIm5iZiI6MTc3MjIxNDQ3MiwicGF0aCI6Ii82ODc5NjY1MS80Mjc1NzkwMDItMTBiNTMwYTYtNTI3Zi00MzM1LWIyZTQtY2VhYTlmYzEyMTlmLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAyMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMjI3VDE3NDc1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTIyNGRhZTM5ZmI1MjlkYmUzN2Q4ZjM3MjMwODFhNDYzNTlmZmMxZDc2YWRjMzk0OTE4OGJlYWJkZTU3ZWMxOTQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0._PZSfbHDznBQAsoa76j-ut6zBxxDyWmCMkD42hNfFbI"&gt;&lt;/a&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;
The most advanced AI retrieval system
&lt;/h3&gt;
&lt;p&gt;Agentic Retrieval-Augmented Generation (RAG) with a RESTful API.&lt;/p&gt;
&lt;/div&gt;
&lt;div&gt;
   &lt;div&gt;
      &lt;a href="https://r2r-docs.sciphi.ai/" rel="nofollow noopener noreferrer"&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/a&gt; ·
      &lt;a href="https://github.com/SciPhi-AI/R2R/issues/new?assignees=&amp;amp;labels=&amp;amp;projects=&amp;amp;template=bug_report.md&amp;amp;title=" rel="noopener noreferrer"&gt;&lt;strong&gt;Report Bug&lt;/strong&gt;&lt;/a&gt; ·
      &lt;a href="https://github.com/SciPhi-AI/R2R/issues/new?assignees=&amp;amp;labels=&amp;amp;projects=&amp;amp;template=feature_request.md&amp;amp;title=" rel="noopener noreferrer"&gt;&lt;strong&gt;Feature Request&lt;/strong&gt;&lt;/a&gt; ·
      &lt;a href="https://discord.gg/p6KqD2kjtB" rel="nofollow noopener noreferrer"&gt;&lt;strong&gt;Discord&lt;/strong&gt;&lt;/a&gt;
   &lt;/div&gt;
   &lt;br&gt;
   &lt;p&gt;
    &lt;a href="https://r2r-docs.sciphi.ai" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e4e22ab7227226b37394c48ddc16a52ae17ce8f7667807c989473e40071cdfa4/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f646f63732e7363697068692e61692d334631364534" alt="Docs"&gt;&lt;/a&gt;
    &lt;a href="https://discord.gg/p6KqD2kjtB" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/7dbfa722eb3a9e5aab64dcf9303d098996ac88c5e63ed0c8b9413aa00c49dbe7/68747470733a2f2f696d672e736869656c64732e696f2f646973636f72642f313132303737343635323931353130353933343f7374796c653d736f6369616c266c6f676f3d646973636f7264" alt="Discord"&gt;&lt;/a&gt;
    &lt;a href="https://github.com/SciPhi-AI" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3847f7b15e41157864b973eeb349bd6fa06c7095f192ee2dcede739a832ce5c9/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f5363695068692d41492f523252" alt="Github Stars"&gt;&lt;/a&gt;
    &lt;a href="https://github.com/SciPhi-AI/R2R/pulse" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b772b919a06a0da19926fd105caad6c8406a006c3001734aec3865f302df505c/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f636f6d6d69742d61637469766974792f772f5363695068692d41492f523252" alt="Commits-per-week"&gt;&lt;/a&gt;
    &lt;a href="https://opensource.org/licenses/MIT" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e9f356b8051ecbbe01b65a70ff4c1ad63d9ef2ca3cb465b6f2ad1bafb1080bb4/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d4d49542d707572706c652e737667" alt="License: MIT"&gt;&lt;/a&gt;
  &lt;/p&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;About&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;R2R is an advanced AI retrieval system supporting Retrieval-Augmented Generation (RAG) with production-ready features. Built around a RESTful API, R2R offers multimodal content ingestion, hybrid search, knowledge graphs, and comprehensive document management.&lt;/p&gt;
&lt;p&gt;R2R also includes a &lt;strong&gt;Deep Research API&lt;/strong&gt;, a multi-step reasoning system that fetches relevant data from your knowledgebase and/or the internet to deliver richer, context-aware answers for complex queries.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Usage&lt;/h1&gt;

&lt;/div&gt;
&lt;div class="highlight highlight-source-python notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-c"&gt;# Basic search&lt;/span&gt;
&lt;span class="pl-s1"&gt;results&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;client&lt;/span&gt;.&lt;span class="pl-c1"&gt;retrieval&lt;/span&gt;.&lt;span class="pl-c1"&gt;search&lt;/span&gt;(&lt;span class="pl-s1"&gt;query&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;"What is DeepSeek R1?"&lt;/span&gt;)
&lt;span class="pl-c"&gt;# RAG with citations&lt;/span&gt;
&lt;span class="pl-s1"&gt;response&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;client&lt;/span&gt;.&lt;span class="pl-c1"&gt;retrieval&lt;/span&gt;.&lt;span class="pl-c1"&gt;rag&lt;/span&gt;(&lt;span class="pl-s1"&gt;query&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;&lt;span class="pl-s"&gt;"What is DeepSeek R1?"&lt;/span&gt;)

&lt;span class="pl-c"&gt;# Deep Research RAG Agent&lt;/span&gt;
&lt;span class="pl-s1"&gt;response&lt;/span&gt; &lt;span class="pl-c1"&gt;=&lt;/span&gt; &lt;span class="pl-s1"&gt;client&lt;/span&gt;.&lt;span class="pl-c1"&gt;retrieval&lt;/span&gt;.&lt;span class="pl-c1"&gt;agent&lt;/span&gt;(
  &lt;span class="pl-s1"&gt;message&lt;/span&gt;&lt;span class="pl-c1"&gt;=&lt;/span&gt;{&lt;span class="pl-s"&gt;"role"&lt;/span&gt;:&lt;span class="pl-s"&gt;"user"&lt;/span&gt;, &lt;span class="pl-s"&gt;"content"&lt;/span&gt;: &lt;span class="pl-s"&gt;"What does deepseek r1&lt;/span&gt;&lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/SciPhi-AI/R2R" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;a href="https://github.com/SciPhi-AI/R2R" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star R2R on GitHub ⭐&lt;/a&gt;
&lt;/th&gt;
&lt;th&gt;
&lt;a href="https://discord.com/invite/p6KqD2kjtB" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;R2R Discord Server 💬&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Here's a Quick Recap (only for you) 🙈
&lt;/h2&gt;

&lt;p&gt;Below is a table summarizing all the RAG frameworks covered in this blog:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;th&gt;Use Cases&lt;/th&gt;
&lt;th&gt;Why Choose It?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/llmware-ai/llmware" rel="noopener noreferrer"&gt;LLMWare&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;End-to-end RAG pipeline, hybrid search, multi-LLM support&lt;/td&gt;
&lt;td&gt;Enterprise search, document Q&amp;amp;A, knowledge retrieval&lt;/td&gt;
&lt;td&gt;Highly optimized for unstructured data processing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/run-llama/llama_index" rel="noopener noreferrer"&gt;LlamaIndex&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Data connectors, structured retrieval, adaptive chunking&lt;/td&gt;
&lt;td&gt;RAG-based chatbots, document search, financial/legal data analysis&lt;/td&gt;
&lt;td&gt;Strong ecosystem with integrations and indexing optimizations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/deepset-ai/haystack" rel="noopener noreferrer"&gt;Haystack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Modular RAG, retrievers, rankers, scalable inference&lt;/td&gt;
&lt;td&gt;Enterprise AI assistants, Q&amp;amp;A systems, contextual document search&lt;/td&gt;
&lt;td&gt;Powerful for production-ready search applications&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/jina-ai" rel="noopener noreferrer"&gt;Jina AI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Neural search, multi-modal data, vector indexing&lt;/td&gt;
&lt;td&gt;AI-powered semantic search, image/video/text retrieval&lt;/td&gt;
&lt;td&gt;Scalable and fast for AI-driven search solutions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/truefoundry/cognita" rel="noopener noreferrer"&gt;Cognita&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;RAG with knowledge graphs, retrieval re-ranking&lt;/td&gt;
&lt;td&gt;AI-driven knowledge graphs, intelligent document search&lt;/td&gt;
&lt;td&gt;Advanced retrieval using structured and unstructured data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/infiniflow/ragflow" rel="noopener noreferrer"&gt;RAGFlow&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Graph-enhanced retrieval, hybrid search, deep document processing&lt;/td&gt;
&lt;td&gt;Legal, finance, research document retrieval&lt;/td&gt;
&lt;td&gt;Enterprise-ready, scalable, optimized for structured search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/neuml/txtai" rel="noopener noreferrer"&gt;txtAI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Lightweight RAG, embeddings-based retrieval, easy deployment&lt;/td&gt;
&lt;td&gt;Document similarity search, lightweight search engines&lt;/td&gt;
&lt;td&gt;Fast and simple RAG for developers needing flexibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/stanford-oval/storm" rel="noopener noreferrer"&gt;STORM&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Multi-hop retrieval, knowledge synthesis, LLM chaining&lt;/td&gt;
&lt;td&gt;AI-driven research assistants, contextual understanding&lt;/td&gt;
&lt;td&gt;Optimized for complex knowledge retrieval tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/pathwaycom/llm-app" rel="noopener noreferrer"&gt;LLM-App&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fast streaming RAG, parallel retrieval, scalable indexing&lt;/td&gt;
&lt;td&gt;Live AI chatbots, customer support automation&lt;/td&gt;
&lt;td&gt;Efficient RAG with fast response time for high-load applications&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/satellitecomponent/Neurite" rel="noopener noreferrer"&gt;Neurite&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Multi-agent reasoning, multi-modal retrieval&lt;/td&gt;
&lt;td&gt;Research assistance, AI-powered document analysis&lt;/td&gt;
&lt;td&gt;Supports multi-modal inputs and collaborative AI reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/SciPhi-AI/R2R" rel="noopener noreferrer"&gt;R2R&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Reasoning-based retrieval, automated knowledge extraction&lt;/td&gt;
&lt;td&gt;Scientific document processing, in-depth Q&amp;amp;A&lt;/td&gt;
&lt;td&gt;Tailored for complex and logical reasoning in RAG&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
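&lt;p&gt;Several frameworks in the table advertise &lt;strong&gt;hybrid search&lt;/strong&gt;. The core idea is simply blending a lexical (keyword) score with a vector-similarity score per document. A minimal sketch, with illustrative scores rather than any framework's real API:&lt;/p&gt;

```python
# Minimal sketch of hybrid search score fusion: blend a lexical
# (keyword) score with a vector-similarity score per document.
# The scores below are illustrative, not produced by a real engine.

def min_max(scores):
    """Normalize a score dict into the 0..1 range."""
    lo, hi = min(scores.values()), max(scores.values())
    span = hi - lo or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}

def hybrid_rank(lexical, vector, alpha=0.5):
    """alpha weights the lexical side; 1 - alpha weights the vector side."""
    lex, vec = min_max(lexical), min_max(vector)
    fused = {k: alpha * lex[k] + (1 - alpha) * vec[k] for k in lexical}
    return sorted(fused, key=fused.get, reverse=True)

lexical_scores = {"doc1": 12.0, "doc2": 3.0, "doc3": 0.5}
vector_scores = {"doc1": 0.20, "doc2": 0.90, "doc3": 0.85}
ranking = hybrid_rank(lexical_scores, vector_scores, alpha=0.5)
```

&lt;p&gt;Normalizing before fusing matters because BM25-style lexical scores and cosine similarities live on very different scales; without it, one signal silently dominates.&lt;/p&gt;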

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  But wait, why not use LangChain instead of a dedicated RAG framework?
&lt;/h2&gt;

&lt;p&gt;While LangChain is a powerful tool for working with LLMs, it is not a dedicated RAG framework. Here’s why a specialized RAG framework might be a better choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LangChain helps connect LLMs with different tools (vector databases, APIs, memory, etc.), but it does not specialize in optimizing retrieval-augmented generation (RAG).&lt;/li&gt;
&lt;li&gt;LangChain provides building blocks for RAG but lacks advanced retrieval mechanisms found in dedicated RAG frameworks.&lt;/li&gt;
&lt;li&gt;LangChain is good for prototypes, but handling large-scale document retrieval or enterprise-level applications often requires an optimized RAG framework.&lt;/li&gt;
&lt;/ul&gt;
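&lt;p&gt;To make the "building blocks" point concrete: that approach amounts to wiring a retriever, a prompt template, and a model call together yourself, while dedicated RAG frameworks bundle optimized retrieval behind a single call. A generic sketch of the hand-wired pipeline, with stubbed components rather than LangChain's (or any framework's) actual API:&lt;/p&gt;

```python
# Generic sketch of hand-wiring a RAG pipeline from building blocks:
# you supply and connect each stage yourself. All components are
# stubs, not LangChain's (or any framework's) actual API.

def retriever(query):
    """Stub retriever: a real one would query a vector database."""
    return ["RAG grounds LLM answers in retrieved documents."]

def prompt_template(query, context):
    """Assemble the grounding prompt by hand."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def llm(prompt):
    """Stub model call: a real one would hit an LLM API."""
    return "RAG grounds answers in retrieved documents."

def pipeline(query):
    context = retriever(query)                # stage 1: retrieval
    prompt = prompt_template(query, context)  # stage 2: prompting
    return llm(prompt)                        # stage 3: generation

answer = pipeline("What does RAG do?")
```

&lt;p&gt;Every stage here is your responsibility to tune; a dedicated RAG framework ships the retrieval and re-ranking machinery pre-optimized, which is exactly the gap described above.&lt;/p&gt;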

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Choosing the right framework 😉
&lt;/h2&gt;

&lt;p&gt;With a variety of open-source RAG frameworks available—each optimized for different use cases—choosing the right one depends on your specific needs, scalability requirements, and data complexity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you need a lightweight and developer-friendly solution, frameworks like &lt;strong&gt;txtAI&lt;/strong&gt; or &lt;strong&gt;LLM-App&lt;/strong&gt; are great choices.&lt;/li&gt;
&lt;li&gt;For enterprise-scale, structured retrieval, &lt;strong&gt;LLMWare&lt;/strong&gt;, &lt;strong&gt;RAGFlow&lt;/strong&gt;, &lt;strong&gt;LlamaIndex&lt;/strong&gt;, and &lt;strong&gt;Haystack&lt;/strong&gt; offer robust performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jina AI&lt;/strong&gt; and &lt;strong&gt;Neurite&lt;/strong&gt; are well suited if your focus is multi-modal data processing.&lt;/li&gt;
&lt;li&gt;For reasoning-based or knowledge graph-powered retrieval, &lt;strong&gt;Cognita&lt;/strong&gt;, &lt;strong&gt;R2R&lt;/strong&gt;, and &lt;strong&gt;STORM&lt;/strong&gt; stand out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, we're at the end of the blog. I hope you found it insightful. Save it for later; who knows when you'll need it!&lt;/p&gt;

&lt;p&gt;Follow me on GitHub&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://github.com/RS-labhub" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Favatars.githubusercontent.com%2Fu%2F117426013%3Fv%3D4%3Fs%3D400" height="460" class="m-0" width="460"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://github.com/RS-labhub" rel="noopener noreferrer" class="c-link"&gt;
            RS-labhub (Rohan Sharma) · GitHub
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Hi there!🧟
You're the most beautiful person I've ever met! - RS-labhub
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.githubassets.com%2Ffavicons%2Ffavicon.svg" width="32" height="32"&gt;
          github.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Thank you so much for reading! You're the most beautiful person I've ever met. I have a lot of trust in you. Keep believing in yourself, and one day you will become an inspiration for others. 💖&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>rag</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Creating and Managing a Next.js Event Planning Website with Daytona! 🐦‍🔥</title>
      <dc:creator>Rohan Sharma</dc:creator>
      <pubDate>Mon, 06 Jan 2025 04:30:03 +0000</pubDate>
      <link>https://dev.to/rohan_sharma/creating-and-managing-a-nextjs-event-planning-website-with-daytona-1n29</link>
      <guid>https://dev.to/rohan_sharma/creating-and-managing-a-nextjs-event-planning-website-with-daytona-1n29</guid>
      <description>&lt;p&gt;It's been a long time since I wrote a blog. Let's make this happen!&lt;/p&gt;

&lt;p&gt;This blog will focus on three things.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part a: Quick details about the event management website "&lt;strong&gt;Festigo&lt;/strong&gt;"&lt;/li&gt;
&lt;li&gt;Part b: What is &lt;strong&gt;Daytona&lt;/strong&gt;, and how does it make things easier?&lt;/li&gt;
&lt;li&gt;Part c: Event Planning System &lt;strong&gt;Overview&lt;/strong&gt; and &lt;strong&gt;Workflow&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This blog is going to be a little different, and a bit tutorial-style too. &lt;strong&gt;But I'm sure you're going to like it!&lt;/strong&gt; Let's start without wasting time!&lt;/p&gt;

&lt;p&gt;3... 2... 1... 🟢&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Festigo?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrgb0dku8rwb4wa8zpiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrgb0dku8rwb4wa8zpiv.png" alt="festigo" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Festigo&lt;/strong&gt; is an &lt;u&gt;event planning and event organizing website&lt;/u&gt; that brings guests, vendors, and event organizers (the people hosting the event) under a single umbrella. Apart from organizing different types of events, from birthdays to giveaways and beyond, our platform also streamlines the planning and coordination process for various events. With vendor management capabilities, vendors can efficiently handle multiple events across different dates simultaneously. Moreover, our platform empowers event organizers to manage multiple events concurrently, eliminating the need for them to juggle communication between various vendors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Next.js&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;Prisma ORM&lt;/li&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Festigo comes with these super features
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding Event Organizers&lt;/strong&gt;: Event organizers, such as couples planning a wedding, will need to register and create profiles to utilize the application effectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inviting Vendors&lt;/strong&gt;: Organizers can invite vendors from their contacts by sharing invitation links via email or text directly from the platform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Management&lt;/strong&gt;: Organizers can create and manage multiple events simultaneously, each with its own details and dates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborative Chat Spaces&lt;/strong&gt;: Users, including vendors and organizers, can create chat rooms for seamless coordination and synchronization during event planning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-on-One Chats&lt;/strong&gt;: Users can engage in direct messaging, allowing for private conversations between vendors and organizers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Event Navigation&lt;/strong&gt;: Both organizers and vendors can seamlessly navigate between multiple events and access event-specific details with ease.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event Scheduling&lt;/strong&gt;: The application will feature a calendar system for planning various events, complete with reminders and RSVP functionality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-Friendly Interface&lt;/strong&gt;: Festigo boasts a user-friendly and mobile-responsive design, ensuring effortless navigation across its different sections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guest List Management&lt;/strong&gt;: Organizers can create guest lists, invite guests to the platform, and track attendance through RSVPs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Github Link&lt;/strong&gt;: &lt;strong&gt;&lt;a href="https://github.com/RS-labhub/Festigo" rel="noopener noreferrer"&gt;https://github.com/RS-labhub/Festigo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Now let's start this blog!!! Yayyyyyyy!! 🙂&lt;/code&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Daytona??
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Have you ever heard of &lt;strong&gt;cloud-based development environments&lt;/strong&gt;??&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No??&lt;/p&gt;

&lt;p&gt;No worries. &lt;strong&gt;Cloud-based development environments&lt;/strong&gt; are those environments where you can &lt;strong&gt;create a container&lt;/strong&gt; or &lt;strong&gt;virtual machine&lt;/strong&gt; dedicated to &lt;strong&gt;development&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Didn't get it? Let me simplify it for you. &lt;/p&gt;

&lt;p&gt;A cloud-based development environment is a &lt;strong&gt;virtual workspace&lt;/strong&gt; accessible through the internet, where developers can code, test, and deploy applications using pre-configured tools, libraries, and infrastructure hosted on a cloud provider's servers.&lt;/p&gt;

&lt;p&gt;This was the only prerequisite for understanding Daytona! Now it will be easier to follow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Daytona is also a cloud-based development environment&lt;/strong&gt;. Unlike GitHub Codespaces, which supports only GitHub as a code host, Daytona supports multiple code hosts or git providers like GitHub, GitLab, Bitbucket, and Gitea (even more options than Gitpod).&lt;/p&gt;

&lt;p&gt;What's special about Daytona? &lt;strong&gt;Daytona is open-source under the Apache License 2.0 and supports VS Code and JetBrains IDEs&lt;/strong&gt;. Isn't that cool?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But why set up a virtual machine at all?&lt;/strong&gt;&lt;br&gt;
Try to find out yourself, and let me know what you get in the comment section. 😉&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/daytonaio/daytona" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Daytona on Github ⭐&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Let's try to set up our project on Daytona 🥹
&lt;/h2&gt;

&lt;p&gt;Daytona is easy to use and very simple to understand. But, the first time is always special. (⚠️ dark meme alert? No. Your mind is dirty. I'm very pure with my thoughts 🥹😂)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;:&lt;br&gt;
Install Daytona on your local machine (a one-time effort). Before installing, please read the &lt;a href="https://www.daytona.io/docs/about/getting-started/" rel="noopener noreferrer"&gt;system requirements&lt;/a&gt; first, or you'll end up throwing some beautiful curse words at me.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Operating System&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Installation Command&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Linux&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bash -c "$(curl -sf -L https://download.daytona.io/daytona/install.sh)" &amp;amp;&amp;amp; daytona server -y &amp;amp;&amp;amp; daytona&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;macOS&lt;/td&gt;
&lt;td&gt;&lt;code&gt;bash -c "$(curl -sf -L https://download.daytona.io/daytona/install.sh)" &amp;amp;&amp;amp; daytona server -y &amp;amp;&amp;amp; daytona&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Windows&lt;/td&gt;
&lt;td&gt;&lt;code&gt;$architecture = if ($env:PROCESSOR_ARCHITECTURE -eq "AMD64") { "amd64" } else { "arm64" }; md -Force "$Env:APPDATA\bin\daytona"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12'; Invoke-WebRequest -URI "https://download.daytona.io/daytona/v0.50/daytona-windows-$architecture.exe" -OutFile "$Env:APPDATA\bin\daytona\daytona.exe"; $env:Path += ";" + $Env:APPDATA + "\bin\daytona"; [Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User); daytona serve;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Homebrew&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;brew tap daytonaio/tap&lt;/code&gt; and then &lt;code&gt;brew install daytona&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nix&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nix-shell -p daytona-bin&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
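&lt;p&gt;Whichever route you took, it's worth a quick sanity check before moving on. Here's a minimal sketch (it assumes the install script finished without errors, and that &lt;code&gt;version&lt;/code&gt; is the subcommand your CLI build uses to report itself; if not, &lt;code&gt;daytona help&lt;/code&gt; lists what's available):&lt;/p&gt;

```shell
# Quick post-install sanity check: does the daytona binary resolve,
# and what version did we get? Guarded so it's safe to run either way.
if command -v daytona; then
  daytona version    # assumption: "version" is the version subcommand
else
  echo "daytona not found on PATH; re-check the install step"
fi
```

&lt;p&gt;If the binary isn't found, your shell's PATH probably doesn't include the install directory yet; open a new terminal and try again.&lt;/p&gt;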

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;:&lt;br&gt;
Now in this step, we are going to configure our freshly installed package. Again, a one-time effort. 😉&lt;/p&gt;

&lt;p&gt;1️⃣ Add a Git Provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; daytona git-providers add
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will allow you to perform Git operations without repeated authentication. When you run the above command, you'll get options like &lt;code&gt;GitHub&lt;/code&gt;, &lt;code&gt;GitLab&lt;/code&gt;, &lt;code&gt;Bitbucket&lt;/code&gt;, and &lt;code&gt;Other&lt;/code&gt;. Select the one you're comfortable with. Then you'll be asked to enter a personal access token. You can skip the steps after that. And you're done!&lt;/p&gt;
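&lt;p&gt;To double-check that the provider really got registered, you can list the configured ones. A small sketch (assumption: the CLI pairs &lt;code&gt;add&lt;/code&gt; with a &lt;code&gt;list&lt;/code&gt; subcommand; if your version differs, the help output of &lt;code&gt;daytona git-providers&lt;/code&gt; will show what's available):&lt;/p&gt;

```shell
# Confirm the Git provider registration went through.
# Assumption: "daytona git-providers list" exists alongside "add".
if command -v daytona; then
  daytona git-providers list
else
  echo "install daytona first (Step 1)"
fi
```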

&lt;p&gt;2️⃣ Install a Provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; daytona provider &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A provider is the interface between the Daytona Server and the platform that manages and deploys your workspaces. When you run this command, you'll be prompted with some options. Choose a provider based on your preference. (I'll go with the &lt;code&gt;docker provider&lt;/code&gt;.)&lt;/p&gt;

&lt;p&gt;3️⃣ Set a Target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; daytona target &lt;span class="nb"&gt;set&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Targets specify the precise location or platform where your environments will live. When you run the above command, you'll get a TUI (terminal user interface) form where you add the details of your selected provider. &lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;:&lt;br&gt;
If you're done with the above steps, then you're good to go! You did all the hard work, and now it's time to create the workspace where we'll be working.&lt;/p&gt;

&lt;p&gt;But before creating a workspace, we need to create a &lt;code&gt;devcontainer.json&lt;/code&gt;. I know you must be thinking, what the hell is this? A &lt;code&gt;devcontainer.json&lt;/code&gt; file in your project tells VS Code how to access (or create) a development container with a well-defined tool and runtime stack. This file is mandatory for Daytona to create a workspace. &lt;/p&gt;

&lt;p&gt;If you want to create a &lt;code&gt;devcontainer.json&lt;/code&gt; file, you can use &lt;strong&gt;&lt;a href="https://devcontainer.ai/" rel="noopener noreferrer"&gt;devcontainer.ai&lt;/a&gt;&lt;/strong&gt;, provided by Daytona itself. But remember, AI can make mistakes. Conventionally, we put the &lt;code&gt;devcontainer.json&lt;/code&gt; file inside a &lt;code&gt;.devcontainer&lt;/code&gt; folder. But why? Ask me in the comment section. 🙂&lt;/p&gt;

&lt;p&gt;Let's create a quick &lt;code&gt;devcontainer.json&lt;/code&gt; file for our festigo project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Festigo Dev Container"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcr.microsoft.com/devcontainers/javascript-node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"forwardPorts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"postCreateCommand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cd festigo &amp;amp;&amp;amp; npm install"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's jump inside this code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: It specifies the name of the container. It can be any name you like.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt;: It specifies the Docker image to be used for the container. &lt;code&gt;mcr.microsoft.com/devcontainers/javascript-node&lt;/code&gt; is a pre-configured Docker image designed for JavaScript and Node.js development. It includes necessary tools and dependencies for Node.js projects, provided by Microsoft.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;forwardPorts&lt;/code&gt;: It allows the specified port(s) on the container to be forwarded to your local machine.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;postCreateCommand&lt;/code&gt;: As the name suggests, it specifies the command to run after the container is created and set up.&lt;/li&gt;
&lt;/ol&gt;
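&lt;p&gt;Those four keys are all our project needs, but the Dev Container spec supports more. As a hedged sketch, here's the same file extended with a &lt;code&gt;customizations&lt;/code&gt; block that preinstalls editor extensions into the workspace (the two extension IDs are just illustrative picks, not something Festigo requires):&lt;/p&gt;

```json
{
    "name": "Festigo Dev Container",
    "image": "mcr.microsoft.com/devcontainers/javascript-node",
    "forwardPorts": [3000],
    "postCreateCommand": "cd festigo &amp;amp;&amp;amp; npm install",
    "customizations": {
        "vscode": {
            "extensions": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
        }
    }
}
```

&lt;p&gt;Anything under &lt;code&gt;customizations.vscode.extensions&lt;/code&gt; is installed automatically when the workspace is opened in VS Code, so the whole team gets the same linting and formatting setup for free.&lt;/p&gt;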

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;:&lt;br&gt;
It's time to create a workspace. We can do that by running the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;daytona create repo_url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case, it's going to be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;daytona create https://github.com/RS-labhub/Festigo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we are done!!! A workspace will be created, and we're ready to code and make changes inside it. You might find it difficult the first time, but the setup is a one-time effort, and after that you're good to go! You can create as many workspaces as you want at a time, and you'll never run out of credits.&lt;/p&gt;
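&lt;p&gt;Once the workspace exists, day-to-day management is a handful of short commands. A hedged sketch (the subcommand names below match the Daytona CLI's help output at the time of writing; double-check &lt;code&gt;daytona help&lt;/code&gt; on your version):&lt;/p&gt;

```shell
# Everyday workspace lifecycle, guarded so the snippet is safe to paste
# even on a machine without Daytona installed yet.
if command -v daytona; then
  daytona list               # show all workspaces and their state
  daytona code Festigo       # open the Festigo workspace in your IDE
  daytona delete Festigo     # remove the workspace when you are done
else
  echo "install daytona first (Step 1)"
fi
```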

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Festigo (Event Planning Website): Overview and Workflow 🎉
&lt;/h2&gt;

&lt;p&gt;I've already explained in the section above what Festigo is and what it does.&lt;/p&gt;

&lt;p&gt;Imagine you're planning an event, or you have to organize one. You've got to arrange a flower decorator, a DJ, and many more vendors. Coordinating between them all can be incredibly challenging without proper planning, especially if you're doing it for the first time. To solve this hectic problem, we, team Twilight Ties, introduce our application Festigo to save your time as well as your drudgery. Festigo serves as a centralized solution, seamlessly connecting you with all the necessary vendors and facilitating smooth coordination between them. Whether it's ensuring the flowers groove with the beats or orchestrating a perfectly synchronized schedule, our platform sweeps away the stress of juggling the many aspects of your special day.&lt;/p&gt;

&lt;p&gt;To solve this problem, we made Festigo. It was a team project for a hackathon, let's say X. Before explaining further, I want to thank my teammates. Thanks a lot, &lt;strong&gt;Niharika&lt;/strong&gt;, &lt;strong&gt;Himanshu&lt;/strong&gt;, and &lt;strong&gt;Adarsh&lt;/strong&gt;, for your awesome work. You guys are super awesome!&lt;/p&gt;

&lt;p&gt;My role in building Festigo was to design the full interface and then contribute to the frontend. For the design, I opted for Figma. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hq475eucc8tbe6z227m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hq475eucc8tbe6z227m.png" alt="figma" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This image speaks a lot: a super short description of our project. But for various reasons, we weren't able to complete it fully. If you want to be a part of this awesome project, comment below and we'll work together to finish it. It solves a real-world problem, so I don't need to give further justification for why you should work on it.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow 🌊
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x1z1b2j1tmeshexnsfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x1z1b2j1tmeshexnsfo.png" alt="User Flow" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how we're planning to build it. The concept is simple, as we don't want to complicate things at this stage. That said, we already know some corner cases and edge cases that will cause panic during implementation. But let's hope for the best.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  How it should look: The UI 🤔
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95pp4ar7ojdrhm1xkwkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95pp4ar7ojdrhm1xkwkh.png" alt="figma" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The design is final to a large extent, but we welcome suggestions, and changes will be made. The design is just a quick reference, not a professional one, so please don't judge it. I made it with a lot of effort, and within a week, you know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figma Prototype link&lt;/strong&gt;: &lt;strong&gt;&lt;a href="https://www.figma.com/proto/6nLRk40BpOgl4lNwFqCQTe/Event-Website?node-id=1131-5517&amp;amp;t=BhqWDa27AAvElKz6-0&amp;amp;scaling=scale-down&amp;amp;page-id=1131%3A5516&amp;amp;starting-point-node-id=1131%3A5517" rel="noopener noreferrer"&gt;Festigo Full Working Figma Prototype&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And here's the &lt;strong&gt;explanation of the design&lt;/strong&gt;: &lt;strong&gt;&lt;a href="https://www.youtube.com/watch?v=Paq7I1Ru22s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=Paq7I1Ru22s&lt;/a&gt;&lt;/strong&gt; (it will make it easier to understand what we're trying to achieve)&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Engineered Prototype: Work in Progress 🙂
&lt;/h3&gt;

&lt;p&gt;Of course, if we have a Git repo, it means we have some work done!&lt;/p&gt;

&lt;p&gt;If you're having trouble getting started, then &lt;strong&gt;watch this YouTube video&lt;/strong&gt;:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/6-d2u9N8SxQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;So, let's set up Daytona and work together on this beautiful project. Most of the work is done; we just want to finish it and launch it in the market.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  A little story of Daytona and me that made me a bounty hunter [optional to read]
&lt;/h2&gt;

&lt;p&gt;Again, I got to know about Daytona through &lt;strong&gt;Quira&lt;/strong&gt;. If you still don't know about Quira, then read this blog: &lt;a href="https://dev.to/rohan_sharma/quira-monetise-your-open-source-work-10e3"&gt;https://dev.to/rohan_sharma/quira-monetise-your-open-source-work-10e3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I saw that the project is open source, so I tried to contribute. After some contributions, the project got funded, and they started adding a bounty to almost every issue. That's where I started hunting bounties. I've solved almost 10 GitHub issues for Daytona (not a great number, I know), of which around 6-7 came with a bounty.&lt;/p&gt;

&lt;p&gt;Here's a list of all my successful PRs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;fix: properties missing from server config view&lt;/code&gt;: &lt;a href="https://lnkd.in/gsgFTibR" rel="noopener noreferrer"&gt;https://lnkd.in/gsgFTibR&lt;/a&gt;
(claimed bounty 💃)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fix: Using git providers hosted without HTTPS fails to clone repositories&lt;/code&gt;: &lt;a href="https://lnkd.in/gQ2-msnV" rel="noopener noreferrer"&gt;https://lnkd.in/gQ2-msnV&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fix: devcontainer config file name validation&lt;/code&gt;: &lt;a href="https://lnkd.in/gzBy8bFk" rel="noopener noreferrer"&gt;https://lnkd.in/gzBy8bFk&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bug fix: sudo dockerd removed&lt;/code&gt;: &lt;a href="https://lnkd.in/gzHMNjah" rel="noopener noreferrer"&gt;https://lnkd.in/gzHMNjah&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fix: quitting the profile TUI results in FATAL_ERR&lt;/code&gt;: &lt;a href="https://lnkd.in/gimd_njj" rel="noopener noreferrer"&gt;https://lnkd.in/gimd_njj&lt;/a&gt;
(claimed bounty 💃)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Fix: Status code issue&lt;/code&gt;: &lt;a href="https://lnkd.in/gDTg4D_9" rel="noopener noreferrer"&gt;https://lnkd.in/gDTg4D_9&lt;/a&gt; (claimed bounty 💃)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And 2-3 others. I guess I didn't post them on LinkedIn, or else I'd be able to find them. Haha, no worries. 😊&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  And the part you've all been waiting for: #Conclusion ☠️
&lt;/h2&gt;

&lt;p&gt;Daytona is a super cool cloud-based development environment (virtual machine) that lets you give your local system a rest and work in a more powerful environment.&lt;/p&gt;

&lt;p&gt;Daytona not only provides &lt;code&gt;your personalized workspace&lt;/code&gt; for free but also offers bounties for solving their GitHub issues. All you need is some expertise in &lt;code&gt;Go&lt;/code&gt;. Again, don't try to spam.&lt;/p&gt;

&lt;p&gt;Please support them, and at least give them a star.&lt;br&gt;
&lt;a href="https://github.com/daytonaio/daytona" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Star Daytona on Github ⭐&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;Also, you can follow me on GitHub. &lt;br&gt;
&lt;a href="https://github.com/RS-labhub" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer"&gt;Follow me on GitHub ⭐&lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;That's all for today. Thank you so much for your time. You're doing well in your life. We all feel motivated at some moment in time, but this is where you have to keep moving. &lt;strong&gt;Have some faith and trust in yourself&lt;/strong&gt;. The path you're walking is not simple. Many people are proud of you, including me. 💘&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>nextjs</category>
    </item>
  </channel>
</rss>
