
Dhruvin Rupesh Soni

Posted on • Originally published at dhruvinrsoni.github.io

How I Built an LLM Chat UI Better Than ChatGPT (As a Backend Engineer)

πŸ€” The Frustration That Started It All {#the-frustration}

When ChatGPT launched, I tried every LLM chat UI I could find.

Each had brilliant featuresβ€”but they were scattered:

  • Gemini had beautiful inline editing
  • ChatGPT had useful archiving
  • Copilot had smooth keyboard shortcuts

I wanted them all. In one place.

So I scribbled 20 features on a list and thought:

"This will take half an hour."

Six months later, I shipped Samvada Studio with:

  • βœ… 35+ features
  • βœ… 6 LLM providers
  • βœ… 15,000 lines of documentation
  • βœ… 100% local-first, security-first architecture

Here's the complete technical story.


πŸ“ The Original Vision{#the-vision}

This is the actual list I wrote (typos and all):

[Screenshot of THE_BEGINNING.md from the repo]

Let me break down how I implemented each feature.

Feature 1: Inline Editing (Gemini-Inspired)

Gemini lets you edit AI responses directly. Brilliant UXβ€”you're iterating in place, no copy-paste needed.

My Implementation:

// PromptResponseItem.tsx
const [isEditing, setIsEditing] = useState(false);
const [editedContent, setEditedContent] = useState(response.content);

const handleSave = () => {
  dispatch({
    type: 'UPDATE_RESPONSE',
    payload: {
      chatId,
      pnrId,
      responseId: response.id,
      newContent: editedContent
    }
  });
  setIsEditing(false);
};

return (
  <div>
    {isEditing ? (
      <textarea
        value={editedContent}
        onChange={(e) => setEditedContent(e.target.value)}
        className="w-full p-2 border rounded"
      />
    ) : (
      <ReactMarkdown>{response.content}</ReactMarkdown>
    )}
    <button onClick={() => (isEditing ? handleSave() : setIsEditing(true))}>
      {isEditing ? 'πŸ’Ύ' : '✏️'}
    </button>
  </div>
);

Key Decision: Use controlled components for immediate state updates.

Feature 2: Archive System (ChatGPT-Inspired)

ChatGPT's archive feature is perfect for decluttering. I implemented it with a simple boolean flag.

// ChatContext.tsx
interface Chat {
  id: string;
  title: string;
  isArchived: boolean; // Simple!
  prompts: PromptResponse[];
}

// Toggle archive
const archiveChat = (chatId: string) => {
  dispatch({
    type: 'TOGGLE_ARCHIVE',
    payload: { chatId }
  });
};

Sidebar filter:

const visibleChats = showArchived 
  ? chats 
  : chats.filter(chat => !chat.isArchived);

[Continue with remaining 18 features...]


πŸ› οΈ Tech Stack Decision {#tech-stack}

As a backend engineer, I had zero React experience. Here's why I chose what I chose:

React 18 + TypeScript

Why React?

  • Component-based architecture (maps to my OOP brain)
  • Huge ecosystem (solutions exist for everything)
  • PWA support (I wanted installable app)
  • Job market relevance (learning investment pays off)

Why TypeScript?

// This catches bugs BEFORE runtime
interface PromptResponse {
  id: string;
  prompt: Message;
  responses: Message[];
  timestamp: number;
  // If I forget a field, TypeScript yells at me
}

// ❌ This won't compile:
const pnr: PromptResponse = {
  id: '123',
  prompt: { content: 'Hello' }
  // Missing: responses, timestamp
};

Type safety = fewer bugs = happier users.

Vite

Why not Create React App?

Metric            Vite   CRA
Cold start        0.5s   15s
Hot reload        50ms   2s
Production build  6s     60s

Vite is at least 10x faster on every metric. Life's too short for slow builds.

Tailwind CSS

Why not styled-components?

// ❌ styled-components: Create separate component
const Button = styled.button`
  padding: 0.5rem 1rem;
  background: blue;
  color: white;
`;

// βœ… Tailwind: Inline, fast, no context switching
<button className="px-4 py-2 bg-blue-500 text-white">
  Click me
</button>

Tailwind = rapid prototyping. Dark mode is built-in:

<div className="bg-white dark:bg-gray-900">
  {/* Automatic theme support */}
</div>
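
The toggle itself is tiny, assuming darkMode: 'class' in tailwind.config.js. A sketch:

// Sketch: flip Tailwind's dark: variants by toggling a class on <html>
function toggleDarkMode() {
  document.documentElement.classList.toggle('dark');
}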

πŸ—οΈ Architecture: Context API vs Redux {#architecture}

Everyone said "use Redux for complex state."

I said no. Here's why.

The State Structure

interface AppState {
  chats: Chat[];
  contextPanels: ContextPanel[];
  settings: Settings;
  templates: PromptTemplate[];
  folders: Folder[];
}

For this, Redux is overkill. Context API + useReducer gives me:

Actions are typed:

type ChatAction =
  | { type: 'ADD_CHAT'; payload: Chat }
  | { type: 'DELETE_CHAT'; payload: { chatId: string } }
  | { type: 'UPDATE_RESPONSE'; payload: UpdateResponsePayload }
  | { type: 'TOGGLE_ARCHIVE'; payload: { chatId: string } };

Reducer is predictable:

function chatReducer(state: AppState, action: ChatAction): AppState {
  switch (action.type) {
    case 'ADD_CHAT':
      return {
        ...state,
        chats: [...state.chats, action.payload]
      };

    case 'DELETE_CHAT':
      return {
        ...state,
        chats: state.chats.filter(c => c.id !== action.payload.chatId)
      };

    // ... other cases

    default:
      return state;
  }
}
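
One of the elided cases is the archive toggle from Feature 2. A sketch of how it might look:

case 'TOGGLE_ARCHIVE':
  return {
    ...state,
    // Flip the flag immutably so React re-renders the sidebar
    chats: state.chats.map(chat =>
      chat.id === action.payload.chatId
        ? { ...chat, isArchived: !chat.isArchived }
        : chat
    )
  };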

No boilerplate:

  • No action creators
  • No thunks
  • No middleware
  • No devtools setup

Just dispatch and done.
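
For completeness, the provider wiring is only a few lines. A minimal sketch, assuming the AppState, ChatAction, chatReducer, and initialState definitions above are in scope (the repo's actual file may differ):

// ChatContext.tsx (sketch)
import { createContext, useContext, useReducer, Dispatch, ReactNode } from 'react';

const ChatContext = createContext<{
  state: AppState;
  dispatch: Dispatch<ChatAction>;
} | null>(null);

export function ChatProvider({ children }: { children: ReactNode }) {
  const [state, dispatch] = useReducer(chatReducer, initialState);
  return (
    <ChatContext.Provider value={{ state, dispatch }}>
      {children}
    </ChatContext.Provider>
  );
}

// Consumers call useChat() instead of touching the context directly
export function useChat() {
  const ctx = useContext(ChatContext);
  if (!ctx) throw new Error('useChat must be used inside ChatProvider');
  return ctx;
}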

When to use Redux:

  • App with 100+ components
  • Complex async logic
  • Time-travel debugging needed

For most apps? Context API is enough.
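
Local-first also means chats should survive a refresh. Persisting reducer state is one small effect. A sketch (the 'samvada_chats' key is illustrative, and API keys deliberately stay out of it, as the next section explains):

// Sketch: persist chats (never API keys) across sessions
import { useEffect } from 'react';

function usePersistedChats(chats: Chat[]) {
  useEffect(() => {
    localStorage.setItem('samvada_chats', JSON.stringify(chats));
  }, [chats]);
}

// On startup, useReducer's lazy initializer can hydrate from the same key:
// useReducer(chatReducer, initialState, (init) => ({
//   ...init,
//   chats: JSON.parse(localStorage.getItem('samvada_chats') ?? '[]')
// }));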


πŸ” Security: Non-Negotiable Principles {#security}

Most LLM UIs store API keys in localStorage.

This is dangerous.

The Problem with localStorage

// ❌ Common approach (INSECURE)
localStorage.setItem('openai_api_key', apiKey);

Risks:

  1. XSS attacks: Malicious script can read localStorage
  2. Browser extensions: Any extension can access it
  3. Dev tools: Anyone with physical access can view
  4. Syncing: May sync across devices unintentionally

My Solution: In-Memory Only

// βœ… Secure approach
const [providers, setProviders] = useState<LLMProvider[]>([]);

interface LLMProvider {
  type: 'openai' | 'anthropic' | 'google';
  endpoint: string;
  apiKey: string; // In component state, never persisted
  models: string[];
}

When user closes tab: API key is gone. They re-enter next session.

Trade-off: Convenience vs Security. I chose security.

HTTPS Enforcement

function validateEndpoint(endpoint: string, type: ProviderType): boolean {
  const url = new URL(endpoint); // throws on malformed input

  // Exception: localhost for Ollama (local model).
  // Check the parsed hostname rather than substring-matching,
  // so "https://evil.com/localhost" can't slip through.
  if (type === 'ollama' && ['localhost', '127.0.0.1'].includes(url.hostname)) {
    return true;
  }

  // All cloud providers: HTTPS required
  if (url.protocol !== 'https:') {
    throw new Error('HTTPS required for security');
  }

  return true;
}

Why: API keys in HTTP = visible to anyone on network.

Input Sanitization

function sanitizePrompt(input: string): string {
  return input
    // Strip null bytes and control characters, but keep \t, \n and \r,
    // because users legitimately paste multi-line prompts
    .replace(/[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]/g, '')
    .trim();
}

Every user input goes through sanitization. Every. Single. Time.


πŸ”Œ Multi-Provider Abstraction {#multi-provider}

Supporting 6 LLM providers sounds complex. But with the right abstraction, it's trivial.

The Provider Interface

interface LLMProvider {
  type: 'openai' | 'anthropic' | 'google' | 'ollama' | 'azure' | 'custom';
  name: string;
  endpoint: string;
  apiKey: string;
  models: string[];
  defaultModel: string;
}

The Call Function

async function callLLM(
  provider: LLMProvider,
  prompt: string,
  history: Message[]
): Promise<LLMResponse> {

  // Validate first
  validateEndpoint(provider.endpoint, provider.type);

  // Route based on type
  switch (provider.type) {
    case 'openai':
      return await callOpenAI(provider, prompt, history);

    case 'anthropic':
      return await callAnthropic(provider, prompt, history);

    case 'google':
      return await callGoogle(provider, prompt, history);

    // ... other providers

    default:
      // Exhaustiveness guard: fail loudly on an unhandled provider type
      throw new Error(`Unsupported provider: ${provider.type}`);
  }
}
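
Each call* helper adapts one provider's request and response shapes. As an illustration, here is roughly what callAnthropic could look like against Anthropic's Messages API (field names per their public docs; error handling and CORS caveats omitted):

async function callAnthropic(
  provider: LLMProvider,
  prompt: string,
  history: Message[]
): Promise<LLMResponse> {
  const response = await fetch(`${provider.endpoint}/v1/messages`, {
    method: 'POST',
    headers: {
      'x-api-key': provider.apiKey,      // not a Bearer token, unlike OpenAI
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json'
    },
    body: JSON.stringify({
      model: provider.defaultModel,
      max_tokens: 1024,                  // required by this API
      messages: [...history, { role: 'user', content: prompt }]
    })
  });

  const data = await response.json();
  return {
    content: data.content[0].text,       // response shape differs from OpenAI's
    tokens: data.usage.input_tokens + data.usage.output_tokens
  };
}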

Adding a New Provider (20 Minutes)

  1. Add type to union (1 min):
type ProviderType = 'openai' | 'anthropic' | /* ...existing types... */ | 'newProvider';
  2. Add case to switch (1 min):
case 'newProvider':
  return await callNewProvider(provider, prompt, history);
  3. Implement API call (15 min):
async function callNewProvider(
  provider: LLMProvider,
  prompt: string,
  history: Message[]
): Promise<LLMResponse> {
  const response = await fetch(provider.endpoint, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${provider.apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: provider.defaultModel,
      messages: [...history, { role: 'user', content: prompt }]
    })
  });

  const data = await response.json();

  // Assuming an OpenAI-compatible response shape; adjust per provider
  return {
    content: data.choices[0].message.content,
    tokens: data.usage.total_tokens
  };
}
  4. Test (3 min):
npm run dev
# Add new provider in Admin
# Test connection
# Send a prompt

Done! That's the power of standardization.


✨ Feature Breakdown {#features}

Command Palette (Ctrl+K)

Inspired by VS Code. Matching is a simple case-insensitive substring filter:

const commands: Command[] = [
  { name: 'New Chat', action: createChat, shortcut: 'Ctrl+N' },
  { name: 'Global Search', action: openSearch, shortcut: 'Ctrl+Shift+F' },
  { name: 'Export Chat', action: openExport, shortcut: 'Ctrl+Shift+E' },
  // ... 20+ commands
];

const filteredCommands = commands.filter(cmd =>
  cmd.name.toLowerCase().includes(query.toLowerCase())
);
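
A true fuzzy matcher (VS Code style, where "nch" finds "New Chat") is only a few lines more. A sketch:

// Sketch: subsequence fuzzy match ("nch" matches "New Chat")
function fuzzyMatch(query: string, target: string): boolean {
  const q = query.toLowerCase();
  const t = target.toLowerCase();
  let i = 0;
  for (const ch of t) {
    if (i < q.length && ch === q[i]) i++;
  }
  return i === q.length;
}

Swap it into the filter above and the palette tolerates skipped characters, though not transpositions.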

Voice Input (Ctrl+M)

Uses the Web Speech API (built into Chromium-based browsers):

// SpeechRecognition is vendor-prefixed in Chromium, so grab whichever exists
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();

recognition.continuous = true;
recognition.interimResults = true;
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  const transcript = Array.from(event.results)
    .map(result => result[0].transcript)
    .join('');

  setPromptText(transcript);
};

recognition.start();

No external library needed!

Token Counter

Live estimation as you type:

function estimateTokens(text: string): number {
  // Rough estimate: 1 token β‰ˆ 4 characters
  // More accurate: Use tiktoken library
  return Math.ceil(text.length / 4);
}

function estimateCost(tokens: number, provider: ProviderType): number {
  const pricing = {
    'openai': 0.03 / 1000, // $0.03 per 1K tokens (GPT-4)
    'anthropic': 0.015 / 1000, // $0.015 per 1K tokens (Claude)
    'google': 0.00025 / 1000 // $0.00025 per 1K tokens (Gemini)
  };

  return tokens * (pricing[provider] || 0);
}
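
Wiring it into the composer is one derived value per keystroke. A sketch (TokenCounter is an illustrative name, not necessarily the repo's component):

// Sketch: live token/cost readout under the prompt box
function TokenCounter({ text, provider }: { text: string; provider: ProviderType }) {
  const tokens = estimateTokens(text);
  const cost = estimateCost(tokens, provider);
  return (
    <span className="text-xs text-gray-500">
      ~{tokens} tokens (β‰ˆ ${cost.toFixed(4)})
    </span>
  );
}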

πŸŽ“ Lessons Learned {#lessons}

Technical Lessons

1. TypeScript Catches Bugs Before Users Do

// ❌ Without TypeScript:
function updateChat(id, title) {
  // Oops, I passed a number instead of string
  // Bug discovered in production
}

// βœ… With TypeScript:
function updateChat(id: string, title: string) {
  // Compiler error if I pass wrong types
}

2. Context API is Enough for Most Apps

I was scared of state management. Everyone said "use Redux."

For Samvada Studio (35+ features, complex state), Context API was plenty.

When to reach for Redux (the same checklist from the architecture section): 100+ components sharing state, complex async logic, or a need for time-travel debugging.

Otherwise: Context API + useReducer.

3. Web APIs Are Amazing

No need for heavy libraries:

  • Speech Recognition: Built-in browser API
  • Text-to-Speech: Built-in browser API
  • PWA: Service workers + manifest.json
  • LocalStorage: Built-in, simple, works everywhere

4. Documentation is Marketing

I wrote 15,000+ lines of docs. People noticed.

Comments on GitHub:

  • "The documentation is incredible!"
  • "Finally, someone who explains the WHY"
  • "I learned more from your docs than the code"

Good docs = trust = users = contributors.

5. Security Upfront, Not Later

I could've stored API keys in localStorage (easy). But I knew it was wrong.

Doing security after is 10x harder. Do it first.


Product Lessons

1. Steal Like an Artist

I didn't invent new UX patterns. I studied the best:

  • Gemini's inline editing β†’ Copied
  • ChatGPT's archiving β†’ Copied
  • Copilot's command palette β†’ Copied

Result: Familiar UX, zero learning curve.

2. Quality > Speed

My original timeline: "Half an hour."

Reality: 6 months.

But every feature is polished. Every edge case handled. Every interaction smooth.

Fast and buggy or slow and excellent? I chose excellent.

3. Users Want Keyboard Shortcuts

Power users LOVE keyboards:

  • Ctrl+K: Command Palette
  • Ctrl+M: Voice Input
  • Ctrl+.: Text-to-Speech
  • Ctrl+Enter: Send Message

Adding shortcuts was trivial. User satisfaction? Massive.
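
"Trivial" is literal: one global listener covers them all. A sketch (the handler names are illustrative, and this would live inside a component or custom hook):

// Sketch: one keydown listener for every global shortcut
useEffect(() => {
  const onKeyDown = (e: KeyboardEvent) => {
    if (!e.ctrlKey) return;
    switch (e.key.toLowerCase()) {
      case 'k': e.preventDefault(); openCommandPalette(); break;
      case 'm': e.preventDefault(); startVoiceInput(); break;
      case '.': e.preventDefault(); speakLastResponse(); break;
    }
  };
  window.addEventListener('keydown', onKeyDown);
  return () => window.removeEventListener('keydown', onKeyDown);
}, []);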


Life Lessons

1. Backend Engineers Can Do Frontend

I was intimidated by React. "That's not my world."

But I learned:

  • Components = Objects with render methods
  • Props = Constructor parameters
  • State = Class fields
  • Hooks = Utility functions

It's all just programming.

2. Frustration = Fuel

I was annoyed with existing LLM UIs. That annoyance drove me to build something better.

What's annoying you right now? That might be your next project.

3. Preserve Your Origin Story

I almost deleted my original feature list. "It's messy, incomplete, unprofessional."

But I kept it. Now it's in the repo as THE_BEGINNING.md.

People love origin stories. They're relatable, human, inspiring.

Don't hide your beginnings. Celebrate them.


πŸš€ Try It Yourself {#try-it}

Samvada Studio is open source and free.

Quick Start

# Clone the repo
git clone https://github.com/dhruvinrsoni/samvada-studio.git
cd samvada-studio

# Install dependencies
npm install

# Start development server
npm run dev

# Open http://localhost:5173

What You Get

βœ… 35+ power-user features
βœ… 6 LLM providers (OpenAI, Anthropic, Google, Ollama, Azure, Custom)
βœ… 100% local-first (your data stays with you)
βœ… Security-first (API keys in memory only)
βœ… Comprehensive docs (15,000+ lines)

Perfect For

  • Developers who need keyboard-first interfaces
  • Prompt engineers who test multiple providers
  • Researchers who need organized conversations
  • Content pros who value privacy

🀝 Contributing

Found a bug? Have a feature idea?

Open an issue: github.com/dhruvinrsoni/samvada-studio/issues

Submit a PR: We have comprehensive contributing guidelines

Feature 21 is open! What should the next feature be?


🎯 Conclusion

What started as "a half-hour project" became 6 months of learning, building, and polishing.

Was it worth it? Absolutely.

I went from zero React knowledge to shipping a production-ready app.

I learned about security, architecture, and what makes great UX.

And I built something I use every single day.

Your frustration might be the start of your next project.

What are you waiting for?


Questions? Drop them in the comments. I'll answer every one.

Found this helpful? Give Samvada Studio a ⭐ on GitHub!

πŸ”— GitHub: github.com/dhruvinrsoni/samvada-studio

🐦 Twitter: @dhruvinrsoni
πŸ’Ό LinkedIn: @dhruvinrsoni

Tags: #opensource #react #typescript #llm #webdevelopment #buildinpublic
