Table of Contents
- The Frustration That Started It All
- The Original Vision
- Tech Stack Decision
- Architecture: Context API vs Redux
- Security: Non-Negotiable Principles
- Multi-Provider Abstraction
- Feature Breakdown (20 Features)
- Lessons Learned
- Try It Yourself
The Frustration That Started It All {#the-frustration}
When ChatGPT launched, I tried every LLM chat UI I could find.
Each had brilliant features, but they were scattered:
- Gemini had beautiful inline editing
- ChatGPT had useful archiving
- Copilot had smooth keyboard shortcuts
I wanted them all. In one place.
So I scribbled 20 features on a list and thought:
"This will take half an hour."
Six months later, I shipped Samvada Studio with:
- ✅ 35+ features
- ✅ 6 LLM providers
- ✅ 15,000 lines of documentation
- ✅ 100% local-first, security-first architecture
Here's the complete technical story.
The Original Vision {#the-vision}
This is the actual list I wrote (typos and all):
[Insert screenshot of THE_BEGINNING.md from your repo]
Let me break down how I implemented each feature.
Feature 1: Inline Editing (Gemini-Inspired)
Gemini lets you edit AI responses directly. Brilliant UX: you're iterating in place, no copy-paste needed.
My Implementation:
```tsx
// PromptResponseItem.tsx
const [isEditing, setIsEditing] = useState(false);
const [editedContent, setEditedContent] = useState(response.content);

const handleSave = () => {
  dispatch({
    type: 'UPDATE_RESPONSE',
    payload: {
      chatId,
      pnrId,
      responseId: response.id,
      newContent: editedContent
    }
  });
  setIsEditing(false);
};

return (
  <div>
    {isEditing ? (
      <textarea
        value={editedContent}
        onChange={(e) => setEditedContent(e.target.value)}
        className="w-full p-2 border rounded"
      />
    ) : (
      <ReactMarkdown>{response.content}</ReactMarkdown>
    )}
    {/* Save when editing; otherwise enter edit mode */}
    <button onClick={() => (isEditing ? handleSave() : setIsEditing(true))}>
      {isEditing ? '💾' : '✏️'}
    </button>
  </div>
);
```
Key Decision: Use controlled components for immediate state updates.
Feature 2: Archive System (ChatGPT-Inspired)
ChatGPT's archive feature is perfect for decluttering. I implemented it with a simple boolean flag.
```typescript
// ChatContext.tsx
interface Chat {
  id: string;
  title: string;
  isArchived: boolean; // Simple!
  prompts: PromptResponse[];
}

// Toggle archive
const archiveChat = (chatId: string) => {
  dispatch({
    type: 'TOGGLE_ARCHIVE',
    payload: { chatId }
  });
};
```
Sidebar filter:
```typescript
const visibleChats = showArchived
  ? chats
  : chats.filter(chat => !chat.isArchived);
```
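The sidebar filter pairs with a `TOGGLE_ARCHIVE` case in the reducer. A minimal, self-contained sketch of that case (shapes trimmed down for the example; not the exact production code):

```typescript
// Minimal shapes for this sketch
interface Chat { id: string; title: string; isArchived: boolean }
interface State { chats: Chat[] }
type Action = { type: 'TOGGLE_ARCHIVE'; payload: { chatId: string } };

function chatReducer(state: State, action: Action): State {
  switch (action.type) {
    case 'TOGGLE_ARCHIVE':
      // Flip the flag on the matching chat; everything else is untouched
      return {
        ...state,
        chats: state.chats.map(chat =>
          chat.id === action.payload.chatId
            ? { ...chat, isArchived: !chat.isArchived }
            : chat
        )
      };
    default:
      return state;
  }
}

// Toggling twice returns a chat to its original state
const initial: State = { chats: [{ id: 'a', title: 'Demo', isArchived: false }] };
const once = chatReducer(initial, { type: 'TOGGLE_ARCHIVE', payload: { chatId: 'a' } });
```

Because the same action both archives and unarchives, "Unarchive" needs no extra action type.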
[Continue with remaining 18 features...]
Tech Stack Decision {#tech-stack}
As a backend engineer, I had zero React experience. Here's why I chose what I chose:
React 18 + TypeScript
Why React?
- Component-based architecture (maps to my OOP brain)
- Huge ecosystem (solutions exist for everything)
- PWA support (I wanted installable app)
- Job market relevance (learning investment pays off)
Why TypeScript?
```typescript
// This catches bugs BEFORE runtime
interface PromptResponse {
  id: string;
  prompt: Message;
  responses: Message[];
  timestamp: number;
  // If I forget a field, TypeScript yells at me
}

// ❌ This won't compile:
const pnr: PromptResponse = {
  id: '123',
  prompt: { content: 'Hello' }
  // Missing: responses, timestamp
};
```
Type safety = fewer bugs = happier users.
Vite
Why not Create React App?
| Metric | Vite | CRA |
|---|---|---|
| Cold start | 0.5s | 15s |
| Hot reload | 50ms | 2s |
| Production build | 6s | 60s |
Vite is 10-40x faster on every metric. Life's too short for slow builds.
Tailwind CSS
Why not styled-components?
```tsx
// ❌ styled-components: create a separate component
const Button = styled.button`
  padding: 0.5rem 1rem;
  background: blue;
  color: white;
`;

// ✅ Tailwind: inline, fast, no context switching
<button className="px-4 py-2 bg-blue-500 text-white">
  Click me
</button>
```
Tailwind = rapid prototyping. Dark mode is built-in:
```tsx
<div className="bg-white dark:bg-gray-900">
  {/* Automatic theme support */}
</div>
```
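Tailwind's `dark:` variants activate when a `dark` class sits on the root element. How the class gets there is up to you; one common pattern resolves an explicit user choice against the OS preference (a sketch with a hypothetical `resolveTheme` helper, not Samvada Studio's actual code):

```typescript
type ThemeSetting = 'light' | 'dark' | 'system';

// Decide which theme the root element should carry.
// In the browser, `systemPrefersDark` would come from
// window.matchMedia('(prefers-color-scheme: dark)').matches
function resolveTheme(setting: ThemeSetting, systemPrefersDark: boolean): 'light' | 'dark' {
  if (setting === 'system') return systemPrefersDark ? 'dark' : 'light';
  return setting;
}

// The result then drives something like:
// document.documentElement.classList.toggle('dark', resolveTheme(setting, prefersDark) === 'dark');
```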
Architecture: Context API vs Redux {#architecture}
Everyone said "use Redux for complex state."
I said no. Here's why.
The State Structure
```typescript
interface AppState {
  chats: Chat[];
  contextPanels: ContextPanel[];
  settings: Settings;
  templates: PromptTemplate[];
  folders: Folder[];
}
```
For this, Redux is overkill. Context API + useReducer gives me:
Actions are typed:
```typescript
type ChatAction =
  | { type: 'ADD_CHAT'; payload: Chat }
  | { type: 'DELETE_CHAT'; payload: { chatId: string } }
  | { type: 'UPDATE_RESPONSE'; payload: UpdateResponsePayload }
  | { type: 'TOGGLE_ARCHIVE'; payload: { chatId: string } };
```
Reducer is predictable:
```typescript
function chatReducer(state: AppState, action: ChatAction): AppState {
  switch (action.type) {
    case 'ADD_CHAT':
      return {
        ...state,
        chats: [...state.chats, action.payload]
      };
    case 'DELETE_CHAT':
      return {
        ...state,
        chats: state.chats.filter(c => c.id !== action.payload.chatId)
      };
    // ... other cases
    default:
      return state;
  }
}
```
No boilerplate:
- No action creators
- No thunks
- No middleware
- No devtools setup
Just dispatch and done.
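Because the reducer is a pure function, the whole state flow can be exercised outside React: fold a list of actions over an initial state and you get exactly the state the UI would see. A self-contained sketch (shapes trimmed down for the example):

```typescript
interface Chat { id: string; title: string }
interface State { chats: Chat[] }
type Action =
  | { type: 'ADD_CHAT'; payload: Chat }
  | { type: 'DELETE_CHAT'; payload: { chatId: string } };

function chatReducer(state: State, action: Action): State {
  switch (action.type) {
    case 'ADD_CHAT':
      return { ...state, chats: [...state.chats, action.payload] };
    case 'DELETE_CHAT':
      return { ...state, chats: state.chats.filter(c => c.id !== action.payload.chatId) };
    default:
      return state;
  }
}

// Replay a session: two chats added, the first one deleted
const finalState = ([
  { type: 'ADD_CHAT', payload: { id: '1', title: 'First' } },
  { type: 'ADD_CHAT', payload: { id: '2', title: 'Second' } },
  { type: 'DELETE_CHAT', payload: { chatId: '1' } }
] as Action[]).reduce(chatReducer, { chats: [] });
```

This is the same predictability Redux sells, without any of the setup.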
When to use Redux:
- App with 100+ components
- Complex async logic
- Time-travel debugging needed
For most apps? Context API is enough.
Security: Non-Negotiable Principles {#security}
Most LLM UIs store API keys in localStorage.
This is dangerous.
The Problem with localStorage
```typescript
// ❌ Common approach (INSECURE)
localStorage.setItem('openai_api_key', apiKey);
```
Risks:
- XSS attacks: Malicious script can read localStorage
- Browser extensions: Any extension can access it
- Dev tools: Anyone with physical access can view
- Syncing: May sync across devices unintentionally
My Solution: In-Memory Only
```typescript
// ✅ Secure approach
const [providers, setProviders] = useState<LLMProvider[]>([]);

interface LLMProvider {
  type: 'openai' | 'anthropic' | 'google';
  endpoint: string;
  apiKey: string; // In component state, never persisted
  models: string[];
}
```
When user closes tab: API key is gone. They re-enter next session.
Trade-off: Convenience vs Security. I chose security.
HTTPS Enforcement
```typescript
function validateEndpoint(endpoint: string, type: ProviderType): boolean {
  // Exception: localhost for Ollama (local model)
  if (type === 'ollama' && endpoint.includes('localhost')) {
    return true;
  }
  // All cloud providers: HTTPS required
  if (!endpoint.startsWith('https://')) {
    throw new Error('HTTPS required for security');
  }
  return true;
}
```
Why: an API key sent over plain HTTP is visible to anyone sniffing the network.
Input Sanitization
```typescript
function sanitizePrompt(input: string): string {
  return input
    .replace(/\u0000/g, '')          // Remove null bytes
    .replace(/[\x00-\x1F\x7F]/g, '') // Remove control chars
    .trim();
}
```
Every user input goes through sanitization. Every. Single. Time.
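To see it in action, here's the helper again as a runnable snippet (repeated so it stands alone), fed a prompt with a null byte, a tab, and a newline embedded:

```typescript
function sanitizePrompt(input: string): string {
  return input
    .replace(/\u0000/g, '')          // Remove null bytes
    .replace(/[\x00-\x1F\x7F]/g, '') // Remove control chars
    .trim();
}

// Control characters vanish, visible text survives, whitespace is trimmed
const cleaned = sanitizePrompt('  Hello\u0000 world\t\n');
```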
Multi-Provider Abstraction {#multi-provider}
Supporting 6 LLM providers sounds complex. But with the right abstraction, it's trivial.
The Provider Interface
```typescript
interface LLMProvider {
  type: 'openai' | 'anthropic' | 'google' | 'ollama' | 'azure' | 'custom';
  name: string;
  endpoint: string;
  apiKey: string;
  models: string[];
  defaultModel: string;
}
```
The Call Function
```typescript
async function callLLM(
  provider: LLMProvider,
  prompt: string,
  history: Message[]
): Promise<LLMResponse> {
  // Validate first
  validateEndpoint(provider.endpoint, provider.type);

  // Route based on type
  switch (provider.type) {
    case 'openai':
      return await callOpenAI(provider, prompt, history);
    case 'anthropic':
      return await callAnthropic(provider, prompt, history);
    case 'google':
      return await callGoogle(provider, prompt, history);
    // ... other providers
    default:
      throw new Error(`Unsupported provider: ${provider.type}`);
  }
}
```
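Per-provider functions exist because the request shapes genuinely differ: OpenAI-style APIs take an `Authorization: Bearer` header and return a `choices` array, while Anthropic's Messages API authenticates with `x-api-key` plus an `anthropic-version` header and requires `max_tokens`. A sketch of the request-building half (a hypothetical `buildAnthropicRequest` helper; the `fetch` itself is omitted):

```typescript
interface Message { role: 'user' | 'assistant'; content: string }

// Hypothetical helper: assembles the HTTP pieces for Anthropic's Messages API
function buildAnthropicRequest(apiKey: string, model: string, prompt: string, history: Message[]) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    headers: {
      'x-api-key': apiKey,               // not Authorization: Bearer
      'anthropic-version': '2023-06-01', // required API version header
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model,
      max_tokens: 1024, // required here, unlike OpenAI-style endpoints
      messages: [...history, { role: 'user', content: prompt }]
    })
  };
}

const req = buildAnthropicRequest('sk-test', 'claude-sonnet', 'Hi', []);
```

Keeping these differences inside one function per provider is what lets `callLLM` stay a three-line switch.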
Adding a New Provider (20 Minutes)
1. Add the type to the union (1 min):

```typescript
type ProviderType = 'openai' | 'anthropic' | 'newProvider';
```
2. Add a case to the switch (1 min):

```typescript
case 'newProvider':
  return await callNewProvider(provider, prompt, history);
```
3. Implement the API call (15 min):

```typescript
async function callNewProvider(
  provider: LLMProvider,
  prompt: string,
  history: Message[]
): Promise<LLMResponse> {
  const response = await fetch(provider.endpoint, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${provider.apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: provider.defaultModel,
      messages: [...history, { role: 'user', content: prompt }]
    })
  });

  if (!response.ok) {
    throw new Error(`Provider returned ${response.status}`);
  }

  const data = await response.json();
  return {
    content: data.choices[0].message.content,
    tokens: data.usage.total_tokens
  };
}
```
4. Test (3 min):

```bash
npm run dev
# Add new provider in Admin
# Test connection
# Send a prompt
```
Done! That's the power of standardization.
✨ Feature Breakdown {#features}
Command Palette (Ctrl+K)
Inspired by VS Code. A simple case-insensitive substring filter does the job (fuzzy matching is an easy upgrade later):
```typescript
const commands: Command[] = [
  { name: 'New Chat', action: createChat, shortcut: 'Ctrl+N' },
  { name: 'Global Search', action: openSearch, shortcut: 'Ctrl+Shift+F' },
  { name: 'Export Chat', action: openExport, shortcut: 'Ctrl+Shift+E' },
  // ... 20+ commands
];

const filteredCommands = commands.filter(cmd =>
  cmd.name.toLowerCase().includes(query.toLowerCase())
);
```
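Opening the palette is just a global keydown listener. The matching logic is easiest to test as a pure function over the event's fields (a sketch; `isCtrlK` and the trimmed event shape are illustrative, not the app's real names):

```typescript
// Minimal slice of KeyboardEvent that the check needs
interface KeyCombo { key: string; ctrlKey: boolean; metaKey: boolean }

// Ctrl+K on Windows/Linux, Cmd+K on macOS
function isCtrlK(e: KeyCombo): boolean {
  return (e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 'k';
}

// In the app:
// window.addEventListener('keydown', (e) => {
//   if (isCtrlK(e)) { e.preventDefault(); setPaletteOpen(true); }
// });
```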
Voice Input (Ctrl+M)
Uses the Web Speech API, built into Chromium-based browsers and Safari (Firefox doesn't support SpeechRecognition yet):
```typescript
const recognition = new (
  window.SpeechRecognition || window.webkitSpeechRecognition
)();
recognition.continuous = true;
recognition.interimResults = true;
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  const transcript = Array.from(event.results)
    .map(result => result[0].transcript)
    .join('');
  setPromptText(transcript);
};

recognition.start();
```
No external library needed!
Token Counter
Live estimation as you type:
```typescript
function estimateTokens(text: string): number {
  // Rough estimate: 1 token ≈ 4 characters
  // More accurate: use the tiktoken library
  return Math.ceil(text.length / 4);
}

function estimateCost(tokens: number, provider: ProviderType): number {
  const pricing = {
    'openai': 0.03 / 1000,     // $0.03 per 1K tokens (GPT-4)
    'anthropic': 0.015 / 1000, // $0.015 per 1K tokens (Claude)
    'google': 0.00025 / 1000   // $0.00025 per 1K tokens (Gemini)
  };
  return tokens * (pricing[provider] || 0);
}
```
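A worked example (the helpers repeated so the snippet runs standalone): a 120-character prompt estimates to 30 tokens, which at the GPT-4 rate above is $0.0009.

```typescript
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // rough: 1 token ≈ 4 characters
}

function estimateCost(tokens: number, provider: 'openai' | 'anthropic' | 'google'): number {
  const pricing = {
    openai: 0.03 / 1000,
    anthropic: 0.015 / 1000,
    google: 0.00025 / 1000
  };
  return tokens * pricing[provider];
}

const prompt = 'x'.repeat(120);        // a 120-character prompt
const tokens = estimateTokens(prompt); // 120 / 4 = 30 tokens
const cost = estimateCost(tokens, 'openai');
```

Cheap enough to recompute on every keystroke, which is why the counter can be live.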
Lessons Learned {#lessons}
Technical Lessons
1. TypeScript Catches Bugs Before Users Do
```typescript
// ❌ Without TypeScript:
function updateChat(id, title) {
  // Oops, I passed a number instead of a string
  // Bug discovered in production
}

// ✅ With TypeScript:
function updateChat(id: string, title: string) {
  // Compiler error if I pass the wrong types
}
```
2. Context API is Enough for Most Apps
I was scared of state management. Everyone said "use Redux."
For Samvada Studio (35+ features, complex state), Context API was plenty.
When to use Redux:
- App with 100+ components sharing state
- Complex async logic (sagas, thunks)
- Need time-travel debugging
Otherwise: Context API + useReducer.
3. Web APIs Are Amazing
No need for heavy libraries:
- Speech Recognition: Built-in browser API
- Text-to-Speech: Built-in browser API
- PWA: Service workers + manifest.json
- LocalStorage: Built-in, simple, works everywhere
4. Documentation is Marketing
I wrote 15,000+ lines of docs. People noticed.
Comments on GitHub:
- "The documentation is incredible!"
- "Finally, someone who explains the WHY"
- "I learned more from your docs than the code"
Good docs = trust = users = contributors.
5. Security Upfront, Not Later
I could've stored API keys in localStorage (easy). But I knew it was wrong.
Doing security after is 10x harder. Do it first.
Product Lessons
1. Steal Like an Artist
I didn't invent new UX patterns. I studied the best:
- Gemini's inline editing → Copied
- ChatGPT's archiving → Copied
- Copilot's command palette → Copied
Result: Familiar UX, zero learning curve.
2. Quality > Speed
My original timeline: "Half an hour."
Reality: 6 months.
But every feature is polished. Every edge case handled. Every interaction smooth.
Fast and buggy or slow and excellent? I chose excellent.
3. Users Want Keyboard Shortcuts
Power users LOVE keyboards:
- Ctrl+K: Command Palette
- Ctrl+M: Voice Input
- Ctrl+.: Text-to-Speech
- Ctrl+Enter: Send Message
Adding shortcuts was trivial. User satisfaction? Massive.
Life Lessons
1. Backend Engineers Can Do Frontend
I was intimidated by React. "That's not my world."
But I learned:
- Components = Objects with render methods
- Props = Constructor parameters
- State = Class fields
- Hooks = Utility functions
It's all just programming.
2. Frustration = Fuel
I was annoyed with existing LLM UIs. That annoyance drove me to build something better.
What's annoying you right now? That might be your next project.
3. Preserve Your Origin Story
I almost deleted my original feature list. "It's messy, incomplete, unprofessional."
But I kept it. Now it's in the repo as THE_BEGINNING.md.
People love origin stories. They're relatable, human, inspiring.
Don't hide your beginnings. Celebrate them.
Try It Yourself {#try-it}
Samvada Studio is open source and free.
Quick Start
```bash
# Clone the repo
git clone https://github.com/dhruvinrsoni/samvada-studio.git
cd samvada-studio

# Install dependencies
npm install

# Start development server
npm run dev

# Open http://localhost:5173
```
What You Get
- ✅ 35+ power-user features
- ✅ 6 LLM providers (OpenAI, Anthropic, Google, Ollama, Azure, Custom)
- ✅ 100% local-first (your data stays with you)
- ✅ Security-first (API keys in memory only)
- ✅ Comprehensive docs (15,000+ lines)
Perfect For
- Developers who need keyboard-first interfaces
- Prompt engineers who test multiple providers
- Researchers who need organized conversations
- Content pros who value privacy
Contributing
Found a bug? Have a feature idea?
Open an issue: github.com/dhruvinrsoni/samvada-studio/issues
Submit a PR: We have comprehensive contributing guidelines
Feature 21 is open! What should the next feature be?
Conclusion
What started as "a half-hour project" became 6 months of learning, building, and polishing.
Was it worth it? Absolutely.
I went from zero React knowledge to shipping a production-ready app.
I learned about security, architecture, and what makes great UX.
And I built something I use every single day.
Your frustration might be the start of your next project.
What are you waiting for?
Questions? Drop them in the comments. I'll answer every one.
Found this helpful? Give Samvada Studio a ⭐ on GitHub!
GitHub: github.com/dhruvinrsoni/samvada-studio
Twitter: @dhruvinrsoni
LinkedIn: @dhruvinrsoni
Tags: #opensource #react #typescript #llm #webdevelopment #buildinpublic