Building an AI chatbot frontend? Here's why Redux was never an option.
## The Problem

AI chatbot frontends have complex state:

```typescript
// What you're actually dealing with
interface ChatState {
  // Message history
  messages: Message[];

  // Streaming response
  streamingText: string;
  isStreaming: boolean;

  // Tool execution
  toolCalls: ToolCall[];
  activeTool: Tool | null;
  toolResults: Map<string, any>;

  // Context management
  contextWindow: Message[];
  memory: MemoryItem[];

  // User intent tracking
  currentIntent: Intent | null;
  intentHistory: Intent[];

  // Execution flow
  currentStep: number;
  stepResults: Map<number, any>;

  // Error handling
  errors: Error[];
  retryCount: number;
}
```
This isn't a simple counter. Redux was built for a different era.
## Why Not Redux?

### 1. Boilerplate Overload

```typescript
// Actions (30+ lines)
const ADD_MESSAGE = "chat/ADD_MESSAGE";
const SET_STREAMING = "chat/SET_STREAMING";
const START_TOOL = "chat/START_TOOL";
// ... 20 more

// Reducer (100+ lines)
const chatReducer = (state, action) => {
  switch (action.type) {
    case ADD_MESSAGE:
      return { ...state, messages: [...state.messages, action.payload] };
    // ... 20 more cases
    default:
      return state;
  }
};

// Selectors (20+ lines)
export const selectMessages = (state) => state.chat.messages;
export const selectStreaming = (state) => state.chat.isStreaming;
// ... 10 more

// Component (50+ lines)
const messages = useSelector(selectMessages);
const dispatch = useDispatch();
```
Total: 200+ lines for basic chat functionality.
### 2. TypeScript Pain

```typescript
// Redux + TypeScript = suffering
dispatch({ type: "ADD_MESSAGE", payload: unknown });
// Have to type every action
// Have to type every reducer case
// Have to type every selector
```
### 3. No Native Tool Tracking
Tools in AI chatbots need:
- Status (running/completed/failed)
- Parameters
- Results
- Retry logic
Redux doesn't make this easy.
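As a point of comparison, here is roughly what that tracking looks like in plain TypeScript, with no store involved. The type and helper names (`TrackedToolCall`, `runWithRetry`) are illustrative, not part of any library:

```typescript
type ToolStatus = "running" | "completed" | "failed";

interface TrackedToolCall {
  name: string;
  params: Record<string, unknown>;
  status: ToolStatus;
  result?: unknown;
  attempts: number;
}

// Retry wrapper: re-runs a tool up to maxAttempts times before marking it failed.
async function runWithRetry(
  call: TrackedToolCall,
  execute: (params: Record<string, unknown>) => Promise<unknown>,
  maxAttempts = 3
): Promise<TrackedToolCall> {
  call.status = "running";
  while (call.attempts < maxAttempts) {
    call.attempts += 1;
    try {
      call.result = await execute(call.params);
      call.status = "completed";
      return call;
    } catch {
      // swallow the error and retry until attempts are exhausted
    }
  }
  call.status = "failed";
  return call;
}
```

Expressing the same status/result/attempt transitions as Redux actions and reducer cases is where the boilerplate multiplies.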
## Why easy-model?

### One Class = All State
```typescript
class AIChatModel {
  // State
  messages: Message[] = [];
  streamingText = "";
  isStreaming = false;
  toolCalls: ToolCall[] = [];
  activeTool: Tool | null = null;
  toolResults: Map<string, any> = new Map();
  errors: Error[] = [];

  // Actions
  @loader.load()
  async sendMessage(content: string) {
    this.messages.push({ role: "user", content });
    this.streamingText = ""; // reset between turns
    this.isStreaming = true;
    const response = await llm.streamChat(this.messages);
    for await (const chunk of response) {
      this.streamingText += chunk;
    }
    this.messages.push({ role: "assistant", content: this.streamingText });
    this.isStreaming = false;
  }

  async executeTool(tool: Tool, params: any) {
    this.activeTool = tool;
    const result = await tool.execute(params);
    this.toolResults.set(tool.name, result);
    this.activeTool = null;
  }
}
```
~50 lines instead of 200+.
### Built-in Loading States

```tsx
class ChatModel {
  @loader.load()
  async sendMessage(content: string) {
    /* ... */
  }
}

// In a component
const { isLoading } = useLoader();
{isLoading(chat.sendMessage) && <SendingIndicator />}
```
No manual loading state management.
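To make the idea concrete, here is a conceptual sketch of what such a loader could do under the hood. This is plain TypeScript and not easy-model's actual implementation; the `Loader` class and its methods are assumptions for illustration:

```typescript
// Tracks in-flight async functions so UI code can ask "is this running?"
class Loader {
  private pending = new Map<Function, number>();

  // Wrap an async function; calls are counted while awaiting.
  track<A extends unknown[], R>(fn: (...args: A) => Promise<R>) {
    const wrapped = async (...args: A): Promise<R> => {
      this.pending.set(wrapped, (this.pending.get(wrapped) ?? 0) + 1);
      try {
        return await fn(...args);
      } finally {
        const n = (this.pending.get(wrapped) ?? 1) - 1;
        if (n <= 0) this.pending.delete(wrapped);
        else this.pending.set(wrapped, n);
      }
    };
    return wrapped;
  }

  isLoading(fn: Function): boolean {
    return this.pending.has(fn);
  }
}
```

The per-function counter (rather than a boolean) keeps the state correct when the same action is invoked concurrently.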
### Undo/Redo for Free

```typescript
const chat = useModel(AIChatModel, []);
const history = useModelHistory(chat);

// Step through state changes to debug AI responses
history.back();
history.forward();
```
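Under the hood, model history can be as simple as snapshotting state after each change. A minimal sketch in plain TypeScript (not easy-model's internals; the `History` class here is hypothetical):

```typescript
// Snapshot-based history: deep-copy state after each change,
// then step backward/forward through the copies.
class History<T> {
  private snapshots: T[] = [];
  private index = -1;

  push(state: T) {
    // A new change after going back discards the "redo" branch.
    this.snapshots = this.snapshots.slice(0, this.index + 1);
    this.snapshots.push(structuredClone(state));
    this.index = this.snapshots.length - 1;
  }

  back(): T | undefined {
    if (this.index <= 0) return undefined;
    return this.snapshots[--this.index];
  }

  forward(): T | undefined {
    if (this.index >= this.snapshots.length - 1) return undefined;
    return this.snapshots[++this.index];
  }
}
```

`structuredClone` handles the `Map`s in chat state that `JSON.parse(JSON.stringify(...))` would silently drop.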
### Cross-Component State Sharing

```tsx
// Chat component
function ChatPanel() {
  const chat = useModel(AIChatModel, ["chat-1"]);
  return <MessageList messages={chat.messages} />;
}

// Status component, reading the same "chat-1" instance
function StatusPanel() {
  const chat = useInstance(ChatProvider("chat-1"));
  return <StatusBadge isStreaming={chat.isStreaming} />;
}
```
### Deep Watching

```typescript
watch(chat, (keys, prev, next) => {
  console.log(`${keys.join(".")} changed:`, prev, "→", next);
  // e.g. "messages.5.content"
  //      "toolCalls.0.status"
  //      "streamingText"
});
```
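Deep watching like this is typically built on `Proxy`. A conceptual sketch in plain TypeScript (not easy-model's internals; `deepWatch` is a name I'm inventing for illustration):

```typescript
type WatchCallback = (keys: string[], prev: unknown, next: unknown) => void;

// Wraps an object so every nested write reports its full key path.
function deepWatch<T extends object>(target: T, cb: WatchCallback, path: string[] = []): T {
  return new Proxy(target, {
    get(obj, key, receiver) {
      const value = Reflect.get(obj, key, receiver);
      // Recurse into nested objects/arrays so writes at any depth are seen.
      if (value !== null && typeof value === "object") {
        return deepWatch(value as object, cb, [...path, String(key)]);
      }
      return value;
    },
    set(obj, key, next, receiver) {
      const prev = Reflect.get(obj, key, receiver);
      const ok = Reflect.set(obj, key, next, receiver);
      cb([...path, String(key)], prev, next);
      return ok;
    },
  });
}
```

Because the proxy is created on each `get`, the path accumulates naturally: `state.messages[0].content = "hi"` fires the callback with `["messages", "0", "content"]`.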
## Comparison
| Feature | Redux | Zustand | MobX | easy-model |
|---|---|---|---|---|
| Class-based | ❌ | ❌ | ✅ | ✅ |
| No decorators | ✅ | ✅ | ❌ | ✅ |
| Built-in DI | ❌ | ❌ | ❌ | ✅ |
| Undo/Redo | ❌ | ❌ | ❌ | ✅ |
| Deep watch | ❌ | ⚠️ | ✅ | ✅ |
| TypeScript | ⚠️ | ✅ | ⚠️ | ✅ |
| Tool tracking | ❌ | ❌ | ❌ | ✅ |
## What I Built
With easy-model:
- ✅ Chat message history
- ✅ Streaming response display
- ✅ Tool call tracking
- ✅ Error handling with retry
- ✅ Context/memory management
- ✅ Multi-turn conversation
- ✅ Undo/redo for debugging
All in ~150 lines of clean code.
## The Point
Redux was designed for simple state updates. AI chatbots are anything but simple.
easy-model gives you:
- Class-based simplicity
- Complex state handling
- Built-in utilities (loading, history, DI)
- TypeScript that just works
Result: More time building features, less time fighting state.
GitHub: https://github.com/ZYF93/easy-model
Install: `pnpm add @e7w/easy-model`
⭐️ Stop using Redux for AI apps. Your future self will thank you.