DEV Community

Fatma Ali

Why useState is Breaking Your AI App: The Case for State Machines in Complex React Interfaces

TL;DR: Juggling useState booleans for complex UI creates "impossible states" and bugs. Finite state machines enforce exactly-one-state-at-a-time semantics, eliminating this problem. Use useReducer for simple state machines, or XState for complex orchestration with guards, timers, and hierarchies.

You've been there. I've been there. We've all been there. Staring at a React component that started as "just a simple form" and now looks like it needs its own architectural diagram. Your component has a dozen useState hooks at the top, all fighting each other like tabs in your browser:

const [isLoading, setIsLoading] = useState(false);
const [isStreaming, setIsStreaming] = useState(false);
const [isComplete, setIsComplete] = useState(false);
const [error, setError] = useState<Error | null>(null);
const [data, setData] = useState<string[] | null>(null);
const [isRetrying, setIsRetrying] = useState(false);
const [showConfetti, setShowConfetti] = useState(false); // Because why not?

You tell yourself it's fine. "It's manageable," you whisper, as you write another useEffect to synchronize three of these boolean flags, creating a side effect that will haunt you in your dreams. This isn't just messy; it's a breeding ground for what I call "impossible states."

What happens when isLoading and isComplete are both true? Does your UI show a loading spinner and the final result? What if error has a value but isLoading is also true? Is it loading or is it an error? Your UI becomes a quantum superposition of confusion, and you're the unfortunate physicist tasked with observing it into a non-buggy state.

The AI Broke My Booleans

Now, let's throw a modern AI-powered feature into the mix. Your simple data fetch is now a generative UI that streams responses from a large language model. The state diagram in your head, which used to be a linear flow, now has branching paths, error recovery, cancellation states, and retry logic.

You're not just fetching data anymore. You are:

  1. Idle: Waiting for a user prompt.
  2. Submitting: Sending the prompt to the backend.
  3. Waiting for stream: The server has acknowledged the request, but the first token hasn't arrived.
  4. Streaming: Receiving the response, token by token.
  5. Success: The stream has finished, and the full response is displayed.
  6. Error: Something, somewhere, went horribly wrong. Maybe the AI is having an existential crisis. Maybe you forgot an API key.

How do you model this with useState?

const [isSubmitting, setIsSubmitting] = useState(false);
const [isStreaming, setIsStreaming] = useState(false);
const [isError, setIsError] = useState(false);
// ... and so on, and so on.

You are now manually choreographing a ballet of booleans. setIsSubmitting(true), then setIsSubmitting(false) and setIsStreaming(true) in the same function. It's fragile. It's imperative. It's a bug waiting to happen. You've created a system where isSubmitting and isStreaming can be true at the same time, an impossible state that your UI has no idea how to render.

This is the moment of truth for many engineers: the realization that managing state with a loose collection of booleans fundamentally doesn't scale to complex systems.

State Machines 101: What They Are and Why They Matter

A finite state machine (FSM) is a computational model that can be in exactly one of a finite number of states at any given time. Think of it like a flowchart with strict rules:

  1. States: A defined set of conditions your system can be in (e.g., idle, loading, success, error)
  2. Transitions: Allowed movements between states, triggered by events (e.g., SUBMIT event moves from idle to loading)
  3. Guards: Conditions that must be met for a transition to occur (e.g., can't submit if the form is invalid)
  4. Actions: Side effects that occur during transitions (e.g., clear error message when retrying)

The key insight: you can only be in one state at a time. No overlapping, no ambiguity.

Imagine a traffic light:

const trafficLightMachine = {
  initial: 'red',
  states: {
    red: { on: { TIMER: 'green' } },
    yellow: { on: { TIMER: 'red' } },
    green: { on: { TIMER: 'yellow' } }
  }
};

The light can't be both red and green. The only valid transitions are defined. This is the power of explicit state modeling.
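A machine like this can be driven by a tiny, dependency-free interpreter. Here's a minimal sketch (nextState is an illustrative helper, not an XState API):

```typescript
type LightState = 'red' | 'yellow' | 'green';

type Machine = {
  initial: LightState;
  states: Record<LightState, { on: Record<string, LightState> }>;
};

const trafficLightMachine: Machine = {
  initial: 'red',
  states: {
    red: { on: { TIMER: 'green' } },
    yellow: { on: { TIMER: 'red' } },
    green: { on: { TIMER: 'yellow' } }
  }
};

// Look up the transition; events with no defined transition are ignored.
function nextState(state: LightState, event: string): LightState {
  return trafficLightMachine.states[state].on[event] ?? state;
}

console.log(nextState('red', 'TIMER'));   // 'green'
console.log(nextState('green', 'TIMER')); // 'yellow'
console.log(nextState('red', 'NOPE'));    // 'red' (no transition defined, stays put)
```

Libraries like XState implement this lookup for you, plus guards, actions, and hierarchy, but the core mechanic is just this table lookup.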

State Machines + The Actor Model: A Powerful Blend

You might have heard of the Actor Model—a concurrent computation model where "actors" are independent entities that communicate via messages. Here's the key insight: these aren't competing concepts, they're complementary.

Modern state machine frameworks like XState blend both paradigms:

  • State Machines: Model deterministic state transitions within each actor (what states can it be in, what transitions are valid)
  • Actor Model: Enable multiple state machines to run independently and communicate via events (spawning children, sending messages between machines)

Think of it this way: each actor is a state machine. The state machine defines its internal behavior, while the actor model defines how multiple machines coordinate.

Why This Matters in Frontend Development

Traditional UI development treats state as a bag of variables. State machines flip this: they treat state as a first-class citizen with explicit rules about what can happen when.

Benefits for frontend applications:

  1. Eliminates impossible states: Your UI can't render isLoading={true} and isSuccess={true} simultaneously.
  2. Self-documenting: The state machine is a living diagram of your app's behavior.
  3. Predictable: Given a state and an event, the next state is deterministic.
  4. Testable: You can test state transitions independently of UI rendering.
  5. Visual debugging: Tools like XState Visualizer let you see your state graph and step through transitions.

For simple forms or basic data fetching, this might be overkill. But for AI-powered features with streaming, retries, cancellation, and complex error recovery? State machines are the appropriate abstraction.

Building State Machines in React

Now that we understand what state machines are, let's see how to actually implement them in React. We'll start simple and progressively handle more complexity.

AI Streaming UX

Before we dive into solutions, let's be clear about what we're building. What seems like a simple "type a prompt, get a response" feature actually has challenges that break simple state management:

idle → connecting → streaming → complete
          ↓           ↓
        error ← ─ ─ ─ ┘
          ↓
      (retry logic)

A simple state model that happily handled loading and success suddenly feels inadequate. Here are the actual production scenarios you need to handle:

The Challenges

1. Race Conditions

  • User submits a prompt → starts streaming
  • User gets impatient, submits another prompt mid-stream
  • First stream is still sending chunks while second request is connecting
  • What should happen? Cancel the first? Queue the second? Your useState booleans have no answer.

2. Network Failures Mid-Stream

  • You're 60% through receiving a streaming response
  • Network drops, WebSocket disconnects
  • Do you retry from the beginning? Resume? Show partial data?
  • With scattered useState hooks, you're juggling isStreaming, hasError, partialData, and connectionStatus simultaneously.

3. User Interruptions

  • User clicks "Cancel" while streaming
  • You need to: abort the fetch, clean up the WebSocket, clear pending chunks, reset UI
  • But wait—what if they click "Regenerate" while the cancellation is processing?
  • This creates impossible states: isCancelling + isStarting + isStreaming all true at once.

4. Retry Logic with Conditions

  • Authentication fails → retry makes sense
  • Stream connection fails → retry makes sense
  • Stream completes but user dislikes result → regenerate makes sense
  • How do you model "retry is valid from error states but not from streaming or complete"?
  • With useState, you're writing nested if statements checking multiple booleans.
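In state-machine terms the rule is trivial, because the RETRY event is only handled in the error state. Here's a dependency-free sketch of that idea (handleRetry and the status names are illustrative, not from any library):

```typescript
type Status = 'idle' | 'connecting' | 'streaming' | 'complete' | 'error';

// RETRY only applies in 'error', and only below the attempt cap;
// everywhere else the event is ignored, so no nested ifs are needed.
function handleRetry(
  status: Status,
  retryCount: number,
  maxRetries = 3
): { status: Status; retryCount: number } {
  if (status === 'error' && retryCount < maxRetries) {
    return { status: 'connecting', retryCount: retryCount + 1 };
  }
  return { status, retryCount }; // event ignored
}

console.log(handleRetry('error', 0));     // { status: 'connecting', retryCount: 1 }
console.log(handleRetry('streaming', 0)); // ignored: not an error state
console.log(handleRetry('error', 3));     // ignored: retry cap reached
```

In a full statechart this collapses even further: only the error state declares a RETRY handler at all, so the conditional disappears from your code entirely.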

5. Parallel Context Updates

  • While streaming, you're updating: accumulated text, token count, time elapsed
  • If an error occurs, you need to preserve the partial response for display
  • If user cancels, you need to freeze the context mid-update
  • useState scatter: setText(), setTokens(), setElapsed() can desync.
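Part of the fix, independent of any library, is to keep related values in one context object that a single pure function updates atomically. A sketch (applyChunk is an illustrative helper, not from the article's demo):

```typescript
// Scattered setters can desync if one runs and another doesn't.
// Grouping related values into one object makes every update atomic.
type StreamContext = { text: string; tokens: number; elapsedMs: number };

function applyChunk(ctx: StreamContext, chunk: string, dtMs: number): StreamContext {
  // One pure update touches all related fields together.
  return {
    text: ctx.text + chunk,
    tokens: ctx.tokens + 1,
    elapsedMs: ctx.elapsedMs + dtMs
  };
}

let ctx: StreamContext = { text: '', tokens: 0, elapsedMs: 0 };
ctx = applyChunk(ctx, 'Hello', 120);
ctx = applyChunk(ctx, ' world', 95);
// ctx is now { text: 'Hello world', tokens: 2, elapsedMs: 215 }
```

This is exactly what a reducer action or an XState assign gives you: one event in, one consistent context out.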

Why This Breaks useState

Let's see what this looks like with traditional state:

const [isConnecting, setIsConnecting] = useState(false);
const [isStreaming, setIsStreaming] = useState(false);
const [error, setError] = useState(null);
const [isCancelling, setIsCancelling] = useState(false);
const [isRetrying, setIsRetrying] = useState(false);

// User clicks submit while already streaming - RACE CONDITION
const handleSubmit = async () => {
  if (isStreaming) {
    // Cancel existing stream first? Or ignore?
    setIsCancelling(true);
    // Wait... how long? Another useEffect?
  }
  setIsConnecting(true);
  // But what if cancellation hasn't finished?
};

// Network fails mid-stream - IMPOSSIBLE STATE
const handleStreamError = (err) => {
  setError(err);
  setIsStreaming(false); // Are we still connected? 
  setIsRetrying(true); // Or should we wait for user input?
};

The problem: You're modeling a state graph with independent booleans. The real system has explicit paths:

  • From streaming, you can go to complete, error, or cancelled
  • From error, you can only go to idle (via reset) or back to connecting (via retry)
  • From idle, you can only go to connecting (if prompt is valid)

State machines enforce these rules. useState doesn't.
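Those rules can be written down as plain data. A minimal sketch of a transition table (canTransition is an illustrative helper, not a library API):

```typescript
type StreamState = 'idle' | 'connecting' | 'streaming' | 'complete' | 'error' | 'cancelled';

// Every legal path is listed; anything absent is rejected by construction.
const allowed: Record<StreamState, StreamState[]> = {
  idle: ['connecting'],
  connecting: ['streaming', 'error', 'cancelled'],
  streaming: ['complete', 'error', 'cancelled'],
  complete: ['connecting', 'idle'],  // regenerate / reset
  error: ['connecting', 'idle'],     // retry / reset
  cancelled: ['connecting', 'idle']
};

function canTransition(from: StreamState, to: StreamState): boolean {
  return allowed[from].includes(to);
}

console.log(canTransition('streaming', 'complete')); // true
console.log(canTransition('complete', 'streaming')); // false: no such path
console.log(canTransition('error', 'connecting'));   // true: retry
```

A reducer makes a table like this executable; XState adds guards, actions, and hierarchy on top of the same idea.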

Now we see the true scope of the problem. This isn't just "loading" or "success"—it's a complex orchestration that needs proper tooling.

Solution #1: Start with useReducer

For simple to moderate state machine needs, React's built-in useReducer is your friend. Instead of scattered useState calls, you define a single state type and explicit transitions:

type State = 
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success', data: string[] }
  | { status: 'error', message: string };

type Action = 
  | { type: 'SUBMIT' }
  | { type: 'SUCCESS', data: string[] }
  | { type: 'ERROR', message: string }
  | { type: 'RESET' };

function reducer(state: State, action: Action): State {
  switch (state.status) {
    case 'idle':
      return action.type === 'SUBMIT' ? { status: 'loading' } : state;
    case 'loading':
      if (action.type === 'SUCCESS') return { status: 'success', data: action.data };
      if (action.type === 'ERROR') return { status: 'error', message: action.message };
      return state;
    case 'success':
    case 'error':
      return action.type === 'RESET' ? { status: 'idle' } : state;
  }
}

This is already a massive improvement:

  • Impossible states are structurally impossible (can't be both loading and success)
  • State transitions are explicit and centralized
  • TypeScript ensures you handle all cases
  • The reducer documents your component's behavior
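Because the reducer is a pure function, you can also replay an entire user flow as plain data and assert on the result, with no React rendering involved. The reducer from above is reproduced here so the snippet stands alone:

```typescript
type State =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: string[] }
  | { status: 'error'; message: string };

type Action =
  | { type: 'SUBMIT' }
  | { type: 'SUCCESS'; data: string[] }
  | { type: 'ERROR'; message: string }
  | { type: 'RESET' };

function reducer(state: State, action: Action): State {
  switch (state.status) {
    case 'idle':
      return action.type === 'SUBMIT' ? { status: 'loading' } : state;
    case 'loading':
      if (action.type === 'SUCCESS') return { status: 'success', data: action.data };
      if (action.type === 'ERROR') return { status: 'error', message: action.message };
      return state;
    case 'success':
    case 'error':
      return action.type === 'RESET' ? { status: 'idle' } : state;
  }
}

// Replay a whole user flow as data:
const events: Action[] = [
  { type: 'SUBMIT' },
  { type: 'SUBMIT' },                    // ignored: already loading
  { type: 'SUCCESS', data: ['a', 'b'] },
  { type: 'RESET' }
];
const final = events.reduce(reducer, { status: 'idle' } as State);
console.log(final.status); // 'idle'
```

The same replay trick lets you assert that invalid events are ignored, which is how you pin down the "no impossible states" guarantee in CI.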

When to use useReducer:

  • Simple async flows (fetch → loading → success/error)
  • Form wizards with sequential steps
  • UI state that doesn't need timers or complex side effects

But what about our complex AI streaming scenario?

Where useReducer Falls Short

Our AI streaming component has requirements that useReducer struggles with:

  • Race condition prevention: Can't start new stream while another is active (need to cancel first)
  • Guards: Can't retry if max attempts reached; can't start if prompt is empty
  • Entry/exit actions: Clear streamed text when starting, preserve it on error for display
  • Complex conditional logic: Different error recovery paths (network vs auth vs stream)

You could model this with useReducer + useEffect, but now you're manually managing:

  • Abort controllers and cleanup in useEffect for race conditions
  • Manual state checking before every transition (if (state.status !== 'streaming') return;)
  • Synchronization between reducer state and async operations
  • Your reducer becomes a tangled mess of nested switch statements with validation

You're back to the same complexity problem, just in a different form.

Solution #2: XState for Complex Orchestration

This is the point where you're not just managing state; you're orchestrating a complex user experience. And for that, you need a tool designed for orchestration.

Enter XState. XState isn't just a state management library; it's a framework for building statecharts. A statechart extends finite state machines with hierarchical (nested) states, parallel states, guarded transitions, and entry/exit actions—critical features for modeling complex async flows like AI streaming.

Key capabilities relevant to our real-world AI challenges:

  • Guards (Conditional Transitions): Prevent race conditions (block START while streaming, block RETRY if max attempts reached).
  • Deterministic Transitions: Given current state + event, next state is always predictable (no isLoading && isRetrying ambiguity).
  • Entry/Exit Actions: Reset or preserve context cleanly when states change (clear text on start, preserve on error).
  • Unified Context: One source of truth for tokens, retries, errors, partial responses—atomic updates, no desync.
  • Explicit Error Paths: Network failures, stream errors, and cancellations have defined recovery paths.

Here's a sample state machine for AI streaming:

import { setup, assign, fromCallback } from 'xstate';

const aiStreamingMachine = setup({
  actors: {
    // A callback actor (not a promise actor) so it can push events
    // back into the machine via sendBack while the stream is running.
    connectToStream: fromCallback(({ sendBack, input }) => {
      // The controller lives inside the actor, not in machine context:
      // stopping the actor is the single cleanup path.
      const controller = new AbortController();

      (async () => {
        try {
          const response = await fetch('/api/stream', {
            method: 'POST',
            body: JSON.stringify({ prompt: input.prompt }),
            signal: controller.signal // enables cancellation
          });

          if (!response.ok) throw new Error('Connection failed');
          sendBack({ type: 'CONNECTED' });

          const reader = response.body.getReader();
          const decoder = new TextDecoder();

          while (true) {
            const { done, value } = await reader.read();
            if (done) {
              sendBack({ type: 'STREAM_COMPLETE' });
              break;
            }
            sendBack({ type: 'CHUNK_RECEIVED', chunk: decoder.decode(value) });
          }
        } catch (err) {
          sendBack({ type: 'STREAM_ERROR', message: (err as Error).message });
        }
      })();

      // Cleanup runs whenever the actor is stopped (CANCEL, RESET, unmount)
      return () => controller.abort();
    })
  }
}).createMachine({
  id: 'aiStreaming',
  initial: 'idle',
  context: {
    prompt: '',
    streamedText: '',
    tokens: 0,
    retryCount: 0,
    error: null
  },
  states: {
    idle: {
      on: {
        START: {
          target: 'active',
          // GUARD: empty prompts never reach the network
          guard: ({ context }) => context.prompt.trim().length > 0,
          actions: assign({ error: null })
        },
        UPDATE_PROMPT: {
          actions: assign({ prompt: ({ event }) => event.value })
        }
      }
    },
    // The actor is invoked on this parent state so it survives the
    // connecting -> streaming transition; exiting 'active' stops the
    // actor, which runs its cleanup (abort).
    active: {
      initial: 'connecting',
      invoke: {
        src: 'connectToStream',
        input: ({ context }) => ({ prompt: context.prompt })
      },
      on: {
        STREAM_ERROR: {
          target: 'error',
          // PRESERVE PARTIAL DATA: streamedText is NOT cleared
          actions: assign({ error: ({ event }) => event.message })
        },
        CANCEL: { target: 'cancelled' } // stopping the actor aborts the fetch
      },
      states: {
        connecting: {
          on: { CONNECTED: { target: 'streaming' } }
        },
        streaming: {
          entry: assign({ streamedText: '', tokens: 0 }), // clear on entry
          on: {
            CHUNK_RECEIVED: {
              // PARALLEL CONTEXT UPDATES: one atomic assign, no desync
              actions: assign({
                streamedText: ({ context, event }) => context.streamedText + event.chunk,
                tokens: ({ context }) => context.tokens + 1
              })
            },
            STREAM_COMPLETE: { target: '#aiStreaming.complete' }
          }
        }
      }
    },
    complete: {
      on: {
        REGENERATE: {
          target: 'active',
          actions: assign({ retryCount: 0 }) // reset retry count
        },
        RESET: {
          target: 'idle',
          actions: assign({
            prompt: '',
            streamedText: '',
            tokens: 0,
            retryCount: 0
          })
        }
      }
    },
    error: {
      on: {
        RETRY: {
          target: 'active',
          // RETRY LOGIC: guard prevents infinite retries
          guard: ({ context }) => context.retryCount < 3,
          actions: assign({
            retryCount: ({ context }) => context.retryCount + 1,
            error: null
          })
        },
        RESET: {
          target: 'idle',
          actions: assign({
            streamedText: '',
            retryCount: 0,
            error: null
          })
        }
      }
    },
    cancelled: {
      // Preserves streamedText so the user sees what arrived before cancel
      on: {
        RETRY: { target: 'active' },
        RESET: { target: 'idle' }
      }
    }
  }
});

Why this matters for real-world scenarios:

  1. Race conditions eliminated: Can't start new stream while in streaming state—transition simply doesn't exist.
  2. Network failures handled: every failure funnels into the explicit error state through a single path, with context (including partial output) preserved.
  3. Cancellation is first-class: CANCEL event from any async state has explicit paths, cleanup actions guaranteed.
  4. Retry logic enforced: Guard at error.RETRY checks retryCount < 3, impossible to bypass.
  5. Context updates atomic: CHUNK_RECEIVED updates streamedText and tokens together, no desync.
  6. Partial data preserved: Error and cancellation states don't clear streamedText, user sees progress.

Compare this to managing the same logic with scattered useState hooks and conditional useEffect cleanup. The machine is living documentation of your app's behavior.

Common Questions

"Why Not Just Use Redux?" (Or Zustand, or Jotai...)

You might be thinking: "I already use Redux/Zustand/Jotai for my app state. Can't I just put this streaming state in there?" The short answer: You could, but you shouldn't.

Component State vs. Global State

Not all state belongs in a global store. A streaming AI response is ephemeral, component-scoped state:

  • It's tied to a specific component instance's lifecycle
  • It doesn't need to be shared across routes or unrelated components
  • It resets when the component unmounts
  • Multiple instances of the component should have independent state

Redux and similar tools excel at application-wide state (user authentication, theme preferences, shopping cart). They're architecturally wrong for temporary, local UI state.

Redux Doesn't Solve the State Machine Problem

Redux gives you centralized state and a reducer pattern, but it doesn't give you state machine semantics. You still have to:

  • Manually enforce valid state transitions
  • Write conditional logic to prevent impossible states
  • Handle guards, timers, and side effects yourself
  • Debug state transitions without visual tools

Using Redux for complex state machines is like using a screwdriver to hammer a nail. It's the wrong tool for the job.

The Right Tool for the Job

  • useState: Simple, independent values (form inputs, toggles)
  • useReducer: Local state machines with moderate complexity
  • Redux/Zustand/Jotai: Shared application state across components
  • XState: Complex state orchestration with guards, timers, hierarchies, and visual debugging

For our AI streaming component, the complexity is in state transitions and orchestration, not in sharing data globally.

Try the Interactive Demo

Now that you've seen the concepts, explore the interactive AI streaming state machine. The demo simulates real-world challenges you'd face in production:

  • Random network failures during connection (simulates flaky APIs)
  • User race conditions (try clicking START twice rapidly)
  • Mid-stream cancellation (cancel while tokens are flowing)
  • Retry logic with limits (automatic failure after 3 retries)


Interactive Experiments

Try these experiments to see state machine benefits in action:

| Experiment | What to Try | Real-World Parallel |
| --- | --- | --- |
| Guard Protection | Click "Start Stream" with empty prompt | Prevents wasted API calls in production—validation enforced at state level |
| Reasoning Panel | Expand 🧠 panel during state changes | Debugging production issues: historical state trail shows exactly what happened |
| Race Condition Prevention | Click START rapidly multiple times | Guards prevent concurrent streams—common issue when users double-click submit |
| Random Network Failure | Retry multiple times until connection fails | Simulates real flaky network conditions; centralized error handling kicks in |
| Mid-Stream Cancellation | Click CANCEL while tokens are streaming | User abandons response mid-generation—cleanup happens automatically, no orphaned listeners |
| Partial Data Preservation | Cancel or error mid-stream, check output | Production UX: show users what was received before failure (they can still copy it) |
| Retry Limit Enforcement | Fail connection 3+ times | Guard prevents infinite retry loops that drain API quota |
| Regenerate After Complete | Complete stream, then REGENERATE | Common UX pattern: try again with same prompt without re-typing |

Note on the demo: While the demo uses timers for the connection/streaming simulation (to make the visualization clear), the state machine patterns shown here apply to real async operations like fetch(), WebSocket connections, and streaming APIs. The problems it solves—race conditions, cancellation, retry logic, context preservation—are all real-world production concerns.

Performance Considerations

Before you refactor your entire codebase to use state machines, let's address the elephant in the room: performance.

The Overhead Question

State machines add a layer of abstraction, and abstraction has cost. Let's be honest about the tradeoffs:

| Aspect | useReducer | XState |
| --- | --- | --- |
| Bundle Size | 0KB (built into React) | ~15KB gzipped (core + React integration) |
| Runtime Overhead | Negligible vs useState | Moderate (transition calculations, guards, actions) |
| Memory Impact | Minimal (reducer function closure) | Higher (state nodes, event objects, context) |
| Reconciliation | Same as useState | Same as useState |
| Verdict | ✅ Use freely without concerns | ⚠️ Great for complex flows, overkill for simple toggles |

When Performance Actually Matters

For most applications, XState's overhead is imperceptible. But there are edge cases:

❌ Avoid XState for:

  • High-frequency updates (60fps animations, mouse tracking, canvas interactions)
  • Hundreds of simultaneously active machines in a single view
  • Ultra-lightweight components (simple toggles, accordions)

✅ XState Shines for:

  • Complex async orchestration (like our AI streaming example)
  • Infrequent but critical state transitions (checkout flows, multi-step forms)
  • Features where correctness > raw speed (payment processing, data submission)

Optimization Strategies

If you adopt XState and need to squeeze out performance:

| Strategy | What to Do | Example |
| --- | --- | --- |
| Spawn Lazily | Don't create child actors until needed | Use invoke with conditional logic or spawn in actions |
| Debounce Events | Batch rapid user inputs before sending to machine | const debouncedSend = useDebouncedCallback((event) => send(event), 300); |
| Define Machines Once | Don't create new instances on every render | Define outside component or use useMemo |

The Real Cost: Maintenance vs. Performance

Here's the uncomfortable truth: premature optimization kills more projects than slow code.

A state machine that's 2ms slower but prevents 10 hours of debugging impossible states is a massive win. The question isn't "Is XState slower than useState?" It's "What's the cost of shipping buggy state management?"

The Bottom Line

Don't choose state machines for performance. Choose them for correctness, maintainability, and developer experience. If you later discover a performance bottleneck, you can optimize selectively or replace specific hot paths.

But start with the right abstraction. Premature optimization is the root of all evil, but so is choosing useState for a problem that demands state machine semantics.

Originally published on https://fatmaali.dev
