James Heal
Why Your State Management Is Slowing Down AI-Assisted Development

Zustand and Jotai give developers freedom — but that freedom is poison for AI code generation.

We're the frontend team at Minara. Over the past six months, we've leaned heavily on AI-assisted development to build out Minara's trading platform frontend. Early on, AI-generated code was barely usable — every generated store had a different structure, state management style varied wildly, and code review took longer than writing it by hand.

Then we switched to a Model/Service/UI three-layer architecture with a custom typed reducer, and our AI code adoption rate jumped from around 30% to over 80%.

This is what the Minara frontend team learned from hands-on AI-assisted development. It isn't an article about which state management library is better. It's about how, in the age of AI, the architectural patterns you choose determine how much AI can actually help you.

State Management Is Where AI-Generated Frontend Code Goes Wrong

If you've used Cursor, Claude Code, or GitHub Copilot to generate React components, you've probably run into these problems:

Zustand: different every time. Ask AI to build a user list page, and the first time it flattens all state and actions into one store with create; the second time it splits into slices; the third time it adds a persist middleware. Three versions, three styles, all working — but your codebase becomes an architecture variety show.

Jotai: atomic = fragmented. Jotai's atoms are elegant, but for AI, deciding "which state should be an atom, which should be a derived atom, which should use atomFamily" requires deep business context. AI doesn't have that context. The result: either all state crammed into one massive atom, or exploded into dozens of atoms that are impossible to track.

React Context: boilerplate hell. Context + useReducer is the right direction, but the standard pattern is too verbose — createContext, Provider, useContext, action types, reducer switch… AI frequently makes mistakes in all this boilerplate, missing type definitions or mixing up context nesting levels.

The end result: you spend more time reviewing and fixing AI-generated code than AI saved you. That's not an AI problem. It's an architecture problem.

The Root Cause: Humans Want Freedom, AI Needs Constraints

Why do Zustand and Jotai work so well in human hands, but fall apart with AI?

The answer is simple: their core selling point — flexibility — is exactly AI's weakness.

Zustand's create accepts a function and returns an object of any shape. No schema, no conventions, no layering. For humans, that's "simplicity." For AI, that's an "infinite possibility space." When an API lets you do anything, AI will do something different every time.

// Zustand: AI writes it this way the first time
const useStore = create((set) => ({
  users: [],
  loading: false,
  fetchUsers: async () => {
    set({ loading: true });
    const users = await api.getUsers();
    set({ users, loading: false });
  },
}));

// Zustand: AI writes it this way the second time
const useStore = create(
  devtools(
    persist((set, get) => ({
      users: [],
      filters: { search: '', page: 1 },
      setFilters: (f) => set({ filters: { ...get().filters, ...f } }),
      fetchUsers: async () => { /* completely different structure */ },
    }))
  )
);

Both are valid Zustand code. But two styles in the same project is a code review nightmare.

Jotai's problem is more subtle. Choosing atom granularity is fundamentally an architectural decision — which state should be coupled, which should be independent, which should be derived. These decisions require understanding business context, and AI can only see what's in the current prompt.

Think of it this way: let humans write prose, give AI a form to fill out. Humans are good at creating structure from freedom; AI is good at filling in content within structure. If your architecture hands AI a blank sheet of paper, it will draw something different every time. But if you give it a clear form — state definition goes here, action handling goes here, side effects go here — its output will be stable, consistent, and predictable.

Core insight: the quality of AI-generated code is proportional to the strength of your architectural constraints.

The Solution: Model/Service/UI Three-Layer Separation


Our answer wasn't to invent a new state management library — it was to define an architectural pattern so that AI knows exactly where every line of code should go and what it should look like.

The Foundation: createReducer

First, our custom createReducer hook, which is the type foundation for the entire architecture:

// shared/create-reducer.ts
import { Draft, produce } from 'immer';

type MapReducerAction<S, T> =
  T extends Record<infer K, (state: S, payload: any) => S>
    ? K extends keyof T
      ? Parameters<T[K]>[1] extends undefined
        ? [K]
        : [K, Parameters<T[K]>[1]]
      : unknown
    : unknown;

export function createReducer<S = unknown>() {
  function createReducer<
    RO extends Record<string, (state: S, payload: any) => S>
  >(reducerObject: RO) {
    function reducer(state: S, action: MapReducerAction<S, RO>) {
      const actionHandle = reducerObject[action[0]];
      if (typeof actionHandle === 'function') {
        return actionHandle(state, action[1]);
      }
      return state;
    }
    return reducer;
  }
  return createReducer;
}

// Immer version, supports mutable draft style
export function createImmerReducer<S = unknown>() {
  function createReducer<
    RO extends Record<string, (state: Draft<S>, payload: any) => any>
  >(reducerObject: RO) {
    function reducer(state: S, action: MapReducerAction<Draft<S>, RO>) {
      const actionHandle = reducerObject[action[0]];
      if (typeof actionHandle === 'function') {
        return produce(state, (draft) => {
          actionHandle(draft, action[1]);
        });
      }
      return state;
    }
    return reducer;
  }
  return createReducer;
}

The core design is double currying: createReducer<StateType>()(actions). The first call locks in the state type; the second passes the action object. TypeScript fully infers types at every step, and action names and payload types are all auto-completed at dispatch time.

What the Three Layers Are

Model (.model.ts) — pure state + reducer logic

Defines the shape of state and all the ways to modify it. No side effects, no hooks, pure functions.

// email-auth.model.ts
export const initState = {
  email: '',
  code: '',
  captcha: null as string | null,
  isValidEmail: false,
  sending: false,
  authing: false,
  expired: null as number | null,
  sendCodeBody: null as null | { email: string; captcha: string },
};

export const emailAuthReducer = createReducer<typeof initState>()({
  'update-email': (state, email: string) => ({
    ...state,
    email,
    isValidEmail: /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email),
  }),
  'send-code': (state) => ({
    ...state,
    sending: true,
    sendCodeBody: { email: state.email, captcha: state.captcha! },
  }),
  'success-send-code': (state) => ({
    ...state,
    sending: false,
    expired: Date.now() + 60000,
    sendCodeBody: null,
  }),
});

Service (.service.ts) — hooks + side effect orchestration

Connects the model with useReducer, uses useEffect to watch trigger state and execute side effects.

// email-auth.service.ts
export function useEmailAuthService() {
  const [state, dispatch] = useReducer(emailAuthReducer, initState);

  // trigger state pattern: when sendCodeBody is non-null, fire the API call
  useEffect(() => {
    if (state.sendCodeBody !== null) {
      api.post('/auth/email/code', state.sendCodeBody)
        .then(() => dispatch(['success-send-code']))
        .catch(() => dispatch(['error-send-code']));
    }
  }, [state.sendCodeBody]);

  return { state, dispatch };
}

UI (.tsx) — pure rendering, only consumes state and dispatch

// email-auth-form.tsx
export function EmailAuthForm() {
  const { state, dispatch } = useEmailAuthService();
  return (
    <form>
      <TextField
        value={state.email}
        onChange={(e) => dispatch(['update-email', e.target.value])}
      />
      <Button
        disabled={!state.isValidEmail || state.sending}
        onClick={() => dispatch(['send-code'])}
      >
        {state.sending ? 'Sending...' : 'Send Code'}
      </Button>
    </form>
  );
}

Why This Is AI-Friendly

The key is that each layer's responsibilities are crystal clear:

| Layer | File suffix | Allowed | Forbidden |
| --- | --- | --- | --- |
| Model | .model.ts | State types, initial values, reducer | Hooks, API calls, JSX |
| Service | .service.ts | useReducer, useEffect, API calls | JSX, DOM manipulation |
| UI | .tsx | JSX, dispatch calls, reading state | Direct state mutation, API calls |

When you write this table into your CLAUDE.md or cursor rules, AI has a clear decision framework. It no longer has to guess "where should this logic go," because the rules already tell it.

Another key design is the trigger state pattern: the model uses a body object (like sendCodeBody) as a "signal," and the service uses useEffect to watch that signal and trigger side effects. This is far cleaner than calling APIs directly in the UI layer or mixing async logic into a store — and AI only needs to learn one pattern to handle every async scenario.

createReducer: Letting the Type System Guide AI Generation

Three-layer separation solves the "where does the code go" problem, but there's still another question: how does AI know how to write actions?

Traditional Redux/useReducer patterns dispatch { type: string, payload: any }. For AI, this is almost no constraint at all — type is a string, payload is any, write whatever you want.

// Traditional pattern: AI can write anything
dispatch({ type: 'UPDATE_EMAIL', payload: 'test@example.com' });
dispatch({ type: 'update-email', payload: { email: 'test@example.com' } });
dispatch({ type: 'setEmail', email: 'test@example.com' });
// Three different styles, all valid, all different

Our createReducer replaces action objects with action tuples, and combined with TypeScript's type inference, achieves full compile-time constraints:

const reducer = createReducer<typeof initState>()({
  'update-email': (state, email: string) => ({ ...state, email }),
  'toggle-active': (state) => ({ ...state, active: !state.active }),
  'set-filters': (state, filters: { search: string; page: number }) => ({
    ...state,
    filters,
  }),
});

// ✅ Correct: IDE auto-completes action name, payload type is inferred
dispatch(['update-email', 'test@example.com']);
dispatch(['toggle-active']);
dispatch(['set-filters', { search: 'react', page: 1 }]);

// ❌ Compile error: action name doesn't exist
dispatch(['unknown-action']);

// ❌ Compile error: payload type mismatch
dispatch(['update-email', 123]);

// ❌ Compile error: missing required payload
dispatch(['set-filters']);

// ❌ Compile error: passing payload to an action that doesn't need one
dispatch(['toggle-active', true]);

What does this mean? When AI generates code, the TypeScript compiler itself is a guardrail. Even if AI writes the wrong action name or passes the wrong payload type, the IDE will immediately flag it, and AI agents (like Claude Code) will self-correct in the next iteration.

More importantly: IDE auto-completion. When AI types dispatch([, the IDE shows all available action names. Selecting an action also surfaces the payload type hint. This effectively turns AI's "freestyle writing" into "choosing from a menu" — and AI's accuracy when choosing from a menu is far higher than when writing freely.

Comparing type constraint strength across approaches:

| Approach | Action constraint | Payload constraint | IDE completion | AI generation accuracy |
| --- | --- | --- | --- | --- |
| Zustand | None (free functions) | None | Decent | Low |
| Jotai | None (direct atom set) | Yes (atom type) | Good | Medium |
| Redux Toolkit | Yes (createSlice) | Yes | Good | Medium-high |
| createReducer tuple | Strong (enumerated names) | Strong (per-action types) | Precise | High |

Redux Toolkit's createSlice is actually similar in spirit — it also constrains actions through structure. But RTK has heavier boilerplate (slice, selector, thunk), and actions are still { type, payload } objects with a longer type inference chain. Our tuple approach is lighter, with more direct type inference.

A note for ReScript/OCaml users: If you're familiar with ReScript, you'll notice this is essentially an approximation of variant types + exhaustive pattern matching in TypeScript. ReScript's switch natively provides exhaustive action enumeration and payload type checking — which also validates from another angle that "using type constraints to guide code generation" is the right approach. The difference is we don't need to switch languages — 30 lines of TypeScript gets us 80% of the constraint power.
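To make the variant-type analogy concrete, here is a hedged sketch. The action names are borrowed from the email-auth example above, but the describe helper and the never trick are ours, not part of the Minara architecture:

```typescript
// Action tuples written out as a union — the same shape createReducer infers.
type Action =
  | ['update-email', string]
  | ['send-code']
  | ['success-send-code'];

// Illustrative helper: switching on the tuple's first element narrows the
// payload type, much like matching on a ReScript variant.
function describe(action: Action): string {
  switch (action[0]) {
    case 'update-email':
      return `email set to ${action[1]}`; // action[1] narrowed to string
    case 'send-code':
      return 'sending code';
    case 'success-send-code':
      return 'code sent';
    default: {
      // Poor man's exhaustiveness check: if a new action is added to the
      // union but not handled above, this assignment fails to compile.
      const unreachable: never = action;
      return unreachable;
    }
  }
}

const msg = describe(['update-email', 'a@b.co']);
```

Adding a fourth tuple to Action without a matching case turns the never assignment into a compile error, which covers most of what ReScript's exhaustive switch would give you here.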

Real Comparison: Same Task, Four Architectures, How Does AI Write?

State Management Approaches vs AI Code Generation Reliability

Theory only goes so far — let's run an experiment. We designed a single uniform task and had AI (Claude Sonnet 4) generate code using four different state management approaches, with identical prompts except for specifying which approach to use.

Task Description

Implement a user list page component with the following features:

  • Search box: filter results in real time as the user types
  • Pagination: 10 items per page, with previous/next navigation
  • Sorting: click column headers to sort by name or email
  • Loading state: show a loading indicator during requests
  • Error handling: show an error message on failure with a retry option

Here is the actual AI-generated code (showing only the core state management portion; full code in the appendix).

Approach A: Zustand — 377 lines, everything flat

const useUserStore = create<UserStore>((set, get) => ({
  users: [],
  loading: false,
  error: null,
  searchQuery: '',
  currentPage: 1,
  pageSize: 10,
  sortField: 'name',
  sortDirection: 'asc',

  fetchUsers: async () => {
    set({ loading: true, error: null });
    try {
      const users = await api.getUsers();
      set({ users, loading: false });
    } catch (err) {
      set({ error: err instanceof Error ? err.message : 'Failed to fetch users', loading: false });
    }
  },

  setSearchQuery: (query: string) => {
    set({ searchQuery: query, currentPage: 1 });
  },

  setSortField: (field: SortField) => {
    const { sortField, sortDirection } = get();
    if (sortField === field) {
      set({ sortDirection: sortDirection === 'asc' ? 'desc' : 'asc' });
    } else {
      set({ sortField: field, sortDirection: 'asc' });
    }
  },

  // Derived data hung as methods on the store
  getFilteredSortedUsers: () => {
    const { users, searchQuery, sortField, sortDirection } = get();
    return users
      .filter(u => u.name.toLowerCase().includes(searchQuery.toLowerCase()) || ...)
      .sort((a, b) => { /* ... */ });
  },
  getPaginatedUsers: () => { /* calls getFilteredSortedUsers() */ },
  getTotalPages: () => { /* calls getFilteredSortedUsers() */ },
}));

Problems exposed:

  • State, actions, async logic, and derived calculations all mixed into one object — no layering whatsoever
  • Derived data (getFilteredSortedUsers) is a method on the store, recalculated on every call with no memoization
  • getFilteredSortedUsers is called once each by getPaginatedUsers and getTotalPages — two full filter + sort passes in the same render
  • The UI component is also a 377-line monolithic function with no splitting

Approach B: Jotai — 285 lines, atoms scattered everywhere

// 7 base atoms
const usersAtom = atom<User[]>([]);
const loadingAtom = atom<boolean>(false);
const errorAtom = atom<string | null>(null);
const searchAtom = atom('');
const currentPageAtom = atom<number>(1);
const sortFieldAtom = atom<SortField>('name');
const sortDirectionAtom = atom<SortDirection>('asc');

// 3 derived atoms
const filteredSortedUsersAtom = atom((get) => {
  const users = get(usersAtom);
  const search = get(searchAtom).toLowerCase().trim();
  /* filter + sort ... */
});
const totalPagesAtom = atom((get) => { /* ... */ });
const pagedUsersAtom = atom((get) => { /* ... */ });

Problems exposed:

  • 10 atoms scattered at the file's top level — and this is just a simple list page. Complex features would cause an atom explosion
  • AI wrote fetchUsers as a plain async function inside the component, not a write atom — showing that AI's grasp of Jotai's async patterns is unstable
  • 7 useAtom calls lined up in the component — any atom change triggers a full component re-render
  • handleSort needs to call setSortField then setSortDirection — two separate atom updates that can cause intermediate-state renders

Approach C: Context + useReducer — 407 lines, heaviest boilerplate

type Action =
  | { type: 'FETCH_START' }
  | { type: 'FETCH_SUCCESS'; payload: User[] }
  | { type: 'FETCH_ERROR'; payload: string }
  | { type: 'SET_SEARCH'; payload: string }
  | { type: 'SET_PAGE'; payload: number }
  | { type: 'SET_SORT'; payload: SortField };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'FETCH_START': return { ...state, loading: true, error: null };
    case 'FETCH_SUCCESS': return { ...state, loading: false, users: action.payload };
    /* ... 6 cases */
  }
}

const UserListContext = createContext<ContextValue | null>(null);

function useUserListContext(): ContextValue {
  const ctx = useContext(UserListContext);
  if (!ctx) throw new Error('useUserListContext must be used within UserListProvider');
  return ctx;
}

export function UserListProvider({ children }: { children: React.ReactNode }) {
  const [state, dispatch] = useReducer(reducer, initialState);
  /* side effects, derived data calculation */
  return (
    <UserListContext.Provider value={{ state, dispatch, filteredUsers, pagedUsers, totalPages, fetchUsers }}>
      {children}
    </UserListContext.Provider>
  );
}

Highlights and problems:

  • ✅ Right direction: Action union provides type constraints, reducer is a pure function, UI is split into 6 sub-components (LoadingSpinner, ErrorMessage, UserTable, Pagination, etc.)
  • ❌ But the boilerplate is too heavy: Action union type + switch case + createContext + Provider + useContext hook + null check… "plumbing" code alone is ~40%
  • ❌ AI generated 407 lines — the most of all four approaches — and more code means more opportunities for errors

Approach D: createReducer + three-layer separation — 333 lines, clearest structure

// ===== MODEL LAYER (user-list.model.ts) =====
const initState: UserListState = {
  users: [],
  loading: false,
  error: null,
  search: '',
  page: 1,
  pageSize: 10,
  sortField: 'name',
  sortDirection: 'asc',
  fetchBody: { timestamp: Date.now() }, // trigger state: non-null triggers a request
};

const userListReducer = createReducer<UserListState>()({
  setSearch: (state, search: string) => ({ ...state, search, page: 1 }),
  setPage: (state, page: number) => ({ ...state, page }),
  setSort: (state, field: SortField) => ({
    ...state,
    sortField: field,
    sortDirection: state.sortField === field && state.sortDirection === 'asc' ? 'desc' : 'asc',
    page: 1,
  }),
  fetchStart: (state) => ({ ...state, loading: true, error: null }),
  fetchSuccess: (state, users: User[]) => ({ ...state, loading: false, users, error: null }),
  fetchError: (state, error: string) => ({ ...state, loading: false, error }),
  retry: (state) => ({ ...state, fetchBody: { timestamp: Date.now() } }),
});

// ===== SERVICE LAYER (user-list.service.ts) =====
function useUserListService() {
  const [state, dispatch] = useReducer(userListReducer, initState);

  // trigger state pattern: fetchBody changes trigger a request
  useEffect(() => {
    if (!state.fetchBody) return;
    dispatch(['fetchStart']);
    let cancelled = false;
    api.getUsers()
      .then(users => { if (!cancelled) dispatch(['fetchSuccess', users]); })
      .catch(err => { if (!cancelled) dispatch(['fetchError', err.message]); });
    return () => { cancelled = true; };
  }, [state.fetchBody]);

  const derived = useMemo(() => {
    // filter → sort → paginate, all derived data in one useMemo
    const filtered = state.users.filter(/* ... */);
    const sorted = [...filtered].sort(/* ... */);
    const paginated = sorted.slice(/* ... */);
    return { filteredAndSorted: sorted, paginated, totalPages };
  }, [state]);

  return { state, derived, dispatch };
}

// ===== UI LAYER (user-list.tsx) =====
function UserListPage() {
  const { state, derived, dispatch } = useUserListService();
  // Pure rendering: only reads state/derived, only calls dispatch
  return (
    <div>
      <input value={state.search} onChange={e => dispatch(['setSearch', e.target.value])} />
      {state.loading && <span>Loading...</span>}
      {state.error && <button onClick={() => dispatch(['retry'])}>Retry</button>}
      <table>/* renders using derived.paginated */</table>
      <button onClick={() => dispatch(['setPage', state.page - 1])}>Previous</button>
      <button onClick={() => dispatch(['setPage', state.page + 1])}>Next</button>
    </div>
  );
}

Highlights:

  • ✅ Three-layer separation strictly enforced: Model layer is pure state with no side effects, Service layer orchestrates side effects and derived data, UI layer is pure rendering
  • ✅ AI spontaneously applied the trigger state pattern (fetchBody as a signal) and included request cancellation logic (cancelled flag)
  • ✅ Derived data computed with a single useMemo, unlike the Zustand version's duplicate calculations
  • ❌ AI generated a redundant userListReducerObject (duplicating reducer logic for type extraction) — showing that while createReducer's type inference is strong, AI occasionally doesn't fully trust it

Score Comparison

Based on actual AI-generated code, rated across 5 dimensions:

| Dimension | Zustand | Jotai | Context+Reducer | createReducer three-layer |
| --- | --- | --- | --- | --- |
| Type safety | 6 — store methods have no action constraints | 7 — atoms are typed but set is free | 8 — Action union provides constraints | 9 — tuple enforces full compile-time checking |
| Separation of concerns | 3 — everything flat in one object | 5 — atoms are separate but no layering | 7 — Provider provides some separation | 9 — strict model/service/ui three layers |
| Derived data handling | 4 — no memoization, duplicate computation | 8 — derived atoms auto-cache | 6 — manual computation inside Provider | 8 — unified useMemo computation |
| Lines of code | 377 | 285 | 407 | 333 |
| AI generation reliability | Medium — style unpredictable | Medium-low — async patterns unstable | Medium-high — structure correct but verbose | High — architectural constraints guide consistent output |

Conclusion: there's no perfect solution, but the stronger the constraints, the more predictable the AI output.

Jotai's derived atom mechanism is the most elegant, but AI's grasp of its async patterns is unstable. Context+Reducer points in the right direction but has too much boilerplate. The createReducer three-layer approach isn't perfect either (AI wrote some redundant code), but the structural constraints of three-layer separation make AI's output the most consistent and predictable — and that's the most important metric in the age of AI.

Practical Guide: How to Adopt This in Your Project

You don't need to rewrite your entire project.

Step 1: Use the new pattern for new features first

The next feature that needs state management — write it with the model/service/ui three layers. The migration cost for a single feature is low, but enough for your team to feel the difference.

src/features/user-list/
├── user-list.model.ts    # state + reducer
├── user-list.service.ts  # hooks + side effects
└── user-list.tsx         # UI component

Step 2: Take createReducer with you

createReducer is only 30 lines of code, zero dependencies (the Immer version requires immer). Copy it directly into your project:

// shared/create-reducer.ts — full code in section 3
export function createReducer<S = unknown>() { /* ... */ }
export function createImmerReducer<S = unknown>() { /* ... */ }
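The Immer variant is used the same way, except handlers mutate a draft. Below is a hedged, standalone sketch: to keep it dependency-free, produce is a structuredClone stand-in and the action type is loosened, so treat it as an illustration rather than the real createImmerReducer (which should import produce and Draft from immer):

```typescript
// Stand-in for immer's produce, using structuredClone so this sketch runs
// without the immer dependency. Real code: import { produce } from 'immer'.
const produce = <S>(state: S, recipe: (draft: S) => void): S => {
  const draft = structuredClone(state);
  recipe(draft);
  return draft;
};

// Minimal inline sketch mirroring createImmerReducer from section 3,
// with a simplified action tuple type for brevity.
function createImmerReducer<S = unknown>() {
  return <RO extends Record<string, (state: S, payload: any) => any>>(
      reducerObject: RO
    ) =>
    (state: S, action: [keyof RO & string] | [keyof RO & string, any]): S => {
      const handle = reducerObject[action[0]];
      return typeof handle === 'function'
        ? produce(state, (draft) => handle(draft, action[1]))
        : state;
    };
}

// Handlers mutate the draft directly; produce returns the next immutable state.
const todoReducer = createImmerReducer<{ items: string[] }>()({
  add: (draft, item: string) => {
    draft.items.push(item);
  },
  clear: (draft) => {
    draft.items = [];
  },
});

const s1 = todoReducer({ items: [] }, ['add', 'write docs']); // items: ['write docs']
const s2 = todoReducer(s1, ['clear']);                        // items: []
```

Note that s1 is untouched by the clear dispatch: each call produces a fresh state, which is what makes the draft-mutation style safe with useReducer.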

Step 3: Write the rules clearly — this is the most important step

In your CLAUDE.md, .cursorrules, or project documentation, explicitly define the architectural rules:

## State Management Guidelines

All feature modules that require state management must use model/service/ui three-layer separation:

- **Model** (`.model.ts`): Define initState and reducer (using createReducer), pure functions, no hooks or API calls
- **Service** (`.service.ts`): useReducer + useEffect for side effects, useMemo for derived data
- **UI** (`.tsx`): Pure rendering components, only consume state/dispatch, no direct API calls

For async operations, use the trigger state pattern: set a body object in the model as a signal, and have the service use useEffect to watch and execute it.

The ROI on this text is remarkable — AI reads these rules before generating code every time, then writes according to them. You define the rules once, AI executes them ten thousand times.

Step 4: Migrate gradually, don't big-bang

Old Zustand/Jotai code doesn't need to be migrated all at once. It still works fine. Just adopt the new pattern for new features and refactors, and let the project transition naturally.

Constraints Are Productivity

The criteria for evaluating state management libraries are changing.

For the past decade, we've used DX (Developer Experience) to judge a library — is the API clean, is the learning curve smooth, is there minimal boilerplate? Zustand and Jotai are near-perfect on this dimension.

But the AI era has introduced a new dimension: AI-X (AI Experience) — can AI produce stable, high-quality code within this architecture?

These two dimensions aren't in conflict. Good constraints don't make the human experience worse — three-layer separation makes code easier to understand and maintain, and typed reducers make refactoring safer. They just elevate "good practices" from "suggestions" to "rules."

Constraints aren't limitations. They're productivity.

Try using model/service/ui separation in your next feature and see whether AI-generated code is more directly usable. If you've had similar experiences or a different take, share it in the comments.


Connect with me

If this article was useful to you, feel free to follow me — I'll keep sharing hands-on experience with AI-assisted development, frontend architecture, and engineering productivity. And let me know in the comments what challenges you've run into with AI-assisted coding.
