AJIT PRADHAN

🚨 How We Prevented Backend Overload in a Next.js App Using TanStack Query Cancellation

A seemingly harmless dropdown was silently overloading our backend and breaking the UI. Here’s how we fixed it — and how you can too.


🕒 Reading Time: 8–10 mins
📌 Tags: Next.js, TanStack Query, React, Web Performance, Backend Optimization, Custom Hooks, Frontend Architecture, DX


🎯 The Problem: A Silent Performance Killer
Last month, our team deployed what we thought was a simple feature: a group selector dropdown in our Next.js dashboard. Users could switch between different business units to view their respective data.
What we didn’t realize was that this innocent dropdown was silently:
• 🔥 Burning through our backend CPU
• 💸 Increasing our cloud costs by 40%
• 😵 Confusing users with stale data
The worst part? Everything worked fine in development.


🧩 The Setup: A “Simple” Group Selector
Our dashboard had three main data sections that updated based on the selected group:

// components/GroupDashboard.tsx
const GroupDashboard = () => {
  // The selected group drives all three data sections below
  const [groupId, setGroupId] = useState('');

  const { data: stats, isLoading } = useGroupStats(groupId);
  const { data: chart } = useGroupChart(groupId);
  const { data: logs } = useGroupLogs(groupId);

  return (
    <div className="dashboard">
      <GroupSelector value={groupId} onChange={setGroupId} />
      {isLoading && <Spinner />}
      <Stats data={stats} />
      <Chart data={chart} />
      <Logs data={logs} />
    </div>
  );
};

Each custom hook wrapped a useQuery call:

// hooks/useGroupStats.ts
export const useGroupStats = (groupId: string) => {
  return useQuery({
    queryKey: ['stats', groupId],
    queryFn: () => fetchStats(groupId),
    enabled: !!groupId,
  });
};

Looks clean, right? We thought so too.


😨 Production Reality: Users Are Fast
In production, users behaved very differently than our test scenarios:
What We Expected:
• User selects group A → waits for data → selects group B → waits for data
What Actually Happened:
• User rapidly clicks: A → B → A → C → B → A
• Each click triggered 3 API calls (stats + chart + logs)
• Previous requests weren’t cancelled
• 15+ concurrent requests for a single user interaction
The Cascade Effect:

  1. 🚨 Multiple API requests fired per interaction
  2. ❌ Previous queries weren’t cancelled
  3. ❌ Old responses overwrote newer ones
  4. 🚨 Backend CPU spiked to 80%
  5. 🚨 UI flickered with stale data
  6. 🚨 Database connection pool exhausted

It wasn't a crash. It was worse: a slow, silent degradation that only became apparent under real user load.

🔍 Root Cause Analysis
TanStack Query’s Default Behavior:
• ✅ Caches responses (good)
• ❌ Doesn’t cancel in-flight requests (bad)
• ❌ Returns responses regardless of how stale they are (bad)
• ❌ Leaves request cancellation up to you (bad)

The Race Condition:
// T0: User clicks Group A → fetchStats('A') starts
// T1: User clicks Group B → fetchStats('B') starts (A still running)
// T2: fetchStats('A') completes → UI shows Group A data
// T3: fetchStats('B') completes → UI shows Group B data
// T4: User clicks Group A again → fetchStats('A') starts (B still running)
Result: Users saw data flicker between different groups, and our backend was processing requests for groups the user had already abandoned.
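
To make the overwrite concrete, here is a tiny standalone simulation (illustrative only: the fakeFetchStats helper and the latencies are made up) of how an earlier, slower response can land after a later, faster one:

// race-demo.ts - simulate two overlapping requests with different latencies
const fakeFetchStats = (groupId: string, latencyMs: number) =>
  new Promise<string>((resolve) =>
    setTimeout(() => resolve(`stats for group ${groupId}`), latencyMs)
  );

let rendered = '';

// T0: user selects A (slow response); T1: user switches to B (fast response)
fakeFetchStats('A', 300).then((data) => { rendered = data; });
fakeFetchStats('B', 100).then((data) => { rendered = data; });

setTimeout(() => {
  // Logs "stats for group A" even though the user's latest choice was B
  console.log(rendered);
}, 500);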


🔬 Our Original Implementation (The Problem)

// hooks/useGroupStats.ts
export const useGroupStats = (groupId: string) => {
  return useQuery({
    queryKey: ['stats', groupId],
    queryFn: () => fetchStats(groupId),
    enabled: !!groupId,
  });
};
// utils/fetchStats.ts
export const fetchStats = async (groupId: string) => {
  const res = await fetch(`/api/stats?groupId=${groupId}`);
  if (!res.ok) throw new Error('Failed to fetch stats');
  return res.json();
};

Problem: fetch() doesn't cancel anything on its own, and we were ignoring the AbortSignal that TanStack Query passes to every queryFn.


✅ The Fix: Step-by-Step
Step 1: Update Your Fetch Function to Use signal

// utils/fetchStats.ts
export const fetchStats = async ({ queryKey, signal }) => {
  const [, groupId] = queryKey;
  const res = await fetch(`/api/stats?groupId=${groupId}`, { signal });
  if (!res.ok) throw new Error('Failed to fetch stats');
  return res.json();
};
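
If you're on TypeScript, the object TanStack Query hands to your queryFn can be typed with its exported QueryFunctionContext type. Here's a minimal typed sketch of the same function (the tuple key type is an assumption based on how we build the key in the hook):

// utils/fetchStats.ts (typed sketch)
import type { QueryFunctionContext } from '@tanstack/react-query';

export const fetchStats = async ({
  queryKey,
  signal,
}: QueryFunctionContext<['stats', string]>) => {
  const [, groupId] = queryKey;
  const res = await fetch(`/api/stats?groupId=${groupId}`, { signal });
  if (!res.ok) throw new Error('Failed to fetch stats');
  return res.json();
};

Depending on your setup, you may need to declare the key as a literal tuple in the hook (for example with as const) so the types line up.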

Step 2: Update Your Hook to Use New Signature

// hooks/useGroupStats.ts
import { fetchStats } from '../utils/fetchStats';

export const useGroupStats = (groupId: string) => {
  return useQuery({
    queryKey: ['stats', groupId],
    queryFn: fetchStats,
    enabled: !!groupId,
  });
};

Step 3: Repeat for Other Hooks

// hooks/useGroupChart.ts
export const useGroupChart = (groupId: string) => {
  return useQuery({
    queryKey: ['chart', groupId],
    queryFn: async ({ queryKey, signal }) => {
      const [, id] = queryKey;
      const res = await fetch(`/api/chart?groupId=${id}`, { signal });
      if (!res.ok) throw new Error('Failed to fetch chart');
      return res.json();
    },
    enabled: !!groupId,
  });
};
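
For completeness, the logs hook follows the same pattern. This is a sketch: the /api/logs endpoint and its query parameter are assumptions, so mirror whatever your actual route looks like.

// hooks/useGroupLogs.ts
export const useGroupLogs = (groupId: string) => {
  return useQuery({
    queryKey: ['logs', groupId],
    queryFn: async ({ queryKey, signal }) => {
      const [, id] = queryKey;
      // Forwarding the signal lets TanStack Query abort this request
      // as soon as the user switches to another group
      const res = await fetch(`/api/logs?groupId=${id}`, { signal });
      if (!res.ok) throw new Error('Failed to fetch logs');
      return res.json();
    },
    enabled: !!groupId,
  });
};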

Step 4: Handle AbortError (Optional)

try {
  const res = await fetch(...); // your fetch call, with { signal } passed through
  return await res.json();
} catch (err) {
  if (err.name === 'AbortError') return; // Cancelled request: nothing to recover
  throw err; // Real errors still surface to TanStack Query
}
  • Cancellation isn't manual; it happens automatically once you consume the signal TanStack Query provides.
  • As long as you pass signal to fetch() and the queryKey changes, the in-flight request for the old key is aborted.
  • You can confirm it with a console.log or in the network panel, where superseded requests show up as cancelled (see the debugging sketch below).
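
If you want to watch cancellation happen without opening DevTools, one option is a throwaway debugging variant of the fetch function (a sketch, not something to ship) that listens for the abort event on the signal:

// utils/fetchStats.ts (debugging variant, sketch only)
export const fetchStats = async ({ queryKey, signal }) => {
  const [, groupId] = queryKey;

  // Fires when TanStack Query aborts this request, e.g. after a rapid group switch
  signal?.addEventListener('abort', () => {
    console.log(`stats request for group ${groupId} was cancelled`);
  });

  const res = await fetch(`/api/stats?groupId=${groupId}`, { signal });
  if (!res.ok) throw new Error('Failed to fetch stats');
  return res.json();
};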

📊 Results After Implementation
| Metric         | Before    | After  |
| -------------- | --------- | ------ |
| Backend CPU    | 80%+      | 30%    |
| UI Response    | Flickered | Smooth |
| Cloud Cost     | +40%      | Stable |
| User Confusion | High      | None   |


🧐 Key Takeaways
• Query caching ≠ Query control
• Always pass `signal` to fetch() for request cancellation
• Real users act fast — design for it
• Race conditions confuse users more than they break code


🔥 Bonus: Debounce Rapid Clicks

// hooks/useDebounce.ts
import { useEffect, useState } from 'react';

export const useDebounce = <T>(value: T, delay: number): T => {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    // Restart the timer every time the value changes; only the last value survives
    const t = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(t);
  }, [value, delay]);

  return debounced;
};

// In the component: only fire queries once the selection has settled
const debouncedGroupId = useDebounce(groupId, 250);
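
Wired into the dashboard, the raw groupId still drives the selector (so the UI feels instant) while the debounced value feeds the queries. This is a sketch built on the hooks above, with the import paths assumed:

// components/GroupDashboard.tsx (sketch)
import { useState } from 'react';
import { useGroupStats } from '../hooks/useGroupStats';
import { useGroupChart } from '../hooks/useGroupChart';
import { useGroupLogs } from '../hooks/useGroupLogs';
import { useDebounce } from '../hooks/useDebounce';

const GroupDashboard = () => {
  const [groupId, setGroupId] = useState('');
  // The selector updates instantly; queries wait until the value settles for 250 ms
  const debouncedGroupId = useDebounce(groupId, 250);

  const { data: stats, isLoading } = useGroupStats(debouncedGroupId);
  const { data: chart } = useGroupChart(debouncedGroupId);
  const { data: logs } = useGroupLogs(debouncedGroupId);

  return (
    <div className="dashboard">
      <GroupSelector value={groupId} onChange={setGroupId} />
      {isLoading && <Spinner />}
      <Stats data={stats} />
      <Chart data={chart} />
      <Logs data={logs} />
    </div>
  );
};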


🌟 Final Thoughts
This one line — { signal } — saved us from performance chaos.
Sometimes the fix isn’t a new tool. It’s understanding your current stack deeply.
Before you ship:
• ✅ Are stale requests being cancelled?
• ✅ Is UI reflecting the latest intent?
• ✅ Is your backend wasting compute?
If you answered “maybe,” then you probably need this fix.


Have you faced similar issues? Share your story — these lessons help everyone build better.
