<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Basile</title>
    <description>The latest articles on DEV Community by Basile (@karnak19).</description>
    <link>https://dev.to/karnak19</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F144412%2F83d8a2df-aeba-4b07-8c1a-e8e2a12b811f.jpg</url>
      <title>DEV Community: Basile</title>
      <link>https://dev.to/karnak19</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karnak19"/>
    <language>en</language>
    <item>
      <title>From 4 Seconds to 40ms: A Full-Stack Performance Odyssey</title>
      <dc:creator>Basile</dc:creator>
      <pubDate>Fri, 22 Aug 2025 13:34:56 +0000</pubDate>
      <link>https://dev.to/karnak19/from-4-seconds-to-40ms-a-full-stack-performance-odyssey-5192</link>
      <guid>https://dev.to/karnak19/from-4-seconds-to-40ms-a-full-stack-performance-odyssey-5192</guid>
      <description>&lt;p&gt;When we first launched the &lt;strong&gt;Builds&lt;/strong&gt; page on &lt;a href="https://supervive-stats.com/builds" rel="noopener noreferrer"&gt;supervive-stats.com/builds&lt;/a&gt;, it was painfully slow. A user's first view of our community-created builds took a staggering &lt;strong&gt;4 seconds&lt;/strong&gt; to load. For a core feature meant for browsing and discovery, this was an eternity.&lt;/p&gt;

&lt;p&gt;This is the story of our journey to fix it. We'll cover our initial flawed attempts, a clever but ultimately failed intermediate solution, and the final full-stack architecture—from SQL views to Next.js hydration—that took our load times from an embarrassing 4,000ms to a blazing-fast sub-100ms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A note on our approach:&lt;/strong&gt; Throughout this entire journey, we consistently used server-side prefetching to improve initial load times. What evolved was our query strategy: starting with &lt;code&gt;useInfiniteQuery&lt;/code&gt;, then moving to a simple &lt;code&gt;useQuery&lt;/code&gt;, and finally returning to &lt;code&gt;useInfiniteQuery&lt;/code&gt; with the right architecture to support it.&lt;/p&gt;

&lt;p&gt;Note: As English is not my native language, this article was written with the help of AI, especially for the diagrams, phrasing, and grammar.&lt;/p&gt;

&lt;p&gt;Note 2: Mermaid diagrams do not work on dev.to; you can see them &lt;a href="https://basile.vernouillet.dev/blog/from-4-seconds-to-40ms-a-full-stack-performance-odyssey" rel="noopener noreferrer"&gt;in the original blog post&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Starting Point: It Worked… Until It Didn't
&lt;/h2&gt;

&lt;p&gt;In the beginning, everything was fine. With only a handful of builds, our database queries felt snappy. We were using PocketBase, and its &lt;code&gt;expand&lt;/code&gt; feature was convenient, allowing us to fetch a build and all its related data—perks, abilities, user info, tags—in a single request. Sorting was handled client-side.&lt;/p&gt;

&lt;p&gt;But as the number of builds grew towards 1,000, the cracks began to show. What once felt instant now involved a noticeable, frustrating delay. In the world of databases, 1,000 rows is tiny. This wasn't a scaling problem; it was an efficiency problem. Our convenient shortcuts had created two major bottlenecks: massive data payloads and inefficient sorting logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: The Initial Implementation - An Infinite Mess
&lt;/h2&gt;

&lt;p&gt;Our first version used TanStack Query's &lt;code&gt;useInfiniteQuery&lt;/code&gt; hook with server-side prefetching, aiming for a seamless infinite scroll experience. However, it was built on two flawed assumptions.&lt;/p&gt;

&lt;p&gt;First, we were fetching far too much data with a massive &lt;code&gt;expand&lt;/code&gt; query in our PocketBase backend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grip_1,grip_2,relic_1,relic_2,kicks,perk_1,perk_2,perk_3,perk_4,perk_5,abilities.ability,consumables,queues,hunter,user,tags
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resulted in a &lt;strong&gt;3 MB JSON payload&lt;/strong&gt; just for the initial view.&lt;/p&gt;

&lt;p&gt;Second, and more catastrophically, our complex popularity algorithm—a hybrid of a &lt;strong&gt;Wilson score interval&lt;/strong&gt; for votes and an &lt;strong&gt;engagement score&lt;/strong&gt; for views and comments—was calculated entirely on the &lt;strong&gt;client-side&lt;/strong&gt;.&lt;/p&gt;
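&lt;p&gt;For reference, the hybrid score we computed in the browser looked roughly like this. This is a simplified sketch: the confidence level and the engagement weights below are illustrative, not our production constants.&lt;/p&gt;

```typescript
// Sketch of a hybrid popularity score: a Wilson score lower bound for
// votes, plus a log-damped engagement term for views and comments.
// The z value (95% confidence) and the weights are illustrative.
function wilsonLowerBound(upvotes: number, total: number, z: number = 1.96): number {
  if (total === 0) return 0;
  const phat = upvotes / total;
  const z2 = z * z;
  const centre = phat + z2 / (2 * total);
  const margin = z * Math.sqrt((phat * (1 - phat) + z2 / (4 * total)) / total);
  return (centre - margin) / (1 + z2 / total);
}

function popularity(upvotes: number, downvotes: number, views: number, comments: number): number {
  const voteScore = wilsonLowerBound(upvotes, upvotes + downvotes);
  const engagement = 0.1 * Math.log10(1 + views) + 0.05 * comments;
  return voteScore + engagement;
}
```

&lt;p&gt;Computing this for a thousand builds is cheap on its own; the real cost was that doing it in the browser forced the client to download every build before it could sort anything.&lt;/p&gt;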

&lt;p&gt;This created a disastrous performance trap. To sort the builds correctly, the client needed data for &lt;em&gt;all&lt;/em&gt; of them. As a result, every time &lt;code&gt;fetchNextPage&lt;/code&gt; was called to load more items, we had to &lt;strong&gt;re-fetch the entire list of builds&lt;/strong&gt;. The "infinite scroll" was an illusion that triggered a full 3 MB data load on every scroll. It was inefficient, expensive, and delivered a terrible user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: The Intermediate Fix - A Fast UI with a Fatal Flaw
&lt;/h2&gt;

&lt;p&gt;Realizing the infinite query was untenable, we pivoted. We replaced &lt;code&gt;useInfiniteQuery&lt;/code&gt; with a single &lt;code&gt;useQuery&lt;/code&gt; that still used server-side prefetching but fetched all builds on the initial load. We then implemented a "fake" pagination system on the client, slicing the in-memory array of sorted builds to display pages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Good:&lt;/strong&gt; Once the initial load was complete, the user experience was fantastic. Navigating between pages was instantaneous because all the data was already in memory. The UI was incredibly snappy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Bad:&lt;/strong&gt; The First Contentful Paint (FCP) was now the entire problem. The user was stuck staring at a loading spinner for ~4 seconds while we downloaded and processed the massive dataset. We had traded a bad interaction experience for an abysmal initial load experience.&lt;/p&gt;

&lt;p&gt;This phase taught us a critical lesson: &lt;strong&gt;Client-side sorting at scale is a losing battle.&lt;/strong&gt; The real work had to move to the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: The Final Architecture - A Holistic, Full-Stack Solution
&lt;/h2&gt;

&lt;p&gt;Armed with this knowledge, we designed a new solution from the ground up, addressing the bottlenecks at every layer of the stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part 1: Building a Fast Foundation (The Back-End)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server-Side Calculation:&lt;/strong&gt; First, we moved the entire hybrid popularity algorithm into a server-side CRON job. This job periodically pre-calculates the score for every build and stores it in a dedicated &lt;code&gt;builds_popularity&lt;/code&gt; table. The client is no longer responsible for any heavy lifting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Eliminating the Back-Relation Bottleneck:&lt;/strong&gt; With popularity pre-computed, we could now query for pages of sorted builds directly. However, our initial queries were still slow, taking 2-4 seconds. The culprit was a &lt;strong&gt;back-relation sort&lt;/strong&gt;. We were asking the &lt;code&gt;builds&lt;/code&gt; table to sort itself using a value from the separate &lt;code&gt;builds_popularity&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;Even after precomputing popularity, we were still sorting on a back-relation:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-builds_popularity_via_build.popularity
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This turned out to be a big performance sink. Why? Because PocketBase has to join across collections dynamically to resolve back-relations. That means every paginated request (&lt;code&gt;?page=1&amp;amp;perPage=10&lt;/code&gt;) still triggers heavy work under the hood — fetching, joining, and then sorting — which left us with 2–4 second response times for just 10 builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In short: sorting on back-relations doesn't scale well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix: Flattening With a View Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The solution was to remove the back-relation entirely by flattening popularity into the same row as the build. We created a &lt;strong&gt;SQL view&lt;/strong&gt; that joins &lt;code&gt;builds&lt;/code&gt; with &lt;code&gt;builds_popularity&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="n"&gt;builds_view_table&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;-- (all other build columns)&lt;/span&gt;
    &lt;span class="n"&gt;bp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;popularity&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;
    &lt;span class="n"&gt;builds&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;
&lt;span class="k"&gt;LEFT&lt;/span&gt; &lt;span class="k"&gt;JOIN&lt;/span&gt;
    &lt;span class="n"&gt;builds_popularity&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;bp&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Now the frontend just queries the &lt;code&gt;builds_view_table&lt;/code&gt; and sorts with:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-popularity
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;No back-relation, no heavy join cost, just fast queries. This change alone dropped our API response time for a page of results to &lt;strong&gt;well under 100ms&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(You might ask: why not just add a &lt;code&gt;popularity&lt;/code&gt; column to the &lt;code&gt;builds&lt;/code&gt; table? We use the &lt;code&gt;updated&lt;/code&gt; timestamp to show when an author last modified a build. If our CRON job constantly updated the &lt;code&gt;builds&lt;/code&gt; table, that timestamp would become meaningless. The view provided the performance win without disrupting our existing schema logic.)&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
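&lt;p&gt;To make the back-end flow concrete: the CRON job boils down to mapping every build's stats to one row keyed by the build id, which is exactly what the view joins on (&lt;code&gt;bp.build = b.id&lt;/code&gt;). A sketch with illustrative types; the actual upsert into &lt;code&gt;builds_popularity&lt;/code&gt; is omitted:&lt;/p&gt;

```typescript
// Sketch of the CRON step's output: one precomputed popularity row per
// build, ready to upsert into the builds_popularity table. The scoring
// function is passed in; the types here are illustrative.
interface BuildStats {
  id: string;
  upvotes: number;
  downvotes: number;
  views: number;
  comments: number;
}

interface PopularityRow {
  build: string; // foreign key back to builds.id, as in the SQL view
  popularity: number;
}

function computePopularityRows(
  builds: BuildStats[],
  score: (b: BuildStats) => number,
): PopularityRow[] {
  return builds.map((b) => ({ build: b.id, popularity: score(b) }));
}
```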

&lt;h3&gt;
  
  
  Part 2: Delivering an Instant Experience (The Front-End)
&lt;/h3&gt;

&lt;p&gt;With a blazing-fast API, we could now architect the front-end to feel instantaneous.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server-Side Rendering the First Page:&lt;/strong&gt; We returned to &lt;code&gt;useInfiniteQuery&lt;/code&gt;, continuing our consistent use of server-side prefetching but now with a proper strategy. In our Next.js page component, we pre-fetch the &lt;em&gt;very first page&lt;/em&gt; of builds on the server. This data is then passed to the client and used to hydrate the TanStack Query cache.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// On the Next.js Page component (server-side)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queryClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;QueryClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;buildsInfiniteQueryOptions&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Prefetch the first page of data on the server&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;queryClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;prefetchInfiniteQuery&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HydrationBoundary&lt;/span&gt; &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;dehydrate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;queryClient&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;BuildsList&lt;/span&gt; &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;HydrationBoundary&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;The result is that the user receives server-rendered HTML that already contains the first set of builds. The page is visible and meaningful &lt;em&gt;immediately&lt;/em&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Seamless Client-Side Takeover and Lazy Loading:&lt;/strong&gt; The &lt;code&gt;BuildsList&lt;/code&gt; component on the client hydrates from this server-provided state, ensuring there's no flicker or re-fetching of visible data. Subsequent pages are now fetched efficiently with &lt;code&gt;fetchNextPage&lt;/code&gt; as the user scrolls.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Inside the BuildsList component (client-side)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fetchNextPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hasNextPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="nf"&gt;useInfiniteQuery&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;buildsInfiniteQueryOptions&lt;/span&gt;&lt;span class="p"&gt;(...));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Finally, we implemented one last optimization. The individual build cards themselves are now responsible for lazily fetching their detailed data (specific perks, items, etc.). Since many builds share the same items, TanStack Query's cache ensures that once an item is fetched for one card, it's instantly available from memory for any other card that needs it, minimizing network requests.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
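&lt;p&gt;The cache-sharing behaviour the build cards rely on can be illustrated with a toy version. TanStack Query keys entries by &lt;code&gt;queryKey&lt;/code&gt; and deduplicates for us; this sketch shows only the principle, not the library's API:&lt;/p&gt;

```typescript
// Toy illustration of the shared cache: many build cards can ask for the
// same item, but only the first request per id does any "network" work.
const itemCache = new Map();
let networkCalls = 0;

function getItem(id: string): { id: string; name: string } {
  const cached = itemCache.get(id);
  if (cached !== undefined) return cached;
  networkCalls += 1; // stand-in for the real fetch
  const item = { id, name: 'item-' + id };
  itemCache.set(id, item);
  return item;
}
```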

&lt;h2&gt;
  
  
  The Result: The Best of All Worlds
&lt;/h2&gt;

&lt;p&gt;By moving through this iterative process, we landed on a solution that solved all our previous problems and delivered a superior user experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;API Response Time:&lt;/strong&gt; From &lt;strong&gt;~4,000ms&lt;/strong&gt; to &lt;strong&gt;&amp;lt;100ms&lt;/strong&gt; (often ~40ms).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Payload Size:&lt;/strong&gt; From &lt;strong&gt;~3 MB&lt;/strong&gt; for the initial view to a few kilobytes for the hydrated data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Perceived Performance:&lt;/strong&gt; Near-instant load time thanks to SSR and hydration, fixing the FCP issue from our intermediate solution.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Efficient Data Fetching:&lt;/strong&gt; True, efficient API pagination, fixing the "re-fetch everything" nightmare of our initial approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This journey was a powerful lesson in full-stack development. A fast API is essential, but it's the intelligent synergy between a well-designed back-end and a smart front-end loading strategy that turns a fast API into a truly instantaneous user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Problem:&lt;/strong&gt; A builds listing page that took 4 seconds to load due to massive data payloads (3 MB) and client-side sorting of complex popularity algorithms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Journey:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Phase 1:&lt;/strong&gt; &lt;code&gt;useInfiniteQuery&lt;/code&gt; with server prefetching → Failed due to client-side sorting requiring full dataset re-fetches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 2:&lt;/strong&gt; &lt;code&gt;useQuery&lt;/code&gt; with server prefetching → Fast navigation but terrible initial load (4s loading spinner)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3:&lt;/strong&gt; Back to &lt;code&gt;useInfiniteQuery&lt;/code&gt; with proper architecture → Success&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Technical Solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Move computation server-side:&lt;/strong&gt; Popularity algorithm → CRON job → dedicated table&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eliminate back-relations:&lt;/strong&gt; PocketBase back-relation sorts are slow → SQL view flattening data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart prefetching:&lt;/strong&gt; Server-side render first page → hydrate TanStack Query cache → seamless client takeover&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lazy loading:&lt;/strong&gt; Individual build cards fetch detailed data on-demand with shared caching&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Result:&lt;/strong&gt; 4,000ms → 40ms load times, 3 MB → few KB payloads, instant perceived performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Lesson:&lt;/strong&gt; Client-side sorting doesn't scale. The real performance wins come from intelligent backend design combined with strategic frontend hydration.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>pocketbase</category>
      <category>webperf</category>
    </item>
  </channel>
</rss>
