If you've been following along with my journey, you know that I have been busy building Backlog Explorer. It's a way for gamers to manage their backlogs of unplayed games. As a notorious gamer who constantly buys new games without finishing the ones I already own, I thought this was a great product, one that would finally help me play the games I already had.
I took my time to really think about what I wanted this app to be and do. I wanted users to be able to add games, mark them as complete, filter by genre, by mood, and all the good stuff. My app would stand out from others because it would be about rediscovering what you already own rather than spending more money. My MVP was working really well for me and for the handful of friends I roped into testing it.
Until...this one user came along and wrecked it all.
Not with bad intent or anything. They just used it as it was meant to be used. Except...they had a lot of games. Like, over 1000 games in their library. And when they tried to load their dashboard, my poor little app just...died. Loading times went from snappy to "is this thing working?" The stats dashboard would time out. Filtering became laggy. It was honestly embarrassing.
And when I received their feedback in my email, while on vacation in Spain, I was horrified. I couldn't even clock in to attempt a fix until a week later. I immediately emailed them back, apologized, and told them I would be looking into it as soon as possible and would keep them posted. I was really worried that this was it for Backlog Explorer, that I clearly was not meant to be a developer building things for real people (because apparently...I'm not a real person? 🤷).
But actually, this was the best thing that could have happened to me as a developer.
The Reality Check
You see, I made the assumption that I was the norm: that a BIG backlog meant 50-100 games, because that's what I had, and it seemed like a lot to me. So I tested with a small dataset. But this user showed me what real-world usage looks like. And real-world usage broke my assumptions pretty quickly.
So when I got home from vacation, and my son was back at daycare, I dug in. And the problems were pretty clear once I started digging:
- I was loading ALL games at once instead of paginating
- My database queries weren't optimized for larger datasets
- Dashboard stats were being calculated on the frontend instead of in the database (there's a quick sketch of the fix below)
Basically, I had built something that worked great in my controlled environment of small datasets, but fell apart when it met actual users. All of these problems were also things I had not learned about at bootcamp, so I had to learn fast in order to make the app work for this user (they wrote about how excited they were to use it, so I felt very personally attached to making it work for them). And I don't blame my bootcamp for this. There is only so much they can teach in such a short time. I got the basics, but now it was up to me to level up.
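To make that last point concrete: instead of downloading every row and counting things in JavaScript, you can ask the database to do the counting and send back only the number. Here's a minimal sketch of the idea using a head-only count query in Supabase (the column and status names are illustrative, not my exact schema):

// Count completed games without fetching any rows:
// head: true asks for the count only, no data payload
const { count: completedCount, error } = await supabase
  .from('user_games')
  .select('id', { count: 'exact', head: true })
  .eq('user_id', userId)
  .eq('status', 'completed')

if (!error) {
  // completedCount can go straight into a dashboard stat
  console.log(`Completed games: ${completedCount ?? 0}`)
}

A handful of tiny count queries like that is still far cheaper than shipping a thousand rows to the browser just to count them.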
Learning Pagination the Hard Way
The biggest issue was that I was fetching every single game from the database every time someone loaded their library. For 50 games? No problem. For 1000 games...yeah that's not going to work.
I had to learn about pagination, and not just a "show 10 results per page" kind. I needed server-side pagination that would work with all the complex filtering I had built (more on that in another post).
Here is what I ended up with in my useLibraryGames hook:
const [games, setGames] = useState([])
const [page, setPage] = useState(1)
const pageSize = 30
const [totalCount, setTotalCount] = useState(0)

const fetchGames = useCallback(async () => {
  if (!userId) return

  // Work out which rows belong to the current page
  const from = (page - 1) * pageSize
  const to = from + pageSize - 1

  let query = supabase
    .from('user_games')
    .select(`
      id,
      status,
      progress,
      platforms,
      image,
      updated_at,
      game:games!user_games_game_id_fkey (
        id,
        title,
        background_image
      )
    `, { count: 'exact' })
    .eq('user_id', userId)
    .range(from, to)

  const { data: userGames, count, error } = await query

  if (error) {
    console.error('Error loading library page:', error)
    return
  }

  setGames(userGames || [])
  setTotalCount(count || 0)
}, [userId, page, /* other dependencies */])
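For completeness, here's roughly how the fetch gets triggered; this part is a simplified sketch of the wiring rather than my exact code. Because fetchGames is memoized with useCallback, a page change (or any other dependency change) produces a new function and re-runs the effect:

// Re-fetch whenever the user, the page, or another dependency changes
useEffect(() => {
  fetchGames()
}, [fetchGames])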
The key breakthrough was using Supabase's range(from, to) method combined with { count: 'exact' }. This meant:
- Server-side pagination: Only fetch the 30 games needed for the current page, not all 1000+
- Accurate totals: Get the exact count for pagination controls without loading all records
- Performance: Page loads went from 10+ seconds back to milliseconds
Then in the UI component, I added simple pagination controls:
<div className="flex justify-center items-center gap-2 mt-6">
  <button
    className="btn btn-outline"
    disabled={page === 1}
    onClick={() => setPage(page - 1)}
  >
    Previous
  </button>
  <span>
    Page {page} of {Math.max(1, Math.ceil(totalCount / pageSize))}
  </span>
  <button
    className="btn btn-outline"
    disabled={page * pageSize >= totalCount}
    onClick={() => setPage(page + 1)}
  >
    Next
  </button>
</div>
The Filtering Challenge (Coming Next)
But here's where it got tricky - I had all these filters (by status, genre, mood, platform, year, search query) that users loved. With client-side pagination, filters just worked. With server-side pagination, I had to be more thoughtful about where filtering happened.
Solving this required building a complex system with 16 interdependent filter states that work together through React's dependency system - but that's a whole technical deep-dive for my next post.
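Just to give a flavor of it here: with server-side pagination, each active filter has to become part of the database query itself, applied before the page range so the database does the narrowing. A stripped-down sketch (statusFilter is a stand-in for the real filter state):

// Simplified: filters turn into query clauses, and the page range is applied last
let query = supabase
  .from('user_games')
  .select('*', { count: 'exact' })
  .eq('user_id', userId)

if (statusFilter) {
  query = query.eq('status', statusFilter) // e.g. 'playing'
}

const { data, count, error } = await query.range(from, to)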
The Database Design That Saved My Butt
When I first started building Backlog Explorer, I spent a lot of time thinking about the database schema. At the time, it felt like maybe I was overthinking it. I mean, I was just storing games and user progress, right? How complicated could it be? But I'm really glad I took the time to design it properly from the beginning, because it's what saved me when everything started breaking.
Here's how I structured it:
games table: This holds all the basic game information - title, cover art, description, release date, genres, platforms. Stuff that's the same for everyone.
user_games table: This is where all the personal stuff lives - whether a user owns the game, what status it's in (not started, playing, completed), their progress percentage, personal notes, when they added it.
genres and platforms tables: Normalized lookup tables that multiple games can reference.
The key insight was separating shared data from personal data. When 20 users all have "The Witcher 3" in their libraries, there's still only one record in the games table. All the user-specific stuff (status, progress, notes) lives in separate user_games records that reference the shared game.
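To make that split concrete, here's roughly the shape of those two tables written out as TypeScript types. The field names are simplified from memory rather than the exact schema:

// Shared catalog data: one row per game, identical for every user
interface Game {
  id: string
  title: string
  background_image: string | null
  release_date: string | null
  // genres and platforms are linked through separate normalized lookup tables
}

// Personal data: one row per user per game
interface UserGame {
  id: string
  user_id: string
  game_id: string // points at the single shared Game row
  status: 'not_started' | 'playing' | 'completed'
  progress: number // percentage complete
  notes: string | null
  updated_at: string
}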
This meant a couple of really important things:
- No data duplication. I wasn't storing "The Witcher 3" information 20 times.
- Easy optimization. When I needed to add database indexes for performance, I could target specific tables without affecting user data.
- Consistent game information. If I update a game's cover art or genre information, it updates for everyone automatically.
When that power user with 1000+ games joined and broke my app, this design is what let me fix the performance issues without having to restructure everything. I could add indexes, create optimized views, and improve queries because the data was already organized properly.
What I Actually Learned
Building Backlog Explorer has been this weird mix of "I'm so proud of this thing I built" and "oh god, I have no idea what I'm doing." But here's what I've figured out:
Start simple, but think ahead. I'm glad I designed a normalized database schema from the beginning, even when I was only thinking about small use cases. It made scaling up so much easier.
Real users will break your assumptions. I thought 100 games was a lot. Some people have thousands. Design for edge cases, or at least make sure your app fails gracefully.
Performance matters, even for side projects. Nobody wants to wait 10 seconds for their library to load, even if it's "just" a personal project.
The coolest part? After fixing all these issues, Backlog Explorer actually works really well now. Users with massive libraries can browse their games smoothly, filtering is snappy, and the dashboard stats load quickly. It's not perfect - there's still so much I want to add and improve. But it's a real thing that real people use, and that feels pretty amazing.
Breaking things and fixing them taught me more than any tutorial ever could. Now I know what happens when your database queries get slow, when your state management gets complicated, when external APIs fail. That's the kind of experience you can't get from following along with a course. You have to build something real, let it break, and figure out how to fix it. I can put "performance optimization," "state management," and "API integration" on my resume now. But more than that, I know I can figure stuff out when things go wrong. And in development, things go wrong a lot.
If you want to check out Backlog Explorer, it's live at backlogexplorer.com. Feel free to break it - it'll give me more learning opportunities 😅