Tasbi Tasbi

A Journey Through Code Reviews: The Good, The Bad, and The “Wait, What?” Moments

Let’s be honest, code reviews are like showing someone your cooking for the first time. You think it's great, but then someone walks in and asks, "Did you mean to add that much salt?" Suddenly, you’re wondering how you’ve survived eating your own food this long. That’s pretty much how it feels when your carefully crafted code is exposed to the wild world of open-source collaboration.

Spoiler: It’s both terrifying and awesome at the same time.

Between diving into someone else’s code (and silently wondering why they did that) and having my own code put under the microscope, this journey was nothing short of eye-opening. So, how did I survive this coding adventure? Let’s break it down.

Code Review Approach: Async vs Sync
During this process, I primarily used an asynchronous (async) approach for code reviews, which involved creating and responding to issues on GitHub. I prefer async reviews because they give me the freedom to review and respond at my own pace. It's especially helpful when trying to juggle other commitments (or procrastinate on them).

But sometimes, sync communication through Slack was a lifesaver. When you need an answer right now, there’s nothing like a quick back-and-forth. For those "wait, what does this do?" moments, hopping into a chat made everything faster. Still, async is the chill method that lets you dive deep into the code without distractions.

Reviewing Someone Else's Code
Testing and reviewing someone else’s code? Now that’s an adventure. Initially, I had to dive deep into the unfamiliar codebase. It felt like trying to navigate someone else’s IKEA furniture instructions but in a programming language.

It was like being a detective on a new case—except instead of solving a crime, I was trying to figure out why their code was acting like a misbehaving child. One minute you’re like, “Oh, this is clever,” and the next, you’re staring at a hardcoded API URL like, “Really?”

Honestly, the hardest part was setting up their environment. Their README was missing key steps, so I had to go full-on Sherlock Holmes to get things working. But once I cracked the code (pun intended), I found some interesting issues—like trying to process non-existent files or catching errors with a blanket exception like it's a cozy night in.

Having My Code Reviewed
Imagine cleaning your house for hours, but when someone walks in, they immediately point out the dust on the ceiling fan. Yup, that's what it felt like. My reviewer was nice enough to point out that I could improve my error handling, and that showing a "user-friendly" message when the README generation succeeded (or failed) wouldn't be a bad idea, instead of letting a raw traceback do the talking.
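For the curious, here's a minimal sketch of what that friendlier handling might look like. The `generate_readme` helper is a hypothetical stand-in for my tool's actual logic, not the real thing:

```python
import sys
from pathlib import Path

def generate_readme(project_dir: str) -> None:
    """Hypothetical stand-in for my tool's real README generation step."""
    if not Path(project_dir).exists():
        raise FileNotFoundError(project_dir)
    Path("README.md").write_text("# Generated README\n")

def main() -> None:
    try:
        generate_readme("my_project/")
    except FileNotFoundError as err:
        # A friendly message beats a raw traceback
        print(f"Sorry, couldn't find '{err}'. Double-check the path and try again.", file=sys.stderr)
        sys.exit(1)
    print("Done! Your shiny new README.md is ready.")

if __name__ == "__main__":
    main()
```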

The surprising part? These tiny changes made a huge difference. Suddenly, my code wasn’t just functional—it was polished. Getting feedback that actually improved the user experience made the process feel less like criticism and more like a team effort.

What kinds of issues came up in your testing and review?
Oh, where to start? There were a few gems:

  1. Hardcoded API URL in api_handler.py: It’s like putting your house key under the welcome mat and hoping no one ever changes the locks. I suggested moving the API URL to a config file or environment variable—because nobody likes hardcoded stuff (except maybe the person who did it).
  2. Generic Exception Handling in complexity_analyzer.py: Catching all errors with a generic Exception is like throwing a giant net over everything and hoping it works out. Spoiler: It doesn’t. I suggested they get specific with their exceptions for easier debugging.
  3. Missing File Handling in main.py: The script kept running even when files didn’t exist. It was like trying to bake a cake without flour—sure, you could try, but it’s not going to turn out well. I recommended the script stop and actually handle missing files.
  4. Inconsistent Documentation: Reading the docs was like solving a riddle. Some files were documented, others? Not so much. I nudged them to get a bit more consistent—because, in the world of coding, a good docstring is worth its weight in gold. (A rough sketch rolling all four fixes into one follows this list.)
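Since their repo isn't mine to paste here, this is just a sketch of the shape those four suggestions took, rolled into one function. Every name in it (`API_URL`, `analyze_file`) is a placeholder I made up, not their actual code:

```python
import os
import sys
from pathlib import Path

import requests  # assuming the project already uses requests for its API calls

# Fix 1: no more hardcoded URL. Read it from an environment variable,
# with a sensible default for local development.
API_URL = os.getenv("API_URL", "http://localhost:8000/api")

def analyze_file(path: str) -> str:
    """Send a source file to the API for analysis (fix 4: docstrings everywhere)."""
    # Fix 3: fail fast on missing files instead of limping along.
    file_path = Path(path)
    if not file_path.exists():
        sys.exit(f"Error: '{path}' does not exist. Nothing to analyze.")

    # Fix 2: catch specific exceptions instead of a blanket `except Exception`.
    try:
        response = requests.post(API_URL, data=file_path.read_bytes(), timeout=10)
        response.raise_for_status()
    except requests.exceptions.Timeout:
        sys.exit("Error: the API request timed out. Is the server up?")
    except requests.exceptions.HTTPError as err:
        sys.exit(f"Error: the API rejected the request: {err}")

    return response.text
```

Reading the URL from the environment means the same code runs against localhost in development and the real API in production, no edits required.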

Was I able to fix all the issues?
Absolutely! Fixing the issues was like that moment when you finally untangle your headphones—super satisfying. I cleaned up my error handling, added a timeout for API requests (no more endless waits), and made the model selection configurable, because why hardcode when you can be flexible? The process made me feel like I leveled up as a developer. Who knew that a few tweaks could make everything so much smoother?
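If you want the flavour of those tweaks, here's a small sketch. I'm assuming an OpenAI-style chat completions endpoint here; your tool's actual API (and the variable names I picked) may differ:

```python
import os

import requests

# The model is configurable instead of hardcoded; the variable name
# and default here are my own picks, not anything official.
MODEL = os.getenv("LLM_MODEL", "gpt-4o-mini")

def call_model(prompt: str, api_key: str) -> str:
    """Call a chat completions API, with a timeout so a dead server can't hang forever."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=30,  # seconds; no more endless waits
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

The timeout means a dead server fails loudly after 30 seconds instead of hanging forever, and the environment variable lets anyone swap models without touching the code.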

Learning Outcome
This whole experience was a learning goldmine. If I had to boil it down:

  • Documentation is king: Skipping documentation is like leaving your future self (or any poor soul who touches your code) lost in the wilderness without a map.

  • Code reviews make you better: Having someone point out your blind spots isn’t always fun, but it’s necessary. It pushes you to think about things like user experience and clean, maintainable code.

  • Feedback is your friend: Sure, it stings when someone points out a mistake, but those tiny bits of feedback make a huge difference. Now I’m a better coder because of it.

In the end, code reviews weren’t just about fixing bugs—they were about growth. And hey, next time, I’ll probably double-check my own code for hardcoded URLs. 😅
