This week's Lab4 was quite a step up. Adding a new feature to a peer's project is always a big challenge. The feature itself seemed simple enough on the surface: add support for a TOML configuration file. However, as I quickly learned, the real challenge wasn't just writing the code, but integrating it into an existing codebase and using Git as a true collaboration tool.
When Perfect Logic Still Fails
So.. how did the coding go?
The main goal was to let the program read a `.pack-repo.toml` file for default settings, so users don't have to type out things like `--output` or `--line-number` every single time. My first thought was to grab a TOML library (yes, that TOMLI in the title with the confused face) and then build some if/else logic to check whether the user provided a command-line argument or whether a value existed in the TOML file.
This turned into a bit of a mess. Trying to manually check the default value of every single argument from `argparse` felt fragile. It just didn't feel right. After a bit of research, I found my first real solution: `parser.set_defaults()`. This `argparse` feature was a game-changer. It let me load all the values from the TOML file and set them as the new defaults before parsing the user's command-line arguments. It handled the priority logic (CLI > TOML > hardcoded defaults) in a clean way.
But then I hit a wall. A really frustrating one.
My code was logically perfect. It should have worked. But every time I ran it, the TOML settings were completely ignored. The output was still printing to the console instead of the file I'd specified in my `.pack-repo.toml`.
After adding some `print()` statements to my code, I found the reason. The script was looking for `.pack-repo.toml` in the project's root directory, but I had placed it inside the `src/` folder along with the Python script. It sounds silly, but I think this is also part of debugging. It was a humbling reminder that sometimes the biggest bugs aren't in the code.
Getting Friendly with Git and Pull Requests (already?)
The second part of this lab was all about Git. I have practiced creating a separate branch to work safely on the forked code and merging it afterwards, but it still feels awkward and unnatural. I wanted to understand how the Git workflow actually works. Pushing to my fork just saves my work to my own cloud copy. The Pull Request is the main part. It's the formal "hello, here is my work, please accept it" message.
Once I understood that procedure, the whole workflow made sense. The PR page on GitHub became the central hub where the project owner could see my code and review the changes.
What I Learned and What I'd Do Differently
So, what are my main takeaways from this experience?
**Don't Reinvent the Wheel:** My initial struggle with the priority logic vanished once I found the right tool for the job: `parser.set_defaults()`. I'll definitely spend more time reading the docs of the libraries I'm using in the future.

**Debug Your Assumptions, Not Just Your Code:** The file path issue taught me a valuable lesson. When things aren't working, the problem might not be your complex algorithm, but a simple, incorrect assumption. A `print()` statement is still one of the most useful debugging tools out there.

**A PR is a Conversation Starter:** Git and GitHub are more than just version control. They're collaboration platforms. The PR is where the handover from contributor to project owner happens. Writing a clear description and testing instructions is just as important as writing good code.
Next time, I'll probably be a lot quicker to check the simple stuff first when debugging. And I'll start with a clearer understanding that my push to a fork is just the beginning of the collaborative process, not the end.