I’ve found that the best way to stay technically sharp is to live with the consequences of old decisions. Over time, even “good” architectures show their cracks. Keeping a long-running personal project, one that I revisit, rewrite, migrate, and refactor as my skills and expectations evolve, has been my way of staying current.
My project tracks win/loss statistics for poker games I've played in. Version 1 was XML, XSLT, and a bit of shell scripting that produced a simple static HTML/CSS site. Version 2 moved to C# and JSON and shifted from a web application to a CLI. That transition is where I learned more deeply about dependency injection and event-sourced architectures. Today, the project is written in Python, backed by NDJSON, and uses Polars, which has been a great way to deepen my Python data-processing skills.
The right early decisions can set you up for success.
Projects evolve, and the architecture you start with is rarely the one you need later. That means intentionally designing for change. For example, while my project has gone through multiple JSON libraries, the rest of the code never cared.
Early on, I implemented the Repository pattern, which meant the only thing that mattered was the contract. Trying out a new library, or porting over to one completely, was as easy as adding a new class and changing the factory, and it also let me run the new code side by side with the old for testing.
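The post doesn't show the actual contract, so here is a hedged sketch of the pattern as described: an abstract repository, two interchangeable backends, and a factory that is the single place to switch between them. All class and method names are invented for illustration.

```python
from abc import ABC, abstractmethod

# Hypothetical contract; callers depend only on this interface.
class SessionRepository(ABC):
    @abstractmethod
    def add(self, session: dict) -> None: ...

    @abstractmethod
    def all_sessions(self) -> list[dict]: ...

class StdlibJsonRepository(SessionRepository):
    """Imagine this one serializing via the stdlib json module."""
    def __init__(self) -> None:
        self._rows: list[dict] = []

    def add(self, session: dict) -> None:
        self._rows.append(session)

    def all_sessions(self) -> list[dict]:
        return list(self._rows)

class OrjsonRepository(StdlibJsonRepository):
    """A drop-in candidate backed by a different JSON library.
    Same contract, different serialization internals."""

def make_repository(backend: str = "stdlib") -> SessionRepository:
    # Swapping libraries means adding a class and changing this factory;
    # nothing else in the codebase has to care.
    if backend == "orjson":
        return OrjsonRepository()
    return StdlibJsonRepository()
```

Because both implementations satisfy the same contract, running them side by side for comparison is just a matter of constructing both and feeding them the same data.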
Your skills evolve — and so does your tolerance for “good enough.”
Code that felt acceptable a year ago can become painful to work with once your expectations (or experience) change.
For a long time, I avoided refactoring my argument parser module because it worked and rarely needed updates. When I recently added several new commands, that changed. The module had grown into a single large function that configured argparse and then used a dispatch pattern to route commands. It worked, but it was brittle and hard to reason about.
My first pass was to break that function into individual command registration functions. Better, but still awkward: registration and execution were separate concerns, and the linkage between them relied on convention rather than structure. You had to know that `_register_stats_command(...)` corresponded to `stats(...)`.
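To make the convention problem concrete, here is a minimal sketch of that intermediate design (function and command names are illustrative, not the project's actual code): nothing in the code ties the registration function to its handler except the shared name.

```python
import argparse

def _register_stats_command(subparsers) -> None:
    # Configures the "stats" subcommand...
    parser = subparsers.add_parser("stats", help="Show win/loss statistics")
    parser.add_argument("--year", type=int)

def stats(args: argparse.Namespace) -> str:
    # ...but only a naming convention links it to this handler.
    return f"stats for {args.year}"

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="poker")
    subparsers = parser.add_subparsers(dest="command")
    _register_stats_command(subparsers)
    return parser

def dispatch(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    # Dispatch by looking up the handler with the same name as the command.
    return globals()[args.command](args)
```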
That was the signal that a different abstraction was needed.
I introduced a `@Command` decorator to register commands. I added a few supporting abstractions like `Argument` and `ExclusiveGroup`. This allowed me to keep argparse configuration directly alongside the dispatch function. It was cleaner, easier to reason about, and enabled additional validation that wasn’t practical before. For example, I now validate that the command and the function name match.
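The post doesn't include the decorator's implementation, so the following is one possible sketch of the idea, not the author's actual code: a registry-backed decorator (spelled lowercase `command` here) with an `Argument` helper, omitting `ExclusiveGroup` for brevity. The name-match validation the post mentions is shown in the wrapper.

```python
import argparse
from dataclasses import dataclass, field

# Hypothetical supporting abstraction; the real project's version is not shown.
@dataclass
class Argument:
    flags: tuple[str, ...]
    options: dict = field(default_factory=dict)

_REGISTRY: dict = {}

def command(name: str, *arguments: Argument):
    """Register a subcommand; argparse config lives next to its handler."""
    def wrapper(func):
        # Validation that wasn't practical before: the command name
        # must match the handler's function name.
        if func.__name__ != name:
            raise ValueError(f"command {name!r} does not match {func.__name__!r}")
        _REGISTRY[name] = (arguments, func)
        return func
    return wrapper

@command("stats", Argument(("--year",), {"type": int}))
def stats(args: argparse.Namespace) -> str:
    return f"stats for {args.year}"

def dispatch(argv: list[str]) -> str:
    # Build the parser from the registry, so registration and
    # execution are linked by structure rather than convention.
    parser = argparse.ArgumentParser(prog="poker")
    subparsers = parser.add_subparsers(dest="command", required=True)
    for name, (arguments, func) in _REGISTRY.items():
        sub = subparsers.add_parser(name)
        for arg in arguments:
            sub.add_argument(*arg.flags, **arg.options)
        sub.set_defaults(func=func)
    args = parser.parse_args(argv)
    return args.func(args)
```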
Does this decorator support everything argparse does? Nope, and that’s intentional. It supports exactly what this project needs today. Nothing more.
For me, staying current isn’t primarily about chasing the latest framework. It’s about revising real systems, refining judgment, and being willing to outgrow both old code and old assumptions. That’s where the learning is.