As I think back on the 24 years I've worked in technology, I would like to share with you three observations. These aren't the most profound things you'll read today, but I hope that you find them interesting.
1. 100% is imaginary

Remember back in the '90s, when the progress bar would get really close to 100% and then go backwards? Sometimes it'd get all the way to 100% first. Software projects do that, too. I wish I had never had the experience of working my butt off on a software project, only to deploy it and find that we missed a bunch of spots.
Then there's "100% certain," which is my favorite. I can't tell you how many times I've been 100% certain about something, only to turn around and find I was more than 0% wrong. I'm not sure whether the problem resides in the "100%" or the "certain," but my suspicions lean toward the "100%".
Treating 100% as imaginary, or at best temporary, has been very helpful for me. I work a problem until I can call it "done" rather than "100%". Agile methodologies help here by making you define "done" for whatever it is you're doing.
2. Computing resources are finite

This one hasn't bitten me in a while, but I see it bite people all the time (and it used to bite me plenty): computing resources are always finite. Even if you can expand them quickly, they were finite before the expansion, and they're still finite after it.
It's important to know what a program needs and what constraints it operates under, and then to make sure the program fits inside those constraints.
- The program can run out of memory, even on a machine with 256 GiB of physical memory.
- The program can waste enough time that users no longer care about the result.
- The program can run out of storage space.
- The program can auto-scale to the point where it's unaffordable to run.
- The program can receive so much traffic that it crashes.
If we forget these constraints, we end up with applications that crash, fail in surprising ways, or cost far too much to run.
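One way the memory constraint plays out in practice is file handling. Here's a minimal Python sketch (the function name and the "ERROR" marker are my own, made up for illustration): reading a file all at once grows with the input, while streaming it keeps memory roughly constant.

```python
def count_error_lines(path):
    """Count lines containing "ERROR" without loading the whole file.

    A tempting one-liner like:
        open(path).read().count("ERROR")
    holds the entire file in memory at once, which falls over on inputs
    bigger than the memory we actually have. Streaming stays bounded.
    """
    count = 0
    with open(path) as f:
        for line in f:          # reads one line at a time
            if "ERROR" in line:
                count += 1
    return count
```

Same answer either way on a small file; only the streaming version still works when the file is bigger than your memory budget.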
Whenever we convert data from one format to another, we need to remember that we now have two copies of it. We also need to understand how things like pointers and references work within our stack, so that we stay within our constraints.
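A small Python sketch of the two-copies point (the sizes here are illustrative, not exact): converting or slicing allocates a fresh buffer, while a `memoryview` is just a reference into the original data.

```python
import sys

raw = b"x" * 1_000_000           # ~1 MB of bytes

# Converting to another format makes a second, independent copy:
# the original bytes AND the new str are both alive in memory.
text = raw.decode("ascii")

# Slicing bytes also copies the sliced region...
chunk_copy = raw[:500_000]

# ...while a memoryview references the original buffer without copying.
view = memoryview(raw)[:500_000]

# The copy carries its own half-megabyte; the view is a tiny object
# that merely points into `raw`.
print(sys.getsizeof(chunk_copy))  # large: owns its own 500,000 bytes
print(sys.getsizeof(view))        # small: just the view object itself
```

Which one you want depends on the constraint: a copy survives if the original goes away, a view keeps the original pinned in memory but costs almost nothing.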
3. The amount of time it takes to fix a bug is inversely proportional to the amount of time it takes to find the bug
This is something I think all software workers have experienced from time to time. You burn hours trying to figure out what's causing some whacked-out bug, only to find it was a missing pair of square brackets, or a comma, or a typo in a variable name.
This is by far the most frustrating part of computer work for many of us, and I suspect it's a major deterrent for folks considering the field.
The reality of the situation is that our computers, as much as science fiction would have us believe differently, are not smart. They aren't sentient, and they won't do well at inferring your intent when you're working on your code.
We certainly have a lot of tools (linters, checkers, half-witted type systems, and so on) that help us find bugs sooner, but a lot of what we write can be syntactically correct and still logically wrong.
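For instance, a linter and a type checker will both happily pass this little function (an invented example, not from any real codebase), because the bug lives in the logic, not the syntax:

```python
def average(values: list[float]) -> float:
    """Mean of a non-empty list of numbers."""
    # Syntactically valid, type-checks cleanly... and wrong:
    # dividing by len(values) - 1 silently skews every result.
    return sum(values) / (len(values) - 1)   # BUG: should be len(values)

def average_fixed(values: list[float]) -> float:
    """The version that matches the intent."""
    return sum(values) / len(values)
```

No tool flags the first version, because nothing about it is malformed; only a test, a reviewer, or a confused user comparing the output against intent will catch it.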
We don't have any intent checkers yet, so we'll have to keep doing a good job of reviewing our own work and engaging in robust peer review, both to prevent bugs and to find them more quickly.
Well, that's my old-man rant. Thanks for reading it. Definitely let me know if you have any other fun observations to share.