nhanzel

5 Takeaways from "The Art of Doing Science and Engineering - Learning to Learn" by Richard Hamming

Richard Hamming was a mathematician and computer scientist whose theories and methods are still widely used in computer science today. He is arguably most famous for his creation (or discovery, depending on how you see it) of error-correcting codes, now known as "Hamming codes". Hamming spent most of his career at Bell Labs working with the latest and greatest technology of the time. After his time at Bell Labs and a long career in consulting, he taught a course at the Naval Postgraduate School called "Learning to Learn".

This book is a transcript of the lessons that he taught over that course. I was intrigued by the title; I have always been interested in how to improve my skills at learning and comprehension, especially in the field of computer science. Programmers are always learning (sometimes too much so, getting trapped in "tutorial hell"), and any tips on getting better at learning were welcome.

However, as I was reading the book I took away more than mere learning tips. Here are five takeaways from "Learning to Learn" that I thought were especially intriguing for software engineers.

The client's problem isn't the problem you're trying to solve

This may seem at first glance like a counter-productive idea. Aren't I being paid to make the client what they ask for? Shouldn't I strictly adhere to the project requirements, as those spell out exactly what the client wants?

Yes and no. Yes, you should always listen to the client's wants and needs and ensure you deliver a product that they are happy with. However, don't blindly implement whatever idea the client has in their head. You should act as a filter between the client's idea and what the end product ultimately becomes.

As a programmer, you should be asking yourself "How would my client do this vs how would a computer do this?". We are losing some of the efficiency and power of modern computing if we handicap ourselves by using "human methods" on our technological problems.

Hamming calls this creating an "equivalent product".

"Indeed, one of the major items in the conversion from hand to machine production (Hamming is talking of manufacturing automation in this excerpt, but the point still stands) is the imaginative redesign of an equivalent product... there must be a larger give and take if there is to be a significant success. You must get the essentials of the job in mind and then design the mechanization to do that job rather than trying to mechanize the current version."

In other words, the solution you implement will inherently be a different solution than what your client envisions, and only through this difference can you find real success in your solution. Ultimately the client won't notice the changes you've made because your changes will have only improved how the program runs. That vital translation from "How would my client do this?" to "How would a computer do this?" is what makes a good programmer great.

Data is inherently unreliable

Hamming devotes a whole chapter in the book to "unreliable data", and I found it fascinating how much is taken for granted in the programming world when it comes to data.

Hamming brings up the idea of "accelerated life testing" as a means to confidently predict how long a product will last. For example, when making fiber optic cables that will be in extremely cold temperatures for the duration of their use, testers will target small areas of the cable with extremely cold temperatures. This is then used as an accurate representation of the lifespan of the product. If it can survive -50 degrees Celsius for a day, then it should have no problem with -5 to 20 degrees for the next 20 years or so.

Like Hamming, you may see the inherent problem with this type of testing. But there is no alternative! We can't put a cable out in the real world for twenty years and "see how it holds up". That's just not feasible! So when that cable comes back 5 years later with significant frost damage, we can't be too surprised. There's just no test that can cover every scenario.

This also shows up in unit testing. If a unit test fails, the code it was testing is most likely broken. But who's to say the test itself is bug-free? How would we go about testing that? Even if we wrote a unit test for all our unit tests, that new test could be buggy too.
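As a minimal illustration of this point (my own example, not one from the book), here is a test that "passes" while hiding two bugs at once: the function under test is wrong, and the test itself never exercises it.

```python
def is_even(n):
    return n % 2 == 1  # BUG: this actually checks for odd numbers


def test_is_even():
    cases = []  # BUG in the test: the case list was left empty
    for n in cases:
        # This assertion never executes, so the broken function
        # sails through the test suite unnoticed.
        assert is_even(n * 2)


test_is_even()  # raises nothing: a green checkmark built on two bugs
print("test passed")
```

A run of this "suite" reports success, which is exactly the kind of data we are tempted to take as gospel.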

Too often, programmers take the data in front of them as "gospel". Too many times in my career have I seen a log message or bug output and gone on a wild goose chase, only to realize I was logging the wrong thing, or that the "bug" was expected behavior I simply wasn't reading correctly.

However, even with all the messiness and inherent errors that come with data and the harsh reality that 100% data integrity is impossible, don't let that stop you from utilizing data in your projects. As Hamming says:

"There is never time to do the job right, but there is always time to fix it later"

Keep the system at the forefront of your mind

System design shouldn't be relegated to "step one" of the Agile development process, agreed upon by your team, and then left unchanged until the project is over.

I have been guilty of isolating myself in a corner of a codebase and ignoring the other puzzle pieces. After all, I would say to myself, if our codebase is truly as modular as it should be and as long as I follow the contracts given to me, I should have no issues.

Hamming disagrees. He recalls a time when he and fellow engineers at Bell Labs were sharing computing time on their in-house machine (back when a single computer serving an entire campus was common). He realized that by optimizing the time it took to load and remove program instructions from the machine, he could increase the compute time each engineer got! However, he soon found that changing the computer to allow for faster I/O degraded the rest of the system as a consequence. Hamming recalls:

"My solution's very presence altered the system's response. The optimal strategy for the individual was clearly opposed to the optimal strategy for the whole of the laboratories."

This lesson can even be extended to solo-programming projects, where you are often changing the solution as you are developing it.

Hamming has this to say about systems and solutions in the chapter on "Creativity":

"When stuck I often ask myself, 'If I had a solution, what would it look like?' This tends to sharpen up the approach, and may reveal new ways of looking at the problem you had subconsciously ignored but you now see should not be excluded."

This mantra of "If I had a solution, what would it look like?" has been helpful to me when I find myself getting too bogged down in the specifics of a problem. It helps me avoid getting tunnel vision and allows me to examine the requirements of a problem against my proposed solution.

Good engineers follow the requirements, but great engineers plan on the requirements changing

This is similar to "The client's problem isn't the problem you're trying to solve", in the sense that you shouldn't treat the requirements as your only source of truth for what your program should be. However, this takeaway focuses more on your product's lifespan, and how you interact with it after it is completed.

Hamming's second rule of systems engineering is as follows:

"Part of systems engineering design is to prepare for changes so they can be gracefully made and still not degrade the other parts".

This is sometimes referred to as "extensibility" or "a measure of the ability to extend a system and the level of effort required to implement the extension". Unless you enjoy constantly maintaining old code and reading over bug reports, taking the time to modularize your code, use proper data-scoping, and eliminate as many side-effects as you can will only help you in the future.
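One way to sketch this kind of extensibility (a hypothetical example of mine, not Hamming's) is a parser registry: when the requirements change and a new format appears, you register one new function instead of editing the existing, working ones.

```python
import json

# Registry mapping a format name to its parser function.
PARSERS = {}


def register(fmt):
    """Decorator that adds a parser to the registry."""
    def wrap(fn):
        PARSERS[fmt] = fn
        return fn
    return wrap


@register("json")
def parse_json(text):
    return json.loads(text)


@register("csv")
def parse_csv(text):
    return [line.split(",") for line in text.splitlines()]


def parse(fmt, text):
    # Dispatch through the registry; existing parsers are untouched
    # when a new format is registered later.
    return PARSERS[fmt](text)


print(parse("csv", "a,b\nc,d"))  # [['a', 'b'], ['c', 'd']]
```

Supporting, say, TSV next quarter is one new `@register("tsv")` function, so the change can be "gracefully made" without degrading the other parts.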

Hamming's third rule of systems engineering is in a similar vein:

"The closer you meet specifications, the worse the performance will be when overloaded"

Hamming uses an example of a bridge. The slicker the design to meet the prescribed load, the sooner the collapse of the bridge when the load is exceeded. He describes designing a system that instead of breaking when overloaded, will undergo a "graceful decay".

In the context of programming, organize your code in such a way that if something were to go wrong, instead of instantly blowing up or throwing an error and being done with it, your code can adapt and plan for such eventualities, even if they are unexpected. After all, even the smartest of programmers underestimate just how dumb their users can be.
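A small sketch of "graceful decay" in code (my own hypothetical scenario: an exchange-rate lookup whose backing service is overloaded): rather than propagating the failure, the function degrades to a last-known value.

```python
def fetch_live_rate():
    # Stand-in for a real network call; here we simulate an outage.
    raise TimeoutError("rate service overloaded")


def get_exchange_rate(cached_rate=1.08):
    try:
        return fetch_live_rate()
    except TimeoutError:
        # Graceful decay: serve a stale-but-plausible value instead
        # of crashing the feature that needed the rate.
        return cached_rate


print(get_exchange_rate())  # falls back to the cached value: 1.08
```

The degraded answer is worse than a live one, but like Hamming's bridge, the system bends under overload instead of collapsing.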

Learning has compound growth

With the release of some new framework, tool, library, provider, service, or language every other week in the programming world, it can seem overwhelming to try and keep up. I have definitely felt this fatigue.

When Hamming was teaching the contents of this book at the Naval Postgraduate School, he prefaced his course by saying to the students that his job is to prepare them for the future, not to teach them the discoveries of the past.

Given this mindset, Hamming gives some advice on how to stay current and avoid getting buried in the avalanche of new knowledge.

Hamming talks about a "drunken sailor" who wanders aimlessly, each step independent of the last. If you let this sailor wander, the independent steps will mostly cancel each other out, and the sailor will end up roughly where he started.

However, if you have a goal in mind, even if each step doesn't take you directly towards the finish line of your goal, at least the drunken steps will average out to some forward progress. It may be hard to see at first, but stepping back and seeing where you came from will show that even the simple act of having that finish line to strive for can get you far.
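The sailor's walk can be simulated in a few lines (my own toy model, not from the book): with no bias the steps cancel out, while even a slight pull toward a goal compounds into real distance.

```python
import random


def walk(steps, bias):
    """Random walk on a line; `bias` tilts each step toward the goal."""
    random.seed(42)  # fixed seed so both runs are reproducible
    pos = 0
    for _ in range(steps):
        pos += 1 if random.random() < 0.5 + bias else -1
    return pos


print(walk(10_000, bias=0.0))   # aimless: ends near the start
print(walk(10_000, bias=0.05))  # goal-directed: ends far ahead
```

Ten thousand aimless steps drift only on the order of a hundred positions, while the mildly goal-directed walk covers roughly a thousand: the finish line doesn't straighten each step, it just makes them average out forward.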

Hamming also describes what he calls "knowledge-hooks". You don't have to remember every word you've read, or every tutorial you've completed. By engaging in the act of learning, you are subconsciously planting these knowledge-hooks. Your brain is getting better at recognizing different programming questions and ideas, whether that be subconsciously or consciously. It's similar to exercise. Not every workout is going to make you lose weight, but it's the consistency that yields results. Being around other programmers, reading code, discussing problems, watching YouTube videos on a framework you'll forget the next day, barely understanding a Stack Overflow response, or reading articles like this one are helping your brain "learn how to learn", even if you don't realize it.

Conclusion

I highly recommend reading "Learning to Learn" if you are a programmer, an engineer, or even aspiring to be one. Some chapters can get a bit too technical (I'm looking at you, digital filters), but the core of the book isn't understanding these computer science concepts. They are merely examples that Hamming pulls from to show you how to be a better learner.
