<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris Price</title>
    <description>The latest articles on DEV Community by Chris Price (@cprice404).</description>
    <link>https://dev.to/cprice404</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1478671%2Fce108c74-fa71-4578-be8d-1efb34b81737.png</url>
      <title>DEV Community: Chris Price</title>
      <link>https://dev.to/cprice404</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cprice404"/>
    <language>en</language>
    <item>
      <title>A spooky tale of overprovisioning with Amazon DynamoDB and Redis</title>
      <dc:creator>Chris Price</dc:creator>
      <pubDate>Wed, 08 May 2024 21:50:59 +0000</pubDate>
      <link>https://dev.to/momentohq/a-spooky-tale-of-overprovisioning-with-amazon-dynamodb-and-redis-21nj</link>
      <guid>https://dev.to/momentohq/a-spooky-tale-of-overprovisioning-with-amazon-dynamodb-and-redis-21nj</guid>
      <description>&lt;p&gt;(NOTE: This was originally published in October 2022 🙂👻)&lt;/p&gt;

&lt;p&gt;Halloween is around the corner. Buckle up for a spooky engineering ghost story.&lt;/p&gt;

&lt;p&gt;A few years ago, I worked as a software engineer at a large company building a video streaming service. Our first customer was a major professional sports league that would be using our service to broadcast a livestream of their games once a week to millions of viewers: an opportunity that was both exciting and terrifying!&lt;/p&gt;

&lt;p&gt;When we signed on to the project, our service didn’t actually exist yet. But the league’s broadcast schedule certainly did. 🙂 The launch date was rock solid, and the service had to be able to handle all traffic being sent to us.&lt;/p&gt;

&lt;p&gt;Is this where the scary part of the story begins? Nope! We had a fantastic engineering team and an architecture design we believed in. The schedule was tight, but we were confident we’d be able to hit our launch date. We put our heads down and got to work.&lt;/p&gt;

&lt;p&gt;A few weeks before the first broadcast, we were feeling pretty good. The service was built, sans some finishing touches. The team was in the home stretch of load testing to make sure the service would hold up to the traffic at game time, and everything was business as usual.&lt;/p&gt;

&lt;p&gt;But then…&lt;/p&gt;

&lt;p&gt;We got our first realistic sample data set from our customer, and we integrated it into our load tests. It did not go smoothly. Based on our budget and our estimates for how much data we would need to store, we had configured a maximum read and write capacity for DynamoDB. But during the load test, we found that we were dramatically exceeding that capacity and running into DynamoDB throttles. Our service failed. Hard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Be afraid. Be very afraid.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgza09lqx2b6ksbcuhzd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgza09lqx2b6ksbcuhzd.png" alt="scary bill" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Uh oh. It’s only a few weeks until our first broadcast, and we have a major problem. In our architecture design, there were data we needed to store for each individual viewer watching the broadcast to keep track of where they were in the stream. We had decided to store this data in DynamoDB. After investigating the traffic that the broadcaster was sending us, we discovered the size of the payload for each viewer might be up to 10x larger than our estimates. This required 10x the IOPS on DynamoDB—and 10x the costs!&lt;/p&gt;

&lt;p&gt;Our workload was very write-heavy. Some napkin math based on the observed 10x increase in data made it clear that storing it in Dynamo would put us far over budget. These data were ephemeral, so we decided that we could move them out of DynamoDB and into a cache server. We did some quick research on our options and decided to move forward with a managed Redis solution.&lt;/p&gt;

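&lt;p&gt;To see why the 10x payload mattered so much, here is a rough sketch of the write-capacity arithmetic. The viewer numbers below are hypothetical, but the billing rule is real: a standard DynamoDB write consumes one write capacity unit (WCU) per 1 KB of item size, rounded up.&lt;/p&gt;

```kotlin
import kotlin.math.ceil

// A standard DynamoDB write consumes ceil(itemSizeKb / 1) WCUs.
fun wcusPerWrite(itemSizeKb: Double): Int = ceil(itemSizeKb).toInt()

// Hypothetical load: one position update per viewer per second.
val writesPerSecond = 1_000_000

val budgetedWcus = writesPerSecond * wcusPerWrite(1.0)  // the payload size we estimated
val actualWcus = writesPerSecond * wcusPerWrite(10.0)   // the payload size we observed
```

&lt;p&gt;Provisioned capacity is billed per WCU-hour, so a 10x item size flows straight through to a roughly 10x bill.&lt;/p&gt;
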
&lt;p&gt;Managed Redis services have some nice benefits in that you aren’t explicitly responsible for provisioning and operating the individual nodes in your cache cluster. But you &lt;em&gt;are&lt;/em&gt; explicitly responsible for determining how many nodes you need in your cache cluster, and how big they need to be.&lt;/p&gt;

&lt;p&gt;The next step was to write code to simulate the load that we would put on the Redis cluster, and run it... over and over again. We tested different sizes of nodes. We tested different cluster sizes. We tested different replication configurations. We tested. A lot.&lt;/p&gt;

&lt;p&gt;All this writing of synthetic load tests to size a caching cluster was not work that we had accounted for in our engineering plans. Experimenting with different sizes (and types) of cache nodes, monitoring them to ensure they weren’t overloaded during the test runs… These tasks were expensive and time-consuming—and largely ancillary to the actual business logic of the service we were trying to build. None of them were especially unique to us. But we still had to allocate precious engineering resources to them.&lt;/p&gt;

&lt;p&gt;After a week, we had nailed down the sizing and configuration for our cluster, still racing against the clock. After another week, we had completed the work to migrate that part of our code off of Dynamo onto the Redis cluster.&lt;/p&gt;

&lt;p&gt;And the service was up and running again.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s alive! It’s aliiive!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pgh946f2i1yalr5jm23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pgh946f2i1yalr5jm23.png" alt="cost monster" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We did it! The first broadcast went smoothly. As with any major software project, after observing it in action in the real world, we learned some lessons and found some things to improve, but the viewers had a good viewing experience. We rolled out some of those improvements during the subsequent weeks, and before we knew it, the season was well underway. Victory!&lt;/p&gt;

&lt;p&gt;Until…&lt;/p&gt;

&lt;p&gt;About a month into the season, we got our AWS bill. To say that it caused us a fright would be an understatement. The bill was… HUGE! What the heck happened?!&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s coming from inside the house!
&lt;/h2&gt;

&lt;p&gt;Because of our architecture, we knew that the biggest chunk of our bill was going to come from DynamoDB. But we had done a reasonable job of estimating that cost based on our DDB capacity limits. So why was the AWS bill so high?&lt;/p&gt;

&lt;p&gt;It turns out that the culprit was our Redis clusters. In retrospect, it was predictable, but we had been so busy just trying to make sure that things were operational in time to meet our deadlines that we hadn’t had time to do the math.&lt;/p&gt;

&lt;p&gt;To meet the demands of our peak traffic during the games, we had been forced to create clusters with 90 nodes in them—in every region that we were broadcasting from. Plus, we needed each node to have enough RAM to store all the data we were pumping into them, which required very large instance types.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is this place haunted?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsg5knn46h7k68wrwro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsg5knn46h7k68wrwro.png" alt="haunted cpus" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Very large instance types that provided the amount of RAM we needed happened to also come with high numbers of vCPUs. Redis is a single-threaded application, meaning it can only take advantage of one vCPU on each node in the cluster, leaving the remaining vCPUs almost 100% idle.&lt;/p&gt;

&lt;p&gt;So there we were, paying for boatloads of big 16-vCPU instances, and we were guaranteed each one of them would never be using more than about 6% of the CPU it had available. Believe it or not, this wasn’t even the worst of it.&lt;/p&gt;

&lt;p&gt;The peak traffic we would experience during the sports broadcasts dwarfed the traffic we were handling during any other window of time. So not only were we forced to pay for horsepower that we weren’t even fully utilizing during the games, but we were paying for these Redis clusters 24 hours a day, seven days a week, even though they were effectively at 0% utilization outside of the 3-hour window each week when we were broadcasting the sporting events.&lt;/p&gt;

&lt;p&gt;And then the season ended and we had no more sports broadcasts for 6 months. So now those clusters were sitting at approximately 0% utilization 24/7.&lt;/p&gt;

&lt;p&gt;Okay, fine. Problem identified. All we had to do was fix it and get our cloud bill under control!&lt;/p&gt;

&lt;h2&gt;
  
  
  A horde of zombie… engineers!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1e5j6fkx98090heya8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1e5j6fkx98090heya8q.png" alt="hello world zombies" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, it turns out fixing our spend on our Redis clusters was much easier said than done. The managed Redis service didn’t have any easy, safe way to scale the clusters up and down. And because Redis clients handle key sharding on the client side, they have to be aware of the list of available servers at any given time, meaning that scaling the cluster in or out carries a high risk of impacting cache hit rate during the transition, and thus would need to be managed very carefully.&lt;/p&gt;

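&lt;p&gt;To make the sharding point concrete, here is a minimal sketch of how Redis Cluster clients map keys to servers: each key is hashed with CRC16 (the XMODEM variant) to one of 16384 slots, and each slot belongs to a specific node. This is simplified—real clusters assign contiguous slot ranges to nodes rather than using modulo, and the function names are my own:&lt;/p&gt;

```kotlin
// CRC16-XMODEM (polynomial 0x1021), the checksum Redis Cluster uses for key slots.
fun crc16(data: ByteArray): Int {
    var crc = 0
    for (b in data) {
        crc = crc xor ((b.toInt() and 0xFF) shl 8)
        repeat(8) {
            crc = if ((crc and 0x8000) != 0) ((crc shl 1) xor 0x1021) and 0xFFFF
                  else (crc shl 1) and 0xFFFF
        }
    }
    return crc
}

// Every key lands in one of 16384 slots.
fun slotFor(key: String): Int = crc16(key.toByteArray(Charsets.UTF_8)) % 16384

// Toy slot-to-node mapping. Real clusters assign slot ranges, but either way,
// changing the node count moves slots between nodes.
fun nodeFor(key: String, nodeCount: Int): Int = slotFor(key) % nodeCount
```

&lt;p&gt;Because that mapping lives in every client, shrinking or growing the cluster changes which node owns a key, and any client with a stale view of the topology starts missing the cache.&lt;/p&gt;
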
&lt;p&gt;These were solvable problems. Throw enough engineers at something, and anything is possible, right? They could update all of the code so that it writes to two different clusters during a scaling event and have reads fail over from the new cluster to the old one for cache misses during the transition. Then, they could scale down by adding a second, smaller Redis cluster alongside the giant one needed for peak traffic. They could definitely handle the work of meticulously monitoring the behavior of the new code while the new cluster was brought online, and they could decide when it was safe to begin the teardown of the old cluster. Oh, and they could kick that off and meticulously monitor it to make sure that it went smoothly.&lt;/p&gt;

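&lt;p&gt;That migration dance can be sketched roughly like this, with a toy in-memory class standing in for a real Redis client (the names and shape are illustrative, not our actual code):&lt;/p&gt;

```kotlin
// Toy stand-in for a Redis client: just enough surface area for the sketch.
class CacheClient {
    private val store = HashMap<String, String>()
    fun set(key: String, value: String) { store[key] = value }
    fun get(key: String): String? = store[key]
}

// During a scaling event: write to both clusters, read from the new one,
// and fall back to the old one on a miss so the hit rate survives the cutover.
class MigratingCache(
    private val oldCluster: CacheClient,
    private val newCluster: CacheClient
) {
    fun set(key: String, value: String) {
        oldCluster.set(key, value)
        newCluster.set(key, value)
    }

    fun get(key: String): String? = newCluster.get(key) ?: oldCluster.get(key)
}
```

&lt;p&gt;Once the new cluster is warm, you cut reads over entirely and tear the old one down: simple to describe, nerve-wracking to babysit in production.&lt;/p&gt;
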
&lt;p&gt;So sure, our team was &lt;em&gt;capable&lt;/em&gt; of doing that twice a week: once when we needed to scale up in preparation for the sports broadcast, and again when we needed to scale down to save costs after the event.&lt;/p&gt;

&lt;p&gt;But that would be a ton of work. Now we were forced to do some math on how much we were paying those engineers vs. how much we were paying for the overprovisioned Redis clusters.&lt;/p&gt;

&lt;p&gt;And then there’s the opportunity cost: none of this cluster scaling nonsense had any unique business value for us, and we had a limited number of engineers available to work on delivering features actually unique to our business and providing real customer-facing value to our users.&lt;/p&gt;

&lt;p&gt;I bet you can guess where we landed. Yep. We never reached a point where we felt like we could justify the engineering cost it would take to try to solve this problem when there were so many more valuable customer projects our engineers could be doing—projects that would actually move the business forward and win us new customers.&lt;/p&gt;

&lt;p&gt;So we just kept paying. For something we weren’t using.&lt;/p&gt;

&lt;p&gt;At a certain point, if our business was struggling, we might have been forced to allocate the engineering resources to solving this problem in order to reduce our spending and balance the budget. But this would have been a sign that we were in trouble.&lt;/p&gt;

&lt;p&gt;And I don’t know how you feel about the cloud services your team spends money on, but I consider it pretty scary that a cloud service can make it so complicated for you to get a fair bill—a bill where you are paying a fair amount for what you are actually using, and not paying a ton of money for resources that are sitting idle—that you will only be able to make time for it if you’ve gotten into a desperate situation.&lt;/p&gt;

&lt;p&gt;It’s a great business model for the cloud service provider. Not a great business model for the customer.&lt;/p&gt;

&lt;p&gt;It doesn’t have to be this way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Momento Cache: All treat, no tricks!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8agosxfr4i8g13apqmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8agosxfr4i8g13apqmd.png" alt="momento acorns" width="700" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The horrific tale you’ve just read was a large part of the inspiration for us to build Momento’s serverless caching product. One of the best things about serverless cloud services is the fair pricing model: pay for what you use and nothing more. Why should we settle for less with caching?&lt;/p&gt;

&lt;p&gt;With Momento, you get a dead-simple pricing policy based strictly on how many bytes you send to and receive from your cache. We don’t think it should matter whether those bytes are all transferred within a 3-hour window or evenly distributed over the course of a week or a month. As far as we’re concerned, you should be able to read and write your cache when you need it. That’s it. Plain and simple.&lt;/p&gt;

&lt;p&gt;Of course, serverless doesn’t stop there. We manage all of the tricky stuff on the backend for you. If your traffic increases and your cache needs more capacity, that’s on us. If your traffic decreases, you shouldn’t have to pay the same amount of money for your low-traffic window as you did for your high-traffic window. And you most certainly shouldn’t have to pay for 15 idle CPU cores on a bunch of nodes in a caching cluster just because you needed more RAM.&lt;/p&gt;

&lt;p&gt;So: stop letting cloud services trick you into paying for caching capacity that you aren’t using, and see what a treat it is to work with Momento today! You can create a cache—for free—in less than five minutes. If it takes more than five minutes, let us know, and we’ll send you some Halloween candy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visit our &lt;a href="https://docs.momentohq.com/cache/getting-started" rel="noopener noreferrer"&gt;getting started guide&lt;/a&gt; to try it out, and check out our &lt;a href="https://docs.momentohq.com/docs/pricing" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt; to see how we make sure you get what you pay for.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Happy Halloween! 👻&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>aws</category>
      <category>redis</category>
    </item>
    <item>
      <title>Moving your bugs forward in time</title>
      <dc:creator>Chris Price</dc:creator>
      <pubDate>Wed, 08 May 2024 21:14:50 +0000</pubDate>
      <link>https://dev.to/momentohq/moving-your-bugs-forward-in-time-3eh7</link>
      <guid>https://dev.to/momentohq/moving-your-bugs-forward-in-time-3eh7</guid>
      <description>&lt;p&gt;Over the course of my years as a software engineer, I’ve slowly become more curmudgeonly deliberate about how to structure a codebase, and how to gauge its relative success.&lt;/p&gt;

&lt;p&gt;In my early days I was myopically focused on &lt;strong&gt;what the code can do &lt;em&gt;today&lt;/em&gt;&lt;/strong&gt;. It was all about speed, and cranking out code as fast as possible. Tests were a nice-to-have. “Works on my machine” was a reasonable acceptance criterion. And I’m not sure I even knew the definition of the word “maintainable”.&lt;/p&gt;

&lt;p&gt;Those times were great fun. And I considered myself to be a pretty 1337 coder.&lt;/p&gt;

&lt;p&gt;Then I watched several codebases I’d worked on grind to an eventual halt because no one could understand them, or they were too hard to extend or debug, or they were so fragile that people were afraid to change anything about them lest they introduce a crazy bug that would explode after being deployed to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Old man yells at cloud
&lt;/h2&gt;

&lt;p&gt;Now, I’m older and I’ve worked with a lot of different engineers on a lot of different codebases. Now, I have a very different opinion. Now, “maintainability” is one of the most important words in my vocabulary. Now, I care so much more than I did before about &lt;strong&gt;what the code will be able to do &lt;em&gt;tomorrow&lt;/em&gt;&lt;/strong&gt;. And perhaps most importantly, I care so much more about what &lt;code&gt;$nextEngineer&lt;/code&gt; will be able to make this code do tomorrow than about what I myself might be able to do. &lt;/p&gt;

&lt;p&gt;These are the things that allow your software to survive and thrive beyond the early days. The things that ensure that your business will be able to continue to grow and evolve at the same pace a year from now that it can today, that it won’t get bogged down by an un-maintainable, un-extensible code foundation that drags your engineering team’s velocity down towards zero.&lt;/p&gt;

&lt;p&gt;The skills, experience and foresight that are required to ensure a maintainable codebase, to be a force multiplier that ensures that a breadth of current and future engineers will be able to achieve sustainable, high velocity working on the code—these are the traits that are at the top of my list when evaluating engineering candidates nowadays. It’s not how many lines of code you can produce nor how quickly—it’s how well those lines of code will hold up as the foundation for your business that grows and evolves over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Boiling it down
&lt;/h2&gt;

&lt;p&gt;Occasionally I reflect back on this mindset transition and try to distill my thoughts on maintainability down into something concrete that I can try to communicate to other engineers. And if I had to choose one overarching theme—that I could boil down to a single sentence—that best captures the spirit of what I believe about maintainability these days, it would be this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure your code so that you will catch your bugs at compile time, rather than at run time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move your bugs forward in time. There is no single thing that you can do that will have a more sustained impact on the medium-to-long-term velocity of your team than this.&lt;/p&gt;

&lt;p&gt;For the rest of this post I’ll list off some more tactical examples of things that you can do towards this goal. Savvy readers will note that these are not novel ideas of my own, and in fact a lot of the things on this list are popular core features in modern languages such as &lt;a href="https://kotlinlang.org/" rel="noopener noreferrer"&gt;Kotlin&lt;/a&gt;, &lt;a href="https://www.rust-lang.org/" rel="noopener noreferrer"&gt;Rust&lt;/a&gt;, and &lt;a href="https://clojure.org/" rel="noopener noreferrer"&gt;Clojure&lt;/a&gt;. Kotlin, in particular, has done an amazing job of emphasizing these best practices while still being an extremely practical and approachable language.&lt;/p&gt;

&lt;p&gt;So, credit where it is due: the brilliant language designers of these and other languages deserve all of the kudos for bringing these ideas to the foreground of the software engineering zeitgeist. Today, I’m just here to sing their praises. 🙂&lt;/p&gt;

&lt;p&gt;(Side note: spending some time writing code in a variety of new languages is a really amazing way to broaden your horizons and challenge your beliefs about software engineering best practices. You’ll find that it’s often much easier than you think to apply lessons learned from a foundational feature of one language to another language that doesn’t explicitly provide or emphasize that feature. I haven’t written Clojure in several years now, but I firmly believe that the time I spent writing in that language did more to improve my skills as a software engineer than anything else I’ve ever done.)&lt;/p&gt;

&lt;p&gt;And now, without further ado, let’s get into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 patterns and language features to help catch bugs earlier
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Static types
&lt;/h3&gt;

&lt;p&gt;This one can be a tough pill to swallow for folks who love Python, Ruby, Clojure, and other dynamically typed languages. And I might never convince some of you of this point. But this is a thing that has burned me enough times over the years that I don’t think I’ll ever change my mind on it.&lt;/p&gt;

&lt;p&gt;Part of the allure of dynamically typed languages is that if you don’t have to spend time on all of the ceremony of defining types and declaring them on all of your method signatures, you can code more quickly and spend your time thinking about the business logic instead of the object model. And you can write more flexible, re-usable functions that operate on data rather than operating on types.&lt;/p&gt;

&lt;p&gt;There’s some truth to those arguments, especially in the early prototyping phase of a project. But what I’ve repeatedly seen is that once a codebase in a dynamically typed language grows beyond a certain size, it becomes harder and harder to reason about and maintain. Over and over again I’ve seen cases where a well-meaning developer working on one part of the code passes the wrong object type to a function they didn’t write in another part of the code; when that function call occurs, the app crashes.&lt;/p&gt;

&lt;p&gt;The worst part of this is that the error happens at run time. If you’ve already deployed the code to production without catching this bug, you may have a customer-facing outage on your hands, and now you have to go through a fire drill to roll back the change or push a hotfix. Depending on how bad it was, this may cost you customers. Even in the best-case scenario it’s stressful—and it has a high opportunity cost, as you have to pull some of your engineering team off to fight the fire.&lt;/p&gt;

&lt;p&gt;Some argue that if such a bug makes it through to production, that is a sign that you didn’t add enough test coverage to ensure that the function would only be called with the correct argument types. My response to that is: yes, if you are diligent enough about test coverage, and you don’t make any mistakes in the test code itself, you might be able to avoid shipping most bugs of this classification. But testing is an art form in and of itself, and every one of your engineers must achieve a certain level of proficiency at it in order to clear this bar. And even if they do, we all still make mistakes from time to time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A compiled language with a static type system guarantees that you will avoid shipping this type of bug to production.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No ifs, ands, or buts about it. And it does not rely on the varying experience levels of your engineers: if they write some code that has a bug like this in it, it won’t compile, and it will never even make it into a PR. No matter how good or bad the test coverage is. Let’s offload this category of work to compilers instead of putting it on our engineers!&lt;/p&gt;

&lt;p&gt;I’ve become more and more convinced of this over time, to the point where I won’t even advocate for dynamically typed languages for prototypes anymore. Prototypes very often end up being promoted to products, if for no other reason than the code is already written. But if you’re going to promote a prototype to a product then you really ought to have sufficient test coverage to make sure your product is reliable, and at that point you’ll probably have invested the same amount of engineering effort that you would have put into building the prototype in a statically typed language like Kotlin or Go in the first place.&lt;/p&gt;

&lt;p&gt;I’m not here to tell people which languages they should love. But if you do find yourself writing production code in a dynamically typed language like Python, Ruby, or JavaScript, I would give serious consideration to opting into the type-checking tools that have become available in those ecosystems. In Python, consider requiring type hints and adding &lt;a href="https://mypy-lang.org/" rel="noopener noreferrer"&gt;mypy&lt;/a&gt; checks to your CI to move your type safety bugs forward in time. For JavaScript, consider incrementally shifting to &lt;a href="https://www.typescriptlang.org/" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt;. For Ruby, try out the &lt;a href="https://www.honeybadger.io/blog/ruby-rbs-type-annotation/" rel="noopener noreferrer"&gt;RBS type annotation system&lt;/a&gt; that was added in Ruby 3.0.&lt;/p&gt;

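&lt;p&gt;To make this concrete in Kotlin (the names here are hypothetical, and the same idea applies under mypy or TypeScript), the wrong-argument-type bug described above simply fails to compile:&lt;/p&gt;

```kotlin
// The signature states the contract; callers cannot violate it and still compile.
data class Viewer(val id: String, val streamPositionSeconds: Int)

fun formatPosition(viewer: Viewer): String =
    "${viewer.id} is at ${viewer.streamPositionSeconds}s"

val message = formatPosition(Viewer("viewer-1", 42))  // fine

// formatPosition("viewer-1")   // does not compile: a String is not a Viewer
// Viewer("viewer-1", "42")     // does not compile: a String is not an Int
```
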
&lt;h3&gt;
  
  
  2. Null safety
&lt;/h3&gt;

&lt;p&gt;Now we’ll get into some (hopefully) less controversial territory. You’ve probably heard the line about null references being a billion-dollar mistake. And if you’ve worked in a language that doesn’t provide compile-time null safety, you’ve surely encountered your fair share of silly bugs resulting in null pointer exception crashes at run time, or code that is littered with boilerplate null checks at the beginning of every single function call, or both.&lt;/p&gt;

&lt;p&gt;Thankfully the trend in modern languages is to help us move these bugs forward in time by giving us a way to declare variables that are not allowed to be null. This is a core feature of C#, Kotlin, and TypeScript, among others. In Java, you can use &lt;code&gt;Optional&lt;/code&gt; instead of allowing &lt;code&gt;null&lt;/code&gt;. So we can let the compilers do this work for us.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In general if you find yourself using nullable variables these days, it might be a code smell.&lt;/strong&gt; See if there is a different way you can structure the code to avoid it, and if not, see if your language of choice has any mechanism for compile-time/build-time null safety tooling.&lt;/p&gt;

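&lt;p&gt;In Kotlin, for example, nullability is part of the type, so a forgotten null check is a compile-time error instead of a run-time crash. A small illustrative sketch:&lt;/p&gt;

```kotlin
// A non-nullable parameter: the compiler rejects any attempt to pass null.
fun greet(name: String): String = "Hello, $name"

// A nullable type must be handled explicitly before use.
fun greetIfKnown(name: String?): String =
    if (name != null) greet(name) else "Hello, whoever you are"

// greet(null)   // does not compile: null is not a String
val known = greetIfKnown("Chris")
val unknown = greetIfKnown(null)
```
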
&lt;h3&gt;
  
  
  3. Immutable variables and data structures
&lt;/h3&gt;

&lt;p&gt;This one takes some getting used to and may be hard to believe at first, but consider this: &lt;strong&gt;there are precious few places in your code where you actually need any of your variables or types to be mutable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first time someone told me this was when I was learning Clojure, where it was a matter of necessity because it’s very difficult to even express a mutable object. I found the idea quite implausible. But once I opened my mind to it and got a little practice, I realized that it was true.&lt;/p&gt;

&lt;p&gt;Immutable variables are an incredibly powerful way to improve code maintainability. Here’s why: when you are an engineer working on a code base that you aren’t entirely familiar with, and you encounter a line of code that defines an immutable variable whose type is an immutable data structure, &lt;strong&gt;you know all that you will ever need to know about that variable after reading that one line of code.&lt;/strong&gt; Because it is guaranteed not to be changed anywhere else in the program.&lt;/p&gt;

&lt;p&gt;Contrast this with mutable variables and mutable data structures: after reading a line of code where one of these is instantiated, if I want to reason about the state of that variable 100 lines farther down in that same code, there are a ton of things I have to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Were there any statements in between that might have modified it?&lt;/li&gt;
&lt;li&gt;Was that object passed by reference to any functions that might have mutated it?&lt;/li&gt;
&lt;li&gt;If so, do I need to go examine the source code of all of those functions to make sure I know what the state will be?&lt;/li&gt;
&lt;li&gt;Does my program have more than one thread, and if so, do any other threads have a reference to this object that might have allowed them to mutate it while I was working with it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is so much hidden complexity that comes along with mutable state. If I have to do the amount of reasoning described above for every line of code that deals with a mutable object, and I can instead write the same program in a way that only uses immutable variables and data structures, the reduction in complexity is astounding. And the corresponding increase in engineering velocity and maintainability is as well.&lt;/p&gt;

&lt;p&gt;Many languages have a way to define immutable local variables these days (e.g. Kotlin &lt;code&gt;val&lt;/code&gt;, TypeScript &lt;code&gt;const&lt;/code&gt;). Many also have a way to define immutable data structures (e.g. Kotlin &lt;code&gt;data class&lt;/code&gt;, C# &lt;code&gt;record struct&lt;/code&gt;). Lean into these where you can.&lt;/p&gt;

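&lt;p&gt;For instance, in Kotlin, a &lt;code&gt;val&lt;/code&gt; holding a &lt;code&gt;data class&lt;/code&gt; gives you a value you fully understand from one line of code: “changing” it produces a new object and leaves the original untouched. A hypothetical sketch:&lt;/p&gt;

```kotlin
data class StreamState(val viewerId: String, val positionSeconds: Int)

val state = StreamState("viewer-1", 30)

// copy() returns a brand-new instance; state itself can never change.
val advanced = state.copy(positionSeconds = 45)
```
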
&lt;p&gt;Most engineers I work with are sold on this idea fairly easily, except for when dealing with collections. We are so used to writing loops that build up arrays or maps, it’s really hard to get used to the idea that this can be done without a mutable data structure and without a loop. But it can! Almost all languages these days have some flavor of functional programming tools for operating on collections (map, filter, reduce/fold, etc.). These can take some getting used to but they are well worth the price of admission.&lt;/p&gt;

&lt;p&gt;The reduce / fold operation in particular has a bit of a learning curve, but it is the key to eliminating the need for mutable collections in your code. It will allow you to re-write code that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;pepperNames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;listOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"jalapeno"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"habanero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"serrano"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"poblano"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;pepperNameLengths&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;mutableMapOf&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pepperName&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;pepperNames&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pepperNameLengths&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;pepperName&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pepperName&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;length&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// from here forward we need to be cognizant about the pepperNameLengths map being mutated!‍&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;without the mutable map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;pepperNameLengths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pepperNames&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fold&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;mapOf&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;accumulator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pepperName&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
    &lt;span class="n"&gt;accumulator&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pepperName&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;pepperName&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// no mutable map to worry about here!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
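
&lt;p&gt;Worth noting: for this particular shape of problem, Kotlin’s standard library also has a purpose-built helper, &lt;code&gt;associateWith&lt;/code&gt;, which collapses the fold into a single expression (shown here as an alternative, not as a reason to skip learning fold):&lt;/p&gt;

```kotlin
fun main() {
    val pepperNames = listOf("jalapeno", "habanero", "serrano", "poblano")
    // associateWith builds a read-only map from each element to a derived value.
    val pepperNameLengths = pepperNames.associateWith { it.length }
    println(pepperNameLengths) // {jalapeno=8, habanero=8, serrano=7, poblano=7}
}
```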



&lt;h3&gt;
  
  
  4. Persistent collections (aka immutable collections)
&lt;/h3&gt;

&lt;p&gt;When a coworker originally told me that I should be using immutable collections, my instinct was that this was impractical due to performance concerns and memory usage. If I represent a map as an immutable collection, and then somewhere in my code I need to add or modify a key in it, doesn’t that mean copying the entire data structure in order to obtain the version that contains my modification? Isn’t that crazy expensive?&lt;/p&gt;

&lt;p&gt;Well, it turns out: no. As long as you are using persistent collections.&lt;/p&gt;

&lt;p&gt;I first encountered this concept in Clojure, and I highly recommend &lt;a href="https://youtu.be/toD45DtVCFM?t=1431" rel="noopener noreferrer"&gt;Rich Hickey’s fantastic talk on the topic&lt;/a&gt;. The tl;dr is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A persistent data structure is guaranteed to be immutable, but provides modifier functions (put, add, remove, etc.) that produce another persistent data structure with the same immutability guarantees.&lt;/li&gt;
&lt;li&gt;Under the hood, these data structures are implemented as trees, and when you want to modify a single item, you can do so by creating a new tree that shares almost all of the nodes of the original tree. You only need to copy and replace the small set of nodes in the tree on the path to the item you are modifying. In efficient implementations, this means you almost never need to clone more than about 4 nodes in the tree even if it has millions of nodes. The rest can be shared, which is efficient in terms of both memory usage and performance.&lt;/li&gt;
&lt;/ul&gt;
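
&lt;p&gt;To make the structural-sharing idea concrete, here is a deliberately tiny persistent singly-linked list (a sketch for illustration, not a production implementation): prepending allocates exactly one new node and shares the entire existing list as its tail:&lt;/p&gt;

```kotlin
// A minimal persistent list: "modifying" it never copies existing nodes.
sealed interface PList {
    data class Cons(val head: String, val tail: PList) : PList
    object Nil : PList
}

fun PList.prepend(value: String): PList = PList.Cons(value, this)

fun main() {
    val base = PList.Nil.prepend("serrano").prepend("habanero")
    val extended = base.prepend("jalapeno")
    // Structural sharing: extended's tail IS the original list (same object).
    println((extended as PList.Cons).tail === base) // true
}
```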

&lt;p&gt;Many languages now have “persistent collections” or “immutable collections” libraries (e.g. Java &lt;a href="https://pcollections.org/" rel="noopener noreferrer"&gt;PCollections&lt;/a&gt;, C# &lt;a href="https://learn.microsoft.com/en-us/archive/msdn-magazine/2017/march/net-framework-immutable-collections" rel="noopener noreferrer"&gt;Immutable Collections&lt;/a&gt;, etc.) that do all of the heavy lifting for you. You interact with them just like normal collections, but you get all of the benefits of immutability while still maintaining great performance.&lt;/p&gt;

&lt;p&gt;This concept is amazingly powerful, especially in concurrent programs. &lt;strong&gt;It means that you can pass a reference to a collection around anywhere you like in your program, and many threads can consume it at the same time with no locking or any other concerns about the collection being modified by another thread&lt;/strong&gt;. You’ll be amazed at how much simpler this can make some of your application code! And at how nice it is to stop needing to worry about lock contention.&lt;/p&gt;
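
&lt;p&gt;A small sketch of that thread-safety claim (the pepper data is invented for the example): a read-only map built once can be handed to any number of threads with no synchronization at all, because no code path can ever change it:&lt;/p&gt;

```kotlin
// Built once, never mutated: safe to read from any thread without locks.
val pepperHeat = mapOf("jalapeno" to 5000, "habanero" to 350000)

fun main() {
    val threads = pepperHeat.keys.map { name ->
        Thread { println("$name rates ${pepperHeat.getValue(name)} Scoville units") }
    }
    threads.forEach { it.start() }
    threads.forEach { it.join() }
}
```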

&lt;h3&gt;
  
  
  5. Algebraic data types and exhaustive pattern matching
&lt;/h3&gt;

&lt;p&gt;These go by many names in different languages. In Kotlin they are called &lt;a href="https://kotlinlang.org/docs/sealed-classes.html" rel="noopener noreferrer"&gt;sealed classes&lt;/a&gt;. In many languages this may end up being just a special flavor of polymorphism. I think of them as an enumeration of types, where you know at compile time all of the types in the enumeration, but each type in the enumeration can have its own discrete properties, methods, etc.&lt;/p&gt;

&lt;p&gt;It’s easiest to explain via a specific example. I’ll use an example from the &lt;a href="https://docs.momentohq.com/#learn-about-caching-and-momento-serverless-cache" rel="noopener noreferrer"&gt;Momento Cache API&lt;/a&gt;, since that’s something I’ve been working on a lot lately.&lt;/p&gt;

&lt;p&gt;When you make a call to the &lt;code&gt;get&lt;/code&gt; method on a Momento client object to retrieve a value from your cache, the response may be one of three very different types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cache hit, in which case you will also get back the value that was retrieved from the cache.&lt;/li&gt;
&lt;li&gt;A cache miss, in which case there will be no cache value.&lt;/li&gt;
&lt;li&gt;An error, if something went wrong with the request, in which case you might get an error code and an error message.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without algebraic data types, a common way to try to represent this situation in code might be to provide a &lt;code&gt;GetResponse&lt;/code&gt; object, with a status enum property that could be used to identify whether the response was a &lt;code&gt;HIT&lt;/code&gt;, &lt;code&gt;MISS&lt;/code&gt;, or &lt;code&gt;ERROR&lt;/code&gt;. The object would also need fields to hold the various data that is relevant for each of those cases: e.g. &lt;code&gt;value&lt;/code&gt;, &lt;code&gt;errorCode&lt;/code&gt;, &lt;code&gt;errorMessage&lt;/code&gt;. Those fields would have to be nullable or optional, because they would only be available conditionally, depending on which type of response we got. Something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;GetResponseStatus&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;HIT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;MISS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ERROR&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GetResponseStatus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;errorCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;?,&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not an awful way to define this API, but it has one big drawback: &lt;strong&gt;it is the developer’s responsibility to write code that checks all of the conditions correctly, and if there is a bug in the code it will only surface at run time.&lt;/strong&gt; For example, if you write code that assumes the response was a &lt;code&gt;HIT&lt;/code&gt; without checking, and you try to access the &lt;code&gt;value&lt;/code&gt; property, you will get a null pointer exception at run time if the response was not actually a &lt;code&gt;HIT&lt;/code&gt;. (In the Kotlin code snippet above, because of Kotlin’s null-safety rules, you’d be forced to write some code to deal with the possibility of those values being null, but in other languages that wouldn’t necessarily be the case. The point remains that it is the developer’s responsibility to reason about which of these fields might be null and when.)&lt;/p&gt;

&lt;p&gt;Algebraic data types provide a much nicer way to specify this API, without exposing any nullable fields at all. Here’s how this might look using Kotlin’s &lt;code&gt;sealed&lt;/code&gt; classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Hit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;
    &lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;Miss&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;
    &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;errorCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have a discrete class for each of the three cases, and each of those three classes has only the properties that are relevant to it. And they are no longer nullable.&lt;/p&gt;

&lt;p&gt;A developer would access the appropriate class via pattern matching. In Kotlin, this is done via the &lt;code&gt;when&lt;/code&gt; expression:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;getResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cacheClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"myCacheKey"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;getResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Hit&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Cache hit! ${getResponse.value}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nc"&gt;GetResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Miss&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
        &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Cache miss!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;GetResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error! ${getResponse.errorMessage}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach is really nice because it removes the burden of knowledge from the developer for questions like “in which cases will &lt;code&gt;value&lt;/code&gt; be available?” The &lt;code&gt;value&lt;/code&gt; property only exists on the &lt;code&gt;Hit&lt;/code&gt; class, so we get compile-time enforcement that it can’t be accessed unless the result was a &lt;code&gt;Hit&lt;/code&gt;. We have once again moved our bugs forward in time!&lt;/p&gt;

&lt;p&gt;The other great thing about this approach is that, in languages like Kotlin, the pattern matching expression is &lt;strong&gt;exhaustive&lt;/strong&gt;. This means that the compiler knows whether you have handled all of the possible cases in your &lt;code&gt;when&lt;/code&gt; expression, and will refuse to compile your code if you have not. Imagine a scenario where you have several of these &lt;code&gt;when&lt;/code&gt; expressions scattered around a large code base, and an engineer is working on a new feature that involves adding an additional type of &lt;code&gt;GetResponse&lt;/code&gt; to the sealed class. Without exhaustive pattern matching, the engineer would be responsible for identifying every place in the code that interacts with a &lt;code&gt;GetResponse&lt;/code&gt; and making sure that it appropriately handles the new type of response. Otherwise what do we end up with? A bug that isn’t exposed until run time.&lt;/p&gt;

&lt;p&gt;But with exhaustive pattern matching, once the new type is added, the code won’t compile until we’ve updated all of the places in the code that need to account for it. Win!&lt;/p&gt;
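
&lt;p&gt;One detail worth knowing: Kotlin guarantees this exhaustiveness check whenever &lt;code&gt;when&lt;/code&gt; is used as an expression (and, in recent Kotlin versions, for &lt;code&gt;when&lt;/code&gt; statements over sealed types too). Here is a sketch, re-declaring the sealed interface from above so it stands alone: if a fourth &lt;code&gt;GetResponse&lt;/code&gt; subtype were added, this function would stop compiling until the new case was handled:&lt;/p&gt;

```kotlin
// Re-declaring the article's sealed interface so this sketch is self-contained.
sealed interface GetResponse {
    data class Hit(val value: String) : GetResponse
    object Miss : GetResponse
    data class Error(val errorCode: Int, val errorMessage: String) : GetResponse
}

// Because when is used here as an expression, the compiler requires every
// GetResponse subtype to be covered; adding a new subtype breaks the build
// at this exact spot until it is handled.
fun describe(response: GetResponse): String = when (response) {
    is GetResponse.Hit -> "Cache hit! ${response.value}"
    GetResponse.Miss -> "Cache miss!"
    is GetResponse.Error -> "Error! ${response.errorMessage}"
}

fun main() {
    println(describe(GetResponse.Hit("habanero"))) // Cache hit! habanero
    println(describe(GetResponse.Miss))            // Cache miss!
}
```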

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;The key to building a solid foundation for your software and sustaining high velocity for your engineering team for the life of your product is to make sure your codebase is maintainable. It’s crucially important that future engineers are able to ramp up on the code quickly and safely. Thankfully, trends in modern programming languages are giving us more and more tools to achieve that, and to move entire classes of bugs forward in time from run time to compile time. This also saves us a ton of engineering time that we don’t need to spend writing tests to prove that we haven’t introduced these classes of bugs. (Don’t get me wrong: tests are still very important! But it’s so nice not to have to write tests around the behavior of nullable properties or other such mundane things that aren’t actually related to your business.)&lt;/p&gt;

&lt;p&gt;The strategies above have proved especially valuable for me in the projects that I’ve worked on in recent years. I hope you’ll find them valuable too!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have any other favorite approaches for moving bugs forward in time, I would love to hear about them. Send them my way on &lt;a href="https://twitter.com/cprice404" rel="noopener noreferrer"&gt;Twitter (@cprice404)&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/chris-price-3948716/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;—or join the &lt;a href="https://discord.gg/3HkAKjUZGq" rel="noopener noreferrer"&gt;Momento Discord&lt;/a&gt; and start a discussion!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>leadership</category>
      <category>programming</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
