Probably the most iconic video game company of the 1990s, id Software developed such groundbreaking titles as Wolfenstein 3D, Doom, and Quake. In a fairly recent talk, John Romero, co-founder of id, outlined the programming principles that allowed the company to produce so many quality titles, one after another, in a very short time, with a very small team.
Think of this as a companion to Lampson’s timeless paper on system design. Many of the principles are common and they are all quite relevant today.
Although these programming principles refer to video game development, most (if not all) apply to general software development. In this blog post, I outline my own interpretation of these principles, paired with screenshots from Romero’s own slides for reference. I also make an attempt to generalize each principle beyond game development. Here we go.
This doesn’t necessarily mean you should over-complicate the current version of your product. In fact, this may be one way of describing the iterative development practice: build something that does one thing really well and keep improving upon it constantly. Just keep your quality standards high in each iteration.
In his talk, Romero mentions how they had programmed the code to show an image of a bagel when an error occurred while loading a sprite. By adding good defaults/fallbacks, they kept the game playable. Had they not done this, work would have been blocked until the bug was fixed (i.e. lost productivity hours). You can imagine how important this becomes as an engineering team grows larger. A practical example of this is using defaultProps in ReactJS.
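The same idea can be sketched outside of React too. Here is a minimal TypeScript illustration (the sprite names and the bagel placeholder are hypothetical, in the spirit of Romero's story): a missing asset yields a visible fallback instead of a crash, so work continues and the bad asset is easy to spot.

```typescript
// Hypothetical asset table; in a real engine this would come from disk.
const spriteCache = new Map<string, string>([
  ["player", "player.png"],
  ["torch", "torch.png"],
]);

// Shown whenever a real sprite fails to load -- obvious, but non-fatal.
const FALLBACK_SPRITE = "bagel.png";

function loadSprite(name: string): string {
  // Fall back to a default instead of throwing, so the game stays playable.
  return spriteCache.get(name) ?? FALLBACK_SPRITE;
}
```

A missing sprite is still a bug worth fixing, but nobody on the team is blocked while it gets fixed.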
In his talk, Romero mentioned that he built a level editor called TED. The time he spent building TED paid hefty dividends, since it immensely helped them rapidly ship one game after the other by boosting productivity. Since those days, there has been an explosion of developer tools that have helped boost developer productivity. But if something off-the-shelf doesn’t cut it, try to identify whether an internal tool can help your developers be at their most productive (even if it can take development resources off the main product).
This covers many topics that many of the most effective engineering teams use as best practices: (a) dogfood your product as much as possible; (b) don’t delegate to others (e.g. QA engineers, or worse, customers) to find bugs in your code; (c) write as many tests as possible to accompany your code.
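Point (c) can be as lightweight as keeping a few assertions next to the code they exercise. A toy TypeScript example (the function and its behavior are illustrative, not from the talk):

```typescript
// Clamp a health value into the valid 0..100 range.
function clampHealth(hp: number): number {
  return Math.min(100, Math.max(0, hp));
}

// Tests live right next to the code, so whoever wrote the function
// finds its bugs before QA or customers do.
console.assert(clampHealth(150) === 100, "values above 100 are clamped down");
console.assert(clampHealth(-5) === 0, "negative values are clamped to zero");
console.assert(clampHealth(42) === 42, "in-range values pass through");
```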
We are very strict about this at AgentRisk, a practice we’ve carried over from our previous startups. During our daily stand-ups, we make sure that any new bugs have the highest priority and get fixed ASAP. Obviously, not all bugs are equal, so some business-related judgement is warranted here.
This is one that may mostly apply to game development. In other cases, you may want to go the opposite route when testing during development. For example, your users may be running your application on a mobile device with very inferior specs, or accessing your web application over a high-latency 2G connection. Make sure they don’t get a sucky UX.
This mostly translates to “don’t carry over the limitations of your past code and its implementation into future code”. This is a tricky one and kind of ties with Principle 4. As engineers, we are often tempted to “rewrite everything” from scratch. This is where having experienced engineering leadership is really important. Often, the choice to take the plunge and do a new implementation can pay dividends down the line (similar to building tools). For example, as Slack started scaling, they scrapped their v1.0 implementation entirely and moved to a brand new architecture, reaping the rewards many times over. We had a similar experience moving from Python to Elixir, ending up with a much more robust codebase and much higher developer productivity.
This is really hard. If you have ever built and maintained an API, you know how hard it is to get it right (especially the first time). In his talk, Romero gives the example of encapsulating the functionality of a torch together with its flame and other related objects. Had they needed to move or update all torch instances in a level, a more granular abstraction could have led to, e.g., forgetting to move or update a flame. Spend a lot of time on this and try to get it right the first time. There will be compounding rewards in development and debugging time.
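The torch example can be sketched as follows. This is not id's actual code, just an illustration of the abstraction boundary: the torch owns its flame, so moving the torch can never leave the flame behind.

```typescript
class Flame {
  constructor(public x: number, public y: number) {}
}

class Torch {
  private flame: Flame;

  constructor(public x: number, public y: number) {
    // The flame is created, moved, and destroyed with its torch --
    // no caller can ever update one without the other.
    this.flame = new Flame(x, y);
  }

  moveTo(x: number, y: number): void {
    this.x = x;
    this.y = y;
    this.flame.x = x; // the flame follows automatically
    this.flame.y = y;
  }

  flamePosition(): [number, number] {
    return [this.flame.x, this.flame.y];
  }
}
```

With a more granular design, where level code managed torches and flames as separate objects, every bulk move would be an opportunity to forget one of them.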
Using code review software can help with this. For more complex parts of a product, an upfront architecture review may be warranted. In any case, make sure you promote a culture that values communication and seeking feedback.
There are many ways to dice an onion. Give your coders the creative freedom to come up with their own solution to the problems they’re working on. Just make sure to enforce some coding standards, so that any member of the team can jump into the codebase. Getting caught up on coding aesthetics can waste valuable time, so it’s best to leave this to linters and auto-formatters. This does not mean that e.g. identifying suboptimal implementations in code reviews shouldn’t be encouraged. Just focus on things that are objectively wrong.
I hope this has been useful. Below is a link to the entire talk. I also suggest reading Masters of Doom, which gives a detailed back-story to id Software. Many thanks to Jon V and Diwaker Gupta for their feedback on initial drafts of this.