Alex Bunardzic

Software Complexity: Essential, Accidental and Incidental

Software complexity seems to be getting more and more unmanageable as our field of engineering evolves. In order to gain a correct understanding of how to deal with the ever-mounting complexity, it is important to distinguish three basic types of software complexity: essential, accidental, and incidental.

Essential complexity

This type of software complexity stems from the complexity of the domain we are trying to model. More specifically, we are dealing with the business policy rules that govern business processes. When we try to implement and automate the processing of those policy rules, we encounter various levels of complexity.

This complexity cannot be whittled down, no matter how hard we may try. For example, if the business policy rules that govern the processing of a claim require 15 discrete steps, we cannot simplify the rules by skipping some steps and crunching the processing down to just a few basic steps.

Bottom line: essential complexity is unavoidable, and it is the real reason we are gainfully employed as software engineers.

Accidental complexity

While essential complexity is worth having (it delivers value to customers, brings competitive advantage to the business, and offers exciting career opportunities to engineers), accidental complexity is counter-productive. This type of complexity is caused by a lack of proper understanding, education, and training, and can also be caused by poor communication between the business and software engineering.

Accidental complexity manifests itself in poor architecture, poor design, poor code as well as poor software engineering processes. Because accidental complexity is caused by some or all of the above factors, the only way to get rid of it is by removing its causes.

The most expedient way to remove the causes of accidental complexity is through education, training, mentoring, and coaching. There is always room for improvement, and the more we focus our daily activities on learning, the quicker we will be able to minimize accidental complexity.

Incidental complexity

This type of complexity is the toughest one to deal with. And it definitely is the most counter-productive type of complexity.

When I was at university, I learned that writing software consists of receiving data, manipulating it programmatically, storing it locally, and sending it somewhere. At no point in my education was I expected to learn about load balancers, various data stores (SQL, NoSQL, memcached), containers (Vagrant, Docker, Kubernetes), and other tools and technologies that each provide merely one part of the application. Back then, those infrastructure tools were viewed as mere annoying details.

The underlying cause that brought these annoying details into such prominence lies in the Unix philosophy: “Do only one thing and do it well”. Tools follow an incremental evolutionary path; each tool begins its life as a simple utility meant to scratch an itch. Usually that ‘scratching an itch’ consists of solving a few known problems. From that point on, the tools evolve by improving on their previous capabilities. And naturally, that improvement is never focused on removing complexity.

For example, Git is a better version control tool than Subversion (Git is a later step in the evolution of version control tools), but Subversion is a lot simpler than Git.

How to decrease complexity

The only way to decrease complexity is to remove things. Instead of continuing on the evolutionary path of building better versions of existing stuff, we must eliminate stuff.

For example (a really simplified one), we may currently have two tools in our toolbox. We combine those two tools by bundling them up into our workflow, and that arrangement helps us get the job done.

If we now get rid of these two tools and build one tool that does the two things each previous tool knew how to do, we eliminate incidental complexity. A single tool not only removes the interface between the two, it also reduces the combined surface area of the problems we previously needed two tools to grapple with.
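To make this concrete, here is a minimal sketch in C#. The names (ToolA, ToolB, CombinedTool, RenderReport, ShipReport) are hypothetical and invented purely for illustration: the two separate tools have to agree on an intermediate format, and that hand-off is exactly the interface and extra surface area the combined tool makes disappear.

```csharp
using System;

// Two hypothetical "tools": ToolA renders figures into an intermediate CSV
// string, ToolB parses that string and ships it. The CSV format is the
// interface both tools must understand and keep in sync.
static class ToolA
{
    public static string RenderReport(int[] figures) =>
        string.Join(",", figures);                 // intermediate format
}

static class ToolB
{
    public static void ShipReport(string csv)
    {
        var parsed = csv.Split(',');               // must know ToolA's format
        Console.WriteLine($"Shipping {parsed.Length} figures");
    }
}

// One combined tool: no intermediate format, no interface to maintain.
static class CombinedTool
{
    public static void RenderAndShip(int[] figures) =>
        Console.WriteLine($"Shipping {figures.Length} figures");
}

class Demo
{
    static void Main()
    {
        var figures = new[] { 1, 2, 3 };
        ToolB.ShipReport(ToolA.RenderReport(figures)); // two tools, one interface
        CombinedTool.RenderAndShip(figures);           // one tool, zero interfaces
    }
}
```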

Imagine the gains in simplicity if we were to bring dozens of tools together into a single tool. By doing that, we eliminate a huge pile of incidental complexity, as we no longer need to worry about the interfaces and the overlap between tools.

Presently, the best-of-breed technology that is serious about removing as much incidental complexity as possible is .NET. Its current iteration (C#, Visual Studio, Azure) proposes to eliminate swathes of specialized tools that come with burdensome overhead. Let’s look at some aspects of incidental complexity we’re trying to tame.

Code-as-text complexity

We write code as text, and that poses a high risk of introducing syntax errors. When we write code as text, we ask the compiler to read it, and oftentimes it cannot (the attempt results in compiler errors). Good editors (for example, Visual Studio and Visual Studio Code) propose to speak the programming language, not just assist mechanically in formatting the text.

Good tools (editors) minimize incidental complexity by combining various smaller tools into one big tool. Visual Studio understands the programming language and offers IntelliSense/autocomplete. Access control and collaboration are also baked into the tool. In addition, refactoring is built in, as well as version control, feature flags, and function and type versioning.

Such advanced editors go a long way toward minimizing and even outright eliminating incidental complexity. We are strongly advised to take full advantage of such advanced tools.

Infrastructure complexity

Everything involved in working with the machines that run our code is called computing infrastructure: queues, firewalls, networking, load balancers, scaling, security, monitoring, databases, sharding, and so on. As software engineers focused on delivering value in an uninterrupted stream, we are only interested in working with data, business policy rule processing, and customers. All of the above infrastructure concepts are mere annoying details that do not add any value to the customers. As such, we regard infrastructure as incidental complexity (a necessary evil). Our paying customers couldn’t care less about our queuing, scaling, monitoring, and other policies.

Best-of-breed technologies such as Azure go a long way toward abstracting many of the above infrastructure concerns, relieving us from a lot of incidental complexity.

Deployment complexity

Finished code (i.e. a release candidate) needs to be synced from one machine to another. Conceptually, such an operation should be trivial. In practice, it turns out to be a challenge to do this syncing quickly and safely. Why is that? Let us count the ways:

• Packaging the code (Docker container, tarball, webpack, jars, git checkout…)
• Testing the code (code coverage, mutation testing, browser testing/Selenium)
• Syncing the code (git push, Docker registries, artifact hosting, S3, CDNs)
• Enabling the new code (Kubernetes, reverse proxies, Capistrano, swapping symlinks)
• Rolling out the code (feature flags, LaunchDarkly, quarantine launches, blue-green deploys, DB migrations, API versioning)

Again, Azure is making some strides toward minimizing incidental complexity when deploying code.
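To give a feel for how much of that list a platform can absorb, here is a hedged sketch rather than a production pipeline: it assumes the .NET SDK and the Azure CLI are installed, and the resource group, app name, and slot name are made-up placeholders. It packages the code, syncs the package to a staging slot, and enables it with a blue-green style slot swap.

```csharp
using System;
using System.Diagnostics;
using System.IO.Compression;

class Deploy
{
    // Small helper: run a CLI command and fail loudly on a non-zero exit code.
    static void Run(string file, string args)
    {
        var process = Process.Start(new ProcessStartInfo(file, args));
        process!.WaitForExit();
        if (process.ExitCode != 0)
            throw new Exception($"'{file} {args}' failed");
    }

    static void Main()
    {
        // 1. Package the code
        Run("dotnet", "publish -c Release -o publish");
        ZipFile.CreateFromDirectory("publish", "app.zip");

        // 2. Sync the package to a staging slot (placeholder names)
        Run("az", "webapp deployment source config-zip " +
                  "-g my-rg -n my-app --slot staging --src app.zip");

        // 3. Enable the new code: swap staging into production (blue-green)
        Run("az", "webapp deployment slot swap -g my-rg -n my-app --slot staging");
    }
}
```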

API complexity

Ideally, using an API should not be any harder than calling a function. However, that’s almost never the case. Authentication, rate limiting, retries, errors etc. conspire to make those calls incidentally complex.
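Here is a hedged sketch of that gap in C#; the endpoint URL, the token, and the retry policy are invented placeholders rather than a real API. The local version is a one-liner, while the remote version drags in an authorization header, status-code checks, rate-limit handling, and retries with backoff, all of which is incidental to the question we are actually asking.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CustomerApi
{
    static readonly HttpClient Client = new HttpClient();

    // Conceptually, this is all we want: a plain function call.
    static string GetCustomerLocal(int id) => $"customer-{id}";

    // Incidentally, this is what we end up writing around the same "function".
    static async Task<string> GetCustomerRemote(int id)
    {
        for (var attempt = 1; attempt <= 3; attempt++)
        {
            var request = new HttpRequestMessage(
                HttpMethod.Get, $"https://api.example.com/customers/{id}");
            request.Headers.Authorization =
                new AuthenticationHeaderValue("Bearer", "hypothetical-token");

            var response = await Client.SendAsync(request);
            if (response.IsSuccessStatusCode)
                return await response.Content.ReadAsStringAsync();

            // Rate limited: back off and retry a couple of times.
            if (response.StatusCode == (HttpStatusCode)429 && attempt < 3)
            {
                await Task.Delay(TimeSpan.FromSeconds(attempt));
                continue;
            }

            throw new HttpRequestException(
                $"GET customer {id} failed: {(int)response.StatusCode}");
        }
        throw new HttpRequestException("unreachable");
    }
}
```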

Some ways we’re trying to deal with these challenges: SOAP / REST / HTTP+JSON / gRPC / GraphQL / Protobuf.

This type of incidental complexity remains to be solved. There is some hope that, as platforms like Azure mature, they will offer tooling to minimize the jungle of options we’re facing when it comes to using APIs.

Conclusion

Complexity is the biggest enemy of shipping quality software. One type of complexity (essential complexity) is desirable, as it offers competitive advantage to the business and value to the customers. The other types (accidental and incidental) are not desirable, as they add absolutely no value to the paying customers.

By choosing our technology stack wisely, we can avoid some of the incidental complexity. And by choosing our training/education paths, we can also avoid almost all of the accidental complexity.

Bottom line: the businesses that manage to minimize or remove a lot of accidental and incidental complexity from their operations will remain competitive and victorious in the market.

Latest comments (4)

Arne Babenhauserheide

"The only way to decrease complexity is to remove things" — I disagree with that, because the example is git. Git has a lot of unnecessary complexity — as proven by Mercurial providing the same capabilities with lower complexity in the user interface. A large part of the complexity in git is just sloppy work (i.e. autostash working only on pull but not on checkout) and treating the UI as something to do as afterthought (so implementation details invariably seep through in normal usage).

Luís Soares

awesome article!
maybe it could be interesting to have the original article as a reference/curiosity:
worrydream.com/refs/Brooks-NoSilve...

Maxi Contieri

Loved code as text!

Alex Bunardzic

Tools used for writing code should understand the programming language of choice and should push back, preventing us from making foolish choices (in syntax and also in semantics).