As someone who has been in the industry for nearly as long as the industry has existed, the trajectory of modern web development is concerning. Often when I step back for a moment, “modern” tools seem overly abstract, interdependent and complex. When I say “modern tools” I’m referring to a range of technologies designed to improve development processes: dependency management (NPM, composer, etc), frameworks (react, laravel, etc) and “DevOps” in general (docker, AWS soup, etc). Slot in any trendy solution of the day.
Or to put it another way…
It’s not that there’s necessarily anything wrong with any of these tools. It’s more that I’m concerned that they are hurting our approach to problem solving.
Early this week a Hacker Noon article — Understanding Kafka with Factorio — from a few months ago made its way into my work slack.
This post is a prime example of the way of thinking that concerns me the most. For starters, comparing a web development problem and its solutions to Factorio — one of the most complex and difficult strategy games to date — is very telling in and of itself.
The author starts by setting up a problem that should be familiar to most developers.
Let’s say we have three microservices. One for mining iron ore, one for smelting iron ore into iron plates, and one for producing iron gear wheels from these plates. We can chain these services with synchronous HTTP calls. Whenever our mining drill has new iron ore, it does a POST call on the smelting furnace, which in turn POSTs to the factory.
So, we have 3 services that depend on each other in a linear fashion. If any one of them fails, the entire system breaks. There appears to be zero fault tolerance and that could be bad.
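That cascading failure is easy to see in a sketch. The function names below are purely illustrative stand-ins for the three services; in reality each call would be a synchronous HTTP POST, but the failure mode is the same: any exception anywhere aborts the whole chain.

```python
# Hypothetical sketch of the three chained "services" from the article.
# Each function stands in for a synchronous HTTP call to the next service.

def mine_ore():
    return "iron ore"

def smelt(ore):
    # Imagine this is a synchronous POST to the smelting service.
    if ore is None:
        raise RuntimeError("smelter received nothing")
    return "iron plate"

def make_gear(plate):
    # Another synchronous call; if smelt() raised, we never get here.
    return "iron gear wheel"

def pipeline():
    # One linear synchronous chain: a failure at any step
    # propagates back up and the whole run produces nothing.
    return make_gear(smelt(mine_ore()))

print(pipeline())  # "iron gear wheel" -- but only if every step succeeds
```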
With Kafka, you can store streams of records in a fault-tolerant and durable way. In Kafka terminology, these streams are called topics.
With asynchronous topics between services, messages, or records, are buffered during peak loads, and when there is an outage.
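The buffering idea itself is simple, and you don't need Kafka to see it. The sketch below uses a plain in-memory `queue.Queue` as a stand-in for a topic: the producer keeps writing while the consumer is down, and the consumer drains the backlog in order when it recovers. What Kafka adds on top of this — durability, replication, partitioning — is exactly the complexity under discussion.

```python
import queue

# Minimal stand-in for a "topic": an in-memory buffer that decouples
# producer from consumer. (Kafka persists and replicates its buffers;
# this stdlib queue only illustrates the buffering behaviour.)
topic = queue.Queue()

# The producer keeps writing even while the consumer is "down".
for i in range(5):
    topic.put(f"iron-ore-{i}")

# Later, the recovered consumer drains the backlog in order.
backlog = []
while not topic.empty():
    backlog.append(topic.get())

print(backlog)  # the records written during the outage, in order
```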
But hang on. Now we have four points of failure instead of three! In fact, we’ve now introduced a single point of failure. If the Kafka layer fails the entire system fails.
Why should we trust Kafka more than we trust the 3 microservices we built? Are there ways to make the individual microservices more fault tolerant?
Why not both? The author of this post seems like a solid developer who knows what he’s doing. Perhaps the underlying assumption is that our microservices are already as fault tolerant as they could possibly be and we should add Kafka as an additional layer of fault tolerance.
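For what it's worth, one common way to harden the services themselves — without adding new infrastructure — is to retry failed synchronous calls with exponential backoff. The helper below is a hypothetical sketch, not anything from the article:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    # Hypothetical helper: retry a flaky synchronous call with
    # exponential backoff between attempts, re-raising only after
    # the final attempt fails.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulate a service that fails twice, then succeeds.
calls = {"n": 0}

def flaky_smelter():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("smelter unavailable")
    return "iron plate"

print(call_with_retry(flaky_smelter))  # succeeds on the third attempt
```

Retries don't replace a durable buffer — a long outage still loses requests — but they handle the transient failures that make up much of real-world flakiness.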
We’ve introduced a fourth complex and specialized technology into the stack. Now we need a Kafka specialist on our team…
This post is not meant to be an analysis or critique of Apache Kafka.
It’s meant to provide an example of the way modern web developers tend to solve problems. We tend to build or implement complicated systems that provide an abstraction layer above problems, without adequately addressing the root problem.
I'm concerned that we tend to implement the solution du jour while we hand-wave over the problems of yesteryear.
I’m quite concerned that we’re fostering a generation of web developers who are building houses of cards on top of houses of cards, to solve problems that they don’t fully understand, without properly addressing those problems.