As someone who has been in the industry for nearly as long as the industry has existed, I find the trajectory of modern web development concerning. Often when I step back for a moment, "modern" tools seem overly abstract, interdependent and complex. When I say "modern tools" I'm referring to a range of technologies designed to improve development processes: dependency management (NPM, composer, etc.), frameworks (react, laravel, etc.) and "DevOps" in general (docker, AWS soup, etc.). Slot in any trendy solution of the day.
Or to put it another way…
It's not that there's necessarily anything wrong with any of these tools. It's more that I'm concerned they are hurting our approach to problem solving.
Earlier this week, a Hacker Noon article from a few months ago, "Understanding Kafka with Factorio", made its way into my work Slack.
This post is a prime example of the way of thinking that concerns me the most. For starters, comparing a web development problem and its solutions to Factorio, one of the most complex and difficult RTS games to date, is very telling in and of itself.
The author starts by setting up a problem that should be familiar to most developers.
Let's say we have three microservices. One for mining iron ore, one for smelting iron ore into iron plates, and one for producing iron gear wheels from these plates. We can chain these services with synchronous HTTP calls. Whenever our mining drill has new iron ore, it does a POST call on the smelting furnace, which in turn POSTs to the factory.
So, we have 3 services that depend on each other in a linear fashion. If any one of them fails, the entire system breaks. There appears to be zero fault tolerance and that could be bad.
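The failure mode is easy to demonstrate in miniature. Here is a minimal sketch, with plain Python functions standing in for the three HTTP services (the function names and the simulated outage are invented for illustration; a real system would make POST calls between separate processes):

```python
def mine_ore():
    return "iron ore"

def smelt(ore):
    # Simulate the furnace service being down.
    raise ConnectionError("smelting service unavailable")

def make_gears(plate):
    return "iron gear wheel"

def produce_gear_synchronously():
    """Chain the services with synchronous calls: any one
    failure propagates all the way back to the caller."""
    try:
        ore = mine_ore()
        plate = smelt(ore)       # fails here...
        return make_gears(plate)
    except ConnectionError:
        return None              # ...so the whole pipeline yields nothing

print(produce_gear_synchronously())  # None: one outage broke the chain
```

Every link in the chain is load-bearing: the drill cannot make progress while the furnace is down.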
Enter Kafka…
With Kafka, you can store streams of records in a fault-tolerant and durable way. In Kafka terminology, these streams are called topics.
With asynchronous topics between services, messages, or records, are buffered during peak loads, and when there is an outage.
Neat.
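The buffering idea itself is simple to illustrate. Below is a toy sketch using Python's standard-library `queue.Queue` as a stand-in for a Kafka topic (real Kafka requires a broker and a client library, so this only demonstrates the decoupling behaviour, not Kafka itself):

```python
import queue

# An in-memory buffer standing in for the "iron ore" topic.
ore_topic = queue.Queue()

# Producer: the mining drill keeps publishing to the topic
# even while the furnace service is down.
for _ in range(5):
    ore_topic.put("iron ore")

# Consumer: when the furnace recovers, it drains the backlog.
smelted = []
while not ore_topic.empty():
    ore_topic.get()
    smelted.append("iron plate")

print(len(smelted))  # 5: nothing was lost during the outage
```

Because the producer and consumer no longer call each other directly, an outage on one side becomes a backlog rather than a failure.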
But hang on. Now we have four points of failure instead of three! In fact, we've now introduced a single point of failure: if the Kafka layer fails, the entire system fails.
Why should we trust Kafka more than we trust the 3 microservices we built? Are there ways to make the individual microservices more fault tolerant?
Why not both? The author of this post seems like a solid developer who knows what he's doing. Perhaps the underlying assumption is that our microservices are already as fault tolerant as they could possibly be, and we should add Kafka as an additional layer of fault tolerance.
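For comparison, one conventional way to harden the individual services without any new infrastructure is retrying failed calls with exponential backoff. A minimal sketch (the flaky service and the timings are invented for illustration):

```python
import time

def post_with_retries(call, attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff before giving up."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A stand-in service that fails twice, then succeeds.
state = {"calls": 0}

def flaky_smelter():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("furnace busy")
    return "iron plate"

result = post_with_retries(flaky_smelter)
print(result)  # iron plate (succeeded on the third attempt)
```

Retries only paper over transient failures, of course; they don't buffer work through a long outage the way a durable queue does, which is part of the trade-off being debated here.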
We've introduced a fourth complex and specialized technology into the stack. Now we need a Kafka specialist on our team…
This post is not meant to be an analysis or critique of Apache Kafka.
It's meant to provide an example of the way modern web developers tend to solve problems. We tend to build or implement complicated systems that provide an abstraction layer above problems, without adequately addressing the root problem.
I'm concerned that we tend to implement the solution du jour while we hand-wave over the problems of yesteryear.
I'm quite concerned that we're fostering a generation of web developers who are building houses of cards on top of houses of cards, to solve problems that they don't fully understand, without properly addressing those problems.
The post Is Modern Web Development Too Complex? appeared first on ohryan.ca.