Bruno Noriller

Posted on • Originally published at linkedin.com

Why monoliths are a long-term bad idea

TLDR: It’s not about the technical side, it’s about people!


Monoliths can be as good as anything... on the technical side, but what many people fail to consider is the developer experience.

People will be working with them, not only now, but in the future.

Even if you start small today, in a few years, maybe dozens or more people will have to use it... and well... they won’t like using it.

If you’re a developer... you’re probably nodding right now.


The causes:

1 - Monoliths get too big

What would you say if I showed you a 1500-line file?

You would probably be like: “What the hell?”

Even with proper organization... dozens or hundreds of classes and functions are overwhelming.

Then why is a monolith with dozens or hundreds of files “not a problem”? Even with proper organization, its scope is too big to grasp.

And when you follow the usual patterns frameworks push you toward (this applies to both front end and back end), you end up with a mob of files spread over dozens of folders and subfolders, all related to the one thing you have to touch.
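
To make that concrete, here is a hypothetical sketch (framework-style layout; every folder and file name is invented) of the sprawl a single feature can generate:

```
src/users/
  users.module.js
  users.controller.js
  users.controller.spec.js
  users.service.js
  users.service.spec.js
  users.repository.js
  dto/create-user.dto.js
  dto/update-user.dto.js
  entities/user.entity.js
```

Changing one behavior can mean touching half of these files, and a monolith has dozens of features laid out like this.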


2 - Monoliths get old, fast...

Things get old fast, and in tech, even faster.

Something from 5 years ago is at best outdated, and something from 10 years ago is already ancient.

You might say something like “Java is the Beatles”, but even then it’s not like you’re using Java 7 without frameworks. And if you are... then keep reading.

Take Node, for example: you can go vanilla (and even then you pick a “flavor”: Express, Koa, Fastify...), then there was Adonis 4, Adonis 5, Nest, Apollo... or maybe you forgo a proper back end and use Next as a BFF (or lambdas, or Firebase/Supabase).
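
To show how different even the “vanilla-ish” flavors feel, here is a minimal sketch of the same health-check endpoint in two of them (assuming the `express` and `fastify` packages; the endpoint is invented for illustration):

```js
// Express flavor
const express = require('express');
const app = express();

app.get('/health', (req, res) => res.json({ ok: true }));

app.listen(3000);

// Fastify flavor of the same endpoint (run separately):
// const fastify = require('fastify')({ logger: true });
// fastify.get('/health', async () => ({ ok: true }));
// fastify.listen({ port: 3000 });
```

Same job, different idioms... and that's before Nest or Adonis impose their own structure on top.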

The possibilities are endless, and while some would be harder than others to build a monolith from... rest assured someone is doing it.


3 - Monoliths are slow to change

Let's start with the fact that some are not even possible to change...

Sometimes a framework changes so much from one version to the next that it’s impossible to even think about migration...

Angular 1 to Angular 2+, Adonis v4 to Adonis v5... and what happens? Either you’re still using (and even upgrading and basing your business on) a framework that has lost all official support (yes, some companies are still too invested in AngularJS), or you had to rewrite the whole thing. Maybe you’re still migrating; maybe you haven’t even finished that and are already thinking of rewriting again.


The developers:

Developers, well... they want to play with the shiny new toys more than with “old stuff”.

We barely tolerate having to use code we wrote a few months ago; imagine having to use code other people wrote who knows when.

Why are we still using [insert perfectly good framework] when we could be using the new version or a new framework?

And don’t get developers started on having to choose a language...
(BTW, unless you’re talking about a core business that requires something really specific, or user counts in the six figures... you can probably use the equivalent of WD-40 and duct tape and it will work just fine...)

Finally, developers are a lazy bunch. It’s easier to just write the damn thing again than to try to understand and mess with whatever you may or may not have.


This leads to unhappy developers.

And unhappy developers leave, taking a lot of knowledge with them in hopes of greener fields.

Or maybe they will push for a great rewrite... and while rewriting everything in Rust can be a stupid idea that will take too long, maybe moving a few core things from the old Ruby on Rails to Rust might be just what you need.


But what is the alternative?

Just as you would split the 1500-line file into multiple files (and probably multiple folders), thinking small is probably the better idea.

Finding people to handle the Ruby on Rails monolith might be hard (and expensive), but if the Rails part were just a microservice (emphasis on micro) or something inside a monorepo, then probably even someone like me, who has never touched Ruby, could do something about it. And if worst comes to worst and it starts to give too much trouble, a rewrite will be faster and easier.
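
For a sense of scale, here is a hedged sketch of what “emphasis on micro” could mean; the endpoint, fields, and port are invented for illustration:

```js
// A hypothetical single-purpose quoting service.
// The whole surface area is one endpoint, so even someone new to the
// stack can grasp it, and a rewrite is an afternoon, not a quarter.
const express = require('express');

const app = express();
app.use(express.json());

// POST /quote { "items": [{ "price": 10, "qty": 2 }] } -> { "total": 20 }
app.post('/quote', (req, res) => {
  const items = Array.isArray(req.body.items) ? req.body.items : [];
  const total = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  res.json({ total });
});

app.listen(4000);
```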

And what about Svelte or Remix? Well... for the front end, you have micro front ends... so the old stuff can stay there while new stuff gets built on whatever the new fad is.
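
One common way to wire micro front ends together is webpack 5 Module Federation; this is a sketch under that assumption (the app names, component, and URL are invented), not a prescription:

```js
// webpack.config.js of the old front end: expose one component as a remote.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  // ...the old app's existing webpack configuration...
  plugins: [
    new ModuleFederationPlugin({
      name: 'legacyApp',
      filename: 'remoteEntry.js',
      exposes: { './Orders': './src/Orders' },
    }),
  ],
};

// The new app (in whatever the current fad is) consumes the old component
// without adopting the old stack:
// new ModuleFederationPlugin({
//   name: 'newApp',
//   remotes: { legacyApp: 'legacyApp@https://legacy.example.com/remoteEntry.js' },
// });
```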


As long as you keep the stuff small...

Developers having to maintain stuff will know the scope will always be small; some will even welcome the (possibly) multiple different languages/frameworks/paradigms in use.

Others will look forward to the next “new thing”: when they finish the current project being made in Next, the next one will be made in Remix.


The obvious problems

On the technical side? Yes, many obvious ones.

And I figure it’s a 50/50 chance that DevOps engineers are either thinking about hunting me down or about how fun it would be to automate something like that...


But as a developer, if I join a new team and see a fraction of this freedom... well... what do you think?


Cover Photo by @charlybron on Unsplash

Latest comments (36)

Templar++

I was thinking about this problem, and while I get your point that using old technologies limits the people you can bring to the project (new hires may not know them or may not be willing to use them), I also think you miss a key point here. The most expensive part of a software project is its maintenance. A monolith is easier to maintain, and you need fewer DevOps engineers since you will be maintaining a single architecture. If you have multiple teams maintaining different platforms, each written in its own language, you will need more people to support that as well.

Liviu Lupei

Imagine a world where companies are led by developers.
They would never ship anything, they'd just play with new frameworks, overuse Docker and argue about which programming language is the fastest.

(just a joke)

Bruno Noriller

To be fair... that's how it usually starts.

Let's go with front end language/framework and back end language/framework because that's what I know.

The problem I see is that it stays like that... because that's how it's always been done.

PNS11

A midsized business application is probably something like 2 MLoC, so a cap at 1500 lines per file means you'll in practice have way more than 1500 files in the project.

If you have trouble with 1500 LoC files, you'll be even worse off when there's 1500 files.

But doesn't "microservice" solve this?

Only if you make enough money to hire fifteen times 5-10 people so you could actually split the application into a few hundred application code files per team. Very few companies can, and those that have suffer from complexity in networking, interfaces and organisation instead of some arbitrarily chosen file with many lines in it.

Developers can easily handle big files. Are you sure you are fit to handle recruitment, developer churn, middle manager drama and operational complexity after moving a ten people team to one hundred? Will the application be able to sustain that many salaries?

 
Shai Almog

It can interact with code written for Java 1.0. It lets me write code that ignores generics which is super useful when building general purpose libraries.

Again, no disrespect towards C#. Java is a slow moving language. You consider that a bug I consider it the winning feature. I think both viewpoints have merit.

 
Shai Almog

To each his own. Again, I don't think language features are important. The API is important. The API thrives in a stable environment. C# changed a lot due to reification and other choices that Java mostly avoided.

I really wanted properties in Java. But in retrospect I'm glad we didn't add them. They would have been bad.

Now we have records which are better for one use case. With Valhalla we'll be able to build zero overhead property objects.

Templar++ • Edited

Sorry for the sarcasm, but an article THAT WRONG actually deserves some sarcasm.

1 - Monoliths get too big

Yes, because splitting a monolith into microservices will reduce the codebase? Ah wait - it will actually increase it, and code reuse will be reduced. You will have the same thing in a myriad of loosely coupled projects, and without proper e2e testing you will be lost. Refactoring capabilities are dramatically reduced. My honest advice: unless you actually NEED to split the monolith, don't do it. There are use cases where you have to split your code so it runs on separate servers for security, redundancy and speed, with multiple services communicating with each other - once you grow beyond a certain number of users and data size, you have no choice but to do that or pay for extremely expensive hardware, and we all know all hardware has limits, so at some point you may need to split it. But don't confuse the NEED with bad design. What you propose is overdesigning, and this is bad. BAD!

2 - Monoliths get old, fast...

Your point being? Everything gets old, fast. Are you suggesting we rewrite the entire codebase every 2 years just to keep it current? Or are you just complaining for no reason here?

3 - Monoliths are slow to change

HELL NO! A bigger change in business logic in a microservices architecture may involve redeploying a dozen services along with the front end. With a monolith you create a new version of a single project, not a dozen services. E2E testing will be a nightmare, and backward compatibility gets to a whole different level, as you will have to support the versions of each of these microservices.

This leads to unhappy developers.

Microservices lead to insanity. Change my mind!
The company I work for uses A LOT of microservices because we have no choice, but this isn't because we chose not to build a monolith just to have a fancy codebase; it's because of scalability - a whole different topic, as I explained initially.

But what is the alternative?

You confuse bad code with bad design. A single file with 1500 lines of code may or may not be bad. A single function spanning over 150 lines is definitely bad. This has nothing to do with the entire application.

a rewrite will be faster and easier.

You have no idea what you're talking about. A rewrite is always problematic: you will almost certainly lose hidden functionality and will introduce new bugs. ALWAYS! I've never seen documentation good enough to describe all possible edge cases, or code with extensive unit and e2e tests covering literally every possible situation - this is the reason why even code with 100% coverage still has bugs. We test how we expect the code to work, yet users creatively find new ways to crash it.

As long as you keep the stuff small...

Well, if only we could always write small apps, but alas, stuff tends to evolve. Companies grow; can you suggest the following to your boss: "Hey, this company seems to be growing too big, let's downsize it a bit"? I dare you.

The obvious problems

Yes, there are problems, but instead of one problem you will create 10 new ones. Bigger ones. And since you mentioned DevOps: instead of deploying one single monolith, you will have to deal with deploying a myriad of microservices and testing the integration between them. So you need to hire 10 DevOps engineers for each one you currently have. Have fun.

PS: Sorry, I just checked your LinkedIn profile - 3 years of experience as a software developer explains a lot about your point of view. All I can wish you is that you don't experience the consequences of the architecture you are praising. Microservices are all fun and games until you grow past a certain point; then it turns into a real nightmare.

Bruno Noriller

I understand your point, but mine is not about the technical part, only the human aspect.
Since you've checked my LinkedIn, you might have seen that I come from a business background; I studied and worked with HR and things like that.

If you check the Stack Overflow surveys from the last few years, you see that some languages have started paying more and more, while others, like JS, pay on average less and less.

Why? As Uncle Bob says, every 5 years the number of programmers doubles (I'm not sure how valid this is, but I'll assume it's true).
When people learn (and I'm using myself and people near me as references), we don't go learning C, Perl or Ruby. Many universities are even dropping C/Pascal in favor of Python.
New people will learn the "new" and "hot" languages and frameworks, and they will want to work with those.

AngularJS was released around 12 years ago and there are companies that still haven't let go of it, even as it hit the end of official support. (I know of a low-code platform where its components and everything it offers are based on AngularJS; if you want to design custom components, you even have to make them in AngularJS.)
Angular 2 is another approach entirely, and other frameworks appeared and died in between.

Their application is a "monolith" (even if it isn't really); if, as time progressed, they had embraced the new frameworks, new components and features could have been built on those.

I know it's a nightmare to have multiple frameworks and other crazy stuff instead of one big monolith where you have all the groundwork, utils and everything else you need already done and ready to use.
(And you can mess up in one way or another, so let's just assume equivalent levels of mess.)

But 5 years from now, let's say they continue with AngularJS: their costs will only rise, as the pool of people with knowledge of it will only dwindle. It will also get harder and harder to maintain; after all, by then you probably even have to keep an internal fork.
Either that, or they would have to bear the cost of training people in a framework that's 'dead', ending up with people with lower levels of experience in a framework that will be harder and harder to learn and maintain.
I know people that used to program in COBOL, and they just don't want to work with it anymore. Companies might pay them their weight in gold, but they don't want it.
So either you have people who have to start learning it from whatever source that is or you have to pay a premium to people who are able to work with it.

But now, let's say they had embraced the nightmare of having smaller "services": the AngularJS part would be just one small piece of the whole, and years ago, probably even before its end of life, they would already have migrated it.
Or maybe not even that! Maybe it's consistent enough to still work without problems, so there's not even a need to think about rewriting it.

Hiring people who know the new and hot stuff is easier and cheaper, and the developers themselves probably appreciate being able to use the fresh stuff rather than learning some arcane framework.
People change jobs as often as they change cars, and being familiar with new technologies is something most developers want, because that's the bulk of what positions are after.

And of course, things like micro front ends are one solution to this problem.
It's now easier than ever to have multiple different frameworks in one project.

And before I diverge more, basically my whole point is about the human factor of the equation.

Templar++

What about the human aspect? Complexity always increases as a project evolves. You cannot avoid that. You can only make things worse if your architecture is inadequate.

I checked your LinkedIn profile to get some idea of how much experience you have designing and maintaining software. Your business experience may give you some perspective on what the business needs, but not on how software works, how much effort is required to maintain the codebase, or what the downsides of certain architecture decisions are.

Every project has its lifespan - this is true, so your observation that "we cannot keep using the old COBOL systems indefinitely" is correct. However, it's not because we cannot find competent people to maintain them, nor because new hires don't want to work with old programming languages, but because it's more efficient to use the new tools. As I mentioned before, money is king: if you have to pay a monthly salary to someone to create some functionality using COBOL, while the same can be developed in Python within a few hours, it's obvious what the business will prefer.

Every project reaches a state where maintaining some portions of it (or the entire project) is too expensive, so a redo may be the better option. I'm not talking about design flaws here (although this is a common case) but rather about the business needs outgrowing the original design. For example, if we originally designed a car, we may attach a trailer, and we may upgrade the suspension and engine, but we cannot make a train no matter how much we upgrade the car. You cannot build a freightliner from a car, although both use regular roads.

I think you get my point - if you NEED to, you may redo some portion of the monolith by splitting it into a microservice, and you can do that using whatever language/framework/etc. you want. Just don't do it "because it's fancy and the new hire prefers to use this instead of learning how to patch the existing code".

Nobody likes working on someone else's code. It seems easy to start over from scratch and it all looks clean, but if you go down that path you will soon end up with "Peter's project, Hanna's project, Joshua's project", and everyone will insist on working on their own project and not touching anyone else's. The fun part starts when you run into a problem and need to figure out where it is, and apparently Joshua is on vacation and Peter left the company last year. Trust me - I've been there and done that over the past 30 years (yeah, I've been a software engineer since 1992).

You want to keep the code as unified as possible and split microservices ONLY if you NEED to do that. Not because you want to do it or because "we can do it better". You should do that only if you have no other choice or if maintaining the old code is proven to be really ineffective.

Cody

As someone with about 6 years of software dev experience, I'm curious to know what your opinion is on when a microservice should be used. I've mostly worked on monolithic backends. Are there any resources you've found that are helpful when determining when to use one over the other?

Templar++ • Edited

Here are my 2c:

The best rule to judge if you need them is: Don't unless you don't have any other option (or the costs of not doing so are multiple times higher).

Here are some valid use cases:

  1. Scalability and redundancy - obviously it's easier to have multiple small EC2 instances that perform some computation. Smaller VMs are easy to start on demand. For example, a food delivery company (Grubhub, Takeaway, Ubereats, Doordash, etc.) has a few "rush hours" when customers order, then traffic drops back to near zero. Another good use case is when you have a slow operation (CPU-intensive or communicating with external APIs): it makes sense to run multiple instances of a microservice instead of spinning up the entire monolith.

  2. Partial redo of an old system - when you have a big old system, created over a decade ago, its maintenance usually gets harder and harder over time. It's perfectly ok to rewrite portions of it as microservices in order to decrease maintenance costs, improve performance and implement new functionality.

  3. Memory and cache - if you have some processing that requires a lot of memory or caching (although we have distributed caches like Redis), it makes perfect sense to extract this as a separate microservice. Implementing a custom cache if you need it is ok. Many instances using less memory each are cheaper to run than fewer instances using much more memory. Here the second part of the rule applies - it's cost.

Please note that for all of the examples the main rule applies - you don't have any other option, or it will cost you much more if you keep the monolith.

PS: I'm sure we can come up with more good examples; this is just what came to my mind in the brief time I decided to answer.

Cody

Thanks for the detailed answer! Much appreciated!

Bernd Wechner

I think it might help if you defined what, to your mind, a monolith is.

I admit to some confusion there. You allude to a 1500-line file as a possible monolith. But I see files about that size often enough; they never struck me as "monolithic" (albeit on the large side). I had a Django models file larger than that until recently.

But if large files are your concern, why not talk about large files. It can't be that simple, and I sense you're talking about something broader.

But what are the alternatives? I mean, my large models file got to be a bother to navigate, so I split it into individual files for each model and have instead, say, a dozen files with a hundred or two lines apiece.

But the code hasn't shrunk? It's just structured differently.

Then I work on a C++ project in which almost everything is in one folder. Like 1000 files in one folder. Egads.

But structuring this by grouping conceptually similar files into subfolders is just restructuring and does not reduce size.

So I'm left wondering what you mean by "monolith".

I thought, maybe the Linux kernel, which is often described that way. But that's a very particular context, and again I can't help suspecting you have a broader idea of "monolithic".

But if it's just size, then I am stuck again, because a given job demands a given size. Big, complex jobs grow large code bases. Small simple jobs demand only small code bases.

Perhaps, then, you are alluding to a lack of independent units with interfaces? For example, Python code to achieve a given job is much smaller than VBS code simply because so many packages exist that have, in a sense, already implemented the component jobs.

Anyhow, my point is simply, I'm not sure what you mean by monolithic I guess.

Bruno Noriller

I see your point, but this is probably more a problem with how I wrote it.
The 1500 lines was an example of the difference between a single huge file and multiple smaller ones.

In this case, let's say you have something about people (customers, employees...), and then something about products (prices, stock, transport).
You could have everything in a "single file", but it would make more sense to split it into multiple files and folders.

My point is that even separating it into folders is maybe not enough.

It's not just about organization or the lack of it: the cognitive load of a huge application with everything in one place is different from that of multiple smaller projects, each tackling one of those things.

Shai Almog

I'm not a fan of async/await so I'm good with that. Project loom is coming in JDK 18 as preview and will solve the "problem" of threading in Java. I quoted problem because the approach Java took has its advantages in many use cases.

Resource use can be reduced with GraalVM which removes the overhead of the JIT and Valhalla which removes the overhead of primitive objects. Loom also removes the overhead of threads. None of those are critical though as you can still write very efficient Java code today. This is mostly benchmark optimizations. What matters is scalability and for scale and distributed computing Java is at a different level.

IMHO Java did the smart thing by not chasing language features introduced by other languages too far. A lot of those languages are a mess where compatibility is fragile at the source and binary level. Java has kept the language simple and stable. As a result it's MUCH easier to build on top of it and expect long term stability.

In my current job I need to maintain agents/tooling for Java, Node and Python. All of them are fine platforms. But the breadth and maturity of Java are at a completely different level. The docs, the instrumentation and debugging tools, etc.

I worked with C# quite a bit, but that was years ago, so I have no idea what the status is now. I always felt C# tried to take everything Java did and add on top of that. To me personally, the greatest feature of Java is the stuff it didn't add, so C# was never a favorite.

Sergiy Yevtushenko • Edited

Looks like you're using the word "monolith" for any code you don't like (for whatever reason).

Just keep in mind that "monolith" and "microservices" can be different packaging options for the same application. With this in mind, you'll notice that all the problems you've mentioned don't belong to the "monolith"; they're a matter of project organization. A whole system can't be good or bad depending on how many deployable artifacts you generate.

Another (tightly related) issue: design patterns used in microservices inherently enforce bad practices and make code harder to support and maintain. The circuit breaker and saga patterns, for example, force you to mix business logic with low-level details (connectivity issues in the case of the circuit breaker, detailed transaction/rollback management in the case of the saga pattern).

John Peters

Agreed, they are a bad idea. Reason? Visual Studio Code has 'Go to Definition', which is file agnostic. Too many times in JavaScript code I've seen mixing of concerns. When that happens, scrolling over all of creation and word searches become untenable. Besides, in JavaScript classes are first-class citizens.

Ruby Valappil

Nice Article!

A few things that I would differ on,

Microservices and monoliths should be a choice made based on the project requirements; one need not declare the end of the other.

Microservices are supposed to be fast in dev and deployment, but we have been doing that with monoliths for years. CI/CD existed before DevOps, just like modularity existed before microservices.

Even with microservices, if the service depends on other frameworks like Kafka etc., it gets difficult to upgrade if those frameworks don't support the upgrade.

The biggest advantage I can think of from my experience is convincing the release and business teams. They are much happier when a small chunk is moving to production instead of the entire application; it's easier to get approval and needs fewer discussions across multiple teams.

Bruno Noriller

But you see, the people side should be a factor, and a big one at that, in the project requirements.
Will you have people willing (and happy) to work on the code base (with the language, framework and other quirks as they are today) in 5 years?
Adonis v4 and v5 have completely different developer experiences despite being just a few years apart. And in a few more years, the gap will only get bigger.

Josh Ghent

I sort of disagree with most points.
You can have large files in microservices as well. And 1 service to update is far better than multiple.

Bruno Noriller

I understand what you're saying, but my point is: for how long is updating one better than updating multiple?
Today a new framework that will dominate a great share gets started, tomorrow a new version of another one will be released, and yesterday we were still using something that was released years ago and is becoming harder and harder to maintain, not because it's bad, stale or anything like that... but because people just don't want to learn it anymore.

Bruno Noriller

While I agree that it can be modular and easy to upgrade... what if it's AngularJS? Or the next AngularJS?
It can have an awesome structure, but people will still be like... I have to work with that?
From what I understand, Ruby on Rails developers are harder and harder to find... because it's old. As good as it can be, the bottleneck becomes the developers... more than any technical side.

Guillermo Prandi

I'm all for long source code files (even 1500 lines) if the alternative is splitting the code with such granularity that trying to figure out what ONE function does involves navigating dozens of files. Code review becomes a nightmare. This is rarely the case in front end projects, and less of a nightmare nowadays with good editors like VSCode and such, but depending on the case, long files are justified. For example, device drivers (for microcontrollers, like UART, SPI, etc) tend to be contained within one file.

Bruno Noriller

I partially agree with you, because as with anything... it depends.
Maybe the best way could be a 1500+ line file, but more often than not... 1500+ lines is too much for one single file.

Ben Sinclair

What would you say if I showed you a 1500-line file?

Those are rookie numbers