TLDR: It’s not about the technical side, it’s about people!
Monoliths can be as good as anything... on the technical side, but what some people fail to consider is the Dev Experience.
People will be working with them, not only now, but in the future.
Even if you start small today, in a few years, maybe dozens or more people will have to use it... and well... they won’t like using it.
If you’re a developer... you’re probably nodding right now.
1 - Monoliths get too big
What would you say if I showed you a 1500-line file?
You would probably be like: “What the hell?”
Even with proper organization... dozens or hundreds of classes and functions are overwhelming.
Then why is a monolith with dozens or hundreds of files “not a problem”? Even with proper organization, its scope is too big to grasp.
And when you follow the usual patterns frameworks tell you to follow (this applies to both front and back end), you probably end up with a mob of files spread over dozens of folders and subfolders, all related to the one thing you have to use.
2 - Monoliths get old, fast...
Things get old fast, and in tech, even faster.
Something from 5 years ago can be called at least outdated, and something from 10 years ago is already ancient.
You might say something like “Java is Beatles”, but even then it’s not like you’re using Java 7 without frameworks. And if you are... then keep reading.
Take Node, for example: you can go vanilla (and with that, you have your “flavor”: Express, Koa, Fastify...), then there was Adonis 4, Adonis 5, Nest, Apollo; maybe you forgo a proper back end and use Next as a BFF (or lambdas, or Firebase/Supabase).
The possibilities are endless, and while some would be harder than others to make a monolith from... rest assured, someone is doing it.
3 - Monoliths are slow to change
Let’s start with the fact that some are not even possible to change...
Sometimes a framework changes so much from one version to the next that it’s impossible to even think about migration...
Angular 1 to Angular 2+, Adonis v4 to Adonis v5... and what happens? Either you’re still using (and even upgrading and basing your business on) a framework that lost all official support (yes, some companies are still too invested in AngularJS), or you probably had to rewrite the whole thing. Maybe you’re still migrating; maybe you haven’t even finished that and are already thinking of rewriting again.
Developers, well... they want to play with the shiny new toys more than with “old stuff”.
We barely tolerate having to use code we wrote a few months ago; imagine having to use code other people made who knows when?
Why are we still using [insert perfectly good framework] when we could be using the new version or the new framework?
And don’t get developers started on having to choose a language...
(BTW, unless you’re talking about a core business that requires something really specific, or a user base in the six figures... you can probably use the equivalent of WD-40 and duct tape and it will work just fine...)
Finally, developers are a lazy bunch. It’s easier to just write the damn thing again than to try to understand and mess with whatever you may or may not have.
This leads to unhappy developers.
And unhappy developers leave, taking a lot of knowledge with them in hopes of greener fields.
Or maybe they will push for a great rewrite... and while rewriting everything in Rust can be a stupid idea that will take too long, maybe moving a few core things from the old Ruby on Rails to Rust might be just what you need.
But what is the alternative?
Just as you would split the 1500-line file into multiple files (and probably multiple folders), thinking small is probably the better idea.
Finding people to handle the Ruby on Rails monolith might be hard (and expensive), but if the RoR were just a microservice (emphasis on micro) or something inside a monorepo, then probably even someone like me, who never touched Ruby, might be able to do something about it. And if worse comes to worst and it starts to give too much trouble, a rewrite will be faster and easier.
And what about Svelte or Remix? Well... for the front end, you have micro front ends... so the old stuff can stay there while new stuff can be made on whatever new fad there is.
As long as you keep the stuff small...
Developers maintaining stuff will know that the scope will always be small; some will even look forward to the (possibly) multiple different languages/frameworks/paradigms being used.
Others will look forward to the next “new thing”, since when they finish the current project being made in Next, the next one will be made in Remix.
The obvious problems
On the technical side? Yes, many obvious ones.
And I’m thinking it’s a 50/50 chance that DevOps Engineers are either thinking about hunting me down or about how fun it would be to automate something like that...
But as a developer, if I enter a new team and see a fraction of this freedom... well... what do you think?
Cover Photo by @charlybron on Unsplash
Top comments
I have a different take.
A monolith shouldn't be confused with non-modular code. Even with a single large project, everything can be divided properly and easily. We can onboard a developer easily, and that new person just needs to know their specific area.
OTOH I see so many companies who picked up microservices for the reasons you outlined, struggle. At first it's fun. But things get messy quickly. You need to move a developer between teams and it's tough. Getting a holistic view of the system is almost impossible. You can't move an inch without a dedicated DevOps.
Reproducing an issue with microservices becomes a nightmare. For me and my employer (Lightrun) this is good since our product is perfect for debugging these sort of failures. But as an engineer I feel this is a bad choice.
Don't get me wrong. I like Microservices. When they are the right fit.
But as Martin Fowler said, you should go monolith first. I can't think of a single case where this doesn't make sense.
Finally about Java. I think your view of Java may be out of date. Java 17 is already pretty powerful as a language/API and getting more so. The ability to instrument, IDEs and entire ecosystem is at least a decade ahead of its closest rivals. I've done a lot of work in other ecosystems recently and I have good things to say about them (especially in the getting up and running phase where Java does suck) but when it comes to real world high scale... There's literally no alternative.
The advantage of having a single team aligning around a single language/platform (regardless of the language) is huge. We can move developers around instantly. Do full-stack PRs aimed at vertical features. Review faster and scale easily. Java isn't the Beatles; it's a Mack truck. Heavy, destructive, and you need some skills to drive it. But it gets the job done if you have a good developer at the wheel.
I don't! I wholeheartedly hate them. Yes, I use them a lot when I have to, and over the years I've had to use them for different reasons, but willingly going for a microservices architecture because of the reasons outlined in the article above is just outright stupid. I can only assume the person writing the article never had to deal with microservices in the long run. Only a junior developer can get excited about having to deal with small chunks of a codebase, and this is understandable. Dealing with 1 small chunk, however, is one thing. Dealing with 10 different chunks, each running a different version, is another.
You might have mistaken the Java part... I know Java is a powerful comprehensive language, especially the newer versions... but my example was "Java 7 without frameworks".
But I totally agree with Martin Fowler on the technical side, what he says makes sense and in a rational world it would be the way to go.
But we're dealing with programmers here. Ten smaller "mountains" would seem much easier to conquer than one single big one, even if those 10 would amount to X times the big one.
I believe that, more and more, we should start considering the human part of the equation, not just the technical side of it.
I think you need to revisit Java with someone who really knows what they're doing... Unfortunately, there's a lot of outdated nonsense out there that gives the wrong impression about it. You might have gotten a wrong impression early on.
The human element is actually my concern. It isn't hard to write code or get developers to write code. Maintenance is 95-98% of what I do for a project. I don't think I'm an outlier in that regard. In our industry people change jobs faster than they change socks. We need to write code that won't go down the garbage drain because the person who wrote it got a better offer.
At Lightrun they actually take it to the extreme and force people to work on each other's code. So a person gets an issue assigned that's in an area of the code they're unfamiliar with. This prevents code rot and forces people to work as a team. It's terrible in terms of merges and code reviews, but on the plus side: the code is good. I think that's a bit extreme, but I'm familiar with a company that did microservices "free style". Their product never launched.
This is obviously anecdotal but I think it also makes more sense.
I really don't have a problem with Java.
And you're right, we read and maintain code more than we write it.
But today, people getting into the industry are learning the newer versions and frameworks, as do current developers, so as not to get "outdated".
And then you get into a new team and see it using "Java 7" (just an example). As well maintained as it is, it's not what you've been learning; you're unfamiliar with it, and you probably have to hand-roll stuff that is already core in newer versions.
Not only that, but as time passes, upgrading the whole thing can become difficult, maybe even impossible. And you stick with it while the new LTS moves on to Java 18, Java 19, Java 20...
To maintain a small part made in Java 7 would be a lot easier if new stuff is being made in Java 17. Even with stuff here and there in Java 8, Java 11, Java 15 each with their own frameworks.
If it was built in a SOLID way, it's going to persist longer and be easier to maintain. The developers might see it as a chore to change things in something so "outdated", but it would be a "break" from the normal flow, not the norm.
Meanwhile the new stuff is being made with the shiny new toys that everyone is already learning and enjoying, coming with new tools that might even justify the rewrite of an old service because it can make a real impact there.
I updated COBOL systems and even wrote some COBOL myself on a PDP in my youth ;-)
I agree, systems should be more "fresh". But newer isn't always better, and there's the reality of stability. Imagine the millions of small microservices with outdated languages and toolchains the world forgot. 1-year-old NodeJS projects are already old news and "horrible"; can you imagine a 10-year-old NodeJS project?
Java 7 is just over 10 years old by now.
I'm not a fan of async/await so I'm good with that. Project loom is coming in JDK 18 as preview and will solve the "problem" of threading in Java. I quoted problem because the approach Java took has its advantages in many use cases.
Resource use can be reduced with GraalVM which removes the overhead of the JIT and Valhalla which removes the overhead of primitive objects. Loom also removes the overhead of threads. None of those are critical though as you can still write very efficient Java code today. This is mostly benchmark optimizations. What matters is scalability and for scale and distributed computing Java is at a different level.
IMHO Java did the smart thing by not chasing language features introduced by other languages too far. A lot of those languages are a mess where compatibility is fragile at the source and binary level. Java has kept the language simple and stable. As a result it's MUCH easier to build on top of it and expect long term stability.
In my current job I need to maintain agents/tooling for Java, Node and Python. All of them are fine platforms. But the breadth and maturity of Java are at a completely different level. The docs, the instrumentation and debugging tools, etc.
I worked with C# quite a bit but it's been years ago so I have no idea what's the status for that. I always felt C# tried to take everything Java did and add on top of that. To me personally the greatest feature of Java is the stuff it didn't add, so C# was never a favorite.
I agree with your points in general, but I think it is also important to remember that most microservices of today use more resources than the monoliths of 10 years ago, and something being a monolith does not necessarily mean it cannot be modular and easy to upgrade. It just takes a lot more discipline.
While I agree that it can be modular and easy to upgrade... what if it's AngularJS? Or the next AngularJS?
It can have an awesome structure, but people will still be like... I have to work with that?
From what I understand, RoR developers are harder and harder to find... because it's old. As good as the framework can be, the bottleneck becomes the developers... more than any technical side.
"Guns don't kill people"
Monolith or microservices is more of an architectural question. If you can't plan your monolith, why do you think you'll do better with microservices?
It's about the inner architecture of the software itself. A monolith doesn't imply all those things; it's just a matter of moving the complexity to wherever best fits your needs.
A big monolith will probably imply complexity on the devs' side, with modular pieces built in, ready to split into services if there's a real need. Microservices move this complexity to the infrastructure, so we can take the same argument and say that microservices make the infrastructure guys unhappy (weird, huh?). On the client side, it's paying more to more devs or paying more to cloud providers; not much difference at all.
I've also seen, multiple times, ""micro""services with 1500 lines of code, so no chance there.
Overall, you need to know how to build the software properly, and the reasons behind choosing microservices or a monolith can vary a lot.
Think of Stack Overflow: it's a very big monolith with a lightning-fast CI/CD 🤔
You're totally right, but then again, it's all about the humans.
I do believe, though, that it's not just monolith or microservices; something like a monorepo can have the advantage of more easily sharing code between similar applications, while being able to formally split it into smaller parts so that the humans feel like it's not one big block.
But a monorepo also adds complexity to branch management, plus pipelines that can take much longer than expected, which is not so good when trying to keep a live develop environment... and it can break at any time.
One way to solve this, of course, is having a live env for each branch, which is expensive and adds high complexity to the infrastructure. And even that doesn't solve the issue of your code depending on another piece of code that another dev is building, without adding yet more complexity by linking your branch env to your mate's.
Imagine a world where companies are led by developers.
They would never ship anything, they'd just play with new frameworks, overuse Docker and argue about what programming language is the fastest.
(just a joke)
To be fair... that's how it usually starts.
Let's go with front end language/framework and back end language/framework because that's what I know.
The problem I see is that it stays like that... because that's how it's always been done.
I think it might help if you defined what, to your mind, a monolith is.
I admit to some confusion there. You allude to a 1500-line file as a possible monolith. But I see files about that size often enough; they never struck me as "monolithic" (albeit on the large side). I had a Django models file larger than that until recently.
But if large files are your concern, why not talk about large files? It can't be that simple, and I sense you're talking about something broader.
But what are the alternatives? I mean, my large models file got to be a bother to navigate, so I split it into individual files for each model and now have, say, a dozen files with a hundred or two lines apiece.
But the code hasn't shrunk. It's just structured differently.
Then I work on a C++ project in which almost everything is in one folder. Like 1000 files in one folder. Egads.
But structuring this, by grouping conceptually similar files into subfolders, is just restructuring and does not reduce size.
So I'm left wondering what you mean by "monolith".
I thought, maybe the Linux kernel which is often described that way. But that's a very particular context and again I can't help suspect you have a broader idea of "monolithic".
But if it's just size, then I am stuck again, because a given job demands a given size. Big, complex jobs grow large code bases. Small simple jobs demand only small code bases.
Perhaps then, you are alluding to a lack of independent units with interfaces? For example, Python code to achieve a given job is much smaller than VBS code simply because so many packages exist that have, in a sense, already implemented the component jobs.
Anyhow, my point is simply, I'm not sure what you mean by monolithic I guess.
I see your point, but this is probably more a problem on how I wrote it.
The 1500 lines was an example of the difference between a single huge file and multiple smaller ones.
In this case, let's say you have something about people (customers, employees...), and then something about products (prices, stock, transport).
You could have everything in a "single file", but it would make more sense to split it into multiple files and folders.
My point is that even separating into folders is maybe not enough.
It's not just about organization or the lack of it; the cognitive load of a huge application with everything in one place vs. multiple smaller projects, each one tackling one of those things, is different.
I'm all for long source code files (even 1500 lines) if the alternative is splitting the code with such granularity that trying to figure out what ONE function does involves navigating dozens of files. Code review becomes a nightmare. This is rarely the case in front end projects, and less of a nightmare nowadays with good editors like VSCode and such, but depending on the case, long files are justified. For example, device drivers (for microcontrollers, like UART, SPI, etc) tend to be contained within one file.
I partially agree with you, because as with anything... it depends.
Maybe the best way really is a 1500+ line file, but more often than not... 1500+ lines is too much for one single file.
I was thinking about this problem, and while I get your point that using old technologies limits the people you can bring to the project (as new hires may not know them or may not be willing to use them), I also think you miss a key point here. The most expensive part of a software project is its maintenance. A monolith is easier to maintain, and you need fewer DevOps engineers, as you will be maintaining a single architecture. If you have multiple teams maintaining different platforms, each written in its own language, you will need more people to support that as well.
Looks like you're using the word "monolith" for all code you don't like (for whatever reason).
Just keep in mind that "monolith" and "microservices" can be different packaging options for the same application. With this in mind, you'll notice that all the problems you've mentioned don't belong to the "monolith"; it's a matter of project organization. A whole system can't be good or bad depending on how many deployable artifacts you generated.
Another (tightly related) issue: design patterns used in microservices inherently enforce bad practices, making code harder to support and maintain. The Circuit Breaker and Saga patterns, for example, enforce mixing business logic with low-level details (connectivity issues in the case of the circuit breaker, or detailed transaction/rollback management in the case of the Saga pattern).
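To make the commenter's claim concrete, here's a minimal sketch in Python of a circuit breaker leaking into domain code. All names here (`CircuitBreaker`, `reserve_stock`, `inventory_breaker`) are hypothetical, invented for illustration; they don't come from any particular library:

```python
import time

class CircuitBreakerOpen(Exception):
    """Raised when the breaker refuses a call after repeated failures."""

class CircuitBreaker:
    """Minimal circuit breaker: opens after max_failures consecutive
    failures and rejects calls until reset_after seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitBreakerOpen("downstream service assumed dead")
            # Half-open: the timeout elapsed, let one call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

# Hypothetical domain code: it now has to know about network failure modes.
inventory_breaker = CircuitBreaker()

def reserve_stock(order_id):
    try:
        # Stand-in for a remote call to an inventory service.
        return inventory_breaker.call(lambda: {"order": order_id, "reserved": True})
    except CircuitBreakerOpen:
        # Business logic deciding what to do about a connectivity concern.
        return {"order": order_id, "reserved": False, "retry_later": True}
```

Note how `reserve_stock`, a pure business operation, has to branch on `CircuitBreakerOpen`, a purely infrastructural condition; that entanglement is exactly the maintenance cost being described.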
“Java is Beatles” - LOL!
All valid points. There's a sentiment among some developers that they don't trust serverless because that makes them dependent on the cloud provider whereas if they keep the monorepo they can just start up a server with someone else, change their DNS entries and walk away.
A few things that I would differ on,
Microservices and Monoliths should be a choice made based on the project requirements, one need not declare the end of the other.
Microservices are supposed to be fast in dev and deployment, but we have been doing that with monoliths for years. CI/CD existed before DevOps, just like modularity existed before microservices.
Even with microservices, if the service is dependent on other frameworks like Kafka, etc., it gets difficult to upgrade if those don't support the upgrade.
The biggest advantage I can think of from my experience is convincing the release and business teams. They are much happier when a small chunk is moving to production instead of the entire application; it's easier to get approval and needs fewer discussions across multiple teams.
But you see, the people side should be a factor, and a big one at that, of the project requirements.
Will you have people willing (and happy) to work on the code base (with the language, framework and other quirks as they are today) in 5 years?
Adonis V4 and V5 have completely different developer experiences but just a few years apart. And in a few more, the gap will only get bigger.
I sort of disagree with most points.
You can have large files in microservices as well. And 1 service to update is far better than multiple.
I understand what you're saying, but my point is: for how long is updating one better than updating multiple?
Today, a new framework that will come to dominate a great share gets started; tomorrow, a new version of another one will be released; and yesterday, we were still using something released years ago that is becoming harder and harder to maintain, not because it's bad, stale or anything like that... but because people just don't want to learn it anymore.
Sorry for the sarcasm, but an article THAT WRONG actually deserves some sarcasm.
Yes, because splitting a monolith into microservices will reduce the codebase? Ah wait - it will actually increase it, and code reuse will be reduced. You will have the same thing in a myriad of loosely coupled projects, and without proper e2e testing you will be lost. Refactoring capabilities are dramatically reduced. My honest advice: unless you actually NEED to split the monolith, don't do it. There are use cases where you have to split your code so it runs on separate servers for security, redundancy and speed, with multiple services communicating with each other. Once you grow beyond a certain number of users and data size, you have no choice but to do that or pay for extremely expensive hardware, and we all know every piece of hardware has its limits, so at some point you may need to split. But don't confuse the NEED with bad design. What you propose is overdesigning, and this is bad. BAD!
Your point being? Everything gets old, fast. Are you suggesting to rewrite the entire codebase every 2 years just to keep it current? Or you're just complaining for no reason here?
HELL NO! A bigger change in business logic in a microservices architecture may involve redeployment of a dozen services along with the front-end. With a monolith you will create a new version of a single project, not a dozen services. E2E testing will be a nightmare, and backward compatibility gets to a different level, as you will have to support the versions of each of these microservices.
Microservices lead to insanity. Change my mind!
The company I work for uses A LOT of microservices because we have no choice, but this isn't because we chose not to build a monolith just to have a fancy codebase; it's because of scalability. That's a whole different topic, as I explained initially.
You confuse bad code with bad design. A single file with 1500 lines of code may or may not be bad. A single function spanning over 150 lines is definitely bad. This has nothing to do with the entire application.
You have no idea what you're talking about. A rewrite is always problematic and you will almost certainly lose hidden functionality and will introduce new bugs. ALWAYS! I've never seen documentation good enough to describe all possible edge cases, code with extensive unit and e2e tests that cover literally every single possible situation - this is the reason why even code with 100% coverage still has bugs. We test how we expect the code to work, yet users creatively find new ways to crash it.
Well, it would be nice if we could always write small apps, but alas, stuff tends to evolve. Companies grow. Can you suggest the following to your boss: "Hey, this company seems to be growing too big, let's downsize it a bit"? I dare you.
Yes, there are problems; however, instead of one problem you will create 10 new ones. Bigger ones. And since you mentioned DevOps: instead of deploying one single monolith, you will have to deal with deploying a myriad of microservices and testing the integration between them. So you need to hire 10 DevOps engineers for each one you currently have. Have fun.
PS: Sorry, I just checked your LinkedIn profile - 3 years of experience as a software developer explains a lot about your point of view. All I can wish you is that you never experience the consequences of the architecture you are praising. Microservices are all fun and games until you grow past a certain point; then it turns into a real nightmare.
I understand your point, but mine is not about the technical part, only the human aspect.
Since you've checked my LinkedIn, you might have seen that I come from a business background; I studied and worked with HR and things like that.
If you check the Stack Overflow surveys from the last few years, you'll see that some languages started paying more and more, while others, like JS, pay less and less on average.
Why? As Uncle Bob says, every 5 years the number of programmers doubles (I'm not sure how valid this is, but I'm going to assume it's true).
When people learn (and I'm using myself and people near me as references), we don't go learning C, Perl or Ruby. Many universities are even dropping C/Pascal in favor of Python.
New people will learn the "new" and the "hot" languages and frameworks, and they will want to work with that.
AngularJS was released around 12 years ago, and there are companies that still haven't let go of it, even as it hit the end of official support. (I know of a low-code platform where its components and everything it offers are based on AngularJS; if you want to design custom components, you even have to make them in AngularJS.)
Angular 2 is another approach, and other frameworks appeared and died in between.
Their application is a "monolith" (even if it isn't really); if, as time progressed, they had embraced the new frameworks, new components and features could have been built on them.
I know it's a nightmare to have multiple frameworks and other crazy stuff instead of one big monolith where you have all the groundwork, utils and everything else you need already done and ready to use.
(And you can mess up in one way or another, so let's just assume equivalent levels of mess.)
But, 5 years from now, let's say they continue with AngularJS: their costs will only rise, as the number of people with knowledge of it will only dwindle. It will also keep getting harder and harder to maintain; after all, now you probably even have to maintain an internal fork.
Either that, or they would have to bear the costs of training people in a framework that's 'dead', having people with lower levels of experience in a framework that will be harder and harder to learn and maintain.
I know people that used to program in COBOL, and they just don't want to work with it anymore. Companies might pay their weight in gold, but they don't want it.
So either you have people who have to start learning it from whatever source that is or you have to pay a premium to people who are able to work with it.
But now, let's say they embraced the nightmare that is having smaller "services": the AngularJS part would be just one small part of the whole, and years ago, probably even before its end of life, they would already have migrated it.
Or maybe, not even that! Maybe it's consistent enough to still work without problems, so there's not even a need to think about rewriting it.
Hiring people who know the new and hot stuff is easier and cheaper, and the developers themselves probably appreciate being able to use the fresh stuff rather than learning some arcane framework.
People change jobs as often as they change cars, and being familiar with new technologies is something most developers want because that's the bulk of what the positions are after.
And of course, things like micro front ends are one solution to this problem.
It's now easier than ever to have multiple different frameworks in one project.
And before I diverge more, basically my whole point is about the human factor of the equation.
What about the human aspect? Complexity always increases as project evolves. You cannot avoid that. You can only make things worse if your architecture is inadequate.
I checked your LinkedIn profile to get some idea of how much experience you have in designing and maintaining software. Your business experience may give you some perspective on what the business needs, but not on how software works, how much effort is required to maintain a codebase, or what the downsides of certain architecture decisions are.
Every project has its lifespan - this is true, so your observation that "we cannot keep using the old COBOL systems indefinitely" is correct. However, it's not because we cannot find competent people to maintain them, nor because all new hires don't want to work with old programming languages, but because it's more efficient to use the new tools. As I mentioned before - money is king - if you have to pay a monthly salary to someone to create some functionality using COBOL, while the same can be developed within a few hours using Python, it's obvious what the business will prefer.
Every project reaches a state where maintaining some portions of it (or the entire project) is too expensive, so a re-do may be the better option. I'm not talking about design flaws here (although this is a common case) but rather about the business needs outgrowing the original design. For example, if we originally designed a car, we may attach a trailer, we may upgrade the suspension and engine, but we cannot make a train no matter how much we upgrade the car. You cannot build a freightliner from a car, although both use regular roads.
I think you get my point - if you NEED - you may redo some portion of the monolith by splitting it into a microservice and you can do that using whatever language/framework/etc. you want. Just don't do that "because it's fancy and the new hire prefers to use this instead of learn how to patch the existing code".
Nobody likes working on someone else's code. It seems easy to start over from scratch and it all looks clean, however if you go that path soon you will end with "Peter's project, Hanna's project, Joshua's project" and everyone will insist on working on their own project and not touching someone else's. The fun part starts when you get into a problem and you need to figure out where this is and apparently Joshua is on vacation and Peter left the company last year. Trust me - I've been there and done that over the past 30 years (yeah, I am a software engineer since 1992).
You want to keep the code as unified as possible and split microservices ONLY if you NEED to do that. Not because you want to do it or because "we can do it better". You should do that only if you have no other choice or if maintaining the old code is proven to be really ineffective.
As someone with about 6 years of software dev experience, I'm curious to know what your opinion is on when a microservice should be used. I've mostly worked on monolithic backends. Are there any resources you've found that are helpful when determining when to use one over the other?
Here are my 2 cents:
The best rule to judge if you need them is: don't, unless you have no other option (or the costs of not doing so are multiple times higher).
Here are some valid use cases:
Scalability and redundancy - obviously it's easier to have multiple small EC2 instances that perform some computation. Smaller VMs are easy to start on demand. For example, a food delivery company (Grubhub, Takeaway, Ubereats, Doordash, etc.) has a few "rush hours" when customers order; then traffic gets back to near zero. Another good use case is when you have a slow operation (CPU-intensive, or communicating with external APIs): it makes sense to run multiple instances of a microservice instead of spinning up the entire monolith.
Partial redo of an old system - when you have a big old system created over a decade ago, its maintenance usually gets harder and harder over time. It's perfectly ok to rewrite portions of it as microservices in order to decrease maintenance costs, improve performance and implement new functionality.
Memory and cache - if you have some processing that requires a lot of memory or caching (although we have distributed caches like Redis), it makes perfect sense to extract it as a separate microservice. Implementing a custom cache if you need it is ok. Multiple instances using less memory are cheaper to create than somewhat fewer instances using much more memory. Here the second part of the rule applies - cost.
Please note that for all of these examples the main rule applies - you have no other option, or it will cost you much more if you keep the monolith.
PS: I'm sure we can come up with more good examples; this is just what came to my mind in the brief time I decided to answer.
Thanks for the detailed answer! Much appreciated!