In the world of software architecture, microservices are often heralded as the go-to solution for scalability and flexibility.
However, my experience with a microservices project at my first job revealed a different story.
This particularly challenging project highlighted the pitfalls of this architectural pattern, especially when implemented without proper consideration.
It became clear that microservices, while powerful, can introduce significant complexity, overhead, and potential for disarray, particularly in handling inter-service communication and data consistency.
These real-world challenges have led me to question the widespread push towards microservices, especially for projects where simpler, more unified approaches could be more effective.
The project
I worked on a new mobile application that was built from scratch.
It started as a proof of concept which demonstrated the basic functionality of the project to the client, and was ultimately used as the base of the first deployed version.
Microservices were used within the backend of the application.
As new features were added or changed, new services were added.
There was no common base they worked from; whenever someone thought a new service should be added, it was added, and that was it.
By the end of the project, there were over 25 individual services that could have been consolidated into 4 or 5 (such as ‘authentication’, ‘users’ and ‘discoverability’).
A summary of my experience
When I first joined the project, there was a lot to learn.
Since I was new to microservices, I had a fairly neutral stance towards them.
Yet, as I kept working on the project over the next 6 months up until deployment, many issues arose.
One of these issues was dealing with authentication.
Given that we were roughly 3 months away from our expected launch date, we did not have any kind of authentication within the app (to validate requests once logged in).
I spoke to the tech lead, and he believed that having a user’s ID (a generated UUID) was enough to authenticate a request (which, by the way, was never validated on the server beyond being a well-formed UUID).
I argued against this given that security is a very important thing to get right, especially when user data can easily be obtained from requests.
I was then told that other work was more important and that relying purely on a UUID was “enough”.
Later down the track, a third-party security audit rated this decision at a severity of around 9.9/10 (meaning: resolve this immediately).
The tech lead then had to sacrifice many nights of sleep to implement a JWT authentication package that we could add in to our existing services.
Now, given that there was no common base across these services, the authentication package needed to be installed and set up in every one of them… one by one.
The package also made development more complex, because the JWT now had to be passed to a service on every request.
And since we had services that called other services, the token had to be passed along the whole chain.
At this point it was also next to impossible to run services locally for development, given that each one relied on other services being up.
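The token-forwarding problem is easy to sketch. In this hypothetical example (plain Python functions stand in for HTTP services, and an HMAC-signed string stands in for a real JWT; `SECRET` and the service names are invented for illustration), every service-to-service call has to re-attach the caller's token, which is exactly the overhead described above:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret"  # hypothetical key, shared by every service


def sign(payload: dict) -> str:
    """Create a minimal HMAC-signed token (a stand-in for a real JWT)."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig


def verify(token: str) -> dict:
    """Check the signature and return the claims, or raise."""
    body, sig = token.split(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))


def users_service(headers: dict) -> dict:
    # Every service has to install and run the same verification logic.
    claims = verify(headers["Authorization"].removeprefix("Bearer "))
    return {"user": claims["sub"]}


def orders_service(headers: dict) -> dict:
    # A service calling another service must forward the caller's token.
    user = users_service({"Authorization": headers["Authorization"]})
    return {"orders_for": user["user"]}
```

With real HTTP services the same pattern applies: each downstream request must copy the `Authorization` header along, and forgetting to do so at any hop breaks the whole chain.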
Another pain point of microservices is dealing with a common database. Where should the source of truth be stored?
Well, in this project, we relied on a single database that was deployed and modified manually.
There were no database migrations, let alone any foreign keys.
Any changes that needed to be made to the schema were made by hand.
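For contrast, versioned migrations don't take much code at all. Here is a minimal sketch of the idea (the table names and the `schema_migrations` bookkeeping table are invented for illustration), using SQLite from the standard library: each migration runs exactly once, and re-running the migrator is a no-op:

```python
import sqlite3

# Ordered, append-only list of schema changes. New changes get a new entry;
# old entries are never edited by hand.
MIGRATIONS = [
    ("001_create_users",
     "CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT NOT NULL)"),
    ("002_create_posts",
     "CREATE TABLE posts (id TEXT PRIMARY KEY, "
     "user_id TEXT NOT NULL REFERENCES users(id))"),
]


def migrate(conn: sqlite3.Connection) -> None:
    """Apply any migrations that have not been recorded yet."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (name,))
    conn.commit()


conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: the second run applies nothing new
```

Real tools (Flyway, Alembic, Rails migrations, and so on) add rollbacks and checksums on top, but the core bookkeeping is this small, which is what makes "we just changed the database by hand" so hard to justify.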
In my experience, microservices often lead to a fragmented codebase.
Each service, ideally isolated in functionality, ends up duplicating common code, making maintenance and updates a nightmare.
The overhead of managing these numerous, interdependent services isn't just technical; it requires a significant investment in both infrastructure and specialised skills.
The project I worked on drove this point home: we maintained three separate deployment environments (development, staging and production) for every single service.
An alternative solution
Throughout development of the project I had questioned the tech lead as to why he chose to go with a microservices backend instead of a ‘monolithic’ approach.
I never got a proper answer and was told to “do my own research” into microservices (hence why this post has been written).
It's crucial to recognise that 'monolithic' isn't synonymous with 'outdated' or 'inefficient'.
Monolithic architectures, especially for smaller projects, offer a level of simplicity and coherence that microservices struggle to match.
Development and testing are streamlined, as everything resides within a single, unified codebase.
This ultimately leads to a simpler development environment and deployment infrastructure.
Unfortunately, the tech lead did not agree with me on my reasoning.
Microservices are often touted for their scalability.
However, this benefit is irrelevant for many applications, particularly those in their infancy.
Small teams can find the distributed nature of microservices overwhelming, detracting from the core functionality of the product.
When Monoliths Triumph
There are numerous examples of successful applications that have thrived with a monolithic architecture.
These examples showcase that starting simple, with a focus on core functionality, can often be the key to a product's success.
Moreover, transitioning to microservices is always an option when the time is right.
I believe that a monolithic architecture would have alleviated many of the issues in developing this application, as well as sped up the rate of development.
Things such as authentication would have been easy to manage; database migrations would have been trivial to integrate; and adding new “services” or “modules” would have been as simple as adding a new route with a few functions for what needed to be done.
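To make the "new module is just a new route" point concrete, here is a minimal sketch of a modular monolith (the routes, handlers and a tiny router are all invented for illustration; a real app would use a framework's routing): every feature lives in the same codebase and process, so adding one is a function, not a deployment:

```python
# A toy router: one registry for the whole application.
ROUTES = {}


def route(path: str):
    """Decorator that registers a handler for a path."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register


# --- "users" module: just functions in the shared codebase ---
@route("/users/me")
def current_user() -> dict:
    return {"id": "u-1", "name": "alice"}


# --- adding a new "service" is adding another decorated function ---
@route("/discover")
def discover() -> dict:
    return {"results": ["post-1", "post-2"]}


def handle(path: str) -> dict:
    """Dispatch a request path to its handler, with a 404 fallback."""
    handler = ROUTES.get(path)
    return handler() if handler else {"error": 404}
```

Authentication, logging and database access would be shared middleware in the same process, installed once rather than per service.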
Ultimately, implementing microservices from the get-go can be akin to premature optimisation.
It's like building a highway when a simple road would suffice.
The key is to assess your project's current needs and scale your architecture accordingly.
This project was not even available to the public yet, so why did it need dozens of individual services?
Concluding Thoughts
Microservices, while beneficial in certain contexts, are not a one-size-fits-all solution.
They come with their own set of complexities and challenges that can hinder rather than help, especially in the early stages of a project.
A monolithic approach, on the other hand, offers simplicity, ease of development, and a unified codebase, making it an ideal choice for many projects.
As my experience suggests, sometimes, the best approach is to keep it simple.
To summarise, I learnt a lot from this project: what to do, and definitely what NOT to do, when building a backend.
I highly recommend building a backend as a monolith to begin with, given that it is the easiest to get going and the quickest to bring to market.
Only once you reach scale and run into genuine scaling issues should you consider moving to microservices.
Top comments (7)
Your tech lead seems to be the incompetent one, not the nature of microservices.
Regarding security: we don't install JWT validation in every microservice. Instead, the gateway microservice forwards validation to the security microservice, and then all other microservices simply receive verified identity data through headers injected by the gateway. Installing the validation package in every single service, as described in the post, defeats that separation.
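That gateway pattern can be sketched briefly. In this hypothetical example (plain functions stand in for the gateway and a downstream service, and an HMAC-signed string stands in for a real JWT; `SECRET`, `X-User-Id` and the service names are invented), validation happens once at the edge, and inner services only read the injected header:

```python
import hashlib
import hmac

SECRET = b"gateway-secret"  # hypothetical key known only at the edge


def make_token(user_id: str) -> str:
    """Issue a minimal signed token (a stand-in for a real JWT)."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"


def verify_token(token: str) -> str:
    """Validate the signature and return the verified user id."""
    user_id, sig = token.split(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid token")
    return user_id


def gateway(request_headers: dict, downstream) -> dict:
    # Validate exactly once at the perimeter...
    user_id = verify_token(request_headers["Authorization"].removeprefix("Bearer "))
    # ...then inject the verified identity; inner services never parse tokens.
    return downstream({"X-User-Id": user_id})


def profile_service(headers: dict) -> dict:
    # Trusts the gateway-injected header; no auth code of its own.
    return {"profile_for": headers["X-User-Id"]}
```

The inner services stay auth-free, which is precisely what was lost when the package had to be installed in all 25+ services one by one.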
Common code duplication is then tackled with packages: NuGet for .NET, NPM for Node.js, etc. Generally speaking, you put the architecture-related code in a package or packages to ease maintenance; there should be no need for business-related code packages.
The databases part is crucial: Each microservice should have its own database. Sure, you could do one database. Hell, you could use SQL Server's schemas to simulate "different" databases and assign one schema to each microservice. Still, the ideal is different databases.
Source of truth? Intel coined the term "Record Of Origin" for the sources of truth. The ROO microservices are the owners of the truth. My team is developing the Data Replay functionality on ROO's so this truth can get to any microservice that needs it.
Yes, keeping it simple is usually the best approach, but the real best approach is: do proper research. Your tech lead was clearly ignorant of many aspects, which is why he was doomed to fail. Being a tech lead requires many hours of investigation, proofs of concept, laying out workflows, and making sure all requirements are met on paper before even starting.
Totally agree.
You can also use a service mesh and mTLS to let the internal services communicate securely, depending on the infrastructure.
I believe this is the single most overlooked step in most projects, especially fast paced moving teams. Also sanity checking research from other team members is valuable, sometimes people don't research well based on biases or lack of process as you mention above.
If you want to get your feet wet with different architecture designs, you can check out this YouTube channel: youtube.com/channel/UCZgt6AzoyjslH...
I have no connection to this person, but their content is very informative.
In IT, I often feel like we're kids. The first generation builds a monolith, packing everything inside until complexity and scalability become an issue. Then the next generation wants to "solve" that by splitting it into a myriad of microservices. These are simple on their own but introduce plenty of challenges of their own: interaction, code re-use, debuggability, race conditions, orchestration and testing, just to name a few. From a bird's-eye perspective, it just shifts the complexity from one place to another: from one bigger entity to lots of smaller microservices and their interactions. In all of that, the golden rule of the grandfathers was forgotten: "low coupling, high cohesion". I hope seniors will notice that there is a meaningful middle ground between both ends of the spectrum.
Good article, though I think that I'm with others that mentioned that these issues seem to be ultimately poor design decisions in a more general sense, rather than specifically an issue with microservices.
JWT-based authentication should be handled at the perimeter of the system, as noted by someone else.
Also, the fact that these microservices were so coupled tells me they weren't really designed to be microservices. There should be separation of concerns between microservices, and they should only care about each others' interfaces.
In the end though, everything is a tradeoff, and usually the simplest solution is the best.
When I started in 2015 working on a product, the microservice hype train began. The thing grew up to 15 services and it was a maintenance nightmare. Code duplication was actually one of the less severe problems.
Microservices were meant to be an organizational solution to split a monolithic code base maintained by multiple teams into multiple domain-specific ones to reduce dependencies. In that respect, code duplication is actually a good thing.
However, it was never intended that one team create a new microservice for every two new endpoints.
Over the years, we have now successfully shrunk the application down to only two services: one a smart gateway, the other being what we call a Modulith.
We keep following the DDD patterns to have independent modules in our code base but shipped as one runtime.
We have to scale that service up to a couple instances and we do constant performance testing but there are no disadvantages so far.
On the contrary, working as one team on that code base has become an ease.
I'm afraid this is a terrible implementation, and so can't really be cited as an issue for microservices as such.
However, there are a few common threads here which microservices encourage. The first is domain coupling. You said how each new feature meant a new service. Well, most services are storing and querying data. Imagine if you had one general purpose tool, much like a SQL database in that you could dynamically alter the data structures at runtime.
Imagine if you could also change the business rules and access rules for data at runtime. This would mean:
a) Developers could create new data structures without all the issues of new projects, such as new database instances.
b) All the standard stuff, such as security, API access, audit recording, and complex query support would be available generally. You wouldn't need to write code for basic data access.
I've written a few related articles:
dev.to/cheetah100/micro-nightmares...
dev.to/cheetah100/capability-drive...
I use a custom module system for my projects, so I said "goodbye" to microservices.