Migration from Monolith to Microservices
Many engineering teams are transitioning from monoliths to microservices. While this migration pattern has become common, certain important aspects are often overlooked. One of them is the edge or API gateway, which plays a vital part in the transition.
Common Focus Areas in Migration
At the start of a migration, teams typically focus on:
- Domain modeling using Domain-Driven Design and bounded contexts
- Building continuous delivery pipelines
- Automating infrastructure provisioning
- Enhancing monitoring and logging
- Adopting new technologies like Docker, Kubernetes, and possibly service meshes
However, less obvious aspects, such as orchestrating the evolution of systems and managing user traffic, are often neglected. While it’s essential to refactor the architecture and integrate new technologies, the primary goal should be to minimize disruption to end-users.
Why the API Gateway Matters
Since most customer interactions pass through a single point of ingress, the edge or API gateway, careful management of this component is essential. The gateway enables experimentation and system evolution while ensuring seamless traffic flow, making it a key area to focus on during migration.
Every User Journey Begins at the Edge
The need for an effective edge solution when transitioning from monolith to microservices is well-established. Phil Calcado, in his extension of Martin Fowler’s original Microservices Prerequisites article, emphasizes this in his fifth prerequisite: “easy access to the edge.”
Calcado explains that many organizations begin their transition by exposing a new microservice directly to the internet alongside their monolith. While this approach may work for a single, simple service, it often doesn't scale. It can also introduce complexity for calling clients, particularly around authorization or data aggregation.
It is possible to use the existing monolithic application as a gateway, especially if it contains complex, tightly-coupled authorization and authentication logic. In some cases, this may be the only viable solution until the security components are refactored into a dedicated module or service.
However, this approach comes with clear downsides:
- Routing updates require changes to the monolith, often necessitating a full redeployment.
- All traffic must pass through the monolith, which can become problematic if your microservices are deployed on a new platform (such as Kubernetes). In this scenario, every incoming request must first traverse the old monolithic stack before reaching the new infrastructure, leading to increased latency and potentially higher costs.
Edge API Gateways and Reverse Proxies
You may already be using an edge gateway or reverse proxy—such as NGINX or HAProxy—which offers several advantages when working with backend architectures. Common features include:
- Transparent routing to multiple backend components
- Header rewriting
- TLS termination
- Handling cross-cutting concerns (like logging, authentication, or monitoring)
These capabilities remain valuable regardless of how requests are ultimately served. The key question is whether you should continue using the existing gateway for your microservices, and if so, whether it should be used in the same way.
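To make these features concrete, here is a minimal sketch of that edge behavior in Go, assuming a monolith on localhost:8080 and a hypothetical orders microservice on localhost:9090. The paths, ports, and header name are illustrative, not a recommended layout.

```go
// A minimal sketch of edge-proxy behavior: path-based routing, header
// rewriting, and request logging. All hostnames, ports, and paths below
// are illustrative assumptions.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func newProxy(target string) *httputil.ReverseProxy {
	u, err := url.Parse(target)
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(u)

	// Header rewriting: tag upstream requests so backends can tell edge
	// traffic apart from internal calls.
	origDirector := proxy.Director
	proxy.Director = func(r *http.Request) {
		origDirector(r)
		r.Header.Set("X-Forwarded-By", "edge-gateway")
	}
	return proxy
}

func main() {
	monolith := newProxy("http://localhost:8080") // existing monolith
	orders := newProxy("http://localhost:9090")   // hypothetical new microservice

	mux := http.NewServeMux()
	// Transparent routing: the new service owns /orders/, everything else
	// still goes to the monolith.
	mux.Handle("/orders/", orders)
	mux.Handle("/", monolith)

	// Cross-cutting concern: request logging applied once at the edge.
	logged := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s", r.Method, r.URL.Path)
		mux.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8000", logged))
}
```

TLS termination would also sit at this layer (for example via http.ListenAndServeTLS), so individual services never need to handle certificates themselves.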
From VMs to Containers (via Orchestration)
Many teams choose to migrate to new infrastructure alongside their transition to microservices. While the benefits and challenges of this shift depend heavily on context, many are moving away from virtual machines (VMs) and Infrastructure as a Service (IaaS) to containers and orchestration platforms like Kubernetes.
If you decide to package your microservices in containers and deploy them to Kubernetes, new challenges arise—particularly around managing traffic at the edge. Here are three common approaches:
1. Use the existing monolithic application as an edge gateway:
- Routes traffic to both the monolith and new microservices.
- All requests pass through the monolith, which can handle routing logic and make in-process authentication and authorization (authn/authz) calls.
2. Deploy and operate an edge gateway in your existing infrastructure:
- Routes traffic based on URIs and headers to the monolith or microservices.
- Authentication and authorization (authn/authz) are typically handled by calling either the monolith or a refactored security service (see the sketch after this list).
3. Deploy and operate an edge gateway in your new Kubernetes infrastructure:
- Routes traffic based on URIs and headers to the monolith or microservices.
- Authn/authz is handled by a refactored security service running in Kubernetes.
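For the second and third options, the authn/authz hand-off can be sketched as gateway middleware that delegates validation to the security service before any routing happens. The service URL, header usage, and "200 means allowed" convention below are hypothetical placeholders, not a real product API.

```go
// A sketch of edge authn/authz delegation: the gateway asks a refactored
// security service (or the monolith) to validate each request before
// forwarding it. The /validate endpoint and hostnames are assumptions.
package main

import (
	"log"
	"net/http"
)

const authServiceURL = "http://security-service.internal/validate"

// requireAuth forwards the caller's Authorization header to the security
// service and only proxies the request onward if validation succeeds.
func requireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		req, err := http.NewRequest(http.MethodGet, authServiceURL, nil)
		if err != nil {
			http.Error(w, "internal error", http.StatusInternalServerError)
			return
		}
		req.Header.Set("Authorization", r.Header.Get("Authorization"))

		resp, err := http.DefaultClient.Do(req)
		if err != nil || resp.StatusCode != http.StatusOK {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		resp.Body.Close()

		next.ServeHTTP(w, r)
	})
}

func main() {
	// In practice `routes` would be the routing handler that splits
	// traffic between the monolith and the microservices.
	routes := http.NewServeMux()
	routes.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("routed after successful auth\n"))
	})

	log.Fatal(http.ListenAndServe(":8000", requireAuth(routes)))
}
```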
Making the Right Tradeoffs
The decision about where to deploy and operate your edge gateway involves tradeoffs. Each option has its own implications in terms of performance, maintainability, and operational complexity. Careful consideration of your system’s architecture, security needs, and long-term strategy is essential when choosing the best path forward.
Evolving Your System: Strangling the Monolith vs. Monolith-in-a-Box
Once you’ve decided how to implement your edge gateway, the next step is planning how to evolve your system. Broadly speaking, you have two main strategies:
- “Strangle” the monolith as-is
- Encapsulate the monolith (“monolith-in-a-box”) and gradually chip away at it
Strangling the Monolith
Martin Fowler has written extensively on the Strangler Application Pattern, and although his article was published over a decade ago, the core principles remain relevant for migrating functionality out of a monolith into smaller services.
At its essence, the pattern recommends extracting functionality from the monolith incrementally by building new services. These services interact with the monolith through RPC, REST-like “seams,” messaging, or events.
As the migration progresses, functionality and code within the monolith are gradually retired, allowing the new microservices to "strangle" the old codebase over time.
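To illustrate such a seam, the sketch below shows a newly extracted service that owns its own endpoint but still calls back into the monolith over a REST-like interface for data it has not yet taken over. The service name, URLs, and JSON fields are hypothetical.

```go
// A sketch of a REST-like "seam" between an extracted service and the
// monolith, assuming the monolith still owns customer data at
// /internal/customers/{id}. All names and endpoints are illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Customer mirrors the subset of monolith data the new service needs.
type Customer struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// fetchCustomer calls the monolith across the seam; as ownership of this
// data migrates, the call is replaced by the service's own store.
func fetchCustomer(id string) (*Customer, error) {
	resp, err := http.Get(fmt.Sprintf("http://monolith.internal/internal/customers/%s", id))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var c Customer
	if err := json.NewDecoder(resp.Body).Decode(&c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	// The extracted "recommendations" endpoint now lives in the new
	// service but still leans on the monolith for customer data.
	http.HandleFunc("/recommendations", func(w http.ResponseWriter, r *http.Request) {
		c, err := fetchCustomer(r.URL.Query().Get("customerId"))
		if err != nil {
			http.Error(w, "upstream error", http.StatusBadGateway)
			return
		}
		fmt.Fprintf(w, "recommendations for %s\n", c.Name)
	})
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```

As ownership of that data migrates, the seam call is replaced by the service's own datastore, and the corresponding monolith code can be retired.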
Tradeoffs of the Strangler Pattern
The primary downside to this pattern is the overhead of maintaining two infrastructures—the monolith and the new microservices platform—until the monolith is fully phased out. This can increase complexity and operational costs, as both the old and new systems must remain functional during the transition.
Lessons from Groupon’s “I-Tier: Dismantling the Monolith”
Groupon was one of the first companies to discuss using the Strangler Application Pattern in depth with microservices, sharing their experience in the article "I-Tier: Dismantling the Monolith." While there are valuable lessons to take from their approach, modern implementations no longer require custom NGINX modules like Groupon’s “Grout”; API gateways such as Edge Stack now provide this functionality through simple declarative configuration.
Monolith-in-a-Box: Simplifying Continuous Delivery
An increasingly popular pattern among teams migrating to microservices on Kubernetes is what can be referred to as “monolith-in-a-box.” This concept involves packaging the existing monolith within a container and running it like any other service on the new platform.
For example, notonthehighstreet.com followed this pattern during their migration from a monolithic Ruby on Rails application—affectionately named the MonoNOTH—to microservices, which was shared at the ContainerSched conference in 2015.
Benefits of Monolith-in-a-Box
With this approach, the monolith runs on the same new deployment platform (e.g., Kubernetes) as the microservices. This homogenizes the continuous delivery pipeline, bringing consistency to the deployment process. While each service may need customized build steps (or a dedicated build container), once the runtime container is created, the rest of the pipeline can treat it as a standard deployment artifact.
Gradual Transition and Decommissioning
The goal of the monolith-in-a-box pattern is to deploy the monolith to the new infrastructure and gradually move traffic to this new platform. This allows teams to decommission the old infrastructure before the monolith is fully decomposed into microservices.
Following this pattern makes it even more logical to run your edge gateway within Kubernetes, since all traffic will ultimately be routed through the new platform.
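At the edge, this gradual move can be expressed as weighted routing between the old and new platforms. The sketch below uses a hypothetical 20% weight and placeholder hostnames; a gateway such as Edge Stack would normally express the same idea through declarative configuration rather than hand-written code.

```go
// A sketch of gradually shifting edge traffic to the new platform,
// assuming the old stack is reachable at old.internal and the
// monolith-in-a-box on Kubernetes at new.internal. The weight and
// hostnames are illustrative assumptions.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func mustProxy(target string) *httputil.ReverseProxy {
	u, err := url.Parse(target)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	oldPlatform := mustProxy("http://old.internal")
	newPlatform := mustProxy("http://new.internal")
	newWeight := 0.20 // start small, raise as confidence in the new platform grows

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Weighted routing: a fraction of requests goes to the new
		// platform, the rest continues to the old infrastructure.
		if rand.Float64() < newWeight {
			newPlatform.ServeHTTP(w, r)
			return
		}
		oldPlatform.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8000", handler))
}
```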
Moving from VMs to a Cloud-Native Platform
When transitioning from Virtual Machine (VM)-based infrastructure to a cloud-native platform like Kubernetes, it’s essential to invest time in implementing an effective API gateway/ingress solution to facilitate the migration. You have several options for doing this:
- Use the existing monolithic application as an API gateway.
- Deploy or use an API gateway in your existing infrastructure to route traffic between the current and new services.
- Deploy an API gateway within your new Kubernetes platform.
Deploying an API gateway within Kubernetes offers greater flexibility for implementing migration patterns like “monolith-in-a-box.” This approach can also accelerate the transition from monolith to microservices by streamlining traffic management across the new platform.
Simplify Your Transition to Microservices with Edge Stack API Gateway
Managing traffic between monoliths and microservices doesn't have to be complicated. Edge Stack Kubernetes API Gateway provides a modern, cloud-native solution that streamlines your migration to Kubernetes. Gain seamless routing, authentication, and TLS termination—all through simple declarative configuration.