
Hannah for Codefresh


Deploying Microservices with GitOps

Learn how to adopt GitOps for your microservices deployments.

This blog post will walk through the highlights listed below, explaining how you can adopt GitOps and the benefits of deploying your microservices with Argo.

  • What are microservices?
  • What are some of the challenges with microservices?
  • What is GitOps?
  • What is Argo CD?
  • How can GitOps and Argo CD help your deployments for your microservices?



Microservices are a software architecture pattern for building scalable distributed applications. The pattern breaks a monolithic architecture down into small, independent services that communicate over common mechanisms such as APIs, event streams, or message queues. An organization might do this because monolithic architectures are tightly coupled and run as a single service: if one particular process of a monolithic application experiences a spike in demand, the entire application must be scaled to handle it.

It's also important to note that adding features or experiments to this type of application becomes more complex, and therefore more prone to issues and failures.

Breaking down a monolithic application with API services allows you to build and version each function of an application independently. Because of this, microservices enable an organization to run, modify, deploy and scale each service for an application much more quickly.

(include visual of monolithic architecture vs. microservice)

Challenges of Microservices

Microservices provide several benefits for organizations by enabling autonomy amongst development teams, allowing applications to scale, deploying those applications efficiently, and increasing flexibility and resilience for those applications. However, microservices include challenges that we will dive into and explain.

Those challenges are:

  • The exploding number of pipelines.

As the number of applications increases, managing their pipelines becomes more complex. Often this results in messy scripts and a dependence on configuration management tools. All of that brings maintenance overhead, and operators spend a lot of time fixing pipelines. We have a blog post that dives deeper into this issue if you'd like to learn more.

  • Difficulty managing deployments (all at once, in sequence, and with dependencies).

While deploying a single microservice is more manageable than deploying a monolithic legacy application, there are still challenges. Suddenly, the handful of pipelines for a monolithic application has turned into 20 or more pipelines across multiple microservices. These numbers will vary by organization, but there's no doubt that managing multiple microservice repositories and pipelines grows more complicated as more and more are created.

  • Difficult to monitor.

Now that there are multiple pipelines and multiple applications with an increase in deployments, those deployments and pipelines need to be monitored for issues. With a monolithic architecture, these things could be monitored manually. However, the increased complexity of a distributed system introduces many more opportunities for failure.

This is why it’s so important to ensure you’re monitoring for total failures, network failures, cascading failures caused by remote procedure calls between your microservices, and even ensuring that you’re honoring your SLAs (service level agreements). With a microservice architecture, we need to keep in mind that services can even be temporarily unavailable due to broken releases, configurations, and any changes due to human error. Clients of services should be architected in a resilient way so they do not suffer catastrophic failure when a service is down.

  • Difficult to debug (when things go wrong).

This topic could be a blog post in itself. However, some of the critical challenges when debugging a microservice are fragmentation and recreating the environment where the bug was identified. You need a setup identical to production, which may include specific cluster versions, serverless functions, the exact environment where the bug was reported, and so on. Add to that the complications of asynchronous calls, the technicalities of running the microservice locally, and the cost of executing all of this in what you hope is a timely manner.

Monolithic vs Microservices

Even with the above examples of the complexity that arises when transitioning from a monolithic architecture to a microservice architecture, the benefits of moving to microservices can't be overstated. To gain the full benefits of microservices in scaling, reliability, and security, we need to address these challenges head-on.

So, what could help you with some of the challenges organizations face when managing, monitoring, deploying, and maintaining their microservices? Let me introduce you to GitOps, how you can apply it to your microservices, and how it might help!


What is GitOps?

Now that we've covered what microservices are and the challenges they create for us, let's dive into what GitOps is, so you can begin to form your own thoughts as to WHY it might be helpful for your microservices.

Perhaps you're familiar with Git and use it already for your source control and recognize how it enables a source of truth for your source code, but what if we expanded on that idea and applied it to our entire system? This is where GitOps comes into play.

GitOps is a paradigm that leverages a Git repository as the source of truth to manage continuous deployments and infrastructure management as code. This allows you to drive automation, bring repeatability, reduce downtime and eliminate human interventions in constant release cycles.

Now that we better understand GitOps let's explore a potential solution for deploying your microservices with GitOps that can be enabled by using Argo CD.

Argo CD

When determining the best deployment process and tools for GitOps, Argo CD is a great solution. Argo CD is a declarative CD (Continuous Delivery) tool that automatically synchronizes and deploys your applications whenever a change is made to your Git repository. This is why it’s one of the leading GitOps solutions.

It allows you to manage the lifecycle of your application and deployments while also providing version control, configuration management, and application definitions with a Kubernetes manifest. This allows you to organize your environments and complex infrastructure data with more ease.

Within this post, we will highlight ways that you can use Argo CD to deploy your microservices while applying GitOps to your workflow.

How GitOps and Argo can help

When trying to determine the best strategy to manage your Kubernetes applications, an increasingly popular approach is GitOps. Argo CD makes GitOps practical for you to use because Argo CD is a Kubernetes-native declarative continuous delivery tool that follows GitOps principles. Argo CD also enables you to accelerate the frequency of your application deployments and lifecycle management by maintaining your configurations and ensuring they’re in sync, applying your desired state directly from Git.

Essentially, Argo CD helps support your deployments while effectively providing several other benefits for your microservices which we’ll explain more below!

  • No more deployment pipelines -> Argo CD deploys them from Git

Previously, we mentioned the explosion of pipelines when converting from a monolithic architecture to a microservice architecture. Leveraging Argo CD can reduce the number of pipelines by simply eliminating them: when you use Argo to deploy your microservice, you rely on your Git repository instead of a pipeline. When you add an Argo CD Application and an AppProject, your app's configuration settings are defined using the Application custom resource (a Kubernetes CRD).

Argo CD fully automates the deployment of an application by recognizing your configuration details from your manifest files and synchronizes any changes made to your system within your targeted namespace. Argo CD allows you to eliminate the use of a deployment pipeline.
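As a sketch, a minimal Application manifest might look like the following (the repository URL, paths, and names here are hypothetical placeholders, not from this post):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd            # namespace where Argo CD is installed
spec:
  project: default             # the AppProject this Application belongs to
  source:
    repoURL: https://github.com/example/my-service.git
    targetRevision: main       # branch, tag, or commit to track
    path: manifests            # directory of Kubernetes manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service      # target namespace for the deployed resources
  syncPolicy:
    automated:                 # sync automatically on every Git change
      prune: true              # delete resources removed from Git
      selfHeal: true           # revert manual drift back to the Git state
```

With `syncPolicy.automated` set, every commit to the tracked path is deployed without any pipeline step.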

  • Reduce files -> use an ApplicationSet

While microservices do make it easier to deploy more frequently, there’s also the increase in files for your configurations. Even when a service is deployed and managed using GitOps, this doesn’t reduce the number of files, such as your Kubernetes manifests, your services, secrets, and configmaps, all stored within your Git repository.

Luckily, Argo CD offers a controller known as the ApplicationSet controller to make managing our deployments and files easier. An ApplicationSet allows you to create multiple Argo CD Applications and organize your microservices per Application. The controller also lets you use a single manifest to target numerous clusters, and a single manifest to deploy multiple applications, not just from one repository but from several!
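For instance, an ApplicationSet with a list generator can stamp out one Application per microservice from a single manifest. A sketch (the service names and repository URL are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
    - list:                      # one Application per list element
        elements:
          - service: cart
          - service: payments
  template:                      # Application template, parameterized per element
    metadata:
      name: '{{service}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/microservices.git
        targetRevision: main
        path: 'services/{{service}}'   # each service's manifests live in its own directory
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{service}}'
```

Adding a new microservice then becomes a one-line change to the `elements` list rather than a whole new set of files.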

  • Always know their health -> Argo CD health

Whenever you make a change to your microservices or your system, there's a risk of failure. For example, if you make a change and apply it to your manifest YAML file, Kubernetes does not wait until the resource reports back its status, and your automation tools typically don't report back either. Instead, Kubernetes applies the manifest and moves on to the next one without waiting to know whether the first succeeded. Something that can help alert you and provide observability into the state of your service is the Argo CD health status.

Argo CD offers a built-in health assessment that provides an overall Application health status for you. Argo CD checks health for the following resources: Service, Ingress, Deployment, ReplicaSet, StatefulSet, DaemonSet, and PersistentVolumeClaim. You can also add custom health checks for custom resources or for any persistent known issues affecting your Ingress or StatefulSet. These custom health checks are written in Lua and can be configured in two ways: within the argocd-cm ConfigMap or as a health script bundled with Argo CD. If you cannot find a health check for your resource, you can create one yourself for whatever use case you need.
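As a sketch, a custom health check for a hypothetical custom resource could be added to the argocd-cm ConfigMap like this (the API group, kind, and `status.ready` field are illustrative assumptions; your resource will report status differently):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Key format: resource.customizations.health.<group>_<kind>
  resource.customizations.health.example.com_MyResource: |
    hs = {}
    hs.status = "Progressing"          -- default until the resource reports status
    hs.message = "Waiting for status"
    if obj.status ~= nil and obj.status.ready then
      hs.status = "Healthy"
      hs.message = "Resource is ready"
    end
    return hs
```

The Lua script receives the live object as `obj` and returns a health status that rolls up into the Application's overall health.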

Above is an example of the Argo CD Application health status within the Argo CD Web UI. Since Git is our source of truth, this sort of observability and information about your service provides that source of truth for the actual state of your running system.

  • Deployment cadence -> Argo CD sync phases/waves

Typically, when managing several microservices and deploying frequently, it can be a challenge to deploy a specific service in a particular order, or, say, to deploy a service's database separately from its deployment files. You need some solution for this. Whatever your use case may be, Argo CD provides one!

A sync wave allows you to order how Argo CD applies the manifest files stored in your Git repository. By default, all manifests have a wave of zero, but you can set the wave using an annotation.

For example, if you wanted to specify a wave of 5, it would look something like this:

  metadata:
    annotations:
      argocd.argoproj.io/sync-wave: "5"

When Argo CD starts a sync action, the manifests are applied in the order of their phase.

When you apply a manifest in a phase, you're using a resource hook to control the sync operation. Those operations are defined by three primary phases: pre-sync, sync, and post-sync. To enable a hook, annotate the specific object manifest with the type of sync you want to use for that resource.

For example, if you wanted to use the PostSync hook, it would look something like this:

  metadata:
    annotations:
      argocd.argoproj.io/hook: PostSync

Whenever Argo CD starts a sync, it orders the resources in the following precedence: the phase, the wave they are in, by kind (e.g., namespaces first and then other Kubernetes resources, followed by custom resources), and by name.
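Putting waves together, a sketch of ordering a database before the service that depends on it could look like this (resource names are hypothetical, and the `spec` sections are omitted for brevity):

```yaml
# Applied first: wave 0 (the default) within the sync phase
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Applied only after wave 0 is created and healthy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  annotations:
    argocd.argoproj.io/sync-wave: "1"
```

Argo CD waits for each wave to become healthy before moving on to the next, which is what makes the ordering reliable rather than just sequential.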

  • Dependencies -> See the graphical view of Argo CD

Now that you have containerized your microservices and wrapped your brain around the Kubernetes ecosystem, an extra layer of complexity suddenly gets added when you try to deploy all of those microservices while referencing the never-ending Kubernetes documentation. This is where having a visualization of your deployments helps! A topology view of your application and infrastructure lets you better understand, monitor, and control your containerized microservices.

Argo CD does just that. The web UI provides a real-time view of application activity and dependencies and continuously monitors running applications when using Argo CD. Monitoring all deployed applications allows you to compare the current, live state against the desired target state (as specified within your Git repository). Any deployed application whose live state deviates from the target state is reported and allows you to view the differences while also providing facilities to automatically or manually sync the live state back to the desired target state of your system.

Image from Argo CD

The image above exemplifies an Argo CD Application with a topology view that shows all of the dependencies of that application.

  • Fast rollbacks/progressive delivery -> Argo Rollouts

When beginning to plan your deployment process for your new microservices, it's essential to determine which deployment strategy is best for you and your organization.

Potential strategies could include:

Rolling: Services that don't need to be available 24/7 can use a pattern of swapping in the new for the old and essentially hitting the restart button, keeping disruption minimal should there be a service outage. The simplicity is compelling; the disruptions to end users are less so.

Progressive delivery (this includes blue-green deployments and canary): A blue-green deployment is simple, has excellent rollback, and has minimal downtime in best-case scenarios. Canary deployments tie together the best of all worlds for minimizing downtime, fast rollback, and minimal disruptions in the case of failed deployments.

However, while progressive delivery improves the reliability and stability of deployments, applying it within your complex microservice environments doesn't come easy. Kubernetes was built as a container orchestrator, not a deployment system, requiring manual work to execute progressive delivery. Kubernetes also has a default rolling update strategy that is simple to deploy yet provides little control over user traffic and rollback compared to the more advanced deployment methods like blue-green and canary.

This is where Argo Rollouts walks down the red carpet as an open source Kubernetes controller with a GitOps-based progressive delivery strategy.

Argo Rollouts introduces a new Kubernetes CRD for a Rollout entity. It has the same functionality as a Kubernetes Deployment, but Argo Rollouts monitors for changes and can control the rollout. Users define a specific rollout strategy in a manifest file. Argo Rollouts helps you simplify your release processes and fully supports progressive delivery, with automated deployment rollout and management in one!
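As a sketch, a Rollout using a canary strategy might look like the following (the service name and image are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout                    # drop-in replacement for a Deployment
metadata:
  name: my-service
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-service
  template:                      # same pod template as a Deployment
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: example/my-service:1.0.0
  strategy:
    canary:
      steps:
        - setWeight: 20          # send 20% of traffic to the new version
        - pause: {duration: 60s} # wait a minute to observe it
        - setWeight: 50
        - pause: {}              # pause indefinitely until manually promoted
```

Promoting (or aborting) the rollout shifts the remaining traffic, and a failed step rolls traffic back to the stable version.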

Argo Rollout Image of UI Dashboard

This image above exemplifies a visualization of an individual Rollout using a Canary deployment strategy.


As you transition from a monolithic architecture to microservices, this process shares a common goal with GitOps: making it easier to deploy more frequently.

We have shared the benefits and possibilities of adopting GitOps with Argo for your microservices deployments within this blog post. To help you leverage these benefits and get started with your GitOps journey, make sure you understand what GitOps is and the fundamentals of this paradigm. To get started, you can check out our GitOps Fundamentals course for free, which allows you to learn about GitOps and get started with the basics of Argo, in tandem with receiving a certificate of completion to share with your employer and others!

Once you’ve completed this course, you should put your knowledge to the test and try Codefresh GitOps CD to help manage your Argo CD instances. This will provide you with a visualization of your services and any possible failures tied to a commit, a test, pipeline, etc. You can quickly pinpoint issues with access to a release timeline and functionality to execute rollbacks when needed.

Learn more about Codefresh GitOps CD and sign up here!
