<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Place Exchange</title>
    <description>The latest articles on DEV Community by Place Exchange (@place-exchange).</description>
    <link>https://dev.to/place-exchange</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2172%2F7433fb9d-1ba9-46b6-a43a-1e60c103a43f.png</url>
      <title>DEV Community: Place Exchange</title>
      <link>https://dev.to/place-exchange</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/place-exchange"/>
    <language>en</language>
    <item>
      <title>Building a Multi-Tenant gRPC Development Platform with Ambassador and AWS EKS</title>
      <dc:creator>Brian Annis</dc:creator>
      <pubDate>Tue, 07 Jul 2020 13:40:36 +0000</pubDate>
      <link>https://dev.to/place-exchange/building-a-multi-tenant-grpc-development-platform-with-ambassador-and-aws-eks-58hk</link>
      <guid>https://dev.to/place-exchange/building-a-multi-tenant-grpc-development-platform-with-ambassador-and-aws-eks-58hk</guid>
      <description>&lt;p&gt;In early 2020 the PlaceExchange SRE team was challenged to build support for the company's first gRPC application that would run on Amazon's Elastic Kubernetes Service (EKS). Our usage of third-party geocoding APIs was beginning to exceed the cost of implementing our own service, and so we decided to build one with a gRPC interface. We had already operated EKS for several months with RESTful services and felt confident in our ability to deliver a platform capable of hosting multiple instances of the API for our developers to work concurrently. Given our extensive use of per-developer environments (namespaces) and adherence to the infrastructure as code model, it seemed natural to extend this pattern to support gRPC services.&lt;/p&gt;

&lt;p&gt;As we began to evaluate options for a fully programmatic edge, the Ambassador Edge Stack caught our eye for two reasons: it supported Kubernetes Custom Resource Definitions (CRDs) for defining complex routing rules, and it was built on the battle-tested Envoy proxy. Naturally we had a lot of questions, namely how to support TLS termination and HTTP/2 without burdening the dev team with undue complexity. Thankfully Ambassador supports both, and armed with that knowledge we set out to extend our EKS PaaS to support secure gRPC services from Day 1.&lt;/p&gt;

&lt;p&gt;We put together a hands-on tutorial to demonstrate our topology, noting our learnings along the way. We hope this article is helpful to teams looking to adopt gRPC and that it takes some of the mystery out of operating these types of services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;In this exercise you will create one Ambassador deployment on a single k8s cluster and use multiple Host CRDs to request certificates and enable TLS termination for specific domains. You will then deploy two identical gRPC applications and map them to each Host using two distinct Mappings. At the end you will be able to query each service via its respective hostname.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lz97edi6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/s3q86utnel9xocri0mkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lz97edi6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/s3q86utnel9xocri0mkb.png" alt="Alt Text" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern can be used to give each developer their own "stage" to work in. To enable your team to work concurrently, you can assign one namespace and subdomain to each developer as described in this tutorial.&lt;/p&gt;

&lt;p&gt;Mapping objects are simply Ambassador's take on "virtualhost" functionality that exists in all reverse proxy tools. The key difference here is that Ambassador stores this routing relationship as a Kubernetes native CRD, which extends the usefulness of deployment tools like kubectl and Helm.&lt;/p&gt;

&lt;h3&gt;
  
  
  Objectives
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deploy your own Ambassador cluster&lt;/li&gt;
&lt;li&gt;Ensure HTTP/2 and TLS support at all levels of the stack&lt;/li&gt;
&lt;li&gt;Build and deploy a gRPC application with TLS termination and HTTP/2 support&lt;/li&gt;
&lt;li&gt;Deploy a second instance of the same gRPC application on a different domain&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;First things first, clone the example repository from &lt;a href="https://github.com/PlaceExchange/grpc-example"&gt;GitHub&lt;/a&gt;; you'll need the included Docker and k8s manifests to complete the steps below.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/placeexchange"&gt;
        placeexchange
      &lt;/a&gt; / &lt;a href="https://github.com/placeexchange/grpc-example"&gt;
        grpc-example
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Workshop files for deploying multi-tenant gRPC services with Ambassador
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;If you're just starting out with Ambassador and gRPC, check out their &lt;a href="https://www.getambassador.io/docs/latest/howtos/grpc/"&gt;documentation&lt;/a&gt; for a basic primer on how to host a single gRPC service over insecure or secure channels.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster in AWS (EKS recommended)&lt;/li&gt;
&lt;li&gt;Cluster privileges to apply CRDs, namespaces, and deployments&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;external-dns&lt;/a&gt; OR ability to create hosted DNS records&lt;/li&gt;
&lt;li&gt;Three subdomains: one for Ambassador and one for each developer

&lt;ul&gt;
&lt;li&gt;e.g. edge.example.com, grpc.subdomain.example.com, grpc.subdomain2.example.com&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Optional: a Docker registry if you don't want to use the pre-built image&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: while this tutorial makes use of subdomains, it should work with any type of domain name. There is also no requirement that all records use the same root domain.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  1. Installing Ambassador
&lt;/h2&gt;

&lt;p&gt;If you have not installed Ambassador, you will need to deploy it to your cluster before getting started. If you already have an existing deployment of Ambassador, the "Quick Start method" describes how to edit an existing deployment.&lt;/p&gt;

&lt;p&gt;While not included by default, Ambassador documentation recommends using NLBs when terminating TLS within Ambassador. From the &lt;a href="https://www.getambassador.io/docs/latest/topics/running/ambassador-with-aws/"&gt;docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When terminating TLS at Ambassador, you should deploy a L4 Network Load Balancer (NLB) with the proxy protocol enabled to get the best performance out of your load balancer while still preserving the client IP address.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Either installation method describes how to deploy Ambassador with an NLB.&lt;/p&gt;

&lt;h3&gt;
  
  
  1a. Quick Start method
&lt;/h3&gt;

&lt;p&gt;To install Ambassador, follow the &lt;a href="https://www.getambassador.io/docs/latest/tutorials/getting-started/"&gt;quick start&lt;/a&gt; instructions. For the purposes of this tutorial, we highly recommend using the YAML method so you can see the modifications required to enable automatic DNS and HTTP/2 support.&lt;/p&gt;

&lt;p&gt;After installing Ambassador using any quick start method, you will need to annotate the ambassador service to use the NLB load balancer type and add your preferred DNS name for AES.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl edit service -n ambassador ambassador
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you can use the editor to add the following annotation, replacing "edge.example.com" with your preferred domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external-dns.alpha.kubernetes.io/hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;edge.example.com&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nlb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will instruct your &lt;code&gt;external-dns&lt;/code&gt; deployment to create an A record pointing to the NLB. It will also create a new Network Load Balancer for this service.&lt;/p&gt;

&lt;h3&gt;
  
  
  1b. Manifest method
&lt;/h3&gt;

&lt;p&gt;Alternatively, you can use the packaged manifests located in the &lt;code&gt;kube/ambassador&lt;/code&gt; directory. This directory contains the original &lt;code&gt;aes-crds.yaml&lt;/code&gt; from Ambassador with a modified &lt;code&gt;aes.yaml&lt;/code&gt; (source version 1.4.3). This modified manifest includes an annotation on the service to create an A record for the load balancer (NLB).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aes.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ambassador&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ambassador&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;product&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aes&lt;/span&gt;
    &lt;span class="na"&gt;app.kubernetes.io/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ambassador-service&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nlb"&lt;/span&gt;
    &lt;span class="na"&gt;external-dns.alpha.kubernetes.io/hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;edge.example.com&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use Find / Replace in your editor of choice to replace &lt;code&gt;edge.example.com&lt;/code&gt; with your preferred DNS name for the API gateway. This will be used by any service that does not provide a &lt;code&gt;host:&lt;/code&gt; or &lt;code&gt;:authority:&lt;/code&gt; key in its Mapping. Once this is complete, you can deploy the &lt;code&gt;aes-crds.yaml&lt;/code&gt; and &lt;code&gt;aes.yaml&lt;/code&gt; manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/ambassador/aes-crds.yaml
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/ambassador/aes.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Configuring the default host
&lt;/h2&gt;

&lt;p&gt;Edit the &lt;code&gt;aes-host.yaml&lt;/code&gt; manifest and use Find / Replace to swap &lt;code&gt;edge.example.com&lt;/code&gt; with your preferred DNS name for the API gateway. This should be the same hostname you just provided in the &lt;code&gt;aes.yaml&lt;/code&gt; Service annotation. This hostname will be used to access any service that does not provide a &lt;code&gt;host:&lt;/code&gt; or &lt;code&gt;:authority:&lt;/code&gt; key in its Mapping, which is &lt;em&gt;not&lt;/em&gt; used in this tutorial but is useful for troubleshooting Ambassador.&lt;/p&gt;

&lt;p&gt;You should also take a moment to Find / Replace &lt;code&gt;registration@example.com&lt;/code&gt; with a valid email for your organization.&lt;/p&gt;

&lt;p&gt;It may take a few minutes for the NLB to spin up and for external-dns to create a new A record pointing to it. Once your domain resolves, you can deploy &lt;code&gt;aes-host.yaml&lt;/code&gt; to create a new Host and TLSContext for Ambassador. This will request a certificate from LetsEncrypt and enable TLS termination on this domain for any service &lt;em&gt;without&lt;/em&gt; a &lt;code&gt;host:&lt;/code&gt; or &lt;code&gt;:authority:&lt;/code&gt; key in its Mapping.&lt;/p&gt;
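&lt;p&gt;For orientation, a minimal Host with ACME enabled looks roughly like this (a sketch based on the &lt;code&gt;getambassador.io/v2&lt;/code&gt; Host schema; the repo's &lt;code&gt;aes-host.yaml&lt;/code&gt; may differ in names and additional fields, and the domain and email are placeholders):&lt;/p&gt;

```yaml
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: edge            # placeholder name
  namespace: ambassador
spec:
  hostname: edge.example.com          # your default gateway domain
  acmeProvider:
    email: registration@example.com   # valid contact email for LetsEncrypt
```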

&lt;blockquote&gt;
&lt;p&gt;NOTE: if you don't have external-dns deployed in your cluster, you can create an A record pointing to your NLB manually; it'll still work. Just remember that you will need to update the record if you delete or recreate the service / NLB for any reason.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/ambassador/aes-host.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the status of the ACME request at any time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get host &lt;span class="nt"&gt;-n&lt;/span&gt; ambassador
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see that the certificate is issued and the Host CRD is ready.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME   HOSTNAME                    STATE   PHASE COMPLETED   PHASE PENDING   AGE
edge   edge.example.com            Ready                                     11d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the STATE is not ready, you can use &lt;code&gt;kubectl describe host -n ambassador&lt;/code&gt; to see recent events and troubleshoot. Common problems include DNS propagation delays and LetsEncrypt rate limiting.&lt;/p&gt;

&lt;h2&gt;
  
  
  OPTIONAL: Build the image
&lt;/h2&gt;

&lt;p&gt;If you do not want to use the pre-built image hosted on Docker Hub, you can build and push to your own registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker build ./docker &lt;span class="nt"&gt;-t&lt;/span&gt; &amp;lt;docker_reg&amp;gt;/grpc-demo
&lt;span class="nv"&gt;$ &lt;/span&gt;docker push &amp;lt;docker_reg&amp;gt;/grpc-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to update the &lt;code&gt;image:&lt;/code&gt; value in &lt;code&gt;grpc-demo.yaml&lt;/code&gt; and &lt;code&gt;grpc2-demo.yaml&lt;/code&gt; to prepare for deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Create CNAMEs for service subdomains
&lt;/h2&gt;

&lt;p&gt;To route external traffic for each service to Ambassador's NLB, you will need to create CNAMEs for each subdomain that resolve to Ambassador's A record. After creating the records, your environment should look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="no"&gt;CNAME&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;subdomain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;com&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;A&lt;/span&gt; &lt;span class="n"&gt;edge&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;com&lt;/span&gt;
&lt;span class="no"&gt;CNAME&lt;/span&gt; &lt;span class="n"&gt;grpc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;subdomain2&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;com&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;A&lt;/span&gt; &lt;span class="n"&gt;edge&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, Find All / Replace All &lt;code&gt;grpc.subdomain.example.com&lt;/code&gt; with your first service subdomain, and &lt;code&gt;grpc.subdomain2.example.com&lt;/code&gt; with your second subdomain.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;external-dns&lt;/code&gt; is not useful in this scenario because the only Service of type LoadBalancer is managed by Ambassador. You &lt;em&gt;could&lt;/em&gt; append multiple domains to the Service's &lt;code&gt;external-dns.alpha.kubernetes.io/hostname&lt;/code&gt; annotation, but this becomes unwieldy for just-in-time environment provisioning, since your deployment tooling would need to handle string parsing and appending.&lt;/p&gt;

&lt;p&gt;For now, it is probably easiest to have your infrastructure tooling interact directly with your DNS provider as part of your deployment process.&lt;/p&gt;
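&lt;p&gt;If your DNS lives in Route 53, one option is to have that tooling emit a change batch that the AWS CLI can apply. A minimal sketch using only the standard library (the domains are the tutorial's placeholders, and the hosted-zone ID in the comment is hypothetical):&lt;/p&gt;

```python
import json

def cname_change_batch(subdomains, edge_record):
    """Build a Route 53 change batch that points each service subdomain
    (CNAME) at Ambassador's A record."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": edge_record}],
                },
            }
            for name in subdomains
        ]
    }

batch = cname_change_batch(
    ["grpc.subdomain.example.com", "grpc.subdomain2.example.com"],
    "edge.example.com",
)
with open("cname-batch.json", "w") as f:
    json.dump(batch, f, indent=2)

# Apply with (ZONEID is a placeholder for your hosted zone):
#   aws route53 change-resource-record-sets \
#     --hosted-zone-id ZONEID --change-batch file://cname-batch.json
```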

&lt;h2&gt;
  
  
  4. Check TLS termination
&lt;/h2&gt;

&lt;p&gt;At this point, Ambassador is configured and you're ready to deploy a RESTful service to double-check that everything is working with TLS. Debugging TLS with a gRPC service is tricky, so this service will help iron out any problems with certificate requests and DNS.&lt;/p&gt;

&lt;p&gt;Deploy the &lt;code&gt;demo&lt;/code&gt; and &lt;code&gt;demo2&lt;/code&gt; namespaces&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/grpc-example/namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now deploy the "quote" application&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/grpc-example/quote.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;quote.yaml&lt;/code&gt; manifest will deploy a RESTful service accessible from &lt;code&gt;https://grpc.subdomain.example.com/quote/&lt;/code&gt;. You may need to wait a few moments for Ambassador to request and receive a certificate from LetsEncrypt.&lt;/p&gt;

&lt;p&gt;This manifest contains Service, Deployment, Host, Mapping, and TLSContext objects. The Host and TLSContext will allow Ambassador to terminate TLS for &lt;code&gt;grpc.subdomain.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The included Mapping will route requests under the &lt;code&gt;/quote/&lt;/code&gt; prefix to the &lt;code&gt;quote&lt;/code&gt; service, hosted on Pod port &lt;code&gt;8080&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;quote.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getambassador.io/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mapping&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quote-backend&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grpc.subdomain.example.com&lt;/span&gt;
  &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/quote/&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;personal:8080&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you navigate to this endpoint in the browser, you should see some quotes from the Datawire team. If you get a timeout or SSL warning, check the Host record in the &lt;code&gt;demo&lt;/code&gt; namespace and make sure your Pods are healthy.&lt;/p&gt;
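&lt;p&gt;Beyond the browser, a tiny smoke test is handy in CI. A sketch using only the Python standard library (the hostname is the tutorial's placeholder, and the prefix should match your Mapping's &lt;code&gt;prefix&lt;/code&gt;; the live call is commented out because it needs the real DNS records):&lt;/p&gt;

```python
import ssl
import urllib.request

def backend_url(host, prefix="/quote/"):
    """Build the HTTPS URL for the REST check; prefix should match
    the Mapping's prefix for this host."""
    return f"https://{host}{prefix}"

def smoke_test(host):
    # Fails loudly on DNS problems, bad certificates, or non-2xx responses.
    req = urllib.request.Request(backend_url(host))
    with urllib.request.urlopen(req, context=ssl.create_default_context()) as resp:
        return resp.status

# Requires the live environment:
# print(smoke_test("grpc.subdomain.example.com"))
```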

&lt;h2&gt;
  
  
  5. Deploy the first gRPC service
&lt;/h2&gt;

&lt;p&gt;Once TLS termination is confirmed, you can deploy the first gRPC service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kube/grpc-example/grpc-demo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This manifest contains Service, Deployment, and Mapping objects. Since you already deployed a Host and TLSContext for &lt;code&gt;grpc.subdomain.example.com&lt;/code&gt; as part of &lt;code&gt;quote.yaml&lt;/code&gt;, there is no need to deploy them again as part of this manifest.&lt;/p&gt;

&lt;p&gt;You can see that gRPC Mappings use a slightly different syntax:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grpc-demo.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getambassador.io/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mapping&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grpc-mapping&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;:authority: grpc.subdomain.example.com&lt;/span&gt;
  &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;True&lt;/span&gt;
  &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/helloworld.Greeter/&lt;/span&gt;
  &lt;span class="na"&gt;rewrite&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/helloworld.Greeter/&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grpc-example:50051&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to test your service. The Docker image includes a client that communicates over TLS &lt;em&gt;only&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;BACKEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;grpc.subdomain.example.com placeexchange/grpc-demo python greeter_client.py
Greeter client received: Hello, you!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
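&lt;p&gt;Note that the client dials port 443, where Ambassador terminates TLS and routes on &lt;code&gt;:authority&lt;/code&gt;, rather than the Pod port 50051. A sketch of what such a TLS-only client sets up (this is an assumed structure, not the actual &lt;code&gt;greeter_client.py&lt;/code&gt; from the repo, and the secure channel needs &lt;code&gt;grpcio&lt;/code&gt; installed):&lt;/p&gt;

```python
import os

def channel_target(backend, port=443):
    # Dial host:443: Ambassador terminates TLS there, matches the
    # :authority header against the Mappings, and forwards to the
    # Service on 50051 inside the cluster.
    return f"{backend}:{port}"

def secure_channel(backend):
    import grpc  # imported lazily so the sketch loads without grpcio installed
    # ssl_channel_credentials() uses the system CA roots, which trust LetsEncrypt.
    creds = grpc.ssl_channel_credentials()
    return grpc.secure_channel(channel_target(backend), creds)

# Mirrors the container invocation: BACKEND=grpc.subdomain.example.com ...
backend = os.environ.get("BACKEND", "grpc.subdomain.example.com")
```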



&lt;h2&gt;
  
  
  6. Deploy the second gRPC service
&lt;/h2&gt;

&lt;p&gt;Now that you have deployed the first service and confirmed it's working, you can deploy the second service. This service uses the same image with a different Host, TLSContext and Mapping in the &lt;code&gt;demo2&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grpc2-demo.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;getambassador.io/v2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Mapping&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grpc-mapping&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo2&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;:authority: grpc.subdomain2.example.com&lt;/span&gt;
  &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;True&lt;/span&gt;
  &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/helloworld.Greeter/&lt;/span&gt;
  &lt;span class="na"&gt;rewrite&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/helloworld.Greeter/&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;grpc-example:50051&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This demonstrates that you could have &lt;em&gt;n&lt;/em&gt; subdomain / namespace pairs, one for each developer on your team.&lt;/p&gt;

&lt;p&gt;Now you can test the second deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;BACKEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;grpc.subdomain2.example.com placeexchange/grpc-demo python greeter_client.py
Greeter client received: Hello, you!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have two development namespaces that allow individual experimentation and deployment with full TLS termination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parting Words
&lt;/h2&gt;

&lt;p&gt;We hope that you have found this demonstration of a multi-namespace deployment relevant and useful. Ambassador has made hosting our geocoding application simple, and we look forward to onboarding additional gRPC and RESTful services onto our platform in the near future. Here are several resources we found useful as we experimented with and deployed Ambassador.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.getambassador.io/docs/latest/howtos/grpc/"&gt;Ambassador gRPC documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://letsencrypt.org/docs/rate-limits/"&gt;LetsEncrypt Rate Limits&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://d6e.co/slack"&gt;Datawire Slack&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until next time!&lt;/p&gt;

</description>
      <category>sre</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Slashing Buildkite deployment time by 75%</title>
      <dc:creator>Rushikesh Magar</dc:creator>
      <pubDate>Wed, 03 Jun 2020 12:38:30 +0000</pubDate>
      <link>https://dev.to/placeexchange/slashing-buildkite-deployment-time-by-75-5cd5</link>
      <guid>https://dev.to/placeexchange/slashing-buildkite-deployment-time-by-75-5cd5</guid>
      <description>&lt;p&gt;At &lt;a href="https://www.placeexchange.com/" rel="noopener noreferrer"&gt;Place Exchange&lt;/a&gt;, we use &lt;a href="https://buildkite.com/" rel="noopener noreferrer"&gt;Buildkite&lt;/a&gt; as the continuous integration and continuous deployment platform to deploy our Django application to AWS. We use Buildkite pipelines to deploy applications on all available environments including Dev, QA and Production. Before making any changes, the mean build and deployment time across all these environments was about an hour. Using a host of tactical fixes, we brought that down to &lt;strong&gt;about 15 minutes, resulting in a decrease of almost 75% in our build times.&lt;/strong&gt; This is the story of how we got there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why did we start looking into this problem?
&lt;/h3&gt;

&lt;p&gt;This all started when we decided to add a new QA environment, primarily for load testing, as part of the Buildkite pipeline. The extra build steps for the QA environment consumed additional build time and immediately slowed the rollout of features to Production. Developers complained that the rollout process took too long and was delaying the deployment of hotfixes and new features. While we wanted the QA deployment to be part of the same pipeline, we also wanted to minimize rollout time to Production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our initial observations
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Sequential pipeline deployment
&lt;/h4&gt;

&lt;p&gt;Before parallelizing application tests and Dev/QA deployment, we used to run the different steps in our deploy pipeline sequentially.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh903uu6071xp099yofbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh903uu6071xp099yofbd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, the time required to run the application pipeline was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best case (every step reuses Docker layer caching to build the image): ~50 mins&lt;/li&gt;
&lt;li&gt;Worst case (no step reuses Docker layer caching): ~1 hr&lt;/li&gt;
&lt;/ul&gt;
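
&lt;p&gt;For illustration, the sequential layout looked roughly like the following Buildkite pipeline sketch. The step labels and &lt;code&gt;make&lt;/code&gt; commands here are hypothetical placeholders, not our actual configuration; the point is that each &lt;code&gt;wait&lt;/code&gt; forces the next step to block on the previous one.&lt;/p&gt;

```yaml
# Hypothetical sketch of the original sequential pipeline:
# every step blocks on the one before it, so times add up.
steps:
  - label: "Run tests"
    command: make test
  - wait
  - label: "Deploy to Dev"
    command: make deploy ENV=dev
  - wait
  - label: "Deploy to QA"
    command: make deploy ENV=qa
  - wait
  - block: "Release to Production?"
  - label: "Deploy to Production"
    command: make deploy ENV=prod
```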

&lt;h4&gt;
  
  
  Docker layers cause increased build times
&lt;/h4&gt;

&lt;p&gt;When building an image, Docker steps through the instructions in your Dockerfile, executing each one in the order specified. As each instruction is examined, Docker looks for an existing layer in its cache that it can reuse, rather than creating a new (duplicate) one. For more detail, see the Docker documentation &lt;a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
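
&lt;p&gt;The practical consequence is that instruction ordering matters: slow-changing instructions should come first, so that a code change invalidates as few cached layers as possible. A minimal sketch for a Python/Django-style image (the base image, file names, and commands are illustrative, not our actual Dockerfile):&lt;/p&gt;

```dockerfile
FROM python:3.8-slim
WORKDIR /app

# Copy only the dependency manifest first: this layer (and the
# pip install below it) is reused from cache as long as
# requirements.txt is unchanged.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code changes on nearly every build, so copy it
# last; only the layers from here down are rebuilt.
COPY . .
CMD ["gunicorn", "myproject.wsgi"]
```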

&lt;p&gt;We observed that when Docker build agents were used for the first time (at startup), they downloaded images over the network, since no cached layers were present on the agent yet. This added time to the build steps. Furthermore, combinations of first-time use of build agents across multiple steps led to variations in overall build time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixing the problem and how we made our builds faster
&lt;/h3&gt;

&lt;p&gt;Running a build’s jobs in parallel is a simple technique to decrease the build’s total running time, and Buildkite provides many options for doing so. The build and deploy process involves multiple steps; which of them can safely run in parallel has to be decided based on several criteria. We used the options below to optimize total build time across the different environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Parallel test execution
&lt;/h4&gt;

&lt;p&gt;Backend and front-end tests can run separately, since they are independent of each other. However, the results of both test suites get uploaded to Code Climate to generate a holistic testing report. Luckily, the Buildkite agent has &lt;a href="https://buildkite.com/docs/agent/v3/cli-artifact" rel="noopener noreferrer"&gt;artifact support&lt;/a&gt; to store and retrieve files between different steps in a build pipeline. Leveraging this feature to pass information between two parallel steps, we stored each suite’s results as build artifacts, combined them into one report, and uploaded it to Code Climate.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff18mgmc8vpg21djm6fri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff18mgmc8vpg21djm6fri.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
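
&lt;p&gt;A sketch of this layout (step labels, commands, and artifact paths are illustrative): the two test steps have no dependency between them, so Buildkite schedules them in parallel on free agents, and a combine step downloads their artifacts afterwards.&lt;/p&gt;

```yaml
steps:
  # No dependency between these two steps, so they run in
  # parallel on separate agents.
  - label: "Backend tests"
    command: make test-backend
    artifact_paths: "coverage/backend/*"
  - label: "Frontend tests"
    command: make test-frontend
    artifact_paths: "coverage/frontend/*"
  - wait
  # Pull both coverage artifacts, merge, then upload the
  # combined report to Code Climate.
  - label: "Combine coverage and upload to Code Climate"
    command: |
      buildkite-agent artifact download "coverage/*/*" .
      make combine-coverage
      make upload-codeclimate
```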

&lt;p&gt;After this change, build time reduced from &lt;strong&gt;~1 hour to ~35 mins,&lt;/strong&gt; resulting in a 42% reduction.&lt;/p&gt;

&lt;h4&gt;
  
  
  Parallel deployment to independent environments
&lt;/h4&gt;

&lt;p&gt;Depending on your SDLC setup, environments can serve different purposes with varying degrees of priority. While we use the Dev environment in the classic sense of a sandbox, we use the QA environment purely for load testing.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqfi9sebz09uxm7lxtftv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqfi9sebz09uxm7lxtftv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Buildkite, multiple agents can be used to run different independent steps in parallel. Using this approach, we have updated our application’s pipeline to deploy to Dev and QA environments in parallel, which reduces build time significantly.&lt;/p&gt;
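
&lt;p&gt;In pipeline terms, the two deploy steps simply sit between the same pair of &lt;code&gt;wait&lt;/code&gt; steps (the commands shown are illustrative placeholders):&lt;/p&gt;

```yaml
steps:
  - wait
  # Dev and QA share no state, so their deploys can run
  # concurrently on two different agents.
  - label: "Deploy to Dev"
    command: make deploy ENV=dev
  - label: "Deploy to QA"
    command: make deploy ENV=qa
  - wait
```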

&lt;h4&gt;
  
  
  Ensure execution of only branch-specific steps in deployment pipeline
&lt;/h4&gt;

&lt;p&gt;Steps that are only meant to run on feature branches, such as local tests and sandbox database migrations, do not need to be part of the master pipeline.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fonwdu8kwxsd6ro9y4dby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fonwdu8kwxsd6ro9y4dby.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Applying branch filters, so that each branch executes only the steps it needs, helped reduce build and deployment time on master.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8cknhsyavm1hk8luprmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8cknhsyavm1hk8luprmd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
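
&lt;p&gt;Buildkite supports this via the &lt;code&gt;branches&lt;/code&gt; attribute on a step; a sketch with hypothetical step names:&lt;/p&gt;

```yaml
steps:
  # Runs only on feature branches, never on master.
  - label: "Sandbox DB migration"
    command: make migrate ENV=sandbox
    branches: "!master"
  # Runs only on master.
  - label: "Deploy to Production"
    command: make deploy ENV=prod
    branches: "master"
```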

&lt;p&gt;After this change, build time reduced &lt;strong&gt;from ~35 mins to ~16 mins,&lt;/strong&gt; resulting in a 54% reduction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring success
&lt;/h3&gt;

&lt;p&gt;After applying all the optimizations and changes mentioned above, the total time required to run the application’s pipeline is&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the best case (all steps use Docker layer caching to build the Docker image): ~16 mins&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the worst case (no steps use Docker layer caching): ~25 mins&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Diving into more detail on the master and feature branch pipelines:&lt;/p&gt;

&lt;h4&gt;
  
  
  On the master branch
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbxlz1zoxr4lx7umiq48o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbxlz1zoxr4lx7umiq48o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above graph, the X-axis shows the date of each master branch build and the Y-axis shows the time the build took.&lt;/p&gt;

&lt;p&gt;As you can see, prior to 13th March builds took ~50 min to ~1 hr to complete. After 13th March, when we pushed the build optimization changes, build times dropped into the ~17 min to ~25 min range.&lt;/p&gt;

&lt;p&gt;Highlighting some exceptions in the graph above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From 21st Feb to 25th Feb, we used a separate pipeline because we were in the process of migrating our application from Swarm to Kubernetes. During that period we had a lightweight pipeline for the Kubernetes deployment, which resulted in lower build times.&lt;/li&gt;
&lt;li&gt;Between 23rd March and 26th March, two builds took ~1 hr to complete; on inspection we found that they had run without Docker layer caching. This is what took us down the path of investigating Docker layer caching, and in addition we found a slow network on the hosts running our Buildkite agents. The slow network was a consequence of using the t3.medium instance type for those hosts, which promises "Up to 5 Gbps" network performance; such degradation happens very infrequently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  On feature branches
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdatgcpzso2t2fr4s8ruq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdatgcpzso2t2fr4s8ruq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above graph, the X-axis shows the date of each build and the Y-axis shows the time the build took. Before 11th March 2020, builds on non-master branches took ~15 min to ~20 min to complete.&lt;/p&gt;

&lt;p&gt;The first build optimization change, running tests in parallel, was released on 6th March, and by 11th March 2020 almost all branches had been updated with it. After 11th March 2020, build times dropped into the ~10 min to ~15 min range, a ~33% improvement in build times.&lt;/p&gt;

&lt;p&gt;Highlighting some exceptions in the graph above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some builds after the changes still took ~15 min to complete; this is because they ran without the Docker layer cache.&lt;/li&gt;
&lt;li&gt;On 17th March 2020, one build took ~20 min to complete, since that branch had not been updated with the build optimization changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Next steps
&lt;/h3&gt;

&lt;p&gt;Using the approaches outlined above, we have substantially optimized the build and deployment time of our application, but optimization is a never-ending process. We are still working on making our build times better. Some ideas that we are contemplating include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;running the same build step in parallel over multiple agents&lt;/li&gt;
&lt;li&gt;reducing Docker image build time by inspecting the Dockerfile and potentially removing Ansible-related packages from it. Today, these support infrastructure-based steps that are not strictly application-related.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>buildkite</category>
      <category>sre</category>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
