<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Architect.io</title>
    <description>The latest articles on DEV Community by Architect.io (@architectio).</description>
    <link>https://dev.to/architectio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5477%2Ff9ae636e-0533-4e5a-ae2f-c518b06ce5b3.png</url>
      <title>DEV Community: Architect.io</title>
      <link>https://dev.to/architectio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/architectio"/>
    <language>en</language>
    <item>
      <title>Test Out the Kubernetes Terraform Provider</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Wed, 11 May 2022 03:52:03 +0000</pubDate>
      <link>https://dev.to/architectio/test-out-the-kubernetes-terraform-provider-2k60</link>
      <guid>https://dev.to/architectio/test-out-the-kubernetes-terraform-provider-2k60</guid>
      <description>&lt;p&gt;Kubernetes is a powerful yet complicated container orchestration system. It can be used to run resilient workloads on virtually any cloud platform, including AWS, GCS, Azure, DigitalOcean, and more. In this tutorial, you’ll explore some of the most commonly-used building blocks of a Kubernetes application – Pods, Deployments, and Services. These resources could be created with standard Kubernetes manifests if desired, but the method of using manifests has faults, including one major drawback, which is that there’s no state preservation.&lt;/p&gt;

&lt;p&gt;Terraform is an infrastructure-as-code tool created by HashiCorp to make handling infrastructure more straightforward and manageable. Terraform files use a declarative syntax in which the user specifies resources and their properties, such as pods, deployments, services, and ingresses. Users then leverage the Terraform CLI to preview and apply the expected infrastructure. When changes are desired, a user simply updates and reapplies the same file or set of files; Terraform then handles resource creation, updates, and deletion as required.&lt;/p&gt;

&lt;p&gt;For this tutorial, start by creating a Kubernetes cluster. By following along, you’ll learn how to define Kubernetes resources using Terraform and apply the configuration to the cluster. When everything is up and running, you’ll have your own “Hello World” service running on the cloud!&lt;/p&gt;

&lt;h2&gt;
  
  
  Project dependencies for Kubernetes and Terraform
&lt;/h2&gt;

&lt;p&gt;You’ll be using &lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli"&gt;Terraform&lt;/a&gt; to deploy all of the required resources to the Kubernetes cluster. &lt;code&gt;kubectl&lt;/code&gt; can optionally be installed if you’d like more insights into what has been created. Also, be sure to have an account with a cloud provider that has Kubernetes hosting. Once those requirements are met, you’re ready to get started!&lt;/p&gt;
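&lt;p&gt;As a quick sanity check (a suggestion, not part of the original setup), you can confirm that the required CLIs are on your PATH before starting:&lt;/p&gt;

```shell
# Check whether terraform and the optional kubectl are installed.
for tool in terraform kubectl; do
  if command -v "$tool" >/dev/null 2>/dev/null; then
    echo "$tool found"
  else
    echo "$tool missing (install it before continuing)"
  fi
done
```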

&lt;h2&gt;
  
  
  Define Kubernetes resources with Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform configurations are written in its own language, HCL (HashiCorp Configuration Language). Create a folder called &lt;code&gt;terraform-example&lt;/code&gt; where the HCL files will live, then change directories to that folder. Terraform providers need to be defined and installed to use certain types of resources. This tutorial uses the &lt;a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest"&gt;Kubernetes&lt;/a&gt; and &lt;a href="https://registry.terraform.io/providers/hashicorp/helm/latest"&gt;Helm&lt;/a&gt; providers. Providers are easily downloaded and installed with a few lines of HCL and a single command. Be sure that you have downloaded your cluster’s &lt;code&gt;kubeconfig&lt;/code&gt;, as it will be necessary for the rest of the tutorial. Create a file called &lt;code&gt;versions.tf&lt;/code&gt; where providers will be defined and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "&amp;gt;= 2.0.0"
    }
    helm = {
      source = "hashicorp/helm"
    }
  }
}
provider "kubernetes" {
  config_path = "&amp;lt;your_kubeconfig_path&amp;gt;"
}
provider "helm" {
  kubernetes {
    config_path = "&amp;lt;your_kubeconfig_path&amp;gt;"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to replace &lt;code&gt;&amp;lt;your_kubeconfig_path&amp;gt;&lt;/code&gt; in each provider block with the location of the &lt;code&gt;kubeconfig&lt;/code&gt; you’ve downloaded. Now that the required providers are defined, they can be installed by running &lt;code&gt;terraform init&lt;/code&gt;. Ensure that the command is run in the same folder as &lt;code&gt;versions.tf&lt;/code&gt;. The command should print something like the output below, indicating success:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.0.1...
- Installed hashicorp/kubernetes v2.0.1 (signed by HashiCorp)
- Installing hashicorp/helm v2.0.2...
- Installed hashicorp/helm v2.0.2 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that a folder called &lt;code&gt;.terraform&lt;/code&gt; has been created alongside &lt;code&gt;versions.tf&lt;/code&gt;. This folder is where the installed providers are stored for later Terraform operations. Now that the prerequisites to run Terraform are out of the way, the resource definitions can be created. Add a file alongside &lt;code&gt;versions.tf&lt;/code&gt; called &lt;code&gt;main.tf&lt;/code&gt;. For simplicity, all resources will be created in the same file. Add the following resource definition to &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "hello_world_namespace" {
  metadata {
    labels = {
      app = "hello-world-example"
    }
    name = "hello-world-namespace"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block defines the Kubernetes namespace that will be created for all of the other resources to live in. A Kubernetes namespace helps separate resources into groups when they don’t need to interact. It’s not strictly necessary in this case, but using namespaces is a good practice to ensure that strange collisions don’t occur down the line. Next, add the resource definition for a simple Kubernetes deployment to &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_deployment" "hello_world_deployment" {
  metadata {
    name = "kubernetes-example-deployment"
    namespace = "hello-world-namespace"
    labels = {
      app = "hello-world-example"
    }
  }

  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "hello-world-example"
      }
    }
    template {
      metadata {
        labels = {
          app = "hello-world-example"
        }
      }
      spec {
        container {
          image = "heroku/nodejs-hello-world"
          name  = "hello-world"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The deployment spec is where a user defines the expected state of a set of pods; the deployment controller in the cluster will then update pods to match that state. Note that the deployment is scoped to the namespace that was just created, &lt;code&gt;hello-world-namespace&lt;/code&gt;. The spec block in the deployment is where the expected state of a pod or set of pods is defined and, in this case, where the single “Hello World” service is defined. Because the pod runs a public Docker image, all it takes is specifying the container to run. The Kubernetes service is the next resource that needs to be defined. Add the service to &lt;code&gt;main.tf&lt;/code&gt; with the code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_service" "hello_world_service" {
  depends_on = [kubernetes_deployment.hello_world_deployment]

  metadata {
    labels = {
      app = "hello-world-example"
    }
    name = "hello-world-example"
    namespace = "hello-world-namespace"
  }

  spec {
    port {
      name = "api"
      port = 3000
      target_port = 3000
    }
    selector = {
      app = "hello-world-example"
    }
    type = "ClusterIP"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Kubernetes service defines how a group of pods should be accessed. It’s important that the service is created after the deployment and its pods, so Terraform provides the handy &lt;code&gt;depends_on&lt;/code&gt; meta-argument to handle that. &lt;code&gt;depends_on&lt;/code&gt; accepts a list of resources that must be created before the resource that declares it. Like the deployment, the service is created in the &lt;code&gt;hello-world-namespace&lt;/code&gt; so that it only targets pods running there. The service’s selector defines the labels of the pods it should target to enable access. The &lt;code&gt;app = "hello-world-example"&lt;/code&gt; selector is used here because it matches the labels set on the deployment’s pods. The service also defines which ports can be accessed. In this case, traffic sent to port 3000 of the service is routed to port 3000 of one of the selected pods.&lt;/p&gt;
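&lt;p&gt;For comparison with the manifest approach mentioned in the introduction, a roughly equivalent raw Kubernetes manifest for this service would look like the sketch below; the field values mirror the Terraform block above:&lt;/p&gt;

```yaml
# Roughly equivalent plain-manifest version of the service defined above.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-example
  namespace: hello-world-namespace
  labels:
    app: hello-world-example
spec:
  type: ClusterIP
  selector:
    app: hello-world-example
  ports:
    - name: api
      port: 3000
      targetPort: 3000
```

&lt;p&gt;The Terraform version buys you state tracking and &lt;code&gt;depends_on&lt;/code&gt; ordering that a plain manifest doesn’t provide.&lt;/p&gt;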

&lt;h2&gt;
  
  
  Use Terraform to create Kubernetes resources that enable cluster access
&lt;/h2&gt;

&lt;p&gt;The Terraform resources that have been defined so far create everything that’s needed to run an application accessible to the cluster, but more resources are needed to access the application from the outside world. Most importantly, a load balancer should be put in front of the “Hello World” service to handle the traffic. This tutorial uses the &lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;Nginx Ingress Controller&lt;/a&gt; and the Helm Terraform provider to create it. Add the following to &lt;code&gt;main.tf&lt;/code&gt; to create the Nginx ingress controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  version    = "3.15.2"
  namespace  = "hello-world-namespace"
  timeout    = 300

  values = [&amp;lt;&amp;lt;EOF
controller:
  admissionWebhooks:
    enabled: false
  electionID: ingress-controller-leader-internal
  ingressClass: nginx-hello-world-namespace
  podLabels:
    app: ingress-nginx
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
  scope:
    enabled: true
rbac:
  scope: true
EOF
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the Helm Terraform provider and, in turn, the Helm chart makes creating the required Kubernetes resources much easier because it’s not necessary to add a bunch of boilerplate to the Terraform file. If you’d like to explore more than is covered in this tutorial, feel free to check out the Helm chart &lt;a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx"&gt;here&lt;/a&gt;. When Terraform creates the &lt;code&gt;helm_release&lt;/code&gt; resource, it will create an &lt;code&gt;ingress-nginx-controller&lt;/code&gt; deployment, pod, replica set, and the other resources required to run the load balancer within the cluster. One more resource needs to be added to expose the Nginx controller to the outside world. Create a Kubernetes ingress for the Nginx controller by adding the following resource definition to &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_ingress" "ingress" {
  metadata {
    labels = {
      app                               = "ingress-nginx"
    }
    name = "api-ingress"
    namespace = "hello-world-namespace"
    annotations = {
      "kubernetes.io/ingress.class": "nginx-hello-world-namespace"
    }
  }

  spec {
    rule {
      http {
        path {
          path = "/"
          backend {
            service_name = "hello-world-example"
            service_port = 3000
          }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the ingress resource is created in the &lt;code&gt;hello-world-namespace&lt;/code&gt; namespace like all of the other resources in this tutorial. It’s also important that the annotation &lt;code&gt;kubernetes.io/ingress.class&lt;/code&gt; is set to &lt;code&gt;nginx-&amp;lt;namespace_name&amp;gt;&lt;/code&gt; so that the Nginx ingress controller knows to handle the ingress rules. Finally, the spec portion of the ingress definition defines how Nginx should be configured. In this case, all traffic is routed to the service named “hello-world-example” at port 3000 and, in turn, to the pods backing the service, which are running the “Hello World” application. Now that all of the required resources are defined, you’re ready to run the Terraform deployment!&lt;/p&gt;

&lt;p&gt;Before running the deployment, it may be useful to see exactly what Terraform will create based on the template. That’s especially useful as the infrastructure grows. Run the command &lt;code&gt;terraform plan -out=tfplan&lt;/code&gt; to see which resources Terraform will add, change, or destroy. On the first run, your output should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = false
      + chart                      = "ingress-nginx"

...

+ port        = 3000
              + protocol    = "TCP"
              + target_port = "3000"
            }
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each resource that will be created, along with details about its properties, is shown in the terminal. Once you’re ready to actually create the resources in the cluster, run the command &lt;code&gt;terraform apply tfplan&lt;/code&gt; and wait for it to complete. The load balancer may take a couple of minutes to provision. Once it completes, you should see something like the output below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubernetes_namespace.hello_world_namespace: Creating...
kubernetes_ingress.ingress: Creating...
kubernetes_deployment.hello_world_deployment: Creating...
helm_release.ingress_nginx: Creating...
kubernetes_namespace.hello_world_namespace: Creation complete after 1s [id=hello-world-namespace]
kubernetes_ingress.ingress: Creation complete after 1s [id=hello-world-namespace/api-ingress]
kubernetes_deployment.hello_world_deployment: Creation complete after 9s [id=hello-world-namespace/kubernetes-example-deployment]
kubernetes_service.hello_world_service: Creating...
kubernetes_service.hello_world_service: Creation complete after 0s [id=hello-world-namespace/hello-world-example]
helm_release.ingress_nginx: Still creating... [10s elapsed]
helm_release.ingress_nginx: Still creating... [20s elapsed]

...

helm_release.ingress_nginx: Still creating... [2m20s elapsed]
helm_release.ingress_nginx: Still creating... [2m30s elapsed]
helm_release.ingress_nginx: Creation complete after 2m36s [id=ingress-nginx]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it; you now have an application running in the cloud! But how can it be accessed? You can find out how with one &lt;code&gt;kubectl&lt;/code&gt; command. Enter the following command in a terminal and be sure to replace &lt;code&gt;&amp;lt;your_kubeconfig_file_path&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get service ingress-nginx-controller -n hello-world-namespace --kubeconfig=&amp;lt;your_kubeconfig_file_path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the IPv4 address under the header &lt;code&gt;EXTERNAL-IP&lt;/code&gt;, then on the command line, enter &lt;code&gt;curl -X GET &amp;lt;IPv4_address&amp;gt;&lt;/code&gt;, being sure to replace &lt;code&gt;&amp;lt;IPv4_address&amp;gt;&lt;/code&gt; with the external IP of the Kubernetes service. You should see “Hello World” printed to the console. That’s the response from your cloud application running on Kubernetes! Now, what happens when more and more people start using your service?&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Terraform to modify existing Kubernetes resources
&lt;/h2&gt;

&lt;p&gt;There’s only one replica of the application running right now, and more may be needed in the future to handle traffic. Fortunately, Terraform can help add more replicas of the application. Confirm that only one replica exists by running the command &lt;code&gt;kubectl get pods -n hello-world-namespace --kubeconfig=&amp;lt;your_kubeconfig_file_path&amp;gt;&lt;/code&gt; and noting that only one pod prefixed with &lt;code&gt;kubernetes-example-deployment&lt;/code&gt; exists. To increase the number of pods running the “Hello World” application, the deployment needs to be updated. Find the line in &lt;code&gt;main.tf&lt;/code&gt; where the replica count is defined as &lt;code&gt;replicas = 1&lt;/code&gt; and change the 1 to 3. Now, in a terminal, run the following command to see what will be updated in the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out=tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
kubernetes_namespace.hello_world_namespace: Refreshing state... [id=hello-world-namespace]
kubernetes_ingress.ingress: Refreshing state... [id=hello-world-namespace/api-ingress]
kubernetes_deployment.hello_world_deployment: Refreshing state... [id=hello-world-namespace/kubernetes-example-deployment]
kubernetes_service.hello_world_service: Refreshing state... [id=hello-world-namespace/hello-world-example]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # kubernetes_deployment.hello_world_deployment will be updated in-place
  ~ resource "kubernetes_deployment" "hello_world_deployment" {
        id               = "hello-world-namespace/kubernetes-example-deployment"
        # (1 unchanged attribute hidden)


      ~ spec {
          ~ replicas                  = "1" -&amp;gt; "3"
            # (4 unchanged attributes hidden)



            # (3 unchanged blocks hidden)
        }
        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because Terraform knows the state of the existing Kubernetes resources, it only needs to change the deployment. Run the command &lt;code&gt;terraform apply tfplan&lt;/code&gt; to update the number of running replicas. They may take a few seconds to spin up, but re-running the command &lt;code&gt;kubectl get pods -n hello-world-namespace --kubeconfig=&amp;lt;your_kubeconfig_file_path&amp;gt;&lt;/code&gt; should now show three replicas of the application running! Traffic can be routed to them through the load balancer and the service that were defined earlier.&lt;/p&gt;
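&lt;p&gt;For reference, after the scaling change the deployment block in &lt;code&gt;main.tf&lt;/code&gt; differs from the original only in its replica count (abbreviated sketch; the elided blocks are unchanged):&lt;/p&gt;

```hcl
resource "kubernetes_deployment" "hello_world_deployment" {
  # metadata block unchanged

  spec {
    replicas = 3 # was 1
    # selector and template blocks unchanged
  }
}
```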

&lt;p&gt;When you’re ready to clean up the resources from this guide, Terraform offers another command that can help with that. Because it tracks state, it knows everything that needs to be removed. To see what that would look like, enter &lt;code&gt;terraform plan -destroy -out=tfplan&lt;/code&gt; in a terminal, making sure you’re still in the working folder that contains &lt;code&gt;terraform.tfstate&lt;/code&gt;. Something like what’s below should be printed to the console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # helm_release.ingress_nginx will be destroyed
  - resource "helm_release" "ingress_nginx" {
      - atomic                     = false -&amp;gt; null
      - chart                      = "ingress-nginx" -&amp;gt; null

...

              - protocol    = "TCP" -&amp;gt; null
              - target_port = "3000" -&amp;gt; null
            }
        }
    }

Plan: 0 to add, 0 to change, 5 to destroy.

------------------------------------------------------------------------

This plan was saved to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’re comfortable with everything being torn down, enter the command &lt;code&gt;terraform apply tfplan&lt;/code&gt; and wait for it to complete. Every resource that was created by any apply step will now be gone. The output should look similar to what’s below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm_release.ingress_nginx: Destroying... [id=ingress-nginx]
kubernetes_namespace.hello_world_namespace: Destroying... [id=hello-world-namespace]
kubernetes_service.hello_world_service: Destroying... [id=hello-world-namespace/hello-world-example]
kubernetes_ingress.ingress: Destroying... [id=hello-world-namespace/api-ingress]
kubernetes_ingress.ingress: Destruction complete after 1s
kubernetes_service.hello_world_service: Destruction complete after 1s
kubernetes_deployment.hello_world_deployment: Destroying... [id=hello-world-namespace/kubernetes-example-deployment]
kubernetes_deployment.hello_world_deployment: Destruction complete after 0s
helm_release.ingress_nginx: Destruction complete after 3s
kubernetes_namespace.hello_world_namespace: Still destroying... [id=hello-world-namespace, 10s elapsed]
kubernetes_namespace.hello_world_namespace: Still destroying... [id=hello-world-namespace, 20s elapsed]

...

kubernetes_namespace.hello_world_namespace: Still destroying... [id=hello-world-namespace, 2m30s elapsed]
kubernetes_namespace.hello_world_namespace: Still destroying... [id=hello-world-namespace, 2m40s elapsed]
kubernetes_namespace.hello_world_namespace: Destruction complete after 2m47s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Learn more about how Architect can deploy your application to Kubernetes and elsewhere
&lt;/h2&gt;

&lt;p&gt;Terraform can deploy your application to Kubernetes easily once templates are written and all of the resources are defined. What happens when the next big thing comes along, though? Terraform would surely be able to handle deploying your application to another platform, but that would require more maintenance and likely an entire rewrite of all of your Terraform templates. With Architect, your application only needs to be defined once to be deployed anywhere. Find out more about deploying Architect components in our &lt;a href="https://www.architect.io/docs/"&gt;docs&lt;/a&gt; and &lt;a href="https://cloud.architect.io/signup?_ga=2.195883500.1750426037.1652199791-587972838.1647380382"&gt;try it out&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;For more reading, have a look at some of our other tutorials!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/rabbitmq-docker-tutorial"&gt;Implement RabbitMQ on Docker in 20 Minutes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/kafka-docker-tutorial"&gt;Implement Kafka on Docker in 20 Minutes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/gitops-developers-guide"&gt;A Developer’s Guide to GitOps&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions or comments, don’t hesitate to reach out to the team on Twitter &lt;a href="https://twitter.com/architect_team"&gt;@architect_team&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Get Started with Kafka and Docker in 20 Minutes</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Fri, 29 Apr 2022 04:58:06 +0000</pubDate>
      <link>https://dev.to/architectio/get-started-with-kafka-and-docker-in-20-minutes-hka</link>
      <guid>https://dev.to/architectio/get-started-with-kafka-and-docker-in-20-minutes-hka</guid>
      <description>&lt;p&gt;Apache Kafka is a high-throughput, high-availability, and scalable solution chosen by the world’s top companies for uses such as event streaming, stream processing, log aggregation, and more. Kafka runs on the platform of your choice, such as Kubernetes or ECS, as a cluster of one or more Kafka nodes. A Kafka cluster will be initialized with zero or more topics, which you can think of as message channels or queues. Clients can connect to Kafka to publish messages to topics or to consume messages from topics the client is subscribed to.&lt;/p&gt;

&lt;p&gt;Docker is a tool that uses OS-level virtualization to run containerized applications on a host machine. Containerization enables users to build, run, and test applications completely separately while still allowing them to communicate across a network. Importantly, containerization enables application portability so that the same application can be run on your local machine, a Kubernetes cluster, AWS, and more.&lt;/p&gt;

&lt;p&gt;Both Kafka and Docker are pretty complex technologies, and it can be difficult to determine where to get started once you’re sure that they’re the right fit for the problem you’re solving. To keep things simple, we’ll create one producer, one consumer, and one Kafka instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project dependencies for Kafka and Docker
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we’ll start by using &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; to build, run, and test locally. We’ll also walk through how to use &lt;code&gt;kubectl&lt;/code&gt; to deploy our application to the cloud. Last, we’ll walk through how we can use &lt;a href="https://www.architect.io/"&gt;Architect.io&lt;/a&gt; to seamlessly deploy our application locally and to the cloud using the same configuration. Before getting started, be sure to have the following dependencies installed locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Docker Compose&lt;/li&gt;
&lt;li&gt;A Docker Hub account&lt;/li&gt;
&lt;li&gt;npm&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@architect-io/cli"&gt;Architect CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;A Kubernetes cluster on Digital Ocean or elsewhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned previously, this part of the tutorial will contain multiple services running on your local machine. You can use &lt;code&gt;docker-compose&lt;/code&gt; to run them all at once and stop them all when you’re ready. Let’s get going!&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the publisher service in Node for Kafka with Docker
&lt;/h2&gt;

&lt;p&gt;Start by creating a project directory with two folders inside it named “subscriber” and “publisher.” These folders will contain the application code, supporting Node files, and Dockerfiles that will be needed to build the apps that will communicate with Kafka.&lt;/p&gt;
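&lt;p&gt;The layout described above can be created with a single command; the project folder name &lt;code&gt;kafka-docker-example&lt;/code&gt; is just an assumption, so use whatever you like:&lt;/p&gt;

```shell
# Create the project skeleton with the "publisher" and "subscriber" folders.
# "kafka-docker-example" is an assumed project name.
mkdir -p kafka-docker-example/publisher kafka-docker-example/subscriber
```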

&lt;p&gt;The publisher service will be the one that generates messages that will be published to a Kafka topic. For simplicity, the service will generate a simple message at an interval of five seconds. Inside of the “publisher” folder, add a new file called &lt;code&gt;index.js&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const kafka = require('kafka-node');
const client = new kafka.KafkaClient({
  kafkaHost:
    process.env.ENVIRONMENT === 'local'
      ? process.env.INTERNAL_KAFKA_ADDR
      : process.env.EXTERNAL_KAFKA_ADDR,
});
const Producer = kafka.Producer;
const producer = new Producer(client);

producer.on('ready', () =&amp;gt; {
  setInterval(() =&amp;gt; {
    const payloads = [
      {
        topic: process.env.TOPIC,
        messages: [`${process.env.TOPIC}_message_${Date.now()}`],
      },
    ];

    producer.send(payloads, (err, data) =&amp;gt; {
      if (err) {
        console.log(err);
      }
      console.log(data);
    });
  }, 5000);
});

producer.on('error', err =&amp;gt; {
  console.log(err);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close &lt;code&gt;index.js&lt;/code&gt;. We’ll also need some supporting modules installed in our Docker container when it’s built. Also in the “publisher” folder, create a &lt;code&gt;package.json&lt;/code&gt; with the following JSON:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "publisher",
  "version": "0.1.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "2.8.5",
    "express": "^4.17.1",
    "kafka-node": "^5.0.0",
    "winston": "^3.2.1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the &lt;code&gt;package.json&lt;/code&gt;. Alongside the last two files, we’ll need a &lt;code&gt;package-lock.json&lt;/code&gt;, which can be created with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i --package-lock-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last file to create for the publisher will pull everything together, and that’s the Dockerfile. Create the Dockerfile alongside the other three files that were just created and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:12-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install
COPY . .

CMD [ "npm", "start" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file. Line by line, the Dockerfile that was just added to the folder will instruct the Docker daemon to build the publisher image like so:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull the Docker image &lt;code&gt;node:12-alpine&lt;/code&gt; as the base container image&lt;/li&gt;
&lt;li&gt;Set the working directory to &lt;code&gt;/usr/src/app&lt;/code&gt;. Subsequent commands will be run in this folder&lt;/li&gt;
&lt;li&gt;Copy the &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;package-lock.json&lt;/code&gt; that were just created into the &lt;code&gt;/usr/src/app&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;npm install&lt;/code&gt; to install the Node modules&lt;/li&gt;
&lt;li&gt;Copy the rest of the files from the directory on the host machine to &lt;code&gt;/usr/src/app&lt;/code&gt;. Importantly, this includes the &lt;code&gt;index.js&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the command &lt;code&gt;npm start&lt;/code&gt; in the container. npm is already installed on the &lt;code&gt;node:12-alpine&lt;/code&gt; image, and the start script is defined in the &lt;code&gt;package.json&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Build the subscriber service for Kafka with Docker
&lt;/h2&gt;

&lt;p&gt;The subscriber service will be built very similarly to the publisher service and will consume messages from the Kafka topic. Messages will be consumed as frequently as they’re published, which in this case is every five seconds. To start, add a file titled &lt;code&gt;index.js&lt;/code&gt; to the “subscriber” folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const kafka = require('kafka-node');
const client = new kafka.KafkaClient({
  kafkaHost:
    process.env.ENVIRONMENT === 'local'
      ? process.env.INTERNAL_KAFKA_ADDR
      : process.env.EXTERNAL_KAFKA_ADDR,
});
const Consumer = kafka.Consumer;

const consumer = new Consumer(
  client,
  [
    {
      topic: process.env.TOPIC,
      partition: 0,
    },
  ],
  {
    autoCommit: false,
  },
);

consumer.on('message', message =&amp;gt; {
  console.log(message);
});

consumer.on('error', err =&amp;gt; {
  console.log(err);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close &lt;code&gt;index.js&lt;/code&gt;. As with the publisher, we’ll need a &lt;code&gt;package.json&lt;/code&gt; file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "subscriber",
  "version": "0.1.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "Architect.io",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "2.8.5",
    "express": "^4.17.1",
    "kafka-node": "^5.0.0",
    "winston": "^3.2.1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the &lt;code&gt;package.json&lt;/code&gt;, then create a &lt;code&gt;package-lock.json&lt;/code&gt; using the same command as before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i --package-lock-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The subscriber needs one extra file that the publisher doesn’t, and that’s a file we’ll call &lt;code&gt;wait-for-it.js&lt;/code&gt;. Create the file and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const kafka = require('kafka-node');
const client = new kafka.KafkaClient({
  kafkaHost:
    process.env.ENVIRONMENT === 'local'
      ? process.env.INTERNAL_KAFKA_ADDR
      : process.env.EXTERNAL_KAFKA_ADDR,
});
const Admin = kafka.Admin;
const child_process = require('child_process');

const admin = new Admin(client);
const interval_id = setInterval(() =&amp;gt; {
  admin.listTopics((err, res) =&amp;gt; {
    if (res[1].metadata[process.env.TOPIC]) {
      console.log('Kafka topic created');
      clearInterval(interval_id);
      child_process.execSync('npm start', { stdio: 'inherit' });
    } else {
      console.log('Waiting for Kafka topic to be created');
    }
  });
}, 1000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file will be used in the Docker container to ensure that the consumer doesn’t attempt to consume messages from the topic before the topic has been created. Every second, it checks whether the topic exists; once Kafka has started and the topic has been created, the subscriber will start. Last, create the Dockerfile in the “subscriber” folder with the following snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:12-alpine

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install
COPY . .

CMD [ "node", "wait-for-it.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The subscriber’s Dockerfile is the same as the publisher’s, with one difference: the command that starts the container runs the &lt;code&gt;wait-for-it.js&lt;/code&gt; file rather than the index. Save and close the Dockerfile.&lt;/p&gt;
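&lt;p&gt;The polling logic in &lt;code&gt;wait-for-it.js&lt;/code&gt; can be exercised without a running broker by stubbing out the kafka-node admin client. The sketch below is illustrative only: &lt;code&gt;waitForTopic&lt;/code&gt; and &lt;code&gt;fakeAdmin&lt;/code&gt; are names invented for this example, not part of the services’ code.&lt;/p&gt;

```javascript
// Re-creation of the wait-for-it.js polling loop, with the kafka-node
// Admin client replaced by a stub so the control flow can run standalone.
function waitForTopic(admin, topic, onReady, intervalMs) {
  const intervalId = setInterval(() => {
    admin.listTopics((err, res) => {
      // kafka-node returns topic metadata in the second element of the result
      if (res[1].metadata[topic]) {
        clearInterval(intervalId);
        onReady();
      } else {
        console.log('Waiting for Kafka topic to be created');
      }
    });
  }, intervalMs);
}

// Stub admin client: the topic "appears" on the third poll
let polls = 0;
const fakeAdmin = {
  listTopics(cb) {
    polls += 1;
    cb(null, [null, { metadata: polls >= 3 ? { 'example-topic': {} } : {} }]);
  },
};

waitForTopic(fakeAdmin, 'example-topic', () => console.log('Kafka topic created'), 10);
```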

&lt;h2&gt;
  
  
  The docker-compose file for the Kafka stack
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;docker-compose&lt;/code&gt; file is where the publisher, subscriber, Kafka, and Zookeeper services will be tied together. Zookeeper is a service that is used to synchronize Kafka nodes within a cluster. Zookeeper deserves a post all of its own, and because we only need one node in this tutorial I won’t be going in-depth on it here. In the root of the project alongside the “subscriber” and “publisher” folders, create a file called &lt;code&gt;docker-compose.yml&lt;/code&gt; and add this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  zookeeper:
    ports:
      - '50000:2181'
    image: jplock/zookeeper
  kafka:
    ports:
      - '50001:9092'
      - '50002:9093'
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENERS: 'INTERNAL://:9092'
      KAFKA_ADVERTISED_LISTENERS: 'INTERNAL://:9092'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'INTERNAL:PLAINTEXT'
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: '1'
      KAFKA_CREATE_TOPICS: 'example-topic:1:1'
      KAFKA_ADVERTISED_HOST_NAME: host.docker.internal # change to 172.17.0.1 if running on Ubuntu
    image: 'wurstmeister/kafka:2.12-2.4.0'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
  publisher:
    depends_on:
      - kafka
    environment:
      TOPIC: example-topic
      ENVIRONMENT: local
      INTERNAL_KAFKA_ADDR: 'kafka:9092'
    build:
      context: ./publisher
  subscriber:
    depends_on:
      - kafka
    environment:
      TOPIC: example-topic
      ENVIRONMENT: local
      INTERNAL_KAFKA_ADDR: 'kafka:9092'
    build:
      context: ./subscriber
volumes: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the services block of the &lt;code&gt;docker-compose&lt;/code&gt; file contains four keys, under which we define specific properties for each service. Below is a service-by-service walkthrough of what each property and its sub-properties are used for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zookeeper
&lt;/h3&gt;

&lt;p&gt;The ports property exposes Zookeeper on port 2181 inside the Docker network, where Kafka connects to it, and also maps it to port 50000 on the host machine. The image property instructs the Docker daemon to pull the latest version of the image &lt;a href="https://hub.docker.com/r/jplock/zookeeper"&gt;&lt;code&gt;jplock/zookeeper&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kafka
&lt;/h3&gt;

&lt;p&gt;The Kafka service block includes configuration that will be passed to Kafka running inside of the container, among other properties that will enable communication between the Kafka service and other containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt; – Kafka exposes itself on two ports internal to the Docker network, 9092 and 9093. It is also exposed to the host machine on ports 50001 and 50002.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;depends_on&lt;/code&gt; – Kafka depends on Zookeeper to run, so its key is included in the depends_on block to ensure that Docker will start Zookeeper before Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment&lt;/code&gt; – Kafka will pick up the environment variables in this block once the container starts. All configuration options except for &lt;code&gt;KAFKA_CREATE_TOPICS&lt;/code&gt; will be added to a Kafka broker config and applied on startup. The variable &lt;code&gt;KAFKA_CREATE_TOPICS&lt;/code&gt; is used by the Docker image itself, not Kafka, to make working with Kafka easier. Topics defined by this variable will be created when Kafka starts without any external instructions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt; – This field instructs the Docker daemon to pull version 2.12-2.4.0 of the image &lt;code&gt;wurstmeister/kafka&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;volumes&lt;/code&gt; – Mounting the Docker socket is required by the Docker image so that it can use the Docker CLI when starting Kafka locally.&lt;/li&gt;
&lt;/ul&gt;
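&lt;p&gt;The &lt;code&gt;KAFKA_CREATE_TOPICS&lt;/code&gt; value encodes one or more comma-separated &lt;code&gt;topic:partitions:replicas&lt;/code&gt; triples, which the image’s startup script parses itself. As a sketch of how that string decomposes (the parser below is illustrative, not the image’s actual implementation):&lt;/p&gt;

```javascript
// Decode a KAFKA_CREATE_TOPICS value such as 'example-topic:1:1' into
// { topic, partitions, replicas } records. Illustrative helper only.
function parseCreateTopics(value) {
  return value.split(',').map(entry => {
    const [topic, partitions, replicas] = entry.split(':');
    return { topic, partitions: Number(partitions), replicas: Number(replicas) };
  });
}

console.log(parseCreateTopics('example-topic:1:1'));
```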

&lt;h3&gt;
  
  
  Publisher
&lt;/h3&gt;

&lt;p&gt;Most configuration in the publisher block specifies how the publisher should communicate with Kafka. Note that the &lt;code&gt;depends_on&lt;/code&gt; property ensures that the publisher will start after Kafka.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;depends_on&lt;/code&gt; – The publisher service naturally depends on Kafka, so it’s included in the dependency array.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment&lt;/code&gt; – These variables are used by the code in the &lt;code&gt;index.js&lt;/code&gt; of the publisher.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TOPIC&lt;/code&gt; – This is the topic that messages will be published to. Note that it matches the topic that will be created by the Kafka container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ENVIRONMENT&lt;/code&gt; – This environment variable determines, inside the index file, which address the service uses to communicate with Kafka. It feeds the ternary expression that lets the same code handle both local and remote deployments.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;INTERNAL_KAFKA_ADDR&lt;/code&gt; – The publisher will connect to Kafka on this host and port.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build&lt;/code&gt; – The context property tells the Docker daemon where to locate the Dockerfile that describes how the service will be built and run, along with the supporting code and other files that will be used inside of the container.&lt;/li&gt;
&lt;/ul&gt;
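&lt;p&gt;To make the &lt;code&gt;ENVIRONMENT&lt;/code&gt; behavior concrete, here is the broker-address selection from the services’ &lt;code&gt;index.js&lt;/code&gt; isolated into a small helper. The function wrapper is illustrative; the real code inlines the ternary expression when constructing the &lt;code&gt;KafkaClient&lt;/code&gt;.&lt;/p&gt;

```javascript
// Broker-address selection used by both publisher and subscriber: a local
// run talks to Kafka over the compose network's internal address, any other
// environment uses the externally advertised address.
function kafkaHost(env) {
  return env.ENVIRONMENT === 'local'
    ? env.INTERNAL_KAFKA_ADDR
    : env.EXTERNAL_KAFKA_ADDR;
}

console.log(kafkaHost({
  ENVIRONMENT: 'local',
  INTERNAL_KAFKA_ADDR: 'kafka:9092',
  EXTERNAL_KAFKA_ADDR: 'example-kafka.kafka-example.svc.cluster.local:9093',
})); // prints kafka:9092
```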

&lt;h3&gt;
  
  
  Subscriber
&lt;/h3&gt;

&lt;p&gt;Most of the &lt;code&gt;docker-compose&lt;/code&gt; configuration for the subscriber service is identical to that of the publisher service. The one difference is that the context tells the Docker daemon to build from the “subscriber” directory, where its Dockerfile and supporting files were created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run the example stack
&lt;/h3&gt;

&lt;p&gt;Finally, the moment we’ve all been waiting for, running the services! All that’s needed now is to run the command below from the root directory of the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! Once all of the services start up and the Kafka topic is created, the output from the publisher and subscriber services will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publisher_1   | { 'example-topic': { '0': 0 } }
subscriber_1  | Kafka topic created
subscriber_1  |
subscriber_1  | &amp;gt; @architect-examples/event-subscriber@0.1.0 start /usr/src/app
subscriber_1  | &amp;gt; node index.js
subscriber_1  |
subscriber_1  | {
subscriber_1  |   topic: 'example-topic',
subscriber_1  |   value: 'example-topic_message_1610477237480',
subscriber_1  |   offset: 0,
subscriber_1  |   partition: 0,
subscriber_1  |   highWaterOffset: 1,
subscriber_1  |   key: null
subscriber_1  | }
subscriber_1  | {
subscriber_1  |   topic: 'example-topic',
subscriber_1  |   value: 'example-topic_message_1610477242483',
subscriber_1  |   offset: 1,
subscriber_1  |   partition: 0,
subscriber_1  |   highWaterOffset: 2,
subscriber_1  |   key: null
subscriber_1  | }
publisher_1   | { 'example-topic': { '0': 1 } }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New messages will continue to be published and consumed until the docker-compose process is stopped by pressing Ctrl/Cmd+C in the same terminal where it was started.&lt;/p&gt;
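&lt;p&gt;The message values in the output above follow the &lt;code&gt;${TOPIC}_message_${timestamp}&lt;/code&gt; scheme from the publisher’s &lt;code&gt;index.js&lt;/code&gt;. As a sketch, the parts can be recovered like this (the helper is illustrative and assumes topic names contain no underscores, as with &lt;code&gt;example-topic&lt;/code&gt;):&lt;/p&gt;

```javascript
// Split a published value such as 'example-topic_message_1610477237480'
// back into its topic, label, and millisecond timestamp.
// parseMessageValue is an illustrative helper, not part of the services.
function parseMessageValue(value) {
  const [topic, label, ts] = value.split('_');
  return { topic, label, timestamp: Number(ts) };
}

console.log(parseMessageValue('example-topic_message_1610477237480'));
```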

&lt;h2&gt;
  
  
  Run Kafka in the cloud on Kubernetes
&lt;/h2&gt;

&lt;p&gt;Running Kafka locally can be useful for testing and iterating, but where it’s most useful is, of course, the cloud. This section of the tutorial will guide you through deploying the same application that was just deployed locally to your Kubernetes cluster. Note that most services charge some amount of money by default for running a Kubernetes cluster, though you can occasionally get free credits when you sign up. For the most straightforward setup, you can run your Kubernetes cluster on DigitalOcean. To let the cluster pull the Docker images that you will be building, a Docker Hub account will be useful; there you can host multiple free repositories. The same code and Docker images from the previous part of the tutorial will be used.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build and push the images to Docker Hub
&lt;/h3&gt;

&lt;p&gt;For the Kubernetes cluster to pull the Docker images, they’ll need to be pushed to a registry in the cloud where they can be accessed. Docker Hub is the most frequently used cloud-hosted registry, and the images here will be made public for ease of use in this tutorial. To start, be sure that you have a Docker Hub account, then enter the following in a terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter your Docker Hub username (not email) and password when prompted. You should see the message &lt;code&gt;Login Succeeded&lt;/code&gt;, which indicates that you’ve successfully logged in to Docker Hub in the terminal. The next step is to push the images that will need to be used in the Kubernetes cluster. From the root of the project, navigate to the publisher directory and build and tag the publisher service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build . -t &amp;lt;your_docker_hub_username&amp;gt;/publisher:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your local machine now has a Docker image tagged as &lt;code&gt;&amp;lt;your_docker_hub_username&amp;gt;/publisher:latest&lt;/code&gt;, which can be pushed to the cloud. You might have also noticed that the build was faster than the first time the publisher was built. This is because Docker caches image layers locally, and if you didn’t change anything in the publisher service, it doesn’t need to be rebuilt completely. Now, push the tagged image with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;your_docker_hub_username&amp;gt;/publisher:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your custom image is now hosted publicly on the internet! Navigate to &lt;code&gt;https://hub.docker.com/repository/docker/&amp;lt;your_docker_hub_username&amp;gt;/publisher&lt;/code&gt; and log in if you’d like to view it. &lt;/p&gt;

&lt;p&gt;Now, navigate to the subscriber folder and do the same for the subscriber service with two similar commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build . -t &amp;lt;your_docker_hub_username&amp;gt;/subscriber:latest
docker push &amp;lt;your_docker_hub_username&amp;gt;/subscriber:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All of the images needed to run the stack on a Kubernetes cluster should now be available publicly. Fortunately, Kafka and Zookeeper didn’t need to be pushed anywhere, as the images are already public.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy the stack to Kubernetes
&lt;/h3&gt;

&lt;p&gt;Once you have a Kubernetes cluster created on DigitalOcean or wherever you prefer, and you’ve downloaded the cluster’s &lt;code&gt;kubeconfig&lt;/code&gt; or set your Kubernetes context, you’re ready to deploy the publisher, subscriber, Kafka, and Zookeeper. Be sure that the cluster also has the Kubernetes dashboard installed. On DigitalOcean, the dashboard comes preinstalled.&lt;/p&gt;

&lt;p&gt;Deploying to Kubernetes in the next steps will also require the Kubernetes CLI, &lt;code&gt;kubectl&lt;/code&gt;, to be installed on your local machine. Once the prerequisites are in place, the next steps will be creating and deploying Kubernetes manifests. These manifests will define a namespace, deployments, and services. In the root of the project, create a directory called “kubernetes” and navigate to it. For organization, all manifests will be created here. Start by creating a file called &lt;code&gt;namespace.yml&lt;/code&gt;. Within Kubernetes, the namespace will group all of the resources created in this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: kafka-example
  labels:
    name: kafka-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file. To create the namespace within the Kubernetes cluster, kubectl will be used. Run the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f namespace.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the namespace was created successfully, the message &lt;code&gt;namespace/kafka-example created&lt;/code&gt; will be printed to the console.&lt;/p&gt;

&lt;p&gt;Before deployments are created, Kubernetes services are required to allow traffic to the pods that others depend on. To do this, two services will be created. One will allow traffic to the Kafka pod on its exposed ports, 9092 and 9093, and the other will allow traffic to the Zookeeper pod on its exposed port, 2181. These will allow the publisher and subscriber to send traffic to Kafka and Kafka to send traffic to Zookeeper, respectively. Still in the “kubernetes” directory, start by creating a file called &lt;code&gt;kafka-service.yml&lt;/code&gt; with the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Service
apiVersion: v1
metadata:
  name: example-kafka
  namespace: kafka-example
  labels:
    app: example-kafka
spec:
  ports:
    - name: external
      protocol: TCP
      port: 9093
      targetPort: 9093
    - name: internal
      protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    app: example-kafka
  type: ClusterIP
  sessionAffinity: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the service in the cluster by running the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f kafka-service.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; should confirm that the service has been created. Now, create the other service by first creating a file called &lt;code&gt;zookeeper-service.yml&lt;/code&gt;. Add the following contents to that file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Service
apiVersion: v1
metadata:
  name: example-zookeeper
  namespace: kafka-example
  labels:
    app: example-zookeeper
spec:
  ports:
    - name: main
      protocol: TCP
      port: 2181
      targetPort: 2181
  selector:
    app: example-zookeeper
  type: ClusterIP
  sessionAffinity: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the service within the cluster with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f zookeeper-service.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, four deployments will need to be created inside the new namespace, one for each service. Start by creating a file called &lt;code&gt;zookeeper-deployment.yml&lt;/code&gt; and add the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-zookeeper
  namespace: kafka-example
  labels:
    app: example-zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-zookeeper
  template:
    metadata:
      labels:
        app: example-zookeeper
    spec:
      containers:
        - name: example-zookeeper
          image: jplock/zookeeper
          ports:
            - containerPort: 2181
              protocol: TCP
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      enableServiceLinks: true
  strategy:
    type: RollingUpdate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the contents and run the command below to create the deployment in the kafka-example namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f zookeeper-deployment.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the deployment has been created successfully, &lt;code&gt;deployment.apps/example-zookeeper created&lt;/code&gt; will be printed. The next step will be creating and deploying the manifest for Kafka. Create the file &lt;code&gt;kafka-deployment.yml&lt;/code&gt; and add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-kafka
  namespace: kafka-example
  labels:
    app: example-kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-kafka
  template:
    metadata:
      labels:
        app: example-kafka
    spec:
      containers:
        - name: example-kafka
          image: 'wurstmeister/kafka:2.12-2.4.0'
          ports:
            - containerPort: 9093
              protocol: TCP
            - containerPort: 9092
              protocol: TCP
          env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: INTERNAL://:9092,EXTERNAL://example-kafka.kafka-example.svc.cluster.local:9093
            - name: KAFKA_CREATE_TOPICS
              value: example-topic:1:1
            - name: KAFKA_INTER_BROKER_LISTENER_NAME
              value: INTERNAL
            - name: KAFKA_LISTENERS
              value: INTERNAL://:9092,EXTERNAL://:9093
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: '1'
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: example-zookeeper.kafka-example.svc.cluster.local:2181
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      enableServiceLinks: true
  strategy:
    type: RollingUpdate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file. Similar to the Zookeeper deployment, run the command below in a terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f kafka-deployment.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;deployment.apps/example-kafka created&lt;/code&gt; should have been printed to the console. The last two deployments to be created will be the publisher and subscriber services. Create &lt;code&gt;publisher-deployment.yml&lt;/code&gt; with the following contents, being sure to replace &lt;code&gt;&amp;lt;your_docker_hub_username&amp;gt;&lt;/code&gt; with your own username:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-publisher
  namespace: kafka-example
  labels:
    app: example-publisher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-publisher
  template:
    metadata:
      labels:
        app: example-publisher
    spec:
      containers:
        - name: example-publisher
          image: '&amp;lt;your_docker_hub_username&amp;gt;/publisher:latest'
          imagePullPolicy: Always
          env:
            - name: ENVIRONMENT
              value: prod
            - name: EXTERNAL_KAFKA_ADDR
              value: example-kafka.kafka-example.svc.cluster.local:9093
            - name: TOPIC
              value: example-topic
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      enableServiceLinks: true
  strategy:
    type: RollingUpdate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;kubectl create -f publisher-deployment.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;&lt;/code&gt; to create the deployment for the publisher and make sure that &lt;code&gt;kubectl&lt;/code&gt; prints a message letting you know that it’s been created. The last deployment to create is the subscriber, which will be created in the same way as the other services. Create the file &lt;code&gt;subscriber-deployment.yml&lt;/code&gt; and add the following configuration, being sure to replace &lt;code&gt;&amp;lt;your_docker_hub_username&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-subscriber
  namespace: kafka-example
  labels:
    app: example-subscriber
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-subscriber
  template:
    metadata:
      labels:
        app: example-subscriber
    spec:
      containers:
        - name: example-subscriber
          image: '&amp;lt;your_docker_hub_username&amp;gt;/subscriber:latest'
          imagePullPolicy: Always
          env:
            - name: ENVIRONMENT
              value: prod
            - name: EXTERNAL_KAFKA_ADDR
              value: example-kafka.kafka-example.svc.cluster.local:9093
            - name: TOPIC
              value: example-topic
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      enableServiceLinks: true
  strategy:
    type: RollingUpdate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the last of the deployments, create the subscriber by running &lt;code&gt;kubectl create -f subscriber-deployment.yml --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;&lt;/code&gt;. If you now navigate to the Kubernetes dashboard for your cluster, you should see that all four deployments have been created, which have in turn created four pods. Each pod runs the container referred to by the image field in its respective deployment.&lt;/p&gt;
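&lt;p&gt;The &lt;code&gt;EXTERNAL_KAFKA_ADDR&lt;/code&gt; value in the publisher and subscriber manifests is simply the cluster-internal DNS name of the &lt;code&gt;example-kafka&lt;/code&gt; service created earlier, on its external port. Kubernetes names every service &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local&lt;/code&gt; inside the cluster; the pattern can be sketched as follows (the helper name is illustrative):&lt;/p&gt;

```javascript
// Build the cluster-internal address for a Kubernetes Service.
// Every Service is resolvable as <service>.<namespace>.svc.cluster.local
// from pods inside the cluster.
function serviceAddr(service, namespace, port) {
  return `${service}.${namespace}.svc.cluster.local:${port}`;
}

console.log(serviceAddr('example-kafka', 'kafka-example', 9093));
// prints example-kafka.kafka-example.svc.cluster.local:9093
```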

&lt;p&gt;Wait for a success message to print to the console. Now that all of the required services and deployments are created, feel free to navigate to the Kubernetes dashboard to view the running pods. Navigate to the running &lt;code&gt;example-subscriber&lt;/code&gt; pod and view the logs to see that it’s consuming messages from the topic.&lt;/p&gt;

&lt;p&gt;If you’re satisfied with your work and want to destroy all of the Kubernetes resources that you just created, use the following command to clean up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete namespace kafka-example --kubeconfig=&amp;lt;kubeconfig_file_for_your_cluster&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whew! That was a little complicated and took quite a few commands and files to run. What if everything that was done could be compressed into a single, short file? What if the entire stack could be created in Kubernetes with a single command? Continue to find out how easy deploying a Kafka-centric stack both locally and on Kubernetes can be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run Kafka locally with Architect
&lt;/h2&gt;

&lt;p&gt;The Architect platform can dramatically simplify deployments of any architecture to both local and cloud environments. Just define a component in a single file representing the services that should be deployed, and that component can be deployed anywhere. The Kafka example that you just ran locally can be defined as an Architect component as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: examples/kafka
homepage: https://github.com/architect-team/architect-cli/tree/master/examples/kafka

services:
  zookeeper:
    image: jplock/zookeeper
    interfaces:
      main: 2181
  kafka:
    image: wurstmeister/kafka:2.12-2.4.0
    interfaces:
      internal: 9092
      external: 9093
    environment:
      KAFKA_ZOOKEEPER_CONNECT:
        ${{ services.zookeeper.interfaces.main.host }}:${{ services.zookeeper.interfaces.main.port
        }}
      KAFKA_LISTENERS:
        INTERNAL://:${{ services.kafka.interfaces.internal.port }},EXTERNAL://:${{
        services.kafka.interfaces.external.port }}
      KAFKA_ADVERTISED_LISTENERS:
        INTERNAL://:9092,EXTERNAL://${{ services.kafka.interfaces.external.host }}:${{
        services.kafka.interfaces.external.port }}
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: architect:1:1
    debug:
      volumes:
        docker:
          mount_path: /var/run/docker.sock
          host_path: /var/run/docker.sock
      environment:
        KAFKA_ADVERTISED_HOST_NAME: host.docker.internal # change to 172.17.0.1 if running on Ubuntu
        KAFKA_LISTENERS: INTERNAL://:9092
        KAFKA_ADVERTISED_LISTENERS: INTERNAL://:9092
        KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT
  publisher:
    build:
      context: ./publisher/
    interfaces:
    environment:
      EXTERNAL_KAFKA_ADDR:
        ${{ services.kafka.interfaces.external.host }}:${{ services.kafka.interfaces.external.port
        }}
      TOPIC: architect
      ENVIRONMENT: prod
    debug:
      environment:
        INTERNAL_KAFKA_ADDR:
          ${{ services.kafka.interfaces.internal.host }}:${{ services.kafka.interfaces.internal.port
          }}
        ENVIRONMENT: local
  subscriber:
    build:
      context: ./subscriber/
    interfaces:
    environment:
      EXTERNAL_KAFKA_ADDR:
        ${{ services.kafka.interfaces.external.host }}:${{ services.kafka.interfaces.external.port
        }}
      TOPIC: architect
      ENVIRONMENT: prod
    debug:
      environment:
        INTERNAL_KAFKA_ADDR:
          ${{ services.kafka.interfaces.internal.host }}:${{ services.kafka.interfaces.internal.port
          }}
        ENVIRONMENT: local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
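
&lt;p&gt;To spin the whole stack up locally, all that’s left is a single CLI command. The exact invocation below is an assumption based on the Architect CLI and may vary by version, so check the Architect docs if it doesn’t match yours:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;architect dev architect.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;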



&lt;p&gt;The same information should be printed to the console as when the stack was run directly with &lt;code&gt;docker-compose&lt;/code&gt;. When you’re ready, press Ctrl/Cmd+C to stop the running application. As mentioned before, an Architect component can be deployed both locally and to any cloud environment. Simply &lt;a href="https://cloud.architect.io/examples/components/kafka/deploy?tag=latest"&gt;hit this link to deploy the Kafka example component to Architect’s hosted cloud service&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A few clicks, and that’s it! The same stack that could be run locally is running in a Kubernetes cluster in the cloud. If you would like to explore more, feel free to register your own cluster as a platform with the Architect Cloud!&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more about safe, fast deployments with Docker and Architect
&lt;/h2&gt;

&lt;p&gt;Kafka is a powerful yet complicated application that takes careful configuration to get running properly. Fortunately, there are a few robust tools like &lt;code&gt;docker-compose&lt;/code&gt; and Architect to enable smooth deployments locally and in the cloud. If you’d like to understand more about how Architect can help you expedite both local and remote deployments, check out the &lt;a href="https://www.architect.io/docs/"&gt;docs&lt;/a&gt; and &lt;a href="https://cloud.architect.io/signup"&gt;sign up&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;For more reading, check out some of our other tutorials!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/rabbitmq-docker-tutorial"&gt;Implement RabbitMQ on Docker in 20 Minutes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/django-docker-tutorial"&gt;Deploy your Django app with Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/gitops-developers-guide"&gt;A Developer’s Guide to GitOps&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any questions or comments, don’t hesitate to reach out to the team on Twitter &lt;a href="https://twitter.com/architect_team"&gt;@architect_team&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>node</category>
      <category>programming</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Developer's Guide to GitOps</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Wed, 20 Apr 2022 21:53:05 +0000</pubDate>
      <link>https://dev.to/architectio/a-developers-guide-to-gitops-4ije</link>
      <guid>https://dev.to/architectio/a-developers-guide-to-gitops-4ije</guid>
      <description>&lt;p&gt;One of a modern DevOps team’s driving objectives is to help developers deploy features as quickly and safely as possible. This means creating tools and processes that do everything from provisioning private developer environments to deploying and securing production workloads. This effort is a constant balance between enabling developers to move quickly and ensuring that their haste doesn’t lead to critical outages. Fortunately, both speed and stability improve tremendously whenever automation, like GitOps, is introduced.&lt;/p&gt;

&lt;p&gt;As you might have guessed from that lead-up, GitOps is a tactic for automating DevOps. More specifically, however, it’s an automation tactic that hooks into a critical tool that already exists in developers’ everyday workflow, Git. Since developers are already committing code to a centralized Git repo (often hosted by tools like GitHub, GitLab, or BitBucket), DevOps engineers can wire up any of their operational scripts, like those used to build, test, or deploy applications, to kick off every time developers commit code changes. This means developers get to work exclusively with Git, and everything that helps them get their code to production will be automated behind the scenes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why GitOps?
&lt;/h2&gt;

&lt;p&gt;In years past, DevOps and CI/CD practices were a set of proprietary scripts and tools that executed everyday tasks like running tests, provisioning infrastructure, or deploying an application. However, the availability of new infrastructure tools like Kubernetes combined with the proliferation of microservice architectures have enabled and ultimately &lt;em&gt;demanded&lt;/em&gt; that developers get more involved in CI/CD processes.&lt;/p&gt;

&lt;p&gt;This &lt;em&gt;shift left&lt;/em&gt; magnified the problems seen with custom scripting and manual execution, leading to confusing, inconsistent processes, duplication of effort, and a drastic reduction in development velocity. To take advantage of cloud-native tools and architectures, teams need a consistent, automated approach to CI/CD that enables developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop building and maintaining proprietary scripts and instead use a universal process&lt;/li&gt;
&lt;li&gt;Create apps and services faster by using said universal deploy process&lt;/li&gt;
&lt;li&gt;Onboard more quickly by deploying every time they make code changes&lt;/li&gt;
&lt;li&gt;Deploy automatically to make releases faster, more frequent, and more reliable&lt;/li&gt;
&lt;li&gt;Rollback and pass compliance audits with declarative design patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Developers love GitOps
&lt;/h2&gt;

&lt;p&gt;For all the reasons cited above (and more), businesses need manageable and automatable approaches to CI/CD and DevOps to succeed in building and maintaining cloud-native applications. However, if automation is all that’s needed, why GitOps over other strategies (e.g., SlackOps, scheduled deployments, or simple scripts)? The answer is simple: developers love GitOps.&lt;/p&gt;

&lt;h3&gt;
  
  
  One tool to rule them all, Git
&lt;/h3&gt;

&lt;p&gt;It’s become apparent in the last few years that GitOps is among developers’ most highly rated strategies for automating DevOps, and it’s not hard to see why. Developers live in Git. They save temporary changes to Git, collaborate using Git, peer-review code using Git, and store a history and audit trail of all the changes everyone has ever made in Git. The pipelining strategy described above was tailor-made for Git. Since developers already rely on Git so heavily, these processes are, in turn, tailor-made for developers. Developers recognize this and are more than happy to reduce the tools and processes they need to use and follow to do their jobs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Declared alongside code
&lt;/h3&gt;

&lt;p&gt;Beyond just the intuitive, git-backed execution flow, another part of modern CI tools and GitOps that developers love is the declarative design. The previous generation of CI tools had configurations that lived inside private instances of the tools. If you didn’t have access to the tools, you didn’t know what the pipelines did, if they were wrong or right, how or when they executed, or how to change them if needed. It was just a magic black box and hard for developers to trust as a result.&lt;/p&gt;

&lt;p&gt;In modern CI systems, like the ones most commonly used to power GitOps like &lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt;, &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;, &lt;a href="https://about.gitlab.com/stages-devops-lifecycle/continuous-integration/" rel="noopener noreferrer"&gt;GitLab CI&lt;/a&gt;, etc., the configurations powering the pipelines live directly in the Git repository. Just like the source code for the application, these configurations are version controlled and visible to every developer working on the project. Not only can they see what the pipeline process is, but they can also quickly and easily make changes to it as needed. This ease of access for developers is critical since developers both write the tests for their applications and ensure that those applications are safe and stable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completely self-service
&lt;/h3&gt;

&lt;p&gt;New features or bug fixes aren’t considered complete until they land in production. This means that anything standing in the way of getting code changes to production is eating up developer time and mental energy, even when the feature, as far as the developer is concerned, “works on my machine.” If developers have to wait, even for a few minutes, for a different team or individual to complete some task before they can close out their work, it creates both friction and animosity in the organization.&lt;/p&gt;

&lt;p&gt;Alleviating this back and forth between teams is one of the main benefits of DevOps automation tactics like GitOps. Not only do developers get to work in a familiar tool, but the ability to have their code make its way to production without manual intervention means they are never waiting on someone else before they can complete their tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous everything
&lt;/h3&gt;

&lt;p&gt;Yet another big perk of GitOps is that all the processes are running continuously! Every change we make triggers tests, builds, and deployments without ANY manual steps required. Since developers would use Git with or without GitOps, hooking into their existing workflow to trigger DevOps processes is the perfect place to kick off automated events. Until developers stop using Git, GitOps will remain the ideal way to instrument automated DevOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps in practice
&lt;/h2&gt;

&lt;p&gt;Naturally, the involvement of developers in the process has led teams to explore the use of developer-friendly tools like Git, but the use of Git as a source of truth for DevOps processes also creates a natural consistency to the shape of CI/CD pipeline stages. There are only so many hooks available in a Git repository after all (e.g., commits, pull requests open/closed, merges, etc.), so the look and feel of most GitOps implementations include a set of typical stages:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqs7gvx4gum10pk03pxc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqs7gvx4gum10pk03pxc0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Pull requests, tests, and preview environments
&lt;/h3&gt;

&lt;p&gt;After developers have spent time writing the code for their new feature, they generally commit that code to a new Git branch and submit a &lt;a href="https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/about-pull-requests" rel="noopener noreferrer"&gt;pull request&lt;/a&gt; or &lt;a href="https://docs.gitlab.com/ee/user/project/merge_requests/getting_started.html" rel="noopener noreferrer"&gt;merge request&lt;/a&gt; back to the mainline branch of the repository. This is something developers already do daily to prompt engineering managers to review the code changes and approve them to be merged into the main application code. Since developers already follow this kind of process for their daily collaboration efforts, it’s a perfect opportunity for DevOps to wire up additional tasks.&lt;/p&gt;

&lt;p&gt;By hooking into the open/close events created by this pull request process using a continuous integration (CI) tool, DevOps teams can trigger the execution of unit tests, creation of preview environments, and execution of integration tests against that new preview environment. Instrumentation of these steps allows engineering managers to establish trust in the code changes quickly and allows product managers to see the code changes via the preview environment before merging. Faster trust development means faster merges, and earlier input from product managers means easier changes without complicated and messy rollbacks. This GitOps hook is a key enabler for faster and healthier product and engineering teams alike.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Merge to master and deploy to staging
&lt;/h3&gt;

&lt;p&gt;Once all parties have reviewed the changes, the code can be merged into the mainline branch of the repository alongside changes from the rest of the engineering team. This mainline branch is often used as a staging ground for code that is almost ready to go to production, and as such, it’s another ideal time for us to run some operational tasks like tests and deployment. While we tested the code for each pull request before it was merged, we’ll want to rerun tests to ensure that code works with the other changes contributed by peer team members. We’ll also want to deploy all these changes to a shared environment (aka “staging”) that the entire team can use to view and test the latest changes before they are released to customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Cut releases and deploy to production
&lt;/h3&gt;

&lt;p&gt;Finally, after product and engineering have had time to review and test the latest changes to the mainline branch, teams are ready to cut a release and deploy to production! This is often a task performed by a release manager – a dedicated (or rotating) team member tasked with executing the deploy scripts and monitoring the release to ensure that nothing goes wrong in transit. Without GitOps, this team member would have to know where the proper scripts are, in what order to execute them, and would need to ensure their computer has all the correct libraries and packages required to power the scripts.&lt;/p&gt;

&lt;p&gt;Thanks to GitOps, we can wire up this deployment to happen on another Git-based event – creating a &lt;a href="https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/about-releases" rel="noopener noreferrer"&gt;release&lt;/a&gt; or tag. All a release manager would have to do is create a new “release,” often using semver for naming, and the tasks to build and deploy the code changes would be kicked off automatically. Like most tasks executed by a CI tool, these would be configured with the location and order of the scripts, as well as the libraries and packages needed to execute them.&lt;/p&gt;
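
&lt;p&gt;The three stages above map neatly onto Git-event triggers in any modern CI system. As a rough sketch, a single GitHub Actions workflow wiring up all three hooks could look like this (the job names and script paths are illustrative, not from a real project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  pull_request:
  push:
    branches: [main]
  release:
    types: [published]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/run-tests.sh          # every pull request and merge
  deploy-staging:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh staging     # merges to main deploy to staging
  deploy-production:
    if: github.event_name == 'release'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh production  # new releases deploy to production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;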

&lt;h2&gt;
  
  
  GitOps tooling
&lt;/h2&gt;

&lt;p&gt;A solid and intuitive continuous integration tool isn’t the only thing needed to instrument GitOps processes like those described in this article. The CI system can activate scripts based on Git events, but you still need strong tools to power those scripts and ensure they can be run and maintained easily and safely. Deploying code changes (aka continuous delivery, or CD) is one of the most challenging steps to automate, so we’ve curated a few tooling categories that can help you through your GitOps journey:&lt;/p&gt;

&lt;h3&gt;
  
  
  Containerization with Docker
&lt;/h3&gt;

&lt;p&gt;Docker launched cloud development into an entirely new, distributed landscape and helped developers begin to realistically consider microservice architectures as a viable option. Part of what made Docker so powerful was how developer-friendly it is compared to the previous generation of virtualization solutions. Just like the declarative CI configurations that live inside our repositories, developers simply have to write and maintain a Dockerfile in their repository to enable automated builds of deployable container images. Containerization is an enormously powerful tactic for cloud-native teams and should be a staple tool in your repertoire.&lt;/p&gt;
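
&lt;p&gt;To make that concrete, here’s a minimal Dockerfile sketch for a Node.js service (the base image and commands are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:14

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install
COPY . .

CMD [ "npm", "start" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;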

&lt;h3&gt;
  
  
  Infrastructure-as-code (IaC)
&lt;/h3&gt;

&lt;p&gt;A lot goes into provisioning infrastructure and deploying applications that isn’t captured by a Dockerfile. For everything else, there are infrastructure-as-code (IaC) solutions like &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;CloudFormation&lt;/a&gt;, and others. These solutions allow developers to describe the other bits of an application, like Kubernetes resources, load balancers, networking, security, and more, in a declarative way. Just like the CI configs and Dockerfiles described earlier, IaC templates can be version controlled and collaborated on by all the developers on your team.&lt;/p&gt;
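
&lt;p&gt;For example, a few declarative lines of Terraform are enough to describe a Kubernetes namespace; Terraform then creates, updates, or deletes the resource to match the declaration (the names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "example" {
  metadata {
    name = "my-app"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;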

&lt;h3&gt;
  
  
  DevOps automation tools like Architect
&lt;/h3&gt;

&lt;p&gt;I really can’t talk about DevOps automation without talking about Architect. We love IaC and use it heavily as part of our product. We found that configuring deployments, networking, and network security, especially for microservice architectures, can be demanding on the developers who should be focused on new product features instead of infrastructure.&lt;/p&gt;

&lt;p&gt;Instead of writing IaC templates and CI pipelines, which require developers to learn about Kubernetes, Cilium, API gateways, managed databases, or other infrastructure solutions, just have them write an &lt;code&gt;architect.yml&lt;/code&gt; file. We’ll automatically deploy dependent APIs/databases and securely broker connectivity to them every time someone runs &lt;code&gt;architect deploy&lt;/code&gt;. Our process can automatically spin up private developer environments, automated preview environments, and even production-grade cloud environments with just a single command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more about DevOps, GitOps, and Architect!
&lt;/h2&gt;

&lt;p&gt;At Architect, our mission is to help ops and engineering teams simply and efficiently collaborate and achieve deployment, networking, and security automation all at once. Ready to learn more? Check out these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/creating-microservices-nestjs" rel="noopener noreferrer"&gt;Creating Microservices: Nest.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/the-importance-of-portability" rel="noopener noreferrer"&gt;The Importance of Portability in Technology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/docs" rel="noopener noreferrer"&gt;Our Product Docs!&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or &lt;a href="https://cloud.architect.io/signup" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; and try Architect yourself today!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>microservices</category>
      <category>node</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Implement RabbitMQ on Docker in 20 Minutes</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Mon, 18 Apr 2022 17:37:09 +0000</pubDate>
      <link>https://dev.to/architectio/implement-rabbitmq-on-docker-in-20-minutes-d5g</link>
      <guid>https://dev.to/architectio/implement-rabbitmq-on-docker-in-20-minutes-d5g</guid>
      <description>&lt;p&gt;Here at Architect.io, it’s no secret that we love portable microservices. And what better way to make your services portable than by decoupling their interactions?&lt;/p&gt;

&lt;p&gt;Today we talk about decoupling your services using a classic communication pattern: the message queue. In this tutorial, we’ll show you how to get our favorite open source message broker, RabbitMQ, up and running in just 20 minutes. Then we’ll use Architect.io to deploy the stack to both your local and remote environments.&lt;/p&gt;

&lt;p&gt;If you’re following along with this tutorial at home, you’ll need a few pre-reqs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-docker/"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/en/"&gt;Node&lt;/a&gt; &amp;gt;= 8.2.0&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; &amp;gt;= 3&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.architect.io/signup"&gt;Architect.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  RabbitMQ Instance with Docker
&lt;/h2&gt;

&lt;p&gt;First, let’s pull the RabbitMQ Docker image. We’ll use the &lt;code&gt;3-management&lt;/code&gt; version, so we get the Management plugin pre-installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull rabbitmq:3-management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s stand it up. We’ll map port &lt;code&gt;15672&lt;/code&gt; for the management web app and port &lt;code&gt;5672&lt;/code&gt; for the message broker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -p 15672:15672 -p 5672:5672 rabbitmq:3-management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming that ran successfully, you’ve got an instance of RabbitMQ running! Bounce over to &lt;a href="http://localhost:15672/"&gt;http://localhost:15672&lt;/a&gt; to check out the management web app.&lt;/p&gt;

&lt;p&gt;Log in using the default username (&lt;code&gt;guest&lt;/code&gt;) and password (&lt;code&gt;guest&lt;/code&gt;) and explore the management app a little bit. Here you can see an overview of your RabbitMQ instance and the message broker’s basic components: Connections, Channels, Exchanges, and Queues.&lt;/p&gt;
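
&lt;p&gt;The Management plugin also exposes an HTTP API on the same port, so you can fetch the same overview data from the command line using the default credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -u guest:guest http://localhost:15672/api/overview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;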

&lt;h2&gt;
  
  
  Send Messages to RabbitMQ from a Producer
&lt;/h2&gt;

&lt;p&gt;RabbitMQ is only interesting if we can send messages, so let’s create an example publisher to push messages to RabbitMQ. In a new session (keep RabbitMQ running), we’ll use the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p rabbitmq/rabbitmq-producer
mkdir -p rabbitmq/rabbitmq-consumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our publisher will be a simple Node.js Express web application. Use the Express app generator to bootstrap a simple Express app. We’ll use the &lt;code&gt;amqplib&lt;/code&gt; Node library, the recommended AMQP client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbitmq-producer
npx express-generator
npm install
npm install amqplib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The plan is to add a route that accepts requests to &lt;code&gt;POST /message&lt;/code&gt; with a body that looks something like this: &lt;code&gt;{'message': 'my message'}&lt;/code&gt;. That route will publish each message it receives to our RabbitMQ instance.&lt;/p&gt;

&lt;p&gt;First, create a new file called &lt;code&gt;message.js&lt;/code&gt; next to &lt;code&gt;index.js&lt;/code&gt; in the &lt;code&gt;routes&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Import &lt;code&gt;amqplib&lt;/code&gt;, set the URL to the location of the RabbitMQ instance, and give our queue a name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var express = require('express');
var router = express.Router();

var amqp = require('amqplib/callback_api');

const url = 'amqp://localhost';
const queue = 'my-queue';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, initialize the connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let channel = null;
amqp.connect(url, function (err, conn) {
  if (!conn) {
    throw new Error(`AMQP connection not available on ${url}`);
  }
  conn.createChannel(function (err, ch) {
    channel = ch;
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t forget to add an exit handler to close the channel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;process.on('exit', code =&amp;gt; {
  channel.close();
  console.log(`Closing`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for the meat. We’ll add a route that receives the message, converts the &lt;code&gt;body.message&lt;/code&gt; string into a Buffer, and sends it to the queue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;router.post('/', function (req, res, next) {
  channel.sendToQueue(queue, Buffer.from(req.body.message));
  res.render('index', { response: `Successfully sent: ${req.body.message}` });
});

module.exports = router;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Last, we’ll need to register our new route in &lt;code&gt;app.js&lt;/code&gt;. We’ll put it underneath the existing index route, and we’ll nest our routes under &lt;code&gt;/message&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var messagesRouter = require('./routes/message');

app.use('/', indexRouter);
app.use('/message', messagesRouter);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll also add a simple HTML form to our view to post messages to the &lt;code&gt;/message&lt;/code&gt; endpoint. The views are Pug templates, so replace &lt;code&gt;views/index.pug&lt;/code&gt; with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extends layout

block content
  h1= 'Example RabbitMQ Producer'

  div
    form(action='/message',method='post')
      div.input
          span.label Message:&amp;amp;emsp;
          input(type="text", name="message")
          span.actions &amp;amp;emsp;
            input(type="submit", value="Send")

  div
    p= response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should do it! &lt;a href="https://github.com/architect-team/architect-cli/tree/tutorials/rabbit-mq-1/examples/rabbitmq/rabbit-producer"&gt;See the full Express app source code here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now we can run the sample producer app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we can visit it here: &lt;a href="http://localhost:3000/"&gt;http://localhost:3000&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Try sending a message from the browser. It should send an HTTP &lt;code&gt;POST&lt;/code&gt; request to the Express app, which will subsequently stick it on the queue. If you navigate to the RabbitMQ management app at &lt;a href="http://localhost:15672/#/"&gt;http://localhost:15672&lt;/a&gt;, you should see the traffic coming through.&lt;/p&gt;
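
&lt;p&gt;If you’d rather test from the command line, you can post a form-encoded message with &lt;code&gt;curl&lt;/code&gt; instead of using the browser (this assumes the Express generator’s default URL-encoded body parser is in place):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST http://localhost:3000/message -d 'message=hello from curl'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;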

&lt;h2&gt;
  
  
  Receive Messages from RabbitMQ with a Consumer
&lt;/h2&gt;

&lt;p&gt;Now that we’re publishing messages, let’s see if we can receive them with a consumer application.&lt;/p&gt;

&lt;p&gt;We’ll use Python for our sample consumer application. This illustrates the flexibility that a message queue introduces to our stack. While we wouldn’t suggest a multi-language stack just for the hell of it, using AMQP for inter-process communication does give us the polyglot option in a pinch!&lt;/p&gt;

&lt;p&gt;In a new session (keep RabbitMQ and the producer running):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbitmq-consumer
touch consumer.py
touch requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll use &lt;code&gt;pika&lt;/code&gt;, a recommended AMQP Python client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install 'pika==1.1.0'
echo 'pika == 1.1.0' &amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;consumer.py&lt;/code&gt;, first import &lt;code&gt;pika&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pika
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like we did in the producer app, set the AMQP host and the queue name that our app will listen on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;host = 'localhost'
queue = 'my-queue'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, define a method for handling the messages coming off the queue. For the sake of this tutorial, our app will simply log all received messages to &lt;code&gt;stdout&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def on_message(ch, method, properties, body):
    message = body.decode('UTF-8')
    print(message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s create a &lt;code&gt;main()&lt;/code&gt; function for the core logic. Here we create the connection and ensure the queue exists. Last, we pass in the message handler and start consuming.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def main():
    connection_params = pika.ConnectionParameters(host=host)
    connection = pika.BlockingConnection(connection_params)
    channel = connection.channel()

    channel.queue_declare(queue=queue)

    channel.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)

    print('Subscribed to ' + queue + ', waiting for messages...')
    channel.start_consuming()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, call the &lt;code&gt;main()&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == '__main__':
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the full Python consumer app &lt;a href="https://github.com/architect-team/architect-cli/blob/tutorials/rabbit-mq-1/examples/rabbitmq/rabbit-consumer/consumer.py"&gt;source code here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now run the consumer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python consumer.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we navigate back to our &lt;a href="http://localhost:3000/"&gt;producer webapp&lt;/a&gt;, we can publish a message. The browser app posts the message to our Node Express server, which publishes the message to RabbitMQ. If you’re watching the logs in our Python command line consumer app, you should see the message come across. Works like a charm!&lt;/p&gt;

&lt;p&gt;Queues are a nifty decoupling mechanism. We’ve got a Node app and a Python app seamlessly communicating! Even better: neither one depends on the uptime of the other. Try killing the consumer application and then publish a message from the producer. Now restart the Python consumer. You should see the message come through! No traffic was lost while the consumer was down.&lt;/p&gt;
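&lt;p&gt;The decoupling described above can be sketched with a plain in-memory queue. This toy stand-in is not the pika or amqplib API, just an illustration of why no traffic is lost while the consumer is down:&lt;/p&gt;

```python
from collections import deque

# Toy stand-in for the RabbitMQ queue -- NOT the pika API, just the concept.
broker = deque()

def publish(message):
    # The producer only talks to the broker, never to the consumer.
    broker.append(message)

def consume_all(handler):
    # Drain whatever accumulated, whether or not a consumer was listening.
    while broker:
        handler(broker.popleft())

# Consumer is "down": messages simply accumulate in the queue.
publish("hello")
publish("world")

# Consumer "restarts" and receives everything sent while it was offline.
received = []
consume_all(received.append)
print(received)  # ['hello', 'world']
```

&lt;p&gt;RabbitMQ plays the role of &lt;code&gt;broker&lt;/code&gt; here, with the added benefits of persistence, acknowledgements, and network transport.&lt;/p&gt;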

&lt;h2&gt;
  
  
  Deploy RabbitMQ with Docker
&lt;/h2&gt;

&lt;p&gt;We already have three components in our stack: the web app producer, the message broker, and the command line consumer. How are we going to deploy this stack into a remote or production environment?&lt;/p&gt;

&lt;p&gt;Now, let’s make these applications a little more portable using Docker. RabbitMQ already ships as a Docker image, so we’ll only need to create Dockerfiles for the Express server and the Python consumer.&lt;/p&gt;

&lt;p&gt;First, the producer Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbit-producer
touch Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:14

WORKDIR /usr/src/app

## Copying package*.json and running npm install before copying the rest of the directory saves build time via layer caching
COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD [ "npm", "start" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adding a &lt;code&gt;.dockerignore&lt;/code&gt; file next to the producer &lt;code&gt;Dockerfile&lt;/code&gt; will save us some build time here too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add the consumer Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbit-consumer
touch Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3

WORKDIR /usr/src/app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD [ "python", "-u", "consumer.py" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s fire them all up locally. First, in one session, we’ll run RabbitMQ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -p 15672:15672 -p 5672:5672 rabbitmq:3-management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, open up another session and run the producer. We’ll map port 3000, so we can access our Express app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbit-producer
docker build -t rabbit-producer .
docker run -it --rm -p 3000:3000 rabbit-producer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Uh oh… that crashed. Looks like we got an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: AMQP connection not available on amqp://localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes sense: &lt;code&gt;localhost&lt;/code&gt; no longer resolves to RabbitMQ now that we’re running the producer inside a Docker container. The RabbitMQ instance is running in a different container! We might make this work by hardcoding the URL to &lt;code&gt;host.docker.internal&lt;/code&gt;, but that is just as brittle: now we can’t run the producer without Docker. Further, once we try to deploy this remotely, we’ll need to change the URL again.&lt;/p&gt;

&lt;p&gt;This sounds like a good candidate for an environment variable! So let’s factor it out. While we’re at it, let’s do the same for the queue name since this is liable to change across environments as well.&lt;/p&gt;

&lt;p&gt;In our Node.js producer &lt;code&gt;message.js&lt;/code&gt;, we’ll change the URL and queue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const url = `amqp://${process.env.AMQP_HOST}`;
const queue = process.env.QUEUE_NAME;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Likewise, in our Python &lt;code&gt;consumer.py&lt;/code&gt;, we’ll do the same (you’ll need to &lt;code&gt;import os&lt;/code&gt; at the top of the file).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;host = os.environ.get('AMQP_HOST')
queue = os.environ.get('QUEUE_NAME')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
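&lt;p&gt;If you want local, non-Docker runs to keep working without exporting variables, you could also fall back to defaults when the variables are unset. A minimal sketch, assuming the same variable names as above (the &lt;code&gt;amqp_config&lt;/code&gt; helper and its defaults are illustrative, not part of the tutorial’s source):&lt;/p&gt;

```python
import os

def amqp_config():
    # Same variable names as the snippets above; the localhost defaults are
    # an assumption that keeps bare-metal runs working unchanged.
    host = os.environ.get('AMQP_HOST', 'localhost')
    queue = os.environ.get('QUEUE_NAME', 'my-queue')
    return host, queue

# Nothing set: falls back to the local defaults.
os.environ.pop('AMQP_HOST', None)
os.environ.pop('QUEUE_NAME', None)
print(amqp_config())  # ('localhost', 'my-queue')

# Set as with `docker run -e AMQP_HOST=host.docker.internal ...`
os.environ['AMQP_HOST'] = 'host.docker.internal'
print(amqp_config()[0])  # host.docker.internal
```

&lt;p&gt;Either way, the connection details now live outside the code, which is exactly what lets the same image run locally, in Docker, and in a remote cluster.&lt;/p&gt;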



&lt;p&gt;Be sure the &lt;code&gt;rabbitmq&lt;/code&gt; Docker container is still running in another shell. Now, in a new shell, we should be able to run the producer and pass in the environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbit-producer
docker build -t rabbit-producer .
docker run -it --rm -p 3000:3000 -e QUEUE_NAME='my-queue' -e AMQP_HOST='host.docker.internal' rabbit-producer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep the producer running and open up a new shell. Then run the consumer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq/rabbit-consumer
docker build -t rabbit-consumer .
docker run -it --rm -e QUEUE_NAME='my-queue' -e AMQP_HOST='host.docker.internal' rabbit-consumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if you navigate to &lt;a href="http://localhost:3000/"&gt;http://localhost:3000&lt;/a&gt;, you should be able to publish a message and watch the messages flow!&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy RabbitMQ with Docker and Architect
&lt;/h2&gt;

&lt;p&gt;By Dockerizing our services, we’ve taken a great step toward a more portable application.&lt;/p&gt;

&lt;p&gt;But deploying these services remotely still requires several steps with manual configuration. While Docker makes individual services more portable, &lt;a href="https://www.architect.io/blog/the-feature-docker-forgot#make-apps-portable-with-network-awareness"&gt;it doesn’t quite do the same for the full stack&lt;/a&gt;. Docker Compose gets us some of the way there for local development, but an important principle of portable application development is that we operate our services the same way, regardless of the environment! A developer shouldn’t have to change the way they deploy services between local and remote environments. &lt;a href="https://cloud.architect.io/signup"&gt;Architect.io&lt;/a&gt; solves that.&lt;/p&gt;

&lt;p&gt;Using Architect.io to make a fully deployable stack is easy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mark up your services with a YML file&lt;/li&gt;
&lt;li&gt;Deploy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you haven’t signed up for &lt;a href="https://cloud.architect.io/signup"&gt;Architect.io&lt;/a&gt; yet, &lt;a href="https://cloud.architect.io/signup"&gt;do it now&lt;/a&gt;! It’s fast, free, and we’ll blow you away with our deploy process!&lt;/p&gt;

&lt;p&gt;Download the Architect CLI and log in to Architect Cloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g @architect-io/cli
architect login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get started, we’ve composed an example &lt;a href="https://github.com/architect-team/architect-cli/blob/master/examples/rabbitmq/architect.yml"&gt;architect.yml&lt;/a&gt; for this RabbitMQ stack. You can copy it to your directory with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rabbitmq
curl https://raw.githubusercontent.com/architect-team/architect-cli/tutorials/rabbit-mq-2/examples/rabbitmq/architect.yml &amp;gt; architect.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you registered with Architect Cloud, you were prompted to create an account. Change the top line of the &lt;code&gt;architect.yml&lt;/code&gt; to match your new account name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: &amp;lt;your-account-name&amp;gt;/rabbitmq
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, deploying the example RabbitMQ stack on your local environment becomes as easy as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;architect link architect.yml
architect dev &amp;lt;your-account-name&amp;gt;/rabbitmq:latest -p QUEUE_NAME=my-queue -i app:app mgmt:mgmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then navigate to &lt;a href="http://app.localhost/"&gt;app.localhost&lt;/a&gt; to see your running producer or &lt;a href="http://mgmt.localhost/"&gt;mgmt.localhost&lt;/a&gt; to see the RabbitMQ management webapp.&lt;/p&gt;

&lt;p&gt;To deploy the same app to a remote staging or production environment, first register it with Architect Cloud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;architect register architect.yml -t latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then deploy with a similar command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;architect deploy &amp;lt;your-account-name&amp;gt;/rabbitmq:latest -a &amp;lt;your-account-name&amp;gt; -e example-environment -p QUEUE_NAME=my-queue -i app:app mgmt:mgmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Architect Cloud will generate a deployment diff and prompt you to review it. Press Y to continue and deploy. Architect is now deploying your RabbitMQ stack (producer, consumer, and broker) to a remote Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;You can watch the deployment unfold here: &lt;a href="https://cloud.architect.io/%3Cyour-account-name%3E/environments/example-environment/"&gt;https://cloud.architect.io/&amp;lt;your-account-name&amp;gt;/environments/example-environment/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While you’re waiting for that deployment to complete, you can also explore deploying this stack to your own infrastructure by registering your AWS account or Kubernetes cluster with Architect: &lt;a href="https://cloud.architect.io/%3Cyour-account-name%3E/platforms/new"&gt;https://cloud.architect.io/&amp;lt;your-account-name&amp;gt;/platforms/new&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the deployment completes, you should be able to see the message producer running live:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://app.example-environment.&amp;lt;your-account-name&amp;gt;.arc.domains/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Likewise, your RabbitMQ management interface:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://mgmt.example-environment.&amp;lt;your-account-name&amp;gt;.arc.domains/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: it takes a few minutes for new SSL certificates to propagate, so domains may appear to be insecure initially. Rest assured, they will propagate.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, to clean up this environment and break down your deployed services, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;architect destroy -a &amp;lt;your-account-name&amp;gt; -e example-environment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Learn More About Architect and Modern Deployment Practices
&lt;/h2&gt;

&lt;p&gt;We hope you enjoyed following along!&lt;/p&gt;

&lt;p&gt;Check out our &lt;a href="https://www.architect.io/blog"&gt;other tutorials and blog series&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’d like to play around with Architect.io some more, &lt;a href="https://www.architect.io/docs"&gt;check out our Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And don’t be afraid to reach out to the team with any questions or comments! You can find us on Twitter &lt;a href="https://twitter.com/architect_team"&gt;@architect_team&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>docker</category>
    </item>
    <item>
      <title>The Importance of Software Portability</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Thu, 14 Apr 2022 18:08:06 +0000</pubDate>
      <link>https://dev.to/architectio/the-importance-of-software-portability-4gpc</link>
      <guid>https://dev.to/architectio/the-importance-of-software-portability-4gpc</guid>
      <description>&lt;p&gt;The evolution of software might be a story of innovation in delivery channels – the mainframe to the personal computer, hardware-specific applications to cross-architecture compilation, desktop to mobile, on-premise to cloud. These new delivery methods represented a unique opportunity for developers to reach more users with the same application. The benefits are obvious: write once, make available anywhere. We might use the word &lt;strong&gt;portability&lt;/strong&gt;, then, in very general terms, as a characteristic of software: highly portable software can be written once and deployed anywhere.&lt;/p&gt;

&lt;p&gt;Portable applications require less development and operational effort even as they are exposed to more potential users. This same value, though, also applies to the internal operations of software teams. In modern microservice architectures, developers also play the role of consumers, consuming the services and APIs created by other teams both inside and outside of their organization. Thus, application portability matters significantly to the internal operations of software companies.&lt;/p&gt;

&lt;p&gt;Today, we’ll shed light on what portability means in the context of cloud software and why it’s important for both your customers and your team members.&lt;/p&gt;

&lt;h2&gt;
  
  
  “It works on my machine”
&lt;/h2&gt;

&lt;p&gt;Software teams generally use the word “environment” to describe the context in which an application runs. It’s a broad term: you might use it to refer to a specific machine, an OS, or an entire network. Importantly, the word captures a fundamental concept with a serious implication: context differs across environments, and if your software depends on context, its behavior will differ across environments! This dependence is often the culprit behind poorly functioning software: it seemingly works in one place but is buggy when you run it somewhere else.&lt;/p&gt;
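&lt;p&gt;A tiny, hypothetical illustration of what “depends on context” means in code: nothing about the function’s inputs changes, yet its output differs from machine to machine.&lt;/p&gt;

```python
import os
import sys

def report():
    # Every field depends on ambient context, not on any argument --
    # the classic ingredient of "works on my machine."
    return {
        'python': sys.version_info[:2],         # differs across installed runtimes
        'debug': os.environ.get('DEBUG', '0'),  # differs across shell configs
        'cwd': os.getcwd(),                     # differs across checkouts
    }

# Two machines running this identical code can disagree on every field.
print(report())
```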

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T1LBuA4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nbutchbdxy0nw3x40wpz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T1LBuA4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nbutchbdxy0nw3x40wpz.jpeg" alt="Image description" width="728" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Picking apart the logic of your software from the characteristics of the environment is a central skill in developing any software application, and it’s a skill that strives toward the above prize: portability. A developer who has isolated their software from its environment finds themselves with an elegant bundle of business logic that will behave the same regardless of where it is run: their own machine, the company QA environment, their production cloud, even their customer’s cloud!&lt;/p&gt;

&lt;h2&gt;
  
  
  Who benefits from software portability?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  IT Departments: Commoditizing cloud providers
&lt;/h3&gt;

&lt;p&gt;A smart business will limit its hard dependencies if given the chance. Vendor lock-in introduces a central point of failure that exposes a company both to disruptions in service and the pricing whims of the vendor. Horizontal application portability is characterized by minimizing environment switching costs such that an IT department can avoid vendor lock-in. If you can run your application just as easily on GCP or AWS you avoid pinning your company to the uptime and pricing of one cloud provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developers and DevOps: Building and releasing extensible services
&lt;/h3&gt;

&lt;p&gt;It is not uncommon for environments to multiply rapidly in even small software teams. Developers run the application locally, quality engineers run it in a test environment, sales reps run it in a demo environment, and operators run it in staging and production.&lt;/p&gt;

&lt;p&gt;The develop/test/demo/deploy lifecycle has a cost that is directly correlated to the portability of the application. Software that requires much environment-related configuration and tuning will cost time and effort as new versions move through the lifecycle. Portability saves time and mental overhead for anyone involved in moving new versions of the software across environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TV9_7s6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjnx4owp801orxl8xsn4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TV9_7s6r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjnx4owp801orxl8xsn4.jpeg" alt="Image description" width="818" height="828"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Sales: Increasing addressable market
&lt;/h3&gt;

&lt;p&gt;Many potential customers prefer to run vendor software on their own premises for security reasons. If a business makes rigid software – requiring specific operating systems, cloud providers, embedded security, and extensive environment configuration – the business is inadvertently limiting its addressable market to those customers that satisfy these conditions. A company that ships portable software, on the other hand, removes these restrictions on their addressable market.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three dimensions of software portability
&lt;/h2&gt;

&lt;p&gt;Building software that is portable actually encourages patterns that support a host of other worthwhile properties. Suppose you make it easier for your software to be run here or there. It follows that it is easier to run it here and there: supporting replication within and across environments and enabling engineers across teams and orgs to operate the software themselves. An application that provides full portability and is easy for developers to run is easier to build on top of. An application with great portability lends itself to great extensibility.&lt;/p&gt;

&lt;p&gt;So how do we know if our apps and services are portable? Can they be portable in some ways and not others? To determine this for ourselves, let’s evaluate three different dimensions in which our application can be portable:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Replication (deep)
&lt;/h3&gt;

&lt;p&gt;The first dimension of portability is crucial to operating cloud applications at scale – scaling and replication. The ability for your service to maintain multiple running instances that work as a cohesive unit is paramount to its ability to support concurrent users at scale. Consistent packaging mechanics, like VM images and containers, are often the key to automating the replication of services in cloud environments at scale. Still, this replication demands consistent methods for load balancing and distributing incoming traffic. By combining packaging consistency with API gateways, service meshes, and other load balancing solutions, teams can quickly achieve deep application portability.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Platform/provider migration (horizontal)
&lt;/h3&gt;

&lt;p&gt;The second dimension of portability is typically the first that most think of when they consider cloud portability – cloud migration and/or multi-cloud deployments. The ability for your application to be run on multiple platforms is a great defensive strategy. It ensures that cloud apps can remain cost-effective and protected from outages. Designing applications to be run on commodity infrastructure (e.g., Linux vs. Windows) or on multiple cloud providers (e.g., AWS vs. Azure vs. GCP) enables teams to run in multiple locations concurrently or swap out providers should pricing prove beneficial.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Development lifecycle (vertical)
&lt;/h3&gt;

&lt;p&gt;The third dimension of portability is often overlooked despite being far more impactful than horizontal portability. This is portability through the software development release cycle. Software developers are constantly building or modifying services inside an application stack. As such, they find themselves needing to test in environments that they can be sure will match production. The consistency of the application context from local development, through test/QA/staging, and finally to production environments is crucial to ensuring trust, maintaining a strong development flow, and ensuring that product features get in front of customers quickly and safely.&lt;/p&gt;

&lt;h2&gt;
  
  
  History and future
&lt;/h2&gt;

&lt;p&gt;The yearning for portable software is not new. The development of the Java Virtual Machine is among the most successful software portability innovations to date. Now, any machine with a JVM can run a single compiled .jar file on any OS and display identical behavior. With Docker, applications can go a step further: an entire OS userland can be shipped as a lightweight artifact and run anywhere.&lt;/p&gt;

&lt;p&gt;Note the caveats, and note the seeming inconsistencies with the entire concept of portability! If Java needs a JVM installed, isn’t that a hard violation of everything we’ve discussed? Alas, it seems so. Here lies another concept, related (even inverse) to portability: the platform. Portable software still needs to be executed by something: a platform, an OS, an environment. As developers struggle to make their applications more portable, companies struggle to build the “universal platform” on which any application might run. Microsoft tried it with an OS, Oracle with a narrow VM, Docker with a more general VM, and most recently Kubernetes with an open-source hardware abstraction and Terraform/CloudFormation with reproducible infrastructure-as-code templates.&lt;/p&gt;

&lt;h2&gt;
  
  
  True portability
&lt;/h2&gt;

&lt;p&gt;Are applications truly portable? The best way to answer that is to look at our own applications. Are my applications portable? Can I share my application with other developers? Can they run or access it using their own tools and hardware?&lt;/p&gt;

&lt;p&gt;The industry has made enormous strides toward allowing cloud software to become portable. With each innovation comes a new opportunity for software architecture to push the boundaries even further. Yesterday I could make a monolithic application portable by putting it in an AMI or Docker image. Today my app contains multiple images that run separately yet still need to connect together. The pursuit of portability is an ongoing effort, but the value of the pursuit always remains.&lt;/p&gt;

&lt;p&gt;Learn more about modern continuous delivery practices on the Architect.io blog!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/2022-04-14/the-basics-of-secret-management/"&gt;The basics of secret management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/2021-01-11/gitops-developers-guide/"&gt;A developer's guide to GitOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/2020-09-16/why-distributed-apps-need-dependency-management/"&gt;Why distributed apps needs dependency management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And as always, we'd love to continue the conversation on Twitter! Find us &lt;a href="https://twitter.com/architect_team"&gt;@architect_team&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>devops</category>
      <category>codenewbie</category>
    </item>
    <item>
      <title>Deploy Your Django App with Docker</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Thu, 07 Apr 2022 16:27:33 +0000</pubDate>
      <link>https://dev.to/architectio/deploy-your-django-app-with-docker-1bee</link>
      <guid>https://dev.to/architectio/deploy-your-django-app-with-docker-1bee</guid>
      <description>&lt;p&gt;Django is an excellent Python Web framework, but it can be tricky to deploy to the cloud. If you’re building in Python, you want the confidence that what you develop and deploy locally will translate to production. This quick-start guide demonstrates how to set up and run a simple Django/PostgreSQL app locally for development and production-ready in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;There are many tools out there that provide support for local development OR remote deployment. Architect was built to do both. This tutorial will show how, with one simple &lt;code&gt;architect.yml&lt;/code&gt; file, any developer can run their application locally and in the cloud without having to learn or write docker-compose and infrastructure-as-code templates.&lt;/p&gt;

&lt;p&gt;Before you begin, make sure the following tools and services are installed on your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architect CLI&lt;/strong&gt; - The best way to install the CLI is via NPM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install -g @architect-io/cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can download the binary for your system architecture from GitHub. Just download the appropriate bundle, extract it, and link the included bin folder to your user home directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; - This is a software platform for building applications based on containers. Install it according to &lt;a href="https://www.docker.com/get-started"&gt;the docs on their site&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Django, Docker, and more
&lt;/h2&gt;

&lt;p&gt;For this project, you need to create a Dockerfile, a Python dependencies file, and an &lt;code&gt;architect.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Create an empty project directory. You can name the directory something easy for you to remember. This directory is the context for your application image. The directory should only contain resources to build that image.&lt;/p&gt;

&lt;p&gt;You’ll next need to create a new file called Dockerfile in your project directory. The Dockerfile defines an application’s image content via one or more build commands that configure that image. Once built, you can run the image in a container.&lt;/p&gt;

&lt;p&gt;Add the following content to the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a &lt;code&gt;requirements.txt&lt;/code&gt; in your project directory. This file is used by the &lt;code&gt;RUN pip install -r requirements.txt&lt;/code&gt; command in your Dockerfile. Pip is a package management system for Python, similar to npm. Each line in the file represents an external dependency and the required version of that software.&lt;/p&gt;

&lt;p&gt;Add the required software in the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Django&amp;gt;=3.0,&amp;lt;4.0
psycopg2-binary&amp;gt;=2.8
uwsgi&amp;gt;=2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file called &lt;code&gt;architect.yml&lt;/code&gt; in your project directory. The &lt;code&gt;architect.yml&lt;/code&gt; file describes the services that make your app. In this example, those services are a web server and database. Add the following configuration to the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: examples/django
parameters:
  django_secret_key:
    default: warning-override-for-production
  postgres_password:
    default: warning-override-for-production

services:
  db:
    image: postgres
    interfaces:
      main: 5432
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${{ parameters.postgres_password }}
  web:
    build:
      context: .
    command: |
      sh -c '
        python manage.py collectstatic --noinput
        python manage.py migrate --noinput
        uwsgi --http "0.0.0.0:8000" --module architectexample.wsgi:application --master --processes 4 --threads 2 --static-map /static=/code/static
      '
    interfaces:
      main: 8000
    environment:
      DEBUG: 'False'
      ALLOWED_HOST: .${{ ingresses.web.host }}
      SECRET_KEY: ${{ parameters.django_secret_key }}
      POSTGRES_DB: ${{ services.db.environment.POSTGRES_DB }}
      POSTGRES_USER: ${{ services.db.environment.POSTGRES_USER }}
      POSTGRES_PASSWORD: ${{ services.db.environment.POSTGRES_PASSWORD }}
      POSTGRES_HOST: ${{ services.db.interfaces.main.host }}
      POSTGRES_PORT: ${{ services.db.interfaces.main.port }}
    debug:
      command: |
        sh -c '
          python manage.py migrate --noinput
          python manage.py runserver 0.0.0.0:${{ services.web.interfaces.main.port }}
        '
      environment:
        ALLOWED_HOST: '*'
        DEBUG: 'True'
      volumes:
        code:
          mount_path: /code
          host_path: .

interfaces:
  web: ${{ services.web.interfaces.main.url }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This manifest file does the following three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Outlines parameter values which allow you to configure the services per deployment&lt;/li&gt;
&lt;li&gt;Defines the services to be deployed. In this case db is the Postgres database and web is the Django application. Each service block defines the interfaces (ports) that are exposed along with the environment variables required for the service to run&lt;/li&gt;
&lt;li&gt;Defines development-specific configuration in service debug blocks. This is powerful because it lets us define different start commands for development and production&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can check out the &lt;a href="https://www.architect.io/docs/components/architect-yml"&gt;&lt;code&gt;architect.yml&lt;/code&gt; reference&lt;/a&gt; for more information on how this file works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create your Django project
&lt;/h2&gt;

&lt;p&gt;Next, you’ll create a Django starter project by building the image from the build context defined in the previous procedure.&lt;/p&gt;

&lt;p&gt;Switch to the root of your project directory. Create the Django project by running the command as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;docker run --rm -it -v $&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;PWD&lt;span class="o"&gt;}&lt;/span&gt;:/code &lt;span class="si"&gt;$(&lt;/span&gt;docker build &lt;span class="nt"&gt;-q&lt;/span&gt; .&lt;span class="si"&gt;)&lt;/span&gt; django-admin startproject architectexample &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the command completes, list the contents of your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls -l

drwxr-xr-x 2 root   root   architectexample
-rw-rw-r-- 1 user   user   architect.yml
-rw-rw-r-- 1 user   user   Dockerfile
-rwxr-xr-x 1 root   root   manage.py
-rw-rw-r-- 1 user   user   requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure Django
&lt;/h2&gt;

&lt;p&gt;Now it’s time to set up the database connection for Django along with a few other settings.&lt;/p&gt;

&lt;p&gt;In your project directory, edit the &lt;code&gt;architectexample/settings.py&lt;/code&gt; file. Replace or add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# settings.py
import os

STATIC_ROOT =  os.path.join(BASE_DIR, 'static/')

SECRET_KEY = os.environ.get('SECRET_KEY', 'warning-override-for-production')

DEBUG = os.environ.get('DEBUG', 'False') == 'True'

ALLOWED_HOSTS = [os.environ.get('ALLOWED_HOST', '')]

DATABASES = {
  'default': {
      'ENGINE': 'django.db.backends.postgresql',
      'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
      'USER': os.environ.get('POSTGRES_USER', 'postgres'),
      'PASSWORD': os.environ.get('POSTGRES_PASSWORD', 'postgres'),
      'HOST': os.environ.get('POSTGRES_HOST', '0.0.0.0'),
      'PORT': os.environ.get('POSTGRES_PORT', '5432'),
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
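&lt;p&gt;To see why this pattern works, here’s a minimal, standalone sketch of the same environment-variable logic used in the settings file above (the helper function names are invented for illustration; Django itself just evaluates these expressions at import time):&lt;/p&gt;

```python
import os

def database_settings(environ=os.environ):
    """Assemble a Django-style DATABASES dict from environment variables,
    falling back to local-development defaults (mirrors settings.py above)."""
    return {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': environ.get('POSTGRES_DB', 'postgres'),
            'USER': environ.get('POSTGRES_USER', 'postgres'),
            'PASSWORD': environ.get('POSTGRES_PASSWORD', 'postgres'),
            'HOST': environ.get('POSTGRES_HOST', '0.0.0.0'),
            'PORT': environ.get('POSTGRES_PORT', '5432'),
        }
    }

def debug_enabled(environ=os.environ):
    # Environment variables are always strings, so DEBUG is only True when
    # the value is exactly 'True'; unset or misspelled values default to False.
    return environ.get('DEBUG', 'False') == 'True'
```

&lt;p&gt;This is why the manifest can set &lt;code&gt;DEBUG: 'True'&lt;/code&gt; as a string and why an environment that never sets the variable safely runs with debugging off.&lt;/p&gt;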



&lt;h2&gt;
  
  
  Deploy your Django app locally
&lt;/h2&gt;

&lt;p&gt;Run the &lt;a href="https://www.architect.io/docs/reference/cli#architect-deploy-environment_config_or_component"&gt;architect dev&lt;/a&gt; command from the top-level directory of your project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ architect dev architect.yml -i django:web
http://django.localhost:80/ =&amp;gt; examples--django--web--latest--cvkrs58l

http://localhost:50000/ =&amp;gt; examples--django--db--latest--cbyiekkg
http://localhost:50001/ =&amp;gt; examples--django--web--latest--cvkrs58l
http://localhost:80/ =&amp;gt; gateway

. . .

web_1  | July 30, 2020 - 18:35:38
web_1  | Django version 3.0.8, using settings 'architectexample.settings'
web_1  | Starting development server at http://0.0.0.0:8000/
web_1  | Quit the server with CONTROL-C.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to &lt;code&gt;http://django.localhost&lt;/code&gt; in a web browser to see the Django welcome page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SZGwwRHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w15h4xvyk9r2ghos41dh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZGwwRHQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w15h4xvyk9r2ghos41dh.png" alt="Image description" width="880" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to shut down the services, simply stop the application by typing Ctrl-C in the same shell where you started it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy your Django app remotely
&lt;/h2&gt;

&lt;p&gt;You now know how to run your stack of services locally in a repeatable way, but what about deploying to production-grade environments? How do you deploy all your services to AWS ECS or Kubernetes? How do you deal with the networking and configuration of your services? Fortunately, Architect has this handled too! Since you've already described your services as Architect Components, they are primed and ready to be deployed to production-grade container platforms without any additional work.&lt;/p&gt;

&lt;p&gt;Before you can deploy components to remote environments, you must &lt;a href="https://cloud.architect.io/signup"&gt;create an account with Architect&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you’ve successfully created your account, go ahead and &lt;a href="https://cloud.architect.io/examples/components/django/deploy?tag=latest&amp;amp;interface=django%3Aweb"&gt;deploy it to a sample Kubernetes cluster powered by Architect Cloud&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Deploying to production disables &lt;code&gt;DEBUG&lt;/code&gt;, so the base URL will return a 404. Confirm the app is working by loading &lt;code&gt;/admin&lt;/code&gt;. An empty app is no fun, so take a look at the next steps.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps with Django and Docker
&lt;/h2&gt;

&lt;p&gt;Now you’re ready to build your application with the confidence that if you can run it locally, it will also run in the cloud. Django has an excellent &lt;a href="https://docs.djangoproject.com/en/3.1/intro/tutorial01/#creating-the-polls-app"&gt;polls tutorial&lt;/a&gt;, which you should try if this is your first time. The only difference is the command to create the example polls app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it -v ${PWD}:/code $(docker build -q .) python manage.py startapp polls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Learn more about how to deploy faster and more securely
&lt;/h2&gt;

&lt;p&gt;Congratulations! That’s all it takes to go from a locally runnable component to a deployment on a remote cluster with Architect.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: You can register your own Kubernetes or ECS cluster on the platforms tab of your account. Then create an environment for that platform and try deploying again!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: We skipped the component registration step in this tutorial because we’ve already published this example component to the registry. If you want to try publishing yourself, simply change the component name to include your account name as the prefix instead of examples, and then run &lt;code&gt;architect register architect.yml&lt;/code&gt; in the project directory.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://www.architect.io/docs/getting-started/introduction#register-a-component"&gt;Docs&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://cloud.architect.io/examples/components/django/"&gt;View component&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’d like to read more about how Architect enables safe, fast deployments, we’ve got you covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/2020-09-16/why-distributed-apps-need-dependency-management/"&gt;Why Distributed Apps Need Dependency Management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/creating-microservices-nestjs"&gt;Creating Microservices: Nest.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/cycling-credentials-without-cycling-containers"&gt;Cycling Credentials Without Cycling Containers&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And as always, we’d love to have you follow along as we release new content and features. Check us out on Twitter &lt;a href="https://twitter.com/architect_team"&gt;@architect_team&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>python</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What the Heck is Event-Driven Architecture?</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Tue, 05 Apr 2022 15:58:02 +0000</pubDate>
      <link>https://dev.to/architectio/what-the-heck-is-event-driven-architecture-1j32</link>
      <guid>https://dev.to/architectio/what-the-heck-is-event-driven-architecture-1j32</guid>
      <description>&lt;p&gt;Applications have quickly become complex webs of interconnected microservices. Failures in the API calls between microservices grow more common and far more dastardly – wreaking havoc throughout applications in unforeseen ways. Accidents and errors can happen even with the most brilliant engineers and most controlled environments in the world. Unfortunately, this means that outright elimination of API call failures is not an option. Instead, we have to prepare our applications for failure, and this is where event-driven architecture comes into play.&lt;/p&gt;

&lt;p&gt;If you’ve worked with or researched microservices in the last decade, chances are you’ve heard of and probably implemented event-driven architecture. The pattern has become extraordinarily popular amongst cloud-native and distributed teams in recent years as it solves some very real problems with fault tolerance, availability, and coupling of microservices. Instead of communicating directly with one another through API calls, services publish and subscribe to events. In doing so, both the publisher and subscriber can exist and perform their work regardless of the other’s availability, thus achieving the fault tolerance needed for the application to support a growing number of users.&lt;/p&gt;

&lt;p&gt;This all sounds like a nice silver bullet on the surface, but what even is an event, and how do you leverage event-driven design in an application? In this article, I’ll discuss the different uses of events, the various technologies and practices that can broker events, and the risks involved with event-driven architecture. I’ll even debunk a few myths about event-driven design to boot!&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of events
&lt;/h2&gt;

&lt;p&gt;Events are used to communicate with other applications and services – it’s that simple. There’s a lot of thought that can go into which events you publish, who subscribes to them, and what contents go inside the event, but none of that matters when it comes to describing what an event is and what it can be used for. What matters is whether or not your event needs a response. Is your event just miscellaneous information that you’re making available for other applications to do whatever they want with, or are you using your event to request additional information from a peer app or service?&lt;/p&gt;

&lt;h3&gt;
  
  
  Broadcast notifications
&lt;/h3&gt;

&lt;p&gt;For those of you who still watch live TV, you’re probably aware of the fact that others can tune into and out of the same channel as you at the same time, and they’ll see the same content you do. In fact, the content doesn’t change, no matter how many people tune in or out of the channel. Whether a hundred or a million people are watching, your favorite sportscaster is still going to be saying the same thing to everyone who tunes in.&lt;/p&gt;

&lt;p&gt;Broadcast application events work the same way as broadcast media – the event gets published by a single entity but can be received by unlimited subscribers. Since the publisher isn’t expecting any kind of response from subscribers, they can continue their broadcast regardless of the number of viewers on the other end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NwKghJxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7p4ml1r7sk6rd3e5o8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NwKghJxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7p4ml1r7sk6rd3e5o8j.png" alt="Image description" width="880" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Broadcast events play a critical role in distributed applications, especially for core services like identity management and payments services. These services use events to communicate with the rest of the application whenever actions are taken, or important state changes are made. A reporting service may want to forecast new financial projections whenever payments are processed, or a shipping service may wish to change delivery targets whenever a user updates their primary residence. The identity and payments services don’t care what others do with the information, but they know that other services may want to tune in for updates to act on critical information for themselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  Action/response events
&lt;/h3&gt;

&lt;p&gt;Broadcast events are great when we don’t care who the subscribers are or what they intend to do with the event information, but they don’t help us make regular API calls more fault-tolerant, as described earlier. Many direct API calls not only target a specific service, but also depend heavily on that service’s response. Maybe we need to check the identity service to make sure the user has 2FA enabled before they can wire money, or maybe we need to query the product catalog for the latest prices before adding an item to the shopping cart. These are pervasive and intuitive workflows for developers and applications, but how would a developer go about instrumenting this flow using event-driven architecture?&lt;/p&gt;

&lt;p&gt;The answer lies with &lt;a href="https://martinfowler.com/bliki/CQRS.html"&gt;Command Query Responsibility Segregation (CQRS)&lt;/a&gt; – a pattern involving the separation of workflows and data structures for reading and writing information respectively. Instead of relying on a single event, which is limited to sharing information from publisher to subscriber, developers would use two events to replicate their API calls using event-driven design: one for the upstream service to trigger an action and another for the downstream service to respond. As long as the downstream knows the name of the action event and the upstream knows the name of the response event, they can subscribe to each other to fulfill the bi-directional request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2TkzmGSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jht7wxqaki8hm518v9s4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2TkzmGSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jht7wxqaki8hm518v9s4.png" alt="Image description" width="880" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action/response event style has become increasingly popular in recent years. It allows developers to replace API call flows with near-identical event-driven flows, but swapping to events outright comes with its own set of hurdles. With direct API calls, developers get to store application state in memory while they await a response from the downstream API. Events, on the other hand, require state to be stored and accessible in a more persistent manner. With events, there’s no guarantee that the same instance that published the action event will receive the response event. As a result, event-driven architectures often demand more thought around session management and persistence to maintain state between action/response events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-driven application brokers
&lt;/h2&gt;

&lt;p&gt;We’ve talked a lot about how events make applications fault-tolerant since they don’t have to wait for subscribers, but how does that functionally work? What magic is it that allows these events to “complete” even when the subscribers are down or otherwise unavailable? The answer is surprisingly simple – events get stored in a database.&lt;/p&gt;

&lt;p&gt;Yes, you really can store your events in just about anything that can persist the events. It’s the fact that the events are persisted that enables fault-tolerance. If a subscriber isn’t ready for the event just yet, either because it’s busy handling another event or it crashed, the event remains in the database until the subscriber comes back up.&lt;/p&gt;
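&lt;p&gt;That durability is easy to see in a toy sketch, where a plain Python list stands in for the persistence layer. The broker class here is invented for illustration, not a real product:&lt;/p&gt;

```python
class TinyBroker:
    """Events are appended to persistent storage at publish time,
    so a subscriber that is offline loses nothing."""

    def __init__(self):
        self._store = []   # stands in for a database table or file

    def publish(self, event):
        self._store.append(event)

    def drain(self):
        # Called whenever the subscriber is up; hands over everything
        # that accumulated while it was away, in publish order.
        pending, self._store = self._store, []
        return pending

broker = TinyBroker()
broker.publish({'type': 'payment.processed', 'amount': 10})
broker.publish({'type': 'payment.processed', 'amount': 25})
# The subscriber was down for both events; it catches up on restart:
print(broker.drain())   # both events delivered, in order
```

&lt;p&gt;Everything a production broker adds on top of this (acknowledgements, retries, ordering guarantees) is refinement of the same core idea: the event outlives the moment of publication.&lt;/p&gt;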

&lt;p&gt;This database could be something as raw as the filesystem, or you can dump events right into a MySQL or PostgreSQL database you already have available. That said, there is an abundance of database software explicitly designed to handle events. These solutions are often referred to as brokers because of the way they mediate the relationship between event publishers and subscribers. There are several different brokers to choose from, but before you dive into specific products, you’ll need to decide which brokering style is best for your application:&lt;/p&gt;

&lt;h3&gt;
  
  
  Queue-backed brokers
&lt;/h3&gt;

&lt;p&gt;If you’ve done any research into event-driven architecture, you’ve probably also heard the term “message queue.” Message queues are one of the two ways that event brokers can store and enable subscription to published events. Publishers simply write their event to a queue, and a subscriber pops events off of said queue when it’s ready to process them. Queue-backed brokers are straightforward to operate and integrate with: events don’t stay on the queue forever, which means the database can remain generally small, and integration is relatively intuitive to developers who already understand the concept of a queue.&lt;/p&gt;

&lt;p&gt;The downside of queue-backed brokers, however, is that multiple subscribers can’t consume the same message. Once a subscriber claims a message from a queue, the message is gone and unable to be consumed by a different subscriber. This means that to distribute a notification to multiple subscribers, a publisher has to write the message to multiple queues – one for each subscriber.&lt;/p&gt;

&lt;p&gt;Fortunately, this isn’t as difficult as it sounds with modern brokers. Some solutions, like &lt;a href="https://www.rabbitmq.com/"&gt;RabbitMQ&lt;/a&gt;, natively support the notion of &lt;a href="https://www.rabbitmq.com/tutorials/tutorial-five-python.html"&gt;topics&lt;/a&gt; which allow you to publish once and have the broker handle the writing to multiple queues. Other solutions, like &lt;a href="https://aws.amazon.com/sqs/"&gt;AWS SQS&lt;/a&gt;, have sister services like &lt;a href="https://aws.amazon.com/sns/"&gt;SNS&lt;/a&gt; that can connect to SQS to write to multiple queues as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---HsUQtN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqqsqpximnfbbsqks2h1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---HsUQtN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqqsqpximnfbbsqks2h1.png" alt="Image description" width="880" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Event streams
&lt;/h3&gt;

&lt;p&gt;The competing methodology for storing and subscribing to events is a persistent event stream, like that seen in &lt;a href="https://kafka.apache.org/"&gt;Apache Kafka&lt;/a&gt;. In this model, events are stored permanently in an ordered list and are never popped off as in a queuing system. This means that multiple subscribers can read the same message. It also means that it’s up to each subscriber to keep track of the last event it read. Subscribers can join the stream at different times, reprocess historical events, and generally control their own destiny. This can also place more responsibility on each subscriber, making the model more difficult for developers to reason about.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AQlyPriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwzdlcteg48p1cgio8qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AQlyPriM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwzdlcteg48p1cgio8qu.png" alt="Image description" width="880" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Myths about event-driven architecture
&lt;/h2&gt;

&lt;p&gt;Event-driven architecture is clearly a powerful way to protect distributed systems from inevitable failure. Still, there’s a lot of misunderstanding about what an event is and what value it provides back to an application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event-driven architecture decouples microservices
&lt;/h3&gt;

&lt;p&gt;Developers are readily taught that event-driven architecture “decouples” microservices from one another, allowing each service to run independently of the others without crashing. While it’s true that these services can now run without crashing, is a subscriber of an event doing anything of substance if the publisher isn’t running or isn’t available?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dLJBegp0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pq5a91jorhtug80v5wyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dLJBegp0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pq5a91jorhtug80v5wyy.png" alt="Image description" width="880" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without the events being published, a subscribing service is generally left idling, burning compute power waiting for an event to come in. It may not crash, but it certainly isn’t doing anything useful. Don’t get me wrong, the fact that it’s not crashing is enormously important for fault tolerance in production environments. However, the application architecture still has dependencies – subscribing services are still dependent on event publishers to do meaningful work and provide value back to end-users.&lt;/p&gt;

&lt;p&gt;It’s essential to capture these events and relationships if teams are to better understand what their applications are doing. This understanding helps with tracing and debugging requests and allows for topology maps to be generated, analyzed, and used to educate developers on where and how to contribute new features effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Events are asynchronous
&lt;/h3&gt;

&lt;p&gt;Another myth about events is that they are de facto asynchronous – publishers don’t wait for subscribers, and vice versa; each begins processing if and when it eventually hears from the other. While this is true of the two types of brokers we outlined in this article, which are the two main types used in event-driven architectures, it is not the nature of an event that makes this true, but rather the fact that we are using databases to persist events and broker the relationships.&lt;/p&gt;

&lt;p&gt;The usage of a database to broker events is not a requirement. You’ve most certainly heard of an event type that does not use a database to broker events: webhooks. Webhooks involve subscribers registering themselves with event publishers directly, and the publisher delivers events using direct, synchronous API calls. This is identical to a “broadcast” event like we described earlier, but it highlights that it’s the database usage that provides fault tolerance rather than event-driven architecture itself.&lt;/p&gt;

&lt;p&gt;You could even intercept synchronous API calls and force them onto a message broker to get this same fault tolerance for direct API calls between microservices. Instrumenting this is wildly impractical, which is why it’s seldom done, but even its possibility further highlights that persistence is the secret sauce behind event-driven architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ws7FvOY_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/se744a2hy6k1l1uqbd04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ws7FvOY_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/se744a2hy6k1l1uqbd04.png" alt="Image description" width="880" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling event-driven architecture for your application
&lt;/h2&gt;

&lt;p&gt;Event-driven architecture is powerful, but implementation leaves a lot to be desired – specifically when it comes to understanding the relationships between services whose communication is managed by a message broker or event stream. At Architect.io, we strive to make it as easy as possible for developers to incorporate best-in-breed architecture, like event streaming, into their everyday workflows. By automating service discovery and network security with each deployment, developers can more easily and more safely build event-driven design into their applications. Check out some of our other articles to learn more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/gitops-developers-guide"&gt;A Developer’s Guide to GitOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/blog/why-distributed-apps-need-dependency-management"&gt;Why distributed apps need dependency management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.architect.io/docs/"&gt;Our product docs!&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or &lt;a href="https://cloud.architect.io/signup"&gt;sign up&lt;/a&gt; and try Architect.io yourself today!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>architecture</category>
      <category>devops</category>
    </item>
    <item>
      <title>Creating Microservices in Nest.js</title>
      <dc:creator>Lindsay Brunner</dc:creator>
      <pubDate>Fri, 01 Apr 2022 17:57:53 +0000</pubDate>
      <link>https://dev.to/architectio/creating-microservices-in-nestjs-b92</link>
      <guid>https://dev.to/architectio/creating-microservices-in-nestjs-b92</guid>
      <description>&lt;p&gt;Microservices can seem intimidating at first, but at the end of the day they’re just regular applications. They can execute tasks, listen for requests, connect to databases, and everything else a regular API or process would do. We only call them microservices colloquially because of the way we use them, not because they are inherently small.&lt;/p&gt;

&lt;p&gt;In this tutorial we’ll demystify the creation and operation of microservices for Node.js developers by creating a microservice using a popular Node.js framework, &lt;a href="https://nestjs.com/" rel="noopener noreferrer"&gt;NestJS&lt;/a&gt;. We won’t go into detail about the design or architecture of NestJS applications specifically, so if you’re unfamiliar with the framework I’d recommend you check out its docs first, or simply skip to another one of our &lt;a href="https://github.com/architect-team/architect-cli/tree/master/examples/react-app" rel="noopener noreferrer"&gt;Node.js samples&lt;/a&gt; that uses Express directly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Want to skip to the source code? &lt;a href="https://github.com/architect-team/architect-cli/tree/master/examples/nestjs-microservices/simple" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a NestJS microservice
&lt;/h2&gt;

&lt;p&gt;NestJS is an opinionated framework for developing server-side Node.js applications, including, but not limited to, microservices. Its default walkthroughs and tutorials all show how to create and operate a REST API using NestJS, but in this tutorial we’ll show how to use some of its other helpful microservice libraries to create and operate a TCP-based microservice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnta6qkzv7str39kho808.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnta6qkzv7str39kho808.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start, let’s download NestJS’s CLI to help us bootstrap our new microservice project. The CLI will do all the work to build the project skeleton, making it a lot easier for us to make the changes we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i -g @nestjs/cli
$ nest new nestjs-microservice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the application has been fully initialized, we’re going to install the NestJS microservices library to help us modify the boilerplate application from an HTTP-based REST API to a TCP-based microservice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i --save @nestjs/microservices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, go ahead and replace the contents of your src/main.ts file with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { NestFactory } from '@nestjs/core';
import { Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  const port = process.env.PORT ? Number(process.env.PORT) : 8080;
  const app = await NestFactory.createMicroservice(AppModule, {
    transport: Transport.TCP,
    options: {
      host: '0.0.0.0',
      port,
    },
  });
  await app.listen(() =&amp;gt; console.log('Microservice listening on port:', port));
}
bootstrap();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re already familiar with NestJS, this file should be easy to read through. The only unique part is how we’re initializing the application – instead of using the default &lt;code&gt;NestFactory.create()&lt;/code&gt; method, we’re using &lt;code&gt;NestFactory.createMicroservice()&lt;/code&gt; which provides us additional controls over the protocols and contracts our application responds to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const app = await NestFactory.createMicroservice(AppModule, {
  transport: Transport.TCP,
  options: {
    host: '0.0.0.0',
    port,
  },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above snippet, we’re declaring that our microservice responds to TCP requests and listens on our configurable port (defaults to &lt;code&gt;8080&lt;/code&gt;). This means our service won’t be a REST API, but will respond to a more raw request format.&lt;/p&gt;

&lt;p&gt;Next, let’s take a look at the generated controller which defines the routes and methods our API responds to, &lt;code&gt;src/app.controller.ts&lt;/code&gt;. Since our microservices respond to TCP requests instead of HTTP, we’ll need to change the annotations on our controller methods to respond to more relevant request structures. Go ahead and paste the contents below into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Controller } from '@nestjs/common';
import { MessagePattern } from '@nestjs/microservices';

@Controller()
export class AppController {
  @MessagePattern({ cmd: 'hello' })
  hello(input?: string): string {
    return `Hello, ${input || 'there'}!`;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the way we define and annotate NestJS controllers remains the same as the generated project code, but the way we annotate methods within our controllers is different. Instead of using &lt;code&gt;@Get()&lt;/code&gt;, &lt;code&gt;@Post()&lt;/code&gt;, and other HTTP-specific annotations, we define our TCP interfaces using &lt;code&gt;@MessagePattern()&lt;/code&gt; – an annotation that maps controller methods to incoming requests so long as they match the provided pattern. In our case, we’ve defined the pattern to be any request that contains &lt;code&gt;{ cmd: 'hello' }&lt;/code&gt;. We also expect the request payload to be an optional string that will be used to enrich our response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hello(input?: string): string {
  return `Hello, ${input || 'there'}!`;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
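&lt;p&gt;Conceptually, message-pattern dispatch works like a lookup: the pattern attached to an incoming request is compared against each registered pattern, and the first match handles the payload. The following is a simplified, hypothetical sketch of that idea, not NestJS’s actual implementation:&lt;/p&gt;

```javascript
// Simplified, hypothetical sketch of message-pattern dispatch; NestJS's
// real matching logic lives inside the framework.
const handlers = [
  { pattern: { cmd: 'hello' }, handle: (input) => `Hello, ${input || 'there'}!` },
];

function dispatch(pattern, payload) {
  // A handler matches when every key/value in its pattern appears in the request pattern
  const match = handlers.find((h) =>
    Object.keys(h.pattern).every((key) => h.pattern[key] === pattern[key]),
  );
  return match ? match.handle(payload) : undefined;
}

console.log(dispatch({ cmd: 'hello' }, 'world')); // Hello, world!
```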



&lt;p&gt;Great! Now let’s make sure our microservice will start up. Our NestJS project came pre-baked with a &lt;code&gt;package.json&lt;/code&gt; file that includes all the appropriate start commands, so let’s use the one designed for local development:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm run start:dev
[5:41:22 PM] Starting compilation in watch mode...
[5:41:27 PM] Found 0 errors. Watching for file changes.
[Nest] 6361   - 08/31/2020, 5:41:28 PM   [NestFactory] Starting Nest application...
[Nest] 6361   - 08/31/2020, 5:41:28 PM   [InstanceLoader] AppModule dependencies initialized +20ms
[Nest] 6361   - 08/31/2020, 5:41:28 PM   [NestMicroservice] Nest microservice successfully started +8ms
Microservice listening on port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we know the application boots correctly, let’s build a Dockerfile for the service. Creating a Dockerfile will allow our service to be built into a portable, scalable image that anyone (or any machine) can run consistently without issues. This means we’ll be able to run it ourselves in a stable virtual environment, we’ll be able to hand it off to team members to test more easily, and we’ll be able to deploy it to production-grade environments with ease.&lt;/p&gt;

&lt;p&gt;Our Dockerfile will inherit from an open-source Node.js image, install npm modules, and run our &lt;code&gt;npm run build&lt;/code&gt; command to transpile our TypeScript and minimize the code footprint. Simply copy in the file contents below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start with a Node.js base image that uses Node v13
FROM node:13
WORKDIR /usr/src/app

# Copy the package.json file to the container and install fresh node_modules
COPY package*.json tsconfig*.json ./
RUN npm install

# Copy the rest of the application source code to the container
COPY src/ src/

# Transpile typescript and bundle the project
RUN npm run build

# Remove the original src directory (our new compiled source is in the `dist` folder)
RUN rm -r src

# Assign `npm run start:prod` as the default command to run when booting the container
CMD ["npm", "run", "start:prod"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a client service
&lt;/h2&gt;

&lt;p&gt;Knowing that our microservice boots up properly is great, but the best way to test it in a practical setting is to see if we can call it from another microservice. So let’s go ahead and create one!&lt;/p&gt;

&lt;p&gt;Just like with the previous service, let’s start by creating a new NestJS project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nest new client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s also install two additional NestJS libraries. The first is the config library to make it easier to parse and manage application variables, and the second is the microservices library which contains several helper methods that can be used to more easily access other NestJS microservices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm i --save @nestjs/config @nestjs/microservices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our required libraries installed, let’s use them both together to create a client service for accessing the microservice we created in the previous step. Open up &lt;code&gt;src/app.module.ts&lt;/code&gt; and paste in the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { ClientProxyFactory, Transport } from '@nestjs/microservices';
import { AppController } from './app.controller';

@Module({
  imports: [ConfigModule.forRoot()],
  controllers: [AppController],
  providers: [
    {
      provide: 'HELLO_SERVICE',
      inject: [ConfigService],
      useFactory: (configService: ConfigService) =&amp;gt;
        ClientProxyFactory.create({
          transport: Transport.TCP,
          options: {
            host: configService.get('HELLO_SERVICE_HOST'),
            port: configService.get('HELLO_SERVICE_PORT'),
          },
        }),
    },
  ],
})
export class AppModule {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing to note from the file contents above is the import of the config module. This import allows the &lt;code&gt;ConfigService&lt;/code&gt; to be utilized throughout our application module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;imports: [ConfigModule.forRoot()];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next addition to the file is the &lt;code&gt;HELLO_SERVICE&lt;/code&gt; provider. This is where we use &lt;code&gt;ClientProxyFactory&lt;/code&gt; from the nest microservices library to create a service that allows us to make calls to our other microservice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  provide: 'HELLO_SERVICE',
  inject: [ConfigService],
  useFactory: (configService: ConfigService) =&amp;gt; ClientProxyFactory.create({
    transport: Transport.TCP,
    options: {
      host: configService.get('HELLO_SERVICE_HOST'),
      port: configService.get('HELLO_SERVICE_PORT'),
    },
  }),
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above snippet, we’re registering a ClientProxy instance under the provider key &lt;code&gt;HELLO_SERVICE&lt;/code&gt; that points to &lt;code&gt;HELLO_SERVICE_HOST&lt;/code&gt; listening on &lt;code&gt;HELLO_SERVICE_PORT&lt;/code&gt;. These two values come from the &lt;code&gt;ConfigService&lt;/code&gt; we imported earlier, which loads them from environment parameters. This kind of parameterization is crucial for running the service in multiple environments (like dev, staging, and production) without code changes.&lt;/p&gt;
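&lt;p&gt;In effect, those &lt;code&gt;configService.get()&lt;/code&gt; calls resolve to environment-variable lookups, so each environment can supply its own address for the downstream service. A hypothetical sketch of that resolution (the fallback values shown are illustrative assumptions, not defaults from the tutorial code):&lt;/p&gt;

```javascript
// Hypothetical sketch: resolving the downstream service address from
// environment variables, as the ConfigService does for us in practice.
function getServiceAddress(env) {
  return {
    host: env.HELLO_SERVICE_HOST || 'localhost',          // illustrative fallback
    port: parseInt(env.HELLO_SERVICE_PORT || '8080', 10), // illustrative fallback
  };
}

console.log(getServiceAddress({ HELLO_SERVICE_HOST: 'hello-api', HELLO_SERVICE_PORT: '9000' }));
// { host: 'hello-api', port: 9000 }
```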

&lt;p&gt;Now that we’ve successfully created our proxy instance, let’s open up &lt;code&gt;src/app.controller.ts&lt;/code&gt; and set it up with our proxy methods. Paste the following content into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Controller, Get, Inject, Param } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';

@Controller('hello')
export class AppController {
  constructor(@Inject('HELLO_SERVICE') private client: ClientProxy) {}

  @Get(':name')
  getHelloByName(@Param('name') name = 'there') {
    // Forwards the name to our hello service, and returns the results
    return this.client.send({ cmd: 'hello' }, name);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first thing you’ll see is that we’ve injected an instance of our client proxy into the controller. We registered the client under the key &lt;code&gt;HELLO_SERVICE&lt;/code&gt;, so this is the key we use to indicate which client instance we want injected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;constructor(
  @Inject('HELLO_SERVICE') private client: ClientProxy
) {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Armed with a client that points to our TCP microservice, we can start sending requests that match the &lt;code&gt;@MessagePattern&lt;/code&gt; we defined in the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Get(':name')
getHelloByName(@Param('name') name = 'there') {
  // Forwards the name to our hello service, and returns the results
  return this.client.send({ cmd: 'hello' }, name);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The handler above listens for incoming GET requests on &lt;code&gt;/hello/:name&lt;/code&gt;, formats and forwards the request to our downstream TCP-based microservice, and returns the results.&lt;/p&gt;
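&lt;p&gt;The flow can be illustrated end to end with the client proxy stubbed out. The stub below is a hypothetical stand-in: the real &lt;code&gt;client.send()&lt;/code&gt; transmits the pattern and payload over TCP and returns an Observable of the reply rather than a plain string:&lt;/p&gt;

```javascript
// Hypothetical stand-in for the ClientProxy; it replies synchronously the
// way our downstream 'hello' handler would.
const client = {
  send: (pattern, payload) => `Hello, ${payload}!`,
};

// Mirrors the controller method: the :name route param becomes the payload,
// defaulting to 'there' when no name is supplied.
function getHelloByName(name = 'there') {
  return client.send({ cmd: 'hello' }, name);
}

console.log(getHelloByName('world')); // Hello, world!
console.log(getHelloByName());        // Hello, there!
```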

&lt;p&gt;Just like with our downstream microservice, let’s create a Dockerfile for this new service so that it can be built into an image, run by other team members, and deployed to production. Since this is also a NestJS application, we can use the same Dockerfile we used with our previous service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start with a Node.js base image that uses Node v13
FROM node:13
WORKDIR /usr/src/app

# Copy the package.json file to the container and install fresh node_modules
COPY package*.json tsconfig*.json ./
RUN npm install

# Copy the rest of the application source code to the container
COPY src/ src/

# Transpile typescript and bundle the project
RUN npm run build

# Remove the original src directory (our new compiled source is in the `dist` folder)
RUN rm -r src

# Assign `npm run start:prod` as the default command to run when booting the container
CMD ["npm", "run", "start:prod"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running both services together
&lt;/h2&gt;

&lt;p&gt;As you may have noticed, we haven’t yet tested our new client service. While it also has an &lt;code&gt;npm run start:dev&lt;/code&gt; command like our TCP-based service, we need to make sure the TCP service is running and that its host/port values can be assigned as environment parameters in our client service. This means that deploying our client service involves a few extra steps beyond just &lt;code&gt;npm run start:dev&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;There aren’t many manual steps involved in running our two microservices locally, but would that still be true if our TCP service had its own set of dependencies? What happens if it needs a database, or access to another API? The set of manual steps required to deploy compounds with each new dependency. This kind of API dependency resolution is exactly what Architect.io was designed for, so we’re going to use it to ensure both our services can be run at the same time and automatically connect to each other with a single command.&lt;/p&gt;

&lt;p&gt;To make use of Architect.io to deploy both services in unison, we’ll create an &lt;code&gt;architect.yml&lt;/code&gt; file for each that describes it as a component. Architect.io Components are fully contained, deployable units that include both the details on how to run services and an inventory of the dependencies that each service requires. By capturing the set of dependencies, Architect.io can automatically deploy and resolve dependency relationships without needing to spin everything up in multiple steps.&lt;/p&gt;

&lt;p&gt;Let’s start with our TCP-based microservice. Go ahead and paste the following into an &lt;code&gt;architect.yml&lt;/code&gt; file at the root of the TCP service project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Meta data describing our component so others can discover and reference it
name: examples/nestjs-simple
description: Simple NestJS microservice that uses TCP for inter-process communication
keywords:
  - nestjs
  - examples
  - tcp
  - microservices

# List of microservices powering our component
services:
  api:
    # Specify where the source code is for the service
    build:
      context: ./
    # Specify the port and protocol the service listens on
    interfaces:
      main:
        port: 8080
        protocol: tcp
    # Mount our src directory to the container and use our dev command so we get hot-reloading
    debug:
      command: npm run start:dev
      volumes:
        src:
          host_path: ./src/
          mount_path: /usr/src/app/src/

# List of interfaces our component allows others to connect to
interfaces:
  main:
    description: Exposes the API to upstream traffic
    url: ${{ services.api.interfaces.main.url }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifest file above does three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Declares a name, description, and keywords for the component so that others can discover and refer to it&lt;/li&gt;
&lt;li&gt;Outlines the services our component needs in order to operate, and&lt;/li&gt;
&lt;li&gt;Declares interfaces that others can connect to from outside the component boundaries&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Armed with this simple manifest file, we can deploy our component locally and to the cloud without any further code changes. Let’s try it out by installing the CLI and testing out our component:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the Architect.io CLI
$ npm install -g @architect-io/cli

# Link the component to our local registry
$ architect link .
Successfully linked examples/nestjs-simple to local system at /Users/username/nestjs-microservice

# Deploy the component and expose the `main` interface on `http://app.localhost/`
$ architect dev examples/nestjs-simple:latest -i app:main
Using locally linked examples/nestjs-simple found at /Users/username/nestjs-microservice
http://app.localhost:80/ =&amp;gt; examples--nestjs-simple--api--latest--qkmybvlf
http://localhost:50000/ =&amp;gt; examples--nestjs-simple--api--latest--qkmybvlf
http://localhost:80/ =&amp;gt; gateway
Wrote docker-compose file to: /var/folders/7q/hbx8m39d6sx_97r00bmwyd9w0000gn/T/architect-deployment-1598910884362.yml

[9:56:15 PM] Starting compilation in watch mode...
examples--nestjs-simple--api--latest--qkmybvlf_1  |
examples--nestjs-simple--api--latest--qkmybvlf_1  | [9:56:22 PM] Found 0 errors. Watching for file changes.
examples--nestjs-simple--api--latest--qkmybvlf_1  |
examples--nestjs-simple--api--latest--qkmybvlf_1  | [Nest] 32   - 08/31/2020, 9:56:23 PM   [NestFactory] Starting Nest application...
examples--nestjs-simple--api--latest--qkmybvlf_1  | [Nest] 32   - 08/31/2020, 9:56:23 PM   [InstanceLoader] AppModule dependencies initialized +29ms
examples--nestjs-simple--api--latest--qkmybvlf_1  | [Nest] 32   - 08/31/2020, 9:56:23 PM   [NestMicroservice] Nest microservice successfully started +16ms
examples--nestjs-simple--api--latest--qkmybvlf_1  | Microservice listening on port: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we know our TCP-based service can be deployed via Architect.io, let’s go ahead and create a second component to represent our upstream, REST API. Since this component needs to connect to the previous one, we’ll be using Architect.io’s dependencies field in our &lt;code&gt;architect.yml&lt;/code&gt; file to indicate that we need the TCP service available to connect to. Paste the following into another &lt;code&gt;architect.yml&lt;/code&gt; file in the REST API project root directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# architect.yml
name: examples/nestjs-simple-client
description: Client used to test the connection to the simple NestJS microservice
keywords:
  - nestjs
  - examples
  - microservice
  - client

# Sets up the connection to our previous microservice
dependencies:
  examples/nestjs-simple: latest

services:
  client:
    build:
      context: ./
    interfaces:
      main: 3000
    environment:
      # Dynamically enriches our environment variables with the location of the other microservice
      HELLO_SERVICE_HOST: ${{ dependencies['examples/nestjs-simple'].interfaces.main.host }}
      HELLO_SERVICE_PORT: ${{ dependencies['examples/nestjs-simple'].interfaces.main.port }}
    debug:
      command: npm run start:dev
      volumes:
        src:
          host_path: ./src/
          mount_path: /usr/src/app/src/

# Exposes our new REST API to upstream traffic
interfaces:
  client:
    description: Exposes the REST API to upstream traffic
    url: ${{ services.client.interfaces.main.url }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just like with the prior component, let’s make sure we can deploy the new component with Architect.io.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Link the component to our local registry
$ architect link .
Successfully linked examples/nestjs-simple-client to local system at /Users/username/nestjs-microservice-client

# Deploy the component and expose the `main` interface on `http://app.localhost/`
$ architect dev examples/nestjs-simple-client:latest -i app:client
Using locally linked examples/nestjs-simple-client found at /Users/username/nestjs-microservice-client
Using locally linked examples/nestjs-simple found at /Users/username/nestjs-microservice
http://app.localhost:80/ =&amp;gt; examples--nestjs-simple-client--client--latest--qb0e6jlv
http://localhost:50000/ =&amp;gt; examples--nestjs-simple-client--client--latest--qb0e6jlv
http://localhost:50001/ =&amp;gt; examples--nestjs-simple--api--latest--qkmybvlf
http://localhost:80/ =&amp;gt; gateway
Wrote docker-compose file to: /var/folders/7q/hbx8m39d6sx_97r00bmwyd9w0000gn/T/architect-deployment-1598987651541.yml

[7:15:45 PM] Starting compilation in watch mode...
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  |
examples--nestjs-simple--api--latest--qkmybvlf_1            | [7:15:54 PM] Found 0 errors. Watching for file changes.
examples--nestjs-simple--api--latest--qkmybvlf_1            |
examples--nestjs-simple--api--latest--qkmybvlf_1            | [Nest] 31   - 09/01/2020, 7:15:55 PM   [NestFactory] Starting Nest application...
examples--nestjs-simple--api--latest--qkmybvlf_1            | [Nest] 31   - 09/01/2020, 7:15:55 PM   [InstanceLoader] AppModule dependencies initialized +18ms
examples--nestjs-simple--api--latest--qkmybvlf_1            | [Nest] 31   - 09/01/2020, 7:15:55 PM   [NestMicroservice] Nest microservice successfully started +9ms
examples--nestjs-simple--api--latest--qkmybvlf_1            | Microservice listening on port: 8080
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [7:15:55 PM] Found 0 errors. Watching for file changes.
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  |
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [NestFactory] Starting Nest application...
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [InstanceLoader] ConfigHostModule dependencies initialized +18ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [InstanceLoader] ConfigModule dependencies initialized +1ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [InstanceLoader] AppModule dependencies initialized +2ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [RoutesResolver] AppController {/hello}: +6ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [RouterExplorer] Mapped {/hello, GET} route +5ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [RouterExplorer] Mapped {/hello/:name, GET} route +2ms
examples--nestjs-simple-client--client--latest--qb0e6jlv_1  | [Nest] 30   - 09/01/2020, 7:15:56 PM   [NestApplication] Nest application successfully started +3ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, all it takes is one command to deploy the TCP-service, our upstream HTTP service, and enrich the networking so that both services are automatically talking to each other. The command below deploys the &lt;code&gt;examples/nestjs-simple-client&lt;/code&gt; component locally and exposes the client interface at &lt;code&gt;http://app.localhost/hello/world&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ architect dev examples/nestjs-simple-client:latest -i app:client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying to the cloud
&lt;/h2&gt;

&lt;p&gt;We now know how to run our stack of microservices locally in a repeatable way, but what about deploying to production-grade environments? How do we deploy all our services to AWS ECS or Kubernetes? How do we deal with networking and configuration of our services? Fortunately, Architect.io has this handled too! Since we already described our services as Architect.io Components, they are primed and ready to be deployed to production-grade container platforms without any additional work.&lt;/p&gt;

&lt;p&gt;Before you can deploy components to remote environments, you must &lt;a href="https://cloud.architect.io/signup" rel="noopener noreferrer"&gt;create an account with Architect.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you’ve successfully created your account, go ahead and click &lt;a href="https://cloud.architect.io/examples/components/nestjs-simple-client/deploy?tag=latest&amp;amp;interface=main%3Aclient" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to deploy it to a sample Kubernetes cluster powered by the Architect Cloud.&lt;/p&gt;

&lt;p&gt;If you’re already familiar with Architect.io, you can use the CLI instead. Go ahead and log in using Architect.io’s CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ architect login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we’re ready to deploy our component! Let’s go ahead and try out Architect.io’s public platform (&lt;code&gt;example-environment&lt;/code&gt;) so that we don’t need to create a cluster right away (be sure to replace &lt;code&gt;&amp;lt;account&amp;gt;&lt;/code&gt; with your account name). Just like deploying locally, deploying remotely is as simple as running &lt;code&gt;architect deploy&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ architect deploy examples/nestjs-simple-client:latest -i app:client --account="&amp;lt;account&amp;gt;" --environment="example-environment"
Creating deployment... done
Deployment ready for review: https://cloud.architect.io/&amp;lt;account&amp;gt;/environments/example-environment/deployments/&amp;lt;deployment-id&amp;gt;
? Would you like to apply? Yes
Deploying... done
Deployed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! That’s all it takes to take a locally runnable component and deploy it to a remote cluster with Architect.io. Once the deployment completes, you’ll be able to test it out live via a URL.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: You can register your own Kubernetes or ECS cluster on the platforms tab of your account. Then create an environment for that platform and try deploying again!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: We skipped the component registration step in this tutorial because we’ve already published these two example components to the registry. If you want to try publishing yourself, simply change the component names to include your account name as the prefix instead of examples and then run &lt;code&gt;architect register architect.yml&lt;/code&gt; in each project directory.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ready to learn more about Architect.io? Check out &lt;a href="https://docs.architect.io/getting-started/introduction#register-a-component" rel="noopener noreferrer"&gt;our docs&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>microservices</category>
      <category>node</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
