<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Laccetti</title>
    <description>The latest articles on DEV Community by Michael Laccetti (@mlaccetti).</description>
    <link>https://dev.to/mlaccetti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F465362%2F38ef385c-48ab-444a-a30b-baeb72138ef4.jpeg</url>
      <title>DEV Community: Michael Laccetti</title>
      <link>https://dev.to/mlaccetti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mlaccetti"/>
    <language>en</language>
    <item>
      <title>Building a Kubernetes-based Solution in a Hybrid Environment by Using KubeMQ</title>
      <dc:creator>Michael Laccetti</dc:creator>
      <pubDate>Thu, 11 Mar 2021 18:42:02 +0000</pubDate>
      <link>https://dev.to/mlaccetti/building-a-kubernetes-based-solution-in-a-hybrid-environment-by-using-kubemq-1ag5</link>
      <guid>https://dev.to/mlaccetti/building-a-kubernetes-based-solution-in-a-hybrid-environment-by-using-kubemq-1ag5</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As Kubernetes has become the de facto standard for deploying, managing, and scaling container-based workloads, it has introduced features that allow clusters to span multiple clouds and on-premises environments, and even connect clouds to edge computing deployments. Working across clouds and edges has been done via the Kubernetes federation project or third-party control planes that allow managing discrete clusters from one dashboard.&lt;/p&gt;

&lt;p&gt;However, Kubernetes is a platform for deploying applications and isn’t inherently aware of location, topologies, or architectural distribution. While it can provide scalability and connect clouds to the edge, it falls to the application that is built and deployed on Kubernetes to work within that environment. &lt;/p&gt;

&lt;p&gt;Requiring a developer to be aware of that level of detail pushes overhead and complexity too far into the development lifecycle. This is where a Messaging Platform abstraction layer comes into play.&lt;/p&gt;

&lt;p&gt;Let’s look at one solution to this problem, &lt;a href="https://kubemq.io/"&gt;KubeMQ&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  KubeMQ
&lt;/h1&gt;

&lt;p&gt;KubeMQ is a Kubernetes-native messaging platform that was specifically designed for these scenarios; it excels at connecting clusters and nodes across different locations. KubeMQ leverages &lt;strong&gt;Operators&lt;/strong&gt; to make deployment painless and to build on proven technologies, then uses &lt;strong&gt;Bridges&lt;/strong&gt; to connect different Kubernetes environments together, whether they are federations, discrete clusters, or deployments that span clouds and edges.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cross/Hybrid Cloud Bridges
&lt;/h1&gt;

&lt;p&gt;Two of the more common approaches to deploying Kubernetes in hybrid environments are cloud-to-cloud and cloud-to-on-prem. Whether this means using a single control plane like &lt;a href="https://rancher.com/why-rancher/___hybrid-multi-cloud/"&gt;Rancher&lt;/a&gt;, &lt;a href="https://platform9.com/managed-kubernetes/"&gt;Platform9&lt;/a&gt;, or &lt;a href="https://gardener.cloud/"&gt;Gardener&lt;/a&gt; to create multiple clusters that are managed from a single location, or utilizing &lt;a href="https://github.com/kubernetes-sigs/kubefed"&gt;Kubernetes federation&lt;/a&gt; to create a cluster that spans different regions, this model has become a key Kubernetes capability that has helped drive adoption.&lt;/p&gt;

&lt;p&gt;However, the ability to create hybrid deployments comes with additional complexity, especially when managing applications deployed across these clusters and enabling their services to communicate with one another. &lt;/p&gt;

&lt;p&gt;For example, imagine a scenario where most of the transactional systems are hosted in AWS. They were originally built utilizing Lambdas and Fargate, and there is no appetite to move them. However, the organization has built out most of the analytical capabilities within Google’s Cloud Platform since the data team was more familiar with BigQuery and Data Studio. This new data platform is built utilizing GKE - Google’s managed Kubernetes offering - and is combined with native ingestion of Google Analytics information.&lt;/p&gt;

&lt;p&gt;Modifying the existing systems in such a way that they would be aware of the partition would add complexity and overhead. Instead, it makes more sense to externalize that and build on top of a platform that will distribute messages and data across multiple clouds without changing the system itself, which is where KubeMQ comes into play - helping to bridge clouds without the systems having awareness.&lt;/p&gt;

&lt;p&gt;KubeMQ Bridges offer many different ways of transferring information from one cluster to another via topologies. KubeMQ Bridges support 1:1 messaging across clusters, replication for 1:n, aggregation for n:1, and transformation for m:n messaging. KubeMQ isn’t limited to simple message delivery, either; it can also transform messages as part of cross-cluster communication.&lt;/p&gt;
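&lt;p&gt;To make those topology terms concrete, here is a toy Python sketch (purely illustrative; the class and method names are invented, not KubeMQ’s API) of two of the patterns: replicating one message to many clusters (1:n) and aggregating many clusters into one (n:1):&lt;/p&gt;

```python
# Toy model of bridge topologies; illustrative only, not the KubeMQ API.
from collections import defaultdict

class ToyBridge:
    def __init__(self):
        # cluster name -> list of messages delivered to that cluster
        self.queues = defaultdict(list)

    def replicate(self, message, targets):
        """1:n - copy a single source message to every target cluster."""
        for cluster in targets:
            self.queues[cluster].append(message)

    def aggregate(self, sources, target):
        """n:1 - drain messages from many clusters into a single target."""
        for cluster in sources:
            self.queues[target].extend(self.queues.pop(cluster, []))

bridge = ToyBridge()
bridge.replicate({"event": "order-created"}, ["aws-cluster", "gcp-cluster"])
bridge.aggregate(["aws-cluster", "gcp-cluster"], "analytics-cluster")
print(len(bridge.queues["analytics-cluster"]))  # → 2
```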

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0KNVbQqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5e2thg2wff6tp6h1ehr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0KNVbQqB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5e2thg2wff6tp6h1ehr.png" alt="KubeMQ Bridges - Binding Multiple Clusters Together"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;KubeMQ Bridges - Binding Multiple Clusters Together&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This allows for the construction of architectures where information can be loaded from databases, caches, and services in one cluster then distributed to another cluster for further processing and storage. This only requires some YAML configuration for the KubeMQ Bridge. &lt;/p&gt;

&lt;p&gt;For example, here is a sample configuration for a Bridge, with Sources (within the current cluster) and Targets (in external clusters). It takes information from a source RPC query on one cluster and sends it to the target RPC destination for processing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bindings:
  - name: clusters-sources # unique binding name
    properties: # binding properties, such as middleware configurations
      log_level: error
      retry_attempts: 3
      retry_delay_milliseconds: 1000
      retry_max_jitter_milliseconds: 100
      retry_delay_type: "back-off"
      rate_per_second: 100
    sources:
      kind: source.query # Sources kind
      name: name-of-sources # sources name 
      connections: # Array of connections settings per each source kind
        - .....
    targets:
      kind: target.query # Targets kind
      name: name-of-targets # targets name
      connections: # Array of connections settings per each target kind
        - .....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;KubeMQ’s value isn’t limited to connecting large clusters and architectures together, however. It can also be used in environments where data needs to be collected from a variety of devices and computation needs to be performed as close to those devices as possible, such as at the edge.&lt;/p&gt;

&lt;h1&gt;
  
  
  Edge Computing
&lt;/h1&gt;

&lt;p&gt;Kubernetes has also been making inroads in connecting cloud environments with the edge: specifically, deployments where computation moves as close to the user as possible. &lt;/p&gt;

&lt;p&gt;Typically, edge computing nodes are more resource-constrained, leading to different resource usage patterns where part of the workload occurs as close to the end-user as possible. Heavier or less time-constrained processing happens upstream in the cloud.&lt;/p&gt;

&lt;p&gt;A sample edge computing scenario that could benefit from better-connected topologies is a modern security camera like the Eufy Indoor Cam 2K. AI-driven functionality is integrated into the hardware to help classify the type of activity being monitored, such as whether a human or pet is within the frame. &lt;/p&gt;

&lt;p&gt;However, the actual video storage and transcoding to different formats cannot take place on the device itself and require computational resources upstream. Building a stream-based architecture where the edge node is close to the consumer results in a better user experience, which in turn drives conversion to a paying subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kubeedge.io/"&gt;KubeEdge&lt;/a&gt;&lt;/strong&gt; is a Cloud Native Computing Foundation (CNCF) project that is designed to operate in this space. It helps deploy container-based applications in the cloud and at the edge and also integrates IoT devices that support the MQTT protocol. (Note: MQTT is actually the name of the protocol, not an acronym.)&lt;/p&gt;
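&lt;p&gt;Since MQTT routing is topic-based, it helps to see how broker topic filters match device topics. Here is a simplified Python sketch of MQTT’s wildcard matching rules (&lt;code&gt;+&lt;/code&gt; matches exactly one level, &lt;code&gt;#&lt;/code&gt; matches the remainder); real brokers also enforce rules this sketch skips, such as &lt;code&gt;#&lt;/code&gt; being allowed only as the final level:&lt;/p&gt;

```python
# Minimal MQTT topic-filter matcher, illustrating the protocol's wildcard rules.
def topic_matches(filter_str: str, topic: str) -> bool:
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                # multi-level wildcard: matches everything left
            return True
        if i >= len(t_parts):       # topic ran out of levels before the filter did
            return False
        if f != "+" and f != t_parts[i]:   # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("cameras/+/motion", "cameras/front-door/motion"))  # → True
print(topic_matches("cameras/#", "cameras/front-door/battery"))        # → True
print(topic_matches("cameras/+/motion", "cameras/front-door/battery")) # → False
```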

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ff2BH9gE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jm251fn8ppl0igqgtml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ff2BH9gE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jm251fn8ppl0igqgtml.png" alt="KubeEdge and MQTT for Cloud-Edge Computing"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;KubeEdge and MQTT for Cloud-Edge Computing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;KubeMQ was also designed to operate in this environment, on both sides of the KubeEdge deployment within the cloud and on the edge nodes. KubeMQ has extremely low resource consumption, which allows it to run on &lt;a href="https://docs.kubemq.io/#kubernetes-ready"&gt;Edge-specific Kubernetes distributions&lt;/a&gt;, including &lt;a href="https://k3s.io/"&gt;K3s&lt;/a&gt; and MicroK8s. This allows KubeMQ to create a bridge between edge nodes and more powerful compute nodes running in a cloud, pushing messages upstream as required. &lt;/p&gt;

&lt;p&gt;KubeMQ also &lt;a href="https://docs.kubemq.io/configuration/connectors/kubemq-sources/messaging/mqtt"&gt;provides a source&lt;/a&gt; to process MQTT messages directly from the broker, allowing IoT devices to participate in a larger microservices-based architecture without additional effort. This allows users to create systems that can collect and aggregate information from tens of thousands of hardware devices (or more, due to the scalable nature of Kubernetes and KubeMQ) and build near real-time capabilities right at the edge.&lt;/p&gt;

&lt;p&gt;Here’s an example of creating an MQTT source within KubeMQ that will allow the processing of events from an MQTT broker. In this configuration, messages from the local MQTT broker will be consumed and sent in real-time to an event subscriber for consumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bindings:
  - name: mqtt-kubemq-event
    source:
      kind: messaging.mqtt
      name: mqtt-source
      properties:
        host: "localhost:1883"
        dynamic_map: "true"
        topic: "queue"
        username: "username"
        password: "password"
        client_id: "client_id"
        qos: "0"
    target:
      kind: kubemq.events
      name: target-kubemq-events
      properties:
        address: "kubemq-cluster:50000"
        client_id: "kubemq-http-connector"
        channel: "events.mqtt"
    properties:
      log_level: "info"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using KubeMQ as a messaging platform allows for the seamless creation and management of systems that operate at the edge and helps ensure that projects will not stall when addressing resource-constrained deployments or environments.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;With Kubernetes becoming the de facto container deployment system, there has been a shift to hybrid cloud computing. Whether that is implemented as a single control plane managing multiple clusters or creating a federated cluster that operates across regions, many organizations are looking into hybrid clouds for the future. &lt;/p&gt;

&lt;p&gt;In fact, &lt;a href="https://www.bain.com/insights/hybrid-cloud-future-tech-report-2020/"&gt;at least two-thirds of CIOs&lt;/a&gt; have stated an intent to utilize hybrid clouds: to maximize savings or increase resiliency. Other organizations have embraced deployments that span clouds and on-prem hardware to add capacity, handle privacy or security concerns, or deal with scenarios where a core system must remain on-prem, but the rest of the services can be deployed elsewhere.&lt;/p&gt;

&lt;p&gt;In addition, more organizations are shipping devices whose computational resources are co-located with the customer, creating scenarios where organizations need to bridge their cloud deployments with their edge computing needs. &lt;/p&gt;

&lt;p&gt;Having a messaging platform that was purpose-built on the underlying Kubernetes platform is a crucial component of a successful deployment and operational model in these environments. KubeMQ bridges allow the creation of microservice-based architectures that span on-prem, clouds, and edge devices, helping to create message-driven systems that scale up reliably.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>messaging</category>
      <category>cloudnative</category>
      <category>edgecomputing</category>
    </item>
    <item>
      <title>How KubeMQ customers build scalable messaging platforms with Kubernetes Operators</title>
      <dc:creator>Michael Laccetti</dc:creator>
      <pubDate>Thu, 04 Mar 2021 16:03:11 +0000</pubDate>
      <link>https://dev.to/mlaccetti/how-kubemq-customers-build-scalable-messaging-platforms-with-kubernetes-operators-4fg5</link>
      <guid>https://dev.to/mlaccetti/how-kubemq-customers-build-scalable-messaging-platforms-with-kubernetes-operators-4fg5</guid>
      <description>&lt;p&gt;Over the last several years, the adoption of Kubernetes has increased tremendously. In fact, according to a Cloud Native Computing Foundation (CNCF) survey, 78% of respondents in late 2019 were using Kubernetes in production. Leveraging Kubernetes allows organizations to create a management layer to commodify clouds themselves and build cross- or hybrid-cloud deployments that hide the provider-specific implementation details from the rest of the team.&lt;/p&gt;

&lt;p&gt;One crucial part of the Kubernetes ecosystem is Operators — a tool initially introduced by CoreOS in 2016 that uses the Kubernetes APIs themselves to deploy and manage the state of applications. Operators are a critical part of deploying and operating applications in a cross/hybrid environment. They can help manage and maintain state across a federated Kubernetes deployment (multiple Kubernetes clusters running together) or even across discrete clusters.&lt;/p&gt;

&lt;p&gt;But what exactly are Operators, and how do they help manage these stateful applications? Let’s take a look at Operators in detail, how they work within Kubernetes, and how the KubeMQ messaging platform uses Operators to help you build complex and scalable messaging services with minimal coding and overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Operators?
&lt;/h2&gt;

&lt;p&gt;At a high level, Operators allow you to automate tasks beyond what Kubernetes natively provides. Operators are software extensions that hook into Kubernetes APIs and the control plane to manage a custom resource (CR) — or an extension to the Kubernetes API. The CR describes the desired state of the application, and the control plane component (the Operator itself) monitors the CR to ensure that the application is running as expected.&lt;/p&gt;
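&lt;p&gt;The control loop described above can be sketched in a few lines of Python. This is a generic illustration of the reconcile pattern (the field names are invented; it is not code from any real Operator):&lt;/p&gt;

```python
# Generic sketch of an Operator's reconcile step (illustration, not real SDK code).
def reconcile(desired: dict, observed: dict) -> list:
    """Compare the CR's desired state to the observed state and emit actions."""
    actions = []
    diff = desired["replicas"] - observed.get("replicas", 0)
    if diff > 0:
        actions.append("scale up by %d" % diff)
    elif diff != 0:
        actions.append("scale down by %d" % -diff)
    if desired.get("version") != observed.get("version"):
        actions.append("roll out version %s" % desired["version"])
    return actions  # an empty list means the world already matches the CR

print(reconcile({"replicas": 3, "version": "2.1"},
                {"replicas": 1, "version": "2.0"}))
# → ['scale up by 2', 'roll out version 2.1']
```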

&lt;p&gt;For example, an Operator might deploy and scale a pod, take and restore a backup, manage network services and ingresses, or manage a persistent data store.&lt;/p&gt;

&lt;p&gt;Since Operators hook into native Kubernetes tools like kubectl, they become a common language for managing complex deployments where state is involved. Using a Helm chart is excellent for deploying and managing a stateless application like a web server. Still, for deploying stateful systems like etcd, PostgreSQL, and KubeMQ, Operators are the key to success.&lt;/p&gt;

&lt;h2&gt;
  
  
  KubeMQ — Kubernetes-Native Messaging Platform
&lt;/h2&gt;

&lt;p&gt;Before we dive into understanding why Operators are an essential ingredient to KubeMQ’s success, let’s first examine the benefits of KubeMQ.&lt;/p&gt;

&lt;p&gt;At a high level, KubeMQ isn’t just a message broker or queue but a messaging platform that can be used to build a message-based architecture that works across multi- or hybrid-clouds and edge computing. KubeMQ allows services in these environments to communicate with each other in any messaging pattern — pub/sub, streams, queues, and so on. KubeMQ is Kubernetes-native and can be deployed in less than a minute.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core components
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp23ds9mqevd51jmyf4k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp23ds9mqevd51jmyf4k3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The KubeMQ messaging platform comprises four core components — server, bridges, sources, and targets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KubeMQ server is deployed to each Kubernetes cluster, which handles the message processing.&lt;/li&gt;
&lt;li&gt;KubeMQ bridges help to build the desired messaging topology across clusters, sending messages between KubeMQ servers.&lt;/li&gt;
&lt;li&gt;KubeMQ sources ingest messages from existing systems like RabbitMQ or Kafka into the KubeMQ platform.&lt;/li&gt;
&lt;li&gt;KubeMQ targets send messages to external systems, like Redis, PostgreSQL, S3, or other messaging systems like ActiveMQ.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Combining all of these components allows for creating cross-cluster messaging and a no-code, message-based microservices architecture on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does KubeMQ Use Operators to Succeed?
&lt;/h2&gt;

&lt;p&gt;So how does KubeMQ work in the world of Operators?&lt;/p&gt;

&lt;p&gt;First, KubeMQ deploys as an Operator to ensure that it can operate at a native Kubernetes level. One of the tenets of utilizing KubeMQ is that it is better to create many small clusters and bind them together rather than creating one massive cluster.&lt;/p&gt;

&lt;p&gt;This allows for better performance, scalability, and resilience. One key to success with this approach is utilizing Operators as the deployment and management tool for KubeMQ. The Operator deploys the clusters and ensures that the various KubeMQ bridges, sources, and targets are configured correctly for each cluster. KubeMQ itself is written in Go, which makes it fast, hooks it into native Kubernetes data models, events, and APIs, simplifies managing the state of the clusters, and allows for easier configuration validation.&lt;/p&gt;

&lt;p&gt;Deploying as an Operator also helps KubeMQ keep overhead to a minimum. For example, a large financial company with high volumes of real-time messages for price quotes, transactions, and client funding leverages KubeMQ to decrease the number of servers previously required to fulfill its needs. It has also allowed the company to reallocate operational effort to higher-value tasks, rather than monitoring and maintaining messaging infrastructure. Similarly, the company has leveraged the KubeMQ Operator to elastically scale its infrastructure based on load. For example, when markets close, demand drops, and the clusters can scale down accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross/Hybrid Cloud and Operators
&lt;/h2&gt;

&lt;p&gt;The KubeMQ Operator also helps track state — a key reason for leveraging KubeMQ for reliable cross/hybrid cloud deployments. First, this state can validate that the desired capacity and configuration are in place for each cluster. Comparing the desired state in the CR against the existing state in Kubernetes allows the Operator to ensure that failures are caught and addressed, capacity is added as required, and the various bridges, sources, and targets are configured.&lt;/p&gt;

&lt;p&gt;One KubeMQ customer in the agricultural vertical heavily leverages this feature of KubeMQ to run their messaging platform across edge computing systems along with cloud deployments. This provides better performance and reliability and allows them to grow their business with new services without interruptions or downtime. They simply create new clusters and configure them as required. Knowing that the KubeMQ operator validates the CR definition helps to prevent deploying clusters with faulty configurations.&lt;/p&gt;
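&lt;p&gt;Schematically, that kind of deploy-time CR validation looks like the following sketch (the field names and the quorum rule here are hypothetical, chosen for illustration; real KubeMQ CRDs differ):&lt;/p&gt;

```python
# Deploy-time validation of a cluster custom resource.
# Field names and the quorum rule are hypothetical, for illustration only.
def validate_cluster_cr(cr: dict) -> list:
    """Return a list of problems; an empty list means the CR is deployable."""
    errors = []
    if not cr.get("metadata", {}).get("name"):
        errors.append("metadata.name is required")
    replicas = cr.get("spec", {}).get("replicas", 0)
    if replicas % 2 == 0:
        errors.append("spec.replicas should be a positive odd number for quorum")
    return errors

# An even replica count gets flagged before anything reaches the cluster.
print(validate_cluster_cr({"metadata": {"name": "edge-cluster"},
                           "spec": {"replicas": 2}}))
```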

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Operators are critical for managing and deploying complex Kubernetes systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They operate utilizing Kubernetes-native APIs and systems.&lt;/li&gt;
&lt;li&gt;An Operator’s CR definition provides configuration management and validation at deployment time.&lt;/li&gt;
&lt;li&gt;They track the desired state against the existing state and act accordingly if there are any deltas between them.&lt;/li&gt;
&lt;li&gt;Operators are well-known and understood at most organizations, ensuring that no additional training is required for operational success.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;KubeMQ heavily leverages the Operator model to help customers succeed in building out complex and scalable messaging platforms with minimal coding and overhead. This helps to deliver greater business value by allowing organizations to solve business problems quickly and efficiently without spending time and resources on managing and maintaining messaging infrastructure.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>messaging</category>
      <category>kubemq</category>
      <category>operators</category>
    </item>
    <item>
      <title>Bringing Your (Encryption) Keys to Multi/Hybrid Clouds</title>
      <dc:creator>Michael Laccetti</dc:creator>
      <pubDate>Tue, 08 Sep 2020 02:07:52 +0000</pubDate>
      <link>https://dev.to/mlaccetti/bringing-your-encryption-keys-to-multi-hybrid-clouds-29kj</link>
      <guid>https://dev.to/mlaccetti/bringing-your-encryption-keys-to-multi-hybrid-clouds-29kj</guid>
      <description>&lt;h1&gt;
  
  
  Tools and Setup
&lt;/h1&gt;

&lt;p&gt;Before we dive into the fun part of getting keys shared amongst cloud providers, there are a variety of tools required to get this tutorial working. First, you’ll need to &lt;a href="https://www.vaultproject.io/docs/install"&gt;download and install Vault&lt;/a&gt;, then get it &lt;a href="https://learn.hashicorp.com/tutorials/vault/getting-started-dev-server?in=vault/getting-started"&gt;up and running&lt;/a&gt;. You will also need to install &lt;code&gt;cURL&lt;/code&gt; and &lt;code&gt;OpenSSL&lt;/code&gt; — these usually come pre-installed with most Linux OSs, and are available via most package managers (&lt;code&gt;apt&lt;/code&gt;, &lt;code&gt;yum&lt;/code&gt;, &lt;code&gt;brew&lt;/code&gt;, &lt;code&gt;choco&lt;/code&gt;/&lt;code&gt;scoop&lt;/code&gt;, etc.). Our examples also use &lt;code&gt;head&lt;/code&gt; and &lt;code&gt;diff&lt;/code&gt;, which are part of the &lt;code&gt;coreutils&lt;/code&gt; and &lt;code&gt;diffutils&lt;/code&gt; packages under Ubuntu; you can either find a similar package for your OS or find a manual workaround for those portions. Next, install the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"&gt;AWS command line tools&lt;/a&gt; (CLI) and make sure you &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html"&gt;configure the CLI&lt;/a&gt; to connect to your account. The last step is to &lt;a href="https://devcenter.heroku.com/articles/heroku-cli#download-and-install"&gt;install&lt;/a&gt; and &lt;a href="https://devcenter.heroku.com/articles/heroku-cli#getting-started"&gt;configure&lt;/a&gt; the Heroku CLI.&lt;/p&gt;

&lt;p&gt;One last note — the Heroku feature to utilize keys from AWS requires a private or shield database plan, so please ensure your account has been configured accordingly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;In today’s hyperconnected world, the former approach of locking services behind Virtual Private Networks (VPNs) or within a demilitarized zone (DMZ) is no longer secure. Instead, we must operate on a zero-trust network model, where every actor must be assumed as malicious. This means that a focus on encryption — both at rest and in transit — along with identity and access management is critical to ensuring that systems can interact with each other.&lt;/p&gt;

&lt;p&gt;One of the most important parts of the encryption process is the keys used to encrypt and decrypt information or used to validate identity. A recent approach to this need is called Bring Your Own Key (BYOK) — where you as the customer/end user own and manage your key, and provide it to third parties (notably cloud providers) for usage. However, before we dig into what BYOK is and how we can best leverage it, let’s have a quick recap on key management.&lt;/p&gt;

&lt;h1&gt;
  
  
  Key Management
&lt;/h1&gt;

&lt;p&gt;At a high level, key management is the mechanism by which keys are generated, validated, and revoked — manually and as part of workflows. Another function of key management is ensuring that the root certificate, which serves as the source of all trust, is kept especially well protected, since revoking a root certificate would render the entire tree of certificates issued by it invalid.&lt;/p&gt;

&lt;p&gt;One of the more popular tools used for key management is &lt;a href="https://www.vaultproject.io/"&gt;HashiCorp’s Vault&lt;/a&gt; — specifically designed for a world of low trust and dynamic infrastructure, where key ages can be measured in minutes or hours, rather than years. It includes functionality to manage secrets, encryption, and identity-based access, provides many ways to interact with it (CLI, API, web-based UI), and can connect to many different providers through plugins. This article will not focus on how to deploy Vault in a secure fashion, but rather on the use cases that Vault can offer around BYOK and how to consume the keys in multiple cloud environments.&lt;/p&gt;

&lt;p&gt;A key feature of using Vault is that it functions in an infrastructure- and provider-agnostic fashion — it can be used to provision and manage keys across different systems and clouds. At the same time, Vault can be used to encrypt and decrypt information without exposing keys to users, allowing for greater security.&lt;/p&gt;
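&lt;p&gt;As a small example of that encrypt-without-exposing-keys workflow: Vault’s transit engine expects the plaintext to be base64-encoded in the request body. Here is a minimal Python sketch that builds such a request body (the HTTP call to the transit endpoint itself is omitted; this shows only the encoding step):&lt;/p&gt;

```python
import base64
import json

# Vault's transit encrypt endpoint takes base64-encoded plaintext,
# so the raw bytes never need to be valid UTF-8 or JSON themselves.
def transit_encrypt_body(plaintext: bytes) -> str:
    return json.dumps({"plaintext": base64.b64encode(plaintext).decode()})

print(transit_encrypt_body(b"my secret"))  # → {"plaintext": "bXkgc2VjcmV0"}
```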

&lt;h1&gt;
  
  
  BYOK on Multi Cloud
&lt;/h1&gt;

&lt;p&gt;At this point, we’d like to dive into a specific use-case — demonstrating how you can create and ingest your own keys into multiple clouds, focusing on Amazon Web Services (AWS) and Heroku. For our purposes, we’ll start with uploading our keys to AWS KMS using Amazon’s CLI and demonstrating how the keys can be used within AWS. We will then rotate the keys manually — ideally, this is automated in a production implementation. Finally, we will utilize the generated keys in Heroku to encrypt a Postgres database.&lt;/p&gt;

&lt;p&gt;📝 Note&lt;/p&gt;

&lt;p&gt;Before we get started, ensure that Vault is &lt;a href="https://learn.hashicorp.com/tutorials/vault/getting-started-install"&gt;installed&lt;/a&gt; and &lt;a href="https://learn.hashicorp.com/tutorials/vault/getting-started-dev-server?in=vault/getting-started"&gt;running&lt;/a&gt;, and you have followed the &lt;a href="https://www.vaultproject.io/docs/secrets/transit#setup"&gt;Vault transit secrets engine setup guide&lt;/a&gt;, since we’ll use that to generate the keys we upload to AWS. Once you have validated that Vault is running and the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"&gt;AWS CLI is installed&lt;/a&gt;, it is time to get started.&lt;/p&gt;

&lt;h1&gt;
  
  
  Create Key in Vault
&lt;/h1&gt;

&lt;p&gt;First, we need to create our encryption key within Vault:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault write -f transit/keys/byok-demo-key exportable=true allow\_plaintext\_backup=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To export the key, we can use the Vault UI (&lt;a href="http://localhost:8200/ui/vault/secrets/transit/actions/byok-demo-key?action=export"&gt;http://localhost:8200/ui/vault/secrets/transit/actions/byok-demo-key?action=export&lt;/a&gt;), or use &lt;code&gt;cURL&lt;/code&gt;, since the Vault CLI doesn’t support exporting keys directly. The 127.0.0.1 address points at the Vault server — a production setup would be neither localhost nor unencrypted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl — header “X-Vault-Token: &amp;lt;token&amp;gt;” [http://127.0.0.1:8200/v1/transit/export/encryption-key/byok-demo-key/1](http://127.0.0.1:8200/v1/transit/export/encryption-key/byok-demo-key/1)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This command will output a base64-encoded plaintext version of the key, which we will upload to AWS KMS. Save the base64 plaintext key in a file — we used &lt;code&gt;vault_key.b64&lt;/code&gt;.&lt;/p&gt;
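&lt;p&gt;If you script this step, note that the export endpoint returns JSON with the key material keyed by key version. Here is a short Python sketch that pulls the key out and saves it (this assumes the &lt;code&gt;data.keys&lt;/code&gt; response shape of the transit export API; the sample payload below is invented):&lt;/p&gt;

```python
import json

# Extract the exported key material from a Vault transit export response.
# The sample payload below is invented; real key material is much longer.
def extract_key(response_body: str, version: str = "1") -> str:
    payload = json.loads(response_body)
    return payload["data"]["keys"][version]

sample = json.dumps({"data": {"name": "byok-demo-key",
                              "keys": {"1": "cGxhY2Vob2xkZXIta2V5LW1hdGVyaWFs"}}})
with open("vault_key.b64", "w") as f:
    f.write(extract_key(sample))
```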

&lt;h1&gt;
  
  
  Upload Key to AWS KMS
&lt;/h1&gt;

&lt;p&gt;Now, we need to generate a Customer Master Key (CMK) with no key material:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\# create the keyaws kms create-key — origin EXTERNAL — description “BYOK Demo” — key-usage ENCRYPT\_DECRYPT\# give it a nice nameaws kms create-alias — alias-name alias/byok-demo-key — target-key-id &amp;lt;from above&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Copy down the key ID from the output, and download the public key and import token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms get-parameters-for-import — key-id &amp;lt;from above&amp;gt; — wrapping-algorithm RSAES\_OAEP\_SHA\_1 — wrapping-key-spec RSA\_2048
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Copy the public key and import token from the output of the above step into separate files (imaginatively, we used &lt;code&gt;import_token.b64&lt;/code&gt; and &lt;code&gt;public_key.b64&lt;/code&gt; as the filenames), then &lt;code&gt;base64&lt;/code&gt; decode them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl enc -d -base64 -A -in public\_key.b64 -out public\_key.binopenssl enc -d -base64 -A -in import\_token.b64 -out import\_token.bin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;With the import token and public key downloaded, we can now use them to wrap the key from vault, first by converting the key to the OpenSSL byte format, then encrypting it using the public key from KMS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# convert the vault key to bytes
openssl enc -d -base64 -A -in vault_key.b64 -out vault_key.bin
# encrypt the vault key with the KMS key
openssl rsautl -encrypt -in vault_key.bin -oaep -inkey public_key.bin -keyform DER -pubin -out encrypted_vault_key.bin
# import the encrypted key into KMS
aws kms import-key-material --key-id &amp;lt;from above&amp;gt; --encrypted-key-material fileb://encrypted_vault_key.bin --import-token fileb://import_token.bin --expiration-model KEY_MATERIAL_EXPIRES --valid-to $(date --iso-8601=ns --date='364 days')
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
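The wrapping step is plain RSAES-OAEP with SHA-1 over the raw key bytes, so it can be sanity-checked locally before involving AWS at all. The sketch below (Python, using the `cryptography` package) generates a throwaway RSA-2048 key pair standing in for the KMS wrapping key, wraps 32 bytes of key material the same way `openssl rsautl -encrypt -oaep` does, and verifies the round trip; all names are illustrative:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Throwaway RSA-2048 key pair standing in for the KMS wrapping key.
wrapping_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = wrapping_key.public_key()

# 32 bytes of key material, as exported from Vault's transit engine.
vault_key = os.urandom(32)

# RSAES-OAEP with SHA-1, matching --wrapping-algorithm RSAES_OAEP_SHA_1
# and the `openssl rsautl -encrypt -oaep` invocation above.
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA1()),
    algorithm=hashes.SHA1(),
    label=None,
)
encrypted_vault_key = public_key.encrypt(vault_key, oaep)

# KMS holds the private half, so only KMS can recover the material;
# here we unwrap locally just to prove the round trip.
assert wrapping_key.decrypt(encrypted_vault_key, oaep) == vault_key
assert len(encrypted_vault_key) == 256  # RSA-2048 ciphertext is 256 bytes
```

In the real flow, `public_key` comes from `get-parameters-for-import` and the private half never leaves KMS.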



&lt;p&gt;Now that the key has been uploaded, we can quickly encrypt and decrypt via the CLI to validate that the key is functioning properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# generate some random text to a file
head /dev/urandom | tr -dc A-Za-z0-9 | head -c 1024 &amp;gt; encrypt_me.txt
# encrypt the file
aws kms encrypt --key-id &amp;lt;from above&amp;gt; --plaintext fileb://encrypt_me.txt --output text --query CiphertextBlob | base64 --decode &amp;gt; encrypted_file.bin
# decrypt the file
aws kms decrypt --key-id &amp;lt;from above&amp;gt; --ciphertext-blob fileb://encrypted_file.bin --output text --query Plaintext | base64 --decode &amp;gt; decrypted.txt
# validate they match; should produce no output
diff encrypt_me.txt decrypted.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this point, the key is in place and can be used to encrypt data at rest or in transit in different parts of AWS. One of the easiest and most important places to encrypt data is in S3; files sitting at rest in storage should always be encrypted. When creating a bucket, make sure you enable server-side encryption, select KMS as the key type, then choose the specific key you created from the KMS master key dropdown:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2hmzVlx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cquk1kthbrxxk8zgmbys.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2hmzVlx---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cquk1kthbrxxk8zgmbys.jpg" alt="AWS S3 Bucket Settings Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, any objects that are put in the bucket will automatically be encrypted and then decrypted when they are read. This same approach can be used for encrypting databases — for a complete list of the different services that integrate with KMS, you can see the list &lt;a href="https://aws.amazon.com/kms/features/#AWS_Service_Integration"&gt;here&lt;/a&gt;.&lt;/p&gt;
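Under the hood, these KMS-integrated services do not encrypt each object with the CMK directly; they use envelope encryption: a fresh data key encrypts the payload, and only a wrapped copy of that data key is stored alongside it. A local Python sketch of the pattern (the AES-GCM `master_key` here simulates the CMK, which in reality never leaves KMS):

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the CMK inside KMS: in real use, GenerateDataKey and
# Decrypt happen inside KMS and this key is never exposed.
master_key = AESGCM(AESGCM.generate_key(bit_length=256))

def generate_data_key():
    """Mimic kms:GenerateDataKey: return (plaintext key, wrapped key)."""
    plaintext_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    wrapped = nonce + master_key.encrypt(nonce, plaintext_key, None)
    return plaintext_key, wrapped

def unwrap_data_key(wrapped):
    """Mimic kms:Decrypt on the wrapped data key."""
    nonce, ciphertext = wrapped[:12], wrapped[12:]
    return master_key.decrypt(nonce, ciphertext, None)

# Encrypt an object the way an SSE-KMS-style service would.
data_key, wrapped_key = generate_data_key()
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"object contents", None)
# Store (wrapped_key, nonce, ciphertext); discard the plaintext data key.

# On read: unwrap the data key via "KMS", then decrypt the object.
plaintext = AESGCM(unwrap_data_key(wrapped_key)).decrypt(nonce, ciphertext, None)
assert plaintext == b"object contents"
```

This is why deleting the CMK is so destructive: without it, none of the per-object data keys can ever be unwrapped again.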

&lt;h1&gt;
  
  
  Use KMS Key in Heroku
&lt;/h1&gt;

&lt;p&gt;Our next step is to use the key we generated and uploaded to AWS in Heroku! For our purposes, we’ll encrypt a Postgres database during creation. To do this, we need to grant Heroku’s AWS account access to the key that we created. This can be done via the AWS UI when creating a key, via the CLI, or via a policy.&lt;/p&gt;

&lt;p&gt;During the key creation wizard in AWS, step four will ask to define key usage permissions; there will be a separate section that allows you to add AWS account IDs to the key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xJGAHxUO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l8ospqqcyc5cd9mp9ktk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xJGAHxUO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/l8ospqqcyc5cd9mp9ktk.png" alt="AWS KMS Customer Managed Keys Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Type in the Heroku AWS account ID (&lt;code&gt;021876802972&lt;/code&gt;) and finish the wizard. If you want to use the CLI to achieve this, you need to update the default policy for the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# get the existing key policy
aws kms get-key-policy --policy-name default --key-id &amp;lt;from above&amp;gt; --output text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Save the output from the above into a text file called &lt;code&gt;heroku-kms-policy.json&lt;/code&gt; and add the following two statements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::021876802972:root"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
},
{
  "Sid": "Allow attachment of persistent resources",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::021876802972:root"
  },
  "Action": [
    "kms:CreateGrant",
    "kms:ListGrants",
    "kms:RevokeGrant"
  ],
  "Resource": "*",
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": "true"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
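Hand-editing a policy document is easy to get wrong, so the merge can be scripted instead. A sketch, assuming the file name used above; the starting policy is a trimmed stand-in for whatever `get-key-policy` actually returned:

```python
import json

# Trimmed stand-in for the output of `aws kms get-key-policy`; in
# practice, load the real policy document that the command returned.
policy = {
    "Version": "2012-10-17",
    "Id": "key-default-1",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        }
    ],
}

# Append the two statements granting Heroku's AWS account access.
heroku_principal = {"AWS": "arn:aws:iam::021876802972:root"}
policy["Statement"] += [
    {
        "Sid": "Allow use of the key",
        "Effect": "Allow",
        "Principal": heroku_principal,
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                   "kms:GenerateDataKey*", "kms:DescribeKey"],
        "Resource": "*",
    },
    {
        "Sid": "Allow attachment of persistent resources",
        "Effect": "Allow",
        "Principal": heroku_principal,
        "Action": ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"],
        "Resource": "*",
        "Condition": {"Bool": {"kms:GrantIsForAWSResource": "true"}},
    },
]

# Write out the merged document for `aws kms put-key-policy`.
with open("heroku-kms-policy.json", "w") as f:
    json.dump(policy, f, indent=2)

assert len(policy["Statement"]) == 3
```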



&lt;p&gt;Now, update the existing policy with the new statements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms put-key-policy --key-id &amp;lt;from above&amp;gt; --policy-name default --policy file://heroku-kms-policy.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There is no output from the above, so re-run the &lt;code&gt;get-key-policy&lt;/code&gt; command to validate that it worked.&lt;/p&gt;

&lt;p&gt;To update the policy via the UI, browse to the key under the “Customer managed keys” section in the AWS KMS console and edit to add the two statements from above. When switching from policy to default view, the “Other AWS accounts” section should show up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q7A_e2mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2sxo1gayt1cl642in1rs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q7A_e2mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2sxo1gayt1cl642in1rs.png" alt="AWS KMS Console Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Heroku granted access to the key, we can now use the Heroku CLI to attach a Postgres database to an app, specifying the key ARN (which should look something like &lt;code&gt;arn:aws:kms:&amp;lt;region&amp;gt;:&amp;lt;account ID&amp;gt;:key/&amp;lt;key ID&amp;gt;&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;heroku addons:create heroku-postgresql:private-7 --encryption-key CMK_ARN --app your-app-name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can find a &lt;a href="https://devcenter.heroku.com/articles/encrypting-heroku-postgres-with-your-key"&gt;full set of documentation&lt;/a&gt; on encrypting your Postgres database on Heroku, including information on how to encrypt an already existing database.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cleaning Up
&lt;/h1&gt;

&lt;p&gt;At this point, given the cost of running a Private or Shield database in Heroku, you may want to delete any resources you have created. Similarly, scheduling key deletion in AWS sooner rather than later is suggested, since it takes seven days for keys to be fully deprovisioned and deleted. Quitting the Vault dev server will remove all information, since the dev instance is ephemeral.&lt;/p&gt;

&lt;h1&gt;
  
  
  Considerations
&lt;/h1&gt;

&lt;p&gt;While there is a lot to like about the BYOK approach to AWS and Heroku, there are a few considerations that need to be highlighted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  While Vault allows for key rotation, it requires external automation — cron, a CI pipeline, Nomad/Kubernetes jobs, etc.&lt;/li&gt;
&lt;li&gt;  Similarly, customer managed keys in AWS also require &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html#rotate-keys-manually"&gt;manual rotation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  If there is a &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html#importing-keys-considerations"&gt;catastrophic failure in AWS&lt;/a&gt;, it is possible that the key will need to be re-imported; automating, testing, and validating this should be part of any production use case.&lt;/li&gt;
&lt;li&gt;  If keys are deleted in AWS while being used in Heroku, &lt;a href="https://devcenter.heroku.com/articles/encrypting-heroku-postgres-with-your-key#disabling-your-encryption-key"&gt;all services and servers that depend on that key will be shut down&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;With all of this in place, we have demonstrated that maintaining local ownership of your encryption keys is both possible and desirable, and that ownership can then be extended into various cloud providers. This article only scratched the surface of the functionality offered by Vault, KMS, and Heroku — Vault itself can be used to manage asymmetric encryption as well, helping to protect data in transit along with identity validation. KMS is a cornerstone for encryption in AWS, and hooks into most services, allowing for an effortless way to ensure data is kept secure. Finally, Heroku’s ability to consume KMS keys directly allows for an additional level of security without adding management overhead.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>heroku</category>
      <category>vault</category>
      <category>encryption</category>
    </item>
  </channel>
</rss>
