<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: mkdelta</title>
    <description>The latest articles on DEV Community by mkdelta (@mkdelta).</description>
    <link>https://dev.to/mkdelta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F725006%2F04361e99-8275-489c-a241-ff62ac6a50a0.jpeg</url>
      <title>DEV Community: mkdelta</title>
      <link>https://dev.to/mkdelta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mkdelta"/>
    <language>en</language>
    <item>
      <title>Deploying to DigitalOcean Kubernetes using Terraform Cloud and GitHub Actions</title>
      <dc:creator>mkdelta</dc:creator>
      <pubDate>Tue, 14 Dec 2021 15:30:43 +0000</pubDate>
      <link>https://dev.to/mkdelta/deploying-to-digitalocean-kubernetes-using-terraform-cloud-and-github-actions-1me6</link>
      <guid>https://dev.to/mkdelta/deploying-to-digitalocean-kubernetes-using-terraform-cloud-and-github-actions-1me6</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a follow-up to my previous post about my submission to the &lt;a href="https://www.digitalocean.com/community/pages/kubernetes-challenge" rel="noopener noreferrer"&gt;DigitalOcean Kubernetes Challenge&lt;/a&gt;! I recommend that you at least skim through it for context on &lt;a href="https://www.kubegres.io/" rel="noopener noreferrer"&gt;Kubegres&lt;/a&gt;, the Kubernetes operator we'll be using to deploy Postgres.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This isn't an intro to any of the technologies mentioned! If you haven't used Terraform with GitHub Actions before,&lt;/em&gt; &lt;strong&gt;&lt;em&gt;I highly suggest going through &lt;a href="https://learn.hashicorp.com/tutorials/terraform/github-actions" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; from HashiCorp itself&lt;/em&gt;&lt;/strong&gt;. &lt;em&gt;I'll mostly be riffing off of it, pointing out important departures throughout.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub account and a working Git installation&lt;/li&gt;
&lt;li&gt;A Terraform Cloud account and a working Terraform installation&lt;/li&gt;
&lt;li&gt;A DigitalOcean account (the process for other providers is very similar, however)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Brief overview&lt;/h2&gt;

&lt;p&gt;I had recently deployed a scalable Postgres cluster to DigitalOcean Kubernetes, but I did it manually. The process is straightforward but quite tedious, which makes it a prime candidate for automation.&lt;/p&gt;

&lt;h3&gt;How it works&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Infrastructure configuration is pushed to the GitHub repo, triggering a GitHub Actions workflow&lt;/li&gt;
&lt;li&gt;GitHub Actions checks out code to a &lt;a href="https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners" rel="noopener noreferrer"&gt;runner&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Runner connects to Terraform Cloud to plan and apply the configuration&lt;/li&gt;
&lt;li&gt;Terraform Cloud connects to the provider (DigitalOcean in this case) to provision the needed resources&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flny4yieohodcoqjf5dbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flny4yieohodcoqjf5dbp.png" alt="Diagram of how it works"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;The steps&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Set up Terraform Cloud&lt;/li&gt;
&lt;li&gt;Set up the GitHub repository&lt;/li&gt;
&lt;li&gt;Set up the Terraform file&lt;/li&gt;
&lt;li&gt;Push to the repository&lt;/li&gt;
&lt;li&gt;Cleanup!&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;1. Set up Terraform Cloud&lt;/h3&gt;

&lt;p&gt;1.1. From your DigitalOcean account, &lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/" rel="noopener noreferrer"&gt;create a personal access token&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;1.2. From your Terraform Cloud account, create a new workspace, selecting &lt;strong&gt;API-driven workflow&lt;/strong&gt; as its type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45qywk0ws4ctzsrxz3f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45qywk0ws4ctzsrxz3f3.png" alt="Workflow types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1.3. In your newly created workspace, go to the variables tab and make a new workspace variable called &lt;strong&gt;DIGITALOCEAN_TOKEN&lt;/strong&gt;. Select the &lt;strong&gt;env&lt;/strong&gt; variable type and check the &lt;strong&gt;Sensitive&lt;/strong&gt; box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchjqim9pep0k4hyolaog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchjqim9pep0k4hyolaog.png" alt="Variables tab"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fripxcg7xm7gogvgkmxe4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fripxcg7xm7gogvgkmxe4.png" alt="Entering DigitalOcean token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1.4. From your Terraform Cloud account, go to the &lt;strong&gt;User settings&lt;/strong&gt; page, select &lt;strong&gt;Tokens&lt;/strong&gt; from the sidebar, and generate a new token. We'll need this for GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftukf4oe1bw2lgzedpc2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftukf4oe1bw2lgzedpc2e.png" alt="Generating a Terraform Cloud token"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;2. Set up a GitHub repository&lt;/h3&gt;

&lt;p&gt;2.1. Create a new repository. Go to the &lt;strong&gt;Settings&lt;/strong&gt; tab and select &lt;strong&gt;Secrets&lt;/strong&gt; from the sidebar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7gemdn8caai4ex1r50h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7gemdn8caai4ex1r50h.png" alt="Settings tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pdo7fhveigsvmct70xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pdo7fhveigsvmct70xr.png" alt="GitHub secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.2. Create a new secret called &lt;strong&gt;TF_API_TOKEN&lt;/strong&gt; and paste the Terraform Cloud token you just generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4ouguh9zcfuf33f1ph2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa4ouguh9zcfuf33f1ph2.png" alt="Terraform API token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.3. Navigate to the &lt;strong&gt;Actions&lt;/strong&gt; tab in your repository and find the &lt;strong&gt;Terraform&lt;/strong&gt; template. Click &lt;strong&gt;Set up this workflow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcdis3ajc9rf7jgvjmd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcdis3ajc9rf7jgvjmd9.png" alt="Actions tab"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo582os7e9uxgpq1l7qtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo582os7e9uxgpq1l7qtv.png" alt="Terraform template"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Refer to the &lt;strong&gt;Review Actions workflow&lt;/strong&gt; section in &lt;a href="https://learn.hashicorp.com/tutorials/terraform/github-actions" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; for a breakdown of the workflow steps. The template we're using is slightly different in that it doesn't have the update pull request steps.&lt;/p&gt;

&lt;p&gt;2.4. Commit the file. The workflow will be triggered, but it'll quickly error out because we don't have a Terraform file yet!&lt;/p&gt;


&lt;h3&gt;3. Set up the Terraform file&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/mkdlt/digital-ocean-k8s-challenge/blob/main/main.tf" rel="noopener noreferrer"&gt;Click here&lt;/a&gt; to see the Terraform file I used. This section of the tutorial is gonna be a breakdown of the file instead of a sequence of steps. For the experts in the audience: I'm new to Terraform so go easy on me! I tried ordering it in a way conducive to explanation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "remote" {
    organization = "your-org-here"

    workspaces {
      name = "your-workspace-name-here"
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part tells Terraform to use Terraform Cloud to plan, apply, etc. instead of doing it locally. This also means the state of your deployment will be &lt;a href="https://medium.com/@itsmattburgess/why-you-should-be-using-remote-state-in-terraform-2fe5d0f830e8" rel="noopener noreferrer"&gt;stored remotely and securely&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
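
&lt;p&gt;As an aside: if you're on Terraform 1.1 or later, the same remote setup can also be written with the newer &lt;code&gt;cloud&lt;/code&gt; block. This is just a sketch of the alternative; the organization and workspace names are placeholders like above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  cloud {
    organization = "your-org-here"

    workspaces {
      name = "your-workspace-name-here"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;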

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~&amp;gt; 2.16.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~&amp;gt; 2.6.0"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "&amp;gt;= 1.7.0"
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty straightforward. The &lt;a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs" rel="noopener noreferrer"&gt;kubectl provider&lt;/a&gt; is super useful for elegantly doing &lt;code&gt;kubectl apply&lt;/code&gt; to our cluster (we did a lot of that manually last time). We'll see it in action later.&lt;br&gt;
&lt;/p&gt;
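
&lt;p&gt;If you want to pin the version of Terraform itself alongside the provider versions, you can optionally add a &lt;code&gt;required_version&lt;/code&gt; constraint to the same &lt;code&gt;terraform&lt;/code&gt; block (my file doesn't include this; the constraint below is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  # fail early if the CLI or Terraform Cloud runner is older than 1.0
  required_version = "&amp;gt;= 1.0.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;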

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "digitalocean_project" "k8s_challenge" {
  name        = "k8s-challenge"
  description = "Entry for the DigitalOcean Kubernetes Challenge"
  purpose     = "Just trying out DigitalOcean"
  environment = "Development"

  resources = [
    digitalocean_kubernetes_cluster.postgres.urn
  ]
}

resource "digitalocean_vpc" "k8s" {
  name   = "k8s-vpc"
  region = "sgp1"

  timeouts {
    delete = "4m"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DigitalOcean uses projects to organize resources. We'll put our cluster in a new one and create a new VPC for it. The delete timeout on the VPC resource gives Terraform up to four minutes to keep retrying the VPC deletion during a destroy; I've found that deletions of dependent resources take a few minutes to register, and without the extra time the destroy errors out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "digitalocean_kubernetes_versions" "prefix" {
  version_prefix = "1.21."
}

resource "digitalocean_kubernetes_cluster" "postgres" {
  name         = "postgres"
  region       = "sgp1"
  auto_upgrade = true
  version      = data.digitalocean_kubernetes_versions.prefix.latest_version

  vpc_uuid = digitalocean_vpc.k8s.id

  maintenance_policy {
    start_time = "04:00"
    day        = "sunday"
  }

  node_pool {
    name       = "worker-pool"
    size       = "s-2vcpu-2gb"
    node_count = 3
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we're finally configuring the cluster itself, more or less with default settings. Notice that we're using the ID of the VPC we created, and that the cluster version is pinned to the latest 1.21.x release via the &lt;code&gt;digitalocean_kubernetes_versions&lt;/code&gt; data source. The maintenance policy determines when DigitalOcean will install updates and patches.&lt;br&gt;
&lt;/p&gt;
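
&lt;p&gt;If you'd like quick access to cluster details after an apply, you can optionally add outputs like these (not part of my original file; the attributes come from the DigitalOcean provider's cluster resource):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# printed at the end of each run and visible in the Terraform Cloud UI
output "cluster_endpoint" {
  value = digitalocean_kubernetes_cluster.postgres.endpoint
}

output "cluster_version" {
  value = digitalocean_kubernetes_cluster.postgres.version
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;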

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.postgres.endpoint
  token = digitalocean_kubernetes_cluster.postgres.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.postgres.kube_config[0].cluster_ca_certificate
  )
}

provider "kubectl" {
  host  = digitalocean_kubernetes_cluster.postgres.endpoint
  token = digitalocean_kubernetes_cluster.postgres.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.postgres.kube_config[0].cluster_ca_certificate
  )
  load_config_file = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we're configuring the kubernetes and kubectl providers to pull their credentials straight from the cluster we just defined, so they can create the Kubegres resources next.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "superUserPassword" {}
variable "replicationUserPassword" {}


resource "kubernetes_secret" "postgres_secret" {
  metadata {
    name      = "mypostgres-secret"
    namespace = "default"
  }

  data = {
    superUserPassword       = var.superUserPassword
    replicationUserPassword = var.replicationUserPassword
  }

  type = "Opaque"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is basically the equivalent of the &lt;code&gt;my-postgres-secret.yaml&lt;/code&gt; in the &lt;a href="https://www.kubegres.io/doc/getting-started.html" rel="noopener noreferrer"&gt;Kubegres tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Short detour: put these secrets in your Terraform Cloud workspace variables!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi2ssdd4pebafilwcbxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbi2ssdd4pebafilwcbxm.png" alt="Terraform Cloud workspace variables"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
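
&lt;p&gt;You can also mark the variables as sensitive in the Terraform file itself (this needs Terraform 0.14 or later) so their values are redacted from plan and apply output, a small hardening on top of the bare declarations above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "superUserPassword" {
  type      = string
  sensitive = true  # redacted from plan/apply output
}

variable "replicationUserPassword" {
  type      = string
  sensitive = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;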

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "kubectl_path_documents" "docs" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "kubegres" {
  for_each  = toset(data.kubectl_path_documents.docs.documents)
  yaml_body = each.value
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we're telling the kubectl provider to apply every manifest in our &lt;code&gt;manifests&lt;/code&gt; directory. We're using &lt;code&gt;kubectl_path_documents&lt;/code&gt; instead of &lt;code&gt;kubectl_filename_list&lt;/code&gt; because the &lt;code&gt;kubegres.yaml&lt;/code&gt; file actually consists of multiple documents defining different resources. I got stuck on this the first time around :^)&lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/data-sources/kubectl_path_documents" rel="noopener noreferrer"&gt;kubectl provider docs&lt;/a&gt; for more details.&lt;/p&gt;
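
&lt;p&gt;For comparison, this is roughly what the &lt;code&gt;kubectl_filename_list&lt;/code&gt; approach looks like (a sketch based on the provider docs; it trips up on multi-document files like &lt;code&gt;kubegres.yaml&lt;/code&gt;, which is why we don't use it here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "kubectl_filename_list" "manifests" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "kubegres" {
  # one kubectl_manifest per file, so each file must hold a single document
  count     = length(data.kubectl_filename_list.manifests.matches)
  yaml_body = file(element(data.kubectl_filename_list.manifests.matches, count.index))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;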

&lt;p&gt;&lt;em&gt;Short detour: create a manifests directory in your repo and put the required manifests in it! Also check the previous post for context.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrvvtfledfwlawguzdh2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrvvtfledfwlawguzdh2.png" alt="Manifests directory"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;4. Push to the repository&lt;/h3&gt;

&lt;p&gt;4.1. You should be pretty much done! Push everything to the repository. At the minimum, you should have &lt;strong&gt;a main.tf file, a manifests directory, and a .github/workflows directory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;4.2. Look at your Actions tab to see the triggered workflow. You should see something like the following.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yr9e3mye6zjyg9szeyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yr9e3mye6zjyg9szeyp.png" alt="Triggered workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A lot of configuration is hidden in that Kubegres manifest. Don't panic if the console throws thousands of lines of output at you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1mv5qtwufaat2uyrioy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1mv5qtwufaat2uyrioy.png" alt="Thousands of lines of output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check the ongoing run in your Terraform Cloud account. The &lt;code&gt;terraform apply&lt;/code&gt; part takes a few minutes. Grab a cup of your favorite beverage and sit tight!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue93tmfehsikbygk4x1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue93tmfehsikbygk4x1h.png" alt="Terraform Cloud output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a few minutes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpe6y9eaxn3xdh4nzcpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpe6y9eaxn3xdh4nzcpn.png" alt="GitHub Action apply complete"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3hvysqh7zx3tbta33ss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3hvysqh7zx3tbta33ss.png" alt="Terraform Cloud Apply complete"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also view the cluster in your DigitalOcean control panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl3xfhl3n3u6cxmcgk9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl3xfhl3n3u6cxmcgk9o.png" alt="DigitalOcean control panel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clusters also come with a dashboard by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwdrk810r3zi2kiejhcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwdrk810r3zi2kiejhcd.png" alt="Kubernetes cluster dashboard"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;5. Cleanup!&lt;/h3&gt;

&lt;p&gt;5.1. Since we used Terraform Cloud, we can simply queue up a destroy plan! Just go to your workspace &lt;strong&gt;Settings&lt;/strong&gt; and select &lt;strong&gt;Destruction and Deletion&lt;/strong&gt;. Click the red &lt;strong&gt;Queue destroy plan&lt;/strong&gt; button and confirm by entering the name of your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo2g1j135mvx66ue4to2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvo2g1j135mvx66ue4to2.png" alt="Terraform Cloud settings"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdb26ci5uldrybjzwld0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdb26ci5uldrybjzwld0.png" alt="Queue destroy plan"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1nhqkssg2j975evclbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1nhqkssg2j975evclbe.png" alt="Confirm destroy plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.2. You should be taken to a new run. Click &lt;strong&gt;Confirm &amp;amp; Apply&lt;/strong&gt; below, add a comment, and click &lt;strong&gt;Confirm Plan&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepifbl5sa3aqx23x1zhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepifbl5sa3aqx23x1zhb.png" alt="Confirm destroy plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.3. Wait a few minutes and your cluster should be destroyed! The created DigitalOcean project should also disappear from your control panel shortly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cou6op08fdgcwpdsf96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2cou6op08fdgcwpdsf96.png" alt="Successful destroy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmuw22qcgzok9z3zfw04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmuw22qcgzok9z3zfw04.png" alt="VPC delay"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the VPC took some time to get destroyed.&lt;/p&gt;




&lt;h2&gt;Thank you!&lt;/h2&gt;

&lt;p&gt;And that's it! I know this tutorial was a bit gisty, so feel free to ask questions and request debugging help. Thanks to DigitalOcean for organizing the challenge! The repo can be found &lt;a href="https://github.com/mkdlt/digital-ocean-k8s-challenge" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>digitalocean</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Deploying a scalable PostgreSQL cluster on DigitalOcean Kubernetes using Kubegres</title>
      <dc:creator>mkdelta</dc:creator>
      <pubDate>Sat, 11 Dec 2021 09:38:09 +0000</pubDate>
      <link>https://dev.to/mkdelta/deploying-a-scalable-postgresql-cluster-on-digitalocean-kubernetes-using-kubegres-1200</link>
      <guid>https://dev.to/mkdelta/deploying-a-scalable-postgresql-cluster-on-digitalocean-kubernetes-using-kubegres-1200</guid>
      <description>&lt;p&gt;&lt;em&gt;This was done for the &lt;a href="https://www.digitalocean.com/community/pages/kubernetes-challenge" rel="noopener noreferrer"&gt;DigitalOcean Kubernetes Challenge&lt;/a&gt;!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This isn't an intro to Kubernetes! Some working knowledge is required, but honestly not a lot. It's also assumed that you have kubectl installed already.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;Brief overview&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;When deploying a database on Kubernetes, you have to make it redundant and scalable. You can rely on database management operators like KubeDB or database-specific solutions like Kubegres for PostgreSQL or the MySQL Operator for MySQL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The challenge prompt says it all. I had barely done anything stateful with Kubernetes before, so I was a bit intimidated. Thankfully, &lt;a href="https://www.kubegres.io/" rel="noopener noreferrer"&gt;Kubegres&lt;/a&gt; was very easy to set up. All you have to do is apply some manifests and it'll handle replication, failover, and backups for you. We really only scratch the surface here, so check out their site for more details. Massive props to the contributors!&lt;/p&gt;

&lt;h3&gt;The steps&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Set up a cluster using the GUI&lt;/li&gt;
&lt;li&gt;Connect to your cluster using doctl&lt;/li&gt;
&lt;li&gt;Apply Kubegres manifests&lt;/li&gt;
&lt;li&gt;Done! Delete some pods &lt;del&gt;for fun&lt;/del&gt; to test replica promotion&lt;/li&gt;
&lt;li&gt;Cleanup&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple, right?&lt;/p&gt;




&lt;h3&gt;1. Set up a cluster using the GUI&lt;/h3&gt;

&lt;p&gt;I had never used DigitalOcean before, so I opted to use the GUI the first time around.&lt;/p&gt;

&lt;p&gt;1.1. In your control panel, click the &lt;strong&gt;Create&lt;/strong&gt; button at the top of the screen and select &lt;strong&gt;Kubernetes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mgcv305qe5uybhzrrn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mgcv305qe5uybhzrrn6.png" alt="Green create button at the top of the control panel"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1.2. Customize your cluster then click &lt;strong&gt;Create Cluster&lt;/strong&gt; at the bottom of the page. As for me, I just left everything on default :D &lt;br&gt;
That also means the cluster will be associated with my default project (every user has one) and made in the default VPC of the default region.&lt;/p&gt;

&lt;p&gt;1.3. You should be taken to your cluster's page. Provisioning should be done in a few minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F574gzhes98nhktnp22gk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F574gzhes98nhktnp22gk.png" alt="Kubernetes cluster page"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;2. Connect to your cluster using doctl&lt;/h3&gt;

&lt;p&gt;Once the cluster is up, the Getting Started section on your cluster's page should, well, get you started!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpxioi4j0gkrszyr4j5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpxioi4j0gkrszyr4j5s.png" alt="Getting started section in cluster page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.1. Install and configure doctl, DigitalOcean's CLI, if you haven't yet. &lt;a href="https://docs.digitalocean.com/reference/doctl/how-to/install/" rel="noopener noreferrer"&gt;Here's a guide.&lt;/a&gt; You'll have to &lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/" rel="noopener noreferrer"&gt;create a personal access token&lt;/a&gt; as well.&lt;/p&gt;
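&lt;p&gt;The setup boils down to authenticating with your token and then confirming doctl can reach your account (&lt;code&gt;doctl auth init&lt;/code&gt; will prompt you to paste the token):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ doctl auth init
$ doctl account get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;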

&lt;p&gt;2.2. Once you've verified that doctl is working, run the command given on your cluster's Getting Started section to connect to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1p1ryyoqk4rm7ewjcwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1p1ryyoqk4rm7ewjcwf.png" alt="Command to connect to cluster"&gt;&lt;/a&gt;&lt;/p&gt;
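&lt;p&gt;For reference, the command takes roughly this shape, with your own cluster ID or name filled in:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ doctl kubernetes cluster kubeconfig save &amp;lt;your-cluster-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;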

&lt;p&gt;You should get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Notice: Adding cluster credentials to kubeconfig file found in "/your/kube/config/path"
Notice: Setting current-context to &amp;lt;your-cluster-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.3. Confirm that you have access to your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                   STATUS   ROLES    AGE   VERSION
my-node-pool-asdfjkl   Ready    &amp;lt;none&amp;gt;   10m   v1.21.5   
my-node-pool-asdfjkm   Ready    &amp;lt;none&amp;gt;   10m   v1.21.5
my-node-pool-asdfjkn   Ready    &amp;lt;none&amp;gt;   10m   v1.21.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  3. Apply Kubegres manifests
&lt;/h3&gt;

&lt;p&gt;Here, the &lt;a href="https://www.kubegres.io/doc/getting-started.html" rel="noopener noreferrer"&gt;Kubegres getting started guide&lt;/a&gt; takes over. Still very straightforward. I'll give the condensed version.&lt;/p&gt;

&lt;p&gt;3.1. Install the Kubegres operator and check its components in the created namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.14/kubegres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get all -n kubegres-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.2. Check the storage class. If you did everything correctly, DigitalOcean Block Storage should be the default for your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get sc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                         PROVISIONER                 ...
do-block-storage (default)   dobs.csi.digitalocean.com   ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.3. Create a file for your Postgres superuser and replication user credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi my-postgres-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: mypostgres-secret
  namespace: default
type: Opaque
stringData:
  superUserPassword: postgresSuperUserPsw
  replicationUserPassword: postgresReplicaPsw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The keys and values under &lt;code&gt;stringData&lt;/code&gt; are up to you, as long as the &lt;code&gt;secretKeyRef&lt;/code&gt; entries in your Kubegres resource reference the same key names. We'll stick with these for now.&lt;/p&gt;

&lt;p&gt;3.4. Create the secret in your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f my-postgres-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.5. Create a file for your Kubegres resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi my-postgres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: mypostgres
  namespace: default
spec:
   replicas: 3
   image: postgres:14.1
   database:
      size: 1Gi
   env:
      - name: POSTGRES_PASSWORD
        valueFrom:
           secretKeyRef:
              name: mypostgres-secret
              key: superUserPassword
      - name: POSTGRES_REPLICATION_PASSWORD
        valueFrom:
           secretKeyRef:
              name: mypostgres-secret
              key: replicationUserPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Kubegres tutorial uses &lt;code&gt;200Mi&lt;/code&gt; for the database size, but the minimum block storage volume DigitalOcean will provision is &lt;code&gt;1Gi&lt;/code&gt;. This bit me the first time around :^)&lt;/p&gt;

&lt;p&gt;3.6. Apply using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f my-postgres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and watch K8s spin up the pods with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -o wide -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.7. If it seems like something went wrong, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to get more information.&lt;/p&gt;
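&lt;p&gt;&lt;code&gt;kubectl describe&lt;/code&gt; on a specific resource also surfaces the events scoped to just that object, which is handy when, say, a PersistentVolumeClaim is stuck in Pending:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl describe pod &amp;lt;pod-name&amp;gt;
$ kubectl describe pvc &amp;lt;pvc-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;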




&lt;h3&gt;
  
  
  4. Done!
&lt;/h3&gt;

&lt;p&gt;4.1. Check all the created resources with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod,statefulset,svc,configmap,pv,pvc -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 READY   STATUS    NODE
pod/mypostgres-1-0   1/1     Running   worker1
pod/mypostgres-2-0   1/1     Running   worker2
pod/mypostgres-3-0   1/1     Running   worker3

NAME                            READY
statefulset.apps/mypostgres-1   1/1
statefulset.apps/mypostgres-2   1/1
statefulset.apps/mypostgres-3   1/1

NAME                         TYPE
service/mypostgres           ClusterIP
service/mypostgres-replica   ClusterIP

NAME
configmap/base-kubegres-config

NAME                          CAPACITY
persistentvolume/pvc-838...   1Gi
persistentvolume/pvc-da6...   1Gi
persistentvolume/pvc-e25...   1Gi

NAME                                               CAPACITY
persistentvolumeclaim/postgres-db-mypostgres-1-0   1Gi
persistentvolumeclaim/postgres-db-mypostgres-2-0   1Gi
persistentvolumeclaim/postgres-db-mypostgres-3-0   1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
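&lt;p&gt;If you want to poke at the database itself, one option (assuming you have &lt;code&gt;psql&lt;/code&gt; installed locally) is to port-forward the primary service and connect as the &lt;code&gt;postgres&lt;/code&gt; superuser, entering the &lt;code&gt;superUserPassword&lt;/code&gt; value from the secret when prompted:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward svc/mypostgres 5432:5432
$ psql -h localhost -p 5432 -U postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;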



&lt;p&gt;4.2. Check the dashboard! Go to your cluster's page and click &lt;strong&gt;Kubernetes Dashboard&lt;/strong&gt; to the right of the cluster's name. It's pretty comprehensive!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibuh7dtbnyqa82uabo8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibuh7dtbnyqa82uabo8h.png" alt="Kubernetes dashboard button on cluster page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at your pods by clicking &lt;strong&gt;Pods&lt;/strong&gt; in the sidebar, under &lt;strong&gt;Workloads&lt;/strong&gt;. Find the primary Postgres pod using the labels on each pod's page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdfnsk9e58quktk35r70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdfnsk9e58quktk35r70.png" alt="Pods option in dashboard sidebar"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqce22tn2r1sb64ks4lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqce22tn2r1sb64ks4lb.png" alt="Primary pod"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.3. Back to the terminal! Check which pod is primary with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --selector replicationRole=primary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME             READY   STATUS    RESTARTS   AGE
mypostgres-1-0   1/1     Running   0          3m21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check which ones are replicas with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --selector replicationRole=replica
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME             READY   STATUS    RESTARTS   AGE
mypostgres-2-0   1/1     Running   0          2m59s
mypostgres-3-0   1/1     Running   0          2m19s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.4. Delete the primary pod with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and watch as a replica gets promoted with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -w --selector replicationRole=primary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Blink and you'll miss it!&lt;/p&gt;

&lt;p&gt;4.5. Recheck the primary and replica pods using the commands from the previous steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --selector replicationRole=primary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME             READY   STATUS    RESTARTS   AGE
mypostgres-2-0   1/1     Running   0          51s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods --selector replicationRole=replica
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME             READY   STATUS    RESTARTS   AGE
mypostgres-3-0   1/1     Running   0          4m21s
mypostgres-4-0   1/1     Running   0          34s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The king is dead, long live the king!&lt;/p&gt;




&lt;h3&gt;
  
  
  5. Cleanup
&lt;/h3&gt;

&lt;p&gt;We wouldn't want to rack up a huge bill, would we?&lt;/p&gt;

&lt;p&gt;5.1. Go to your cluster's page and select the &lt;strong&gt;Settings&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky085gt995p35z93wojo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky085gt995p35z93wojo.png" alt="Settings tab on cluster page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.2. Scroll down to the very bottom and click the red &lt;strong&gt;Destroy&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mclhj2njuec3rh1jez5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mclhj2njuec3rh1jez5.png" alt="Destroy cluster button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5.3. You'll be asked if you want to destroy the persistent volumes provisioned along with your cluster. Select &lt;strong&gt;Destroy All&lt;/strong&gt;, enter the cluster's name, and click the red &lt;strong&gt;Destroy&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu5pqffj4gvxe2m761ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu5pqffj4gvxe2m761ci.png" alt="Destroy prompt"&gt;&lt;/a&gt;&lt;/p&gt;
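&lt;p&gt;Alternatively, you can destroy the cluster from the terminal. At the time of writing, doctl's &lt;code&gt;--dangerous&lt;/code&gt; flag also deletes associated resources such as volumes and load balancers, but double-check with &lt;code&gt;doctl kubernetes cluster delete --help&lt;/code&gt; before running it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ doctl kubernetes cluster delete &amp;lt;your-cluster-name&amp;gt; --dangerous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;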




&lt;h2&gt;
  
  
  Thank you!
&lt;/h2&gt;

&lt;p&gt;And that's it! Thank you for reading my first post on dev.to :) And thank you to the folks at DigitalOcean for organizing the challenge and inspiring me to get off my butt!&lt;/p&gt;

&lt;p&gt;In my next post I'll demonstrate how to do the same deployment using Terraform and GitHub Actions! Check out &lt;a href="https://github.com/mkdlt/digital-ocean-k8s-challenge" rel="noopener noreferrer"&gt;the repo&lt;/a&gt; in the meantime.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>kubernetes</category>
      <category>digitalocean</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
