DEV Community

Mayowa Adeniyi

Posted on • Originally published at Medium

Building a Fully Reproducible Kubernetes Platform Using Scaleway, Terraform, and ArgoCD - Part 1

Prerequisites

Docker
Terraform
Set up a PostgreSQL database and get the connection string
Familiarity with using Terraform

In this article, you will learn the following:

  1. Provision a Kubernetes cluster using Scaleway Kapsule with Terraform

  2. Bootstrap ArgoCD using the App-of-Apps pattern with Terraform

  3. Inject secrets securely at run time into the cluster using Infisical

  4. Deploy infrastructure components with GitOps

  5. Automatically configure DNS records using Cloudflare

  6. Prepare the platform for secure access control using Cloudflare Zero Trust

What is a Reproducible Kubernetes Platform?

It simply means an infrastructure setup that can be created and re-created from scratch with code, using an Infrastructure as Code (IaC) tool like Terraform for infrastructure provisioning and ArgoCD for application management.

One of the primary advantages of this approach is that if your Kubernetes cluster is totally down, everything can be re-created from Git with a single Terraform command.

Getting Started

Clone this repo https://github.com/Ademayowa/k8s-api-deployment

In the root of the project, run these commands:

docker login

docker build -t your_dockerhub_username/real-estate-backend:v1.0.0 .

Replace your_dockerhub_username with your Docker Hub username in the command above.

Push the image to Docker Hub by running the command:

docker push your_dockerhub_username/real-estate-backend:v1.0.0

Check your Docker Hub; the image should be there.

Creating Secrets with Infisical

Now, you need to create a PostgreSQL database and get the connection string. The database connection string you created will be kept on Infisical. You can create a free PostgreSQL database on Supabase, Render, Neon, etc.
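For reference, a PostgreSQL connection string typically follows the format below — the user, password, host, and database name are placeholders you replace with the values from your provider:

```text
postgresql://db_user:db_password@db-host.example.com:5432/real_estate?sslmode=require
```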

Next, create an account on Infisical: https://app.infisical.com/signup

Note: The aim of the steps below is to get the client_id, client_secret, and project_slug on Infisical. All other parts that involve setting up secrets for the Kubernetes cluster will be done using Terraform.

During sign-up, enter the organization name as Real Estate. On the dashboard, select Add new project.

organization creation on Infisical

A modal is displayed on the screen. Follow the steps and click the Create Project button.

create a project

You will be redirected to a page to let you add secrets. Select the dev environment and add a secret.

add a secret

Now, add the database URL connection string from your PostgreSQL database into the value field in the form.

Next, fill in the other text fields and click the Create Secret button.

create secrets

Now that the secret has been created successfully, you will need to create a Machine Identity by following the steps below:

machine identities

After selecting the Add Machine Identity to Project button, a modal is displayed. Fill in the form fields and click the Create button.

add machine identities

Next, you will need the client id, client secret, and the project slug.

On the Access Control tab, under the Authentication section, click the drop-down arrow icon next to Universal Auth.

access control

Then scroll down and copy the client id. To add a client secret, click the Add client secret button.

In the modal window, enter a name in the text field, then click the Create button. The client secret is then generated on the screen for you; make sure to copy it.

create client secret

To get the project slug, open the Settings tab and copy it as shown below:

project id

Now that you have the client ID, client secret, and project slug from Infisical, note these values. You will need them in the next section when implementing Terraform.

Implementing Terraform: Setting Up Terraform Cloud

To set up a Terraform Cloud account, go to: https://app.terraform.io/signup/account

Sign up with email or GitHub (GitHub is faster).

Create an organization - name it something like real-estate

organization on terraform cloud

Select personal. It is suitable for projects like this. Then enter the organization name and click the Create organization button.

personal

Next, select the CLI workflow. This will enable you to run Terraform commands in the terminal.

cli driven workflow

Add a workspace name and click the Create button at the bottom of the page.

create new workspace

Note your organization name and workspace name - you will need them later.

Set up Kubernetes Kapsule on Scaleway using Terraform

First, sign up for free here: https://www.scaleway.com/en/kubernetes-kapsule/

Select Personal project for the account type, and choose your preferred sign-up method.

create account on scaleway

You will need to add a valid card during the sign-up process. Note that you won't be debited. On the dashboard, create an organization and add a project.

create project

Enter the project name and description in the modal form displayed on the screen.

Next, get Scaleway API Credentials from your dashboard. Go to https://console.scaleway.com/iam/api-keys

Click the Generate API key button. A modal is displayed on the screen. Fill in the form fields. You can select No for the question Will this API key be used for Object Storage? Then click the Generate API key button in the modal to generate your API keys.

generate api keys

After clicking the Generate API keys button, scroll down to review all the credential values. Click the drop-down icon, then copy the values for each key.

api-keys

Next, add those credentials to your Terraform Cloud dashboard as variables. Go to your workspace and select the project on the central page. On the sidebar, select variables.

add-variables

Click on the Add variable button. Paste your values and use these as the keys: infisical_client_id, infisical_client_secret, scaleway_access_key, scaleway_organization_id, scaleway_project_id, and scaleway_secret_key.

Then click the Save variable button.

adding variables on terraform cloud

Also, make sure the Terraform working directory is set on Terraform Cloud. Click on the workspace from the central dashboard, select Settings, then scroll down to the Terraform Working Directory section and set it to infra/terraform/envs/dev/scaleway (the environment directory you will create later in this article).

terraform working directory

Click on the Save settings button.

Creating Terraform Modules for Kubernetes Kapsule on Scaleway

In the root of the project, run these commands:

mkdir -p infra/terraform/modules/scaleway/kubernetes

touch infra/terraform/modules/scaleway/kubernetes/main.tf

touch infra/terraform/modules/scaleway/kubernetes/variables.tf

touch infra/terraform/modules/scaleway/kubernetes/outputs.tf

Add this content to infra/terraform/modules/scaleway/kubernetes/main.tf

terraform {
 required_version = ">= 1.0"

 required_providers {
   scaleway = {
     source  = "scaleway/scaleway"
     version = "~> 2.0"
   }
 }
}

resource "scaleway_k8s_cluster" "main" {
 name    = var.cluster_name
 version = var.kubernetes_version
 region  = var.region
 cni     = "cilium"
 tags    = var.tags

 private_network_id = scaleway_vpc_private_network.k8s.id

 # Delete associated resources on cluster deletion
 delete_additional_resources = true

 # Set project id
 project_id = var.project_id
}

resource "scaleway_k8s_pool" "main" {
 cluster_id = scaleway_k8s_cluster.main.id
 name       = "${var.cluster_name}-pool"
 node_type  = var.node_type
 size       = var.node_count

 autoscaling = var.autoscaling
 min_size    = var.autoscaling ? var.min_nodes : null
 max_size    = var.autoscaling ? var.max_nodes : null

 tags = var.tags
}

# Required for stable cluster creation with Cilium on Scaleway
resource "scaleway_vpc_private_network" "k8s" {
 name       = "${var.cluster_name}-pn"
 project_id = var.project_id
 tags       = var.tags
}

The code above sets up a Kubernetes cluster on Scaleway, on a private network.

Next, add the following to infra/terraform/modules/scaleway/kubernetes/variables.tf

variable "cluster_name" {
  description = "Cluster name"
  type        = string
}

variable "region" {
  description = "Scaleway region"
  type        = string
  default     = "fr-par"
}

variable "kubernetes_version" {
  description = "Kubernetes version"
  type        = string
}

variable "node_type" {
  description = "Node instance type"
  type        = string
}

variable "node_count" {
  description = "Number of nodes"
  type        = number
  default     = 1
}

variable "autoscaling" {
  description = "Enable autoscaling"
  type        = bool
  default     = false
}

variable "min_nodes" {
  description = "Min nodes for autoscaling"
  type        = number
  default     = 1
}

variable "max_nodes" {
  description = "Max nodes for autoscaling"
  type        = number
  default     = 3
}

variable "tags" {
  description = "Resource tags"
  type        = list(string)
  default     = []
}

variable "project_id" {
  description = "Scaleway project ID"
  type        = string
}

The code above defines the variables needed for the cluster creation on Scaleway.

Add this content to infra/terraform/modules/scaleway/kubernetes/outputs.tf

output "cluster_id" {
 description = "Cluster ID"
 value       = scaleway_k8s_cluster.main.id
}

output "cluster_name" {
 description = "Cluster name"
 value       = scaleway_k8s_cluster.main.name
}

output "cluster_endpoint" {
 description = "API endpoint"
 value       = scaleway_k8s_cluster.main.apiserver_url
 sensitive   = true
}

output "cluster_region" {
 description = "Cluster region"
 value       = scaleway_k8s_cluster.main.region
}

output "cluster_version" {
 description = "Kubernetes version"
 value       = scaleway_k8s_cluster.main.version
}

output "kubeconfig_host" {
 value = scaleway_k8s_cluster.main.kubeconfig[0].host
}

output "kubeconfig_token" {
 value     = scaleway_k8s_cluster.main.kubeconfig[0].token
 sensitive = true
}

output "kubeconfig_cluster_ca_certificate" {
 value     = scaleway_k8s_cluster.main.kubeconfig[0].cluster_ca_certificate
 sensitive = true
}

output "kubeconfig" {
 description = "Complete kubeconfig file"
 sensitive   = true
 value       = scaleway_k8s_cluster.main.kubeconfig[0].config_file
}

Next, run:

mkdir -p infra/terraform/envs/dev/scaleway

touch infra/terraform/envs/dev/scaleway/main.tf

touch infra/terraform/envs/dev/scaleway/variables.tf

touch infra/terraform/envs/dev/scaleway/outputs.tf

Add the following content to infra/terraform/envs/dev/scaleway/main.tf

terraform {
  required_version = ">= 1.0"

  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "~> 2.0"
    }
  }

  backend "remote" {
    organization = "real-estate"

    workspaces {
      name = "real-estate-app"
    }
  }
}

provider "scaleway" {
  access_key      = var.scaleway_access_key
  secret_key      = var.scaleway_secret_key
  organization_id = var.scaleway_organization_id
  region          = var.region
  zone            = var.zone
}

module "kubernetes" {
  source = "../../../modules/scaleway/kubernetes"

  cluster_name       = var.cluster_name
  region             = var.region
  kubernetes_version = var.kubernetes_version
  node_type          = var.node_type
  node_count         = var.node_count
  autoscaling        = var.autoscaling
  min_nodes          = var.min_nodes
  max_nodes          = var.max_nodes
  tags               = var.tags
  project_id         = var.scaleway_project_id
}

Here is what happens in the code above:

Stores the Terraform state remotely in the Terraform Cloud organization real-estate.
Authenticates to Scaleway using your API credentials (access_key, secret_key, and organization_id).
Calls the reusable Kubernetes module by importing it.

Next, add the following to infra/terraform/envs/dev/scaleway/variables.tf


variable "scaleway_access_key" {
  description = "Scaleway access key"
  type        = string
  sensitive   = true
}

variable "scaleway_secret_key" {
  description = "Scaleway secret key"
  type        = string
  sensitive   = true
}

variable "scaleway_organization_id" {
  description = "Scaleway organization ID"
  type        = string
  sensitive   = true
}

variable "scaleway_project_id" {
  description = "Scaleway project ID"
  type        = string
  sensitive   = true
}

variable "cluster_name" {
  description = "Cluster name"
  type        = string
  default     = "real-estate-dev"
}

variable "region" {
  description = "Scaleway region"
  type        = string
  default     = "fr-par"
}

variable "zone" {
  description = "Scaleway zone"
  type        = string
  default     = "fr-par-1"
}

variable "kubernetes_version" {
  description = "Kubernetes version"
  type        = string
}

variable "node_type" {
  description = "Node type"
  type        = string
  default     = "DEV1-M" # 3 vCPUs & 4 GB RAM
}

variable "node_count" {
  description = "Number of nodes"
  type        = number
  default     = 1
}

variable "autoscaling" {
  description = "Enable autoscaling"
  type        = bool
  default     = false
}

variable "min_nodes" {
  description = "Min nodes"
  type        = number
  default     = 1
}

variable "max_nodes" {
  description = "Max nodes"
  type        = number
  default     = 3
}

variable "tags" {
  description = "Resource tags"
  type        = list(string)
  default     = ["dev", "real-estate", "kubernetes"]
}

variable "repo_url" {
  description = "Git repository URL for ArgoCD to sync from"
  type        = string
}

variable "target_revision" {
  description = "Git branch or tag for ArgoCD to sync from"
  type        = string
  default     = "main"
}

variable "bootstrap_path" {
  description = "Path in the Git repo to the ArgoCD bootstrap folder"
  type        = string
}

variable "infisical_client_id" {
  description = "Infisical universal auth client ID"
  type        = string
  sensitive   = true
}

variable "infisical_client_secret" {
  description = "Infisical universal auth client secret"
  type        = string
  sensitive   = true
}

variable "infisical_env_slug" {
  description = "Infisical environment slug"
  type        = string
}

variable "infisical_project_slug" {
  description = "Infisical project slug"
  type        = string
}

The code above defines the Terraform variables for the dev environment.

Add this content to infra/terraform/envs/dev/scaleway/outputs.tf

output "cluster_id" {
 description = "Cluster ID"
 value       = module.kubernetes.cluster_id
}

output "cluster_name" {
 description = "Cluster name"
 value       = module.kubernetes.cluster_name
}

output "cluster_endpoint" {
 description = "API endpoint"
 value       = module.kubernetes.cluster_endpoint
 sensitive   = true
}

output "cluster_region" {
 description = "Cluster region"
 value       = module.kubernetes.cluster_region
}

output "cluster_version" {
 description = "Kubernetes version"
 value       = module.kubernetes.cluster_version
}

output "kubeconfig" {
 description = "Kubeconfig for the Scaleway Kubernetes cluster"
 sensitive   = true
 value       = module.kubernetes.kubeconfig
}

Next, run:

touch infra/terraform/envs/dev/scaleway/terraform.tfvars

Add this content to the file:

cluster_name       = "real-estate-dev"
region             = "fr-par"
zone               = "fr-par-1"
kubernetes_version = "1.34.5"

node_type  = "DEV1-M" # 3 vCPUs & 4 GB RAM
node_count = 1

autoscaling = false
min_nodes   = 1
max_nodes   = 3

tags = ["dev", "real-estate", "kubernetes"]

repo_url               = "https://github.com/Ademayowa/k8s-api-deployment" # Add your repo URL
target_revision        = "main"
bootstrap_path         = "infra/k8s/argocd/bootstrap/dev"
infisical_project_slug = "real-estate-db-sc" # Add your Infisical project slug
infisical_env_slug     = "dev"

Replace repo_url and infisical_project_slug with your own values. This file should be added to .gitignore.
Next, add the following to the Terraform root directory as a .gitignore file:

# Terraform directories
**/.terraform/*

# State files
*.tfstate
*.tfstate.*

# Crash logs
crash.log
crash.*.log

# Sensitive files
*.tfvars
*.tfvars.json

# Override files
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# CLI config
.terraformrc
terraform.rc

# Certificates and keys
*.pem
*.key
*.crt

# Kubeconfig
kubeconfig
*.kubeconfig

# Lock file 
.terraform.lock.hcl

Creating ArgoCD Modules in Terraform

Here, you need to create ArgoCD modules to bootstrap the entire platform, both the infrastructure and the application layer.

In this case, the infrastructure layer consists of cert-manager, which handles TLS/SSL certificates, Traefik for Ingress routing, and Cloudflare for DNS management. The application layer consists of the Golang backend APIs.

Now, run the commands below:

mkdir -p infra/terraform/modules/argocd-bootstrap

touch infra/terraform/modules/argocd-bootstrap/main.tf

touch infra/terraform/modules/argocd-bootstrap/variables.tf

touch infra/terraform/modules/argocd-bootstrap/outputs.tf

Add this content to infra/terraform/modules/argocd-bootstrap/main.tf

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.11"
    }
  }
}

# Install ArgoCD via Helm
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  namespace  = "argocd"
  version    = var.chart_version

  create_namespace = true
  wait             = true
  timeout          = 600
}

# Create ArgoCD root application (App of Apps)
resource "kubectl_manifest" "argocd_root_app" {
  wait = true

  yaml_body = <<-YAML
   apiVersion: argoproj.io/v1alpha1
   kind: Application
   metadata:
     name: root
     namespace: argocd
     finalizers:
       - resources-finalizer.argocd.argoproj.io
   spec:
     project: default
     source:
       repoURL: ${var.repo_url}
       targetRevision: ${var.target_revision}
       path: ${var.bootstrap_path}
     destination:
       server: https://kubernetes.default.svc
       namespace: argocd
     syncPolicy:
       automated:
         prune: true
         selfHeal: true
       syncOptions:
         - CreateNamespace=true
 YAML

  depends_on = [helm_release.argocd]
}

# Terraform owns the dev-real-estate namespace - ArgoCD does not create it
resource "kubernetes_namespace_v1" "dev-real-estate" {
  metadata {
    name = "dev-real-estate"
  }

  depends_on = [kubectl_manifest.argocd_root_app]
}

# Creates the Infisical auth secret so the operator can authenticate with Infisical
resource "kubernetes_secret_v1" "infisical_auth" {
  metadata {
    name      = "infisical-auth-real-estate"
    namespace = "dev-real-estate"
  }

  data = {
    clientId     = var.infisical_client_id
    clientSecret = var.infisical_client_secret
  }

  depends_on = [kubernetes_namespace_v1.dev-real-estate]
}

The code above is the core of bootstrapping a reproducible Kubernetes platform. Here is what happens:

When the terraform apply command is run, ArgoCD creates an application named root. This root is the parent application that automatically creates two child applications: infrastructure and applications. Note that Terraform only creates the dev-real-estate namespace; all other workloads in this namespace will be created by ArgoCD.
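Putting the pieces from this article together, the resulting application hierarchy looks roughly like this (names taken from the bootstrap and application manifests created below):

```text
root (created by Terraform, syncs infra/k8s/argocd/bootstrap/dev)
├── infrastructure          # syncs infra/k8s/argocd/infrastructure
│   ├── cert-manager
│   └── traefik
└── applications            # syncs infra/k8s/argocd/applications/dev
    └── real-estate-backend-dev
```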

Next, add this content to infra/terraform/modules/argocd-bootstrap/variables.tf

variable "chart_version" {
  description = "ArgoCD Helm chart version"
  type        = string
  default     = "9.4.0"
}

variable "repo_url" {
  description = "Git repository URL for ArgoCD to sync from"
  type        = string
}

variable "target_revision" {
  description = "Git branch or tag for ArgoCD to sync from"
  type        = string
  default     = "main"
}

variable "bootstrap_path" {
  description = "Path in the Git repo to the bootstrap folder for the environment"
  type        = string
}

variable "infisical_client_id" {
  description = "Infisical universal auth client ID"
  type        = string
  sensitive   = true
}

variable "infisical_client_secret" {
  description = "Infisical universal auth client secret"
  type        = string
  sensitive   = true
}

variable "infisical_env_slug" {
  description = "Infisical environment slug"
  type        = string
}

variable "infisical_project_slug" {
  description = "Infisical project slug"
  type        = string
}

The code above defines variables for the argocd-bootstrap module.

Next, add this content to infra/terraform/modules/argocd-bootstrap/outputs.tf

output "namespace" {
  description = "Namespace where ArgoCD is installed"
  value       = helm_release.argocd.namespace
}

output "chart_version" {
  description = "Installed ArgoCD chart version"
  value       = helm_release.argocd.version
}

Adding the Infisical Operator Modules in Terraform

The Infisical operator handles the management of secrets in the Kubernetes cluster. Now, run the commands below:

mkdir -p infra/terraform/modules/infisical-operator

touch infra/terraform/modules/infisical-operator/main.tf

touch infra/terraform/modules/infisical-operator/variables.tf

touch infra/terraform/modules/infisical-operator/outputs.tf

Add this content to infra/terraform/modules/infisical-operator/main.tf

resource "helm_release" "infisical_operator" {
  name       = "infisical-operator"
  repository = "https://dl.cloudsmith.io/public/infisical/helm-charts/helm/charts/"
  chart      = "secrets-operator"
  namespace  = "infisical-operator-system"
  version    = var.chart_version

  create_namespace = true
}

The code above installs the Infisical operator into the Kubernetes cluster via a Helm chart.

Next, add this content to infra/terraform/modules/infisical-operator/variables.tf

variable "chart_version" {
 description = "Infisical Helm chart version"
 type        = string
 default     = "0.10.5"
}

Add the following to infra/terraform/modules/infisical-operator/outputs.tf

output "namespace" {
 description = "Namespace where Infisical operator is installed"
 value       = helm_release.infisical_operator.namespace
}

output "chart_version" {
 description = "Installed Infisical chart version"
 value       = helm_release.infisical_operator.version
}

Update infra/terraform/envs/dev/scaleway/main.tf to add the Helm, Kubernetes, and kubectl providers and wire in the new modules:

terraform {
  required_version = ">= 1.0"

  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.11"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.14"
    }
  }

  backend "remote" {
    organization = "real-estate"

    workspaces {
      name = "real-estate-app"
    }
  }
}

provider "scaleway" {
  access_key      = var.scaleway_access_key
  secret_key      = var.scaleway_secret_key
  organization_id = var.scaleway_organization_id
  region          = var.region
  zone            = var.zone
}

provider "helm" {
  kubernetes {
    host  = module.kubernetes.kubeconfig_host
    token = module.kubernetes.kubeconfig_token
    cluster_ca_certificate = base64decode(
      module.kubernetes.kubeconfig_cluster_ca_certificate
    )
  }
}

provider "kubernetes" {
  host  = module.kubernetes.kubeconfig_host
  token = module.kubernetes.kubeconfig_token
  cluster_ca_certificate = base64decode(
    module.kubernetes.kubeconfig_cluster_ca_certificate
  )
}

provider "kubectl" {
  host  = module.kubernetes.kubeconfig_host
  token = module.kubernetes.kubeconfig_token
  cluster_ca_certificate = base64decode(
    module.kubernetes.kubeconfig_cluster_ca_certificate
  )
  load_config_file = false
}

module "kubernetes" {
  source = "../../../modules/scaleway/kubernetes"

  cluster_name       = var.cluster_name
  region             = var.region
  kubernetes_version = var.kubernetes_version
  node_type          = var.node_type
  node_count         = var.node_count
  autoscaling        = var.autoscaling
  min_nodes          = var.min_nodes
  max_nodes          = var.max_nodes
  tags               = var.tags
  project_id         = var.scaleway_project_id
}

module "infisical_operator" {
  source = "../../../modules/infisical-operator"

  depends_on = [module.kubernetes]
}

module "argocd" {
  source = "../../../modules/argocd-bootstrap"

  repo_url                = var.repo_url
  target_revision         = var.target_revision
  bootstrap_path          = var.bootstrap_path
  infisical_client_id     = var.infisical_client_id
  infisical_client_secret = var.infisical_client_secret
  infisical_project_slug  = var.infisical_project_slug
  infisical_env_slug      = var.infisical_env_slug

  depends_on = [module.kubernetes]
}

To verify that the updated files work as expected, run:

cd infra/terraform/envs/dev/scaleway

terraform init
terraform plan

Implementing ArgoCD App of Apps Pattern

Note: Change repoURL to your own GitHub repository in the YAML files below.

Now, in the root of the project, run the commands below:

mkdir -p infra/k8s/argocd/bootstrap/dev

touch infra/k8s/argocd/bootstrap/dev/infrastructure.yaml

touch infra/k8s/argocd/bootstrap/dev/applications.yaml

Add this content to infra/k8s/argocd/bootstrap/dev/infrastructure.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
 name: infrastructure
 namespace: argocd
 finalizers:
   - resources-finalizer.argocd.argoproj.io
spec:
 project: default

 source:
   repoURL: https://github.com/Ademayowa/k8s-api-deployment
   targetRevision: main
   path: infra/k8s/argocd/infrastructure
   directory:
     recurse: true
     include: "*/application.yaml"

 destination:
   server: https://kubernetes.default.svc
   namespace: argocd

 syncPolicy:
   automated:
     prune: true
     selfHeal: true

Here is what happens in the code above: any application.yaml file created under the infra/k8s/argocd/infrastructure directory automatically becomes an ArgoCD application that gets deployed.

Next, add this content to infra/k8s/argocd/bootstrap/dev/applications.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
 name: applications
 namespace: argocd
 finalizers:
   - resources-finalizer.argocd.argoproj.io
spec:
 project: default

 source:
   repoURL: https://github.com/Ademayowa/k8s-api-deployment
   targetRevision: main
   path: infra/k8s/argocd/applications/dev
   directory:
     recurse: true

 destination:
   server: https://kubernetes.default.svc
   namespace: argocd

 syncPolicy:
   automated:
     prune: true
     selfHeal: true

In the code above, any YAML file under the infra/k8s/argocd/applications/dev directory automatically gets deployed.

Next, run these commands below:

mkdir -p infra/k8s/argocd/applications/dev

touch infra/k8s/argocd/applications/dev/real-estate.yaml

touch infra/k8s/argocd/applications/dev/real-estate-infisical-secret.yaml

touch infra/k8s/argocd/applications/dev/namespace.yaml

Add the following content to infra/k8s/argocd/applications/dev/real-estate.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
 name: real-estate-backend-dev
 namespace: argocd
spec:
 project: default

 source:
   repoURL: https://github.com/Ademayowa/k8s-api-deployment
   targetRevision: main
   path: infra/helm/real-estate-chart
   helm:
     valueFiles:
       - values-dev.yaml

 destination:
   server: https://kubernetes.default.svc
   namespace: dev-real-estate

 syncPolicy:
   # Enable automatic sync
   automated:
     prune: true
     selfHeal: true
     allowEmpty: false # Prevent deleting all resources

The code above is an ArgoCD application that deploys the Golang backend APIs via a Helm chart.
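The Helm chart itself is not covered in this part, but a minimal values-dev.yaml for it could look like the sketch below. The chart schema, service port, and replica count are hypothetical placeholders; the image tag matches the one pushed to Docker Hub earlier:

```yaml
# Hypothetical minimal values for the real-estate chart - adjust to your chart's schema
replicaCount: 1

image:
  repository: your_dockerhub_username/real-estate-backend
  tag: v1.0.0
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080
```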

Next, add the following content to infra/k8s/argocd/applications/dev/real-estate-infisical-secret.yaml

apiVersion: secrets.infisical.com/v1alpha1
kind: InfisicalSecret
metadata:
 name: real-estate-infisical-secret
 namespace: dev-real-estate
 labels:
   app: real-estate
spec:
 hostAPI: https://app.infisical.com/api

 # Authenticate with Infisical
 authentication:
   universalAuth:
     secretsScope:
       envSlug: "dev"
       secretsPath: "/"
       projectSlug: "real-estate-db-sc" # Your project slug
     credentialsRef:
       secretName: infisical-auth-real-estate
       secretNamespace: dev-real-estate

 # Store the synced secrets in Kubernetes
 managedSecretReference:
   secretName: real-estate-secret
   secretNamespace: dev-real-estate
   secretType: Opaque
   creationPolicy: "Owner"

 resyncInterval: 300

The code above automatically syncs the secrets from Infisical (using infisical-auth-real-estate) into the Kubernetes cluster as a secret real-estate-secret every five minutes. That way, there's no sensitive value hanging around in the YAML files or Git repository, which keeps the cluster more secure.
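Once the operator creates real-estate-secret, a workload can consume it like any other Kubernetes Secret. A sketch, assuming a hypothetical Deployment for the backend, would reference it with envFrom:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: real-estate-backend
  namespace: dev-real-estate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: real-estate
  template:
    metadata:
      labels:
        app: real-estate
    spec:
      containers:
        - name: api
          image: your_dockerhub_username/real-estate-backend:v1.0.0
          envFrom:
            # Every key in real-estate-secret (e.g. the database connection
            # string) becomes an environment variable in the container
            - secretRef:
                name: real-estate-secret
```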

Now, add this content to infra/k8s/argocd/applications/dev/namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
 name: dev-real-estate

Create the Infrastructure directory

Next, run the commands below:

mkdir -p infra/k8s/argocd/infrastructure/cert-manager

touch infra/k8s/argocd/infrastructure/cert-manager/application.yaml

touch infra/k8s/argocd/infrastructure/cert-manager/letsencrypt-prod.yaml

Note that the file is named application.yaml (singular) so it matches the */application.yaml include pattern in the infrastructure bootstrap manifest.

Add the following content to infra/k8s/argocd/infrastructure/cert-manager/application.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: default

  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.17.2
    helm:
      values: |
        crds:
          enabled: true

  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Add this content to infra/k8s/argocd/infrastructure/cert-manager/letsencrypt-prod.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   server: https://acme-v02.api.letsencrypt.org/directory
   email: # Add your email here
   privateKeySecretRef:
     name: letsencrypt-prod
   solvers:
     - http01:
         ingress:
           class: traefik
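With the ClusterIssuer in place, an Ingress can request a TLS certificate by referencing it through an annotation. A sketch, assuming a hypothetical host name api.example.com and the backend Service from earlier:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: real-estate-backend
  namespace: dev-real-estate
  annotations:
    # Tell cert-manager to issue a certificate via the ClusterIssuer above
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: traefik
  rules:
    - host: api.example.com   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: real-estate-backend
                port:
                  number: 8080
  tls:
    - hosts:
        - api.example.com
      # cert-manager stores the issued certificate in this Secret
      secretName: real-estate-tls
```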

Run these commands below:

mkdir -p infra/k8s/argocd/infrastructure/traefik

touch infra/k8s/argocd/infrastructure/traefik/application.yaml

Next, add this content to infra/k8s/argocd/infrastructure/traefik/application.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
 name: traefik
 namespace: argocd
spec:
 project: default

 source:
   repoURL: https://traefik.github.io/charts
   chart: traefik
   targetRevision: 26.0.0
   helm:
     values: |
       service:
         type: LoadBalancer
       ports:
         websecure:
           tls:
             enabled: true

 destination:
   server: https://kubernetes.default.svc
   namespace: traefik

 syncPolicy:
   automated:
     prune: true
     selfHeal: true
   syncOptions:
     - CreateNamespace=true

Bootstrap the Kubernetes Platform

Run the command below:

terraform login

terraform login

Copy the token displayed in the browser, paste it directly into your terminal, and press Enter. Your terminal output should look like this:

terraform login successful

Next, run the commands below:

cd infra/terraform/envs/dev/scaleway

terraform plan
terraform apply

Now, to connect and interact with the Kubernetes cluster on Scaleway, run the commands below and export the kubeconfig file:

terraform output -raw kubeconfig > ~/.kube/config-scaleway

# Secure the file (very important) 
chmod 600 ~/.kube/config-scaleway

export KUBECONFIG=~/.kube/config-scaleway

To verify that the ArgoCD applications are synced and healthy, run:

kubectl get applications -n argocd

get argo applications

Now you can see all the pods running in the cluster. Because the entire platform is bootstrapped with a single terraform apply command, every other workload is reproducible from Git.

In the next part of this series, we will look at how to secure the platform and make the cluster production-ready.

For now, destroy the cluster by running:

terraform destroy

Conclusion

This tutorial showed how to bootstrap a reproducible Kubernetes platform using Scaleway Kapsule, Terraform, ArgoCD, and Infisical for handling secret management.

Here is the GitHub repository for the boilerplate: https://github.com/Ademayowa/k8s-api-deployment

Resources

Scaleway Terraform Providers Official Documentation

Scaleway Kubernetes Kapsule

ArgoCD App of Apps Official Documentation

Infisical Secret CRD Official Documentation
