<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robin Cher</title>
    <description>The latest articles on DEV Community by Robin Cher (@robincher).</description>
    <link>https://dev.to/robincher</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F498545%2F11c7dfbf-ea9c-4550-8860-e41297f7cc1c.png</url>
      <title>DEV Community: Robin Cher</title>
      <link>https://dev.to/robincher</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/robincher"/>
    <language>en</language>
    <item>
      <title>Automating Kong Konnect Configuration with Terraform</title>
      <dc:creator>Robin Cher</dc:creator>
      <pubDate>Fri, 07 Jun 2024 06:37:19 +0000</pubDate>
      <link>https://dev.to/robincher/automating-kong-konnect-configuration-with-terraform-3c0c</link>
      <guid>https://dev.to/robincher/automating-kong-konnect-configuration-with-terraform-3c0c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;HashiCorp built Terraform on top of a plug-in system, where vendors can build their own extensions to Terraform. These extensions are called “providers.” Providers map the declarative configuration into the required API interactions, ensuring that the desired state is met. They act as a bridge between Terraform and a third-party API.&lt;/p&gt;

&lt;p&gt;Kong has always placed developer experience as a top priority, and building a Terraform provider is a no-brainer since Terraform is widely adopted by the community at large.&lt;/p&gt;

&lt;p&gt;For today's walkthrough, we will create a Control Plane, a Service, a Route, and a Rate Limit Plugin in Kong &lt;a href="https://docs.konghq.com/konnect/" rel="noopener noreferrer"&gt;Konnect&lt;/a&gt;. Kong Konnect is a hybrid SaaS platform where the control plane is hosted and managed by Kong, and customers deploy the Data Plane (proxy) in their own environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj3bwxwiwcb73zpa2i9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzj3bwxwiwcb73zpa2i9l.png" alt="Kong Konnect Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Ensure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Terraform CLI installed&lt;/li&gt;
&lt;li&gt;Kong Konnect Control Plane Access&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First, let's create an auth.tf file that configures your Kong Konnect Terraform provider, along with a personal access token for authentication with Kong Konnect.&lt;/p&gt;

&lt;p&gt;You can generate an access token by navigating to the top right, clicking on &lt;strong&gt;Personal Access Token&lt;/strong&gt;, and then &lt;strong&gt;Generate Token&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdycantho2e23fc770747.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdycantho2e23fc770747.png" alt="Konnect Access Token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# auth.tf
# Configure the provider to use your Kong Konnect account
terraform {
  required_providers {
    konnect = {
      source  = "kong/konnect"
      version = "0.2.5"
    }
  }
}

provider "konnect" {
  personal_access_token = "kpat_xxxx"
  server_url            = "https://au.api.konghq.com"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
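&lt;p&gt;Hard-coding the token in auth.tf risks committing it to source control. A minimal sketch of an alternative (the variable name konnect_token is our own choice, not from the provider) is to pass the token in through a sensitive Terraform variable:&lt;/p&gt;

```hcl
# auth.tf (variant): read the token from a variable instead of
# hard-coding it in the file
variable "konnect_token" {
  type      = string
  sensitive = true # redacts the value in plan/apply output
}

provider "konnect" {
  personal_access_token = var.konnect_token
  server_url            = "https://au.api.konghq.com"
}
```

&lt;p&gt;The value can then be supplied at runtime via Terraform's standard environment-variable convention, e.g. export TF_VAR_konnect_token=kpat_xxxx, keeping the secret out of the repository.&lt;/p&gt;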

&lt;p&gt;Subsequently, let's create the declarative resource file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#main.tf

# Create a new Control Plane
resource "konnect_gateway_control_plane" "tfdemo" {
  name         = "Terraform Control Plane"
  description  = "This is a sample description"
  cluster_type = "CLUSTER_TYPE_HYBRID"
  auth_type    = "pinned_client_certs"

  proxy_urls = [
    {
      host     = "example.com",
      port     = 443,
      protocol = "https"
    }
  ]
}

# Configure a service and a route that we can use to test
resource "konnect_gateway_service" "httpbin" {
  name             = "HTTPBin"
  protocol         = "https"
  host             = "httpbin.org"
  port             = 443
  path             = "/"
  control_plane_id = konnect_gateway_control_plane.tfdemo.id
}

resource "konnect_gateway_route" "anything" {
  methods = ["GET"]
  name    = "Anything"
  paths   = ["/anything"]

  strip_path = false

  control_plane_id = konnect_gateway_control_plane.tfdemo.id
  service = {
    id = konnect_gateway_service.httpbin.id
  }
}

resource "konnect_gateway_plugin_rate_limiting" "my_rate_limiting_plugin" {
  enabled = true
  config = {
    minute = 5
    policy = "local"
  }

  protocols        = ["http", "https"]
  control_plane_id = konnect_gateway_control_plane.tfdemo.id
  route = {
    id = konnect_gateway_route.anything.id
  }
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
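&lt;p&gt;Optionally, you can add an outputs file so the identifiers you may need later (for example, when attaching a data plane) are printed after apply. This sketch only reuses the .id attributes already referenced in main.tf above:&lt;/p&gt;

```hcl
# outputs.tf (optional): surface useful identifiers after apply
output "control_plane_id" {
  description = "ID of the newly created Konnect control plane"
  value       = konnect_gateway_control_plane.tfdemo.id
}

output "httpbin_service_id" {
  description = "ID of the sample HTTPBin service"
  value       = konnect_gateway_service.httpbin.id
}
```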

&lt;p&gt;Run a terraform plan to validate what will be built.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform plan


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should have the following files in the directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma58mrpbo969hshkchzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma58mrpbo969hshkchzm.png" alt="Directory"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run terraform apply to create the resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

terraform apply


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If everything went well, you should see a freshly created Control Plane with a sample Service and Route, with the Rate Limit Plugin attached.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0mx3s16ymgp0f83hjxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0mx3s16ymgp0f83hjxu.png" alt="New CP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxw4gxw3g5ak38snzygz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxw4gxw3g5ak38snzygz.png" alt="Route with Rate Limit Plugin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;With the Konnect Terraform provider, customers can leverage existing CI/CD pipelines to apply Kong's API configuration automatically and consistently across different environments. Developer experience is something Kong will keep focusing on, so expect more tooling from Kong in the coming months!&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Kong Konnect TF provider - &lt;a href="https://github.com/Kong/terraform-provider-konnect" rel="noopener noreferrer"&gt;https://github.com/Kong/terraform-provider-konnect&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kong Konnect - &lt;a href="https://docs.konghq.com/konnect/" rel="noopener noreferrer"&gt;https://docs.konghq.com/konnect/&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>terraform</category>
      <category>kong</category>
    </item>
    <item>
      <title>Automating Kong API Gateway deployment with Flux</title>
      <dc:creator>Robin Cher</dc:creator>
      <pubDate>Thu, 27 Apr 2023 13:30:11 +0000</pubDate>
      <link>https://dev.to/robincher/deploying-kong-with-flux-48n1</link>
      <guid>https://dev.to/robincher/deploying-kong-with-flux-48n1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In recent years, GitOps has created an industry shift in how configuration change is managed. GitOps elevates the source control repository to the source of truth for configuration change management, and makes the repository the central hub of change control.  The benefits of following this development paradigm are many, for example, GitOps helps: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improve Collaboration&lt;/li&gt;
&lt;li&gt;Increase deployment reliability, stability and frequency&lt;/li&gt;
&lt;li&gt;Decrease deployment time and reduce human error&lt;/li&gt;
&lt;li&gt;Improve compliance and auditing&lt;/li&gt;
&lt;li&gt;and many others…&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these reasons, many engineering organizations are implementing GitOps, including in control of deployments to their API Gateway. &lt;/p&gt;

&lt;p&gt;One of the most popular tools in this space is &lt;a href="https://fluxcd.io/" rel="noopener noreferrer"&gt;Flux&lt;/a&gt;, which we'll be using today. In this post, we will set up Flux to demonstrate how to deploy Kong in a GitOps fashion.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Flux?
&lt;/h3&gt;

&lt;p&gt;Let's start by defining what Flux is. Flux is a tool that keeps your Kubernetes clusters in sync with sources of configuration, such as Git repositories, and automates updates to your configuration when there is new code to deploy. The declarative nature of Flux means that Kong configurations can be written as a set of facts directly in source code, with Git as the single source of truth. Essentially, Flux manages your infrastructure and platform in a way developers are already familiar with: committing changes to Git.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Kong?
&lt;/h3&gt;

&lt;p&gt;Kong is the world's most adopted API Gateway, letting you secure, manage, and extend APIs and microservices. Get started with Kong API Gateway and learn why it's one of the best API Gateways in the industry.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;Now that we understand what Flux is, let's dive into what our architecture looks like when using Flux. Below, we'll find a diagram that summarizes how Flux (with its CRDs) manages Kong using GitOps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nqeco7pimjmvblu9u6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nqeco7pimjmvblu9u6u.png" alt="Context"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As one can see from the diagram above, the Platform Engineer has one responsibility: to push code to the repository. It is at this point that the whole GitOps flow starts and triggers a number of actions that finish with the release of an API (in our case). Let's dive into it further.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack and Tooling
&lt;/h3&gt;

&lt;p&gt;Note: You will incur some cost running public cloud resources, so remember to tear them down after completing this exercise.&lt;/p&gt;

&lt;p&gt;For this walkthrough, these are the tools that we will be using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://fluxcd.io/flux/installation/" rel="noopener noreferrer"&gt;Flux v2&lt;/a&gt; (and its CLI)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/" rel="noopener noreferrer"&gt;Github &lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Access to Amazon &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Elastic Kubernetes Service (EKS)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;aws cli&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;To get started, we need to make sure that we have our stack set up, so we'll perform the following steps: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Fork this Repository, and clone it locally - &lt;a href="https://github.com/robincher/kong-flux-gitops" rel="noopener noreferrer"&gt;https://github.com/robincher/kong-flux-gitops&lt;/a&gt;. This will be the working directory for us to execute commands for this exercise &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate a GitHub Personal Access Token (PAT) - refer to the GitHub guide for details&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Deploy Kong using Flux
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Set up the EKS Cluster
&lt;/h3&gt;

&lt;p&gt;To get started, let's create an EKS cluster with eksctl &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

eksctl create cluster --name Kong-GitOps-Test-Cluster  --version 1.23 --region &amp;lt;preferred-aws-region&amp;gt;  --without-nodegroup


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Subsequently, let's create a node group for the cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

eksctl create nodegroup --cluster Kong-GitOps-Test-Cluster --name Worker-NG  --region &amp;lt;preferred-aws-region&amp;gt;  --node-type t3.medium --nodes 1 --max-pods-per-node 50


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the cluster is completely set up, let's ensure you are able to access the kube-api by running the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws eks update-kubeconfig --region &amp;lt;preferred-aws-region&amp;gt; --name Kong-GitOps-Test-Cluster


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's check that the cluster is up and that you have access to it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get nodes


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There should be one node up and running&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-49-64.ap-southeast-1.compute.internal   Ready    &amp;lt;none&amp;gt;   6d2h   v1.23.7


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Creating a Remote RDS Postgres (For Kong Control Plane)
&lt;/h3&gt;

&lt;p&gt;Generally, we advise customers to use a managed Postgres service when running Kong in production-like environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupohindyqhut3972btbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupohindyqhut3972btbl.png" alt="RDS-Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remember to pre-create an initial database. When using a remote database like RDS, Kong will not automatically initialize the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrmyywj25okskkehr7q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrmyywj25okskkehr7q4.png" alt="RDS-Kong-DB"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, create a secret for the DB password&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create secret generic kong-db-password --from-literal=postgresql-password=xxxxx -n kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
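&lt;p&gt;Keep in mind that Kubernetes Secrets are only base64-encoded, not encrypted. You can reproduce the encoding locally to see what actually gets stored, using the placeholder value from the command above:&lt;/p&gt;

```shell
# Kubernetes stores Secret values base64-encoded, not encrypted.
# Reproduce the encoding of the placeholder password locally:
printf 'xxxxx' | base64
# -> eHh4eHg=

# ...and decode it back:
printf 'eHh4eHg=' | base64 -d; echo
# -> xxxxx
```

&lt;p&gt;Anyone with read access to Secrets in the namespace can decode them the same way, which is why access to the kong namespace should be restricted.&lt;/p&gt;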

&lt;p&gt;Let’s test the connectivity from the EKS cluster to Postgres by first creating a temporary postgres pod &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Subsequently, we can run a test command to check the connectivity.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

psql --host=postgres.internal.somewhere --port=5432 --username=konger --password --dbname=kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Lastly, we can create an ExternalName service for the Postgres host.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create service externalname  kong-db-postgresql  --external-name postgres.internal.somewhere -n kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
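&lt;p&gt;In keeping with the GitOps approach, the same ExternalName service can also be expressed as a manifest and checked into the repository. A sketch equivalent to the kubectl command above:&lt;/p&gt;

```yaml
# Declarative equivalent of the kubectl command above
apiVersion: v1
kind: Service
metadata:
  name: kong-db-postgresql
  namespace: kong
spec:
  type: ExternalName
  externalName: postgres.internal.somewhere
```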
&lt;h3&gt;
  
  
  Deploying Kong via HelmRepository and HelmRelease
&lt;/h3&gt;

&lt;p&gt;Now we should have our cluster properly bootstrapped. &lt;/p&gt;

&lt;p&gt;Now it's time to briefly explain the configurations. For our experiment, we will deploy Kong using Helm. Flux supports helm deployments via their HelmRepository and HelmRelease CRDs.&lt;/p&gt;

&lt;p&gt;Let’s go into the folder you forked and cloned locally.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd ~/yourpath/kong-flux-gitops&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;HelmRepository defines the source where Flux will attempt to pull the helm charts from. &lt;/p&gt;

&lt;p&gt;If you already forked the repository, you will see a HelmRepository CRD pre-configured to pull Kong’s Helm charts. At this stage, you do not need to modify anything in this file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#sources/kong.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: kong
spec:
  url: https://charts.konghq.com
  interval: 10m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s look at the HelmRelease CRD that we will be updating, and go through the configuration step by step.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cat ~/yourpath/kong-flux-gitops/platform/kong-gw/release.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Understanding HelmRelease CRD
&lt;/h3&gt;

&lt;p&gt;From Flux’s official documentation: HelmRelease defines a Flux resource for controller-driven reconciliation of Helm releases via Helm actions such as install, upgrade, test, uninstall, and rollback. As such, we need to tell Flux what to deploy and configure for Kong.&lt;/p&gt;

&lt;p&gt;The first step is to specify the chart to pull for the deployment under spec.chart, indicating the HelmRepository created in the previous step and the corresponding chart version.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#platform/kong-gw/release.yaml

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
spec:
  interval: 5m
  chart:
    spec:
      chart: kong
      version: 2.15.3
      sourceRef:
        kind: HelmRepository
        name: kong
        namespace: flux-system
      interval: 1m



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Subsequently, the next section is spec.values, where you can set any of the Helm values from Kong's official Helm chart, which is located in Kong's official GitHub repository.&lt;/p&gt;

&lt;p&gt;So, without further ado, let’s start by setting the image repository and tag for the Kong container image that we will be deploying. The result is shown below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#platform/kong-gw/release.yaml

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
spec:
  chart:
  ….
  ….
  values:
    image:
      repository: kong/kong-gateway
      tag: "3.1.1.3"
    replicaCount: 1
---


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Configuring Kong Environment Variables
&lt;/h4&gt;

&lt;p&gt;Within the file, you may notice the section spec.values.env. This section allows the user to override the default Kong Gateway configuration parameters via environment variables, which is a very important feature documented in the Kong Gateway configuration reference. An example of some of the environment variables you may wish to override is shown below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#platform/kong-gw/release.yaml
……
   env:
      prefix: /kong_prefix/
      database: postgres
      # Pre-created Kubernetes ExternalName that pointing to the PG Host
      pg_host: kong-db-postgresql.kong-enterprise.svc.cluster.local
      pg_port: 5432
      pg_user: konger
      pg_database: kong # Pre-create in RDS First
      pg_password: 
        valueFrom:
          secretKeyRef:
            name: kong-db-password # Pre-created previously
            key: postgresql-password # Pre-created previously

      # Logs Output
      log_level: warn

      # Configuring Admin GUI (Kong Manager) and Admin API
      admin_api_uri: http://admin.customer.kongtest.net:8001 
      admin_gui_url: http://manager.customer.kongtest.net:8002

      # Configuring Portal Settings 
      portal_gui_protocol: http
      portal_api_url: http://portalapi.customer.kongtest.net:8004
      portal_gui_host: portal.customer.kongtest.net:8003   
      portal_session_conf:
        valueFrom:
          secretKeyRef:
            name: kong-session-config
            key: portal_session_conf

      portal: on



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h5&gt;
  
  
  Configuring Admin GUI (Kong Manager) and Admin API
&lt;/h5&gt;

&lt;p&gt;Within this section, you'll find some important settings such as &lt;strong&gt;admin_api_uri&lt;/strong&gt; and &lt;strong&gt;admin_gui_url&lt;/strong&gt;, where you have to indicate two hostnames that the API operator can access when using Kong Manager. admin_gui_url is the value Kong uses to set the “Access-Control-Allow-Origin” header for CORS purposes, and it decides which domain can access the Kong Admin API via a web browser.&lt;/p&gt;
&lt;h5&gt;
  
  
  Configuring Portal Settings (Enterprise)
&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;portal_gui_host&lt;/strong&gt; is the hostname for your Developer Portal, and portal_api_url is the address on which your Kong Dev Portal API is accessible by Kong. Both settings are required if you are mapping your portal to your own DNS hostname.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;portal_session_conf&lt;/strong&gt; is the portal session configuration, used to create a session cookie that authenticates Dev Portal users. A sample snippet that creates a portal_session_conf is found in scripts/kong-gw-initial.sh.&lt;/p&gt;

&lt;p&gt;More information about portal session configuration can be found here: &lt;a href="https://docs.konghq.com/gateway/latest/kong-enterprise/dev-portal/authentication/sessions/" rel="noopener noreferrer"&gt;https://docs.konghq.com/gateway/latest/kong-enterprise/dev-portal/authentication/sessions/&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Configuring Enterprise License and Default Kong Manager Secret
&lt;/h4&gt;

&lt;p&gt;For an enterprise customer, one needs to create a license secret in the cluster by running the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl create secret generic kong-enterprise-license --from-file=license=license.json -n kong --dry-run=client -o yaml | kubectl apply -f -


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the Helm configuration, indicate the license_secret you just created; in this case, the secret name is kong-enterprise-license.&lt;/p&gt;

&lt;p&gt;For a better security posture, we enable RBAC to secure Kong Manager with basic-auth. To achieve this, we have to indicate the authentication mechanism via enterprise.rbac.admin_gui_auth, and enterprise.rbac.session_conf_secret for the session configuration.&lt;/p&gt;

&lt;p&gt;You can read more about this here: &lt;a href="https://docs.konghq.com/gateway/latest/kong-manager/auth/basic/" rel="noopener noreferrer"&gt;https://docs.konghq.com/gateway/latest/kong-manager/auth/basic/&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#platform/kong-gw/release.yaml
……
   enterprise:
      enabled: true
      # CHANGEME: https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-license
      license_secret: kong-enterprise-license
      vitals:
        enabled: true
      portal:
        enabled: true
      rbac:
        enabled: true
        admin_gui_auth: basic-auth
        session_conf_secret: kong-session-config
        admin_gui_auth_conf_secret: kong-session-config
      smtp:
        enabled: false


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
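&lt;p&gt;The configuration above references a kong-session-config secret that must already exist in the cluster. A hypothetical sketch of that Secret is shown below; the portal_session_conf key is the one referenced earlier in the env section, while the admin_gui_session_conf key name and the placeholder values are assumptions to be checked against the Kong Helm chart README:&lt;/p&gt;

```yaml
# Hypothetical placeholder: generate the real session secret per
# the Kong Helm chart README before applying this
apiVersion: v1
kind: Secret
metadata:
  name: kong-session-config
  namespace: kong
stringData:
  admin_gui_session_conf: '{"secret":"change-me","cookie_secure":false}'
  portal_session_conf: '{"secret":"change-me","cookie_name":"portal_session","cookie_secure":false}'
```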
&lt;h4&gt;
  
  
  Configuring Kong Services
&lt;/h4&gt;

&lt;p&gt;As you may know, a single Kong container comprises several core services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kong Manager - Admin UI for managing APIs; interacts with the Kong Admin API behind the scenes&lt;/li&gt;
&lt;li&gt;Kong Admin API - Admin API for managing APIs&lt;/li&gt;
&lt;li&gt;Kong Developer Portal - Developer portal to browse and search API documentation, and test API endpoints&lt;/li&gt;
&lt;li&gt;Kong Developer Portal API - API for interacting with the Dev Portal&lt;/li&gt;
&lt;li&gt;Kong Proxy - Proxy service that processes API requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To deploy these services, it is as simple as enabling them in the Helm values. For this exercise, we will only enable the Kong Proxy and Admin API.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#platform/kong-gw/release.yaml
……
   admin:
      enabled: true
      type: LoadBalancer
      annotations: 
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 
        service.beta.kubernetes.io/aws-load-balancer-internal: "false"
   manager:
      enabled: false
   proxy:
      # Creating a Kubernetes service for the proxy
      enabled: true
      type: LoadBalancer 
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 
        service.beta.kubernetes.io/aws-load-balancer-internal: "false"
  portal:
      enabled: false

  portalapi:
      enabled: false


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The full sample HelmRelease CRD can be accessed here : &lt;a href="https://github.com/robincher/kong-flux-gitops/blob/main/platform/kong-gw/release.yaml" rel="noopener noreferrer"&gt;https://github.com/robincher/kong-flux-gitops/blob/main/platform/kong-gw/release.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we can update and check in the above two configuration files at the following locations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HelmRepository &lt;a href="https://github.com/your-userid/kong-flux-gitops/blob/main/platform/sources/kong.yaml" rel="noopener noreferrer"&gt;https://github.com/your-userid/kong-flux-gitops/blob/main/platform/sources/kong.yaml&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;HelmRelease &lt;a href="https://github.com/your-userid/kong-flux-gitops/blob/main/platform/kong-gw/release.yaml" rel="noopener noreferrer"&gt;https://github.com/your-userid/kong-flux-gitops/blob/main/platform/kong-gw/release.yaml&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Bootstrapping our cluster with Flux + Kong
&lt;/h3&gt;

&lt;p&gt;After the cluster has been created and the configuration updated, we need to bootstrap the cluster with the Flux CLI. Bootstrapping is a Flux process used to install the Flux CRDs and then deploy a sample application, which in our case is Kong. To find out more about what bootstrapping means, see the &lt;a href="https://fluxcd.io/flux/get-started/" rel="noopener noreferrer"&gt;Flux documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before running the bootstrap command, we need a GitHub Personal Access Token (PAT). Without a PAT, the Flux CLI cannot access the GitHub API to perform the necessary actions, such as creating a fresh source repository if one doesn’t exist.&lt;/p&gt;

&lt;p&gt;The commands to bootstrap Kong are shown below.&lt;/p&gt;

&lt;p&gt;First, create an environment variable holding your GitHub PAT. This token will be used by Flux to interact with the corresponding GitHub repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export GITHUB_TOKEN=&amp;lt;your-token&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the following command to bootstrap Flux into the remote EKS Cluster&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# --path points to the folder that contains the manifests
flux bootstrap github \
 --owner=your-github-id \
 --repository=kong-flux-gitops \
 --path=clusters/staging \
 --personal


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once you've bootstrapped the cluster, you should have the Flux CRDs installed and see Kong pods starting up in the cluster.&lt;/p&gt;

&lt;p&gt;The flux bootstrap command performs the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new Git repository for our manifests is created on the source repository&lt;/li&gt;
&lt;li&gt;A flux-system namespace with all Flux components is configured on our cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get pods -n flux-system


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Expected output with all Flux’s components up and running&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-56fc8dd99d-sf4ml           1/1     Running   0          4d2h
kustomize-controller-bc455c688-mwfv4       1/1     Running   0          4d2h
notification-controller-644f548fb6-69wr4   1/1     Running   0          4d2h
source-controller-7f66565fb8-q4r7j         1/1     Running   0          4d2h


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The Flux controllers are set up to sync with our new git repository&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

flux get sources git


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Expected output, showing the Git source synced and ready:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME        REVISION        SUSPENDED   READY   MESSAGE
flux-system main/43187e2    False       True    stored artifact for revision 'main/43187e206a7d6f3e06406afc17b9579dec3ee04d'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the sync is established, any Kubernetes manifests checked into the source repository will be automatically deployed to the target cluster.&lt;/p&gt;
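&lt;p&gt;Under the hood, this sync is driven by a GitRepository source and a Kustomization that the bootstrap committed to the repository. A minimal sketch of those manifests (the URL, intervals and field values here are illustrative assumptions, not the exact generated files) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sketch of the flux-system sync manifests (values are assumptions)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/your-github-id/kong-flux-gitops
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/staging
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;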

&lt;p&gt;Let’s check if all the Kong pods are up and running&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get pods -n kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Expected output&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                              READY   STATUS      RESTARTS   AGE
kong-kong-665fc7d8db-m67lb        1/1     Running     0          2m2s
kong-kong-init-migrations-62c7x   0/1     Completed   0          2m2s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl get services -n kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Expected output&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                         AGE
kong-kong-admin      LoadBalancer   10.100.0.98     xx2.elb.ap-southeast-1.amazonaws.com   8001:31386/TCP,8444:30909/TCP   2m21s
kong-kong-proxy      LoadBalancer   10.100.140.25   xx1.elb.ap-southeast-1.amazonaws.com   80:31675/TCP,443:30040/TCP      2m21s




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Making Kong Config Updates
&lt;/h3&gt;

&lt;p&gt;As shown above, there is only one Kong pod. What can we do to scale it up via GitOps? &lt;/p&gt;

&lt;p&gt;Let’s update the replica count to 2.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

values:
    image:
      repository: kong/kong-gateway
      tag: "3.1.1.3"
    replicaCount: 2 # Update the replica count from 1 to 2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
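&lt;p&gt;For context, the values block above sits inside a Flux HelmRelease for the Kong chart. A minimal sketch of the surrounding resource (names, chart reference and intervals are assumptions) might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sketch of kong-gw/release.yaml (names and intervals are assumptions)
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
  namespace: kong
spec:
  interval: 5m
  chart:
    spec:
      chart: kong
      sourceRef:
        kind: HelmRepository
        name: kong
        namespace: flux-system
  values:
    image:
      repository: kong/kong-gateway
      tag: "3.1.1.3"
    replicaCount: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;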

&lt;p&gt;Commit the code, and push to the remote repository. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git add kong-gw/release.yaml
git commit -m "chore: Bump 2 x Kong pod"

git push origin main



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Observe the number of Kong pods by running the command below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


kubectl get pods -n kong
NAME                                      READY   STATUS      RESTARTS   AGE
kong-kong-665fc7d8db-m67lb                1/1     Running     0          5m
kong-kong-665fc7d8db-nsqm7                1/1     Running     0          26s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should now see two Kong pods running. This happened because Flux pulled the new manifest from the source repository it is synchronized with, and then updated the state of the cluster accordingly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleaning-Up
&lt;/h3&gt;

&lt;p&gt;Before you receive any bill shock, do remember to destroy the EKS cluster at the conclusion of your experiment. Below are the steps to clean up and destroy the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uninstall Flux&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 flux uninstall --namespace=flux-system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Clean up Kong Resource&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Do a full clean-up of any remaining Kong resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 kubectl delete ns kong

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Delete the node groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eksctl delete nodegroup --cluster Kong-GitOps-Test-Cluster --name &amp;lt;nodegroup-name&amp;gt; --region &amp;lt;aws-region&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delete the cluster&lt;/strong&gt;&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 eksctl delete cluster --name Kong-GitOps-Test-Cluster --region &amp;lt;aws-region&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The key message here is that Kong is flexible and agnostic enough to be installed using several methods. With GitOps tooling like Flux, you can support all Kong Kubernetes deployment flavours, with their operators or controllers, and have them consistently deployed across clusters.&lt;/p&gt;

&lt;p&gt;With GitOps, you can easily automate configurations across multiple clusters. This is critical if, in some unfortunate event, you have to rebuild your current cluster at another site. With Flux GitOps, you can easily replicate the configuration by bootstrapping the new cluster with Flux and pointing it to the existing source repository, as we did in this exercise. As a result, downtime is decreased and services are impacted minimally.&lt;/p&gt;

&lt;p&gt;Kong supports the following deployment flavours for Kubernetes, and you can easily use them with Flux GitOps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Helm
&lt;/li&gt;
&lt;li&gt;Kubernetes Custom Resource Definition (CRD)&lt;/li&gt;
&lt;li&gt;Kubernetes Operators (&lt;a href="https://incubator.konghq.com/p/gateway-operator/" rel="noopener noreferrer"&gt;Incubator&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sample Repository can be found here:  &lt;a href="https://github.com/robincher/kong-flux-gitops" rel="noopener noreferrer"&gt;https://github.com/robincher/kong-flux-gitops&lt;/a&gt;&lt;/p&gt;

</description>
      <category>flux</category>
      <category>gitops</category>
      <category>kong</category>
    </item>
    <item>
      <title>Securing your site via OIDC, powered by Kong and KeyCloak</title>
      <dc:creator>Robin Cher</dc:creator>
      <pubDate>Wed, 19 Jan 2022 06:30:33 +0000</pubDate>
      <link>https://dev.to/robincher/securing-your-site-via-oidc-powered-by-kong-and-keycloak-2ccc</link>
      <guid>https://dev.to/robincher/securing-your-site-via-oidc-powered-by-kong-and-keycloak-2ccc</guid>
      <description>&lt;p&gt;&lt;em&gt;Quick sharing on how you can further secure your api or endpoints with OIDC, and powered by Kong and Keycloak. The examples shared are all open-source solutions.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As we start shipping services out to the open world, one of the primary considerations is how we can prevent malicious attacks, since we are now in the wild west. Thankfully, we had the opportunity to &lt;strong&gt;tinker and experiment with open-source solutions&lt;/strong&gt; due to our work nature at the time. In this post, I would like to share how we incorporated &lt;strong&gt;Kong Ingress Controller, KeyCloak and Kubernetes&lt;/strong&gt; to have an initial OIDC flow fronting our external services.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Context
&lt;/h2&gt;

&lt;p&gt;Before we dive deeper, let's take a closer look at how OIDC verifies the authenticity of a user before allowing the request to be fulfilled.&lt;/p&gt;

&lt;h3&gt;
  
  
  OIDC (OpenID Connect) Overview
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Extracted from : &lt;a href="https://github.com/nokia/kong-oidc" rel="noopener noreferrer"&gt;https://github.com/nokia/kong-oidc&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh86wh5w7ukelooeq9mjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh86wh5w7ukelooeq9mjg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;p&gt;Before we begin, we need to have the following :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kong - An API Gateway (community edition is open source and free)&lt;/li&gt;
&lt;li&gt;Kong OIDC Plugin - Open-source OIDC plugin for Kong, maintained by the &lt;a href="https://github.com/revomatico" rel="noopener noreferrer"&gt;community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kong JWT KeyCloak Plugin - Plugin for Kong so as to allow JWT token to be issued and validated by Keycloak. Maintained by the &lt;a href="https://github.com/BGaunitz/kong-plugin-jwt-keycloak" rel="noopener noreferrer"&gt;community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Keycloak - An OpenID Connect provider (open source)&lt;/li&gt;
&lt;li&gt;Kubernetes - Open-source container orchestration system where we will deploy Kong and KeyCloak&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Baking the OIDC Plugin with Kong Base Image
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt; : This is a community maintained OIDC Plugin. If required, please consider checking out the official OIDC Plugin that is only available with &lt;a href="https://docs.konghq.com/hub/kong-inc/openid-connect/" rel="noopener noreferrer"&gt;Kong Enterprise&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we are using the open-source plugin by &lt;a href="https://github.com/revomatico/kong-oidc" rel="noopener noreferrer"&gt;Revomatico&lt;/a&gt;, we need to bake the plugin into the base Kong image.&lt;/p&gt;

&lt;p&gt;Sample Dockerfile&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

FROM kong/kong:2.7.0

ENV OIDC_PLUGIN_VERSION=1.2.3-2
ENV JWT_PLUGIN_VERSION=1.1.0-1
ENV GIT_VERSION=2.24.4-r0
ENV UNZIP_VERSION=6.0-r7
ENV LUAROCKS_VERSION=2.4.4-r1


USER root
RUN apk update &amp;amp;&amp;amp; apk add git=${GIT_VERSION} unzip=${UNZIP_VERSION} luarocks=${LUAROCKS_VERSION}
RUN luarocks install kong-oidc

RUN git clone --branch v1.2.3-2 https://github.com/revomatico/kong-oidc.git
WORKDIR /kong-oidc
RUN luarocks make

RUN luarocks pack kong-oidc ${OIDC_PLUGIN_VERSION} \
     &amp;amp;&amp;amp; luarocks install kong-oidc-${OIDC_PLUGIN_VERSION}.all.rock

WORKDIR /
RUN git clone --branch 20200505-access-token-processing https://github.com/BGaunitz/kong-plugin-jwt-keycloak.git
WORKDIR /kong-plugin-jwt-keycloak
RUN luarocks make

RUN luarocks pack kong-plugin-jwt-keycloak ${JWT_PLUGIN_VERSION} \
     &amp;amp;&amp;amp; luarocks install kong-plugin-jwt-keycloak-${JWT_PLUGIN_VERSION}.all.rock

USER kong


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Build and push the image to a registry&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Build the container with any new changes
docker build -t &amp;lt;reponame&amp;gt;/kong-oidc:&amp;lt;tag&amp;gt; -f Dockerfile . 

# Run the container in detached mode
docker run -d --name kong-oidc &amp;lt;reponame&amp;gt;/kong-oidc:&amp;lt;tag&amp;gt;       

# Pushing the container image to a registry
docker push &amp;lt;reponame&amp;gt;/kong-oidc:&amp;lt;tag&amp;gt;   


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Understand the Kong Cluster Plugin Configurations
&lt;/h3&gt;

&lt;p&gt;Kong Ingress Controller allows us to configure Kong-specific features using several Custom Resource Definitions (CRDs).&lt;/p&gt;

&lt;p&gt;These are the plugin configurations that we need to take note of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;oidc&lt;/strong&gt; - This plugin is used to communicate with the Keycloak identity provider and is required if you'd like to enable SSO (recommended) for your ingress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;request-transformer&lt;/strong&gt; - To strip off unnecessary headers upon authentication with the identity platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;cors&lt;/strong&gt; - Allow Cross Site origin at global level&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
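&lt;p&gt;To illustrate, a request-transformer KongClusterPlugin that strips a header could be declared as follows (the header name here is an illustrative assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sketch of a request-transformer plugin (header name is an example)
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: request-transformer
  annotations:
    kubernetes.io/ingress.class: "kong"
plugin: request-transformer
config:
  remove:
    headers:
      - x-userinfo # example header set during authentication
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;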
&lt;h3&gt;
  
  
  Deploying into Kubernetes
&lt;/h3&gt;

&lt;p&gt;Example manifests can be found &lt;a href="https://github.com/robincher/kong-oidc-keycloak-boilerplate/tree/master/kubernetes" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Deploy the previously baked Kong image (with OIDC plugin)
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f kong-deployment.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  2. Deploy KeyCloak
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f keycloak-deployment.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  3. KeyCloak Configurations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Create a new Realm in KeyCloak&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nfge78i60lhosn780d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nfge78i60lhosn780d0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new Kong client in the realm, e.g. kong-oidc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28lfwv7yz6adqoprgugs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28lfwv7yz6adqoprgugs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client Configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go to &lt;strong&gt;Clients&lt;/strong&gt;, and then click on &lt;strong&gt;Settings&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Make the following updates&lt;/p&gt;

&lt;p&gt;Access Type: &lt;strong&gt;Confidential&lt;/strong&gt;&lt;br&gt;
Valid Redirect URIs: *&lt;br&gt;
Web Origin: localhost (Allowed CORS origin)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the Client ID, and then go to &lt;strong&gt;Credentials&lt;/strong&gt; to get the &lt;strong&gt;Secret&lt;/strong&gt; value.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0d9cggsqnabak60c50n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa0d9cggsqnabak60c50n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve OpenID Endpoint Configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go to &lt;strong&gt;Realm Settings&lt;/strong&gt;, and select &lt;strong&gt;General&lt;/strong&gt;. Click on &lt;strong&gt;OpenID Endpoint Configuration&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09ujgj518gqwxh33g765.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09ujgj518gqwxh33g765.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Override the values of the oidc Kong plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: oidc
  annotations:
    kubernetes.io/ingress.class: "kong"
  labels:
    global: "false"
disabled: false # optionally disable the plugin in Kong
plugin: oidc
config: # configuration for the plugin
  client_id: kong-oidc # Client ID created earlier
  client_secret: xxxxxxxx  # Client Secret Copied
  realm: kong
  discovery: https://localhost/.well-known/openid-configuration # OpenID Endpoint Configuration Copied
  scope: openid
  redirect_after_logout_uri : https://localhost/auth/realms/kong/protocol/openid-connect/logout?redirect_uri=https://localhost


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  4. Deploy the required Cluster Plugins as mentioned above
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f kong-crds.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  5. Testing your site with OIDC
&lt;/h4&gt;

&lt;p&gt;We can now test a sample ingress that is intercepted by Kong Ingress Controller. All you have to do is indicate the plugin annotations on the ingress.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# sample-oidc.yaml
....
....
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver-oidc
  namespace: default
  annotations:
    konghq.com/plugins: request-transformer,oidc #indicate here
spec:
  ingressClassName: kong
  rules:
    - host: echo-oidc.localhost.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: 
                name: echoserver
                port: 
                  number: 80


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should be prompted to sign in when attempting to access the site, and you have successfully implemented a minimal OIDC solution using open-source technology!&lt;/p&gt;

&lt;h3&gt;
  
  
  Learnings
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We chose Kong because of its flexibility (we have only scratched the surface of what it is capable of as a full-fledged API gateway) and light weight, which allows us to make changes quickly. Additionally, it is cloud-native, which eases the effort of deployment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;KeyCloak is the de-facto IAM solution we have in our lab, and integrating it is straightforward enough. We don't really have to re-engineer much in the near future when we onboard more types of user pools (social logins, Azure Active Directory, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If possible, start with something small to see if it works as intended, especially when there are many inter-connected dots. It is always good to retain some technical acumen internally to allow better decision-making when engaging with our software partners. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;This work would not be possible without the contributions of the following teammates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/angweeliang" rel="noopener noreferrer"&gt;Wee Liang - DevOps Engineer @ Thales Airlab&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/arunsudhakar" rel="noopener noreferrer"&gt;Arun - DevOps &amp;amp; Integration @ Thales Airlab&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/kelvin-neo" rel="noopener noreferrer"&gt;Kelvin - Software Engineer @ Thales Airlab&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/revomatico/kong-oidc" rel="noopener noreferrer"&gt;Revomatico OIDC Plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/robincher/kong-oidc-keycloak-boilerplate" rel="noopener noreferrer"&gt;Sample Repo with Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.keycloak.org/" rel="noopener noreferrer"&gt;KeyCloak&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://konghq.com/kong/" rel="noopener noreferrer"&gt;Kong API Gateway&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>oidc</category>
      <category>kong</category>
      <category>keycloak</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Shifting Left - Secret as Code with Flux V2 and Mozilla SOPS</title>
      <dc:creator>Robin Cher</dc:creator>
      <pubDate>Mon, 10 Jan 2022 06:11:29 +0000</pubDate>
      <link>https://dev.to/robincher/shifting-left-secret-as-code-with-flux-v2-and-mozilla-sops-19cg</link>
      <guid>https://dev.to/robincher/shifting-left-secret-as-code-with-flux-v2-and-mozilla-sops-19cg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A question that every engineer faces during the software development lifecycle is, "Where should I store my secrets?"&lt;/p&gt;

&lt;p&gt;In this article, I will be sharing how my team "shifted left" to allow engineers to self-service the management of their secrets. The guiding principle we adhere to is to be as cloud-agnostic as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concepts
&lt;/h2&gt;

&lt;p&gt;Before we dive deeper, let's align some concepts first. When we discuss secret management for applications, there are various ways secrets can be managed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Writing them directly in source code as plain text. This is an anti-pattern and a &lt;strong&gt;bad security practice&lt;/strong&gt;, although it will work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Putting the secrets somewhere else, like a cloud provider solution, Vault, file systems on the platform, or environment variables stored in the CI/CD system. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keeping them in the same code repository in an encrypted form. Now the secrets lifecycle is managed via pull requests in a Git-based workflow. This is the secret-as-code approach this article covers, where our Git repository is the source of truth for secrets and configs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Before we start, we need to bootstrap Flux onto an existing cluster first.&lt;/p&gt;

&lt;p&gt;Follow the instructions here if you need to: &lt;a href="https://fluxcd.io/docs/get-started/"&gt;https://fluxcd.io/docs/get-started/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For my experiment, these are the technologies I used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Flux v2&lt;/li&gt;
&lt;li&gt;Gitlab &lt;/li&gt;
&lt;li&gt;Microsoft Azure Kubernetes Service &lt;/li&gt;
&lt;li&gt;Microsoft Key Vault&lt;/li&gt;
&lt;li&gt;Azure CLI&lt;/li&gt;
&lt;li&gt;SOPS&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;Below are the steps on how we can bootstrap an existing Kubernetes cluster to have the capability of decrypting SOPS secrets. &lt;/p&gt;

&lt;p&gt;For my experimentation, I will be using Azure Key Vault for cryptographic operations. SOPS also supports other cloud providers' key management services, as well as self-generated PGP/GPG keys.&lt;/p&gt;

&lt;h3&gt;
  
  
  High-Level Setup
&lt;/h3&gt;

&lt;p&gt;Here are the summarised steps to give Flux the capability to perform cryptographic operations. Flux's controller will be able to decrypt SOPS secrets when it applies them to the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Pod-Identity&lt;/li&gt;
&lt;li&gt;Create Role Assignments for Kubelet&lt;/li&gt;
&lt;li&gt;Create a managed identity&lt;/li&gt;
&lt;li&gt;Create Azure KeyVault and Signing Key&lt;/li&gt;
&lt;li&gt;Configure in-cluster secrets decryption&lt;/li&gt;
&lt;/ul&gt;
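&lt;p&gt;The last step above wires SOPS into Flux. A sketch of the in-cluster decryption setting on a Flux Kustomization (resource names and paths are assumptions) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sketch of a Kustomization with SOPS decryption enabled (names assumed)
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;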

&lt;p&gt;Detailed set-up : &lt;br&gt;
&lt;a href="https://techcommunity.microsoft.com/t5/azure-global/gitops-and-secret-management-with-aks-flux-cd-sops-and-azure-key/ba-p/2280068"&gt;https://techcommunity.microsoft.com/t5/azure-global/gitops-and-secret-management-with-aks-flux-cd-sops-and-azure-key/ba-p/2280068&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sample Repository:&lt;br&gt;
&lt;a href="https://github.com/robincher/bluesky-flux-sops-azure-template"&gt;https://github.com/robincher/bluesky-flux-sops-azure-template&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Checking in Secrets
&lt;/h2&gt;

&lt;p&gt;Before continuing, we need to &lt;a href="https://github.com/mozilla/sops/releases"&gt;install sops&lt;/a&gt; locally on our machine first.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a sample secret
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sample-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demoapp-credentials
  namespace: default
type: Opaque
stringData:
  username: admin
  password: xxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Encrypting the secret &lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Run the SOPS command with the required parameters
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sops -e --azure-kv &amp;lt;kv_key_address&amp;gt; --encrypted-regex "^(data|stringData)$" sample-secret.yaml &amp;gt; sample-secret-enc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Important flags to provide:&lt;/p&gt;

&lt;p&gt;a. azure-kv : the Azure Key Vault key address; make sure you or the machine have access to it&lt;/p&gt;

&lt;p&gt;b. encrypted-regex: specifies which parts of the secret file are to be encrypted.&lt;/p&gt;
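&lt;p&gt;Because the regex is anchored with ^ and $, only keys named exactly data or stringData are encrypted; everything else (metadata, type, and so on) stays in plaintext. A quick shell sketch of how the regex selects keys (the key list here is illustrative):&lt;/p&gt;

```shell
# Demonstrates which top-level keys the regex ^(data|stringData)$ selects.
# sops applies the regex to each key name in the YAML document.
for key in data stringData metadata type; do
  if printf '%s' "$key" | grep -Eq '^(data|stringData)$'; then
    echo "$key: encrypted"
  else
    echo "$key: left in plaintext"
  fi
done
```

Running this prints "encrypted" only for data and stringData, which mirrors how the encrypted secret shown later in this post keeps its metadata readable.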

&lt;ol&gt;
&lt;li&gt;Creating a default SOPS setting&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alternatively, we can create a default SOPS config that specifies &lt;strong&gt;what to use for encryption&lt;/strong&gt; and &lt;strong&gt;what is to be encrypted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a .sops.yaml with the following values. We will encrypt all blocks under the stringData or data key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .sops.yaml
creation_rules:
  - path_regex: .*.yaml
    encrypted_regex: ^(data|stringData)$
    azure_keyvault: https://demo123.vault.azure.net/keys/demo-secret-key/c12231211
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default SOPS config applies to the folder it resides in and its sub-folders.&lt;/p&gt;

&lt;p&gt;Now we just have to run the following command to encrypt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sops -e sample-secret.yaml &amp;gt; sample-secret-enc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get the following sample secret upon encryption&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sample-secret-enc.yaml
apiVersion: v1
kind: Secret
metadata:
    name: demoapp-credentials
    namespace: default
type: Opaque
stringData:
    username: ENC[AES256_GCM,data:21q4bo0=,iv:LOLxXQurjQR6cu9heQlZDdmhNgYO6VCBybbQHV6rO0w=,tag:58ep32CDrlCFuuDnD65VEQ==,type:str]
    password: ENC[AES256_GCM,data:oTZDkadQKL45dA==,iv:5VVbXC55xTVwH/n3t5gtKNtlkB3q7t8lW7Jw1czNSL0=,tag:WuqdubjTu6mQN5x1b3zDyw==,type:str]
sops:
    kms: []
    gcp_kms: []
    azure_kv:
        - vault_url: https://demo123.vault.azure.net
          name: demo-secret-key
          version: b7bc85c1a4ef4180be9d1de46725304c
          created_at: "2021-04-23T14:22:15Z"
          enc: KuFxRbcge198GU7hwHs078JNd_1EFtvcFqQ6bOLJDYMnWaW0kSbeD4DCxY0jX9MA17Rv3UMKHGfImgEbNfXGGIh7UucLPygpiuUyn9I73ClSQQ4trc4bD2yVkonCMwz5-0MiPVC3muhQpn3KjhThSucOgjhBnqQy_zzzzTeUP9PWi1pSp1jc3S2BxQIuKy09-oEakQogU4BRy55219befizYN7EFe8mstSIkvpksqGxKccH6dQum2k-OqsBUH2jkxiVgi5CEU35COy0pNWVJpZGuOaDMkGGqo7lrT4XKEGxtFKvEDxr6bTfjjQafuuxW9-4a9ZtaBkHCKopk66R9dcQ
    hc_vault: []
    age: []
    lastmodified: "2021-01-23T14:22:18Z"
    mac: ENC[AES256_GCM,data:as5mfREh5xdeiwbchkiiBS96tGuLJnEqme6VdDrPWKV9R0A4ATIM/1+HcbdAzGBXb8TmhO71hZMl3IvmX9DrNA/tvpPwFvLCkDfNhoWXJoXRRv6aRR7AJPlfcXkVMxxYaRDqz+ugAJkZG+5dhYeh1QAmiswjZOXaINEOw3Jf5dI=,iv:p/M2OhPdh2Naxu37Jt7EwiLf9Eb9OgExsmXX3hSUOJQ=,tag:fVqJ2jy++6GxHBPGXZHmHw==,type:str]
    pgp: []
    encrypted_regex: ^(data|stringData)$
    version: 3.7.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;Let's check in the secret and watch the GitOps magic happen&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm sample-secret.yaml
git add sample-secret-enc.yaml
git commit -m "chore: Add encrypted secret"
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check out the secret in Kubernetes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe secret -n default demoapp-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Observation and Learning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We need to further educate developers on this new way of working, as the majority of them are used to checking secrets into an external secret manager or storing them as CI/CD variables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mozilla SOPS with Flux allows us to version our secrets, as it follows a Git-based workflow and provides audit trails through the commit messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be deployed universally in both cloud and on-premises environments, as long as the cryptographic keys are backed up and kept safe. We acknowledge that using Azure Key Vault might require additional work if we move to another provider's key management service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using a cloud-based KMS alleviates the risk of the decryption key being leaked (it is typically stored on the same machine), as there are access controls in place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is not a silver bullet to address all kinds of secret requirements, as what works well for us might go against your way of working. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://fluxcd.io/docs/guides/mozilla-sops"&gt;Flux with SOPS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/t5/azure-global/gitops-and-secret-management-with-aks-flux-cd-sops-and-azure-key/ba-p/2280068"&gt;GitOps with AKS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/robincher/bluesky-flux-sops-azure-template"&gt;Sample Repository Flux and AKS&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>gitops</category>
      <category>sops</category>
      <category>azure</category>
      <category>flux</category>
    </item>
    <item>
      <title>Moving to Pomerium Identity Aware Proxy</title>
      <dc:creator>Robin Cher</dc:creator>
      <pubDate>Fri, 13 Nov 2020 11:03:37 +0000</pubDate>
      <link>https://dev.to/robincher/moving-to-pomerium-identity-aware-proxy-4fom</link>
      <guid>https://dev.to/robincher/moving-to-pomerium-identity-aware-proxy-4fom</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This is one of several learnings we encountered as part of the Engineering Suite in Government Digital Services, GovTech SG. Our focus in Engineering Suite is creating productivity tools that support our &lt;a href="https://www.tech.gov.sg/products-and-services/singapore-government-tech-stack/" rel="noopener noreferrer"&gt;SG Tech Stack&lt;/a&gt; initiative. In this article, I will share how an Identity-Aware Proxy (IAP) enhanced my team’s security posture while making our developers' lives happier.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Identity Aware Proxy (IAP) ?
&lt;/h2&gt;

&lt;p&gt;An IAP is a proxy that enables secure access to upstream services. It can attest to a user's identity or perform delegated authorisation for these services. &lt;/p&gt;

&lt;p&gt;It represents a paradigm shift from the traditional security model, where the focus was securing the network perimeter through IPs and ports. With the proxy approach, not only are users verified, but application requests can be terminated, examined, and authorised as well. IAP relies on application-level access controls, whereby configured policies assess user or application intent, not just ports and IPs.&lt;/p&gt;

&lt;p&gt;Additionally, it supports the mantra of &lt;strong&gt;"Always verify, never trust"&lt;/strong&gt; of a Zero-trust security model. You can read more about &lt;a href="https://medium.com/google-cloud/what-is-beyondcorp-what-is-identity-aware-proxy-de525d9b3f90" rel="noopener noreferrer"&gt;IAP&lt;/a&gt; as described by Google.&lt;/p&gt;

&lt;p&gt;Image credit from &lt;a href="https://cloud.google.com/iap/docs/concepts-overview" rel="noopener noreferrer"&gt;Google IAP Overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcmhdfl83hf7j2mv107mx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcmhdfl83hf7j2mv107mx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Scenario
&lt;/h2&gt;

&lt;p&gt;Due to the Covid pandemic, most organizations have shifted to a remote working environment. It is by no means an easy feat to get everyone connected securely from home, and the risk profile these days has changed. The last thing we want is to implement extra layers of security restrictions and constraints, resulting in a terrible developer experience for marginal security gain. We wanted a solution that enables our users to access our workloads from untrusted networks without the use of yet another VPN. Traditionally, accessing the development toolchain required a separate VPN service, as the applications are hosted by a different team and it was impossible to leverage their VPN service to front the team’s DEV environment.&lt;/p&gt;

&lt;p&gt;We looked at a few IAP solutions and eventually decided to proceed with &lt;a href="https://www.pomerium.com/" rel="noopener noreferrer"&gt;Pomerium&lt;/a&gt;, mainly because it is open source and our team has prior experience working with it. It is also very easy to deploy a full setup in a Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decomposing Pomerium
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pomerium Proxy
&lt;/h3&gt;

&lt;p&gt;It intercepts and directs requests to the Authenticate service so as to establish an identity with the IdP. Additionally, it processes policies to determine internal/external route mappings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pomerium Authenticate (AuthN)
&lt;/h3&gt;

&lt;p&gt;It handles the authentication flow with the IdP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pomerium Authorize (AuthZ)
&lt;/h3&gt;

&lt;p&gt;Processes policies to determine permissions for each service and handles authorization checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pomerium Cache
&lt;/h3&gt;

&lt;p&gt;Stores session and identity data in persistent storage, including IdP access and refresh tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;The components required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster with ingress &lt;/li&gt;
&lt;li&gt;Azure Active Directory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, I will be deploying Pomerium to a single node in an AWS EKS cluster with NGINX ingress, with the perimeter fronted by an Application Load Balancer. Azure Active Directory will be configured as the identity provider. &lt;/p&gt;

&lt;h3&gt;
  
  
  System Context
&lt;/h3&gt;

&lt;p&gt;An overview of how Pomerium is set up: we need to secure both an internal service that resides within the same cluster and an external service that is hosted outside.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc6goyw64ai9cehexrbjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc6goyw64ai9cehexrbjo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, I will list some improvements that can be made in a later section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pomerium Configurations
&lt;/h3&gt;

&lt;p&gt;Here is the Pomerium configuration, explained in detail.&lt;/p&gt;

&lt;h4&gt;
  
  
  Main Configurations
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Main configuration flags : https://www.pomerium.io/docs/reference/reference/
insecure_server: true
grpc_insecure: true
address: ":80"
grpc_address: ":80"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leave this as the default. This configures how Pomerium's services discover each other. For this experiment, we can allow insecure traffic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Workloads URL
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Workload URL
authenticate_service_url: https://authenticate.yourdomain.com

authorize_service_url: http://pomerium-authorize-service.default.svc.cluster.local

cache_service_url: http://pomerium-cache-service.default.svc.cluster.local

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This defines the routes for the required services. Pay special attention to authenticate_service_url, a publicly discoverable URL that the web browser will be redirected to, while the other two are internal Kubernetes hostnames.&lt;/p&gt;

&lt;h4&gt;
  
  
  IDP Details
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;idp_provider: azure
idp_client_id: xxxxx
idp_client_secret: "xxx"
idp_provider_url : xxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter the details that you retrieve from the identity provider.&lt;/p&gt;

&lt;p&gt;I &lt;strong&gt;strongly recommend&lt;/strong&gt; that you do not hard-code any secrets into the YAML files for &lt;strong&gt;production&lt;/strong&gt; use. Alternatively, you can create Kubernetes secrets and inject them as environment variables.&lt;/p&gt;

&lt;p&gt;For the list of supported identity providers and the steps to generate the above details, please visit &lt;a href="https://www.pomerium.com/docs/identity-providers/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
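&lt;p&gt;As a sketch of that alternative (the secret and key names here are hypothetical, not from this setup), the client secret can live in a Kubernetes Secret instead of the YAML file:&lt;/p&gt;

```yaml
# Hypothetical manifest - adjust names to your own setup.
apiVersion: v1
kind: Secret
metadata:
  name: idp-credentials
type: Opaque
stringData:
  idp-client-secret: "xxxxx" # value issued by your identity provider
```

&lt;p&gt;The Pomerium container can then pull it in via a secretKeyRef env entry; Pomerium reads most configuration keys from upper-cased environment variables such as IDP_CLIENT_SECRET.&lt;/p&gt;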

&lt;h3&gt;
  
  
  Policy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;policy:
  - from: https://httpbin.yourdomain.com
    to: http://httpbin.default.svc.cluster.local:8000
    allowed_domains:
      - outlook.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Policy contains route-specific settings and access control details. In this example, Pomerium will intercept all requests to &lt;a href="https://httpbin.yourdomain.com" rel="noopener noreferrer"&gt;https://httpbin.yourdomain.com&lt;/a&gt; and then redirect them to the internal DNS hostname of the httpbin workload.&lt;/p&gt;
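&lt;p&gt;Policies can also be scoped more tightly than a whole email domain. As a sketch (the account below is hypothetical), individual users can be allow-listed instead:&lt;/p&gt;

```yaml
policy:
  - from: https://httpbin.yourdomain.com
    to: http://httpbin.default.svc.cluster.local:8000
    allowed_users:
      - alice@outlook.com # hypothetical account
```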

&lt;p&gt;Your final pomerium-config.yaml should look something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Main configuration flags : https://www.pomerium.io/docs/reference/reference/
insecure_server: true
grpc_insecure: true
address: ":80"
grpc_address: ":80"

## Workload URL
authenticate_service_url: https://authenticate.yourdomain.com
authorize_service_url: http://pomerium-authorize-service.default.svc.cluster.local
cache_service_url: http://pomerium-cache-service.default.svc.cluster.local

idp_provider: azure
idp_client_id: REPLACE_ME
idp_client_secret: "REPLACE_ME"
idp_provider_url: REPLACE_ME


policy:
  - from: https://httpbin.yourdomain.com
    to: http://httpbin.default.svc.cluster.local:8000
    allowed_domains:
      - outlook.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploying
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Pomerium configmap
&lt;/h3&gt;

&lt;p&gt;We will create a configmap based on the configuration above, and then it will be mounted by the Pomerium workloads.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pomerium-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
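
&lt;p&gt;For the apply above to work, pomerium-config.yaml should wrap the configuration in a ConfigMap manifest. A sketch (the metadata name is an assumption; match whatever name the Pomerium deployments mount):&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pomerium-config
data:
  config.yaml: |
    insecure_server: true
    grpc_insecure: true
    address: ":80"
    grpc_address: ":80"
    # ...remaining configuration from above...
```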



&lt;h4&gt;
  
  
  Create random secret
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic shared-secret --from-literal=shared-secret=$(head -c32 /dev/urandom | base64)
kubectl create secret generic cookie-secret --from-literal=cookie-secret=$(head -c32 /dev/urandom | base64)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
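
&lt;p&gt;Both values are 32 random bytes, base64-encoded. A quick local sanity check confirms the expected 44-character encoded length before creating the secrets:&lt;/p&gt;

```shell
# 32 raw bytes base64-encode to ceil(32/3)*4 = 44 characters.
key=$(head -c32 /dev/urandom | base64 | tr -d '\n')
printf '%s\n' "$key"
printf '%s\n' "${#key}" # prints 44
```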



&lt;h4&gt;
  
  
  Nginx controller
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ingress-nginx.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Mock Httpbin Services
&lt;/h4&gt;

&lt;p&gt;Deploy a test internal httpbin service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f httpbin-internal.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Deploy Pomerium Workloads
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pomerium-proxy.yml
kubectl apply -f pomerium-authenticate.yml
kubectl apply -f pomerium-authorize.yml
kubectl apply -f pomerium-cache.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Ensure Pomerium workloads are up and running&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8v3xpxeqt94fd04jrcjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8v3xpxeqt94fd04jrcjc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Accessing Httpbin &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From your web-browser, enter &lt;a href="https://httpbin.yourdomain.com" rel="noopener noreferrer"&gt;https://httpbin.yourdomain.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should be redirected to the IDP sign-in page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fz30socbh5lmvdjlfulhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fz30socbh5lmvdjlfulhj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what happened here? The Pomerium proxy intercepted your request based on the policy defined above, and then redirected you to the identity provider's authentication page.&lt;/p&gt;

&lt;p&gt;Enter your credentials, and you will be able to access Httpbin's page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional use case
&lt;/h3&gt;

&lt;p&gt;Pomerium can also proxy services that reside outside of the EKS cluster by leveraging Kubernetes ExternalName services. By doing so, services outside the cluster get a service FQDN within EKS, which Pomerium can then route to.&lt;/p&gt;
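
&lt;p&gt;As a sketch (the service name and external hostname are hypothetical), an ExternalName service gives an outside workload a cluster-local FQDN that a Pomerium policy can target:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-app # hypothetical
  namespace: default
spec:
  type: ExternalName
  externalName: legacy-app.internal.yourdomain.com # hypothetical external host
```

&lt;p&gt;A policy with to: http://legacy-app.default.svc.cluster.local would then resolve to the external host.&lt;/p&gt;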

&lt;h2&gt;
  
  
  Impact
&lt;/h2&gt;

&lt;p&gt;Being able to protect our development and staging environments without another set of VPNs greatly enhances the productivity of our engineering team. With Azure AD integration, we can carry out device posture checks, via conditional access policies, on the developer machine being used to access our protected environment. This improves the team’s security posture, as a device within a VPN boundary is no longer a guarantee of safety in today’s cybersecurity context.&lt;/p&gt;

&lt;p&gt;Operationally, running the proxy on Kubernetes facilitates capacity scaling to meet any sudden surge in demand. A properly instrumented cluster can also help the DevOps team closely monitor all incoming network traffic. &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/robincher/pomerium-kubernetes-recipe/commits/main" rel="noopener noreferrer"&gt;Pomerium Workload Recipe @ Github&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pomerium.io/" rel="noopener noreferrer"&gt;Pomerium Official Website&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>pomerium</category>
      <category>iap</category>
      <category>security</category>
    </item>
  </channel>
</rss>
