<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonas Bergström</title>
    <description>The latest articles on DEV Community by Jonas Bergström (@luckyswede).</description>
    <link>https://dev.to/luckyswede</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F813287%2F3a788340-be85-4f7a-9d3d-574bb9af01f6.jpeg</url>
      <title>DEV Community: Jonas Bergström</title>
      <link>https://dev.to/luckyswede</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luckyswede"/>
    <language>en</language>
    <item>
      <title>Building docker images in docker, and dynamic sequential Jenkins stages</title>
      <dc:creator>Jonas Bergström</dc:creator>
      <pubDate>Thu, 17 Feb 2022 09:51:29 +0000</pubDate>
      <link>https://dev.to/goals/building-docker-images-in-docker-and-dynamic-sequential-jenkins-stages-2fni</link>
      <guid>https://dev.to/goals/building-docker-images-in-docker-and-dynamic-sequential-jenkins-stages-2fni</guid>
      <description>&lt;p&gt;A couple of things I thought would be super easy but turned out to require a few hours of research and trial-and-error...&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://goals.co/"&gt;GOALS&lt;/a&gt; we run &lt;a href="https://plugins.jenkins.io/kubernetes/"&gt;Jenkins in Kubernetes&lt;/a&gt; for various reasons.&lt;br&gt;
Some build jobs generate Docker images as artefacts, for example when we build new versions of a backend service.&lt;br&gt;
However, &lt;a href="https://levelup.gitconnected.com/kubernetes-is-deprecating-docker-in-2021-fa8317f9f070"&gt;Kubernetes is deprecating Docker&lt;/a&gt;, and we run all our nodes on &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/using-containerd"&gt;containerd&lt;/a&gt;.&lt;br&gt;
So how do we build Docker images then?&lt;/p&gt;

&lt;p&gt;Turns out that Google has a project for this, named &lt;a href="https://github.com/GoogleContainerTools/kaniko"&gt;Kaniko&lt;/a&gt;. And since we're running GKE and have &lt;a href="https://github.com/GoogleContainerTools/kaniko#pushing-to-gcr-using-workload-identity"&gt;Workload Identity&lt;/a&gt; properly configured, it should be super easy to build Docker images and push them to our GCP managed Docker repo, right?&lt;/p&gt;

&lt;p&gt;No. I get this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "europe-west1-docker.pkg.dev/XXX": creating push check transport for europe-west1-docker.pkg.dev failed: GET https://europe-west1-docker.pkg.dev/v2/token?YYY: UNAUTHORIZED: authentication failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;AWS credentials&lt;/em&gt; - wth?&lt;/p&gt;

&lt;p&gt;After some testing I found &lt;a href="https://github.com/GoogleContainerTools/kaniko/issues/1287#issuecomment-1036638533"&gt;this&lt;/a&gt;, which solved the issue :) - the AWS credentials warning is a red herring; the real problem is how kaniko authenticates against the registry. For an example Jenkinsfile, see below.&lt;/p&gt;
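
&lt;p&gt;For reference, here is the kaniko container template from the Jenkinsfile below: the debug image, kept alive with &lt;code&gt;sleep&lt;/code&gt;, running as root:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containerTemplate(
  name: "kaniko",
  image: "gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc-debug",
  command: "sleep",
  args: "infinity",
  runAsUser: "0",   // run the container as root
  ttyEnabled: true)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;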

&lt;p&gt;Another issue I had was that I wanted to generate dynamic build stages in Jenkins and &lt;em&gt;execute them sequentially&lt;/em&gt;. There are a lot of examples of how to execute dynamically generated stages in parallel, but that's not what I wanted.&lt;br&gt;
Turned out to be super simple in the end of course, but it ain't simple until you've learned it.&lt;/p&gt;

&lt;p&gt;Here's an example Jenkinsfile that demonstrates both Kaniko and dynamic sequential build steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def stageConfigs = [
  [name: "Sara", age: 20, color: "blue"],
  [name: "Mona", age: 10, color: "green"],
  [name: "Lotta", age: 8, color: "red"]
]

// return a closure so the stage can be executed later - sequentially or via parallel()
def generateBuildStage(stageConfig) {
  return {
    stage("Building ${stageConfig.name}") {
      container('build-container') {
        echo "Building ${stageConfig.name}"
        writeFile(
          file: "${stageConfig.name}.generated",
          text: "${stageConfig.name} is ${stageConfig.age} years old and loves ${stageConfig.color} things")
      }
    }
  }
}

// map each stage name to its stage closure; collectEntries keeps insertion order
def buildStages = stageConfigs.collectEntries {
    ["${it.name}" : generateBuildStage(it)]
}

podTemplate(
  inheritFrom: 'linux',
  containers: [
    containerTemplate(name: "build-container", image: "busybox", command: "sleep", args: "infinity"),
    containerTemplate(name: "kaniko", image: "gcr.io/kaniko-project/executor:343f78408c891ef7a85bab1ecbf2dd69367a58bc-debug", command: "sleep", args: "infinity", runAsUser: "0", ttyEnabled: true)])
  {
  node(POD_LABEL) {
    stage("Checkout") {
      checkout(scm)
    }

    stage("Build application") {
      // execute builds in parallel
      // parallel(buildStages)
      // execute builds sequentially
      for (stage in buildStages.values()) {
        stage.call()
      }
      container('build-container') {
        writeFile(
          file: "Dockerfile",
          text: '''
            FROM busybox
            COPY *.generated ./
          ''')
        sh "ls -al"
      }
    }

    stage('Build image') {
      container('kaniko') {
        sh "/kaniko/executor --context `pwd` --dockerfile `pwd`/Dockerfile --destination europe-west1-docker.pkg.dev/XXX"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
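
&lt;p&gt;The trick is that &lt;code&gt;buildStages&lt;/code&gt; maps stage names to closures: calling each closure in a loop runs the stages one after the other, while handing the same map to &lt;code&gt;parallel()&lt;/code&gt; runs them concurrently.&lt;/p&gt;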



</description>
      <category>jenkins</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>HOWTO: Connect to a private GKE cluster using a site-to-site VPN between a GCP VPC...</title>
      <dc:creator>Jonas Bergström</dc:creator>
      <pubDate>Sat, 12 Feb 2022 16:50:05 +0000</pubDate>
      <link>https://dev.to/goals/howto-connect-to-a-private-gke-cluster-using-a-site-to-site-vpn-between-a-gcp-vpc-2ak</link>
      <guid>https://dev.to/goals/howto-connect-to-a-private-gke-cluster-using-a-site-to-site-vpn-between-a-gcp-vpc-2ak</guid>
      <description>&lt;p&gt;&lt;strong&gt;... and a double NAT'ed Ubiquiti Dream Machine Pro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Wait, what? I just want to access a Kubernetes cluster in a secure way ... should be easy.&lt;/p&gt;

&lt;p&gt;[2022-03-10] Update from Google! I just got this message: &lt;em&gt;"We are writing to let you know that starting June 15, 2022, we will remove the restriction on the Internet Key Exchange (IKE) identity of peer Cloud VPN gateways."&lt;/em&gt; &lt;br&gt;
This means in practice that NAT'ed setups will become easier to manage, because GCP will no longer require the remote IP to match the remote ID.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://goals.co" rel="noopener noreferrer"&gt;GOALS&lt;/a&gt; we are cloud native, and we are serious about security. As a consequence our Kubernetes clusters are provisioned with &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="noopener noreferrer"&gt;private endpoints only&lt;/a&gt;, where all nodes have internal IP addresses. This is great, but the question immediately arrises - how do we operate such a cluster, when we cannot access it from outside the VPC?&lt;/p&gt;

&lt;p&gt;Here is a high level overview of what we have:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3gc8466jr10ktcza4i9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3gc8466jr10ktcza4i9.png" alt="Private GKE and office"&gt;&lt;/a&gt;&lt;br&gt;
At the top of the diagram we see the private Google managed Kubernetes (GKE) cluster. A Kubernetes cluster consists of a control plane and worker nodes. In the case of GKE, Google manages the control plane (api server, etcd nodes, etc), the underlying VMs the control plane runs on, and the underlying VMs that the worker nodes run on.&lt;br&gt;
We have set up our own VPC and a subnet where the worker nodes run, and Google creates a managed VPC where the control plane runs. Google automatically peers the control plane VPC with our VPC.&lt;/p&gt;
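
&lt;p&gt;You can inspect this peering from the CLI (a sketch - substitute your own VPC name and host project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud compute networks peerings list --network my-vpc --project my-host-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;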

&lt;p&gt;At the bottom of the diagram we see an overview of our office space network. Since we are a startup we rent office space and share the network with other tenants. We have connected a Ubiquiti Dream Machine Pro to the office space network and created our own GOALS network where we connect our workstations.&lt;/p&gt;

&lt;p&gt;Obviously, running eg &lt;code&gt;kubectl describe nodes&lt;/code&gt; from my workstation in our office network doesn't work since &lt;code&gt;kubectl&lt;/code&gt; needs access to the cluster's api server. So, how can we connect our office network to our VPC in a secure way, and enable management of the GKE cluster using &lt;code&gt;kubectl&lt;/code&gt;?&lt;/p&gt;

&lt;h2&gt;
  
  
  A note on our infrastructure
&lt;/h2&gt;

&lt;p&gt;We use &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; to provision all GCP resources. Google provides opinionated Terraform modules to manage GCP resources &lt;a href="https://github.com/terraform-google-modules" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
Our infra leverages a &lt;a href="https://cloud.google.com/vpc/docs/shared-vpc" rel="noopener noreferrer"&gt;shared VPC&lt;/a&gt; and we use the &lt;a href="https://github.com/terraform-google-modules/terraform-google-project-factory" rel="noopener noreferrer"&gt;project factory&lt;/a&gt; module to create the host project and the service projects.&lt;br&gt;
The VPN will be provisioned in the host project that owns the VPC.&lt;br&gt;
Our GKE clusters are created with the &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster" rel="noopener noreferrer"&gt;private cluster&lt;/a&gt; terraform module.&lt;/p&gt;
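
&lt;p&gt;For orientation, a private-endpoint cluster built with that module looks roughly like this (a minimal sketch - the attribute names follow the module's documented inputs, all values are placeholders, and our real configuration is elided later in this post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "kubernetes_cluster" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"
  version = "18.0.0"

  project_id        = var.project_id
  name              = "example-private-gke"
  region            = var.region
  network           = var.vpc_name
  subnetwork        = var.subnet_name
  ip_range_pods     = "pods-range"
  ip_range_services = "services-range"

  # private endpoint only: the control plane is reachable on an internal IP
  enable_private_nodes    = true
  enable_private_endpoint = true
  master_ipv4_cidr_block  = "172.16.0.16/28"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;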

&lt;h2&gt;
  
  
  Prerequisites and preparations
&lt;/h2&gt;

&lt;p&gt;There are a few things we need to ensure, and some information we must gather, before we can start.&lt;br&gt;
First, we need admin access to the Ubiquiti Dream Machine (UDM) and a Google user with the Network Management Admin role.&lt;br&gt;
Then gather the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The office space external IP address, eg &lt;strong&gt;123.45.67.89&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The Goals network subnet range, eg &lt;strong&gt;192.168.1.0/24&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ensure that the Kubernetes cluster is prepared
&lt;/h3&gt;

&lt;p&gt;Two configuration entries need to be set correctly when setting up GKE to enable VPN access to the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;master_authorized_networks&lt;/code&gt; needs to include the office network subnet range, ie &lt;strong&gt;192.168.1.0/24&lt;/strong&gt; in our case (a sketch follows the Terraform block below).&lt;/li&gt;
&lt;li&gt;VPC peering must be configured to &lt;strong&gt;export custom routes&lt;/strong&gt; - in this particular case it means that the custom network route the VPN creates (enabling communication from the VPC to the office network) also becomes available in the Google managed VPC that hosts the GKE control plane. This is enabled by adding the following Terraform configuration to the GKE setup:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;module "kubernetes_cluster" {&lt;br&gt;
  source = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"&lt;br&gt;
  version = "18.0.0"&lt;br&gt;
  ...&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "google_compute_network_peering_routes_config" "peering_gke_routes" {&lt;br&gt;
  peering = module.kubernetes_cluster.peering_name&lt;br&gt;
  network = var.vpc_id&lt;br&gt;
  import_custom_routes = false&lt;br&gt;
  export_custom_routes = true&lt;br&gt;
}&lt;/p&gt;
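
&lt;p&gt;For the first bullet, the authorized-networks entry might look like this in the same module (a sketch - the &lt;code&gt;display_name&lt;/code&gt; is an arbitrary label):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "kubernetes_cluster" {
  ...
  master_authorized_networks = [
    {
      cidr_block   = "192.168.1.0/24"  # the office subnet range
      display_name = "goals-office"    # hypothetical label
    },
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;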



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
### Shared VPN secret
Generate a shared secret that will be used for authentication of the VPN peers and put it into a new secret in GCP Secret Manager, name it **office-vpn-shared-secret**.
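
&lt;p&gt;One way to generate and store it in a single step (a sketch using &lt;code&gt;openssl&lt;/code&gt; and &lt;code&gt;gcloud&lt;/code&gt; - adjust the project to your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ openssl rand -base64 32 | gcloud secrets create office-vpn-shared-secret \
    --project my-host-project --data-file=-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;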

&lt;h3&gt;
  
  
  Create a Cloud VPN Gateway
&lt;/h3&gt;

&lt;p&gt;For this part we will use the &lt;a href="https://github.com/terraform-google-modules/terraform-google-vpn" rel="noopener noreferrer"&gt;VPN&lt;/a&gt; module, setting it up in the host project like this:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  name = "office-vpn"
}

resource "google_compute_address" "vpn_external_ip_address" {
  project      = var.project_id
  name         = local.name
  network_tier = "PREMIUM"
  region       = var.region
  address_type = "EXTERNAL"
}

data "google_secret_manager_secret_version" "office_vpn_shared_secret" {
  project = var.project_id
  secret  = "office-vpn-shared-secret"
}

module "office_site_to_site" {
  source  = "terraform-google-modules/vpn/google"
  version = "2.2.0"

  project_id         = var.project_id
  network            = var.vpc_id
  region             = var.region
  gateway_name       = local.name
  tunnel_name_prefix = local.name
  shared_secret      = data.google_secret_manager_secret_version.office_vpn_shared_secret.secret_data
  ike_version        = 2
  peer_ips           = [ var.office_public_ip ]
  remote_subnet      = var.office_subnet_ranges
  vpn_gw_ip          = resource.google_compute_address.vpn_external_ip_address.address
}

resource "google_compute_firewall" "allow_office_traffic" {
  project     = var.project_id
  name        = "${local.name}-allow-office-traffic"
  network     = var.vpc_id
  description = "Allow traffic from the office network"
  allow { protocol = "icmp" }
  allow {
    protocol = "udp"
    ports    = [ "0-65535" ]
  }
  allow {
    protocol = "tcp"
    ports    = [ "0-65535" ]
  }
  source_ranges = var.office_subnet_ranges
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`google_compute_address.vpn_external_ip_address` creates the external static IP address which becomes the VPN endpoint on the GCP end.
`google_secret_manager_secret_version.office_vpn_shared_secret` fetches the shared secret used to authenticate the VPN peers.
The `office_site_to_site` module creates a "classic" Cloud VPN. The UDM does not support BGP yet which means that we cannot create a "HA Cloud VPN" variant.
`google_compute_firewall.allow_office_traffic` allows traffic originating from the office subnet (`192.168.1.0/24`) to enter our VPC.

After applying the configuration the VPN tunnel will be in an error state because it cannot connect to it's peer. This is expected since we have not set up the UDM side yet.
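
&lt;p&gt;You can check the tunnel state from the CLI as well (a sketch - tunnel names are generated by the module and the project is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud compute vpn-tunnels list --project my-host-project \
    --format="table(name, status, peerIp)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;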

&lt;h3&gt;
  
  
  Create a VPN network on the UDM Pro
&lt;/h3&gt;

&lt;p&gt;The UDM Pro does not support configuration as code as far as I know, so here we need to resort to the management GUI.&lt;/p&gt;

&lt;p&gt;Go to Settings-&amp;gt;Networks-&amp;gt;Add New Network, choose a name and select VPN-&amp;gt;Advanced-&amp;gt;Site-to-site-&amp;gt;Manual IPSec.&lt;br&gt;
&lt;strong&gt;Pre-shared Secret Key&lt;/strong&gt; is the &lt;code&gt;office-vpn-shared-secret&lt;/code&gt; from above.&lt;br&gt;
&lt;strong&gt;Public IP Address (WAN)&lt;/strong&gt; is the IP address the UDM has on the office space network, ie it is &lt;strong&gt;not&lt;/strong&gt; the public IP of our office space provider. For example &lt;code&gt;192.168.10.150&lt;/code&gt;.&lt;br&gt;
In the &lt;strong&gt;Remote Gateway/Subnets&lt;/strong&gt; section, add the subnet ranges in your VPC that you want to access from the office, eg &lt;code&gt;10.0.0.0/8&lt;/code&gt; and &lt;code&gt;172.16.0.0/16&lt;/code&gt;.&lt;br&gt;
The &lt;strong&gt;Remote IP Address&lt;/strong&gt; is the public static IP that was created for the VPN endpoint in GCP, eg &lt;code&gt;123.45.67.99&lt;/code&gt;.&lt;br&gt;
Expand the Advanced section and choose &lt;code&gt;IKEv2&lt;/code&gt;. Leave PFS and Dynamic routing enabled.&lt;br&gt;
Save the new network.&lt;/p&gt;

&lt;p&gt;Unfortunately we are not done yet: with the current configuration the UDM will identify itself using the WAN IP we configured, which doesn't match the IP it connects to GCP with.&lt;br&gt;
To fix this last piece of configuration we need to &lt;code&gt;ssh&lt;/code&gt; into the UDM Pro and update the IPSec configuration:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd /run/strongswan/ipsec.d/tunnels
$ vi .config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Add the following line just below the `left=192.168.10.150` line:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;leftid=123.45.67.99
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This makes the UDM identify itself using it's actual public IP when it connects to the VPN on the GCP end.
Finally, refresh the IPSec configuration:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ipsec update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
## Thats it!
Verify that the connection on the UDM is up and running by invoking:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ swanctl --list-sas
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The output should list information about the tunnel, and the tunnel should be in {% raw %}`ESTABLISHED` state.
Now the VPN tunnel state in GCP should move into a green state as well.
And, finally, accessing the Kubernetes cluster from a workstation in the Goals office network is possible:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE   VERSION
gke-XXXXX-controller-pool-28b7a87b-9ff2   Ready    &amp;lt;none&amp;gt;   17d   v1.21.6-gke.1500
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;We now have a setup looking like below:

![Private GKE and office with VPN](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlornjjrpq8ya2eldfkx.png)

## Troubleshooting
While being connected to the UDM, run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ tcpdump -nnvi vti64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;to see all traffic routed via the VPN tunnel.

Sometimes routes are cached on the workstations, eg on Mac you can run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo route -n flush
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a couple of times and disable/enable Wifi to make sure that your routing configuration is up to date.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
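
&lt;p&gt;To check what the workstation's routing table says for the VPC ranges afterwards (a macOS sketch - your ranges may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ netstat -rn | grep "10\."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;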

</description>
      <category>gcp</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
