<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mikhail Khomenko</title>
    <description>The latest articles on DEV Community by Mikhail Khomenko (@myardyas).</description>
    <link>https://dev.to/myardyas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F427292%2F26c4aa90-c418-4a62-958f-374b1e4bff68.png</url>
      <title>DEV Community: Mikhail Khomenko</title>
      <link>https://dev.to/myardyas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/myardyas"/>
    <language>en</language>
    <item>
      <title>InterSystems Kubernetes Operator Deep Dive: Part 2</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Sat, 13 Mar 2021 11:00:35 +0000</pubDate>
      <link>https://dev.to/intersystems/intersystems-kubernetes-operator-deep-dive-part-2-d7m</link>
      <guid>https://dev.to/intersystems/intersystems-kubernetes-operator-deep-dive-part-2-d7m</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/intersystems/intersystems-kubernetes-operator-deep-dive-introduction-to-kubernetes-operators-54ol"&gt;previous article&lt;/a&gt;, we looked at one way to create a custom operator that manages the IRIS instance state. This time, we’re going to take a look at a ready-to-go operator, InterSystems Kubernetes Operator (IKO). &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AIKO"&gt;Official documentation&lt;/a&gt; will help us navigate the deployment steps.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;To deploy IRIS, we need a Kubernetes cluster. In this example, we’ll use Google Kubernetes Engine (&lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE&lt;/a&gt;), so we’ll need to use a Google account, set up a Google Cloud project, and install &lt;a href="https://cloud.google.com/sdk/docs/quickstart"&gt;gcloud&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt; command line utilities.&lt;/p&gt;

&lt;p&gt;You’ll also need to install the &lt;a href="https://helm.sh/docs/intro/install/"&gt;Helm3&lt;/a&gt; utility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm version
version.BuildInfo{Version:"v3.3.4"...}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Be aware that on the &lt;a href="https://cloud.google.com/free"&gt;Google Cloud free tier&lt;/a&gt;, not all resources are free.&lt;/p&gt;

&lt;p&gt;It doesn’t matter in our case which type of GKE cluster we use – &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster"&gt;zonal&lt;/a&gt;, &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster"&gt;regional&lt;/a&gt;, or &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters"&gt;private&lt;/a&gt;. After creating one, let’s connect to it. We’ve created a cluster called "iko" in a project called "iko-project"; substitute your own project name for "iko-project" in the commands that follow.&lt;/p&gt;
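&lt;p&gt;If you don’t have a cluster yet, a zonal one can be created with a single command; the zone, node count, and machine type below are example values, not requirements:&lt;/p&gt;

```shell
# Example only: create a small zonal GKE cluster named "iko" in project "iko-project"
# (zone and node sizing here are assumptions - adjust them to your needs)
gcloud container clusters create iko \
  --project iko-project \
  --zone europe-west2-b \
  --num-nodes 2 \
  --machine-type e2-standard-4
```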

&lt;p&gt;This command adds this cluster to our local clusters configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials iko --zone europe-west2-b --project iko-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Install IKO
&lt;/h4&gt;

&lt;p&gt;Let’s deploy IKO into our newly created cluster. The recommended way to install packages into Kubernetes is with Helm, and IKO is no exception: it can be installed as a Helm chart. Choose &lt;a href="https://helm.sh/docs/topics/v2_v3_migration/"&gt;Helm version 3&lt;/a&gt;, as it’s more secure.&lt;/p&gt;

&lt;p&gt;Download IKO from the WRC page &lt;a href="https://wrc.intersystems.com/wrc/coDistGen.csp"&gt;InterSystems Components&lt;/a&gt;, creating a free developer account if you do not already have one. At the time of writing, the latest version is &lt;em&gt;2.0.0.223.0&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Download the archive and unpack it. We will refer to the unpacked directory as the current directory.&lt;/p&gt;

&lt;p&gt;The chart is in the &lt;em&gt;chart/iris-operator&lt;/em&gt; directory. If you just deploy this chart, you will receive an error when describing deployed pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Failed to pull image "intersystems/iris-operator:2.0.0.223.0": rpc error:
code = Unknown desc = Error response from daemon:
pull access denied for intersystems/iris-operator,
repository does not exist or may require 'docker login'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, you need to make the IKO image available to the Kubernetes cluster. Let’s push this image into Google Container Registry first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker load -i image/iris_operator-2.0.0.223.0-docker.tgz
$ docker tag intersystems/iris-operator:2.0.0.223.0 eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
$ docker push eu.gcr.io/iko-project/iris-operator:2.0.0.223.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we need to point the deployment at this new image by editing the Helm values file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi chart/iris-operator/values.yaml
...
operator:
  registry: eu.gcr.io/iko-project
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we’re ready to deploy IKO into GKE:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm upgrade iko chart/iris-operator --install --namespace iko --create-namespace

$ helm ls --all-namespaces --output json | jq '.[].status'
"deployed"

$ kubectl -n iko get pods # Should be Running with Readiness 1/1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s look at the IKO logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
…
I1212 17:10:38.119363 1 secure_serving.go:116] Serving securely on [::]:8443
I1212 17:10:38.122306 1 operator.go:77] Starting Iris operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/"&gt;Custom Resource Definition&lt;/a&gt; &lt;em&gt;irisclusters.intersystems.com&lt;/em&gt; was created during IKO deployment.&lt;br&gt;
You can look at the API schema it supports, although it is quite long:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd irisclusters.intersystems.com -oyaml | less
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One way to look at all available parameters is to use the "explain" command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl explain irisclusters.intersystems.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another way is to use &lt;a href="https://stedolan.github.io/jq/"&gt;jq&lt;/a&gt;. For instance, to view all top-level configuration settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd irisclusters.intersystems.com -ojson | jq '.spec.versions[].schema.openAPIV3Schema.properties.spec.properties | to_entries[] | .key'
"configSource"
"licenseKeySecret"
"passwordHash"
"serviceTemplate"
"topology"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using jq in this way to walk the configuration fields and their properties, we can derive the following configuration structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;configSource
  name
licenseKeySecret
  name
passwordHash
serviceTemplate
  metadata
    annotations
  spec
    clusterIP
    externalIPs
    externalTrafficPolicy
    healthCheckNodePort
    loadBalancerIP
    loadBalancerSourceRanges
    ports
    type
topology
  arbiter
    image
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    updateStrategy
      rollingUpdate
      type    
  compute
    image
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
          limits
          requests
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    replicas
    storage
      accessModes
      dataSource
        apiGroup
        kind
        name
      resources
        limits
        requests
      selector
      storageClassName
      volumeMode
      volumeName
    updateStrategy
      rollingUpdate
      type
  data
    image
    mirrored
    podTemplate
      controller
        annotations
      metadata
        annotations
      spec
        affinity
          nodeAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
          podAntiAffinity
            preferredDuringSchedulingIgnoredDuringExecution
            requiredDuringSchedulingIgnoredDuringExecution
        args
        env
        imagePullSecrets
        initContainers
        lifecycle
        livenessProbe
        nodeSelector
        priority
        priorityClassName
        readinessProbe
        resources
          limits
          requests
        schedulerName
        securityContext
        serviceAccountName
        tolerations
    preferredZones
    shards
    storage
      accessModes
      dataSource
        apiGroup
        kind
        name
      resources
        limits
        requests
      selector
      storageClassName
      volumeMode
      volumeName
    updateStrategy
      rollingUpdate
      type
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many settings, but you don’t need to set them all; the defaults are suitable. You can find configuration examples in the &lt;em&gt;iris_operator-2.0.0.223.0/samples&lt;/em&gt; directory.&lt;/p&gt;

&lt;p&gt;To run a minimally viable IRIS, we need to specify only a few settings: the IRIS (or IRIS-based application) image version, the storage size, and the license key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; about the license key: we’ll use a Community Edition of IRIS, so we don’t need a key. We cannot simply omit this setting, but we can create a secret containing an empty pseudo-license. Generating the license secret is simple:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
 $ touch iris.key # in most cases, a real license file is used here
 $ kubectl create secret generic &lt;b&gt;iris-license&lt;/b&gt; --from-file=iris.key

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A minimal IRIS definition that IKO understands looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat iko.yaml
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: iko-test
spec:
  passwordHash: '' # use the default password, SYS
  licenseKeySecret:
    name: iris-license # the Secret name bolded above
  topology:
    data:
      image: intersystemsdc/iris-community:2020.4.0.524.0-zpm # Take a community IRIS
      storage:
        resources:
          requests:
            storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this manifest to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f iko.yaml

$ kubectl get iriscluster
NAME     DATA COMPUTE MIRRORED STATUS   AGE
iko-test 1                     Creating 76s

$ kubectl -n iko logs -f --tail 100 -l app=iris-operator
db.Spec.Topology.Data.Shards = 0
I1219 15:55:57.989032 1 iriscluster.go:39] Sync/Add/Update for IrisCluster default/iko-test
I1219 15:55:58.016618 1 service.go:19] Creating Service default/iris-svc.
I1219 15:55:58.051228 1 service.go:19] Creating Service default/iko-test.
I1219 15:55:58.216363 1 statefulset.go:22] Creating StatefulSet default/iko-test-data.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that some resources (a Service and a StatefulSet) are being created in the "default" namespace.&lt;/p&gt;

&lt;p&gt;In a few seconds, you should see an IRIS pod in the "default" namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get po -w
NAME            READY STATUS             RESTARTS AGE
iko-test-data-0 0/1   ContainerCreating  0        2m10s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a little until the IRIS image is pulled, that is, until STATUS becomes Running and READY becomes 1/1. You can check what type of disk was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE
pvc-b356a943-219e-4685-9140-d911dea4c106   10Gi       RWO            Delete           Bound    default/iris-data-iko-test-data-0   standard                5m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reclaim policy "&lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete"&gt;Delete&lt;/a&gt;" means that when you remove Persistent Volume, GCE persistent disk will be also removed. There is another policy, "&lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain"&gt;Retain&lt;/a&gt;", that allows you to save Google persistent disks to survive Kubernetes Persistent Volumes deletion. You can define a custom &lt;a href="https://kubernetes.io/docs/concepts/storage/storage-classes/"&gt;StorageClass&lt;/a&gt; to use this policy and other non-default settings. An example is present in IKO’s documentation: &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AIKO#AIKO_storageclass"&gt;Create a storage class for persistent storage&lt;/a&gt;.&lt;/p&gt;
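&lt;p&gt;As a rough sketch (the name and disk type here are assumptions, not taken from IKO’s documentation), such a StorageClass for GKE could look like this:&lt;/p&gt;

```yaml
# Hypothetical StorageClass that keeps the underlying GCE disk
# when the PersistentVolume is deleted
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iris-ssd-retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
allowVolumeExpansion: true
```

&lt;p&gt;To use it, you would reference this class name in the storage settings of your IrisCluster definition.&lt;/p&gt;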

&lt;p&gt;Now, let’s check our newly created IRIS. In general, traffic to pods goes through Services or Ingresses. By default, IKO creates a service of ClusterIP type with a name from the iko.yaml &lt;em&gt;metadata.name&lt;/em&gt; field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get svc iko-test
NAME     TYPE      CLUSTER-IP EXTERNAL-IP PORT(S)             AGE
iko-test ClusterIP 10.40.6.33 &amp;lt;none&amp;gt;      1972/TCP,52773/TCP  14m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can reach this service using port-forward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward svc/iko-test 52773
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate a browser to &lt;a href="http://localhost:52773/csp/sys/UtilHome.csp"&gt;http://localhost:52773/csp/sys/UtilHome.csp&lt;/a&gt; and log in as the user _system with the password SYS.&lt;/p&gt;

&lt;p&gt;You should see a familiar IRIS user interface (UI).&lt;/p&gt;
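&lt;p&gt;With the port-forward still running, you can also check from another terminal that IRIS responds over HTTP (this prints only the status code):&lt;/p&gt;

```shell
# Print the HTTP status code of the Management Portal page
# over the forwarded port (requires the port-forward to be active)
curl -s -o /dev/null -w '%{http_code}\n' \
  http://localhost:52773/csp/sys/UtilHome.csp
```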

&lt;h4&gt;
  
  
  Custom Application
&lt;/h4&gt;

&lt;p&gt;Let’s replace a pure IRIS with an IRIS-based application. First, download the &lt;a href="https://github.com/intersystems-community/covid-19"&gt;COVID-19 application&lt;/a&gt;. We won’t set up a complete continuous deployment here, just the minimal steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone https://github.com/intersystems-community/covid-19.git
$ cd covid-19
$ docker build --no-cache -t covid-19:v1 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As our Kubernetes cluster is running in Google Cloud, let’s use Google Container Registry as image storage. We assume here that you have a Google Cloud account allowing you to push images. Use your own project name in the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker tag covid-19:v1 eu.gcr.io/iko-project/covid-19:v1
$ docker push eu.gcr.io/iko-project/covid-19:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s go to the directory with iko.yaml, change the image there, and redeploy. Remove the previous deployment first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat iko.yaml
...  
  data:
    image: eu.gcr.io/iko-project/covid-19:v1
...
$ kubectl delete -f iko.yaml
$ kubectl -n iko delete deploy -l app=iris-operator
$ kubectl delete pvc iris-data-iko-test-data-0
$ kubectl apply -f iko.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This recreates the IRIS pod with the new image.&lt;/p&gt;

&lt;p&gt;This time, let’s provide external access via an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Ingress Resource&lt;/a&gt;. To make it work, we should deploy an &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"&gt;Ingress Controller&lt;/a&gt; (we choose &lt;a href="https://kubernetes.github.io/ingress-nginx/"&gt;nginx&lt;/a&gt; for its flexibility). To provide traffic encryption (TLS), we will also add one more component – &lt;a href="https://cert-manager.io/docs/installation/kubernetes/"&gt;cert-manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To install both of these components, we use &lt;a href="https://helm.sh/docs/faq/"&gt;Helm&lt;/a&gt;, version 3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

$ helm upgrade nginx-ingress  \
  --namespace nginx-ingress   \
  ingress-nginx/ingress-nginx \
  --install                   \
  --atomic                    \
  --version 3.7.0             \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look at an nginx service IP (it’s dynamic, but you can &lt;a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#promote_ephemeral_ip"&gt;make it static&lt;/a&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
 $ kubectl -n nginx-ingress get svc
 NAME                                   TYPE         CLUSTER-IP  EXTERNAL-IP PORT(S)                    AGE
 nginx-ingress-ingress-nginx-controller LoadBalancer 10.40.0.103 &lt;b&gt;xx.xx.xx.xx&lt;/b&gt; 80:32032/TCP,443:32374/TCP 88s
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: your IP will differ.&lt;/p&gt;
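&lt;p&gt;For example, the ephemeral address can be promoted to a static one by reserving its current value (the address name here is made up, and the region must match your cluster’s):&lt;/p&gt;

```shell
# Example only: reserve the controller's current external IP as a static address
gcloud compute addresses create nginx-ingress-ip \
  --addresses xx.xx.xx.xx \
  --region europe-west2
```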

&lt;p&gt;Go to your domain registrar and create a domain name for this IP. For instance, create an A-record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;covid19.myardyas.club = xx.xx.xx.xx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It may take some time for this new record to propagate across DNS servers. The end result should look similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dig +short covid19.myardyas.club
xx.xx.xx.xx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having deployed the Ingress Controller, we now need to create the Ingress resource itself (use your own domain name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: iko-test
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    certmanager.k8s.io/cluster-issuer: lets-encrypt-production # Cert manager will be deployed below
spec:
  rules:
  - host: covid19.myardyas.club
    http:
      paths:
      - backend:
          serviceName: iko-test
          servicePort: 52773
        path: /
  tls:
  - hosts:
    - covid19.myardyas.club
    secretName: covid19.myardyas.club

$ kubectl apply -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a minute or so, IRIS should be available at &lt;a href="https://covid19.myardyas.club/csp/sys/UtilHome.csp"&gt;https://covid19.myardyas.club/csp/sys/UtilHome.csp&lt;/a&gt; (remember to use your domain name) and the COVID-19 application at &lt;a href="http://covid19.myardyas.club/dsw/index.html"&gt;http://covid19.myardyas.club/dsw/index.html&lt;/a&gt; (choose the IRISAPP namespace).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Above, we’ve exposed the HTTP IRIS port. If you need to expose the IRIS super-server TCP port (1972 or 51773) via nginx, you can read the instructions at &lt;a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md"&gt;Exposing TCP and UDP services&lt;/a&gt;.&lt;/p&gt;
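&lt;p&gt;In short, the nginx Ingress Controller reads TCP mappings from a ConfigMap referenced by its --tcp-services-configmap flag; a sketch of such a mapping for the super-server port might look like this (the ConfigMap name and namespace are assumptions depending on your chart settings - see the linked instructions for details):&lt;/p&gt;

```yaml
# Hypothetical ConfigMap exposing the IRIS super-server port 1972
# through the nginx Ingress Controller
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: nginx-ingress
data:
  "1972": "default/iko-test:1972"
```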

&lt;h4&gt;
  
  
  Add Traffic Encryption
&lt;/h4&gt;

&lt;p&gt;The last step is to add traffic encryption. Let’s deploy cert-manager for that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.10.0/deploy/manifests/00-crds.yaml

$ helm repo add jetstack https://charts.jetstack.io

$ helm upgrade cert-manager \
  --namespace cert-manager  \
  jetstack/cert-manager     \
  --install                 \
  --atomic                  \
  --version v0.10.0         \
  --create-namespace

$ cat lets-encrypt-production.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: lets-encrypt-production
spec:
  acme:
    # Set your email. Let’s Encrypt will send notifications about certificate expiration
    email: mvhoma@gmail.com 
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: lets-encrypt-production
    solvers:
    - http01:
        ingress:
          class: nginx

$ kubectl apply -f lets-encrypt-production.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a few minutes until cert-manager notices the IRIS application’s Ingress and contacts Let’s Encrypt for a certificate. You can observe the Order and Certificate resources in progress:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get order
NAME                             STATE AGE
covid19.myardyas.club-3970469834 valid 52s

$ kubectl get certificate
NAME                  READY SECRET                AGE
covid19.myardyas.club True  covid19.myardyas.club 73s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time, you can visit a more secure version of the site - &lt;a href="https://covid19.myardyas.club/dsw/index.html"&gt;https://covid19.myardyas.club/dsw/index.html&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MB8DLovP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxpteadkdxnmi15ga1rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MB8DLovP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxpteadkdxnmi15ga1rq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  About Native Google Ingress Controller and Managed Certificates
&lt;/h4&gt;

&lt;p&gt;Google supports its own ingress controller, &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress"&gt;GCE&lt;/a&gt;, which you can use in place of the nginx controller. However, it has some drawbacks, such as a &lt;a href="https://github.com/kubernetes/ingress-gce/issues/109"&gt;lack of rewrite rules support&lt;/a&gt;, at least at the time of writing.&lt;/p&gt;

&lt;p&gt;Also, you can use &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs"&gt;Google-managed certificates&lt;/a&gt; in place of cert-manager. This is handy, but the initial certificate retrieval and any update of Ingress resources (such as a new path) cause tangible downtime. Also, Google-managed certificates work only with GCE, not with nginx, as noted in &lt;a href="https://github.com/GoogleCloudPlatform/gke-managed-certs"&gt;Managed Certificates&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Next Steps
&lt;/h4&gt;

&lt;p&gt;We’ve deployed an IRIS-based application into the GKE cluster. To expose it to the Internet, we’ve added an Ingress Controller and a certificate manager. We’ve tried out a basic IrisCluster configuration to show that setting up IKO is simple. You can read about more settings in the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AIKO"&gt;Using the InterSystems Kubernetes Operator&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;A single data server is good, but the real fun begins when we add ECP, mirroring, and monitoring, which are also available with IKO. Stay tuned and read the upcoming article in our Kubernetes operator series to take a closer look at mirroring.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>intersystems</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>InterSystems Kubernetes Operator Deep Dive: Introduction to Kubernetes Operators</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Thu, 21 Jan 2021 09:05:52 +0000</pubDate>
      <link>https://dev.to/intersystems/intersystems-kubernetes-operator-deep-dive-introduction-to-kubernetes-operators-54ol</link>
      <guid>https://dev.to/intersystems/intersystems-kubernetes-operator-deep-dive-introduction-to-kubernetes-operators-54ol</guid>
      <description>&lt;h4&gt;
  
  
  Introduction
&lt;/h4&gt;

&lt;p&gt;Several resources tell us how to run IRIS in a Kubernetes cluster, such as &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-eks-using-github-actions"&gt;Deploying an InterSystems IRIS Solution on EKS using GitHub Actions&lt;/a&gt; and &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gke-using-github-actions"&gt;Deploying InterSystems IRIS solution on GKE Using GitHub Actions&lt;/a&gt;. These methods work but they require that you create Kubernetes manifests and Helm charts, which might be rather time-consuming.&lt;br&gt;
To simplify IRIS deployment, &lt;a href="https://www.intersystems.com/"&gt;InterSystems&lt;/a&gt; developed an amazing tool called InterSystems Kubernetes Operator (IKO). A number of official resources explain IKO usage in detail, such as &lt;a href="https://community.intersystems.com/post/new-video-intersystems-iris-kubernetes-operator"&gt;New Video: Intersystems IRIS Kubernetes Operator&lt;/a&gt; and &lt;a href="https://docs.intersystems.com/components/csp/docbook/DocBook.UI.Page.cls?KEY=PAGE_IKO"&gt;InterSystems Kubernetes Operator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes documentation&lt;/a&gt; says that operators replace a human operator who knows how to deal with complex systems in Kubernetes. They provide system settings in the form of &lt;em&gt;custom resources&lt;/em&gt;. An operator includes a custom controller that reads these settings and performs steps the settings define to correctly set up and maintain your application. The custom controller is a simple pod deployed in Kubernetes. So, generally speaking, all you need to do to make an operator work is deploy a controller pod and define its settings in custom resources.&lt;br&gt;
You can find high-level explanation of operators in &lt;a href="https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english"&gt;How to explain Kubernetes Operators in plain English&lt;/a&gt;. Also, a free &lt;a href="https://www.redhat.com/en/resources/oreilly-kubernetes-operators-automation-ebook"&gt;O’Reilly ebook&lt;/a&gt; is available for download.&lt;br&gt;
In this article, we’ll have a closer look at what operators are and what makes them tick. We’ll also write our own operator.&lt;/p&gt;
&lt;h4&gt;
  
  
  Prerequisites and Setup
&lt;/h4&gt;

&lt;p&gt;To follow along, you’ll need to install the following tools:&lt;br&gt;
&lt;a href="https://kind.sigs.k8s.io/"&gt;kind&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kind --version
kind version 0.9.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://golang.org/doc/install"&gt;golang&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go version
go version go1.13.3 linux/amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://book.kubebuilder.io/quick-start.html"&gt;kubebuilder&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubebuilder version
Version: version.Version{KubeBuilderVersion:"2.3.1"…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11"...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://sdk.operatorframework.io/docs/installation/"&gt;operator-sdk&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ operator-sdk version
operator-sdk version: "v1.2.0"…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Custom Resources
&lt;/h4&gt;

&lt;p&gt;API resources are an &lt;a href="https://kubernetes.io/docs/reference/using-api/api-concepts/"&gt;important concept&lt;/a&gt; in Kubernetes. These resources enable you to interact with Kubernetes via HTTP endpoints that can be grouped and versioned. The standard API can be extended with &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;custom resources&lt;/a&gt;, which require that you provide a Custom Resource Definition (CRD). Have a look at the &lt;a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/"&gt;Extend the Kubernetes API with CustomResourceDefinitions&lt;/a&gt; page for detailed info.&lt;br&gt;
Here is an example of a CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat crd.yaml 
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: irises.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: irises
    singular: iris
    kind: Iris
    shortNames:
    - ir
  validation:
    openAPIV3Schema:
      required: ["spec"]
      properties:
        spec:
          required: ["replicas"]
          properties:
            replicas:
              type: "integer"
              minimum: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we define the API GVK (Group/Version/Kind) resource as example.com/v1alpha1/Iris, with replicas as the only required field.&lt;br&gt;
Now let’s define a custom resource based on our CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat crd-object.yaml 
apiVersion: example.com/v1alpha1
kind: Iris
metadata:
  name: iris
spec:
  test: 42
  replicas: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our custom resource, we can define any fields in addition to replicas, which is required by the CRD.&lt;br&gt;
After we deploy these two files, our custom resource becomes visible to kubectl like any built-in resource.&lt;br&gt;
Let’s launch Kubernetes locally using &lt;a href="https://kind.sigs.k8s.io/"&gt;kind&lt;/a&gt;, and then run the following kubectl commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kind create cluster
$ kubectl apply -f crd.yaml
$ kubectl get crd irises.example.com
NAME                 CREATED AT
irises.example.com   2020-11-14T11:48:56Z

$ kubectl apply -f crd-object.yaml
$ kubectl get iris
NAME   AGE
iris   84s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although we’ve set a replica count for our IRIS, nothing actually happens yet. That’s expected: we still need to deploy a controller, the entity that reads our custom resource and performs actions based on its settings.&lt;br&gt;
For now, let’s clean up what we’ve created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl delete -f crd-object.yaml
$ kubectl delete -f crd.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Controller
&lt;/h4&gt;

&lt;p&gt;A controller can be written in any language. We’ll use &lt;a href="https://golang.org/"&gt;Golang&lt;/a&gt; as Kubernetes’ "native" language. We could write the controller’s logic from scratch, but the good folks from Google and Red Hat have given us a leg up: they created two projects that generate operator code requiring only minimal changes – &lt;a href="https://book.kubebuilder.io/introduction.html"&gt;kubebuilder&lt;/a&gt; and &lt;a href="https://sdk.operatorframework.io/docs/overview/"&gt;operator-sdk&lt;/a&gt;. These two are compared on the &lt;a href="https://tiewei.github.io/posts/kubebuilder-vs-operator-sdk"&gt;kubebuilder vs operator-sdk&lt;/a&gt; page, as well as here: &lt;a href="https://github.com/operator-framework/operator-sdk/issues/1758"&gt;What is the difference between kubebuilder and operator-sdk #1758&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Kubebuilder
&lt;/h4&gt;

&lt;p&gt;It is convenient to start our acquaintance with Kubebuilder at the &lt;a href="https://book.kubebuilder.io/introduction.html"&gt;Kubebuilder book&lt;/a&gt; page. The &lt;a href="https://www.youtube.com/watch?v=KBTXBUVNF2I"&gt;Tutorial: Zero to Operator in 90 minutes&lt;/a&gt; video from the Kubebuilder maintainer might help as well.&lt;/p&gt;

&lt;p&gt;Sample implementations of the Kubebuilder project can be found in the &lt;a href="https://github.com/govargo/sample-controller-kubebuilder"&gt;sample-controller-kubebuilder&lt;/a&gt; and in &lt;a href="https://github.com/jetstack/kubebuilder-sample-controller"&gt;kubebuilder-sample-controller&lt;/a&gt; repositories.&lt;/p&gt;

&lt;p&gt;Let’s scaffold a new operator project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir iris
$ cd iris
$ go mod init iris # Creates a new module, name it iris
$ kubebuilder init --domain myardyas.club # An arbitrary domain, used below as a suffix in the API group
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Scaffolding generates many files and manifests. The main.go file, for instance, is the code’s entry point. It imports the &lt;a href="https://github.com/kubernetes-sigs/controller-runtime"&gt;controller-runtime library&lt;/a&gt;, then instantiates and runs a manager that oversees the controller’s execution. We don’t need to change anything in these files.&lt;/p&gt;

&lt;p&gt;Let’s create the CRD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubebuilder create api --group test --version v1alpha1 --kind Iris
Create Resource [y/n]
y
Create Controller [y/n]
y
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, a lot of files are generated. These are described in detail on the &lt;a href="https://book.kubebuilder.io/cronjob-tutorial/new-api.html"&gt;Adding a new API&lt;/a&gt; page. For example, you can see that a file for kind Iris was added in api/v1alpha1/iris_types.go. In our first sample CRD, we defined the required replicas field. Let’s create an identical field here, this time in the &lt;em&gt;IrisSpec&lt;/em&gt; structure. We’ll also add the &lt;em&gt;DeploymentName&lt;/em&gt; field. The replica count should also be visible in the &lt;em&gt;Status&lt;/em&gt; section, so we need to make the following changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vim api/v1alpha1/iris_types.go
…
type IrisSpec struct {
        // +kubebuilder:validation:MaxLength=64
        DeploymentName string `json:"deploymentName"`
        // +kubebuilder:validation:Minimum=0
        Replicas *int32 `json:"replicas"`
}
…
type IrisStatus struct {
        ReadyReplicas int32 `json:"readyReplicas"`
}
…
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After editing the API, we’ll move to editing the controller boilerplate. All the logic should be defined in the Reconcile method (this example is mostly taken from &lt;a href="https://github.com/jetstack/kubebuilder-sample-controller/blob/master/controllers/mykind_controller.go"&gt;mykind_controller.go&lt;/a&gt;). We also add a couple of auxiliary methods and rewrite the SetupWithManager method.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ vim controllers/iris_controller.go
…
import (
...
// Leave the existing imports and add these packages
        apps "k8s.io/api/apps/v1"
        core "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/record"
)
// Add the Recorder field to enable Kubernetes events
type IrisReconciler struct {
        client.Client
        Log    logr.Logger
        Scheme *runtime.Scheme
        &lt;b&gt;Recorder record.EventRecorder&lt;/b&gt;
}
…
// +kubebuilder:rbac:groups=test.myardyas.club,resources=iris,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=test.myardyas.club,resources=iris/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;delete
// +kubebuilder:rbac:groups="",resources=events,verbs=create;patch

func (r *IrisReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("iris", req.NamespacedName)
    // Fetch Iris objects by name
    log.Info("fetching Iris resource")
    iris := testv1alpha1.Iris{}
    if err := r.Get(ctx, req.NamespacedName, &amp;amp;iris); err != nil {
        log.Error(err, "unable to fetch Iris resource")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    if err := r.cleanupOwnedResources(ctx, log, &amp;amp;iris); err != nil {
        log.Error(err, "failed to clean up old Deployment resources for Iris")
        return ctrl.Result{}, err
    }

    log = log.WithValues("deployment_name", iris.Spec.DeploymentName)
    log.Info("checking if an existing Deployment exists for this resource")
    deployment := apps.Deployment{}
    err := r.Get(ctx, client.ObjectKey{Namespace: iris.Namespace, Name: iris.Spec.DeploymentName}, &amp;amp;deployment)
    if apierrors.IsNotFound(err) {
        log.Info("could not find existing Deployment for Iris, creating one...")

        deployment = *buildDeployment(iris)
        if err := r.Client.Create(ctx, &amp;amp;deployment); err != nil {
            log.Error(err, "failed to create Deployment resource")
            return ctrl.Result{}, err
        }

        r.Recorder.Eventf(&amp;amp;iris, core.EventTypeNormal, "Created", "Created deployment %q", deployment.Name)
        log.Info("created Deployment resource for Iris")
        return ctrl.Result{}, nil
    }
    if err != nil {
        log.Error(err, "failed to get Deployment for Iris resource")
        return ctrl.Result{}, err
    }

    log.Info("existing Deployment resource already exists for Iris, checking replica count")

    expectedReplicas := int32(1)
    if iris.Spec.Replicas != nil {
        expectedReplicas = *iris.Spec.Replicas
    }
    
    if *deployment.Spec.Replicas != expectedReplicas {
        log.Info("updating replica count", "old_count", *deployment.Spec.Replicas, "new_count", expectedReplicas)            
        deployment.Spec.Replicas = &amp;amp;expectedReplicas
        if err := r.Client.Update(ctx, &amp;amp;deployment); err != nil {
            log.Error(err, "failed to update Deployment replica count")
            return ctrl.Result{}, err
        }

        r.Recorder.Eventf(&amp;amp;iris, core.EventTypeNormal, "Scaled", "Scaled deployment %q to %d replicas", deployment.Name, expectedReplicas)

        return ctrl.Result{}, nil
    }

    log.Info("replica count up to date", "replica_count", *deployment.Spec.Replicas)
    log.Info("updating Iris resource status")
    
    iris.Status.ReadyReplicas = deployment.Status.ReadyReplicas
    if err := r.Client.Status().Update(ctx, &amp;amp;iris); err != nil {
        log.Error(err, "failed to update Iris status")
        return ctrl.Result{}, err
    }

    log.Info("resource status synced")
    return ctrl.Result{}, nil
}

// Delete the deployment resources that no longer match the iris.spec.deploymentName field
func (r *IrisReconciler) cleanupOwnedResources(ctx context.Context, log logr.Logger, iris *testv1alpha1.Iris) error {
    log.Info("looking for existing Deployments for Iris resource")

    var deployments apps.DeploymentList
    if err := r.List(ctx, &amp;amp;deployments, client.InNamespace(iris.Namespace), client.MatchingField(deploymentOwnerKey, iris.Name)); err != nil {
        return err
    }
    
    deleted := 0
    for _, depl := range deployments.Items {
        if depl.Name == iris.Spec.DeploymentName {
            // Leave Deployment if its name matches the one in the Iris resource
            continue
        }

        if err := r.Client.Delete(ctx, &amp;amp;depl); err != nil {
            log.Error(err, "failed to delete Deployment resource")
            return err
        }

        r.Recorder.Eventf(iris, core.EventTypeNormal, "Deleted", "Deleted deployment %q", depl.Name)
        deleted++
    }

    log.Info("finished cleaning up old Deployment resources", "number_deleted", deleted)
    return nil
}

func buildDeployment(iris testv1alpha1.Iris) *apps.Deployment {
    deployment := apps.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:            iris.Spec.DeploymentName,
            Namespace:       iris.Namespace,
            OwnerReferences: []metav1.OwnerReference{*metav1.NewControllerRef(&amp;amp;iris, testv1alpha1.GroupVersion.WithKind("Iris"))},
        },
        Spec: apps.DeploymentSpec{
            Replicas: iris.Spec.Replicas,
            Selector: &amp;amp;metav1.LabelSelector{
                MatchLabels: map[string]string{
                    "iris/deployment-name": iris.Spec.DeploymentName,
                },
            },
            Template: core.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{
                        "iris/deployment-name": iris.Spec.DeploymentName,
                    },
                },
                Spec: core.PodSpec{
                    Containers: []core.Container{
                        {
                            Name:  &lt;b&gt;"iris"&lt;/b&gt;,
                            Image: &lt;b&gt;"store/intersystems/iris-community:2020.4.0.524.0"&lt;/b&gt;,
                        },
                    },
                },
            },
        },
    }
    return &amp;amp;deployment
}

var (
    deploymentOwnerKey = ".metadata.controller"
)

// Specifies how the controller is built to watch a CR and other resources 
// that are owned and managed by that controller
func (r *IrisReconciler) SetupWithManager(mgr ctrl.Manager) error {
    if err := mgr.GetFieldIndexer().IndexField(&amp;amp;apps.Deployment{}, deploymentOwnerKey, func(rawObj runtime.Object) []string {
        // grab the Deployment object, extract the owner...
        depl := rawObj.(*apps.Deployment)
        owner := metav1.GetControllerOf(depl)
        if owner == nil {
            return nil
        }
        // ...make sure it's an Iris...
        if owner.APIVersion != testv1alpha1.GroupVersion.String() || owner.Kind != "Iris" {
            return nil
        }

        // ...and if so, return it
        return []string{owner.Name}
    }); err != nil {
        return err
    }

    return ctrl.NewControllerManagedBy(mgr).
        For(&amp;amp;testv1alpha1.Iris{}).
        Owns(&amp;amp;apps.Deployment{}).
        Complete(r)
}
&lt;/code&gt;&lt;/pre&gt;
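&lt;p&gt;Stripped of the Kubernetes API calls, the decision Reconcile makes on each pass can be sketched as a pure function. This is our own illustrative simplification, not part of the generated project; the names &lt;em&gt;desiredReplicas&lt;/em&gt; and &lt;em&gt;nextAction&lt;/em&gt; are ours:&lt;/p&gt;

```go
package main

import "fmt"

// desiredReplicas mirrors the defaulting in Reconcile: when spec.Replicas
// is unset (specified == false stands in for a nil pointer), default to 1.
func desiredReplicas(specified bool, n int32) int32 {
	if !specified {
		return 1
	}
	return n
}

// nextAction captures the three outcomes of a reconciliation pass:
// create a missing Deployment, scale a mismatched one, or do nothing.
func nextAction(deploymentExists bool, current int32, specified bool, n int32) string {
	if !deploymentExists {
		return "create"
	}
	if current != desiredReplicas(specified, n) {
		return "scale"
	}
	return "none"
}

func main() {
	fmt.Println(nextAction(false, 0, false, 0)) // no Deployment yet: create
	fmt.Println(nextAction(true, 1, true, 2))   // spec asks for 2, have 1: scale
	fmt.Println(nextAction(true, 2, true, 2))   // in sync: none
}
```

&lt;p&gt;Each pass handles exactly one of these cases and returns; the next event (or requeue) picks up the remaining drift, which is why the real method can return early after creating or scaling.&lt;/p&gt;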

&lt;p&gt;To make the events logging work, we need to add yet another line to the main.go file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
if err = (&amp;amp;controllers.IrisReconciler{
                Client: mgr.GetClient(),
                Log:    ctrl.Log.WithName("controllers").WithName("Iris"),
                Scheme: mgr.GetScheme(),
                &lt;b&gt;Recorder: mgr.GetEventRecorderFor("iris-controller")&lt;/b&gt;,
        }).SetupWithManager(mgr); err != nil {
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now everything is ready to set up an operator.&lt;br&gt;
Let’s first install the CRD using the Makefile’s &lt;em&gt;install&lt;/em&gt; target:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat Makefile
…
# Install CRDs into a cluster
install: manifests
        kustomize build config/crd | kubectl apply -f -
...
$ make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can have a look at the resulting CRD YAML file in the config/crd/bases/ directory. &lt;br&gt;
Now check CRD existence in the cluster:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get crd
NAME                      CREATED AT
iris.test.myardyas.club   2020-11-17T11:02:02Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Let’s run our controller in another terminal, locally (not in Kubernetes) – just to see if it actually works:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ make run
...
2020-11-17T13:02:35.649+0200 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8080"}
2020-11-17T13:02:35.650+0200 INFO setup starting manager
2020-11-17T13:02:35.651+0200 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2020-11-17T13:02:35.752+0200 INFO controller-runtime.controller Starting EventSource {"controller": "iris", "source": "kind source: /, Kind="}
2020-11-17T13:02:35.852+0200 INFO controller-runtime.controller Starting EventSource {"controller": "iris", "source": "kind source: /, Kind="}
2020-11-17T13:02:35.853+0200 INFO controller-runtime.controller Starting Controller {"controller": "iris"}
2020-11-17T13:02:35.853+0200 INFO controller-runtime.controller Starting workers {"controller": "iris", "worker count": 1}
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we have the CRD and the controller installed, all we need to do is create an instance of our custom resource. A template can be found in the config/samples/test_v1alpha1_iris.yaml file. In this file, we need to make changes similar to those in crd-object.yaml:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat config/samples/test_v1alpha1_iris.yaml
apiVersion: test.myardyas.club/v1alpha1
kind: Iris
metadata:
  name: iris
spec:
  deploymentName: iris
  replicas: 1

$ kubectl apply -f config/samples/test_v1alpha1_iris.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After a brief delay caused by the need to pull an IRIS image, you should see the running IRIS pod:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deploy
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
iris   1/1     1            1           119s

$ kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
iris-6b78cbb67-vk2gq   1/1     Running   0          2m42s

$ kubectl logs -f -l iris/deployment-name=iris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can open the IRIS portal using the kubectl port-forward command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl port-forward deploy/iris 52773
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Go to &lt;a href="http://localhost:52773/csp/sys/UtilHome.csp"&gt;http://localhost:52773/csp/sys/UtilHome.csp&lt;/a&gt; in your browser. &lt;br&gt;
What if we change the replica count in our custom resource? Let’s make and apply this change:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi config/samples/test_v1alpha1_iris.yaml
…
  replicas: 2
$ kubectl apply -f config/samples/test_v1alpha1_iris.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should now see another IRIS pod appear.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get events
...
54s         Normal   Scaled                    iris/iris                   Scaled deployment "iris" to 2 replicas
54s         Normal   ScalingReplicaSet         deployment/iris             Scaled up replica set iris-6b78cbb67 to 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Log messages in the terminal where the controller is running report successful reconciliation:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2020-11-17T13:09:04.102+0200 INFO controllers.Iris replica count up to date {"iris": "default/iris", "deployment_name": "iris", "replica_count": 2}
2020-11-17T13:09:04.102+0200 INFO controllers.Iris updating Iris resource status {"iris": "default/iris", "deployment_name": "iris"}
2020-11-17T13:09:04.104+0200 INFO controllers.Iris resource status synced {"iris": "default/iris", "deployment_name": "iris"}
2020-11-17T13:09:04.104+0200 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "iris", "request": "default/iris"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Okay, our controller seems to be working. Now we’re ready to deploy it inside Kubernetes as a pod. For that, we need to build the controller’s Docker image and push it to a registry. This can be any registry that works with Kubernetes – DockerHub, ECR, GCR, and so on. &lt;br&gt;
We’re using a local (kind) Kubernetes cluster, so let’s push the controller to a local registry created with the kind-with-registry.sh script available from the &lt;a href="https://kind.sigs.k8s.io/docs/user/local-registry/"&gt;Local Registry&lt;/a&gt; page. We can simply remove the current cluster and recreate it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kind delete cluster
$ ./kind-with-registry.sh
$ make install
$ docker build . -t localhost:5000/iris-operator:v0.1 # Dockerfile is autogenerated by kubebuilder
$ docker push localhost:5000/iris-operator:v0.1
$ make deploy IMG=localhost:5000/iris-operator:v0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The controller will be deployed into the iris-system namespace. Alternatively, you can list pods across all namespaces to find it: &lt;em&gt;kubectl get pod -A&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris-system get po
NAME                                      READY   STATUS    RESTARTS   AGE
iris-controller-manager-bf9fd5855-kbklt   2/2     Running   0          54s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Let’s check the logs:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris-system logs -f -l control-plane=controller-manager -c manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can experiment with changing the replica count in the custom resource and observe how the changes are reflected in the number of IRIS instances.&lt;/p&gt;
&lt;h4&gt;
  
  
  Operator-SDK
&lt;/h4&gt;

&lt;p&gt;Another handy tool for generating operator code is the &lt;a href="https://sdk.operatorframework.io/"&gt;Operator SDK&lt;/a&gt;. To get an initial idea of this tool, have a look at this &lt;a href="https://sdk.operatorframework.io/docs/building-operators/golang/tutorial/"&gt;tutorial&lt;/a&gt;. You should &lt;a href="https://sdk.operatorframework.io/docs/installation/"&gt;install operator-sdk&lt;/a&gt; first.&lt;br&gt;
For our simple use case, the process looks much like the one we followed with kubebuilder (you can delete and recreate the kind cluster with the Docker registry before continuing). Run in another directory:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir iris
$ cd iris
$ go mod init iris
$ operator-sdk init --domain=myardyas.club
$ operator-sdk create api --group=test --version=v1alpha1 --kind=Iris
# Answer ‘y’ to both prompts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now change the &lt;em&gt;IrisSpec&lt;/em&gt; and &lt;em&gt;IrisStatus&lt;/em&gt; structures in the same file as before, api/v1alpha1/iris_types.go.&lt;br&gt;
We’ll use the same iris_controller.go file as we did with kubebuilder. Don’t forget to add the &lt;em&gt;Recorder&lt;/em&gt; field in the main.go file.&lt;br&gt;
Because kubebuilder and operator-sdk use different versions of the Golang packages, you need to pass a context to &lt;em&gt;IndexField&lt;/em&gt; in the &lt;em&gt;SetupWithManager&lt;/em&gt; function in controllers/iris_controller.go:&lt;/p&gt;

&lt;pre&gt;
&lt;b&gt;ctx := context.Background()&lt;/b&gt;
if err := mgr.GetFieldIndexer().IndexField(ctx, &amp;amp;apps.Deployment{}, deploymentOwnerKey, func(rawObj runtime.Object) []string {
&lt;/pre&gt;

&lt;p&gt;Then, install the CRD and the operator (make sure that the &lt;em&gt;kind&lt;/em&gt; cluster is running):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ make install
$ docker build . -t localhost:5000/iris-operator:v0.2
$ docker push localhost:5000/iris-operator:v0.2
$ make deploy IMG=localhost:5000/iris-operator:v0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see the CRD, operator pod, and IRIS pod(s) similar to the ones we’ve seen when we worked with kubebuilder.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Although a controller includes a lot of code, you’ve seen that changing the number of IRIS replicas is just a matter of changing one line in a custom resource. All the complexity is hidden in the controller implementation. We’ve looked at how a simple operator can be created using handy scaffolding tools. &lt;br&gt;
Our operator cares only about IRIS replicas. Now imagine that we actually need to persist IRIS data on disk. That would require a StatefulSet and Persistent Volumes. We would also need a Service and, perhaps, an Ingress for external access. We should be able to set the IRIS version and system password, configure Mirroring and/or ECP, and so on. You can imagine the amount of work InterSystems had to do to simplify IRIS deployment by hiding all the IRIS-specific logic inside operator code. &lt;br&gt;
In the next article, we’re going to look at the InterSystems Kubernetes Operator (IKO) in more detail and investigate its possibilities in more complex scenarios.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>intersystems</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Creating custom SNMP OIDs in InterSystems IRIS</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Fri, 16 Oct 2020 12:18:27 +0000</pubDate>
      <link>https://dev.to/intersystems/creating-custom-snmp-oids-in-intersystems-iris-mmc</link>
      <guid>https://dev.to/intersystems/creating-custom-snmp-oids-in-intersystems-iris-mmc</guid>
      <description>&lt;p&gt;This post is dedicated to the task of monitoring aa IRIS instance using SNMP. Some users of IRIS are probably doing it already in some way or another. Monitoring via SNMP has been supported by the standard IRIS package for a long time now, but not all the necessary parameters are available "out of the box". For example, it would be nice to monitor the number of CSP sessions, get detailed information about the use of the license, particular KPI’s of the system being used and such. After reading this article, you will know how to add your parameters to IRIS monitoring using SNMP.&lt;/p&gt;

&lt;h4&gt;
  
  
  What we already have
&lt;/h4&gt;

&lt;p&gt;IRIS can be monitored using SNMP. A full list of what’s supported can be found in the files in the &amp;lt;Install_dir&amp;gt;/SNMP/ directory. You should find a file called &lt;em&gt;ISC-IRIS.mib&lt;/em&gt; there. In particular, we’d like to know what information we can get about licenses and sessions. The table below lists the corresponding OIDs, given that the hierarchy starts from the InterSystems root: 1.3.6.1.4.1.16563&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;OID&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Data type&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;.4.1.1.1.10&lt;/td&gt;
&lt;td&gt;irisSysLicenseUsed&lt;/td&gt;
&lt;td&gt;The current number of licenses used on this IRIS instance&lt;/td&gt;
&lt;td&gt;INTEGER&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.4.1.1.1.11&lt;/td&gt;
&lt;td&gt;irisSysLicenseHigh&lt;/td&gt;
&lt;td&gt;The high-water mark for licenses used on this IRIS instance&lt;/td&gt;
&lt;td&gt;INTEGER&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.4.2.15&lt;/td&gt;
&lt;td&gt;irisLicenseExceed&lt;/td&gt;
&lt;td&gt;A request for a license has exceeded the licenses available or allowed&lt;/td&gt;
&lt;td&gt;Trap message&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.4.1.1.1.6&lt;/td&gt;
&lt;td&gt;irisSysCurUser&lt;/td&gt;
&lt;td&gt;Current number of users on this IRIS instance&lt;/td&gt;
&lt;td&gt;INTEGER&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
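&lt;p&gt;The OID suffixes in the table are relative to that InterSystems enterprise root. As a quick illustration (the &lt;em&gt;fullOID&lt;/em&gt; helper is ours, not part of the IRIS package), the absolute OID you would query is simply the root with a suffix appended:&lt;/p&gt;

```go
package main

import "fmt"

// fullOID joins the InterSystems enterprise root with one of the
// relative suffixes listed in the table above.
func fullOID(suffix string) string {
	return "1.3.6.1.4.1.16563" + suffix
}

func main() {
	fmt.Println(fullOID(".4.1.1.1.10")) // irisSysLicenseUsed
	fmt.Println(fullOID(".4.1.1.1.6"))  // irisSysCurUser
}
```

&lt;p&gt;These absolute OIDs are what you would pass to tools like snmpget or configure as items in a monitoring system.&lt;/p&gt;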

&lt;p&gt;The package lacks many important parameters, such as the number of CSP sessions and detailed license information, and, of course, it has no application-specific KPIs.&lt;/p&gt;

&lt;p&gt;Here is an example of what we’d like to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The number of CSP users&lt;/li&gt;
&lt;li&gt;Limitations of our license in terms of the user count&lt;/li&gt;
&lt;li&gt;License expiry date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s also add a few parameters for performance analysis. The parameters themselves are in the package, but we want to know the increment per minute, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The increase in the number of global references per minute&lt;/li&gt;
&lt;li&gt;The number of executed commands per minute&lt;/li&gt;
&lt;li&gt;The number of routine calls per minute&lt;/li&gt;
&lt;/ul&gt;
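&lt;p&gt;Turning cumulative counters like these into per-minute increments is simple arithmetic over two samples. Here is a minimal sketch of that calculation (our own illustration; in the class below, the actual sampling happens inside the GetSample method on the monitor’s schedule):&lt;/p&gt;

```go
package main

import "fmt"

// ratePerMinute converts two samples of a cumulative counter
// (e.g. total global references) taken elapsedSeconds apart
// into an increment per minute.
func ratePerMinute(previous, current, elapsedSeconds int64) int64 {
	if elapsedSeconds == 0 {
		return 0 // avoid division by zero on the first sample
	}
	return (current - previous) * 60 / elapsedSeconds
}

func main() {
	// 12000 new global references over a 30-second sampling interval
	fmt.Println(ratePerMinute(100000, 112000, 30)) // 24000 per minute
}
```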

&lt;h4&gt;
  
  
  How to add "your" parameters
&lt;/h4&gt;

&lt;p&gt;You can rely on the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_snmp" rel="noopener noreferrer"&gt;Monitoring InterSystems IRIS Using SNMP&lt;/a&gt; document.&lt;/p&gt;

&lt;p&gt;The version of our test IRIS instance is 2020.3.0.200.0com. The operating system is Ubuntu 18.04.4 LTS (the docker image &lt;em&gt;intersystemsdc/iris-community:2020.3.0.200.0-zpm&lt;/em&gt; is used, with the root user enabled).&lt;/p&gt;

&lt;p&gt;Here is our agenda:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a class for collecting metrics.&lt;/li&gt;
&lt;li&gt;Register and activate a new class in IRIS using ^%SYSMONMGR.&lt;/li&gt;
&lt;li&gt;Create a custom MIB using &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?APP=1&amp;amp;LIBRARY=%25SYS&amp;amp;CLASSNAME=MonitorTools.SNMP" rel="noopener noreferrer"&gt;MonitorTools.SNMP&lt;/a&gt; class methods. We’ll use 99990 as a temporary PEN (Private Enterprise Number), but we’ll need to register with &lt;a href="http://www.iana.org/" rel="noopener noreferrer"&gt;IANA&lt;/a&gt; afterwards. This procedure is free, takes a week or two, and requires some email exchange along the lines of "what do you need your own PEN for?".&lt;/li&gt;
&lt;li&gt;Start a monitoring service with a connected IRIS subagent.&lt;/li&gt;
&lt;li&gt;Use snmpwalk to make sure we have access to all our newly-created OID’s.&lt;/li&gt;
&lt;li&gt;Add our OIDs to a third-party monitoring system. Let’s use &lt;a href="http://www.zabbix.com/download" rel="noopener noreferrer"&gt;Zabbix&lt;/a&gt;, for example. Zabbix documentation is available &lt;a href="https://www.zabbix.com/documentation/4.0/doku.php" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Let’s make sure that monitoring is up and running.&lt;/li&gt;
&lt;li&gt;Add the start of the system monitor in our TEST namespace to the system startup list.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s now follow the agenda, point by point.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Create a class for collecting metrics
&lt;/h4&gt;

&lt;p&gt;The metrics collection class extends &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_healthmon#GCM_healthmon_appmon_user_classes" rel="noopener noreferrer"&gt;%Monitor.Adaptor&lt;/a&gt;. In the Terminal we switch to the %SYS namespace and export the hidden Monitor.Sample class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%SYS&amp;gt;do $system.OBJ.Export("Monitor.Sample.cls","/tmp/Monitor_Sample.xml")
Exporting to XML started on 10/16/2020 09:33:55
Exporting class: Monitor.Sample
Export finished successfully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s assume that the TEST namespace is our working area. Let’s switch to it and import the Monitor.Sample class here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TEST&amp;gt;do $system.OBJ.Load("/tmp/Monitor_Sample.xml", "ck")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we create a class that describes the implementation of a monitoring mechanism for the 6 metrics described in the "What we already have" section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Class monitoring.snmp.Metrics Extends %Monitor.Adaptor
{
/// Give the application a name. This allows you to group different
/// classes together under the same application level in the SNMP MIB.
/// The default is the same as the Package name.
Parameter APPLICATION = "Monitoring";
/// CSP sessions count
Property Sessions As %Monitor.Integer;
/// License user limit
Property KeyLicenseUnits As %Monitor.Integer;
/// License key expiration date
Property KeyExpirationDate As %Monitor.String;
/// Global references speed
Property GloRefSpeed As %Monitor.Integer;
/// Number of commands executed
Property ExecutedSpeed As %Monitor.Integer;
/// Number of routine loads/saves
Property RoutineLoadSpeed As %Monitor.Integer;
/// The method is REQUIRED. It is where the Application Monitor
/// calls to collect data samples, which then get picked up by the
/// ^SNMP server process when requested.
Method GetSample() As %Status
{
      set ..Sessions = ..getSessions()
      set ..KeyLicenseUnits = ..getKeyLicenseUnits()
      set ..KeyExpirationDate = ..getKeyExpirationDate()

      set perfList = ..getPerformance()
      set ..GloRefSpeed = $listget(perfList,1)
      set ..ExecutedSpeed = $listget(perfList,2)
      set ..RoutineLoadSpeed = $listget(perfList,3)

      quit $$$OK
}
/// Get CSP sessions count
Method getSessions() As %Integer
{
    // This method will only work if we don't use WebAddon:
    // quit $system.License.CSPUsers()
    //
    // This will work even if we use WebAddon:
    set csp = ""
    try {
        set cn = $NAMESPACE
        znspace "%SYS"
        set db = ##class(SYS.Stats.Dashboard).Sample()
        set csp = db.CSPSessions
        znspace cn
    } catch e {
        set csp = "0"
    }
    quit csp
}
/// Get the license user limit
Method getKeyLicenseUnits() As %Integer
{
    quit $system.License.KeyLicenseUnits()
}
/// Get license expiration date in human-readable format
Method getKeyExpirationDate() As %String
{
    quit $zdate($system.License.KeyExpirationDate(),3)
}
/// Get performance metrics (glorefs, routines, etc.)
Method getPerformance() As %List
{
{
    set cn = $NAMESPACE
    znspace "%SYS"
    set m = ##class(SYS.Monitor.SystemSensors).%New()
    do m.GetSensors()
    znspace cn
    quit $listbuild(m.SensorReading("GlobalRefsPerMin"), 
                    m.SensorReading("RoutineCommandsPerMin"), 
                    m.SensorReading("RoutineLoadsPerMin"))
}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure that the GetSample() method really fetches the necessary data for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TEST&amp;gt;set metrics = ##class(monitoring.snmp.Metrics).%New()

TEST&amp;gt;write metrics.GetSample()
1
TEST&amp;gt;zwrite metrics 
metrics=3@monitoring.snmp.Metrics  ; &amp;lt;OREF&amp;gt;
+----------------- general information ---------------
|      oref value: 3
|      class name: monitoring.snmp.Metrics
| reference count: 2
+----------------- attribute values ------------------
|      ExecutedSpeed = 2653596
|        GloRefSpeed = 35863
|  KeyExpirationDate = "2021-10-30"
|    KeyLicenseUnits = 5
|   RoutineLoadSpeed = 38
|           Sessions = 5
+-----------------------------------------------------
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Register and activate the new class in IRIS using ^%SYSMONMGR
&lt;/h4&gt;

&lt;p&gt;Open the terminal and switch to the TEST namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# iris session iris -U TEST
TEST&amp;gt;do ^%SYSMONMGR
1. Select item 5, Manage Application Monitor.

2. Select item 2, Manage Monitor Classes.

3. Select item 3, Register Monitor System Classes.
Exporting to XML started on 10/16/2020 10:05:39
Exporting class: Monitor.Sample
Export finished successfully.

Load started on 10/16/2020 10:05:39
Loading file /usr/irissys/mgr/Temp/VonCEUzQ8gWgfQ.stream as xml
Imported class: Monitor.Sample, using worker jobs
Compiling class Monitor.Sample
Compiling table Monitor.Sample
Compiling routine Monitor.Sample.1
Load finished successfully.

4. Select item 1, Activate/Deactivate Monitor Class
Class??
Num MetricsClassName Activated
1) %Monitor.System.AuditCount N
…
15) monitoring.snmp.Metrics N
Class? 15 monitoring.snmp.Metrics
Activate class? Yes =&amp;gt; Yes

5. Select item 7, Exit

6. Select item 6, Exit

7. Select item 1, Start/Stop System Monitor

8. Select item 2, Stop System Monitor
Stopping System Monitor… System Monitor not running!

9. Select item 1, Start System Monitor
Starting System Monitor… System Monitor started

10. Select item 3, Exit

11. Select item 4, View System Monitor State
Component                     State
System Monitor                     OK
%SYS.Monitor.AppMonSensor          OK

12. Select item 7, Exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Create a user MIB
&lt;/h4&gt;

&lt;p&gt;A user MIB is created with the help of &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?APP=1&amp;amp;LIBRARY=%25SYS&amp;amp;CLASSNAME=MonitorTools.SNMP" rel="noopener noreferrer"&gt;MonitorTools.SNMP&lt;/a&gt; class methods. For this example, let’s use a fake PEN (Private Enterprise Number), 99990, but the PEN will have to be registered with &lt;a href="http://pen.iana.org/pen/PenApplication.page" rel="noopener noreferrer"&gt;IANA&lt;/a&gt; afterwards. You can view registered numbers &lt;a href="http://www.iana.org/assignments/enterprise-numbers/enterprise-numbers" rel="noopener noreferrer"&gt;here&lt;/a&gt;. For example, InterSystems’ PEN is 16563.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;16563
InterSystems
Robert Davis
rdavis&amp;amp;intersystems.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use the &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?APP=1&amp;amp;LIBRARY=%25SYS&amp;amp;CLASSNAME=MonitorTools.SNMP" rel="noopener noreferrer"&gt;MonitorTools.SNMP&lt;/a&gt; class and its &lt;a href="https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?PAGE=CLASS&amp;amp;LIBRARY=%25SYS&amp;amp;CLASSNAME=MonitorTools.SNMP#METHOD_CreateMIB" rel="noopener noreferrer"&gt;CreateMIB()&lt;/a&gt; method to create a MIB file. This method takes 10 arguments:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Argument name and type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AppName As %String&lt;/td&gt;
&lt;td&gt;application name&lt;/td&gt;
&lt;td&gt;Value of the APPLICATION parameter of the metrics.snmp.Metrics class&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Namespace As %String&lt;/td&gt;
&lt;td&gt;our namespace&lt;/td&gt;
&lt;td&gt;TEST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EntID As %Integer&lt;/td&gt;
&lt;td&gt;company PEN&lt;/td&gt;
&lt;td&gt;99990 (fiction)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AppID As %Integer&lt;/td&gt;
&lt;td&gt;application OID inside the company&lt;/td&gt;
&lt;td&gt;42&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Company As %String&lt;/td&gt;
&lt;td&gt;company name (capital letters)&lt;/td&gt;
&lt;td&gt;fiction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prefix As %String&lt;/td&gt;
&lt;td&gt;prefix of all SNMP objects we create&lt;/td&gt;
&lt;td&gt;fiction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CompanyShort As %String&lt;/td&gt;
&lt;td&gt;short company prefix (capital letters)&lt;/td&gt;
&lt;td&gt;fict&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MIBname As %String&lt;/td&gt;
&lt;td&gt;name of the MIB file&lt;/td&gt;
&lt;td&gt;ISC-TEST&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Contact As %String&lt;/td&gt;
&lt;td&gt;contact information (address, in particular)&lt;/td&gt;
&lt;td&gt;Let’s leave the default value: Earth, Russia, Somewhere in the forests, Subject: ISC-TEST.mib&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;List As %Boolean&lt;/td&gt;
&lt;td&gt;equivalent to verbose; shows task progress while generating the MIB file&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
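Note how EntID and AppID map into the OID tree: everything CreateMIB() generates lives under the standard enterprises arc (1.3.6.1.4.1) followed by these two numbers. A quick shell sanity check of the base OID we will later query with snmpwalk (a sketch; 99990 is our fake PEN):

```shell
# Compose the enterprise base OID from the PEN (EntID) and the AppID:
# iso(1).org(3).dod(6).internet(1).private(4).enterprises(1).EntID.AppID
PEN=99990
APP_ID=42
BASE_OID="1.3.6.1.4.1.${PEN}.${APP_ID}"
echo "${BASE_OID}"   # 1.3.6.1.4.1.99990.42
```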

&lt;p&gt;And here comes the creation of the MIB file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%SYS&amp;gt;do ##class(MonitorTools.SNMP).CreateMIB("Monitoring","TEST",99990,42,"fiction","fict","fiction","ISC-TEST",,1)
Create SNMP structure for Application - Monitoring
     Group - Metrics
          ExecutedSpeed = Integer
          GloRefSpeed = Integer
          KeyExpirationDate = String
          KeyLicenseUnits = Integer
          RoutineLoadSpeed = Integer
          Sessions = Integer

Create MIB file for Monitoring
     Generate table Metrics
          Add object ExecutedSpeed, Type = Integer
          Add object GloRefSpeed, Type = Integer
          Add object KeyExpirationDate, Type = String
          Add object KeyLicenseUnits, Type = Integer
          Add object RoutineLoadSpeed, Type = Integer
          Add object Sessions, Type = Integer
MIB done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is now a new MIB file, &lt;em&gt;ISC-TEST.mib&lt;/em&gt;, in the &amp;lt;Install_dir&amp;gt;/mgr/TEST folder.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Start the monitoring service with the connected IRIS subagent
&lt;/h4&gt;

&lt;p&gt;Let’s enable the monitoring service: &lt;em&gt;System Administration -&amp;gt; Security -&amp;gt; Services -&amp;gt; %Service_Monitor (click) -&amp;gt; Service Enabled (check)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm92sauvg7jtux3ne7pgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fm92sauvg7jtux3ne7pgx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fme0xxn55cd3icj4rglre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fme0xxn55cd3icj4rglre.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also specify that we want to start the SNMP subagent when IRIS is started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa5sithbxks4nloxmkvyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa5sithbxks4nloxmkvyx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Linux, we use the net-snmp package for SNMP monitoring, so we install it, configure it to work with subagents, and specify port 705 (the default) for the master agent to talk to subagents.&lt;br&gt;
A small article about the snmpd.conf configuration file that complements the &lt;a href="http://www.net-snmp.org/docs/man/snmpd.conf.html" rel="noopener noreferrer"&gt;manual&lt;/a&gt; can be found on &lt;a href="http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/mrtg/mrtg_config_step_3.php" rel="noopener noreferrer"&gt;cyberciti&lt;/a&gt;. Here is the final set of settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# apt-get update
# apt-get install -y snmpd snmp
# grep '^[^#]' /etc/snmp/snmpd.conf
master agentx
agentXSocket TCP:localhost:705
com2sec local localhost public
group MyRWGroup v1 local
group MyRWGroup v2c local
group MyRWGroup usm local
view all included .1 80
view system included .iso.org.dod
access MyROGroup "" any noauth exact all none none
access MyRWGroup "" any noauth exact all all none
syslocation server (edit /etc/snmp/snmpd.conf)
syscontact Root &amp;lt;root@localhost&amp;gt; (configure /etc/snmp/snmp.local.conf)
dontLogTCPWrappersConnects yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s restart the snmpd and snmptrapd daemons in Linux. After that, we start the SNMP service to activate the SNMP IRIS subagent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%SYS&amp;gt;do start^SNMP

%SYS&amp;gt;; Check SNMP subagent status

%SYS&amp;gt;zwrite ^SYS("MONITOR")
^SYS("MONITOR","SNMP")="RUN"
^SYS("MONITOR","SNMP","NAMESPACE")="%SYS"
^SYS("MONITOR","SNMP","PID")=2035
^SYS("MONITOR","SNMP","PORT")=705
^SYS("MONITOR","SNMP","STARTUP")="SNMP agent started on port 705, timeout=20, winflag=0, Debug=0"
^SYS("MONITOR","SNMP","STATE")="Terminated - 10/16/2020 10:40:31.7147AM"
^SYS("MONITOR","SNMP","WINSTART")=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  5. Use snmpwalk to check that our newly created OIDs are available
&lt;/h4&gt;

&lt;p&gt;This can be done with snmpwalk. Let’s walk our enterprise subtree and display all six OIDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# snmpwalk -On -v 2c -c public localhost 1.3.6.1.4.1.99990
.1.3.6.1.4.1.99990.42.1.1.1.1.4.73.82.73.83 = INTEGER: 1928761
.1.3.6.1.4.1.99990.42.1.1.1.2.4.73.82.73.83 = INTEGER: 226351
.1.3.6.1.4.1.99990.42.1.1.1.3.4.73.82.73.83 = STRING: "2021-10-30"
.1.3.6.1.4.1.99990.42.1.1.1.4.4.73.82.73.83 = INTEGER: 5
.1.3.6.1.4.1.99990.42.1.1.1.5.4.73.82.73.83 = INTEGER: 306
.1.3.6.1.4.1.99990.42.1.1.1.6.4.73.82.73.83 = INTEGER: 2

# If you get a result like this:
# .1.3.6.1.4.1.99990 = No Such Object available on this agent at this OID
# try restarting the SNMP subagent in IRIS:
# do stop^SNMP
# do start^SNMP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;ISC-TEST.mib&lt;/em&gt; file defines the sequence of our OIDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FictMetricsR ::=
       SEQUENCE {
        fictExecutedSpeed   Integer32,
        fictGloRefSpeed   Integer32,
        fictKeyExpirationDate   DisplayString,
        fictKeyLicenseUnits   Integer32,
        fictRoutineLoadSpeed   Integer32,
        fictSessions   Integer32
       }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accordingly, the number of sessions, for example, corresponds to the last OID, 1.3.6.1.4.1.99990.42.1.1.1.6. You can compare its value with the number of sessions shown on the SMP dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqh8p6wlx5dvp9th7hfma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqh8p6wlx5dvp9th7hfma.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
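As a side note, the .4.73.82.73.83 suffix seen in the snmpwalk output is the table row index: the instance name encoded as a length-prefixed ASCII string (4 characters, then the codes for I, R, I, S). A quick shell check of that decoding:

```shell
# Decode the row-index part of the OID: 4 is the string length,
# 73 82 73 83 are the decimal ASCII codes of the characters.
# The inner printf converts each code to an octal escape, the outer one prints it.
printf "$(printf '\\%03o' 73 82 73 83)\n"   # prints: IRIS
```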

&lt;h4&gt;
  
  
  6. Add our OIDs to an external monitoring system
&lt;/h4&gt;

&lt;p&gt;Let’s use &lt;a href="http://www.zabbix.com/download.php" rel="noopener noreferrer"&gt;Zabbix&lt;/a&gt;. Zabbix documentation can be found &lt;a href="http://www.zabbix.com/documentation.php" rel="noopener noreferrer"&gt;here&lt;/a&gt;. A detailed Linux installation and configuration guide for Zabbix is available &lt;a href="https://www.zabbix.com/documentation/3.0/manual/installation/install_from_packages" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Zabbix was selected because it not only lets you draw charts but also monitor plain-text values (in our case, the license expiry date and the license unit count). After adding our 6 metrics as &lt;a href="https://www.zabbix.com/documentation/3.0/manual/config/items/item" rel="noopener noreferrer"&gt;items&lt;/a&gt; (type: SNMPv2 agent) on our local host and creating 4 &lt;a href="https://www.zabbix.com/documentation/3.0/manual/config/visualisation/graphs/custom" rel="noopener noreferrer"&gt;graphs&lt;/a&gt; and 2 plain-text parameters (as &lt;a href="https://www.zabbix.com/documentation/3.0/manual/config/visualisation/screens" rel="noopener noreferrer"&gt;screen&lt;/a&gt; elements), we should see the following picture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxrnucnplk6885auubsj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxrnucnplk6885auubsj5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above is the information about license expiry and the number of available license slots. Graphs speak for themselves.&lt;/p&gt;

&lt;h4&gt;
  
  
  7. Add the start of the system monitor in the TEST namespace to system startup
&lt;/h4&gt;

&lt;p&gt;There is a pretty good &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSTU_customize#GSTU_customize_startstop" rel="noopener noreferrer"&gt;document&lt;/a&gt; about user routines executed when IRIS starts and stops. They are called %ZSTART and %ZSTOP, respectively.&lt;/p&gt;

&lt;p&gt;What we are interested in is that the system monitor (^%SYSMONMGR) should start in the TEST namespace at system startup. By default, this monitor starts only in the %SYS namespace. Therefore, we will only look at the ^%ZSTART routine. The source is in %ZSTART.mac (create and save it in the %SYS namespace).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ROUTINE %ZSTART

%ZSTART; User startup routine.
SYSTEM;
    ; IRIS is starting
    do $zutil(9,"","Starting System Monitor in TEST namespace by ^%ZSTART...Begin")
    znspace "TEST"
    set sc = ##class(%SYS.Monitor).Start()
    do $system.OBJ.DisplayError(sc)
    if (sc = 1) {
        do $zutil(9,"","Starting System Monitor in TEST namespace by ^%ZSTART...OK")
    } else {
        do $zutil(9,"","Starting System Monitor in TEST namespace by ^%ZSTART...ERROR")
    }
    ; Start the SNMP subagent
    znspace "%SYS"
    do start^SNMP
    quit
LOGIN;
    ; a user logs into IRIS (user account or telnet)
    quit
JOB;
    ; a JOB'd process begins
    quit
CALLIN;
    ; a process enters via the CALLIN interface
    quit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another way to do the same thing is with ^%SYSMONMGR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%SYS&amp;gt;do ^%SYSMONMGR

1. Select item 3, Configure System Monitor Classes.

2. Select item 2, Configure Startup Namespaces.

3. Select item 2, Add Namespace.
Namespace? TEST

4. Select item 1, List Start Namespaces.
Option? 1
     TEST

5. Select item 4, Exit.

6. Select item 3, Exit.

7. Select item 8, Exit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s now restart IRIS (if possible) to make sure that SNMP stats continue to be collected after a restart.&lt;/p&gt;

&lt;p&gt;This is it. Perhaps some will question my choice of monitored parameters or code, but the task here was to show that implementing such monitoring is possible in principle. You can add extra parameters or refactor the code later.&lt;br&gt;
Also, it’s worth making the IRIS settings persistent in Docker using the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_manifest" rel="noopener noreferrer"&gt;Installer&lt;/a&gt;, but that is beyond the scope of this article.&lt;/p&gt;

</description>
      <category>intersystems</category>
      <category>monitoring</category>
      <category>snmp</category>
    </item>
    <item>
      <title>Adding TLS and DNS to IRIS-based Services Deployed on Google Kubernetes Engine</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Fri, 02 Oct 2020 06:27:12 +0000</pubDate>
      <link>https://dev.to/intersystems/adding-tls-and-dns-to-iris-based-services-deployed-on-google-kubernetes-engine-1c0j</link>
      <guid>https://dev.to/intersystems/adding-tls-and-dns-to-iris-based-services-deployed-on-google-kubernetes-engine-1c0j</guid>
<description>&lt;p&gt;This article is a continuation of &lt;a href="https://dev.to/intersystems/deploying-intersystems-iris-solution-on-gke-using-github-actions-576h"&gt;Deploying InterSystems IRIS solution on GKE Using GitHub Actions&lt;/a&gt;, in which, with the help of a GitHub Actions pipeline, our zpm-registry was deployed in a Google Kubernetes cluster created by Terraform. To avoid repetition, we’ll take as a starting point that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’ve forked the repository &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry"&gt;Github Actions + GKE + zpm example&lt;/a&gt; and enabled Actions in your fork. Its root directory will be referenced as &amp;lt;root_repo_dir&amp;gt; throughout the article.&lt;/li&gt;
&lt;li&gt;You’ve replaced the placeholders in &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/blob/master/terraform/main.tf"&gt;the Terraform file&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You’ve created all the secrets (in the GitHub Actions Secrets page) mentioned in the only table in &lt;a href="https://dev.to/intersystems/deploying-intersystems-iris-solution-on-gke-using-github-actions-576h"&gt;Deploying InterSystems IRIS solution on GKE Using GitHub Actions&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;All code samples will be stored in the &lt;a href="https://github.com/intersystems-community/github-gke-tls"&gt;GitHub-GKE-TLS repository&lt;/a&gt; to simplify copying and pasting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assuming all the above, let’s continue. &lt;/p&gt;

&lt;h4&gt;
  
  
  Getting Started
&lt;/h4&gt;

&lt;p&gt;Last time we made our connection to zpm-registry like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The absence of a protocol before the IP address means that we’re using HTTP and, therefore, traffic is unencrypted and &lt;a href="https://en.wikipedia.org/wiki/Alice_and_Bob#Cast_of_characters"&gt;Infamous Eve&lt;/a&gt; can catch our password.&lt;/p&gt;
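Note that the -u flag only base64-encodes the credentials in the Authorization header; base64 is an encoding, not encryption, so anyone capturing the traffic can trivially reverse it:

```shell
# Basic auth is just base64 of "user:password" -- trivially reversible
printf '%s' '_system:SYS' | base64            # X3N5c3RlbTpTWVM=
printf '%s' 'X3N5c3RlbTpTWVM=' | base64 -d    # _system:SYS
```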

&lt;p&gt;To protect against Eve eavesdropping, we need to encrypt the traffic, which means using &lt;a href="https://robertheaton.com/2014/03/27/how-does-https-actually-work/"&gt;HTTPS&lt;/a&gt;. Can we do this?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;104.199.6.32:52773/registry/packages → https://104.199.6.32:52773/registry/packages
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Generally speaking, yes. You can obtain &lt;a href="https://stackoverflow.com/questions/2043617/is-it-possible-to-have-ssl-certificate-for-ip-address-not-domain-name"&gt;the certificate for an IP address&lt;/a&gt;. But this solution has drawbacks. Read &lt;a href="https://www.geocerts.com/support/ip-address-in-ssl-certificate"&gt;Using an IP Address in an SSL Certificate&lt;/a&gt; and &lt;a href="https://www.digitalocean.com/community/questions/ssl-for-ip-address"&gt;SSL for IP Address&lt;/a&gt; for more details. In short, you need a static IP address and there are no free certificate providers with that ability. The benefit is also questionable.&lt;/p&gt;

&lt;p&gt;As for free providers in general, fortunately, they do exist, as you’ll find in &lt;a href="https://blog.hubspot.com/website/best-free-ssl-certificate-sources"&gt;9 Best Free SSL Certificate Sources&lt;/a&gt;. One of the leaders is &lt;a href="https://letsencrypt.org/"&gt;Let’s Encrypt&lt;/a&gt;, but it issues certificates only for domain names, although there are plans to add &lt;a href="https://letsencrypt.org/upcoming-features/#ip-addresses-in-certificates"&gt;IP Addresses in Certificates&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So, to encrypt the traffic, you first need to get a domain name. If you already have a domain name, you can skip the next section.&lt;/p&gt;

&lt;h4&gt;
  
  
  Getting a Domain Name
&lt;/h4&gt;

&lt;p&gt;Buy a domain name from a domain registrar. There are &lt;a href="https://www.icann.org/registrar-reports/accreditation-qualified-list.html"&gt;quite a few&lt;/a&gt;. Prices depend on the domain name and level of service. For development purposes, you can take, for example, a domain with the suffix .dev, which will probably be inexpensive. &lt;/p&gt;

&lt;p&gt;We won’t get into the domain registration process here; there’s nothing complicated about it. Note that from here on we’ll assume the registered domain name is example.com. Substitute your own domain.&lt;/p&gt;

&lt;p&gt;At the end of the domain registration process, you’ll have a domain name and domain zone. They are not the same thing. See the note &lt;a href="https://simpledns.plus/help/definition-domains-vs-zones"&gt;Definition - Domains vs. Zones&lt;/a&gt; and this small video, &lt;a href="https://www.youtube.com/watch?v=dk7GS6GbeiY"&gt;DNS Zones&lt;/a&gt;, to understand the difference.&lt;/p&gt;

&lt;p&gt;Briefly, if you imagine the domain as an organization, the zone can be thought of as a department in that organization. Our domain is example.com and we’ll call the zone the same: example.com. (In a small organization you can have just one department.) Each DNS zone has special servers that know about the IP addresses and domain names inside the zone. These mappings are called resource records (RRs). They come in different types (see &lt;a href="https://simpledns.plus/help/dns-record-types"&gt;DNS Record types&lt;/a&gt;). The most widely used is the &lt;a href="https://simpledns.plus/help/a-records"&gt;A-type&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The special servers, called name servers, are provided by the domain registrar. For example, my registrar gave me the following two name servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ns39.domaincontrol.com
ns40.domaincontrol.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You’ll need to create a resource record (server domain name = IP address), for instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zpm.example.com = 104.199.6.32
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You do this by creating an A-type record in the zone &lt;em&gt;example.com&lt;/em&gt;. After you save this record, other DNS servers all around the world will eventually know about this update and you’ll be able to refer to your zpm-registry by name: zpm.example.com:52773/registry/.&lt;/p&gt;

&lt;h4&gt;
  
  
  Domains and Kubernetes
&lt;/h4&gt;

&lt;p&gt;As you might remember, the zpm service IP address was created during our Kubernetes Service (Load Balancer type) deployment. If you’ve played with zpm-registry, removed it, and then decided to deploy zpm-registry again, you’re likely to get another IP address. If that’s the case, you should again go to the DNS registrar web console and set a new IP address for zpm.example.com.&lt;br&gt;
There’s another way to do this. During your Kubernetes deployment, you can deploy a helper tool, called &lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;External DNS&lt;/a&gt;, that can retrieve a newly created IP address from Kubernetes Services or from &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Ingress&lt;/a&gt; and create the corresponding DNS record. The tool doesn’t support all registrars, but it does support &lt;a href="https://cloud.google.com/dns"&gt;Google Cloud DNS&lt;/a&gt;, which can be responsible for DNS zones, providing name servers, and storing your resource records.&lt;/p&gt;

&lt;p&gt;To use Google Cloud DNS, we need to transfer responsibility for our DNS zone example.com to it in the registrar web console. To do this, replace the name servers provided by the domain registrar with the ones Google Cloud DNS provides. We will need to create the example.com zone in the Google console and copy/paste the name servers provided by Google. See the details below.&lt;/p&gt;

&lt;p&gt;Let’s see how to add External DNS and create our Google Cloud DNS in the code. However, to save space, I’ll include only parts of the code here. As I noted earlier, you’ll find the complete samples in &lt;a href="https://github.com/intersystems-community/github-gke-tls"&gt;GitHub GKE TLS repository&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Adding External DNS
&lt;/h4&gt;

&lt;p&gt;To deploy this application to the GKE, we’ll use the power of &lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt;. You’ll find both the article &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-helm-the-package-manager-for-kubernetes"&gt;An Introduction to Helm, the Package Manager for Kubernetes&lt;/a&gt; and &lt;a href="https://helm.sh/docs/"&gt;the official documentation&lt;/a&gt; helpful.&lt;/p&gt;

&lt;p&gt;Add these lines as the last job in the "kubernetes-deploy" stage in the pipeline file. Also add several new variables in the "env" section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
$ cat &amp;lt;root_repo_dir&amp;gt;/.github/workflows/workflow.yaml
...
env:
...
  DNS_ZONE: example.com
  HELM_VERSION: 3.1.1
  EXTERNAL_DNS_CHART_VERSION: 2.20.6
...
jobs:
...
  kubernetes-deploy:
...
  steps:
...
    - name: Install External DNS
      run: |
        wget -q https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz
        tar -zxvf helm-v${HELM_VERSION}-linux-amd64.tar.gz
        cd linux-amd64
        ./helm version
        ./helm repo add bitnami https://charts.bitnami.com/bitnami
        gcloud container clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE} --project ${PROJECT_ID}
        echo "${GOOGLE_CREDENTIALS}" &amp;gt; ./credentials.json
        kubectl create secret generic external-dns --from-file=./credentials.json --dry-run -o yaml | kubectl apply -f -
        ./helm upgrade external-dns bitnami/external-dns \
          --install \
          --atomic \
          --version=${EXTERNAL_DNS_CHART_VERSION} \
          --set provider=google \
          --set google.project=${PROJECT_ID} \
          --set google.serviceAccountSecret=external-dns \
          --set registry=txt \
          --set txtOwnerId=k8s \
          --set policy=sync \
          --set domainFilters={${DNS_ZONE}} \
          --set rbac.create=true
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can read more about parameters set by --set in the External DNS chart &lt;a href="https://github.com/bitnami/charts/tree/master/bitnami/external-dns"&gt;documentation&lt;/a&gt;. For now, just add these lines and let’s proceed.&lt;/p&gt;
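&lt;p&gt;If you’d rather keep the chart configuration in a file than in a long list of --set flags, the same settings can be expressed as a Helm values file passed with -f. This is just an illustrative sketch (the file name is hypothetical; replace the placeholders with your values):&lt;/p&gt;

```yaml
# external-dns-values.yaml (hypothetical file): the same settings as the
# --set flags in the workflow step above.
provider: google
google:
  project: YOUR_PROJECT_ID          # ${PROJECT_ID} in the workflow
  serviceAccountSecret: external-dns
registry: txt
txtOwnerId: k8s
policy: sync
domainFilters:
  - example.com                     # ${DNS_ZONE} in the workflow
rbac:
  create: true
```

&lt;p&gt;You would then run ./helm upgrade external-dns bitnami/external-dns --install --atomic --version=${EXTERNAL_DNS_CHART_VERSION} -f external-dns-values.yaml instead of the --set list.&lt;/p&gt;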

&lt;h4&gt;
  
  
  Creating Cloud DNS
&lt;/h4&gt;

&lt;p&gt;First, add a new file under the &amp;lt;root_repo_dir&amp;gt;/terraform/ directory. Remember to use your domain zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/terraform/clouddns.tf
resource "google_dns_managed_zone" "my-zone" {
  name = "zpm-zone"
  dns_name = "example.com."
  description = "My DNS zone"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Also be sure that the user on whose behalf Terraform works has at least the following roles (note the DNS Administrator and Kubernetes Engine Admin roles):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--30Qny5aP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jpmy6km0vf167muw72h3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--30Qny5aP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jpmy6km0vf167muw72h3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you run the GitHub Actions pipeline now, these resources will be created. &lt;br&gt;
With External DNS and Cloud DNS ready (on paper, at least), we can now consider how to expose our zpm-service in a different way than through a LoadBalancer-type service.&lt;/p&gt;
&lt;h4&gt;
  
  
  Kubernetes
&lt;/h4&gt;

&lt;p&gt;Our zpm-service is now exposed using a regular Kubernetes service of type LoadBalancer. See the file &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/blob/master/k8s/service.yaml"&gt;&amp;lt;root_repo_dir&amp;gt;/k8s/service.yaml&lt;/a&gt;. You can read more about Kubernetes Services in &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps"&gt;Exposing applications using services&lt;/a&gt;. What’s important for us is that LoadBalancer services don’t provide a way to set a domain name, and the actual Google load balancers created by those Kubernetes Service resources work at the &lt;a href="https://cloud.google.com/load-balancing/docs/network"&gt;TCP/UDP level&lt;/a&gt; of the &lt;a href="https://en.wikipedia.org/wiki/OSI_model"&gt;OSI model&lt;/a&gt;. This level knows nothing about HTTP and certificates. So we need to replace the Google network load balancer with an &lt;a href="https://cloud.google.com/load-balancing/docs/https"&gt;HTTP load balancer&lt;/a&gt;. We can create such a load balancer using a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/"&gt;Kubernetes Ingress resource&lt;/a&gt; instead of the Kubernetes Service.&lt;/p&gt;

&lt;p&gt;At this point, it would probably be a good idea for you to read &lt;a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0"&gt;Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let’s continue. What does Ingress usage mean in the code? Go to the &amp;lt;root_repo_dir&amp;gt;/k8s/ directory and make the following changes. The Service should be of type NodePort:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat /k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zpm-registry
  namespace: iris
spec:
  selector:
    app: zpm-registry
  ports:
  - protocol: TCP
    port: 52773
    targetPort: 52773
  &lt;b&gt;type: NodePort&lt;/b&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And you need to add the Ingress manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/k8s/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: gce
    networking.gke.io/managed-certificates: zpm-registry-certificate
    external-dns.alpha.kubernetes.io/hostname: zpm.example.com
  name: zpm-registry
  namespace: iris
spec:
  rules:
  - host: zpm.example.com
    http:
      paths:
      - backend:
          serviceName: zpm-registry
          servicePort: 52773
        path: /*
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Also, add the line marked in bold into the workflow file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat /.github/workflows/workflow.yaml
...
- name: Apply Kubernetes manifests
  working-directory: ./k8s/
...
    kubectl apply -f service.yaml
    &lt;b&gt;kubectl apply -f ingress.yaml&lt;/b&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you deploy this manifest, the Google Cloud controller will create an external HTTP load balancer, which will listen for traffic to the host zpm.example.com and send this traffic to all Kubernetes nodes on the port that’s opened during Kubernetes NodePort Service deployment. This port is arbitrary, but you don’t need to worry about it — it’s completely automated.&lt;br&gt;
We can define a more detailed configuration of Ingress using annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;annotations:
  kubernetes.io/ingress.class: gce
  networking.gke.io/managed-certificates: zpm-registry-certificate
  external-dns.alpha.kubernetes.io/hostname: zpm.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The first line says we’re using the "gce" Ingress controller. This entity creates an HTTP load balancer according to settings in the Ingress resource.&lt;/p&gt;

&lt;p&gt;The second line is the one that relates to certificates. We’ll return to this setting later.&lt;/p&gt;

&lt;p&gt;The third line sets the external DNS that should bind the specified hostname (zpm.example.com) to the IP address of the HTTP load balancer.&lt;/p&gt;
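&lt;p&gt;Under the hood, with registry=txt and txtOwnerId=k8s set earlier, ExternalDNS maintains a pair of records in the Cloud DNS zone: an A record pointing the hostname at the load balancer’s IP, plus a TXT "ownership" record so it only ever touches entries it created itself. Roughly like this (illustrative values, not something you create by hand):&lt;/p&gt;

```yaml
# Sketch of the records ExternalDNS manages in the zone (illustrative):
- name: zpm.example.com.
  type: A
  rrdatas: ["34.102.202.2"]        # the HTTP load balancer IP
- name: zpm.example.com.
  type: TXT
  rrdatas: ["\"heritage=external-dns,external-dns/owner=k8s\""]
```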

&lt;p&gt;If you did deploy this manifest, you’d see (after 10 minutes or so) that an Ingress had been created, but that not everything else was working:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nFGyOjCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a0is1iroi3n0ejb9aws3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nFGyOjCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a0is1iroi3n0ejb9aws3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aY4x42DK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h07265on4v6r3y74m936.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aY4x42DK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h07265on4v6r3y74m936.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What are those "Backend services," one of which seems to be bad?&lt;/p&gt;

&lt;p&gt;See their names, like "k8s-be-31407-..." and "k8s-be-31445-..."? These are ports opened on our single Kubernetes node. Your ports are likely to have different numbers.&lt;/p&gt;
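&lt;p&gt;Whatever the exact numbers, NodePorts are allocated from Kubernetes’ default range of 30000-32767, so both should fall in that window. A throwaway shell check (purely illustrative, not part of the repository):&lt;/p&gt;

```shell
# in_nodeport_range: sanity helper (illustrative). Checks that a port
# falls inside Kubernetes' default NodePort range, 30000-32767.
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 31407 && echo "31407 is a NodePort"
in_nodeport_range 31445 && echo "31445 is a NodePort"
```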

&lt;p&gt;31407 is a port that’s opened for &lt;a href="https://cloud.google.com/load-balancing/docs/health-check-concepts"&gt;health checks&lt;/a&gt;, which the HTTP load balancer sends constantly to be sure the node is alive.&lt;/p&gt;

&lt;p&gt;You can take a look at those health check results by connecting to Kubernetes and proxying Node Port 31407:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;LOCATION&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;
$ kubectl proxy &amp;amp;
$ curl localhost:8001/healthz
ok
$ fg
^Ctrl+C
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Another port, 31445, is a NodePort opened for our zpm-registry service:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ kubectl -n iris get svc
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                 AGE
zpm-registry   NodePort   10.23.255.89   &amp;lt;none&amp;gt;        52773:&lt;b&gt;31445&lt;/b&gt;/TCP   24m
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And the default health checks sent by the HTTP load balancer report that this service is down. Is it really?&lt;/p&gt;

&lt;p&gt;What are those Health checks in more detail? Click on the Backend service name. Scroll down a little to see the Health check name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dGcol87w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gzzudp06qbudjdbx4235.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dGcol87w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gzzudp06qbudjdbx4235.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the Health check name to see details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yZ-P8ZXy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fwdvl9f5aa1cl8hq7qbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yZ-P8ZXy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fwdvl9f5aa1cl8hq7qbl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We should replace the Health check "/" path with a path understandable by IRIS, like "/csp/sys/UtilHome.csp":&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ curl -I localhost:52773/csp/sys/UtilHome.csp
Handling connection for 52773
HTTP/1.1 &lt;b&gt;200 OK&lt;/b&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So, how can we set that new path in the Health check? The answer is &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"&gt;readiness/liveness probes&lt;/a&gt;: if present, they’re used in place of the default "/" path. &lt;/p&gt;

&lt;p&gt;So, let’s add those probes to the Kubernetes StatefulSet manifest:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat /k8s/statefulset.tpl
...
containers:
&lt;b&gt;- image: DOCKER_REPO_NAME:DOCKER_IMAGE_TAG&lt;/b&gt;
... 
  ports:
  - containerPort: 52773
    name: web
  &lt;b&gt;readinessProbe:
    httpGet:
      path: /csp/sys/UtilHome.csp
      port: 52773
    initialDelaySeconds: 10
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /csp/sys/UtilHome.csp
      port: 52773
    periodSeconds: 10&lt;/b&gt;
  volumeMounts:
  - mountPath: /opt/zpm/REGISTRY-DATA
    name: zpm-registry-volume
  - mountPath: /mount-helper
    name: mount-helper
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So far, we’ve made several changes just to open a door for the main event: the certificate. Let’s get it and run all this finally.&lt;/p&gt;

&lt;h4&gt;
  
  
  Getting the Certificate
&lt;/h4&gt;

&lt;p&gt;I highly recommend you watch these videos that describe different ways to obtain SSL/TLS certificates for Kubernetes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=7K0gAYmWWho"&gt;Create a Kubernetes TLS Ingress from scratch in Minikube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=JJTJfl-V_UM"&gt;Automatically Provision TLS Certificates in K8s with cert-manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=etC5d0vpLZE"&gt;Use cert-manager with Let's Encrypt® Certificates Tutorial: Automatic Browser-Trusted HTTPS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=ceeKC4izk4E"&gt;Super easy new way to add HTTPS to Kubernetes apps with ManagedCertificates on GKE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, there are several ways to get a certificate: openssl self-signed, manual &lt;a href="https://certbot.eff.org/"&gt;certbot by Let’s Encrypt&lt;/a&gt;, automated &lt;a href="https://github.com/jetstack/cert-manager"&gt;cert-manager&lt;/a&gt; connected to Let’s Encrypt and, finally, the native Google approach, &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs"&gt;Managed Certificates&lt;/a&gt;, which is our choice for its simplicity. &lt;/p&gt;

&lt;p&gt;Let’s add it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo-dir&amp;gt;/k8s/managed-certificate.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: zpm-registry-certificate
  namespace: iris
spec:
  domains:
  - zpm.example.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Add the bolded line to the deployment pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat /.github/workflows/workflow.yaml
...
- name: Apply Kubernetes manifests
  working-directory: ./k8s/
  run: |
    gcloud container clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE} --project ${PROJECT_ID}
    kubectl apply -f namespace.yaml
    &lt;b&gt;kubectl apply -f managed-certificate.yaml&lt;/b&gt;
    kubectl apply -f service.yaml
...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enabling it in the Ingress was done earlier via an annotation, remember?&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat /k8s/ingress.yaml
...
annotations:
  kubernetes.io/ingress.class: gce
  &lt;b&gt;networking.gke.io/managed-certificates: zpm-registry-certificate&lt;/b&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So now we’re ready to push all those changes to the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .github/ terraform/ k8s/
$ git commit -m "Add TLS to GKE deploy"
$ git push
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After 15 minutes or so (cluster provisioning should be done), you should see a "green" Ingress:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M0e6oZJd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a9da8lezjnvt3d0zhk2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M0e6oZJd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a9da8lezjnvt3d0zhk2s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you should set at least two of the name servers provided by Google in your domain name registrar’s console (if you use a registrar other than Google Domains):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KcU_ILm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1jula0rl37b6yszcce50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KcU_ILm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1jula0rl37b6yszcce50.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We won’t describe this process as it’s specific to each registrar and usually well-described in the registrar’s documentation.&lt;/p&gt;

&lt;p&gt;Google already knows about the new resource record created automatically by ExternalDNS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dig +short @ns-cloud-b1.googledomains.com. zpm.example.com
34.102.202.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But it will take some time for this record to propagate around the world. Eventually, however, you’ll receive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ dig +short zpm.example.com
34.102.202.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It’s a good idea to check the certificate status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;LOCATION&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;pre&gt;&lt;code&gt;
$ kubectl -n iris get managedcertificate zpm-registry-certificate -ojson | jq '.status'
{
  "certificateName": "mcrt-158f20bb-cdd3-451d-8cb1-4a172244c14f",
  "certificateStatus": "&lt;b&gt;Provisioning&lt;/b&gt;",
  "domainStatus": [
    {
      "domain": "zpm.example.com",
      "status": "&lt;b&gt;Provisioning&lt;/b&gt;"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can read about the various status meanings on &lt;a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting#certificate-managed-status"&gt;the Google-managed SSL certificate status&lt;/a&gt; page. Note that you might have to wait an hour or so for the initial certificate provisioning.&lt;/p&gt;
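&lt;p&gt;Rather than re-running the status command by hand, you can poll it with a small shell loop. Here’s a generic helper (an illustrative sketch, not part of the repository), with the managedcertificate check from above wrapped into it:&lt;/p&gt;

```shell
# wait_for: generic polling helper (illustrative). Runs a command every
# "interval" seconds until it succeeds, giving up after "attempts" tries.
wait_for() {
  attempts="$1"; interval="$2"; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Example usage (kubectl/jq invocation as shown above; polls for up to an hour):
# wait_for 60 60 sh -c '[ "$(kubectl -n iris get managedcertificate zpm-registry-certificate -ojson | jq -r .status.certificateStatus)" = "Active" ]'
```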

&lt;p&gt;You might encounter the domainStatus "FailedNotVisible." If so, check that you’ve really added Google name servers in your DNS registrar console.&lt;/p&gt;

&lt;p&gt;You should wait until both certificateStatus and domainStatus become Active, and you may even have to wait a little longer, as stated in &lt;a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting#certificate-managed-status"&gt;Google-managed SSL certificate status&lt;/a&gt;. Finally, however, you should be able to call zpm-registry with HTTPS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XGET -u _system:SYS https://zpm.example.com/registry/packages/-/all
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Google does a great job creating all required Google resources based on Kubernetes resources. &lt;/p&gt;

&lt;p&gt;In addition, Google Managed Certificates is a cool feature that greatly simplifies obtaining a certificate.&lt;/p&gt;

&lt;p&gt;As always, don’t forget to remove Google resources (GKE, CloudDNS) when you no longer need them as they cost money.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>intersystems</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Deploying InterSystems IRIS Solution into GCP Kubernetes Cluster GKE Using CircleCI</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Fri, 11 Sep 2020 08:07:40 +0000</pubDate>
      <link>https://dev.to/intersystems/deploying-intersystems-iris-solution-into-gcp-kubernetes-cluster-gke-using-circleci-1eoj</link>
      <guid>https://dev.to/intersystems/deploying-intersystems-iris-solution-into-gcp-kubernetes-cluster-gke-using-circleci-1eoj</guid>
      <description>&lt;p&gt;Most of us are more or less familiar with Docker. Those who use it like it for the way it lets us easily deploy almost any application, play with it, break something and then restore the application with a simple restart of the Docker container.&lt;br&gt;
InterSystems also likes Docker. The InterSystems &lt;a href="https://openexchange.intersystems.com/?sort=s.desc"&gt;OpenExchange&lt;/a&gt; project contains a number of examples that run InterSystems IRIS images in Docker containers that are &lt;a href="https://hub.docker.com/_/intersystems-iris-data-platform/plans/222f869e-567c-4928-b572-eb6a29706fbd?tab=instructions"&gt;easy to download&lt;/a&gt; and run. You’ll also find other useful components, such as the &lt;a href="https://openexchange.intersystems.com/package/VSCode-ObjectScript"&gt;Visual Studio Code ObjectScript plugin&lt;/a&gt;.&lt;br&gt;
It’s easy enough to run IRIS in Docker with additional code for specific use cases, but if you want to share your solutions with others, you’ll need some way to run commands and repeat them after each code update. In this article, we’ll see how to use &lt;a href="https://martinfowler.com/articles/continuousIntegration.html"&gt;Continuous Integration&lt;/a&gt;/&lt;a href="https://martinfowler.com/bliki/ContinuousDelivery.html"&gt;Continuous Delivery&lt;/a&gt; (CI/CD) practices to simplify that process.&lt;/p&gt;
&lt;h4&gt;
  
  
  Setting Up
&lt;/h4&gt;

&lt;p&gt;We’ll start with &lt;a href="https://openexchange.intersystems.com/package/objectscript-rest-docker-template"&gt;a simple REST API application&lt;/a&gt; based on IRIS. The details of the application can be found in the video &lt;a href="https://www.youtube.com/watch?v=5_R7dLKLbS8"&gt;Creating REST API with InterSystems IRIS, ObjectScript and Docker&lt;/a&gt;. Let’s see how we could share similar applications with others using CI/CD.&lt;/p&gt;

&lt;p&gt;Initially, we’ll clone the code into a personal GitHub repository. If you don’t have an account on GitHub, &lt;a href="https://github.com/join"&gt;sign up&lt;/a&gt; for one. For convenience, add &lt;a href="https://docs.github.com/en/enterprise/2.22/user/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account"&gt;access via SSH&lt;/a&gt; so you don’t need to enter a password with each pull or push. Then go to the &lt;a href="https://github.com/intersystems-community/objectscript-rest-docker-template"&gt;intersystems-community/objectscript-rest-docker-template&lt;/a&gt; project page on GitHub and click the "Use this Template" button to create your own version of the repo based on the template. Give it a name like "my-objectscript-rest-docker-template".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L8fyyWSe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5zgbp22mep4rcl5koz2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L8fyyWSe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5zgbp22mep4rcl5koz2s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now pull the project to your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone git@github.com:&amp;lt;your_account&amp;gt;/my-objectscript-rest-docker-template.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, we’ll add a REST endpoint in the spirit of "hello, world!".&lt;/p&gt;

&lt;p&gt;Endpoints are defined in the &lt;em&gt;src/cls/Sample/PersonREST.cls&lt;/em&gt; class. Our endpoint will look like this (defined before the first &amp;lt;Route&amp;gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Route Url="/helloworld" Method="GET" Call="HelloWorld"/&amp;gt;
&amp;lt;Route Url="/all" Method="GET" Call="GetAllPersons"/&amp;gt;
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It calls the HelloWorld method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ClassMethod HelloWorld() As %Status
{
    Write "Hello, world!"
    Quit $$$OK
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we need to consider how this works when pushing to a remote repository. We need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build a Docker image.&lt;/li&gt;
&lt;li&gt;Save the Docker image.&lt;/li&gt;
&lt;li&gt;Run the container based on this image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll use the &lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt; service, which is already integrated with GitHub, to build the Docker image. And we’ll use Google Cloud, which allows you to store Docker images and run containers based on them in Kubernetes. Let’s delve into this a little.&lt;/p&gt;

&lt;h4&gt;
  
  
  Google Cloud Prerequisites
&lt;/h4&gt;

&lt;p&gt;Let’s assume you’ve registered for an account with &lt;a href="https://console.cloud.google.com/"&gt;Google Cloud&lt;/a&gt;, which provides a free tier of services. Create a project with the name "Development", then create a Kubernetes cluster by clicking the "Create cluster" button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9FDcwCYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/brp8a0yvu7ntxmplvp2l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9FDcwCYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/brp8a0yvu7ntxmplvp2l.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the demo, select "Your first cluster" on the left. Choose a newer version of Kubernetes and a machine type of n1-standard-1. For our purposes, one machine should be enough.&lt;/p&gt;

&lt;p&gt;Click the Create button, then set up a connection to the cluster. We’ll use the &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt; and &lt;a href="https://cloud.google.com/sdk/gcloud/"&gt;gcloud&lt;/a&gt; utilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud init
[2] Create a new configuration
Configuration name. "development"
[2] Log in with a new account
Pick cloud project to use
configure a default Compute Region and Zone? (Y/n)? y
Here europe-west1-b was chosen

$ gcloud container clusters get-credentials dev-cluster --zone europe-west1-b --project &amp;lt;project_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can get the last command by clicking the "Connect" button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GyKsyp35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ocedypehavla04ne4l88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GyKsyp35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ocedypehavla04ne4l88.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the status from kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl config current-context
gke_possible-symbol-254507_europe-west1-b_dev-cluster

$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
gke-dev-cluster-pool-2-8096d93c-fw5w   Ready    &amp;lt;none&amp;gt;   17m   v1.14.7-gke.10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now create a directory called k8s/ under the root project directory to hold the three files that describe the future application in Kubernetes: &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/"&gt;Namespace&lt;/a&gt;, which describes the workspace, &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"&gt;Deployment&lt;/a&gt;, and &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/"&gt;Service&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: iris

$ cat deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: iris-rest
  namespace: iris
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: iris
  template:
    metadata:
      labels:
        app: iris
    spec:
      containers:
      - image: eu.gcr.io/iris-rest:v1
        name: iris-rest
        ports:
        - containerPort: 52773
          name: web

$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: iris-rest
  namespace: iris
spec:
  selector:
    app: iris
  ports:
  - protocol: TCP
    port: 52773
    targetPort: 52773
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Send those definitions from your k8s/ directory to the Google Kubernetes Engine (GKE):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f namespace.yaml
$ kubectl apply -f deployment.yaml -f service.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Things won’t work correctly yet, since we haven’t sent the &lt;em&gt;eu.gcr.io/iris-rest:v1&lt;/em&gt; image to the Docker registry, so we see an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get po
NAME                         READY   STATUS         RESTARTS   AGE
iris-rest-64cdb48f78-5g9hb   0/1     ErrImagePull   0          50s


$ kubectl -n iris get svc
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)           AGE
iris-rest   LoadBalancer   10.0.13.219   &amp;lt;pending&amp;gt;       52773:31425/TCP   20s

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When Kubernetes sees a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer"&gt;LoadBalancer&lt;/a&gt; service, it tries to create a balancer in the Google Cloud environment. If it succeeds, the service will get a real IP address instead of External IP = &amp;lt;pending&amp;gt;.&lt;/p&gt;
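&lt;p&gt;You can script the wait for a real address. Here’s a small helper (an illustrative sketch, not part of the repository) that checks whether the EXTERNAL-IP column of a service line still shows the pending marker:&lt;/p&gt;

```shell
# has_external_ip: succeeds once the EXTERNAL-IP column (4th field) of a
# "kubectl get svc" output line contains a real address rather than the
# pending marker. Illustrative helper, not part of the repository.
has_external_ip() {
  ip=$(echo "$1" | awk '{print $4}')
  case "$ip" in
    ""|*pending*) return 1 ;;
    *) return 0 ;;
  esac
}

# Typical usage (the kubectl call matches the service defined above):
# until has_external_ip "$(kubectl -n iris get svc iris-rest --no-headers)"; do sleep 5; done
```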

&lt;p&gt;Before leaving Kubernetes for a bit, let's give CircleCI the ability to push Docker images into the registry and restart Kubernetes deployments by creating &lt;a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts"&gt;a service account&lt;/a&gt;. Give your service account EDITOR permission to the project. You’ll find information &lt;a href="https://circleci.com/docs/2.0/google-auth/"&gt;here&lt;/a&gt; on creating and storing a service account.  &lt;/p&gt;

&lt;p&gt;A bit later, when we create and set up the project in CircleCI, you’ll need to add the following three environment variables:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5-pTukc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dpv69d64v18mbe0r55pw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5-pTukc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dpv69d64v18mbe0r55pw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The names of these variables speak for themselves. The value of GCLOUD_SERVICE_KEY is the JSON structure Google sends you when you press "Create key" and select a key in the JSON format after creating the Service Account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uk223d11--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y26mqi3x1bu6a9ilvgae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uk223d11--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y26mqi3x1bu6a9ilvgae.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  CircleCI
&lt;/h4&gt;

&lt;p&gt;Let's turn our attention to &lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt; now, where we’ll register using our GitHub account (click Sign Up, then Sign Up with GitHub). After registration, you’ll see the dashboard with projects from your GitHub repository listed on the Add Project tab. Click the Set Up Project button for "my-objectscript-rest-docker-template" or whatever you named the repository created from the objectscript-rest-docker-template repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rOfxYAa7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mzyw6uinazf6jfw1gzvd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rOfxYAa7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mzyw6uinazf6jfw1gzvd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: all CircleCI screenshots were taken in October 2019; the UI may have changed since then.&lt;/p&gt;

&lt;p&gt;The page that opens tells you how to make your project work with CircleCI. The first step is to create a folder called .circleci and add a file named config.yml to it. The structure of this configuration file is well described in &lt;a href="https://circleci.com/docs/2.0/configuration-reference/"&gt;the official documentation&lt;/a&gt;. Here are the basic steps the file will contain:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull the repository&lt;/li&gt;
&lt;li&gt;Build the Docker image&lt;/li&gt;
&lt;li&gt;Authenticate with Google Cloud&lt;/li&gt;
&lt;li&gt;Upload image to Google Docker Registry&lt;/li&gt;
&lt;li&gt;Run the container based on this image in GKE&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With any luck, we’ll find some already-created configurations (called &lt;a href="https://circleci.com/docs/2.0/using-orbs/"&gt;orbs&lt;/a&gt;) we can use. There are certified orbs and third-party ones. The certified &lt;a href="https://circleci.com/orbs/registry/orb/circleci/gcp-gke"&gt;GCP-GKE orb&lt;/a&gt; had a number of &lt;a href="https://github.com/CircleCI-Public/gcp-gke-orb/issues/1"&gt;limitations&lt;/a&gt; at the time of initial writing, so let's take a third-party orb, &lt;a href="https://circleci.com/orbs/registry/orb/duksis/gcp-gke"&gt;duksis/gcp-gke&lt;/a&gt;, that meets our needs. Using it, the configuration file becomes the following (replace names, such as the cluster name, with the correct ones for your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat .circleci/config.yml
version: 2.1
orbs:
  gcp-gke: duksis/gcp-gke@0.1.9
workflows:
  main:
    jobs:
    - gcp-gke/publish-and-rollout-image:
         google-project-id: GOOGLE_PROJECT_ID
         gcloud-service-key: GCLOUD_SERVICE_KEY
         registry-url: eu.gcr.io
         image: iris-rest
         tag: ${CIRCLE_SHA1}
         cluster: dev-cluster
         namespace: iris
         deployment: iris-rest
         container: iris-rest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The initial configuration of the publish-and-rollout-image task can be viewed on &lt;a href="https://circleci.com/orbs/registry/orb/duksis/gcp-gke"&gt;the project page&lt;/a&gt;. &lt;/p&gt;
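&lt;p&gt;A note on the tag parameter: CircleCI exposes the full 40-character commit SHA as CIRCLE_SHA1, so every image is traceable to the commit that built it. If you prefer the familiar short form, a substring is enough (the SHA below is made up so the snippet runs outside CI):&lt;/p&gt;

```shell
# CircleCI sets CIRCLE_SHA1 to the full commit SHA; faked here for illustration
CIRCLE_SHA1=0123456789abcdef0123456789abcdef01234567

# Keep the first 7 characters, the familiar short-SHA form
SHORT_TAG=$(printf '%s' "$CIRCLE_SHA1" | cut -c1-7)
echo "iris-rest:${SHORT_TAG}"   # prints: iris-rest:0123456
```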

&lt;p&gt;We don’t actually need the final three notification steps of this orb, which is just as well, since they won’t work without some additional variables anyway. Ideally, you could prepare your own orb once and reuse it many times, but we won’t get into that now.&lt;/p&gt;

&lt;p&gt;Note that the use of third-party orbs has to be specifically allowed on the "Organization settings" tab in CircleCI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BOxAdjs4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ia3sgz8w7ckoplyjptq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BOxAdjs4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ia3sgz8w7ckoplyjptq5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, it’s time to send all our changes to GitHub and CircleCI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .circleci/ k8s/ src/cls/Sample/PersonREST.cls
$ git commit -m "Deploy project to GKE using CircleCI"
$ git push 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let’s check the CircleCI dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dZ6XPTrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r8m0lr1oiyuuls2bgp2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dZ6XPTrz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r8m0lr1oiyuuls2bgp2f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you forgot to add Google Service Account keys, here’s what you’ll soon see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LrvkgZMt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zprny63hz166qkkv98ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LrvkgZMt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zprny63hz166qkkv98ti.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So don’t forget to add those environment variables as described at the end of the Google Cloud Prerequisites section. If you missed them, add them now, then click "Rerun workflow".&lt;/p&gt;

&lt;p&gt;If the build is successful you’ll see a green bar:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xs1mDCZm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knij2hd7vnf0dic7vyro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xs1mDCZm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/knij2hd7vnf0dic7vyro.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check the Kubernetes pod state from the command line, independently of the CircleCI Web UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get po -w
NAME                          READY   STATUS             RESTARTS   AGE
iris-rest-64chdb48f78-q5sbw   0/1     ImagePullBackOff   0          15m
…
iris-rest-5c9c86c768-vt7c9    1/1     Running            0          23s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That last line — 1/1 Running — is a good sign.&lt;/p&gt;

&lt;p&gt;Let’s test it. Remember, your IP address will differ from mine. Also, you’ll have to deal with the issue of sending passwords over plain HTTP yourself, as that’s out of scope for this article.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get svc
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)           AGE
iris-rest   LoadBalancer   10.0.4.242   23.251.143.124   52773:30948/TCP   18m


$ curl -XGET -u _system:SYS 23.251.143.124:52773/person/helloworld
Hello, world!

$ curl -XPOST -H "Content-Type: application/json" -u _system:SYS 23.251.143.124:52773/person/ -d '{"Name":"John Dou"}'

$ curl -XGET -u _system:SYS 23.251.143.124:52773/person/all
[{"Name":"John Dou"},]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It seems the application works. You can continue with the tests described on the &lt;a href="https://github.com/intersystems-community/objectscript-rest-docker-template"&gt;project page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In sum, the combination of GitHub, CircleCI, and Google Kubernetes Engine looks quite promising for testing and deployment of IRIS applications, even though it’s &lt;a href="https://circleci.com/pricing/"&gt;not completely free&lt;/a&gt;. Also, don’t forget that a running Kubernetes cluster can gradually eat your virtual (and then real) money. We are not responsible for any charges you may incur.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ksGm3z06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rw80lzaj91ca0fdqnu1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ksGm3z06--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rw80lzaj91ca0fdqnu1z.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>intersystems</category>
      <category>kubernetes</category>
      <category>circleci</category>
    </item>
    <item>
      <title>Deploying a Simple IRIS-Based Web Application Using Amazon EKS</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Tue, 04 Aug 2020 07:12:42 +0000</pubDate>
      <link>https://dev.to/intersystems/deploying-a-simple-iris-based-web-application-using-amazon-eks-5dnm</link>
      <guid>https://dev.to/intersystems/deploying-a-simple-iris-based-web-application-using-amazon-eks-5dnm</guid>
      <description>&lt;p&gt;We’re going to deploy a &lt;a href="https://openexchange.intersystems.com/package/objectscript-rest-docker-template"&gt;simple IRIS application&lt;/a&gt; to Amazon Web Services using its Elastic Kubernetes Service (&lt;a href="https://docs.aws.amazon.com/eks/index.html"&gt;EKS&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Please, fork the &lt;a href="https://github.com/intersystems-community/objectscript-rest-docker-template"&gt;IRIS project&lt;/a&gt; to your own private repository. It’s called &amp;lt;username&amp;gt;/my-objectscript-rest-docker-template in this article. &amp;lt;root_repo_dir&amp;gt; is its root directory.&lt;/p&gt;

&lt;p&gt;Before getting started, install the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html"&gt;AWS command-line tool&lt;/a&gt; and, for Kubernetes cluster creation, &lt;a href="https://eksctl.io/introduction/#installation"&gt;eksctl&lt;/a&gt;, a simple CLI utility. If you choose aws2, you’ll need to configure aws2 usage in the kube config file as described &lt;a href="https://github.com/weaveworks/eksctl/issues/1562"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS EKS
&lt;/h4&gt;

&lt;p&gt;Like AWS resources &lt;a href="https://aws.amazon.com/pricing/"&gt;in general&lt;/a&gt;, EKS is &lt;a href="https://aws.amazon.com/eks/pricing/"&gt;not free&lt;/a&gt;. But you can create a &lt;a href="https://aws.amazon.com/free/"&gt;free-tier account&lt;/a&gt; to play with AWS features. Keep in mind, though, that not everything you want to play with is included in the free tier. So, to manage your current budget and understand the financial issues, read &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-what-is.html"&gt;What is AWS Billing and Cost Management?&lt;/a&gt; and &lt;a href="https://aws.amazon.com/getting-started/tutorials/control-your-costs-free-tier-budgets/?trk=gs_card"&gt;Control your AWS costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll assume you already have an AWS account and root access to it, and that you don’t use this root access but have created a user with admin permissions. You’ll need to put the access key and secret key of this user into the AWS credentials file under the [dev] profile (or whatever you choose to name the profile):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat ~/.aws/credentials
[dev]
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = 1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ1234
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We’re going to create resources in the AWS "eu-west-1" region, but you should choose the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html"&gt;region&lt;/a&gt; closest to your location and replace “eu-west-1” with your region everywhere below.&lt;/p&gt;

&lt;p&gt;By the way, all needed files (.circleci/, eks/, k8s/) are also stored &lt;a href="https://github.com/intersystems-community/eks-circleci-objectscript-rest-docker-template"&gt;here&lt;/a&gt; to simplify copying and pasting.&lt;/p&gt;

&lt;p&gt;All required EKS resources will be created from scratch. You’ll find the &lt;a href="https://eksworkshop.com/"&gt;Amazon EKS Workshop&lt;/a&gt; to be a good resource to get an initial impression.&lt;/p&gt;

&lt;p&gt;Now let’s check our access to AWS (we’ve used a dummy account here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export AWS_PROFILE=dev

$ aws sts get-caller-identity
{
  "Account": "012345678910",
  "UserId": "ABCDEFGHIJKLMNOPQRSTU",
  "Arn": "arn:aws:iam::012345678910:user/FirstName.LastName"
}

$ eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.10.2"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We could run &lt;em&gt;"eksctl create cluster --region eu-west-1"&lt;/em&gt; now, relying on the default settings being good enough for us, or we can manage our own settings by creating a configuration file and using it.&lt;/p&gt;

&lt;p&gt;The latter is preferable because it allows you to store such a file in a version control system (VCS). Examples of configurations can be found &lt;a href="https://github.com/weaveworks/eksctl/tree/master/examples"&gt;here&lt;/a&gt;. After reading about the different settings &lt;a href="https://eksctl.io/usage/schema/"&gt;here&lt;/a&gt;, let’s try to create our own configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir &amp;lt;root_repo_dir&amp;gt;/eks; cd &amp;lt;root_repo_dir&amp;gt;/eks

$ cat cluster.yaml

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-cluster
  region: eu-west-1
  version: '1.14'

vpc:
  cidr: 10.42.0.0/16
  nat:
    gateway: Single
  clusterEndpoints:
    publicAccess: true
    privateAccess: false

nodeGroups:
  - name: ng-1
    amiFamily: AmazonLinux2
    ami: ami-059c6874350e63ca9  # AMI is specific for a region
    instanceType: t2.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 1

    # Worker nodes won't be accessible FROM the Internet,
    # but will have access TO the Internet through the NAT gateway
    privateNetworking: true

    # We don't need to SSH to nodes for demo
    ssh:
      allow: false

    # Labels are Kubernetes labels, shown when 'kubectl get nodes --show-labels'
    labels:
      role: eks-demo
    # Tags are AWS tags, shown on the 'Tags' tab in the AWS console
    tags:
      role: eks-demo

# CloudWatch logging is disabled by default to save money
# Mentioned here just to show a way to manage it
#cloudWatch: 
#  clusterLogging:
#    enableTypes: []
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Note that &lt;em&gt;"nodeGroups.desiredCapacity = 1"&lt;/em&gt; would make no sense in a production environment, but it’s fine for our demo.&lt;br&gt;
Also note that AMIs differ between regions. Look for "amazon-eks-node-1.14" and choose one of the latest:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--56DfroNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/su7o7u9cinrt7x99bkjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--56DfroNf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/su7o7u9cinrt7x99bkjm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s create the cluster (the control plane and worker nodes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eksctl create cluster -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By the way, when you no longer need a cluster, you can use the following to delete it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ eksctl delete cluster --name dev-cluster --region eu-west-1 --wait
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Creating a cluster takes about 15 minutes. During this time you can look at the eksctl output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--69AT7ti1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s7b1axlhxe17g8hrsi0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--69AT7ti1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s7b1axlhxe17g8hrsi0h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also view the &lt;a href="https://console.aws.amazon.com/cloudformation"&gt;CloudFormation console&lt;/a&gt;, which will have two stacks. You can drill down into each one and look at the Resources tab to see exactly what will be created, and at the Events tab to check the current state of the resource creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AT09iQcv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2dunqfw3imxjqygg398q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AT09iQcv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2dunqfw3imxjqygg398q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cluster was successfully created, although you can see in the eksctl output that we had difficulties connecting to it: "unable to use kubectl with the EKS cluster".&lt;br&gt;
Let's fix this by installing the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html"&gt;aws-iam-authenticator&lt;/a&gt; and using it to create a kube context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ which aws-iam-authenticator
/usr/local/bin/aws-iam-authenticator

$ aws eks update-kubeconfig --name dev-cluster --region eu-west-1

$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-42-140-98.eu-west-1.compute.internal   Ready    &amp;lt;none&amp;gt;   1m    v1.14.7-eks-1861c5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It should work now, but we created a cluster with a user who has administrator rights. For the regular deployment process from CircleCI, it’s better to &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html"&gt;create a special AWS user&lt;/a&gt;, named, in this case, CircleCI, with only programmatic access and the following policies attached:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XCJvFBLl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jv85xaq0kilm9wjiwoha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XCJvFBLl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jv85xaq0kilm9wjiwoha.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first policy is built into AWS, so you only need to select it. The second one you must create yourself; the creation process is described &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-json-editor"&gt;here&lt;/a&gt;. The policy &lt;em&gt;"AmazonEKSDescribePolicy"&lt;/em&gt; should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
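&lt;p&gt;Before pasting a hand-written policy into the AWS console, it’s worth checking that it is valid JSON; python3 -m json.tool is sufficient for that:&lt;/p&gt;

```shell
# Save the policy document locally (same JSON as above)
printf '%s' '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["eks:DescribeCluster", "eks:ListClusters"],
            "Resource": "*"
        }
    ]
}' > AmazonEKSDescribePolicy.json

# json.tool exits non-zero and reports the position of any syntax error
python3 -m json.tool AmazonEKSDescribePolicy.json > /dev/null && echo "policy JSON is valid"
```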



&lt;p&gt;After user creation, save the user’s access key and secret access key — we’ll need them soon.&lt;/p&gt;

&lt;p&gt;We also want to give this user rights within the Kubernetes cluster itself, as described in &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/"&gt;How do I provide access to other users and roles after cluster creation in Amazon EKS?&lt;/a&gt;. In short, after an EKS cluster is created, only the IAM user who created it has access. To add our CircleCI user, we need to replace the default empty "mapUsers" section in the cluster’s AWS authentication settings (configmap aws-auth, 'data' section) with the following lines using &lt;a href="https://github.com/fabric8io/kansible/blob/master/vendor/k8s.io/kubernetes/docs/user-guide/kubectl/kubectl_edit.md"&gt;kubectl edit&lt;/a&gt; (use your own account ID instead of ‘01234567890’):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n kube-system edit configmap aws-auth
...
data:
...
  mapUsers: |
    - userarn: arn:aws:iam::01234567890:user/CircleCI
      username: circle-ci
      groups:
        - system:masters
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We’ll use the Kubernetes manifests from &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gcp-kubernetes-cluster-gke-using-circleci"&gt;this article&lt;/a&gt; (see the "Google Cloud Prerequisites" section) with one change: in the deployment image field, we use placeholders. We’ll store those manifests in the &amp;lt;root_repo_dir&amp;gt;/k8s directory. Note that the deployment file was renamed to deployment.tpl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/k8s/deployment.tpl
...
spec:
containers:
- image: DOCKER_REPO_NAME/iris-rest:DOCKER_IMAGE_TAG
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
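&lt;p&gt;The CircleCI job described below substitutes these placeholders with sed before applying the manifest. The substitution can be rehearsed locally; the registry URL and tag here are made-up values:&lt;/p&gt;

```shell
# Minimal stand-in for the image line of k8s/deployment.tpl
printf 'image: DOCKER_REPO_NAME/iris-rest:DOCKER_IMAGE_TAG\n' > deployment.tpl

# Replace both placeholders, as the CircleCI run step will do (values are made up)
sed -e 's|DOCKER_REPO_NAME|01234567890.dkr.ecr.eu-west-1.amazonaws.com|' \
    -e 's|DOCKER_IMAGE_TAG|abc1234|' deployment.tpl > deployment.yaml

cat deployment.yaml
# prints: image: 01234567890.dkr.ecr.eu-west-1.amazonaws.com/iris-rest:abc1234
```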



&lt;h4&gt;
  
  
  CircleCI
&lt;/h4&gt;

&lt;p&gt;The deployment process on the CircleCI side is similar to the &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gcp-kubernetes-cluster-gke-using-circleci"&gt;process used for GKE&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull the repository&lt;/li&gt;
&lt;li&gt;Build the Docker image&lt;/li&gt;
&lt;li&gt;Authenticate with Amazon Cloud&lt;/li&gt;
&lt;li&gt;Upload the image to Amazon Elastic Container Registry (ECR)&lt;/li&gt;
&lt;li&gt;Run the container based on this image in AWS EKS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’ll take advantage of already created and tested CircleCI configuration templates: &lt;a href="https://circleci.com/docs/2.0/orb-intro/"&gt;orbs&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/orbs/registry/orb/circleci/aws-ecr"&gt;aws-ecr orb&lt;/a&gt; for building an image and pushing it to ECR&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/orbs/registry/orb/circleci/aws-eks"&gt;aws-eks orb&lt;/a&gt; for AWS authentication&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/orbs/registry/orb/circleci/kubernetes"&gt;kubernetes orb&lt;/a&gt; for Kubernetes manifests deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our deployment configuration looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/.circleci/config.yml
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.5.0
  aws-eks: circleci/aws-eks@0.2.6
  kubernetes: circleci/kubernetes@0.10.1

jobs:
  deploy-application:
    executor: aws-eks/python3
    parameters:
      cluster-name:
        description: |
          Name of the EKS cluster
        type: string
      aws-region:
        description: |
          AWS region
        type: string
      account-url:
        description: |
          Docker AWS ECR repository url
        type: string
      tag:
        description: |
          Docker image tag
        type: string
    steps:
      - checkout
      - run:
          name: Replace placeholders with values in deployment template
          command: |
            cat k8s/deployment.tpl |\
            sed "s|DOCKER_REPO_NAME|&amp;lt;&amp;lt; parameters.account-url &amp;gt;&amp;gt;|" |\
            sed "s|DOCKER_IMAGE_TAG|&amp;lt;&amp;lt; parameters.tag &amp;gt;&amp;gt;|" &amp;gt; k8s/deployment.yaml; \
            cat k8s/deployment.yaml
      - aws-eks/update-kubeconfig-with-authenticator:
          cluster-name: &amp;lt;&amp;lt; parameters.cluster-name &amp;gt;&amp;gt;
          install-kubectl: true
          aws-region: &amp;lt;&amp;lt; parameters.aws-region &amp;gt;&amp;gt;
      - kubernetes/create-or-update-resource:
          action-type: apply
          resource-file-path: "k8s/namespace.yaml"
          show-kubectl-command: true
      - kubernetes/create-or-update-resource:
          action-type: apply
          resource-file-path: "k8s/deployment.yaml"
          show-kubectl-command: true
          get-rollout-status: true
          resource-name: deployment/iris-rest
          namespace: iris
      - kubernetes/create-or-update-resource:
          action-type: apply
          resource-file-path: "k8s/service.yaml"
          show-kubectl-command: true
          namespace: iris
workflows:
  main:
    jobs:
    - aws-ecr/build-and-push-image:
        aws-access-key-id: AWS_ACCESS_KEY_ID
        aws-secret-access-key: AWS_SECRET_ACCESS_KEY
        region: AWS_REGION
        account-url: AWS_ECR_ACCOUNT_URL
        repo: iris-rest
        create-repo: true
        dockerfile: Dockerfile-zpm
        path: .
        tag: ${CIRCLE_SHA1}
    - deploy-application:
        cluster-name: dev-cluster
        aws-region: eu-west-1
        account-url: ${AWS_ECR_ACCOUNT_URL}
        tag: ${CIRCLE_SHA1}
        requires:
          - aws-ecr/build-and-push-image

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://circleci.com/docs/2.0/workflows/"&gt;Workflows&lt;/a&gt; section contains a list of jobs, each of which can be either called from an orb, such as &lt;a href="https://circleci.com/orbs/registry/orb/circleci/aws-ecr#jobs-build-and-push-image"&gt;aws-ecr/build-and-push-image&lt;/a&gt;, or defined directly in the configuration, as with "deploy-application". &lt;/p&gt;

&lt;p&gt;The following code means that the deploy-application job will be called only after the aws-ecr/build-and-push-image job finishes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;requires:
- aws-ecr/build-and-push-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The Jobs section contains a description of the deploy-application job, with a list of steps defined, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;checkout&lt;/em&gt;, to pull from a Git repository&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;run&lt;/em&gt;, to run a script that dynamically sets the Docker-image repository and tag&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;aws-eks/update-kubeconfig-with-authenticator&lt;/em&gt;, which uses &lt;a href="https://github.com/kubernetes-sigs/aws-iam-authenticator"&gt;aws-iam-authenticator&lt;/a&gt; to set up a connection to Kubernetes&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/orbs/registry/orb/circleci/kubernetes#commands-create-or-update-resource"&gt;kubernetes/create-or-update-resource&lt;/a&gt;, which is used several times as a way to run "kubectl apply" from CircleCI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We use environment variables, which, of course, must be defined in CircleCI on the "Environment variables" tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ym14yPzV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hw1srecxuafw5m0txdeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ym14yPzV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hw1srecxuafw5m0txdeu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following table shows the meaning of the variables used:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AWS_ACCESS_KEY_ID&lt;/td&gt;
&lt;td&gt;Access key of CircleCI IAM user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS_SECRET_ACCESS_KEY&lt;/td&gt;
&lt;td&gt;Secret key of CircleCI IAM user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS_REGION&lt;/td&gt;
&lt;td&gt;
&lt;em&gt;eu-west-1&lt;/em&gt;, in this case&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS_ECR_ACCOUNT_URL&lt;/td&gt;
&lt;td&gt;URL of the &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/Repositories.html"&gt;AWS ECR Docker Registry&lt;/a&gt;, such as &lt;em&gt;01234567890.dkr.ecr.eu-west-1.amazonaws.com&lt;/em&gt;, where ‘01234567890’ is the account ID&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here’s how we trigger the deployment process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git add .circleci/ eks/ k8s/
$ git commit -m "AWS EKS deployment"
$ git push
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will show the two jobs in this workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9u3RWszd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dfywyrzazbbv01ut2epr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9u3RWszd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dfywyrzazbbv01ut2epr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both jobs are clickable, so you can see the details of each step.&lt;br&gt;
Deployment takes several minutes. Once it completes, we can check the status of the Kubernetes resources and of the IRIS application itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get pods -w    # Ctrl+C to stop

$ kubectl -n iris get service    
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)           AGE
iris-rest   LoadBalancer   172.20.190.211   a3de52988147a11eaaaff02ca6b647c2-663499201.eu-west-1.elb.amazonaws.com    52773:32573/TCP   15s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
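&lt;p&gt;The long ELB hostname is easier to capture into a variable than to copy by hand. Against a live cluster, kubectl’s jsonpath output does this directly: kubectl -n iris get svc iris-rest -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'. The same extraction is simulated below against a saved copy of the service JSON, with a made-up hostname, so it runs anywhere:&lt;/p&gt;

```shell
# With a live cluster:  kubectl -n iris get svc iris-rest -o json > svc.json
# Here we fake the relevant fragment of that output (the hostname is made up)
printf '%s' '{"status":{"loadBalancer":{"ingress":[{"hostname":"fake-elb.eu-west-1.elb.amazonaws.com"}]}}}' > svc.json

# Same path the jsonpath expression walks: .status.loadBalancer.ingress[0].hostname
HOST=$(python3 -c 'import json; print(json.load(open("svc.json"))["status"]["loadBalancer"]["ingress"][0]["hostname"])')
echo "$HOST"   # prints the hostname to use in the curl calls below
```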



&lt;p&gt;Allow several minutes for the DNS record to propagate. Until then, you’ll receive a "Could not resolve host" error when running curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -XPOST -H "Content-Type: application/json" -u _system:SYS a3de52988147a11eaaaff02ca6b647c2-663499201.eu-west-1.elb.amazonaws.com:52773/person/ -d '{"Name":"John Dou"}'

$ curl -XGET -u _system:SYS a3de52988147a11eaaaff02ca6b647c2-663499201.eu-west-1.elb.amazonaws.com:52773/person/all
[{"Name":"John Dou"},]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
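&lt;p&gt;Rather than rerunning curl by hand until the name resolves, a small retry loop can do the waiting for you. The probe below is a placeholder that simulates DNS propagation so the snippet runs anywhere; substitute your actual curl call:&lt;/p&gt;

```shell
# Placeholder probe that succeeds on the 3rd call, simulating DNS propagation.
# In real use, replace the body with something like:
#   curl -sf -u _system:SYS http://EXTERNAL-HOSTNAME:52773/person/all
probe() {
  COUNT=$((COUNT + 1))
  [ "$COUNT" -ge 3 ]
}

COUNT=0
for attempt in 1 2 3 4 5; do
  if probe; then
    echo "up after ${attempt} attempts"   # prints: up after 3 attempts
    break
  fi
  echo "attempt ${attempt} failed; retrying in 1s"
  sleep 1
done
```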



&lt;h4&gt;
  
  
  Wrapping up
&lt;/h4&gt;

&lt;p&gt;At first glance, deployment to AWS EKS looks more complex than to GKE, but it’s not really much different. If your organization uses AWS, you now know how to add Kubernetes to your stack.&lt;/p&gt;

&lt;p&gt;Note that the EKS API was extended to support &lt;a href="https://aws.amazon.com/blogs/containers/eks-managed-node-groups/"&gt;managed groups&lt;/a&gt;. These allow you to deploy the control plane and the data plane as a whole, and they look promising. Moreover,  &lt;a href="https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/"&gt;Fargate&lt;/a&gt;, the AWS serverless compute engine for containers, is now available.&lt;/p&gt;

&lt;p&gt;Finally, a quick note about AWS ECR: don’t forget to set up a &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/LifecyclePolicies.html"&gt;lifecycle policy&lt;/a&gt; for your images.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>eks</category>
      <category>intersystems</category>
      <category>circleci</category>
    </item>
    <item>
      <title>Automating GKE creation on CircleCI builds</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Fri, 31 Jul 2020 07:34:28 +0000</pubDate>
      <link>https://dev.to/intersystems/automating-gke-creation-on-circleci-builds-35g8</link>
      <guid>https://dev.to/intersystems/automating-gke-creation-on-circleci-builds-35g8</guid>
      <description>&lt;p&gt;Creating a GKE cluster manually (or through &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster" rel="noopener noreferrer"&gt;gcloud&lt;/a&gt;) is easy, but the modern &lt;a href="https://martinfowler.com/bliki/InfrastructureAsCode.html" rel="noopener noreferrer"&gt;Infrastructure-as-Code (IaC) approach&lt;/a&gt; advises that the description of the Kubernetes cluster be stored in the repository as code as well. How to write this code is determined by the tool used for IaC.&lt;/p&gt;

&lt;p&gt;In the case of Google Cloud, there are &lt;a href="https://cloud.google.com/solutions/infrastructure-as-code/#cards" rel="noopener noreferrer"&gt;several options&lt;/a&gt;, among them &lt;a href="https://cloud.google.com/deployment-manager/docs/" rel="noopener noreferrer"&gt;Deployment Manager&lt;/a&gt; and &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;. Opinions are divided as to which is better: if you want to learn more, read this Reddit thread &lt;a href="https://www.reddit.com/r/googlecloud/comments/9nzchm/opinions_on_terraform_vs_deployment_manager/" rel="noopener noreferrer"&gt;Opinions on Terraform vs. Deployment Manager?&lt;/a&gt; and the Medium article &lt;a href="https://medium.com/@kari.marttila/comparing-gcp-deployment-manager-and-terraform-3bc6e1b3aa2d" rel="noopener noreferrer"&gt;Comparing GCP Deployment Manager and Terraform&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For this article we’ll choose Terraform, since it’s less tied to a specific vendor and you can use your IaC with different cloud providers.&lt;/p&gt;

&lt;p&gt;We’ll assume you already have a &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;Google account&lt;/a&gt;, and that you’ve created a project named, for instance, "Development". In this article, its ID is shown as &amp;lt;PROJECT_ID&amp;gt;. In the examples below, change it to &lt;a href="https://support.google.com/googleapi/answer/7014113?hl=en" rel="noopener noreferrer"&gt;the ID of your own project&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Keep in mind that Google isn’t free, although it has a &lt;a href="https://cloud.google.com/free/" rel="noopener noreferrer"&gt;free tier&lt;/a&gt;. Be sure to &lt;a href="https://cloud.google.com/billing/docs/" rel="noopener noreferrer"&gt;control your expenses&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You should fork the &lt;a href="https://github.com/intersystems-community/objectscript-rest-docker-template" rel="noopener noreferrer"&gt;original repository&lt;/a&gt;. We’ll call this fork “my-objectscript-rest-docker-template” and refer to its root directory as "&amp;lt;root_repo_dir&amp;gt;" throughout this article.&lt;/p&gt;

&lt;p&gt;All code samples are stored in &lt;a href="https://github.com/intersystems-community/gke-terraform-circleci-objectscript-rest-docker-template" rel="noopener noreferrer"&gt;this repo&lt;/a&gt; to simplify copying and pasting.&lt;/p&gt;

&lt;p&gt;The following diagram depicts the whole deployment process in one picture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fymei7t8qldkuwg9d84xd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fymei7t8qldkuwg9d84xd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, let's &lt;a href="https://learn.hashicorp.com/terraform/getting-started/install.html" rel="noopener noreferrer"&gt;install&lt;/a&gt; Terraform (version 0.12.17 was the latest at the time of initial writing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform version
Terraform v0.12.17
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The version is important here, because many examples on the Internet use earlier versions, and 0.12 brought &lt;a href="https://www.hashicorp.com/blog/announcing-terraform-0-12/" rel="noopener noreferrer"&gt;many changes&lt;/a&gt;. &lt;strong&gt;Update:&lt;/strong&gt; 0.13 brought yet &lt;a href="https://www.hashicorp.com/blog/announcing-the-terraform-0-13-beta/" rel="noopener noreferrer"&gt;more changes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We want Terraform to perform certain actions (use certain APIs) in our GCP account. To enable this, &lt;a href="https://cloud.google.com/iam/docs/service-accounts" rel="noopener noreferrer"&gt;create a Service Account&lt;/a&gt; with the name 'terraform', and enable the Kubernetes Engine API. Don’t worry about how we’re going to achieve this — just read further and your questions will be addressed.&lt;/p&gt;

&lt;p&gt;Let's try an example with the &lt;a href="https://cloud.google.com/sdk/gcloud/" rel="noopener noreferrer"&gt;gcloud utility&lt;/a&gt;, although we could also use the &lt;a href="https://console.cloud.google.com/iam-admin/serviceaccounts" rel="noopener noreferrer"&gt;Web Console&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We're going to use a couple of different commands in the following examples. See the following documentation topics for more details on these commands and features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-gcloud" rel="noopener noreferrer"&gt;gcloud iam service-accounts create&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/iam/docs/granting-roles-to-service-accounts#granting_access_to_a_service_account_for_a_resource" rel="noopener noreferrer"&gt;Granting roles to a service account for specific resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-gcloud" rel="noopener noreferrer"&gt;gcloud iam service-accounts keys create&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/endpoints/docs/openapi/enable-api" rel="noopener noreferrer"&gt;Enabling an API in your Google Cloud project&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's walk through the example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We won’t discuss all of the setup details here. You can read a little more in &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gcp-kubernetes-cluster-gke-using-circleci" rel="noopener noreferrer"&gt;this article&lt;/a&gt;. For this example, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;
$ mkdir terraform; cd terraform
$ gcloud iam service-accounts create terraform --description "Terraform" --display-name "terraform"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's grant the terraform service account the “Kubernetes Engine Admin” role (container.admin), along with a few other roles that will be useful to us later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud projects add-iam-policy-binding &amp;lt;PROJECT_ID&amp;gt; \
  --member serviceAccount:terraform@&amp;lt;PROJECT_ID&amp;gt;.iam.gserviceaccount.com \
  --role roles/container.admin

$ gcloud projects add-iam-policy-binding &amp;lt;PROJECT_ID&amp;gt; \
  --member serviceAccount:terraform@&amp;lt;PROJECT_ID&amp;gt;.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountUser

$ gcloud projects add-iam-policy-binding &amp;lt;PROJECT_ID&amp;gt; \
  --member serviceAccount:terraform@&amp;lt;PROJECT_ID&amp;gt;.iam.gserviceaccount.com \
  --role roles/compute.viewer

$ gcloud projects add-iam-policy-binding &amp;lt;PROJECT_ID&amp;gt; \
  --member serviceAccount:terraform@&amp;lt;PROJECT_ID&amp;gt;.iam.gserviceaccount.com \
  --role roles/storage.admin

$ gcloud iam service-accounts keys create account.json \
  --iam-account terraform@&amp;lt;PROJECT_ID&amp;gt;.iam.gserviceaccount.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the last command creates your &lt;em&gt;account.json&lt;/em&gt; key file. Be sure to keep this file secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud projects list
$ gcloud config set project &amp;lt;PROJECT_ID&amp;gt;
$ gcloud services list --available | grep 'Kubernetes Engine'
$ gcloud services enable container.googleapis.com
$ gcloud services list --enabled | grep 'Kubernetes Engine'
container.googleapis.com Kubernetes Engine API
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let’s describe the GKE cluster in Terraform’s &lt;a href="https://www.terraform.io/docs/glossary.html#hcl" rel="noopener noreferrer"&gt;HCL&lt;/a&gt; language. Note that we use several placeholders here; replace them with your values:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Placeholder&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;PROJECT_ID&amp;gt;&lt;/td&gt;
&lt;td&gt;GCP project ID&lt;/td&gt;
&lt;td&gt;possible-symbol-254507&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;BUCKET_NAME&amp;gt;&lt;/td&gt;
&lt;td&gt;Storage for Terraform state - should be &lt;a href="https://cloud.google.com/storage/docs/naming" rel="noopener noreferrer"&gt;unique&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;circleci-gke-terraform-demo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;REGION&amp;gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://cloud.google.com/compute/docs/regions-zones/" rel="noopener noreferrer"&gt;Region&lt;/a&gt; where resources will be created&lt;/td&gt;
&lt;td&gt;europe-west1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;LOCATION&amp;gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://cloud.google.com/compute/docs/regions-zones/" rel="noopener noreferrer"&gt;Zone&lt;/a&gt; where resources will be created&lt;/td&gt;
&lt;td&gt;europe-west1-b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;CLUSTER_NAME&amp;gt;&lt;/td&gt;
&lt;td&gt;GKE cluster name&lt;/td&gt;
&lt;td&gt;dev-cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;NODES_POOL_NAME&amp;gt;&lt;/td&gt;
&lt;td&gt;GKE worker nodes pool name&lt;/td&gt;
&lt;td&gt;dev-cluster-node-pool&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here’s the HCL configuration for the cluster in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat main.tf
terraform {
  required_version = "~&amp;gt; 0.12"
  backend "gcs" {
    bucket = "&amp;lt;BUCKET_NAME&amp;gt;"
    prefix = "terraform/state"
    credentials = "account.json"
  }
}

provider "google" {
  credentials = file("account.json")
  project = "&amp;lt;PROJECT_ID&amp;gt;"
  region = "&amp;lt;REGION&amp;gt;"
}

resource "google_container_cluster" "gke-cluster" {
  name = "&amp;lt;CLUSTER_NAME&amp;gt;"
  location = "&amp;lt;LOCATION&amp;gt;"
  remove_default_node_pool = true
  # In regional cluster (location is region, not zone) 
  # this is a number of nodes per zone 
  initial_node_count = 1
}

resource "google_container_node_pool" "preemptible_node_pool" {
  name = "&amp;lt;NODES_POOL_NAME&amp;gt;"
  location = "&amp;lt;LOCATION&amp;gt;"
  cluster = google_container_cluster.gke-cluster.name
  # In regional cluster (location is region, not zone) 
  # this is a number of nodes per zone
  node_count = 1

  node_config {
    preemptible = true
    machine_type = "n1-standard-1"
    oauth_scopes = [
      "storage-ro",
      "logging-write",
      "monitoring"
    ]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make sure the HCL code is in the proper format, Terraform provides a handy formatting command you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform fmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code snippet shown above indicates that the created resources are &lt;a href="https://www.terraform.io/docs/providers/google/guides/provider_reference.html" rel="noopener noreferrer"&gt;provided by Google&lt;/a&gt;, and the resources themselves are a &lt;a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="noopener noreferrer"&gt;google_container_cluster&lt;/a&gt; and a google_container_node_pool, which we designate &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms" rel="noopener noreferrer"&gt;preemptible&lt;/a&gt; for cost savings. We also choose to create &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools" rel="noopener noreferrer"&gt;our own node pool&lt;/a&gt; instead of using the default one.&lt;/p&gt;

&lt;p&gt;Let’s focus briefly on the following setting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt; 0.12"
  backend "gcs" {
    bucket = "&amp;lt;BUCKET_NAME&amp;gt;"
    prefix = "terraform/state"
    credentials = "account.json"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform records everything it has done in a state file and uses this file on subsequent runs. For convenient sharing, it’s better to store this file in a remote location. A typical place is a &lt;a href="https://cloud.google.com/storage/docs/key-terms#buckets" rel="noopener noreferrer"&gt;Google Cloud Storage bucket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's create this bucket, using the name of your bucket instead of the placeholder &amp;lt;BUCKET_NAME&amp;gt;. Before creating the bucket, let’s check whether &amp;lt;BUCKET_NAME&amp;gt; is available, as bucket names have to be unique across all of GCP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gsutil acl get gs://&amp;lt;BUCKET_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the answer you want to see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BucketNotFoundException: 404 gs://&amp;lt;BUCKET_NAME&amp;gt; bucket does not exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"Busy" answer means you have to choose another name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AccessDeniedException: 403 &amp;lt;YOUR_ACCOUNT&amp;gt; does not have storage.buckets.get access to &amp;lt;BUCKET_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now create the bucket and enable versioning, as &lt;a href="https://www.terraform.io/docs/backends/types/gcs.html" rel="noopener noreferrer"&gt;Terraform recommends&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gsutil mb -l EU gs://&amp;lt;BUCKET_NAME&amp;gt;

$ gsutil versioning get gs://&amp;lt;BUCKET_NAME&amp;gt;
gs://&amp;lt;BUCKET_NAME&amp;gt;: Suspended

$ gsutil versioning set on gs://&amp;lt;BUCKET_NAME&amp;gt;

$ gsutil versioning get gs://&amp;lt;BUCKET_NAME&amp;gt;
gs://&amp;lt;BUCKET_NAME&amp;gt;: Enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform is modular: to create resources in GCP, it needs the Google provider plugin. The following command downloads it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's look at what Terraform is going to do to create a GKE cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform plan -out dev-cluster.plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command output includes details of the plan. If you have no objections, let's implement this plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform apply dev-cluster.plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, to delete the resources created by Terraform, run this command from the &amp;lt;root_repo_dir&amp;gt;/terraform/ directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s leave the cluster as is for a while and move on. But first, note that we don’t want to push everything to the repository, so we’ll add several entries to .gitignore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/.gitignore
.DS_Store
terraform/.terraform/
terraform/*.plan
terraform/*.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using Helm
&lt;/h4&gt;

&lt;p&gt;As described in &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gcp-kubernetes-cluster-gke-using-circleci" rel="noopener noreferrer"&gt;this article&lt;/a&gt;, we could store Kubernetes manifests as yaml files in the &amp;lt;root_repo_dir&amp;gt;/k8s/ directory, which we then sent to the cluster using the "kubectl apply" command. &lt;/p&gt;

&lt;p&gt;This time we'll try a different approach: using the Kubernetes package manager &lt;a href="https://helm.sh/docs/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;, currently at &lt;a href="https://helm.sh/blog/helm-3-released/" rel="noopener noreferrer"&gt;version 3&lt;/a&gt;. Please use version 3 or later, because version 2 relied on a cluster-side component (Tiller) that raised security concerns (see &lt;a href="https://engineering.bitnami.com/articles/running-helm-in-production.html" rel="noopener noreferrer"&gt;Running Helm in production: Security best practices&lt;/a&gt; for details). First, we’ll pack the Kubernetes manifests from our k8s/ directory into a Helm package, which is known as a &lt;a href="https://helm.sh/docs/topics/charts/#the-chart-file-structure" rel="noopener noreferrer"&gt;chart&lt;/a&gt;. A Helm chart installed in Kubernetes is called a release. In a minimal configuration, a chart consists of just a few files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir &amp;lt;root_repo_dir&amp;gt;/helm; cd &amp;lt;root_repo_dir&amp;gt;/helm
$ tree &amp;lt;root_repo_dir&amp;gt;/helm/
helm/
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   └── service.yaml
└── values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Their purpose is well described on &lt;a href="https://helm.sh/docs/topics/charts/" rel="noopener noreferrer"&gt;the official site&lt;/a&gt;. The best practices for creating your own charts are described in &lt;a href="https://helm.sh/docs/chart_best_practices/" rel="noopener noreferrer"&gt;The Chart Best Practices Guide&lt;/a&gt; in the Helm documentation.&lt;/p&gt;

&lt;p&gt;Here’s what the contents of our files look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat Chart.yaml
apiVersion: v2
name: iris-rest
version: 0.1.0
appVersion: 1.0.3
description: Helm for ObjectScript-REST-Docker-template application
sources:
- https://github.com/intersystems-community/objectscript-rest-docker-template
- https://github.com/intersystems-community/gke-terraform-circleci-objectscript-rest-docker-template
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "iris-rest.name" . }}
  labels:
    app: {{ template "iris-rest.name" . }}
    chart: {{ template "iris-rest.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    {{- .Values.strategy | nindent 4 }}
  selector:
    matchLabels:
      app: {{ template "iris-rest.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "iris-rest.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ template "iris-rest.name" . }}
        ports:
        - containerPort: {{ .Values.webPort.value }}
          name: {{ .Values.webPort.name }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat templates/service.yaml
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.name }}
  labels:
    app: {{ template "iris-rest.name" . }}
    chart: {{ template "iris-rest.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  selector:
    app: {{ template "iris-rest.name" . }}
    release: {{ .Release.Name }}
  ports:
  {{- range $key, $value := .Values.service.ports }}
    - name: {{ $key }}
{{ toYaml $value | indent 6 }}
  {{- end }}
  type: {{ .Values.service.type }}
  {{- if ne .Values.service.loadBalancerIP "" }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP }}
  {{- end }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat templates/_helpers.tpl
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}

{{- define "iris-rest.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "iris-rest.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat values.yaml
namespaceOverride: iris-rest

replicaCount: 1

strategy: |
  type: Recreate

image:
  repository: eu.gcr.io/iris-rest
  tag: v1

webPort:
  name: web
  value: 52773

service:
  enabled: true
  name: iris-rest
  type: LoadBalancer
  loadBalancerIP: ""
  ports:
    web:
      port: 52773
      targetPort: 52773
      protocol: TCP

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To deploy the Helm chart, install the &lt;a href="https://github.com/helm/helm/releases/tag/v3.0.1" rel="noopener noreferrer"&gt;Helm client&lt;/a&gt; and the &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; command-line utility.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a namespace called "iris". Ideally the namespace would be created during the deployment itself, but &lt;a href="https://github.com/helm/helm/issues/6794" rel="noopener noreferrer"&gt;initially Helm 3 didn’t support this&lt;/a&gt;. &lt;strong&gt;Update&lt;/strong&gt;: the &lt;em&gt;--create-namespace&lt;/em&gt; flag is now supported.&lt;/p&gt;
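&lt;p&gt;If your Helm version supports that flag, the manual namespace creation step below can be folded into the deployment command itself, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm upgrade iris-rest --install . --namespace iris --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;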

&lt;p&gt;First, add credentials for the cluster created by Terraform to kube-config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;LOCATION&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;
$ kubectl create ns iris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm (without kicking off a real deploy) that Helm is going to create the following in Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;/helm
$ helm upgrade iris-rest \
  --install \
  . \
  --namespace iris \
  --debug \
  --dry-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output (the Kubernetes manifests Helm would apply) is omitted here to save space. If everything looks good, let’s deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm upgrade iris-rest --install . --namespace iris

$ helm list -n iris --all
iris-rest  iris  1  2019-12-14 15:24:19.292227564  +0200  EET  deployed    iris-rest-0.1.0  1.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that Helm has deployed our application, but since we haven’t created the Docker image &lt;em&gt;eu.gcr.io/iris-rest:v1&lt;/em&gt; yet, Kubernetes can’t pull it (ImagePullBackOff):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get po
NAME                         READY   STATUS             RESTARTS   AGE
iris-rest-59b748c577-6cnrt   0/1     ImagePullBackOff   0          10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s remove the release for now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm delete iris-rest -n iris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  The CircleCI Side
&lt;/h4&gt;

&lt;p&gt;Now that we’ve tried out Terraform and the Helm client, let’s put them to use during the deployment process on the CircleCI side.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/.circleci/config.yml
version: 2.1

orbs:
  gcp-gcr: circleci/gcp-gcr@0.6.1

jobs:
  terraform:
    docker:
    # The Terraform image version should match the version
    # you ran earlier from the local machine
      - image: hashicorp/terraform:0.12.17
    steps:
      - checkout
      - run:
          name: Create Service Account key file from environment variable
          working_directory: terraform
          command: echo ${TF_SERVICE_ACCOUNT_KEY} &amp;gt; account.json
      - run:
          name: Show Terraform version
          command: terraform version
      - run:
          name: Download required Terraform plugins
          working_directory: terraform
          command: terraform init
      - run:
          name: Validate Terraform configuration
          working_directory: terraform
          command: terraform validate
      - run:
          name: Create Terraform plan
          working_directory: terraform
          command: terraform plan -out /tmp/tf.plan
      - run:
          name: Run Terraform plan
          working_directory: terraform
          command: terraform apply /tmp/tf.plan
  k8s_deploy:
    docker:
      - image: kiwigrid/gcloud-kubectl-helm:3.0.1-272.0.0-218
    steps:
      - checkout
      - run:
          name: Authorize gcloud on GKE
          working_directory: helm
          command: |
            echo ${GCLOUD_SERVICE_KEY} &amp;gt; gcloud-service-key.json
            gcloud auth activate-service-account --key-file=gcloud-service-key.json
            gcloud container clusters get-credentials ${GKE_CLUSTER_NAME} --zone ${GOOGLE_COMPUTE_ZONE} --project ${GOOGLE_PROJECT_ID}
      - run:
          name: Wait a little until k8s worker nodes are up
          command: sleep 30 # It’s a place for improvement
      - run:
          name: Create IRIS namespace if it doesn't exist
          command: kubectl get ns iris || kubectl create ns iris
      - run:
          name: Run Helm release deployment
          working_directory: helm
          command: |
            helm upgrade iris-rest \
              --install \
              . \
              --namespace iris \
              --wait \
              --timeout 300s \
              --atomic \
              --set image.repository=eu.gcr.io/${GOOGLE_PROJECT_ID}/iris-rest \
              --set image.tag=${CIRCLE_SHA1}
      - run:
          name: Check Helm release status
          command: helm list --all-namespaces --all
      - run:
          name: Check Kubernetes resources status
          command: |
            kubectl -n iris get pods
            echo
            kubectl -n iris get services
workflows:
  main:
    jobs:
      - terraform
      - gcp-gcr/build-and-push-image:
          dockerfile: Dockerfile
          gcloud-service-key: GCLOUD_SERVICE_KEY
          google-compute-zone: GOOGLE_COMPUTE_ZONE
          google-project-id: GOOGLE_PROJECT_ID
          registry-url: eu.gcr.io
          image: iris-rest
          path: .
          tag: ${CIRCLE_SHA1}
      - k8s_deploy:
          requires:
            - terraform
            - gcp-gcr/build-and-push-image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll need to add several &lt;a href="https://circleci.com/docs/2.0/env-vars/#setting-an-environment-variable-in-a-project" rel="noopener noreferrer"&gt;environment variables&lt;/a&gt; to your project on CircleCI side:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw4mb2wnytd1soafu7714.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw4mb2wnytd1soafu7714.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GCLOUD_SERVICE_KEY is the CircleCI service account key, and TF_SERVICE_ACCOUNT_KEY is the Terraform service account key. Recall that a service account key is the whole content of the &lt;em&gt;account.json&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;Next, let’s push our changes to a repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;
$ git add .circleci/ helm/ terraform/ .gitignore
$ git commit -m "Add Terraform and Helm"
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CircleCI UI dashboard should show that everything is ok:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnja60f4tjy45qi8gtmr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnja60f4tjy45qi8gtmr5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform is idempotent: if the GKE cluster already exists, the "terraform" job won’t do anything; if the cluster doesn’t exist, it will be created before the Kubernetes deployment.&lt;br&gt;
Finally, let’s check IRIS availability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;LOCATION&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;

$ kubectl -n iris get svc
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)           AGE
iris-rest   LoadBalancer   10.23.249.42   34.76.130.11   52773:31603/TCP   53s

$ curl -XPOST -H "Content-Type: application/json" -u _system:SYS 34.76.130.11:52773/person/ -d '{"Name":"John Dou"}'

$ curl -XGET -u _system:SYS 34.76.130.11:52773/person/all
[{"Name":"John Dou"},]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Terraform and Helm are standard DevOps tools, and they integrate well with an IRIS deployment.&lt;/p&gt;

&lt;p&gt;They do require some learning, but after some practice, they can really save you time and effort.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>intersystems</category>
      <category>circleci</category>
    </item>
    <item>
      <title>Deploying InterSystems IRIS solution on GKE Using GitHub Actions</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Fri, 24 Jul 2020 07:25:12 +0000</pubDate>
      <link>https://dev.to/intersystems/deploying-intersystems-iris-solution-on-gke-using-github-actions-576h</link>
      <guid>https://dev.to/intersystems/deploying-intersystems-iris-solution-on-gke-using-github-actions-576h</guid>
      <description>&lt;p&gt;In &lt;a href="https://community.intersystems.com/post/automating-gke-creation-circleci-builds" rel="noopener noreferrer"&gt;this&lt;/a&gt; article, we took a look at the CircleCI deployment system, which integrates perfectly with GitHub. Why then would we want to look any further? Well, GitHub has its own CI/CD platform called GitHub Actions, which is worth exploring. With GitHub Actions, you don’t need to rely on some external, albeit cool, service.&lt;/p&gt;

&lt;p&gt;In this article we’re going to try using GitHub Actions to deploy the server part of  InterSystems Package Manager, &lt;a href="https://openexchange.intersystems.com/package/zpm-registry" rel="noopener noreferrer"&gt;ZPM-registry&lt;/a&gt;, on Google Kubernetes Engine (GKE).&lt;/p&gt;

&lt;p&gt;As with all systems, the build/deploy process essentially comes down to “do this, go there, do that,” and so on. With GitHub Actions, each such action is a job that consists of one or more steps, together known as a &lt;a href="https://help.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#about-workflows" rel="noopener noreferrer"&gt;workflow&lt;/a&gt;. GitHub will search for a description of the workflow in the YAML file (any filename ending in .yml or .yaml) in your .github/workflows directory. See &lt;a href="https://help.github.com/en/actions/automating-your-workflow-with-github-actions/core-concepts-for-github-actions" rel="noopener noreferrer"&gt;Core concepts for GitHub Actions&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;All further actions will be performed in the fork of the &lt;a href="https://github.com/intersystems-community/zpm-registry" rel="noopener noreferrer"&gt;ZPM-registry repository&lt;/a&gt;. We’ll call this fork &lt;em&gt;"zpm-registry"&lt;/em&gt; and refer to its root directory as "&amp;lt;root_repo_dir&amp;gt;" throughout this article. To learn more about the ZPM application itself see &lt;a href="https://community.intersystems.com/post/introducing-intersystems-objectscript-package-manager" rel="noopener noreferrer"&gt;Introducing InterSystems ObjectScript Package Manager&lt;/a&gt; and &lt;a href="https://community.intersystems.com/post/anatomy-zpm-module-packaging-your-intersystems-solution" rel="noopener noreferrer"&gt;The Anatomy of ZPM Module: Packaging Your InterSystems Solution&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All code samples are stored &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry" rel="noopener noreferrer"&gt;in this repository&lt;/a&gt; to simplify copying and pasting. The prerequisites are the same as in the article &lt;a href="https://community.intersystems.com/post/automating-gke-creation-circleci-builds" rel="noopener noreferrer"&gt;Automating GKE creation on CircleCI builds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We’ll assume you’ve read that article and already have a &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;Google account&lt;/a&gt;, and that you’ve created a project named "Development," as in the previous article. In this article, its ID is shown as &amp;lt;PROJECT_ID&amp;gt;. In the examples below, change it to &lt;a href="https://support.google.com/googleapi/answer/7014113?hl=en" rel="noopener noreferrer"&gt;the ID of your own project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Keep in mind that Google isn’t free, although it has a &lt;a href="https://cloud.google.com/free/" rel="noopener noreferrer"&gt;free tier&lt;/a&gt;. Be sure to &lt;a href="https://cloud.google.com/billing/docs/" rel="noopener noreferrer"&gt;control your expenses&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Workflow Basics
&lt;/h4&gt;

&lt;p&gt;Let’s get started. &lt;/p&gt;

&lt;p&gt;A simple and useless workflow file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;
$ mkdir -p .github/workflows
$ cat &amp;lt;root_repo_dir&amp;gt;/.github/workflows/workflow.yaml          
name: Traditional Hello World
on: [push]
jobs:
  courtesy:
    name: Greeting
    runs-on: ubuntu-latest
    steps:
    - name: Hello world
      run: echo "Hello, world!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow says: on every push to the repository, run a job named "Greeting," which consists of a single step: printing a welcome phrase. The job runs on a GitHub-hosted virtual machine called a runner, with the latest version of Ubuntu installed.&lt;br&gt;
After pushing this file to the repository, you should see on the Code GitHub tab that everything went well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8zbhp2z0hvz828eeruc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8zbhp2z0hvz828eeruc5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
If the job had failed, you’d see a red X instead of a green checkmark. To see more, click on the green checkmark and then on Details. Or you can immediately go to the Actions tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxp6b314tv7x0ovrsqy7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxp6b314tv7x0ovrsqy7s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can learn all about the workflow syntax in the help document &lt;a href="https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions" rel="noopener noreferrer"&gt;Workflow syntax for GitHub Actions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If your repository contains a Dockerfile for the image build, you could replace the "Hello world" step with something more useful like this example from &lt;a href="https://github.com/actions/starter-workflows/blob/master/ci/docker-image.yml" rel="noopener noreferrer"&gt;starter-workflows&lt;/a&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
steps:
- uses: actions/checkout@v2
- name: Build the Docker image
  run: docker build . --file Dockerfile --tag my-image:$(date +%s)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notice that a new step, "uses: actions/checkout@v2", was added here. Judging by the name "checkout", it clones the repository, but where can you learn more?&lt;/p&gt;

&lt;p&gt;As in the case of CircleCI, many useful steps don’t need to be rewritten from scratch. Instead, you can take them from the shared resource called the &lt;a href="https://github.com/marketplace?type=actions" rel="noopener noreferrer"&gt;Marketplace&lt;/a&gt;. Look there for the desired action, and prefer those marked "By actions" (hovering over the badge shows "Creator verified by GitHub").&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcaj7pvwyoksv2yobt1pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcaj7pvwyoksv2yobt1pz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The "uses" clause in the workflow reflects our intention to use a ready-made module, rather than writing one ourselves.&lt;/p&gt;

&lt;p&gt;The implementations of the actions themselves can be written in almost any language, but JavaScript is preferred. If your action is written in JavaScript (or TypeScript), it will be executed directly on the Runner machine. For other implementations, the Docker container you specify will run with the desired environment inside, which is obviously somewhat slower. You can read more about actions in the aptly titled article, &lt;a href="https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-actions" rel="noopener noreferrer"&gt;About actions&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/actions/checkout" rel="noopener noreferrer"&gt;checkout action&lt;/a&gt; is written in TypeScript. And in our example, &lt;a href="https://github.com/hashicorp/terraform-github-actions" rel="noopener noreferrer"&gt;Terraform action&lt;/a&gt; is a regular bash script launched in Docker Alpine.&lt;/p&gt;
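&lt;p&gt;For reference, a container-based action declares its entry point in an action.yml metadata file at the root of the action’s repository. A minimal sketch (the name and description below are made up for illustration):&lt;/p&gt;

```yaml
# action.yml - metadata for a Docker-based action (illustrative sketch)
name: 'My Docker Action'
description: 'Runs a script inside a container built from the local Dockerfile'
runs:
  using: 'docker'
  image: 'Dockerfile'
```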

&lt;p&gt;There’s a Dockerfile in our cloned repository, so let's try to apply our new knowledge. We’ll build the image of the ZPM registry and push it into the Google Container Registry. In parallel, we’ll create the Kubernetes cluster in which this image will run, and we’ll use Kubernetes manifests to do this. &lt;/p&gt;

&lt;p&gt;Here’s what our plan, in a language that GitHub understands, will look like (but keep in mind that this is a bird's eye view with many lines omitted for simplification, so don’t actually use this config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Workflow description
# Trigger condition. In this case, only on push to ‘master’ branch
on:
  push:
    branches:
    - master
# Here we describe environment variables available
# to all further jobs and their steps.
# These variables can be set on the GitHub Secrets page
# and referenced via ${{ secrets.SOME_NAME }}
env:
  PROJECT_ID: ${{ secrets.PROJECT_ID }}

# Define the list of jobs. Job and step names can be arbitrary,
# but it’s better to make them meaningful
jobs:
  gcloud-setup-and-build-and-publish-to-GCR:
    name: Setup gcloud utility, Build ZPM image and Publish it to Container Registry
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Setup gcloud cli
    - name: Configure docker to use the gcloud as a credential helper
    - name: Build ZPM image
    - name: Publish ZPM image to Google Container Registry

  gke-provisioner:
    name: Provision GKE cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Terraform init
    - name: Terraform validate
    - name: Terraform plan
    - name: Terraform apply

  kubernetes-deploy:
    name: Deploy Kubernetes manifests to GKE cluster
    needs:
    - gcloud-setup-and-build-and-publish-to-GCR
    - gke-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
    - name: Replace placeholders with values in statefulset template
    - name: Setup gcloud cli
    - name: Apply Kubernetes manifests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the skeleton of the working config; the muscles, the real actions for each step, are still missing. An action can be a simple console command ("run", or "run |" if there are several commands):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Configure docker to use gcloud as a credential helper
  run: |
    gcloud auth configure-docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
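&lt;p&gt;Under the hood, "gcloud auth configure-docker" registers gcloud as a Docker credential helper by adding entries to ~/.docker/config.json, roughly like this (exact registries vary by gcloud version):&lt;/p&gt;

```json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
```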



&lt;p&gt;You can also launch actions as a module with "uses":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Checkout
  uses: actions/checkout@v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, all jobs run in parallel, and the steps in them are done in sequence. But by using "needs", you can specify that one job should wait for the rest to complete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;needs:
- gcloud-setup-and-build-and-publish-to-GCR
- gke-provisioner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, in the GitHub Web interface, such waiting jobs appear only when the jobs they’re waiting for are executed.&lt;/p&gt;

&lt;p&gt;The "gke-provisioner" job mentions Terraform, which we examined in &lt;a href="https://community.intersystems.com/post/automating-gke-creation-circleci-builds" rel="noopener noreferrer"&gt;the previous article&lt;/a&gt;. The preliminary settings for its operation in the GCP environment are repeated for convenience in a separate &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/blob/master/Terraform.md" rel="noopener noreferrer"&gt;markdown file&lt;/a&gt;. Here are some additional useful links (&lt;strong&gt;UPD&lt;/strong&gt; - this GitHub Actioh has been superseded by &lt;a href="https://www.terraform.io/docs/github-actions/setup-terraform.html" rel="noopener noreferrer"&gt;hashicorp/setup-terraform&lt;/a&gt; from the moment of initial writing):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/github-actions/configuration/apply.html" rel="noopener noreferrer"&gt;Terraform Apply Subcommand documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/hashicorp/terraform-github-actions" rel="noopener noreferrer"&gt;Terraform GitHub Actions repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/github-actions/index.html" rel="noopener noreferrer"&gt;Terraform GitHub Actions documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the "kubernetes-deploy" job, there is a step called "Apply Kubernetes manifests". We’re going to use manifests as mentioned in the article &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gcp-kubernetes-cluster-gke-using-circleci" rel="noopener noreferrer"&gt;Deploying InterSystems IRIS Solution into GCP Kubernetes Cluster GKE Using CircleCI&lt;/a&gt;, but with a slight change.&lt;/p&gt;

&lt;p&gt;In the previous articles, the &lt;a href="https://openexchange.intersystems.com/package/objectscript-rest-docker-template" rel="noopener noreferrer"&gt;IRIS application&lt;/a&gt; was stateless. That is, when the pod restarts, all data reverts to its default state. This is great, and it’s often necessary, but for the ZPM registry you need to somehow preserve the packages that were loaded into it, no matter how many times the pod restarts. A Deployment resource allows you to do this, of course, but not without limitations.&lt;/p&gt;

&lt;p&gt;For stateful applications, it’s better to choose the &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noopener noreferrer"&gt;StatefulSet&lt;/a&gt; resource. Pros and cons can be found in the GKE documentation topic on &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets" rel="noopener noreferrer"&gt;Deployments vs. StatefulSets&lt;/a&gt; and the blog post &lt;a href="https://akomljen.com/kubernetes-persistent-volumes-with-deployment-and-statefulset/" rel="noopener noreferrer"&gt;Kubernetes Persistent Volumes with Deployment and StatefulSet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The StatefulSet resource is in &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/tree/master/k8s" rel="noopener noreferrer"&gt;the repository&lt;/a&gt;. Here’s the part that’s important for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeClaimTemplates:
- metadata:
    name: zpm-registry-volume
    namespace: iris
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code creates a 10GB read/write disk that can be mounted by a single Kubernetes worker node. This disk (and the data on it) will survive the restart of the application. It can also survive the removal of the entire StatefulSet, but for this you need to set the correct &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="noopener noreferrer"&gt;Reclaim Policy&lt;/a&gt;, which we won’t cover here.&lt;/p&gt;
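&lt;p&gt;For the curious, that reclaim behavior is determined by the StorageClass the claim is provisioned from. A hedged sketch of a GCE persistent-disk class that keeps the disk after the claim is deleted (the class name here is made up):&lt;/p&gt;

```yaml
# Illustrative StorageClass: PVs provisioned from it keep their disks
# ("Retain") instead of deleting them ("Delete", the default) when released
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-retain
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
```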

&lt;p&gt;Before breathing life into our workflow, let's add a few more variables to &lt;a href="https://developer.github.com/v3/actions/secrets/" rel="noopener noreferrer"&gt;GitHub Secrets&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzh1c6w4dpx9mf8v0b3p9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzh1c6w4dpx9mf8v0b3p9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following table explains the meaning of these settings (&lt;a href="https://cloud.google.com/iam/docs/creating-managing-service-account-keys" rel="noopener noreferrer"&gt;service account keys&lt;/a&gt; are also present):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GCR_LOCATION&lt;/td&gt;
&lt;td&gt;Global GCR location&lt;/td&gt;
&lt;td&gt;eu.gcr.io&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GKE_CLUSTER&lt;/td&gt;
&lt;td&gt;GKE cluster name&lt;/td&gt;
&lt;td&gt;dev-cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GKE_ZONE&lt;/td&gt;
&lt;td&gt;Zone to store an image&lt;/td&gt;
&lt;td&gt;europe-west1-b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IMAGE_NAME&lt;/td&gt;
&lt;td&gt;Image registry name&lt;/td&gt;
&lt;td&gt;zpm-registry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PROJECT_ID&lt;/td&gt;
&lt;td&gt;GCP Project ID&lt;/td&gt;
&lt;td&gt;possible-symbol-254507&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SERVICE_ACCOUNT_KEY&lt;/td&gt;
&lt;td&gt;JSON key GitHub uses to connect to GCP. &lt;strong&gt;Important&lt;/strong&gt;: it has to be base64-encoded (see note below)&lt;/td&gt;
&lt;td&gt;ewogICJ0eXB...&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TF_SERVICE_ACCOUNT_KEY&lt;/td&gt;
&lt;td&gt;JSON key Terraform uses to connect to GCP (see note below)&lt;/td&gt;
&lt;td&gt;{ ... }&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For SERVICE_ACCOUNT_KEY, if your JSON key file is named, for instance, key.json, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ base64 key.json | tr -d '\n'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
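&lt;p&gt;Before pasting the result into GitHub Secrets, you can sanity-check that the single-line encoding decodes back to the original key. A local dry run (the key.json content below is a stand-in for a real key):&lt;/p&gt;

```shell
# Encode the key as a single line, then verify it decodes back unchanged
printf '{"type": "service_account"}' > key.json   # stand-in for a real key
ENCODED=$(base64 key.json | tr -d '\n')
echo -n "$ENCODED" | base64 -d    # prints the original JSON
```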



&lt;p&gt;For TF_SERVICE_ACCOUNT_KEY, note that its rights are described in &lt;a href="https://community.intersystems.com/post/automating-gke-creation-circleci-builds" rel="noopener noreferrer"&gt;Automating GKE creation on CircleCI builds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One small note about SERVICE_ACCOUNT_KEY: if you, like me, initially forgot to convert it to base64 format, you’ll see a screen like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff0uzt5porn4i8mtq8hp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ff0uzt5porn4i8mtq8hp2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we’ve looked at the workflow backbone and added the necessary variables, we’re ready to examine the full version of the workflow (&lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/blob/master/.github/workflows/workflow.yaml" rel="noopener noreferrer"&gt;&amp;lt;root_repo_dir&amp;gt;/.github/workflows/workflow.yaml&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build ZPM-registry image, deploy it to GCR. Run GKE. Run ZPM-registry in GKE
on:
  push:
    branches:
    - master
# Environment variables.
# ${{ secrets }} are taken from GitHub -&amp;gt; Settings -&amp;gt; Secrets
# ${{ github.sha }} is the commit hash
env:
  PROJECT_ID: ${{ secrets.PROJECT_ID }}
  SERVICE_ACCOUNT_KEY: ${{ secrets.SERVICE_ACCOUNT_KEY }}
  GOOGLE_CREDENTIALS: ${{ secrets.TF_SERVICE_ACCOUNT_KEY }}
  GITHUB_SHA: ${{ github.sha }}
  GCR_LOCATION: ${{ secrets.GCR_LOCATION }}
  IMAGE_NAME: ${{ secrets.IMAGE_NAME }}
  GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }}
  GKE_ZONE: ${{ secrets.GKE_ZONE }}
  K8S_NAMESPACE: iris
  STATEFULSET_NAME: zpm-registry

jobs:
  gcloud-setup-and-build-and-publish-to-GCR:
    name: Setup gcloud utility, Build ZPM image and Publish it to Container Registry
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Setup gcloud cli
      uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
      with:
        version: '275.0.0'
        service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}

    - name: Configure docker to use the gcloud as a credential helper
      run: |
        gcloud auth configure-docker

    - name: Build ZPM image
      run: |
        docker build -t ${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}:${GITHUB_SHA} .

    - name: Publish ZPM image to Google Container Registry
      run: |
        docker push ${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}:${GITHUB_SHA}

  gke-provisioner:
  # Inspired by:
  ## https://www.terraform.io/docs/github-actions/getting-started.html
  ## https://github.com/hashicorp/terraform-github-actions
    name: Provision GKE cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Terraform init
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'init'
        tf_actions_working_dir: 'terraform'

    - name: Terraform validate
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'validate'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.17
        tf_actions_subcommand: 'apply'
        tf_actions_working_dir: 'terraform'

  kubernetes-deploy:
    name: Deploy Kubernetes manifests to GKE cluster
    needs:
    - gcloud-setup-and-build-and-publish-to-GCR
    - gke-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Replace placeholders with values in statefulset template
      working-directory: ./k8s/
      run: |
        cat statefulset.tpl |\
        sed "s|DOCKER_REPO_NAME|${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}|" |\
        sed "s|DOCKER_IMAGE_TAG|${GITHUB_SHA}|" &amp;gt; statefulset.yaml
        cat statefulset.yaml

    - name: Setup gcloud cli
      uses: GoogleCloudPlatform/github-actions/setup-gcloud@master
      with:
        version: '275.0.0'
        service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}

    - name: Apply Kubernetes manifests
      working-directory: ./k8s/
      run: |
        gcloud container clusters get-credentials ${GKE_CLUSTER} --zone ${GKE_ZONE} --project ${PROJECT_ID}
        kubectl apply -f namespace.yaml
        kubectl apply -f service.yaml
        kubectl apply -f statefulset.yaml
        kubectl -n ${K8S_NAMESPACE} rollout status statefulset/${STATEFULSET_NAME}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
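&lt;p&gt;The substitution performed by the "Replace placeholders with values in statefulset template" step is easy to try locally on a one-line stand-in for statefulset.tpl (all values below are made up):&lt;/p&gt;

```shell
# Dry run of the workflow's placeholder substitution, with made-up values
GCR_LOCATION=eu.gcr.io
PROJECT_ID=my-project
IMAGE_NAME=zpm-registry
GITHUB_SHA=abc123

echo "image: DOCKER_REPO_NAME:DOCKER_IMAGE_TAG" |
  sed "s|DOCKER_REPO_NAME|${GCR_LOCATION}/${PROJECT_ID}/${IMAGE_NAME}|" |
  sed "s|DOCKER_IMAGE_TAG|${GITHUB_SHA}|"
# prints: image: eu.gcr.io/my-project/zpm-registry:abc123
```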



&lt;p&gt;Before you push to the repository, take the Terraform code from the &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/tree/master/terraform" rel="noopener noreferrer"&gt;terraform directory of the github-gke-zpm-registry repository&lt;/a&gt;, replace the placeholders as noted in the main.tf comments, and put it inside your terraform/ directory. Remember that Terraform uses a remote bucket that has to be created beforehand, as described in &lt;a href="https://community.intersystems.com/post/automating-gke-creation-circleci-builds" rel="noopener noreferrer"&gt;Automating GKE creation on CircleCI builds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Likewise, take the Kubernetes code from the &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/tree/master/k8s" rel="noopener noreferrer"&gt;k8s directory of the same repository&lt;/a&gt; and put it inside your k8s/ directory. These sources were omitted from this article to save space.&lt;/p&gt;

&lt;p&gt;Then you can trigger a deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;/
$ git add .github/workflows/workflow.yaml k8s/ terraform/
$ git commit -m "Add GitHub Actions deploy"
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pushing the changes to our forked ZPM repository, we can take a look at the implementation of the steps we described:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs11oi3zjyghcf0uq0vz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs11oi3zjyghcf0uq0vz2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqgknxzvahtcwvf2l03q0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fqgknxzvahtcwvf2l03q0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are only two jobs so far. The third, "kubernetes-deploy", will appear after the completion of those on which it depends.&lt;br&gt;
Note that building and publishing Docker images requires some time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuyfatqejqcync2vf11hc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuyfatqejqcync2vf11hc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you can check the result in the &lt;a href="https://cloud.google.com/container-registry/docs" rel="noopener noreferrer"&gt;GCR console&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9gnyj73i403m1i00hz9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9gnyj73i403m1i00hz9g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The "Provision GKE cluster" job takes longer the first time as it creates the GKE cluster. You’ll see a waiting screen for a few minutes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs2xzf34jq3crhubrfd8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fs2xzf34jq3crhubrfd8j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But, finally, it finishes and you can be happy:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F64x9sfd08fhu1focujyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F64x9sfd08fhu1focujyb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes resources are also happy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud container clusters get-credentials &amp;lt;CLUSTER_NAME&amp;gt; --zone &amp;lt;GKE_ZONE&amp;gt; --project &amp;lt;PROJECT_ID&amp;gt;
$ kubectl get nodes
NAME                                                  STATUS   ROLES    AGE     VERSION
gke-dev-cluster-dev-cluster-node-pool-98cef283-dfq2   Ready    &amp;lt;none&amp;gt;   8m51s   v1.13.11-gke.23

$ kubectl -n iris get po
NAME            READY  STATUS   RESTARTS   AGE
zpm-registry-0  1/1    Running  0          8m25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s a good idea to wait for the Running status before checking other resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris get sts
NAME           READY   AGE
zpm-registry   1/1     8m25s


$ kubectl -n iris get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)           AGE
zpm-registry   LoadBalancer   10.23.248.234   104.199.6.32   52773:32725/TCP   8m29s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even the disks are happy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pv -oyaml | grep pdName
  pdName: gke-dev-cluster-5fe434-pvc-5db4f5ed-4055-11ea-a6ab-42010af00286
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv7y3kcwwjsi25r0tp4xp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv7y3kcwwjsi25r0tp4xp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And happiest of all is the ZPM registry (using the EXTERNAL-IP value from &lt;em&gt;"kubectl -n iris get svc"&lt;/em&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -u _system:SYS 104.199.6.32:52773/registry/_ping
{"message":"ping"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sending the login and password over plain HTTP is a shame, but I hope to do something about this in future articles.&lt;/p&gt;
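&lt;p&gt;To see why this matters: HTTP Basic authentication only base64-encodes the credentials, so anyone who can observe the traffic can recover them. A quick local illustration:&lt;/p&gt;

```shell
# Basic auth is encoding, not encryption: the Authorization header carries
# base64("user:password"), which is trivially reversible
printf '_system:SYS' | base64            # X3N5c3RlbTpTWVM=
printf 'X3N5c3RlbTpTWVM=' | base64 -d    # _system:SYS
```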

&lt;p&gt;By the way, you can find more information about endpoints in &lt;a href="https://github.com/intersystems-community/zpm-registry/blob/master/src/cls/ZPM/Registry.cls" rel="noopener noreferrer"&gt;the source code&lt;/a&gt;: see the XData UrlMap section.&lt;/p&gt;

&lt;p&gt;We can test this repo by pushing a package to it. Conveniently, you can push just a direct GitHub link. Let’s try it with the &lt;a href="https://openexchange.intersystems.com/package/ObjectScript-Math" rel="noopener noreferrer"&gt;math library for InterSystems ObjectScript&lt;/a&gt;. Run this from your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[]
$ curl -i -XPOST -u _system:SYS -H "Content-Type: application/json" -d '{"repository":"https://github.com/psteiwer/ObjectScript-Math"}' 'http://104.199.6.32:52773/registry/package'
HTTP/1.1 200 OK
$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[{"name":"objectscript-math","versions":["0.0.4"]}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the pod to make sure the data survives a restart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris scale --replicas=0 sts zpm-registry
$ kubectl -n iris scale --replicas=1 sts zpm-registry
$ kubectl -n iris get po -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
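&lt;p&gt;As a side note, on Kubernetes 1.15 and later there is a dedicated command for this (the cluster in this walkthrough runs 1.13, hence the scaling trick):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n iris rollout restart sts zpm-registry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;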



&lt;p&gt;Wait for the pod to be running again. Then, hopefully, you’ll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -XGET -u _system:SYS 104.199.6.32:52773/registry/packages/-/all
[{"name":"objectscript-math","versions":["0.0.4"]}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s install this math package from our new registry on a local IRIS instance, using an image where the ZPM client is already installed:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ docker exec -it $(docker run -d intersystemsdc/iris-community:2019.4.0.383.0-zpm) bash

$ iris session iris
USER&amp;gt;write ##class(Math.Math).Factorial(5)
 *Math.Math

USER&amp;gt;zpm
zpm: USER&amp;gt;list

zpm: USER&amp;gt;repo -list
registry
    Source:     https://pm.community.intersystems.com
    Enabled?    Yes
    Available?    &lt;b&gt;Yes&lt;/b&gt;
    Use for Snapshots?    Yes
    Use for Prereleases?    Yes

zpm: USER&amp;gt;repo -n registry -r -url http://104.199.6.32:52773/registry/ -user _system -pass SYS

zpm: USER&amp;gt;repo -list                                                                          
registry
    Source:     http://104.199.6.32:52773/registry/
    Enabled?    Yes
    Available?    &lt;b&gt;Yes&lt;/b&gt;
    Use for Snapshots?    Yes
    Use for Prereleases?    Yes
    Username:     _system
    Password:     ***

zpm: USER&amp;gt;repo -list-modules -n registry
objectscript-math 0.0.4

zpm: USER&amp;gt;install objectscript-math
[objectscript-math]    Reload START
...
[objectscript-math]    Activate SUCCESS

zpm: USER&amp;gt;quit

USER&amp;gt;write ##class(Math.Math).Factorial(5)                                               
120
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Congratulations!&lt;br&gt;
Don’t forget to remove the GKE cluster when you don’t need it anymore:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh1qrl61873ssqpw9o9cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh1qrl61873ssqpw9o9cg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;There are not many references to GitHub Actions within the InterSystems community. I found only &lt;a href="https://community.intersystems.com/post/behind-scene-isc-tar-project-and-story-about-continuous-integration-using-github-actions" rel="noopener noreferrer"&gt;one mention&lt;/a&gt; from guru &lt;a href="https://community.intersystems.com/user/dmitriy-maslennikov" rel="noopener noreferrer"&gt;@mdaimor&lt;/a&gt;. But GitHub Actions can be quite useful for developers storing code on GitHub. Native actions are supported only in JavaScript, but this may be dictated by a desire to describe steps in code that most developers are familiar with. In any case, you can use Docker-based actions if you don’t know JavaScript.&lt;/p&gt;

&lt;p&gt;Regarding the GitHub Actions UI, along the way I discovered a couple of inconveniences you should be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You cannot check what’s going on inside a job step until it finishes; an in-progress step, like "Terraform apply", isn’t clickable.&lt;/li&gt;
&lt;li&gt;While you can rerun a failed workflow, I didn’t find a way to rerun a successful one.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A workaround for the second point is to create an empty commit and push it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit --allow-empty -m "trigger GitHub actions" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
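&lt;p&gt;As a quick local illustration in a throwaway repository (the user name and email below are placeholders), an empty commit is recorded like any other, so pushing one re-triggers push-based workflows:&lt;/p&gt;

```shell
# Create a scratch repository and record an empty commit; in a real repo
# you would follow this with "git push" to re-trigger the workflow.
# (user.name / user.email here are placeholder identities)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "trigger GitHub actions"
git log --oneline
```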



&lt;p&gt;You can learn more about this in the StackOverflow question &lt;a href="https://stackoverflow.com/questions/56435547/how-do-i-re-run-github-actions" rel="noopener noreferrer"&gt;How do I re-run Github Actions&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;Continue reading with the next part: &lt;a href="https://community.intersystems.com/post/adding-tls-and-dns-iris-based-services-deployed-google-kubernetes-engine" rel="noopener noreferrer"&gt;Adding TLS and DNS to IRIS-based Services Deployed on Google Kubernetes Engine&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>intersystems</category>
      <category>github</category>
    </item>
    <item>
      <title>Deploying an InterSystems IRIS Solution on EKS using GitHub Actions</title>
      <dc:creator>Mikhail Khomenko</dc:creator>
      <pubDate>Thu, 09 Jul 2020 20:40:43 +0000</pubDate>
      <link>https://dev.to/intersystems/deploying-an-intersystems-iris-solution-on-eks-using-github-actions-5ef7</link>
      <guid>https://dev.to/intersystems/deploying-an-intersystems-iris-solution-on-eks-using-github-actions-5ef7</guid>
      <description>&lt;p&gt;Imagine you want to see what InterSystems can give you in terms of data analytics. You studied the &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=D2GS" rel="noopener noreferrer"&gt;theory&lt;/a&gt; and now you want some practice. Fortunately, InterSystems provides a project that contains some good examples: &lt;a href="https://openexchange.intersystems.com/package/Samples-BI-2" rel="noopener noreferrer"&gt;Samples BI&lt;/a&gt;. Start with the README file, skipping anything associated with Docker, and go straight to the step-by-step installation. Launch a virtual instance, &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_unix" rel="noopener noreferrer"&gt;install IRIS&lt;/a&gt; there, follow the instructions for installing Samples BI, and then impress the boss with beautiful charts and tables. So far so good.&lt;br&gt;
Inevitably, though, you’ll need to make changes.&lt;/p&gt;

&lt;p&gt;It turns out that keeping a virtual machine on your own has some drawbacks, and it’s better to keep it with a cloud provider. Amazon seems solid, and you create an AWS account (&lt;a href="https://aws.amazon.com/free/" rel="noopener noreferrer"&gt;free&lt;/a&gt; to start), read that &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html" rel="noopener noreferrer"&gt;using the root user identity for everyday tasks is evil&lt;/a&gt;, and create a regular &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html" rel="noopener noreferrer"&gt;IAM user with admin permissions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With a little clicking, you create your own VPC network, subnets, and a virtual EC2 instance, and you add a security group that opens the IRIS web port (52773) and SSH port (22) to yourself. Repeat the installation of IRIS and Samples BI. This time, use Bash scripting, or Python if you prefer. Again, impress the boss.&lt;/p&gt;

&lt;p&gt;But the ubiquitous DevOps movement leads you to start reading about &lt;a href="https://www.martinfowler.com/bliki/InfrastructureAsCode.html" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt; and you want to implement it. You choose Terraform, since it’s well-known to everyone and its approach is quite universal: suitable, with minor adjustments, for various cloud providers. You describe the infrastructure in &lt;a href="https://github.com/hashicorp/hcl/blob/hcl2/hclsyntax/spec.md" rel="noopener noreferrer"&gt;HCL language&lt;/a&gt;, and translate the installation steps for IRIS and Samples BI to &lt;a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt;. Then you create one more IAM user to enable Terraform to work. Run it all. Get a bonus at work.&lt;/p&gt;

&lt;p&gt;Gradually you come to the conclusion that in our age of &lt;a href="https://martinfowler.com/articles/microservices.html" rel="noopener noreferrer"&gt;microservices&lt;/a&gt; it’s a shame not to use Docker, especially since InterSystems tells you &lt;a href="https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ADOCK_iris" rel="noopener noreferrer"&gt;how&lt;/a&gt;. You return to the Samples BI installation guide and read the lines about Docker, which don’t seem to be complicated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker pull intersystemsdc/iris-community:2019.4.0.383.0-zpm
$ docker run --name irisce -d --publish 52773:52773 intersystemsdc/iris-community:2019.4.0.383.0-zpm
$ docker exec -it irisce iris session iris
USER&amp;gt;zpm
zpm: USER&amp;gt;install samples-bi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After directing your browser to &lt;a href="http://localhost:52773/csp/user/_DeepSee.UserPortal.Home.zen?$NAMESPACE=USER" rel="noopener noreferrer"&gt;http://localhost:52773/csp/user/_DeepSee.UserPortal.Home.zen?$NAMESPACE=USER&lt;/a&gt;, you again go to the boss and get a day off for a nice job.&lt;/p&gt;

&lt;p&gt;You then begin to understand that “docker run” is just the beginning, and you need to use at least &lt;a href="https://docs.docker.com/compose/compose-file/" rel="noopener noreferrer"&gt;docker-compose&lt;/a&gt;. Not a problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat docker-compose.yml
version: "3.7"
services:
  irisce:
    container_name: irisce
    image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
    ports:
    - 52773:52773
$ docker rm -f irisce # We don’t need the previous container
$ docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So you install Docker and docker-compose with Ansible, and then just run the container, which will download an image if it’s not already present on the machine. Then you install Samples BI.&lt;/p&gt;

&lt;p&gt;You certainly like Docker, because it’s a cool and simple interface to various &lt;a href="https://medium.com/@nagarwal/understanding-the-docker-internals-7ccb052ce9fe" rel="noopener noreferrer"&gt;kernel stuff&lt;/a&gt;. You start using Docker elsewhere, often launching more than one container, and you find that containers frequently need to communicate with each other, which leads to reading about how to manage multiple containers.&lt;/p&gt;

&lt;p&gt;And you come to &lt;a href="https://kubernetes.io/docs/concepts/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;One option to quickly switch from docker-compose to Kubernetes is to use &lt;a href="https://kompose.io/" rel="noopener noreferrer"&gt;kompose&lt;/a&gt;. Personally, I prefer to simply copy Kubernetes manifests from manuals and then edit them for my needs, but kompose does a good job at its small task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kompose convert -f docker-compose.yml
INFO Kubernetes file "irisce-service.yaml" created
INFO Kubernetes file "irisce-deployment.yaml" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have the deployment and service files that can be sent to some Kubernetes cluster. You find out that you can install &lt;a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;, which lets you run a single-node Kubernetes cluster and is just what you need at this stage. After a day or two of playing with the minikube sandbox, you’re ready to use a real live &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf" rel="noopener noreferrer"&gt;Kubernetes deployment somewhere in the AWS cloud&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Getting Set Up
&lt;/h4&gt;

&lt;p&gt;So, let’s do this together. At this point we'll make a couple of assumptions:&lt;br&gt;
First, we assume you have an AWS account, you &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html" rel="noopener noreferrer"&gt;know its ID&lt;/a&gt;, and you don’t use root credentials. You create an IAM user (let's call it "my-user") with &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html" rel="noopener noreferrer"&gt;administrator rights&lt;/a&gt; and programmatic access only and store its credentials. You also create another IAM user, called "terraform", with the same permissions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsav5sdzn1vfyf4ax6uck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsav5sdzn1vfyf4ax6uck.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Acting on this user’s behalf, Terraform will go to your AWS account and create and delete the necessary resources. Both users have such extensive rights only because this is a demo. You save the credentials for both IAM users locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat ~/.aws/credentials
[terraform]
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = ABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890123
[my-user]
aws_access_key_id = TSRQPONMLKJIHGFEDCBA
aws_secret_access_key = TSRQPONMLKJIHGFEDCBA01234567890123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Don’t copy and paste the credentials from above. They are provided here as an example and no longer exist. Edit the ~/.aws/credentials file and enter your own records.&lt;/p&gt;

&lt;p&gt;Second, we’ll use the dummy AWS Account ID (01234567890) for the article, and the AWS region “eu-west-1.” Feel free to use &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html" rel="noopener noreferrer"&gt;another region&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Third, we assume you’re aware that &lt;a href="https://aws.amazon.com/pricing/" rel="noopener noreferrer"&gt;AWS is not free&lt;/a&gt; and you’ll have to pay for resources used.&lt;/p&gt;

&lt;p&gt;Next, you’ve installed the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html" rel="noopener noreferrer"&gt;AWS CLI utility&lt;/a&gt; for command-line communication with AWS. You can try to use &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html" rel="noopener noreferrer"&gt;aws2&lt;/a&gt;, but you’ll need to specifically set aws2 usage in your kube config file, as described &lt;a href="https://github.com/weaveworks/eksctl/issues/1562" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You’ve also installed the &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noopener noreferrer"&gt;kubectl utility&lt;/a&gt; for command-line communication with AWS Kubernetes.&lt;/p&gt;

&lt;p&gt;And you’ve installed the &lt;a href="https://kompose.io/installation/" rel="noopener noreferrer"&gt;kompose utility&lt;/a&gt; for converting docker-compose.yml into Kubernetes manifests.&lt;/p&gt;

&lt;p&gt;Finally, you’ve created an empty GitHub repository and cloned it to your host. We’ll refer to its root directory as &amp;lt;root_repo_dir&amp;gt;. In this repository, we’ll create and fill three directories: .github/workflows/, k8s/, and terraform/.&lt;/p&gt;
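&lt;p&gt;From the repository root, the three directories can be created in one command:&lt;/p&gt;

```shell
# .github/workflows/ will hold the GitHub Actions workflow definitions,
# k8s/ the Kubernetes manifests, and terraform/ the Terraform code
mkdir -p .github/workflows k8s terraform
```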

&lt;p&gt;Note that all the relevant code is duplicated in the &lt;a href="https://github.com/intersystems-community/github-eks-samples-bi" rel="noopener noreferrer"&gt;github-eks-samples-bi&lt;/a&gt; repo to simplify copying and pasting.&lt;br&gt;
Let’s continue.&lt;/p&gt;
&lt;h4&gt;
  
  
  AWS EKS Provisioning
&lt;/h4&gt;

&lt;p&gt;We already met EKS in the article &lt;a href="https://community.intersystems.com/post/deploying-simple-iris-based-web-application-using-amazon-eks" rel="noopener noreferrer"&gt;Deploying a Simple IRIS-Based Web Application Using Amazon EKS&lt;/a&gt;. At that time, we created a cluster semi-automatically. That is, we described the cluster in a file, and then manually launched the &lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;eksctl utility&lt;/a&gt; from a local machine, which created the cluster according to our description. &lt;/p&gt;

&lt;p&gt;eksctl was developed for creating EKS clusters and it’s good for a &lt;a href="http://en.wikipedia.org/wiki/Proof_of_concept" rel="noopener noreferrer"&gt;proof-of-concept&lt;/a&gt; implementation, but for everyday usage it’s better to use something more universal, such as Terraform. A great resource, &lt;a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="noopener noreferrer"&gt;AWS EKS Introduction&lt;/a&gt;, explains the Terraform configuration needed to create an EKS cluster. An hour or two spent getting acquainted with it will not be a waste of time.&lt;/p&gt;

&lt;p&gt;You can play with Terraform locally. To do so, you’ll need a binary (we’ll use the latest version for Linux at the time of this writing, &lt;a href="https://releases.hashicorp.com/terraform/0.12.20/" rel="noopener noreferrer"&gt;0.12.20&lt;/a&gt;, &lt;strong&gt;Update&lt;/strong&gt; - the newer version &lt;a href="https://www.hashicorp.com/blog/announcing-the-terraform-0-13-beta/" rel="noopener noreferrer"&gt;0.13&lt;/a&gt; was announced recently), and the IAM user "terraform" with sufficient rights for Terraform to go to AWS. Create the directory &amp;lt;root_repo_dir&amp;gt;/terraform/ to store the Terraform code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mkdir &amp;lt;root_repo_dir&amp;gt;/terraform
$ cd &amp;lt;root_repo_dir&amp;gt;/terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can create one or more &lt;em&gt;.tf&lt;/em&gt; files (they are merged at startup). Just copy and paste the code examples from &lt;a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="noopener noreferrer"&gt;AWS EKS Introduction&lt;/a&gt; and then run something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export AWS_PROFILE=terraform
$ export AWS_REGION=eu-west-1
$ terraform init
$ terraform plan -out eks.plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You may encounter some errors. If so, play a little with debug mode, but remember to turn it off later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export TF_LOG=debug
$ terraform plan -out eks.plan
&amp;lt;many-many lines here&amp;gt;
$ unset TF_LOG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This experience will be useful, and most likely you’ll get an EKS cluster launched (use “terraform apply” for that). Check it out in the AWS console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvo8b0klq38o16bs272ff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvo8b0klq38o16bs272ff.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clean up when you get bored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then go to the next level and start using &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/8.2.0" rel="noopener noreferrer"&gt;the Terraform EKS module&lt;/a&gt;, especially since it’s based on the same &lt;a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="noopener noreferrer"&gt;EKS introduction&lt;/a&gt;. In the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/basic" rel="noopener noreferrer"&gt;examples/ directory&lt;/a&gt; you’ll see how to use it. You’ll also find &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples" rel="noopener noreferrer"&gt;other examples&lt;/a&gt; there.&lt;/p&gt;

&lt;p&gt;We simplified the examples somewhat. Here’s the main file in which the VPC creation and EKS creation modules are called:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/terraform/main.tf
terraform {
  required_version = "&amp;gt;= 0.12.0"
  backend "s3" {
    bucket         = "eks-github-actions-terraform"
    key            = "terraform-dev.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "eks-github-actions-terraform-lock"
  }
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "1.10.0"
}

locals {
  vpc_name             = "dev-vpc"
  vpc_cidr             = "10.42.0.0/16"
  private_subnets      = ["10.42.1.0/24", "10.42.2.0/24"]
  public_subnets       = ["10.42.11.0/24", "10.42.12.0/24"]
  cluster_name         = "dev-cluster"
  cluster_version      = "1.14"
  worker_group_name    = "worker-group-1"
  instance_type        = "t2.medium"
  asg_desired_capacity = 1
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

data "aws_availability_zones" "available" {
}

module "vpc" {
  source               = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc?ref=master"

  name                 = local.vpc_name
  cidr                 = local.vpc_cidr
  azs                  = data.aws_availability_zones.available.names
  private_subnets      = local.private_subnets
  public_subnets       = local.public_subnets
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb" = "1"
  }
}

module "eks" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-eks?ref=master"
  cluster_name     = local.cluster_name
  cluster_version  = local.cluster_version
  vpc_id           = module.vpc.vpc_id
  subnets          = module.vpc.private_subnets
  write_kubeconfig = false

  worker_groups = [
    {
      name                 = local.worker_group_name
      instance_type        = local.instance_type
      asg_desired_capacity = local.asg_desired_capacity
    }
  ]

  map_accounts = var.map_accounts
  map_roles    = var.map_roles
  map_users    = var.map_users
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s look a little more closely at the &lt;em&gt;"terraform"&lt;/em&gt; block in main.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 0.12.0"
  backend "s3" {
    bucket         = "eks-github-actions-terraform"
    key            = "terraform-dev.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "eks-github-actions-terraform-lock"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we indicate that we require Terraform 0.12 or later (&lt;a href="https://www.hashicorp.com/blog/announcing-terraform-0-12/" rel="noopener noreferrer"&gt;much has changed&lt;/a&gt; compared with earlier versions), and also that Terraform shouldn’t store its state locally, but rather remotely, in an S3 bucket.&lt;/p&gt;

&lt;p&gt;It’s convenient when the Terraform code can be updated from different places by different people, which means the state must be locked while one run is changing it, so we added a lock using a &lt;a href="https://docs.aws.amazon.com/dynamodb/index.html" rel="noopener noreferrer"&gt;DynamoDB table&lt;/a&gt;. Read more about locks on the &lt;a href="https://www.terraform.io/docs/state/locking.html" rel="noopener noreferrer"&gt;State Locking&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;Since the name of a bucket must be unique across all of AWS, the name “eks-github-actions-terraform” won’t work for you. Please think up your own and make sure it’s not already taken (you should get a &lt;em&gt;NoSuchBucket&lt;/em&gt; error when checking):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws s3 ls s3://my-bucket
An error occurred (AllAccessDisabled) when calling the ListObjectsV2 operation: All access to this object has been disabled
$ aws s3 ls s3://my-bucket-with-name-that-impossible-to-remember
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having come up with a name, create the bucket (we use the IAM user "terraform" here; it has administrator rights, so it can create a bucket) and enable versioning for it (which will save your nerves in the event of a configuration error):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws s3 mb s3://eks-github-actions-terraform --region eu-west-1
make_bucket: eks-github-actions-terraform
$ aws s3api put-bucket-versioning --bucket eks-github-actions-terraform --versioning-configuration Status=Enabled
$ aws s3api get-bucket-versioning --bucket eks-github-actions-terraform
{
  "Status": "Enabled"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With DynamoDB, a globally unique name is not needed, but you do need to create the table first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws dynamodb create-table                       \
  --region eu-west-1                              \
  --table-name eks-github-actions-terraform-lock  \
  --attribute-definitions AttributeName=LockID,AttributeType=S              \
  --key-schema AttributeName=LockID,KeyType=HASH  \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0v4gs4cdd8clhxmvdbyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0v4gs4cdd8clhxmvdbyf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep in mind that, in case of Terraform failure, you may need to remove a lock manually from the AWS console. But be careful when doing so.&lt;/p&gt;
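&lt;p&gt;A safer alternative to deleting the DynamoDB item by hand is Terraform’s built-in command, which takes the lock ID printed in the failed run’s error message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform force-unlock &amp;lt;LOCK_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;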

&lt;p&gt;As for the vpc and eks module blocks in main.tf, referencing a module hosted on GitHub is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git::https://github.com/terraform-aws-modules/terraform-aws-vpc?ref=master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s look at our other two Terraform files (variables.tf and outputs.tf). The first holds our Terraform variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/terraform/variables.tf
variable "region" {
  default = "eu-west-1"
}

variable "map_accounts" {
  description = "Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format."
  type        = list(string)
  default     = []
}

variable "map_roles" {
  description = "Additional IAM roles to add to the aws-auth configmap."
  type = list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))
  default = []
}

variable "map_users" {
  description = "Additional IAM users to add to the aws-auth configmap."
  type = list(object({
    userarn  = string
    username = string
    groups   = list(string)
  }))
  default = [
    {
      userarn  = "arn:aws:iam::01234567890:user/my-user"
      username = "my-user"
      groups   = ["system:masters"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The most important part here is adding the IAM user "my-user" to the map_users variable, but you should use your own account ID here in place of 01234567890.&lt;/p&gt;

&lt;p&gt;What does this do? When you communicate with EKS through the local kubectl client, it sends requests to the Kubernetes API server, and each request goes through authentication and authorization processes so Kubernetes can understand who sent the request and what they can do. So the EKS version of Kubernetes asks AWS IAM for help with user authentication. If the user who sent the request is listed in AWS IAM (we pointed to their ARN here), the request goes to the authorization stage, which EKS processes itself, but according to our settings. Here, we indicated that the IAM user "my-user" is very cool (&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noopener noreferrer"&gt;group "system:masters"&lt;/a&gt;).&lt;/p&gt;
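&lt;p&gt;Under the hood, the EKS module renders this mapping into the &lt;em&gt;aws-auth&lt;/em&gt; ConfigMap in the kube-system namespace. A simplified sketch of what it produces (the real rendered map is also exposed via the module’s &lt;em&gt;config_map_aws_auth&lt;/em&gt; output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::01234567890:user/my-user
      username: my-user
      groups:
        - system:masters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;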

&lt;p&gt;Finally, the outputs.tf file describes what Terraform should print after it finishes a job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/terraform/outputs.tf
output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "config_map_aws_auth" {
  description = "A kubernetes configuration to authenticate to this EKS cluster."
  value       = module.eks.config_map_aws_auth
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This completes the description of the Terraform part. We’ll return soon to see how we’re going to launch these files.&lt;/p&gt;

&lt;h4&gt;
  
  
  Kubernetes Manifests
&lt;/h4&gt;

&lt;p&gt;So far, we’ve taken care of &lt;em&gt;where&lt;/em&gt; to launch the application. Now let’s look at &lt;em&gt;what&lt;/em&gt; to run. &lt;/p&gt;

&lt;p&gt;Recall that we have docker-compose.yml (we renamed the service and added a couple of labels that kompose will use shortly) in the &amp;lt;root_repo_dir&amp;gt;/k8s/ directory:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ cat &amp;lt;root_repo_dir&amp;gt;/k8s/docker-compose.yml
version: "3.7"
services:
  samples-bi:
    container_name: samples-bi
    image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
    ports:
    - 52773:52773
    &lt;b&gt;labels:&lt;/b&gt;
      &lt;b&gt;kompose.service.type: loadbalancer&lt;/b&gt;
      &lt;b&gt;kompose.image-pull-policy: IfNotPresent&lt;/b&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run kompose, then add the bolded parts shown below and delete the generated annotations (to keep things readable):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ kompose convert -f docker-compose.yml --replicas=1
$ cat &amp;lt;root_repo_dir&amp;gt;/k8s/samples-bi-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: samples-bi
  name: samples-bi
spec:
  replicas: 1
  &lt;b&gt;strategy:
    type: Recreate&lt;/b&gt;
  template:
    metadata:
      labels:
        io.kompose.service: samples-bi
    spec:
      containers:
      - image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
        imagePullPolicy: IfNotPresent
        name: samples-bi
        ports:
        - containerPort: 52773
        resources: {}
        &lt;b&gt;lifecycle:
          postStart:
            exec:
              command:
              - /bin/bash
              - -c
              - |
                echo -e "write\nhalt" &amp;gt; test
                until iris session iris &amp;lt; test; do sleep 1; done
                echo -e "zpm\ninstall samples-bi\nquit\nhalt" &amp;gt; samples_bi_install
                iris session iris &amp;lt; samples_bi_install
                rm test samples_bi_install&lt;/b&gt;
      restartPolicy: Always
&lt;/code&gt;&lt;/pre&gt;
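&lt;p&gt;The until-loop in the postStart hook above is a generic wait-until-ready pattern: keep probing until the command exits successfully. In isolation it looks like this (a sketch; a stand-in probe function replaces the "iris session iris &amp;lt; test" call):&lt;/p&gt;

```shell
# Wait-until-ready pattern: retry the probe until it exits 0.
# Here the probe is a stand-in that succeeds on the third call.
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeed on the third attempt
}
until probe; do sleep 1; done
echo "ready after $attempts attempts"
```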

&lt;p&gt;We use the Recreate update strategy, which means the pod is deleted first and then recreated. This is acceptable for demo purposes and lets us use fewer resources.&lt;br&gt;
We also added a postStart hook, which triggers immediately after the container starts. It waits until IRIS is up, then installs the samples-bi package from the default zpm repository.&lt;br&gt;
Now we add the Kubernetes service (also without annotations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/k8s/samples-bi-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: samples-bi
  name: samples-bi
spec:
  ports:
  - name: "52773"
    port: 52773
    targetPort: 52773
  selector:
    io.kompose.service: samples-bi
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, we’ll deploy in the "default" namespace, which will work for the demo.&lt;/p&gt;

&lt;p&gt;Okay, now we know &lt;em&gt;where&lt;/em&gt; and &lt;em&gt;what&lt;/em&gt; we want to run. It remains to see &lt;em&gt;how&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  The GitHub Actions Workflow
&lt;/h4&gt;

&lt;p&gt;Rather than doing everything from scratch, we’ll create a workflow similar to the one described in &lt;a href="https://community.intersystems.com/post/deploying-intersystems-iris-solution-gke-using-github-actions" rel="noopener noreferrer"&gt;Deploying InterSystems IRIS solution on GKE Using GitHub Actions&lt;/a&gt;. This time we don’t have to worry about building a container. The GKE-specific parts are replaced by their EKS equivalents, and several steps retrieve the commit message and use it in conditional steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/.github/workflows/workflow.yaml
name: Provision EKS cluster and deploy Samples BI there
on:
  push:
    branches:
    - master

# Environment variables.
# ${{ secrets }} are taken from GitHub -&amp;gt; Settings -&amp;gt; Secrets
# ${{ github.sha }} is the commit hash
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  CLUSTER_NAME: dev-cluster
  DEPLOYMENT_NAME: samples-bi

jobs:
  eks-provisioner:
    # Inspired by:
    ## https://www.terraform.io/docs/github-actions/getting-started.html
    ## https://github.com/hashicorp/terraform-github-actions
    name: Provision EKS cluster
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Get commit message
      run: |
        echo ::set-env name=commit_msg::$(git log --format=%B -n 1 ${{ github.event.after }})

    - name: Show commit message
      run: echo $commit_msg

    - name: Terraform init
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'init'
        tf_actions_working_dir: 'terraform'

    - name: Terraform validate
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'validate'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform plan for destroy
      if: "contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'plan'
        args: '-destroy -out=./destroy-plan'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'apply'
        tf_actions_working_dir: 'terraform'

    - name: Terraform apply for destroy
      if: "contains(env.commit_msg, '[destroy eks]')"
      uses: hashicorp/terraform-github-actions@master
      with:
        tf_actions_version: 0.12.20
        tf_actions_subcommand: 'apply'
        args: './destroy-plan'
        tf_actions_working_dir: 'terraform'

  kubernetes-deploy:
    name: Deploy Kubernetes manifests to EKS
    needs:
    - eks-provisioner
    runs-on: ubuntu-18.04
    steps:
    - name: Checkout
      uses: actions/checkout@v2

    - name: Get commit message
      run: |
        echo ::set-env name=commit_msg::$(git log --format=%B -n 1 ${{ github.event.after }})

    - name: Show commit message
      run: echo $commit_msg

    - name: Configure AWS Credentials
      if: "!contains(env.commit_msg, '[destroy eks]')"
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}

    - name: Apply Kubernetes manifests
      if: "!contains(env.commit_msg, '[destroy eks]')"
      working-directory: ./k8s/
      run: |
        aws eks update-kubeconfig --name ${CLUSTER_NAME}
        kubectl apply -f samples-bi-service.yaml
        kubectl apply -f samples-bi-deployment.yaml
        kubectl rollout status deployment/${DEPLOYMENT_NAME}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
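&lt;p&gt;The "Get commit message" step simply reads the body of the most recent commit. You can try the same git invocation locally in a throwaway repository (a sketch; in the workflow, ${{ github.event.after }} is used as the ref instead of HEAD):&lt;/p&gt;

```shell
# Create a throwaway repo with one empty commit, then read its message
# the same way the workflow does.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name=ci -c user.email=ci@example.com \
  commit -q --allow-empty -m "GitHub on EKS"
commit_msg=$(git log --format=%B -n 1 HEAD)
echo "$commit_msg"
```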



&lt;p&gt;Of course, we need to store the credentials of the "terraform" user (take them from the ~/.aws/credentials file) as GitHub secrets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc161y9fx5jfm56we315x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc161y9fx5jfm56we315x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
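&lt;p&gt;The if: conditions in the workflow hinge on whether the commit message contains the literal substring "[destroy eks]". GitHub’s contains() expression does a plain substring match (ignoring case), which can be roughly mimicked locally with grep -qF, a case-sensitive fixed-string match (a sketch):&lt;/p&gt;

```shell
# Branch the same way the workflow's if: conditions do.
# grep -F treats "[destroy eks]" literally, not as a character class.
commit_msg="Mr Proper [destroy eks]"
if printf '%s' "$commit_msg" | grep -qF '[destroy eks]'; then
  mode="destroy"
else
  mode="apply"
fi
echo "workflow takes the $mode path"
```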

&lt;p&gt;Notice that this workflow lets us destroy the EKS cluster by pushing a commit whose message contains the phrase "[destroy eks]". Also note that "kubectl apply" won’t run for such a commit.&lt;br&gt;
Let’s run the pipeline, but first create a .gitignore file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;lt;root_repo_dir&amp;gt;/.gitignore
.DS_Store
terraform/.terraform/
terraform/*.plan
terraform/*.json
$ cd &amp;lt;root_repo_dir&amp;gt;
$ git add .github/ k8s/ terraform/ .gitignore
$ git commit -m "GitHub on EKS"
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitor the deployment process on the "Actions" tab of the GitHub repository page, and wait for it to complete successfully.&lt;/p&gt;

&lt;p&gt;The very first workflow run spends about 15 minutes on the "Terraform apply" step, roughly the time it takes to create the cluster. On subsequent runs (if you didn’t delete the cluster), the workflow is much faster. You can check this yourself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd &amp;lt;root_repo_dir&amp;gt;
$ git commit -m "Trigger" --allow-empty
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, it would be nice to check what we did. This time, you can use the credentials of the IAM user "my-user" on your laptop:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
$ export AWS_PROFILE=my-user
$ export AWS_REGION=eu-west-1
$ aws sts get-caller-identity
$ aws eks update-kubeconfig --region=eu-west-1 --name=dev-cluster --alias=dev-cluster
$ kubectl config current-context
dev-cluster

$ kubectl get nodes
NAME                                        STATUS   ROLES    AGE     VERSION
ip-10-42-1-125.eu-west-1.compute.internal   Ready    &lt;none&gt;   6m20s   v1.14.8-eks-b8860f

$ kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
samples-bi-756dddffdb-zd9nw   1/1     Running   0          6m16s

$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP                                                                    PORT(S)           AGE
kubernetes   ClusterIP      172.20.0.1      &lt;none&gt;                                                                         443/TCP           11m
samples-bi   LoadBalancer   172.20.33.235   &lt;b&gt;a2c6f6733557511eab3c302618b2fae2-622862917.eu-west-1.elb.amazonaws.com&lt;/b&gt;   52773:31047/TCP   6m33s

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Go to &lt;a href="http://a2c6f6733557511eab3c302618b2fae2-622862917.eu-west-1.elb.amazonaws.com:52773/csp/user/_DeepSee.UserPortal.Home.zen?%24NAMESPACE=USER" rel="noopener noreferrer"&gt;http://a2c6f6733557511eab3c302618b2fae2-622862917.eu-west-1.elb.amazonaws.com:52773/csp/user/_DeepSee.UserPortal.Home.zen?$NAMESPACE=USER&lt;/a&gt; (substitute your own EXTERNAL-IP in the link), log in as "_system" with the password "SYS", and change the default password. You should see a bunch of BI dashboards:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjbk7rihyv3zrl4ypiasp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjbk7rihyv3zrl4ypiasp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on each one’s arrow to deep dive:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F40zdqi7lb9ydtd8vlpgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F40zdqi7lb9ydtd8vlpgo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remember, if you restart a samples-bi pod, all your changes will be lost. This is intentional behavior as this is a demo. If you need persistence, I've created an example in the &lt;a href="https://github.com/intersystems-community/github-gke-zpm-registry/blob/master/k8s/statefulset.tpl" rel="noopener noreferrer"&gt;github-gke-zpm-registry/k8s/statefulset.tpl&lt;/a&gt; repository.&lt;/p&gt;
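&lt;p&gt;The gist of that approach: a StatefulSet requests a PersistentVolume per pod through volumeClaimTemplates, so the IRIS data survives pod restarts. Schematically it looks like this (a sketch with illustrative names and sizes; see the linked template for the real version):&lt;/p&gt;

```yaml
# Schematic: per-pod persistent storage via volumeClaimTemplates
volumeClaimTemplates:
- metadata:
    name: iris-data        # illustrative name
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi      # illustrative size
```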

&lt;p&gt;When you’re finished, just remove everything you’ve created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git commit -m "Mr Proper [destroy eks]" --allow-empty
$ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;In this article, we replaced the eksctl utility with Terraform to create an EKS cluster, a step toward “codifying” all of your AWS infrastructure.&lt;br&gt;
We showed how easily you can deploy a demo application with a git push, using GitHub Actions and Terraform.&lt;br&gt;
We also added kompose and the pod postStart hook to our toolbox.&lt;br&gt;
We didn’t cover enabling TLS this time; that’s a task we’ll undertake in the near future.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
