In this blog post I want to capture my notes about deploying the application to a Kubernetes cluster using Oracle's cloud offerings a.k.a. Oracle Cloud Infrastructure (OCI). As per the previous blog posts, there's nothing staggeringly new in this blog post, I'm simply capturing my notes on how to do this so I don't have to remember all the details for next time.
In this post we're going to use OCI to set up a Kubernetes cluster, configure an OCI user account so it can be used remotely from my Mac, and then use Docker and kubectl on my Mac to deploy the application to the Kubernetes cluster on OCI.
For this blog post I'm making a few assumptions:
(a) We have access to an OCI tenancy
(b) We have privileges in OCI to create a Kubernetes Cluster and deploy to the Container Registry
For (b), if our user account is an OCI administrator we already have all the privileges required; otherwise we need to follow these OCI documents to grant the appropriate privileges:
- OCI Kubernetes Container Engine - Policy Configuration for Cluster Creation and Deployment
- OCI Container Registry - Policies to Control Repository Access
In OCI there are a number of ways to set up cloud resources like a Kubernetes cluster, such as using the OCI web console, Terraform, a command line client (a.k.a. OCI CLI) and more. We'll take a shortcut (read: cheat) and use the Kubernetes wizard in the OCI console.
First we'll create a compartment to segment the Kubernetes work from other work in our OCI tenancy.
(1) With an OCI tenancy created and a login to the OCI console, via the hamburger menu select Identity & Security then Compartments from the sub menu under the Identity heading.
(2) Create a compartment "k8s-demo-compartment" under the root compartment
(3) Via the hamburger menu select Developer Services then Kubernetes Clusters (OKE) under the Containers & Artifacts heading.
(4) On the left hand side of the screen, under Compartment select the compartment "k8s-demo-compartment" that you created earlier.
(5) Select the Create Cluster button.
(6) In the resulting Create Cluster dialog select the Quick Create option. This option is designed to create all the necessary Kubernetes artifacts including a virtual cloud network (VCN), various gateways, the Kubernetes cluster worker nodes and so on. Select Launch Workflow.
(7) In the resulting Quick Create Cluster screen set the following:
- Change the cluster Name to "k8s-demo-cluster"
- Ensure the Kubernetes API Endpoint is set to Public Endpoint
- Ensure the Kubernetes Worker Nodes is set to Private Workers
- Select the VM.Standard.E3.Flex Shape
- Limit the VM to one OCPU and 16GB memory
- Leave the Number of nodes at 3.
(8) Click Next
The resulting screen will show us the progress in creating the various artifacts needed for the Kubernetes cluster.
(9) Once all work is completed click Close
The OCI console will then place us in the Cluster Details screen. Note the status of the cluster will be Creating. Wait until it reads Active.
(10) Once done, select the Node Pools option in the left-hand menu, and in the resulting page select pool1 and then Nodes in the left-hand menu.
The resulting screen will show us the progress in creating the nodes needed for the Kubernetes cluster. Once all nodes read 'Ready' the Kubernetes cluster is fully set up and ready for the next steps.
In the previous blog we deployed the Slack application to our local Docker instance. Our goal for the rest of this blog is to deploy the application to OCI to run inside the Kubernetes cluster we just created. But there are a few steps we must complete before we can get to this.
First we need to configure an OCI user and an authentication token to allow connections from the various tools we're going to use on our local machine, a Mac in my case, to deploy our application to OCI.
Once we've setup an OCI user with the appropriate settings, we'll then use Docker to deploy our local image to an OCI Container Registry (OCIR).
Finally we'll use a local installation of the Kubernetes kubectl command line tool to hook up the Kubernetes cluster with the OCIR deployed image.
Let's run through those steps now:
As mentioned previously, our next piece of work is to setup our OCI user with an authentication token. This will allow us to connect to OCI from various tools we're going to use on our local machine to deploy our application to OCI.
(11) In the OCI Console in the top right of the screen, select the user Profile button then select the User Settings option.
(12) In the left-hand menu select Auth Tokens.
(13) Select the Generate Token button, and in the resulting dialog give the Description "Token for Kubernetes deployment". Select the Generate Token button.
(14) In the resulting screen select the Show button, and record the hidden token. Think of this as a password we're going to use momentarily to access OCI from your local machine.
Next we'll set up an OCI Container Registry, ready to receive the image we'll push from our local machine in a moment:
(15) In the OCI Console select the hamburger menu, then Developer Services followed by Container Registry under the Containers & Artifacts heading.
(16) In the resulting screen on the left hand side, select the "k8s-demo-compartment" Compartment.
(17) Select the Create Repository button.
(18) In the resulting Create Repository dialog enter a Repository Name of "scratchslackapp" (all in lowercase) and set the Access to Public.
In order to push our local application Docker image to the OCI Container Registry we need to log in to OCI via Docker on our local machine. To do this we will issue the following command:
docker login <tenancy-region-key>.ocir.io --username <tenancy-namespace>/<user-name>
(19) To determine the tenancy-region-key, in the OCI Console click on the user Profile button again in the top right hand of the screen then select the Tenancy option. In the resulting screen we will find the Home Region value, which is the region in which our tenancy sits. For example "US East (Ashburn)".
(20) Once we know the region, look up the corresponding region key in the table in the OCI region documentation; this key is needed in lowercase for the login statement above. For example, for "US East (Ashburn)" the region key is "IAD", which in lowercase becomes "iad".
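As a trivial sanity check, the lowercase conversion can be done in the shell (using the Ashburn example from above):

```shell
# Convert the region key from the documentation table to the lowercase
# form OCIR expects.
REGION_KEY=$(echo "IAD" | tr '[:upper:]' '[:lower:]')
echo "$REGION_KEY"   # prints: iad
```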
(21) Still on the same screen, also note the Object Storage Namespace value. This is the tenancy namespace. For customer purchased tenancies this will be something we've agreed with Oracle beforehand (e.g. 'acme' or some other meaningful tenancy identifier for our org). If we're using the OCI free tier this will be an Oracle generated identifier, such as "kdhkv4bxlaap".
(22) Also record the tenancy Name, as it will be required in (much) later steps in this blog.
(23) Finally, the user name is simply the user name we use to log in to OCI, such as "email@example.com".
Returning to the docker login command and using the examples above, it would look like:
docker login iad.ocir.io --username kdhkv4bxlaap/firstname.lastname@example.org
(25) At this point we'll be prompted for a password. Provide the authentication token we recorded earlier in step (14).
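Pulling steps (19) to (23) together, here's a small sketch that assembles and prints the login command so we can eyeball it before running it for real (the values are just this post's examples; substitute your own):

```shell
# Example values gathered in the earlier steps -- substitute your own.
REGION_KEY="iad"                               # from step (20)
TENANCY_NAMESPACE="kdhkv4bxlaap"               # Object Storage Namespace, step (21)
OCI_USERNAME="firstname.lastname@example.org"  # the OCI login user name

# Print the command rather than running it, so it can be checked first.
echo "docker login ${REGION_KEY}.ocir.io --username ${TENANCY_NAMESPACE}/${OCI_USERNAME}"
```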
Next, within our local Docker repository, we want to tag the image we wish to deploy with the name of the OCIR container repository. We do this by issuing:
docker tag <local-image-name>:latest <tenancy-region-key>.ocir.io/<tenancy-id>/<oci-repository-name>:latest
Using the examples used in the blog so far, this would be:
docker tag scratchslackapp:latest iad.ocir.io/kdhkv4bxlaap/scratchslackapp:latest
From here we push the image to OCI:
docker push iad.ocir.io/kdhkv4bxlaap/scratchslackapp:latest
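The tag and push pair can be assembled from the same values as the login command; this sketch just prints the commands for review rather than running them (all values are this post's examples):

```shell
# Example values from this post -- substitute your own.
REGION_KEY="iad"
TENANCY_NAMESPACE="kdhkv4bxlaap"
LOCAL_IMAGE="scratchslackapp"

REMOTE_IMAGE="${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/${LOCAL_IMAGE}:latest"

# Print the tag and push commands so they can be checked before running.
echo "docker tag ${LOCAL_IMAGE}:latest ${REMOTE_IMAGE}"
echo "docker push ${REMOTE_IMAGE}"
```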
(28) To see the results, return to the OCI Console, via the hamburger menu select Developer Services followed by Container Registry under the Containers & Artifacts heading.
(29) Ensure the "k8s-demo-compartment" is still selected on the left-hand side of the screen, then in the middle select and expand the "scratchslackapp" repository you created earlier. We should now find a single image which represents what we just uploaded.
At this point our image is now in OCI, but hasn't been associated with the Kubernetes cluster.
In order to work on Kubernetes, Kubernetes provides a command line tool called kubectl (a.k.a. kubernetes control). This can be downloaded from the Kubernetes tools website. Once installed it must then be configured to talk to our OCI Kubernetes cluster. The configuration typically goes in the local machine user's home directory in a .kube/config file.
OCI makes it easy to configure this by providing a handy dialog under the Kubernetes cluster to show us how exactly to set this up.
(30) Via the OCI console hamburger menu, open the Kubernetes Clusters (OKE) option again, then select our cluster.
(31) Press the Access Your Cluster button.
(32) In the resulting dialog it will explain how to set up the .kube/config file by downloading, installing and configuring the OCI CLI tool, and then running a number of steps. Follow those steps!
(33) We can then verify kubectl is working properly by listing the running nodes in your OCI cluster:
kubectl get nodes
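If everything is wired up we should see our three worker nodes with a STATUS of Ready. As a quick sanity check, the node list can be scanned for any not-yet-Ready nodes; the sketch below runs against hypothetical sample output (the node names and versions are made up), but in practice we'd pipe in the real `kubectl get nodes` output instead:

```shell
# Hypothetical sample output from `kubectl get nodes` -- node names and
# versions are made up; pipe in the real command output in practice.
SAMPLE='NAME          STATUS   ROLES   AGE   VERSION
10.0.10.100   Ready    node    27h   v1.19.7
10.0.10.101   Ready    node    27h   v1.19.7
10.0.10.102   Ready    node    27h   v1.19.7'

# Count nodes whose STATUS column is not Ready (skipping the header row).
echo "$SAMPLE" | awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
# Prints 0 when every node is Ready.
```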
Okay, we're getting close!
In order to associate the Kubernetes cluster with our image, we first need to setup a Kubernetes 'secret' using the following command:
kubectl create secret docker-registry <secret-name-to-create> --docker-server=<tenancy-region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<auth-token>' --docker-email='<email-address>'
Most of the above values are now self explanatory, but the --docker-username requires some more explanation. If our tenancy's users are federated with Oracle Identity Cloud Service, as the OCI free tier is, we use the format '<tenancy-namespace>/oracleidentitycloudservice/<oci-username>', for example:
kubectl create secret docker-registry my-kube-secret --docker-server=iad.ocir.io --docker-username='kdhkv4bxlaap/oracleidentitycloudservice/email@example.com' --docker-password='our auth token' --docker-email='email@example.com'
If alternatively our tenancy is a customer owned tenancy without Oracle Identity Cloud Service federation, the command takes the form:
kubectl create secret docker-registry my-kube-secret --docker-server=iad.ocir.io --docker-username='acme/email@example.com' --docker-password='our auth token' --docker-email='email@example.com'
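The two forms only differ in the --docker-username value, so a small sketch can assemble and print the full command for either case before we run it (the values are this post's examples; the FEDERATED flag picks between the two formats):

```shell
# Example values from earlier steps -- substitute your own.
REGION_KEY="iad"
TENANCY_NAMESPACE="kdhkv4bxlaap"
OCI_USERNAME="email@example.com"
AUTH_TOKEN="our auth token"   # from step (14); keep it out of version control
FEDERATED=true                # true for IDCS-federated tenancies (e.g. the free tier)

# Federated users need the extra oracleidentitycloudservice/ path segment.
if [ "$FEDERATED" = "true" ]; then
  DOCKER_USER="${TENANCY_NAMESPACE}/oracleidentitycloudservice/${OCI_USERNAME}"
else
  DOCKER_USER="${TENANCY_NAMESPACE}/${OCI_USERNAME}"
fi

# Print the command rather than running it, so it can be checked first.
echo "kubectl create secret docker-registry my-kube-secret" \
     "--docker-server=${REGION_KEY}.ocir.io" \
     "--docker-username='${DOCKER_USER}'" \
     "--docker-password='${AUTH_TOKEN}'" \
     "--docker-email='${OCI_USERNAME}'"
```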
(35) The secret can be verified by issuing:
kubectl get secret my-kube-secret --output=yaml
With the secret in place we're then ready to instruct Kubernetes to use the image. This can be done with a series of kubectl commands, but that ends up being repetitive and prone to manual errors.
An alternative approach is to create a Kubernetes YAML file that can be used multiple times.
(36) In your favourite text editor, return to our application's source code and create a new file kube.yaml
(37) Add the following source code:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: scratchslackapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: scratchslackapp
  template:
    metadata:
      labels:
        app: scratchslackapp
        version: v1
    spec:
      containers:
      - name: scratchslackapp
        image: iad.ocir.io/kdhkv4bxlaap/scratchslackapp:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
          protocol: TCP
      imagePullSecrets:
      - name: my-kube-secret
---
apiVersion: v1
kind: Service
metadata:
  name: scratchslackapp-lb
  labels:
    app: scratchslackapp
spec:
  selector:
    app: scratchslackapp
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 5000
    name: http
This file contains two main parts: the section above the three dashes (---) and the section below.
Above - defines the application to deploy to the Kubernetes pod. Note how the image maps to the image we just pushed to the OCI Container Registry, the imagePullSecrets uses the my-kube-secret we just created, and the containerPort maps to the Docker EXPOSE port of 5000 we configured in the previous blog.
Below - in order for the application to be exposed to the internet, this creates an OCI load balancer, mapping port 8080 externally to the targetPort 5000 of our application internally.
(38) With this file saved, return to our command line, cd to the same directory as the yaml file and execute:
kubectl create -f kube.yaml
This command instructs Kubernetes to deploy our application to a pod, and spins up a load balancer for the cluster so external calls from Slack can be routed to our running Slack code. The command returns once the resources have been accepted, but the pod and especially the load balancer may take longer to become ready, so be patient!
(39) To check if the load balancer is ready issue the following command:
kubectl get services
We will see an output similar to the following:
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes           ClusterIP      10.11.0.1       <none>          443/TCP          27h
scratchslackapp-lb   LoadBalancer   10.11.157.133   126.96.36.199   8080:31467/TCP   5m16s
If the EXTERNAL-IP for the load balancer says '<pending>' we need to wait longer.
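While polling, the external IP can also be plucked out of the output with a little awk. This sketch runs against the sample output from this post; in practice we'd pipe in the real `kubectl get services` output instead:

```shell
# Sample `kubectl get services` output from this post; in practice replace
# the echo with the real command.
SAMPLE='NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes           ClusterIP      10.11.0.1       <none>          443/TCP          27h
scratchslackapp-lb   LoadBalancer   10.11.157.133   126.96.36.199   8080:31467/TCP   5m16s'

# Print column 4 (EXTERNAL-IP) of the load balancer's row.
echo "$SAMPLE" | awk '$1 == "scratchslackapp-lb" { print $4 }'
# Prints 126.96.36.199 once the load balancer is up.
```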
While we're waiting, other commands we can issue include:
kubectl get all
kubectl get services
kubectl describe services scratchslackapp-lb
kubectl get pods
kubectl get pod <pod-id-from-previous-command>
kubectl get deployments scratchslackapp
kubectl describe deployments scratchslackapp
Once the load balancer is running, the very last thing we need to do is return to Slack: visit api.slack.com, open our application, select the Event Subscriptions page, and update the Request URL to the http (not https, as we haven't configured TLS in this example) address of the external IP from above, with the port 8080 we configured in the kube.yaml file.
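For example, building the Request URL from the external IP (note the "/slack/events" path here is purely hypothetical; use whatever path our application actually serves Slack events on):

```shell
EXTERNAL_IP="126.96.36.199"   # the EXTERNAL-IP from `kubectl get services`

# "/slack/events" is a hypothetical path -- substitute the real endpoint.
echo "http://${EXTERNAL_IP}:8080/slack/events"
# Prints http://126.96.36.199:8080/slack/events
```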
(40, optional) Thereafter, if we're happy with our testing from Slack, we can delete the pod deployment by executing:
kubectl delete -f kube.yaml
Done, in 40 steps! (phew)