Deploy a Kubernetes cluster for free, using K3s and Oracle always free resources.
Table of Contents
- Important notes
 - Requirements
 - Example RSA key generation
 - Project setup
 - Oracle provider setup
 - Pre flight checklist
 - Notes about OCI always free resources
 - Notes about K3s
 - Infrastructure overview
 - Cluster resource deployed
 - Deploy
 - Deploy a sample stack
 - Clean up
 
Note: choose a region with enough ARM capacity.
Important notes
- This tutorial only shows how to use Terraform with Oracle Cloud Infrastructure and uses only the always free resources. These examples are not for a production environment.
- At the end of your trial period (30 days), all the paid resources deployed will be stopped/terminated.
- At the end of your trial period (30 days), if you have a running compute instance it will be stopped/hibernated.
 
Requirements
To use this tutorial you will need:
- an Oracle Cloud account. You can register here
 
Once you have the account, follow the "Before you begin" and "1. Prepare" steps in this document.
You also need the following tools (a quick version check is shown after the list):
- Terraform - Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.
 - kubectl - The Kubernetes command-line tool (optional)
 - oci cli - Oracle command line interface (optional)
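A quick way to verify the tools are available locally before starting (the exact versions on your machine will differ):
terraform version
kubectl version --client
oci --version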
 
Example RSA key generation
To use Terraform with Oracle Cloud Infrastructure you need to generate an RSA key pair. Generate the RSA key with:
openssl genrsa -out ~/.oci/<your_name>-oracle-cloud.pem 4096
chmod 600 ~/.oci/<your_name>-oracle-cloud.pem
openssl rsa -pubout -in ~/.oci/<your_name>-oracle-cloud.pem -out ~/.oci/<your_name>-oracle-cloud_public.pem
Replace <your_name> with your name or a string you prefer.
NOTE: the private key ~/.oci/<your_name>-oracle-cloud.pem is the one referenced as private_key_path in the terraform.tfvars file used by the Oracle provider plugin, while the public key ~/.oci/<your_name>-oracle-cloud_public.pem is the one you upload in the Oracle Cloud console (User settings > API Keys). Take note of both paths.
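The key fingerprint requested later in terraform.tfvars can also be computed locally from the key you just generated. A minimal sketch, assuming the colon-separated MD5 format used by OCI:
openssl rsa -pubout -outform DER -in ~/.oci/<your_name>-oracle-cloud.pem | openssl md5 -c
The colon-separated value printed is the fingerprint.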
Project setup
You can clone this repository and work in the example/ directory. You have to edit the main.tf file and you have to create the terraform.tfvars file. For more detail see Oracle provider setup and Pre flight checklist.
Or, if you prefer, you can create a new empty directory in your workspace and create these three files:
- terraform.tfvars - More details in Oracle provider setup
 - main.tf
 - provider.tf
 
The main.tf file will look like:
variable "compartment_ocid" {
}
variable "tenancy_ocid" {
}
variable "user_ocid" {
}
variable "fingerprint" {
}
variable "private_key_path" {
}
variable "region" {
  default = "<change_me>"
}
module "k3s_cluster" {
  region              = var.region
  availability_domain = "<change_me>"
  compartment_ocid    = var.compartment_ocid
  my_public_ip_cidr   = "<change_me>"
  cluster_name        = "<change_me>"
  environment         = "staging"
  k3s_token           = "<change_me>"
  source              = "github.com/garutilorenzo/k3s-oci-cluster"
}
output "k3s_servers_ips" {
  value = module.k3s_cluster.k3s_servers_ips
}
output "k3s_workers_ips" {
  value = module.k3s_cluster.k3s_workers_ips
}
output "public_lb_ip" {
  value = module.k3s_cluster.public_lb_ip
}
For all the possible variables, see the Pre flight checklist.
The provider.tf will look like:
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  private_key_path = var.private_key_path
  fingerprint      = var.fingerprint
  region           = var.region
}
Now we can initialize Terraform with:
terraform init
Initializing modules...
Downloading git::https://github.com/garutilorenzo/k3s-oci-cluster.git for k3s_cluster...
- k3s_cluster in .terraform/modules/k3s_cluster
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/oci v4.64.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Generate a self-signed SSL certificate for the public LB (L7)
NOTE: if you already own a valid certificate, skip this step and set the correct values for the variables PATH_TO_PUBLIC_LB_CERT and PATH_TO_PUBLIC_LB_KEY.
We need to generate the self-signed certificates for our public load balancer (Layer 7). To do this we need openssl; open a terminal and follow these steps:
Generate the key:
openssl genrsa 2048 > privatekey.pem
Generating RSA private key, 2048 bit long modulus (2 primes)
.......+++++
...............+++++
e is 65537 (0x010001)
Generate a new certificate request:
openssl req -new -key privatekey.pem -out csr.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Italy
Locality Name (eg, city) []:Brescia
Organization Name (eg, company) [Internet Widgits Pty Ltd]:GL Ltd
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:testlb.domainexample.com
Email Address []:email@you.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Generate the public CRT:
openssl x509 -req -days 365 -in csr.pem -signkey privatekey.pem -out public.crt
Signature ok
subject=C = IT, ST = Italy, L = Brescia, O = GL Ltd, OU = IT, CN = testlb.domainexample.com, emailAddress = email@you.com
Getting Private key
This is the final result:
ls
csr.pem  privatekey.pem  public.crt
Now set the variables:
- PATH_TO_PUBLIC_LB_CERT: ~/full_path/public.crt
 - PATH_TO_PUBLIC_LB_KEY: ~/full_path/privatekey.pem
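As a quick sanity check, you can inspect the generated certificate and key with openssl before wiring the paths into the module:
openssl x509 -in public.crt -noout -subject -dates
openssl rsa -in privatekey.pem -check -noout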
 
Oracle provider setup
This is an example of the terraform.tfvars file:
fingerprint      = "<rsa_key_fingerprint>"
private_key_path = "~/.oci/<your_name>-oracle-cloud.pem"
user_ocid        = "<user_ocid>"
tenancy_ocid     = "<tenancy_ocid>"
compartment_ocid = "<compartment_ocid>"
To find your tenancy_ocid in the Oracle Cloud console go to: Governance and Administration > Tenancy details, then copy the OCID.
To find your user_ocid in the Oracle Cloud console go to User settings (click the icon in the top right corner, then click User settings), click your username, and then copy the OCID.
The compartment_ocid is the same as the tenancy_ocid.
The fingerprint is the fingerprint of your RSA key; you can find this value under User settings > API Keys.
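If you have the oci cli configured (or you are in the Cloud Shell), a possible alternative to the console for looking up the OCIDs is:
# the tenancy OCID is also the OCID of the root compartment
oci iam compartment list --compartment-id <tenancy_ocid>
# list the users of the tenancy with their OCIDs
oci iam user list --compartment-id <tenancy_ocid>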
Pre flight checklist
Once you have created the terraform.tfvars file edit the main.tf file (always in the example/ directory) and set the following variables:
| Var | Required | Desc |
|---|---|---|
| region | yes | Set the correct OCI region based on your needs |
| availability_domain | yes | Set the correct availability domain. See how to find the availability domain |
| compartment_ocid | yes | Set the correct compartment ocid. See how to find the compartment ocid |
| cluster_name | yes | The name of your K3s cluster. Default: k3s-cluster |
| k3s_token | yes | The token of your K3s cluster. See how to generate a random token |
| my_public_ip_cidr | yes | Your public IP in CIDR format (Example: 195.102.xxx.xxx/32) |
| environment | yes | Current work environment (Example: staging/dev/prod). This value is used to tag all the deployed resources |
| PATH_TO_PUBLIC_LB_CERT | yes | Path to the public LB certificate. See how to generate the certificate |
| PATH_TO_PUBLIC_LB_KEY | yes | Path to the public LB key. See how to generate the key |
| compute_shape | no | Compute shape to use. Default: VM.Standard.A1.Flex. NOTE: it is mandatory to use this compute shape to provision 4 always free VMs |
| os_image_id | no | Image id to use. Default image: Canonical-Ubuntu-20.04-aarch64-2022.01.18-0. See how to list all available OS images |
| oci_core_vcn_dns_label | no | VCN DNS label. Default: defaultvcn |
| oci_core_subnet_dns_label10 | no | First subnet DNS label. Default: defaultsubnet10 |
| oci_core_subnet_dns_label11 | no | Second subnet DNS label. Default: defaultsubnet11 |
| oci_core_vcn_cidr | no | VCN CIDR. Default: 10.0.0.0/16 |
| oci_core_subnet_cidr10 | no | First subnet CIDR. Default: 10.0.0.0/24 |
| oci_core_subnet_cidr11 | no | Second subnet CIDR. Default: 10.0.1.0/24 |
| oci_identity_dynamic_group_name | no | Dynamic group name. This dynamic group will contain all the instances of this specific compartment. Default: Compute_Dynamic_Group |
| oci_identity_policy_name | no | Policy name. This policy will allow the dynamic group 'oci_identity_dynamic_group_name' to read the OCI API without auth. Default: Compute_To_Oci_Api_Policy |
| k3s_load_balancer_name | no | Internal LB name. Default: k3s internal load balancer |
| public_load_balancer_name | no | Public LB name. Default: K3s public LB |
| kube_api_port | no | Kube API port. Default: 6443 |
| public_lb_shape | no | LB shape for the public LB. Default: flexible. NOTE: it is mandatory to use this shape to provision two always free LBs (one public and one internal) |
| http_lb_port | no | HTTP port used by the public LB. Default: 80 |
| https_lb_port | no | HTTPS port used by the public LB. Default: 443 |
| k3s_server_pool_size | no | Number of K3s servers deployed. Default: 2 |
| k3s_worker_pool_size | no | Number of K3s workers deployed. Default: 2 |
| install_nginx_ingress | no | Boolean value, install the Kubernetes nginx ingress controller instead of Traefik. Default: true. For more information see Nginx ingress controller |
| install_longhorn | no | Boolean value, install Longhorn, "Cloud native distributed block storage for Kubernetes". Default: true |
| longhorn_release | no | Longhorn release. Default: v1.2.3 |
| unique_tag_key | no | Unique tag name used for tagging all the deployed resources. Default: k3s-provisioner |
| unique_tag_value | no | Unique value used with unique_tag_key. Default: https://github.com/garutilorenzo/k3s-oci-cluster |
| PATH_TO_PUBLIC_KEY | no | Path to your public SSH key. Default: ~/.ssh/id_rsa.pub |
| PATH_TO_PRIVATE_KEY | no | Path to your private SSH key. Default: ~/.ssh/id_rsa |
Generate random token
Generate random k3s token with:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 55 | head -n 1
How to find the availability domain name
To find the list of the availability domains, run this command in the Cloud Shell:
oci iam availability-domain list
{
  "data": [
    {
      "compartment-id": "<compartment_ocid>",
      "id": "ocid1.availabilitydomain.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "name": "iAdc:EU-ZURICH-1-AD-1"
    }
  ]
}
How to list all the OS images
To filter the OS images by shape and OS, run this command in the Cloud Shell:
oci compute image list --compartment-id <compartment_ocid> --operating-system "Canonical Ubuntu" --shape "VM.Standard.A1.Flex"
{
  "data": [
    {
      "agent-features": null,
      "base-image-id": null,
      "billable-size-in-gbs": 2,
      "compartment-id": null,
      "create-image-allowed": true,
      "defined-tags": {},
      "display-name": "Canonical-Ubuntu-20.04-aarch64-2022.01.18-0",
      "freeform-tags": {},
      "id": "ocid1.image.oc1.eu-zurich-1.aaaaaaaag2uyozo7266bmg26j5ixvi42jhaujso2pddpsigtib6vfnqy5f6q",
      "launch-mode": "NATIVE",
      "launch-options": {
        "boot-volume-type": "PARAVIRTUALIZED",
        "firmware": "UEFI_64",
        "is-consistent-volume-naming-enabled": true,
        "is-pv-encryption-in-transit-enabled": true,
        "network-type": "PARAVIRTUALIZED",
        "remote-data-volume-type": "PARAVIRTUALIZED"
      },
      "lifecycle-state": "AVAILABLE",
      "listing-type": null,
      "operating-system": "Canonical Ubuntu",
      "operating-system-version": "20.04",
      "size-in-mbs": 47694,
      "time-created": "2022-01-27T22:53:34.270000+00:00"
    },
Note: this setup was only tested with Ubuntu 20.04
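The full JSON output can be noisy. One way to narrow it down to the image names and ids is the CLI's built-in JMESPath filter (the --query expression below is just an example):
oci compute image list --compartment-id <compartment_ocid> --operating-system "Canonical Ubuntu" --shape "VM.Standard.A1.Flex" --query 'data[*].{name:"display-name", id:id}' --output table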
Notes about OCI always free resources
To get the maximum resources available within the Oracle always free tier, the number of K3s servers and K3s workers must not exceed 2 each. So the maximum value for k3s_server_pool_size and k3s_worker_pool_size is 2.
In this setup we use two LBs: one internal LB and one public LB (Layer 7). To use two LBs with the always free resources, one must be a network load balancer and the other must be a load balancer. The public LB must use the flexible shape (public_lb_shape variable).
Notes about K3s
In this environment the high availability of the K3s cluster is provided using the embedded DB. More details here
The default installation of K3s installs Traefik as the ingress controller. In this environment Traefik is replaced by the Nginx ingress controller. To install Traefik as the ingress controller, set the variable install_nginx_ingress to false.
For more details on Nginx ingress controller see the Nginx ingress controller section.
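Once the cluster is up you can verify which ingress controller is actually running. A quick check from one of the server nodes (the Traefik label below may vary with the chart version):
# with the default install_nginx_ingress=true the controller runs in the ingress-nginx namespace
kubectl get pods -n ingress-nginx
# with install_nginx_ingress=false, Traefik is deployed by K3s in kube-system
kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik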
Infrastructure overview
The final infrastructure will be made up of:
- two instance pools:
  - one instance pool for the server nodes, named "k3s-servers"
  - one instance pool for the worker nodes, named "k3s-workers"
- one internal load balancer that will route traffic to the K3s servers
- one external load balancer that will route traffic to the K3s workers

The other resources created by terraform are:
- two instance configurations (one for the servers and one for the workers) used by the instance pools
- one VCN
- two public subnets
- two security lists
- one dynamic group
- one identity policy
 
Cluster resource deployed
This setup will automatically install Longhorn. Longhorn is a cloud native distributed block storage for Kubernetes. To disable the Longhorn deployment, set the install_longhorn variable to false.
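To confirm that Longhorn registered its storage class, you can run on one of the server nodes:
kubectl get storageclass
# a 'longhorn' storage class should be listed next to the K3s default 'local-path' one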
Nginx ingress controller
In this environment the Nginx ingress controller is used instead of the standard Traefik ingress controller.
The installation is the bare metal installation; the ingress controller is then exposed via a LoadBalancer service:
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-loadbalancer
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      port: 443
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
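After the deploy, you can inspect the exposed service and the controller pods with:
kubectl get svc -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide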
To properly configure all the forwarded HTTP headers (L7 headers), these parameters are added to the ConfigMap:
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  enable-real-ip: "true"
  forwarded-for-header: "X-Forwarded-For"
  proxy-real-ip-cidr: "0.0.0.0/0"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    helm.sh/chart: ingress-nginx-4.0.16
  name: ingress-nginx-controller
  namespace: ingress-nginx
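You can check that these settings reached the running controller with:
kubectl get configmap ingress-nginx-controller -n ingress-nginx -o yaml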
Deploy
We are now ready to deploy our infrastructure. First we ask Terraform to plan the execution with:
terraform plan
...
...
      + id                             = (known after apply)
      + ip_addresses                   = (known after apply)
      + is_preserve_source_destination = false
      + is_private                     = true
      + lifecycle_details              = (known after apply)
      + nlb_ip_version                 = (known after apply)
      + state                          = (known after apply)
      + subnet_id                      = (known after apply)
      + system_tags                    = (known after apply)
      + time_created                   = (known after apply)
      + time_updated                   = (known after apply)
      + reserved_ips {
          + id = (known after apply)
        }
    }
Plan: 27 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + k3s_servers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + k3s_workers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + public_lb_ip    = (known after apply)
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
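If you want Terraform to apply exactly the plan shown above, you can optionally save the plan to a file and apply that file (the file name k3s.tfplan is just an example):
terraform plan -out k3s.tfplan
terraform apply k3s.tfplan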
Now we can deploy our resources with:
terraform apply
...
...
      + is_preserve_source_destination = false
      + is_private                     = true
      + lifecycle_details              = (known after apply)
      + nlb_ip_version                 = (known after apply)
      + state                          = (known after apply)
      + subnet_id                      = (known after apply)
      + system_tags                    = (known after apply)
      + time_created                   = (known after apply)
      + time_updated                   = (known after apply)
      + reserved_ips {
          + id = (known after apply)
        }
    }
Plan: 27 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + k3s_servers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + k3s_workers_ips = [
      + (known after apply),
      + (known after apply),
    ]
  + public_lb_ip    = (known after apply)
  Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
...
...
module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Still creating... [50s elapsed]
module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Still creating... [1m0s elapsed]
module.k3s_cluster.oci_network_load_balancer_backend.k3s_kube_api_backend[0]: Creation complete after 1m1s [...]
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
Outputs:
k3s_servers_ips = [
  "X.X.X.X",
  "X.X.X.X",
]
k3s_workers_ips = [
  "X.X.X.X",
  "X.X.X.X",
]
public_lb_ip = tolist([
  "X.X.X.X",
])
Now on one master node you can check the status of the cluster with:
ssh X.X.X.X -lubuntu
ubuntu@inst-iwlqz-k3s-servers:~$ sudo su -
root@inst-iwlqz-k3s-servers:~# kubectl get nodes
NAME                     STATUS   ROLES                       AGE     VERSION
inst-axdzf-k3s-workers   Ready    <none>                      4m34s   v1.22.6+k3s1
inst-hmgnl-k3s-servers   Ready    control-plane,etcd,master   4m14s   v1.22.6+k3s1
inst-iwlqz-k3s-servers   Ready    control-plane,etcd,master   6m4s    v1.22.6+k3s1
inst-lkvem-k3s-workers   Ready    <none>                      5m35s   v1.22.6+k3s1
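If you prefer to run kubectl from your workstation instead of a server node, K3s writes the kubeconfig to /etc/rancher/k3s/k3s.yaml on every server. A minimal sketch, assuming you replace the 127.0.0.1 server address with an address reachable from your PC and that the kube API port (6443) is open to your IP:
# on a server node
sudo cat /etc/rancher/k3s/k3s.yaml
# copy the content to ~/.kube/config on your workstation, edit the server: field, then:
kubectl get nodes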
Public LB check
We can now test the public load balancer, the nginx ingress controller, and the security list ingress rules. On your local PC run:
curl -v http://<PUBLIC_LB_IP>
*   Trying PUBLIC_LB_IP:80...
* TCP_NODELAY set
* Connected to PUBLIC_LB_IP (PUBLIC_LB_IP) port 80 (#0)
> GET / HTTP/1.1
> Host: PUBLIC_LB_IP
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Fri, 25 Feb 2022 14:03:09 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host PUBLIC_LB_IP left intact
A 404 is the correct response, since the cluster is empty. We can also test the HTTPS listener/backends:
curl -k -v https://<PUBLIC_LB_IP>
* Trying PUBLIC_LB_IP:443...
* TCP_NODELAY set
* Connected to PUBLIC_LB_IP (PUBLIC_LB_IP) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  start date: Feb 25 10:28:29 2022 GMT
*  expire date: Feb 25 10:28:29 2023 GMT
*  issuer: C=IT; ST=Italy; L=Brescia; O=GL Ltd; OU=IT; CN=testlb.domainexample.com; emailAddress=email@you.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: PUBLIC_LB_IP
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Fri, 25 Feb 2022 13:48:19 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host PUBLIC_LB_IP left intact
Longhorn check
To check if Longhorn was successfully installed, run on one of the master nodes:
kubectl get ns
NAME              STATUS   AGE
default           Active   9m40s
kube-node-lease   Active   9m39s
kube-public       Active   9m39s
kube-system       Active   9m40s
longhorn-system   Active   8m52s   <- longhorn namespace 
root@inst-hmgnl-k3s-servers:~# kubectl get pods -n longhorn-system
NAME                                        READY   STATUS    RESTARTS        AGE
csi-attacher-5f46994f7-8w9sg                1/1     Running   0               7m52s
csi-attacher-5f46994f7-qz7d4                1/1     Running   0               7m52s
csi-attacher-5f46994f7-rjqlx                1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-fw7q4            1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-gwmrg            1/1     Running   0               7m52s
csi-provisioner-6ccbfbf86f-nsf84            1/1     Running   0               7m52s
csi-resizer-6dd8bd4c97-7l67f                1/1     Running   0               7m51s
csi-resizer-6dd8bd4c97-g66wj                1/1     Running   0               7m51s
csi-resizer-6dd8bd4c97-nksmd                1/1     Running   0               7m51s
csi-snapshotter-86f65d8bc-2gcwt             1/1     Running   0               7m50s
csi-snapshotter-86f65d8bc-kczrw             1/1     Running   0               7m50s
csi-snapshotter-86f65d8bc-sjmnv             1/1     Running   0               7m50s
engine-image-ei-fa2dfbf0-6rpz2              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-7l5k8              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-7nph9              1/1     Running   0               8m30s
engine-image-ei-fa2dfbf0-ndkck              1/1     Running   0               8m30s
instance-manager-e-31a0b3f5                 1/1     Running   0               8m26s
instance-manager-e-37aa4663                 1/1     Running   0               8m27s
instance-manager-e-9cc7cc9d                 1/1     Running   0               8m20s
instance-manager-e-f39d9f2c                 1/1     Running   0               8m29s
instance-manager-r-1364d994                 1/1     Running   0               8m26s
instance-manager-r-c1670269                 1/1     Running   0               8m20s
instance-manager-r-c20ebeb3                 1/1     Running   0               8m28s
instance-manager-r-c54bf9a5                 1/1     Running   0               8m27s
longhorn-csi-plugin-2qj94                   2/2     Running   0               7m50s
longhorn-csi-plugin-4t8jm                   2/2     Running   0               7m50s
longhorn-csi-plugin-ws82l                   2/2     Running   0               7m50s
longhorn-csi-plugin-zmc9q                   2/2     Running   0               7m50s
longhorn-driver-deployer-784546d78d-s6cd2   1/1     Running   0               8m58s
longhorn-manager-l8sd8                      1/1     Running   0               9m1s
longhorn-manager-r2q5c                      1/1     Running   1 (8m30s ago)   9m1s
longhorn-manager-s6wql                      1/1     Running   0               9m1s
longhorn-manager-zrrf2                      1/1     Running   0               9m
longhorn-ui-9fdb94f9-6shsr                  1/1     Running   0               8m59s
Deploy a sample stack
Finally, to test all the components of the cluster, we can deploy a sample stack. The stack is composed of the following components:
- MariaDB
 - Nginx
 - Wordpress
 
Each component consists of one deployment and one service.
WordPress and nginx share the same persistent volume (ReadWriteMany with the Longhorn storage class). The nginx configuration is stored in four ConfigMaps, and the nginx service is exposed by the nginx ingress controller.
Deploy the resources with:
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl apply -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
and check the status:
kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
mariadb       1/1     1            1           92m
nginx         1/1     1            1           79m
wordpress     1/1     1            1           91m
kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes        ClusterIP   10.43.0.1       <none>        443/TCP    5h8m
mariadb-svc       ClusterIP   10.43.184.188   <none>        3306/TCP   92m
nginx-svc         ClusterIP   10.43.9.202     <none>        80/TCP     80m
wordpress-svc     ClusterIP   10.43.242.26    <none>        9000/TCP   91m
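Since WordPress and nginx share a ReadWriteMany volume, it is also worth checking that the persistent volume claim was bound by Longhorn:
kubectl get pvc
# the claim should report STATUS Bound and STORAGECLASS longhorn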
Now you are ready to set up WordPress: open the LB public IP and follow the wizard. NOTE: nginx and the Kubernetes Ingress rule are configured without a virtual host/server name.
To clean up the deployed resources:
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/mariadb/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/nginx/all-resources.yml
kubectl delete -f https://raw.githubusercontent.com/garutilorenzo/k3s-oci-cluster/master/deployments/wordpress/all-resources.yml
Clean up
To destroy all the deployed resources, run:
terraform destroy
              

    