In this article, the author will not explain why all these manipulations are needed, nor discuss the advantages and disadvantages of this solution. He hopes his readers are smart enough. 😎
Think of this article as a continuation of the persistent volumes story and as a guide for deploying a database in a local Kubernetes dev cluster.
0. Introduction
Three main stages can be distinguished:
- Creating PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources - what a surprise, we need a stateful app.
- Setting up the Helm chart of the target application.
- Checking that everything is working.
Before starting work, we need to minimally configure the Kubernetes cluster. Here are the modest requirements:
- A recent version of Kubernetes (v1.22+).
- One master node and one worker node.
- A configured Ingress controller.
- helm installed on the dev's machine.
As you may already know, the author prefers to use k3s/k3d as his local Kubernetes development environment.
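If you don't have such a cluster at hand yet, a minimal sketch with k3d might look like this (assuming k3d v5 is installed; the cluster name matches the k3d-my-cluster-* node names used later in this article):
$ k3d cluster create my-cluster --agents 1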
1. Storage preparation
When dealing with k3s, you have to enable the so-called local-path provisioner first.
Install the provisioner:
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Set the local-path
provisioner as default:
$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
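To double-check that the patch took effect, you can list the storage classes; the output below is illustrative and may differ slightly between k3s versions:
$ kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   5m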
Check that the provisioner is running:
$ kubectl --namespace kube-system get pod
NAME READY STATUS RESTARTS AGE
...
local-path-provisioner-84bb864455-tmpht 1/1 Running ...
From this moment we can create the PersistentVolume and PersistentVolumeClaim resources for our dedicated storage space. To do this, paste the following manifest into the pv.yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-pg
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "local-path"
  hostPath:
    path: /opt/local-path-provisioner
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-my-cluster-agent-0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pg-pvc
spec:
  storageClassName: "local-path"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
In the matchExpressions field we specify the name of the node on which the disk will be mounted (it should be our worker node k3d-my-cluster-agent-0 here).
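If you are not sure about the exact node names in your cluster, you can list them first (illustrative output for a one-master/one-worker k3d setup; versions will differ):
$ kubectl get nodes
NAME                      STATUS   ROLES                  AGE   VERSION
k3d-my-cluster-server-0   Ready    control-plane,master   1h    v1.22.7+k3s1
k3d-my-cluster-agent-0    Ready    <none>                 1h    v1.22.7+k3s1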
For convenience, we will mount the disk right on the worker node, although this can be done on any of the nodes available in the list. K3s will mount the volume on the /opt/local-path-provisioner directory of the host node's file system.
$ docker exec -it k3d-my-cluster-agent-0 sh
/ # ls
bin dev etc k3d lib opt proc run sbin sys tmp usr var
/ # cd opt
/opt # ls
local-path-provisioner
/opt # cd local-path-provisioner/
/opt/local-path-provisioner # ls
data
/opt/local-path-provisioner # cd data
/opt/local-path-provisioner/data # ls
PG_VERSION global pg_dynshmem pg_logical pg_notify pg_serial pg_stat pg_subtrans pg_twophase pg_xact postmaster.opts
base pg_commit_ts pg_ident.conf pg_multixact pg_replslot pg_snapshots pg_stat_tmp pg_tblspc pg_wal postgresql.auto.conf postmaster.pid
/opt/local-path-provisioner/data #
See the result:
$ kubectl apply -f pv.yaml
persistentvolume/pv-for-pg created
persistentvolumeclaim/pg-pvc created
Good. Let's check the freshly created PersistentVolumeClaim's state:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pg-pvc Bound pv-for-pg 4Gi RWO local-path 1m43s
As we can see, the PersistentVolumeClaim resource is bound.
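You can also inspect the PersistentVolume side of the binding (illustrative output):
$ kubectl get pv pv-for-pg
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   AGE
pv-for-pg   4Gi        RWO            Retain           Bound    default/pg-pvc   local-path     2m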
2. Installing PostgreSQL
Now is the time to deploy Postgres on the cluster. At this stage we finally need helm. We pull the Bitnami repository to ourselves (they are kind and great: there are a lot of useful ready-made charts in their ⭐5k+ repository):
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
Proceed with helm install:
$ helm install dev-pg bitnami/postgresql --set primary.persistence.existingClaim=pg-pvc,auth.postgresPassword=pgpass,volumePermissions.enabled=true
NAME: dev-pg
LAST DEPLOYED: Tue May 3 19:25:34 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
⚠️ Due to an on-going issue regarding kubernetes permissions and the use of containerSecurityContext.runAsUser, you must enable volumePermissions to ensure everything works as expected.
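By the way, the same --set flags can be kept in a values file, which is easier to version-control. A minimal sketch; the values.yaml file name is arbitrary, and the keys simply mirror the flags used above:
$ cat > values.yaml <<EOF
auth:
  postgresPassword: pgpass
primary:
  persistence:
    existingClaim: pg-pvc
volumePermissions:
  enabled: true
EOF
$ helm install dev-pg bitnami/postgresql -f values.yaml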
🥁 Drum roll! From now on, the Postgres Pod will be capable of writing data to the /opt/local-path-provisioner directory.
Let's look at the state of the Pod and StatefulSet:
$ kubectl get pod,statefulset
NAME READY STATUS RESTARTS AGE
pod/dev-pg-postgresql-0 1/1 Running 0 26m
NAME READY AGE
statefulset.apps/dev-pg-postgresql 1/1 1h
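Besides the Pod and the StatefulSet, the chart creates a ClusterIP Service; it is the one we will port-forward to in the next section (illustrative output, the cluster IP will differ):
$ kubectl get svc dev-pg-postgresql
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
dev-pg-postgresql   ClusterIP   10.43.0.10    <none>        5432/TCP   1h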
Check the version:
$ kubectl logs --tail 20 dev-pg-postgresql-0 | grep starting
...[1] LOG: starting PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
Great job.
3. Communicating with the database
The database has been successfully deployed; now let's try to connect to it: create a user, a table, and configure access credentials.
There are two major ways to communicate with the DB:
- Port forwarding
- Involving a Pod with the client tool onboard
Port forwarding
We'll need the psql utility (note that in some cases the PgSQL PPA repo should be added; the author uses Ubuntu Focal):
$ sudo apt install postgresql-client-14
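If postgresql-client-14 is not available in your distribution's default repositories, the upstream apt repository can be added roughly like this (a sketch based on the apt.postgresql.org instructions for Ubuntu Focal; double-check them for your release):
$ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
$ sudo apt update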
Let's export the admin user's password to an environment variable:
$ export POSTGRES_PASSWORD=$(kubectl get secret --namespace default dev-pg-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
Perform port forwarding. Keep in mind that the current console will be blocked after executing the command:
$ kubectl port-forward --namespace default svc/dev-pg-postgresql 5432:5432
Or you may open a separate console:
$ export KUBECONFIG=$(k3d kubeconfig write my-cluster)
$ kubectl port-forward --namespace default svc/dev-pg-postgresql 5432:5432
Now connect to database:
$ PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
psql (14.2 (Ubuntu 14.2-1.pgdg20.04+1+b1))
Type "help" for help.
postgres=#
Involving a Pod with the client tool onboard
Export the admin user's password to an environment variable:
$ export POSTGRES_PASSWORD=$(kubectl get secret --namespace default dev-pg-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
Then let's create a Pod with the psql utility onboard and execute the command to connect to the database in it:
$ kubectl run dev-pg-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:14.2.0-debian-10-r22 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
> --command -- psql --host dev-pg-postgresql -U postgres -d postgres -p 5432
If you don't see a command prompt, try pressing enter.
postgres=#
Let's list the roles to make sure we're in touch:
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
postgres=#
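Now, as promised, let's create a user and a table. The dev_user, dev_db and demo names below are arbitrary examples:
postgres=# CREATE ROLE dev_user WITH LOGIN PASSWORD 'devpass';
CREATE ROLE
postgres=# CREATE DATABASE dev_db OWNER dev_user;
CREATE DATABASE
postgres=# \c dev_db
You are now connected to database "dev_db" as user "postgres".
dev_db=# CREATE TABLE demo (id serial PRIMARY KEY, note text);
CREATE TABLE
dev_db=# INSERT INTO demo (note) VALUES ('hello from k3d');
INSERT 0 1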
🎯 That's it, guys! We have deployed a working database in the cluster, and it saves its state: a truly stateful database.
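To verify the "stateful" part, you can delete the Pod and let the StatefulSet recreate it; the data written above should survive the restart (a quick sketch):
$ kubectl delete pod dev-pg-postgresql-0
pod "dev-pg-postgresql-0" deleted
$ kubectl get pod dev-pg-postgresql-0
NAME                  READY   STATUS    RESTARTS   AGE
dev-pg-postgresql-0   1/1     Running   0          30s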
I recommend that all my readers surf devs' blogs on the Net regularly. It's worth it. Thanks Roger Lipscombe 🇬🇧 and Fadhil Yaacob 🇲🇾!
In the next part, the author will consider an amazing tool that allows you to observe the functioning of a containerized database in almost real time.