
Scaling properly a stateful app like Wordpress with Kubernetes Engine and Cloud SQL in Google Cloud

Mario (@mfahlandt) ・ 5 min read

There are a lot of examples on the web that show you how to run WordPress in Kubernetes. The main issue with these examples: only one pod runs WordPress, and you cannot really scale it.

So I faced the issue that I needed a highly scalable setup for WordPress, and here is what I came up with.

Why is it so hard to scale stateful apps?

These apps write to the disk directly, and most of the time you cannot prevent it. This is often the case in PHP-based applications that use some kind of plugin system. So the files cannot be stored in some kind of bucket; they have to live in the filesystem of the application.

Now you might say: but there is a plugin like WP-Stateless (https://de.wordpress.org/plugins/wp-stateless/) that writes to cloud buckets. Yes, that is true, but it does not store the plugins there, nor the files that some plugins write directly into their own folder (sad that this happens, but true).

What to do?

We need a couple of things: a scalable database, some kind of shared file base for our application, and the application itself.

For the sake of brevity we will just use the predefined WordPress Docker image, although you should always try to extend these Dockerfiles to fit your own needs. Use them as a base, but build on top of them, for example like the sketch below.
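Just to illustrate what such an extension could look like, here is a hypothetical Dockerfile; the plugin path is a placeholder, not something this setup requires:

# Build on the official image instead of patching containers at runtime
FROM wordpress:php7.3-apache

# Bake required plugins into the image; the official entrypoint copies
# /usr/src/wordpress into /var/www/html on first start
COPY ./plugins/ /usr/src/wordpress/wp-content/plugins/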

So we need a shared disk, and here we encounter our first problem: we need a ReadWriteMany volume in our Kubernetes cluster, and the cloud providers do not offer one out of the box.
If you check the Kubernetes documentation
you will see that neither GCEPersistentDisk nor AzureDisk nor AWSElasticBlockStore supports what we need.
There are options like Cloud Filestore in Google Cloud or AzureFile, but they are way too expensive and too big for our case (we do not need 1 TB to store our WordPress, thank you).

NFS to the rescue

But when we look at the list, we see the saviour: NFS. Let's build on the only option we have: a ReadWriteOnce disk that backs our own NFS server. So we need a StorageClass, ideally with the disk replicated between zones:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: regionalpd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
 type: pd-standard
 replication-type: regional-pd
 zones: europe-west3-b, europe-west3-c
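Apply it and check that the class is registered (the filename is just an example):

kubectl apply -f storageclass.yaml
kubectl get storageclass regionalpd-storageclass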

And we need to create the volume claim, referencing the StorageClass from above so the disk gets provisioned for us:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: regionalpd-storageclass

Now let's create our NFS server. First, a Service with a fixed cluster IP (more on that later):

apiVersion: v1
kind: Service
metadata:
 name: nfs-server
spec:
 clusterIP: 10.3.240.20
 ports:
   - name: nfs
     port: 2049
   - name: mountd
     port: 20048
   - name: rpcbind
     port: 111
 selector:
   role: nfs-server
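One caveat with the fixed clusterIP: it has to lie inside your cluster's service CIDR. On GKE you can look that range up, assuming a zonal cluster:

gcloud container clusters describe [CLUSTER_NAME] --zone [ZONE] \
    --format='value(servicesIpv4Cidr)'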

Now we add the NFS server itself. The good thing here: we can use a prebuilt image.

apiVersion: apps/v1
kind: Deployment
metadata:
 name: nfs-server
spec:
 replicas: 1
 selector:
   matchLabels:
     role: nfs-server
 template:
   metadata:
     labels:
       role: nfs-server
   spec:
     containers:
       - name: nfs-server
         image: gcr.io/google_containers/volume-nfs:0.8
         ports:
           - name: nfs
             containerPort: 2049
           - name: mountd
             containerPort: 20048
           - name: rpcbind
             containerPort: 111
         securityContext:
           privileged: true
         volumeMounts:
           - mountPath: /exports
             name: nfs
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
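Apply both and check that the server pod is running:

kubectl apply -f nfs-server.yaml
kubectl get pods -l role=nfs-server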

Cloud SQL, so secure, so much beauty

Alright, so we have a running NFS server for our static data. The next big step: connect Cloud SQL. Let's say you have already set up a Cloud SQL MySQL instance. How do you connect your pods to it?

We use the Cloud SQL Proxy, which runs as a sidecar next to our container. The good thing about this: our MySQL is not exposed, and we can connect via localhost. Amazing, isn't it?

First you have to activate the Cloud SQL Admin API.

And you need to create a service account that has access to Cloud SQL: assign it the role Cloud SQL > Cloud SQL Client.

Download the generated private key; we need it to access the SQL instance.
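If you prefer the CLI over the console, the whole service-account dance looks roughly like this; the account name and [PROJECT_ID] are placeholders:

# Enable the Cloud SQL Admin API
gcloud services enable sqladmin.googleapis.com

# Create a service account for the proxy
gcloud iam service-accounts create cloudsql-proxy --display-name "Cloud SQL Proxy"

# Grant it the Cloud SQL Client role
gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member serviceAccount:cloudsql-proxy@[PROJECT_ID].iam.gserviceaccount.com \
    --role roles/cloudsql.client

# Download the private key
gcloud iam service-accounts keys create key.json \
    --iam-account cloudsql-proxy@[PROJECT_ID].iam.gserviceaccount.com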

Now create a database user, if you have not already done so:

gcloud sql users create [DBUSER] --host=% --instance=[INSTANCE_NAME] --password=[PASSWORD]

And we need the connection name of the instance. Easy:

gcloud sql instances describe [INSTANCE_NAME]
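If you only want the connection name (the PROJECT:REGION:INSTANCE string the proxy expects), you can filter the output:

gcloud sql instances describe [INSTANCE_NAME] --format='value(connectionName)'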

Or you can find it in the Google Cloud Console.

Now we save the credentials to our Kubernetes cluster:

kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]
kubectl create secret generic cloudsql-db-credentials \
    --from-literal=username=[DBUSER] --from-literal=password=[PASSWORD]
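A quick sanity check that both secrets exist:

kubectl get secret cloudsql-instance-credentials cloudsql-db-credentials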

So we are ready to set up our WordPress, aren't we?

Let’s create the service as a first step:

apiVersion: v1
kind: Service
metadata:
 name: wlp-service
 labels:
   app: wlp-service
spec:
 type: LoadBalancer
 sessionAffinity: ClientIP
 ports:
   - port: 443
     targetPort: 443
     name: https
   - port: 80
     targetPort: 80
     name: http
 selector:
   app: wordpress
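Apply the manifest and wait for the load balancer to get an external IP; the filename is just an example:

kubectl apply -f wordpress-service.yaml
kubectl get service wlp-service --watch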

Alright, now we have the Service up and running; the only thing missing is the Deployment itself.
Let's split it up a bit so I can explain.

apiVersion: apps/v1
kind: Deployment
metadata:
 name: wordpress
 labels:
   app: wordpress
spec:
 replicas: 2
 strategy:
   type: RollingUpdate
 selector:
   matchLabels:
     app: wordpress
 template:
   metadata:
     labels:
       app: wordpress
   spec:
     containers:
       - name: wordpress
          image: wordpress:php7.3-apache
         imagePullPolicy: Always
          env:
            - name: WORDPRESS_DB_HOST
              value: "127.0.0.1:3306"
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: "cloudsql-db-credentials"
                  key: username
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "cloudsql-db-credentials"
                  key: password
         ports:
           - containerPort: 80
             name: wordpress
           - containerPort: 443
             name: ssl

This would be enough to run WordPress, but without the database or the persistent NFS. One by one. Let's add the Cloud SQL Proxy as a second entry in the containers list:

       - name: cloudsql-proxy
         image: gcr.io/cloudsql-docker/gce-proxy:1.11
         command: ["/cloud_sql_proxy",
                   "-instances=[YOUR INSTANCESTRING THAT WE LOOKED UP]=tcp:3306",
                   "-credential_file=/secrets/cloudsql/credentials.json"]
         securityContext:
           runAsUser: 2  # non-root user
           allowPrivilegeEscalation: false
         volumeMounts:
           - name: cloudsql-instance-credentials
             mountPath: /secrets/cloudsql
             readOnly: true
     volumes:
       - name: cloudsql-instance-credentials
         secret:
           secretName: cloudsql-instance-credentials

Cool, now we can reach our Cloud SQL instance via localhost :) The sidecar is simply a second container in your pod that proxies everything coming in on port 3306 to our Cloud SQL instance, without exposing the traffic to the public net.
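To check that the sidecar came up cleanly, you can look at its logs; the proxy prints a ready message once it is listening:

kubectl logs deployment/wordpress -c cloudsql-proxy
# look for "Ready for new connections"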

And now we want to mount our wp-content directory onto the NFS. These lines belong to the same Deployment: volumeMounts under the wordpress container, volumes at the pod spec level:

          volumeMounts:
            - name: my-pvc-nfs
              mountPath: "/var/www/html/wp-content"
      volumes:
        - name: my-pvc-nfs
          nfs:
            server: 10.3.240.20
            path: "/"

Now you might say: but Mario, why the heck do you put in a fixed IP for the NFS? There is a reason: this is the only case I know of where the internal DNS does not work properly. The NFS mount is performed by the kubelet on the node, outside the pod's network, so the Service name cannot reliably be resolved.

And that's it. Now we can scale our pods by creating an HPA:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
 name: wordpress
 namespace: default
spec:
 maxReplicas: 10
 metrics:
   - resource:
       name: cpu
       targetAverageUtilization: 50
     type: Resource
 minReplicas: 3
 scaleTargetRef:
   apiVersion: apps/v1
   kind: Deployment
   name: wordpress
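One caveat: the HPA computes CPU utilization against the container's CPU request, and the Deployment above does not declare one yet. Add something like this to the wordpress container spec; the values are illustrative, size them for your traffic:

          resources:
            requests:
              cpu: 250m
              memory: 256Mi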

All our wp-content files go to the NFS and are shared between the instances. Yes, you are right: the NFS server is now our single point of failure, but a single NFS server is far more robust than running the whole application on one machine. If you add caching like Redis or tune the PHP cache, you can reduce the load time even further.

Cool isn’t it?

Are you interested in basic Kubernetes / Cloud walkthroughs? Just let me know

Posted on Mar 10 '19 by Mario (@mfahlandt)

Tech freak, organizer of GDG Munich Cloud; working with React, Node.js, and a lot of cloud, Kubernetes, and Docker. Developer @ Königspunkt

Discussion

What is the purpose of mountPath: /exports in the NFS server? Similarly, can we use any directory, like the mountPath "/var/www/html/wp-content" you listed in the pods?

I tried a different directory, "/home/my-folder", as the mountPath in a pod and it failed. Any idea?

 

Hi Mario!

Thanks for your walkthrough,

I am very interested in doing this setup, but I am kind of new to Kubernetes and I find some steps hard to follow

What do I do with

volumeMounts:
  - name: my-pvc-nfs
    mountPath: "/var/www/html/wp-content"
volumes:
  - name: my-pvc-nfs
    nfs:
      server: 10.3.240.20
      path: "/"

I think the header is missing :/

 

Hello!

Thank you for your post!! It kicked me in the right direction for a similar setup.

I've run into a few challenges:
You are missing the storage class name in the volume claim description.
Also, the volume claim got stuck in a weird "pending" loop; I tracked the error down to specifying the volume name (the last line of the volume claim).

I did not use the Cloud SQL Proxy, since my Cloud SQL instances have a private internal IP, so WordPress can use that IP directly.

There's one thing I could not fix yet... WordPress can't write to the mounted wp-content folder; I know it must be a permissions problem. The fix could be to extend the default WordPress Docker image and run a chown there, but I thought I'd ask you, maybe you have a clever idea!!

Thanks again!