
Deploying Microservices with Google Cloud Platform's GKE

Stack used:

  • Frontend: React - Next.js
  • Backend: Node - Express.js
  • DB: MongoDB Atlas
  • RabbitMQ

To Begin with:

  • Have Docker Desktop and kubectl already installed.
  • Create an account with Google Cloud.
  • Verify your payment method and get the free credit of about $300.

  • In the Google Cloud console, create a project (a CLI sketch for this step follows below).
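If you prefer the CLI, a rough gcloud equivalent looks like this (the project ID wenet-project is just a placeholder; pick your own):

# Create a project and make it the default for later gcloud commands
gcloud projects create wenet-project
gcloud config set project wenet-project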

Create a new Kubernetes cluster:

Select Standard mode and configure the cluster (name, zone, and so on).
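If you'd rather do this step from the CLI, here is a hedged sketch (the cluster name, zone and machine type are assumptions; adjust them to your own setup):

# Create a Standard-mode GKE cluster with a small default node pool
gcloud container clusters create wenet-cluster \
  --zone asia-south1-a \
  --machine-type n1-standard-2 \
  --num-nodes 1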

Adding a node pool to your cluster

Before adding a node pool, calculate how much CPU and RAM you need based on the number of services you have:

In this project we have :

  • Frontend
  • 5 different backend servers
  • RabbitMQ as the message broker
  • Nginx as the load balancer
  • Also include any database you are going to use.

In this project we don't run the database inside the cluster, because we use MongoDB's cloud service, MongoDB Atlas.

Below is an example calculation done with ChatGPT:

Example Calculation
Suppose each backend service requires 0.5 CPU and 1 GB of RAM, and the frontend, RabbitMQ, and Nginx each require 1 CPU and 2 GB of RAM:

Backend Services:
5 services × (0.5 CPU + 1 GB RAM) = 2.5 CPU + 5 GB RAM
Frontend:
1 CPU + 2 GB RAM
RabbitMQ:
1 CPU + 2 GB RAM
Nginx:
1 CPU + 2 GB RAM
Total estimated requirements:

CPU: 5.5
RAM: 11 GB
If you choose n1-standard-2 nodes (which have 2 CPUs and 7.5 GB RAM each):

Each node can handle: 2 CPU + 7.5 GB RAM
Number of nodes required:
CPU: ceil(5.5 / 2) = 3 nodes
RAM: ceil(11 / 7.5) = 2 nodes
Since the CPU requirement gives the higher number, you would start with 3 nodes to satisfy both the CPU and RAM requirements.
So from this calculation, I chose to go with 3 nodes.

To add the node pool (a CLI sketch follows after this list):

  • Go to Kubernetes Clusters
  • Select your cluster
  • Inside that cluster, click on "ADD NODE POOL"
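The same step can also be scripted with gcloud; a sketch assuming the cluster name and zone used in this post (the pool name wenet-pool is a placeholder):

# Add a pool of three n1-standard-2 nodes to the existing cluster
gcloud container node-pools create wenet-pool \
  --cluster wenet-cluster \
  --zone asia-south1-a \
  --machine-type n1-standard-2 \
  --num-nodes 3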

Creating images of your frontend and backend servers and pushing them to Docker Hub

Create a Dockerfile and a .dockerignore file inside each service folder and the frontend folder, if you haven't done so already.

This is my Dockerfile for the frontend.

There are several ways to supply environment variables. In the frontend Dockerfile, we pass the env values as build arguments.

/frontend/Dockerfile

FROM node:20.8.0

WORKDIR /app

# Define build argument for JWT key
ARG JWT_KEY

# Set the environment variable
ENV JWT_KEY=$JWT_KEY

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

EXPOSE 3000

CMD ["npm", "start" ]

Also add a .dockerignore file listing the files/folders to be excluded while building the image; an example follows.
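A minimal .dockerignore for the frontend might look like this (typical entries for a Node/Next.js project; adjust for your repo):

/frontend/.dockerignore

node_modules
.next
.env
.git
Dockerfile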

Building the image:

/frontend

docker build --build-arg JWT_KEY=myjwtkey -t listonfermi/wenet-frontend .

docker build --build-arg JWT_KEY=JWTKeyGoesHere -t dockerusername/image-name .

Pushing the image to Docker Hub:

docker push listonfermi/wenet-frontend

docker push dockerusername/image-name
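If the push is rejected with an authentication error, log in to Docker Hub first with your own username:

docker login -u dockerusername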

The same process applies to the backend servers.

Before building an image for a backend server, make sure you have the relevant scripts in package.json and a matching tsconfig.json.

package.json - scripts

"scripts": {
    "dev": "nodemon",
    "build": "tsc",
    "start": "node ./dist/server.js"
  },


tsconfig.json

{
  "compilerOptions": {
    "target": "es2016",
    "module": "commonjs", /* Specify what module code is generated. */
    "esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
    "forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
    "strict": true, /* Enable all strict type-checking options. */
    "skipLibCheck": true, /* Skip type checking all .d.ts files. */
    "outDir": "./dist"
  }
}

/user-service/Dockerfile:

FROM node:20.8.0

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

EXPOSE 5001

CMD ["node", "dist/server.js"]

Building the image:

/user-service:
docker build -t listonfermi/wenet-user-service .

docker build -t dockerusername/image-name .

Pushing the image to Docker Hub:

docker push listonfermi/wenet-user-service

docker push dockerusername/image-name

Here, too, you can pass the env as build arguments if you wish.

Since we built the user-service image without the env, we can supply it through a Kubernetes ConfigMap or Secret in the Google Cloud cluster.
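For example, the Secret route would look roughly like this (run inside /user-service; the Secret name mirrors the ConfigMap name used later and is just an assumption):

# Create a Secret from the service's .env file
kubectl create secret generic user-service-env --from-env-file=.env

In the deployment manifest you would then reference it with envFrom: secretRef instead of configMapRef. In this post we go with the ConfigMap approach.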

Creating manifest files

In Kubernetes, manifest files are text files in JSON or YAML format that describe the desired state of API objects in a cluster.

The Deployment & Service files for the frontend, user-service and RabbitMQ should look like this:

/k8s-manifests/frontend-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: listonfermi/wenet-frontend
          envFrom:
            - configMapRef:
                name: frontend-env # we'll create a configmap named frontend-env inside the GKE cluster
---
apiVersion: v1
kind: Service
metadata: 
  name: frontend-srv
spec:
  selector:
    app: frontend
  ports:
    - name: frontend-ports
      protocol: TCP
      port: 3000
      targetPort: 3000

/k8s-manifests/rabbitmq-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-depl
  labels:
    app: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3-management
          ports:
            - containerPort: 5672 # RabbitMQ main port
            - containerPort: 15672 # RabbitMQ management plugin port
          volumeMounts:
            - name: rabbitmq-data
              mountPath: /var/lib/rabbitmq
      volumes:
        - name: rabbitmq-data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
spec:
  selector:
    app: rabbitmq
  ports:
    - name: amqp
      protocol: TCP
      port: 5672
      targetPort: 5672
    - name: management
      protocol: TCP
      port: 15672
      targetPort: 15672
  type: ClusterIP

/k8s-manifests/user-service-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service-container
          image: listonfermi/wenet-user-service
          envFrom:
            - configMapRef:
                name: user-service-env
---
apiVersion: v1
kind: Service
metadata:
  name: user-service-srv
spec:
  selector:
    app: user-service
  ports:
    - name: user-service-ports
      protocol: TCP
      port: 5001
      targetPort: 5001

Creating the manifest for the nginx-ingress load balancer

We need a load balancer to get an external IP address for reaching the cluster, and to route incoming requests to the proper paths.

/k8s-manifests/ingress-nginx-depl.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-controller
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: "nginx"
  rules:
    - http:
        paths:
          - path: /api/user-service/
            pathType: Prefix
            backend:
              service:
                name: user-service-srv
                port:
                  number: 5001
          - path: /api/posts-service/
            pathType: Prefix
            backend:
              service:
                name: posts-service-srv
                port:
                  number: 5002
          - path: /socket.io
            pathType: Prefix
            backend:
              service:
                name: message-service-srv
                port:
                  number: 5003
          - path: /api/message-service/
            pathType: Prefix
            backend:
              service:
                name: message-service-srv
                port:
                  number: 5003
          - path: /api/notification-service/
            pathType: Prefix
            backend:
              service:
                name: notification-service-srv
                port:
                  number: 5004
          - path: /api/ads-service/
            pathType: Prefix
            backend:
              service:
                name: ads-service-srv
                port:
                  number: 5005
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-srv
                port:
                  number: 3000

Applying these manifests

Before applying these manifests, we need to switch our Kubernetes context: the YAML files will only be applied to our gcloud cluster if kubectl is pointing at that cluster's context.

You can check the current Kubernetes context with this command:

kubectl config get-contexts

Or from the Kubernetes context menu in Docker Desktop.

Changing the context to our GCP cluster:

Official doc for configuring cluster access

Download the gcloud CLI via the gcloud installer, or follow the steps in the official installation doc.

From your terminal, run the following gcloud command. It fetches the cluster's credentials and creates a new context in your kubeconfig (you'll also see it listed in Docker Desktop):

gcloud container clusters get-credentials wenet-cluster --zone=asia-south1-a

gcloud container clusters get-credentials <your-cluster-name> --zone <your-zone> --project <your-project-id>

Every kubectl command we use from now on will be executed against our gcloud cluster.
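A quick way to confirm kubectl is now pointing at the GKE cluster:

# Should print the gke_... context and list the cluster's nodes
kubectl config current-context
kubectl get nodes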

Adding configmaps to the gcloud cluster:

We need to create the ConfigMaps in the gcloud cluster that the deployments reference.

Here, we create a ConfigMap from the .env file in the user-service folder:

/user-service:
kubectl create configmap user-service-env --from-env-file=.env

We can do the same for the env of every backend service; a loop sketch follows below.
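A rough sketch for doing this in one go, assuming you run it from the repo root and each service folder (names taken from this project) has its own .env file:

# One ConfigMap per backend service, named <service>-env to match the deployment manifests
for svc in user-service posts-service message-service notification-service ads-service; do
  kubectl create configmap "${svc}-env" --from-env-file="./${svc}/.env"
done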

Applying the deployment manifests:

/k8s-manifests/
kubectl apply -f frontend-depl.yaml

kubectl apply -f user-service-depl.yaml

kubectl apply -f rabbitmq-depl.yaml

Installing ingress-nginx and applying our ingress manifest:

The install command below is from the official ingress-nginx docs:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/cloud/deploy.yaml

kubectl apply -f ingress-nginx-depl.yaml

Now deployments and services will be created in the cluster.

Check for any errors in the Google Cloud console; a few kubectl debugging commands are sketched below.
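These standard kubectl commands are handy for debugging (pod names will differ in your cluster):

# List pods, then inspect any that are not Running
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>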

In the Google Cloud console, open 'Gateways, Services & Ingress' and select the Ingress tab. The ingress controller is listed there along with its external IP address; open that IP address in your browser.

Check whether the application works as expected.
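The external IP can also be read from the CLI (the service and namespace names below are the defaults created by the ingress-nginx manifest applied earlier):

# Address assigned to our Ingress, and the LoadBalancer service of the ingress-nginx controller
kubectl get ingress ingress-controller
kubectl get svc ingress-nginx-controller -n ingress-nginx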

Setting the DNS for the domain

You can buy a domain from any registrar, such as GoDaddy, Hostinger, etc.

Point the domain's DNS (A record) to the ingress's external IP address.

The DNS settings can be changed from the registrar's DNS management page (GoDaddy, in this project).

Once DNS propagates, check that the application works through the domain.
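A quick check that DNS has propagated and the site answers over plain HTTP (substitute your own domain):

# The A record should resolve to the ingress IP; curl should return the frontend's headers
dig +short wenet.life
curl -I http://wenet.life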

Getting SSL certificate

For the SSL certificate, we create ClusterIssuer and Certificate YAML files. We're using Let's Encrypt's ACME API (through cert-manager).

/k8s-manifests/letsencrypt-prod.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: listonfermi@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx

k8s-manifests/certificate.yaml

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wenet-life-tls
  namespace: default
spec:
  secretName: wenet-life-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: wenet.life
  dnsNames:
    - wenet.life

Apply these files in the GKE cluster.

Apply cert-manager first and give its pods a moment to become Ready, then apply the ClusterIssuer and the Certificate (the Certificate references the ClusterIssuer):

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

kubectl apply -f letsencrypt-prod.yaml

kubectl apply -f certificate.yaml
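To confirm cert-manager is running and the certificate was issued (resource names match the manifests above):

# cert-manager pods should be Running; the Certificate should eventually report READY=True
kubectl get pods -n cert-manager
kubectl get clusterissuer letsencrypt-prod
kubectl get certificate wenet-life-tls
kubectl describe certificate wenet-life-tls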

After these apply successfully, update ingress-nginx-depl.yaml: add the cert-manager and CORS annotations, set the host under rules, and add the tls section.

/k8s-manifests/ingress-nginx-depl.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-controller
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://wenet.life, https://wenet.life"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: "nginx"
  rules:
    - host: wenet.life
      http:
        paths:
          - path: /.well-known/acme-challenge/
            pathType: ImplementationSpecific
            backend:
              service:
                name: cm-acme-http-solver-<random>
                port:
                  number: 8089
          - path: /api/user-service/
            pathType: Prefix
            backend:
              service:
                name: user-service-srv
                port:
                  number: 5001
          - path: /api/posts-service/
            pathType: Prefix
            backend:
              service:
                name: posts-service-srv
                port:
                  number: 5002
          - path: /socket.io
            pathType: Prefix
            backend:
              service:
                name: message-service-srv
                port:
                  number: 5003
          - path: /api/message-service/
            pathType: Prefix
            backend:
              service:
                name: message-service-srv
                port:
                  number: 5003
          - path: /api/notification-service/
            pathType: Prefix
            backend:
              service:
                name: notification-service-srv
                port:
                  number: 5004
          - path: /api/ads-service/
            pathType: Prefix
            backend:
              service:
                name: ads-service-srv
                port:
                  number: 5005
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-srv
                port:
                  number: 3000
  tls:
  - hosts:
    - wenet.life
    secretName: wenet-life-tls

Apply the ingress-nginx-depl.yaml file again:

kubectl apply -f ingress-nginx-depl.yaml

Check that the domain works over https://; a quick curl check is sketched below.
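A quick curl check over HTTPS (again, substitute your own domain):

# -I fetches headers only; add -v to also inspect the TLS handshake and certificate
curl -I https://wenet.life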
