Tomas Fernandez for Semaphore

Originally published at semaphoreci.com

Google Cloud Kubernetes in 10 Minutes

In the beginning Google created Kubernetes. “Let it be open source,” Google said, and the sources opened. And Google saw it was good. All kidding aside, if anyone knows how to run Kubernetes, it’s Google.

In this hands-on post, we’ll learn how to continuously deliver a demo application to Google Kubernetes Engine using Semaphore CI/CD. By the end of this read, you’ll have a better understanding of how Kubernetes works and, even better, a continuous delivery pipeline to play with.

How Do Deployments Work in Kubernetes?

A Kubernetes deployment is like one of those Russian dolls. The application lives inside a Docker container, which is inside a pod, which takes part in the deployment.

A pod is a group of Docker containers running on the same node and sharing resources. Pods are ephemeral—they are meant to be started and stopped as needed. To get a stable public IP address, Kubernetes provides a load balancing service that forwards incoming requests to the pods.

The most straightforward way to define a deployment is to write a manifest like the one I present below.

First, we have the deployment resource, which holds and controls the pods. Deployments have a name and a spec, which defines the final desired state:

  • Replicas: how many pods to create. Set the number to match the number of nodes in your cluster. For instance, I’m using three nodes, so I’ll set replicas: 3.
  • spec.containers: defines the Docker image running in the pods. We're going to upload the image to a Google private registry and pull it from there.
  • Labels: key-value pairs that we attach to pods. We can then use matchLabels to relate the deployment to its pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-nodejs-k8s-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: semaphore-demo-nodejs-k8s-server
  template:
    metadata:
      labels:
        app: semaphore-demo-nodejs-k8s-server
    spec:
      containers:
        - name: semaphore-demo-nodejs-k8s-server
          image: gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:$SEMAPHORE_WORKFLOW_ID
          env:
            - name: NODE_ENV
              value: "production"

The final piece of the manifest is the service. A load balancer service exposes a stable public IP for our users to connect to. We tell Kubernetes that the service will serve the pods labeled as app: semaphore-demo-nodejs-k8s-server.

apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-nodejs-k8s-lb
spec:
  selector:
    app: semaphore-demo-nodejs-k8s-server
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 3001

Before we can use this manifest, we need two things:

  • Push a Docker image to Google's private registry.
  • Send the manifest to the cluster.
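
Done by hand, those two steps would look roughly like the sketch below; YOUR_PROJECT_ID and the v1 tag are placeholders, and the pipelines we build next automate exactly this:

$ gcloud auth configure-docker -q
$ docker build -t gcr.io/YOUR_PROJECT_ID/semaphore-demo-nodejs-k8s-server:v1 .
$ docker push gcr.io/YOUR_PROJECT_ID/semaphore-demo-nodejs-k8s-server:v1

# The manifest references environment variables, so expand them before applying
$ export GCP_PROJECT_ID=YOUR_PROJECT_ID SEMAPHORE_WORKFLOW_ID=v1
$ envsubst < deployment.yml | kubectl apply -f -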

We’ll take care of both in the next section. But first, let’s create the necessary services on Google Cloud.

CI/CD and Kubernetes

We’ll use Semaphore to run our Continuous Integration and Delivery workflow:

  • Continuous Integration: the tiniest error can bring down a site and crash an application. We’ll put the code through a Continuous Integration pipeline that can weed out the bugs before they creep into our deployment.
  • Dockerize: generates a Docker image for each update so that we can track the exact version running in production and roll back or forward in seconds.
  • Deploy: 100% automated deployment to Google Kubernetes Engine. No human intervention means more reliable and frequent releases.

Since I wish to focus on the Kubernetes deployment, I’ll skip the Continuous Integration section altogether. If you are curious and would like to examine how it works in detail, you can read about it in the full demo tutorial on the Semaphore blog.

Getting Things Ready

You’ll need to sign up for a few services: Google Cloud Platform will be our cloud provider, GitHub will host the application code, and Semaphore will take care of CI/CD. I also recommend installing the Semaphore CLI (sem) for a quick setup.

Go to your Google Cloud Platform and:

  1. Create a project. By default, Google assigns a random name, but you can change it using the Edit button. I prefer using something more descriptive like semaphore-demo-nodejs-k8s.
  2. Go to IAM and create a Service account. The account should be Owner of the project. Once created, create and download the key file in JSON format.
  3. In the Kubernetes Engine, create a cluster named semaphore-demo-nodejs-k8s-server. You may choose how many nodes and the size of each machine. Three nodes are enough to get a taste of Kubernetes. The smallest machine will do for this demo.
  4. Go to the SQL console, and create a PostgreSQL database in the same region as the cluster. Enable the Private IP network. You can also enable the Public IP and whitelist yourself to connect remotely. Take note of the IP that Google assigned to your db.
  5. Create a database named demo and a username called demouser.

I know it’s a lot of work. The good news is that you only have to do it once.
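
If you prefer the command line, the console steps map roughly to the gcloud commands below. Treat this as a sketch rather than a recipe: project IDs are globally unique so yours may differ, billing must already be linked to the project, the zone, machine type, tier, and the semaphore-demo-db instance name are my own placeholders, and the database's Private IP networking is easier to configure in the console.

$ gcloud projects create semaphore-demo-nodejs-k8s
$ gcloud config set project semaphore-demo-nodejs-k8s

# Service account with the Owner role, plus a JSON key file
$ gcloud iam service-accounts create semaphore-ci
$ gcloud projects add-iam-policy-binding semaphore-demo-nodejs-k8s \
    --member "serviceAccount:semaphore-ci@semaphore-demo-nodejs-k8s.iam.gserviceaccount.com" \
    --role "roles/owner"
$ gcloud iam service-accounts keys create ~/gcp-key.json \
    --iam-account semaphore-ci@semaphore-demo-nodejs-k8s.iam.gserviceaccount.com

# Three-node cluster on small machines
$ gcloud container clusters create semaphore-demo-nodejs-k8s-server \
    --num-nodes 3 --machine-type g1-small --zone us-central1-a

# PostgreSQL instance, database, and user
$ gcloud sql instances create semaphore-demo-db \
    --database-version POSTGRES_9_6 --tier db-f1-micro --region us-central1
$ gcloud sql databases create demo --instance semaphore-demo-db
$ gcloud sql users create demouser --instance semaphore-demo-db --password YOUR_DB_PASSWORD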

Finally, fork the demo.

semaphoreci-demos / semaphore-demo-nodejs-k8s

A Semaphore demo CI/CD pipeline using Node.js and Kubernetes

Example application and CI/CD pipeline showing how to run a Node.js project on Semaphore 2.0.

The application is based on Nest.js. The code is written in TypeScript.

The application is deployed to Google Cloud Kubernetes.

CI/CD on Semaphore

  1. Fork this repository and use it to create a project.

  2. Create a project on Google Cloud: semaphore-demo-nodejs-k8s

  3. Create Kubernetes cluster on Google Cloud: semaphore-demo-nodejs-k8s-server

  4. Create PostgreSQL db on Google Cloud.

  5. Create database demo and user demouser.

  6. Copy environment files and edit db hostname and db password:

    $ cp ormconfig.sample.json /tmp/ormconfig.production.json
    $ cp sample.env /tmp/production.env
  7. Upload environment files as a secret:

    $ sem create secret production-env \
        -f /tmp/ormconfig.production.json:/home/semaphore/ormconfig.production.json \
        -f /tmp/production.env:/home/semaphore/production.env
  8. Create Service Account in IAM:

    • Role: project owner
    • Create and download access key JSON file.
  9. Upload Access Key to Semaphore as a secret:

    $ sem create secret gcr-secret \
        -e GCP_PROJECT_ID=semaphore-demo-nodejs-k8s \
        -e GCP_PROJECT_DEFAULT_ZONE=YOUR_GCP_ZONE \
        -f YOUR_GCP_ACCESS_KEY_FILE.json:/home/semaphore/.secrets.gcp.json

Clone it, and add it to Semaphore:

$ cd semaphore-demo-nodejs-k8s
$ sem init
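
Note that sem init expects the CLI to already be connected to your Semaphore organization. If you haven't done that yet, the connection command looks like this, with the organization URL and API token taken from your Semaphore account:

$ sem connect YOUR_ORGANIZATION.semaphoreci.com YOUR_API_TOKEN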

The application implements an API endpoint. It is written in TypeScript with the Nest.js framework.

Dockerize Pipeline

This pipeline prepares the Docker image, which is then pushed into Google’s Private Container Registry.

Promotion and Docker Build

Shall we see how it works? Open .semaphore/docker-build.yml.

At the start of the file, we have a name for the pipeline and the agent. The agent tells Semaphore which machine type and OS image will run the jobs:

version: v1.0
name: Docker build server
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Blocks and Jobs organize the pipeline execution order. Blocks are executed sequentially, one after the other. Jobs within a block are executed in parallel. If any command in a job fails, the pipeline stops.

Here is the Docker Build block:



blocks:
  - name: Build
    task:
      secrets:
        - name: gcr-secret
        - name: production-env

      prologue:
        commands:
          - gcloud auth activate-service-account --key-file=.secrets.gcp.json
          - gcloud auth configure-docker -q
          - gcloud config set project $GCP_PROJECT_ID
          - gcloud config set compute/zone $GCP_PROJECT_DEFAULT_ZONE
          - checkout

      jobs:
      - name: Docker build
        commands:
          - cp /home/semaphore/ormconfig.production.json ormconfig.json
          - cp /home/semaphore/production.env production.env

          - docker pull "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:latest" || true
          - docker build --cache-from "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:latest" -t "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:$SEMAPHORE_WORKFLOW_ID" .
          - docker images
          - docker push "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:$SEMAPHORE_WORKFLOW_ID"

The prologue is executed before each job. Here, it sets up gcloud to work with the project. First, we get authorized with gcloud auth. Next, we configure gcloud as a Docker credential helper, so we can push to the private registry. Finally, we set the active project and zone for the session.

Checkout clones the repository.

The build job copies the configuration files into the build context so they end up inside the Docker image, builds the image, and pushes it to the registry.
To tag the image, we use $SEMAPHORE_WORKFLOW_ID, which is guaranteed to be unique for every workflow.

At this point, you may be wondering where those config files and environment variables came from. They were imported with the secrets keyword. Semaphore's secrets mechanism lets us store sensitive data securely, outside the repository. We'll create the secrets in a moment.

Once this pipeline is done, we can link it up to the next one with a promotion:

promotions:
  - name: Deploy server to Kubernetes
    pipeline_file: deploy-k8s.yml
    auto_promote_on:
      - result: passed

Secrets and Environment Files

We need to pass the database username and password to the server. We’ll use two files for this; both contain more or less the same variables:

  • environment: a regular bash file with environment variables.
  • ormconfig: a config file for TypeORM, the database ORM for our project.

Since the files contain sensitive information, we shouldn’t check them into GitHub. Instead, we upload them to Semaphore as secrets. Secrets in Semaphore are encrypted and made available only to the jobs that request them.

Copy the provided sample configs outside your repository, for instance to your /tmp directory:

$ cp ormconfig.sample.json /tmp/ormconfig.production.json
$ cp sample.env /tmp/production.env

Edit ormconfig.production.json. Replace the host and password values with your database IP address and the password for your demouser. The first part of the file should look like:

{
  "type": "postgres",
  "host": "YOUR_DB_IP",
  "port": 5432,
  "username": "demouser",
  "password": "YOUR_DB_PASSWORD",
  "database": "demo",

  . . .

Edit production.env. Set NODE_ENV=production, leave PORT unmodified, and change DATABASE_HOST, DATABASE_PASSWORD, and DATABASE_PORT as appropriate:

NODE_ENV=production
PORT=3001
URL_PREFIX=v1/api
DATABASE_HOST=YOUR_DB_IP
DATABASE_USER=demouser
DATABASE_PASSWORD=YOUR_DB_PASSWORD
DATABASE_DBNAME=demo
DATABASE_PORT=5432

Upload both files to Semaphore as a secret called production-env:

$ sem create secret production-env \
    -f /tmp/production.env:/home/semaphore/production.env \
    -f /tmp/ormconfig.production.json:/home/semaphore/ormconfig.production.json

We have to create a second secret to store the Google-related information and the service account JSON key:

$ sem create secret gcr-secret \
   -e GCP_PROJECT_ID=semaphore-demo-nodejs-k8s \
   -e GCP_PROJECT_DEFAULT_ZONE=YOUR_GCP_ZONE \
   -f YOUR_GCP_ACCESS_KEY_FILE.json:/home/semaphore/.secrets.gcp.json
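
As a quick sanity check, both secrets should now show up when you list them with the CLI:

$ sem get secrets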

Deployment Pipeline

With a Docker image on hand, we are ready to run it on our Kubernetes cluster.

Take a look at the deployment pipeline at .semaphore/deploy-k8s.yml. It’s made of two blocks, each with a single job.

Deploy Pipeline and Promotion

We’ve already seen most of the gcloud commands in the prologue. The only new one is gcloud container clusters get-credentials, which retrieves the Kubernetes config file that the following kubectl commands need.

With envsubst we expand the environment variables in place; the result should be a plain YAML file. The only thing left is to send the manifest to our cluster with kubectl apply:

blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: gcr-secret

      env_vars:
        - name: CLUSTER_NAME
          value: semaphore-demo-nodejs-k8s-server

      prologue:
        commands:
          - gcloud auth activate-service-account --key-file=.secrets.gcp.json
          - gcloud auth configure-docker -q
          - gcloud config set project $GCP_PROJECT_ID
          - gcloud config set compute/zone $GCP_PROJECT_DEFAULT_ZONE
          - gcloud container clusters get-credentials $CLUSTER_NAME --zone $GCP_PROJECT_DEFAULT_ZONE --project $GCP_PROJECT_ID
          - checkout

      jobs:
      - name: Deploy
        commands:
          - cat deployment.yml | envsubst | tee deployment.yml
          - kubectl apply -f deployment.yml
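
If you want to preview what actually gets applied, you can run the same expansion locally with placeholder values; the tag below is made up, while in the job Semaphore provides the real workflow ID:

$ export GCP_PROJECT_ID=semaphore-demo-nodejs-k8s
$ export SEMAPHORE_WORKFLOW_ID=test-tag
$ envsubst < deployment.yml | grep "image:"
          image: gcr.io/semaphore-demo-nodejs-k8s/semaphore-demo-nodejs-k8s-server:test-tag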

At this point, we’re almost done. To mark that this image is the one running on the cluster, we have the last block that tags the image as latest. Here are the relevant commands:

        - docker pull "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:$SEMAPHORE_WORKFLOW_ID"
        - docker tag "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:$SEMAPHORE_WORKFLOW_ID" "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:latest"
        - docker push "gcr.io/$GCP_PROJECT_ID/semaphore-demo-nodejs-k8s-server:latest"

Ready to do a trial run? Push the modifications and watch the pipelines go.

$ git add deployment.yml
$ git add .semaphore/*
$ git commit -m "first deployment"
$ git push origin master

You can check the progress of the jobs from your Semaphore account. Wait a few minutes until all the pipelines are done. Hopefully, everything is green, and we can check the cluster state now.

Here’s the easiest way of connecting to the cluster: from your Google Cloud Console, go to:

  • Kubernetes Engine > Clusters > Select your cluster > Connect button

You’ll get an in-browser terminal that is connected to the project.
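
If you'd rather work from your local terminal instead, you can fetch the cluster credentials with the same command the pipeline uses, filling in the zone and project ID you configured earlier:

$ gcloud container clusters get-credentials semaphore-demo-nodejs-k8s-server \
    --zone YOUR_GCP_ZONE --project semaphore-demo-nodejs-k8s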

Let’s check the cluster. First, check the pods:

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
semaphore-demo-nodejs-k8s-server-6b95cf5dfd-hgrmn   1/1     Running   0          97s
semaphore-demo-nodejs-k8s-server-6b95cf5dfd-jgc9p   1/1     Running   0          112s
semaphore-demo-nodejs-k8s-server-6b95cf5dfd-r29gc   1/1     Running   0          105s

Each pod has been assigned a different name; all of them are running a copy of our application.
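
If a pod is stuck in a state other than Running, its logs and events are the first place to look; for example, using one of the pod names from the listing above:

$ kubectl logs semaphore-demo-nodejs-k8s-server-6b95cf5dfd-hgrmn
$ kubectl describe pod semaphore-demo-nodejs-k8s-server-6b95cf5dfd-hgrmn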

Next, let’s check the deployment:

$ kubectl get deployment
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-nodejs-k8s-server   3/3     3            3           12m

The deployment controls the three pods. We’re told all three pods are available and up to date.

Finally, let’s check the service:

$ kubectl get service
NAME                                  TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
kubernetes                            ClusterIP      10.36.0.1     <none>          443/TCP        71m
semaphore-demo-nodejs-k8s-server-lb   LoadBalancer   10.36.9.180   104.154.96.87   80:32217/TCP   12m

Ignore the kubernetes ClusterIP service; it was there before the deployment. We can connect to the application using the EXTERNAL-IP of the LoadBalancer service.

Let’s test the application with curl:

$ curl -w "\n" -X POST -d \
  "username=jimmyh&firstName=Johnny&lastName=Hendrix&age=30&description=Burn the guitar" \
  http://YOUR_EXTERNAL_IP/v1/api/users
{
    "username": "jimmyh",
    "description": "Burn the guitar",
    "age": "30",
    "firstName": "Johnny",
    "lastName": "Hendrix",
    "id": 1,
    "createdAt": "2019-08-05T20:45:48.287Z",
    "updatedAt": "2019-08-05T20:45:48.287Z"
}

The API endpoint accepts GET, POST and DELETE requests:

$ curl -w "\n" http://YOUR_EXTERNAL_IP/v1/api/users/1
{
    "id": 1,
    "username": "jimmyh",
    "description": "Burn the guitar",
    "firstName": "Johnny",
    "lastName": "Hendrix",
    "age": 30,
    "createdAt": "2019-08-05T20:45:48.287Z",
    "updatedAt": "2019-08-05T20:45:48.287Z"
}
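
Assuming deletes follow the same /v1/api/users/:id pattern, removing the user we just created would look like this:

$ curl -w "\n" -X DELETE http://YOUR_EXTERNAL_IP/v1/api/users/1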

Conclusion

It's been a long ride, but hopefully a smooth one. Now you know how to build a CI/CD pipeline for Google Cloud Kubernetes Engine.

Some ideas to play with:

  • Create a staging cluster.
  • Build a development container and run tests inside it.
  • Enhance your manifest with rolling updates (see the kubectl rollout commands below for a starting point).
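
For the rolling-update idea, kubectl already provides commands to watch, inspect, and undo a rollout of the deployment we created:

$ kubectl rollout status deployment/semaphore-demo-nodejs-k8s-server
$ kubectl rollout history deployment/semaphore-demo-nodejs-k8s-server
$ kubectl rollout undo deployment/semaphore-demo-nodejs-k8s-server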

Using AWS instead of Google? Check out the equivalent tutorial for AWS on the Semaphore blog.

If you wish to learn more about how Semaphore works with Kubernetes and Docker, check out the related guides on the Semaphore blog.

Did you find the post useful? Hit those ❤️ and 🦄, follow me or leave a comment below.

Interested in CI/CD and Kubernetes? We’re working on a free ebook; sign up to receive it as soon as it’s published.

Thanks for reading!
