
Amanda Guan

Recap of "Getting Started with Kubernetes" Course

I recently studied the Getting Started with Kubernetes course, which provided an insightful introduction to Kubernetes, focusing on basic concepts, microservices, containerization, and practical steps for deploying applications both locally and in the cloud. In this article, I’ll recap the essential parts of the course, integrating illustrations to visualize key concepts, and share some practical code snippets and terminal commands to help reinforce the learning.

What Is Kubernetes?

Kubernetes, often abbreviated as K8s, is an open-source platform that automates the deployment, scaling, and management of containerized applications. It plays a pivotal role in orchestrating containers to ensure that applications run reliably and efficiently, especially in complex environments.

What Are Microservices?

One of the key concepts emphasized in the course is microservices—a design pattern that allows for fault isolation and independent management of different application features. Microservices enable faster development and iteration by smaller, specialized teams, although they can add complexity to the management of applications. Kubernetes simplifies this by orchestrating these microservices efficiently.

What Is Cloud Native?

Cloud-native applications are specifically designed to leverage the full advantages of cloud computing. This includes capabilities like scaling on demand, self-healing, supporting zero-downtime rolling updates, and running consistently across any environment using Kubernetes. The course highlights how Kubernetes fits perfectly into the cloud-native paradigm, making it an essential tool for modern application development.

Why Do We Need Kubernetes?

The course provides a clear explanation of why Kubernetes is crucial for managing modern applications. Kubernetes organizes microservices, scales applications automatically, self-heals by replacing failed containers, and allows for seamless updates with zero downtime. These features make Kubernetes indispensable for managing complex microservices architectures.

Illustration: Microservices Architecture


What Does Kubernetes Look Like?

Masters and Nodes

A Kubernetes cluster consists of Master Nodes, which handle the control plane, and Worker Nodes, which run the actual application workloads. The course offers a detailed breakdown of these components, making it easier to understand the inner workings of a Kubernetes cluster.
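The course explores these components on a running cluster; a minimal way to see the split yourself, assuming kubectl is already configured against a cluster, is:

    # List all nodes; the ROLES column distinguishes control-plane
    # (master) nodes from worker nodes
    kubectl get nodes

    # Inspect a single node in detail (use a name from the output above)
    kubectl describe node <node-name>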

Illustration: Kubernetes Cluster Components


Kubernetes in the Cloud with Linode Kubernetes Engine (LKE)

Deploying Kubernetes in the cloud can be simplified with services like the Linode Kubernetes Engine (LKE). LKE manages the control plane for you, allowing you to focus on your applications and worker nodes. The course provided hands-on exercises with LKE, demonstrating how easy it can be to manage Kubernetes clusters in the cloud.
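As a rough sketch of that workflow (the kubeconfig filename below is a placeholder; the real file can be downloaded from the Linode Cloud Manager):

    # Point kubectl at the LKE cluster using the downloaded kubeconfig
    export KUBECONFIG=~/Downloads/lke-cluster-kubeconfig.yaml

    # Confirm the worker nodes provisioned by LKE are ready
    kubectl get nodes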

Introduction to Containerization

Containerization is another critical concept covered in the course. It involves packaging an application with all its dependencies and configurations into a container image, ensuring consistent operation across different environments. This approach is fundamental to how Kubernetes operates.

Build and Host the Image

The course walked through the process of containerizing an application by writing a Dockerfile, building the image, and pushing it to a registry like Docker Hub. Here’s an example Dockerfile that was used:

    FROM node:current-slim

    # Copy source code to /src in container
    COPY . /src

    # Install app and dependencies into /src in container
    RUN cd /src; npm install

    # Document the port the app listens on
    EXPOSE 8080

    # Run this command (starts the app) when the container starts
    CMD cd /src && node ./app.js
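Building and pushing the image then looks roughly like this (educative1/qsk-course:1.0 matches the image referenced in pod.yml later in this article; substitute your own Docker Hub namespace and tag):

    # Build the image from the Dockerfile in the current directory
    docker image build -t educative1/qsk-course:1.0 .

    # Log in to Docker Hub and push the image to the registry
    docker login
    docker image push educative1/qsk-course:1.0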

Illustration: Containerization Workflow


Get Hands-on With Kubernetes

The course also provided several hands-on labs that helped reinforce the concepts learned. Here’s a summary of some of the practical exercises:

Setup Required

To get started, you’ll need Docker Desktop, Git, and, optionally, accounts for Linode and Docker Hub. Familiarize yourself with the kubectl command-line tool, as it’s essential for interacting with Kubernetes clusters.
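A quick sanity check, not part of the course itself, to confirm the tooling is in place:

    # Verify the required tools are installed
    docker --version
    git --version
    kubectl version --client

    # List the clusters kubectl currently knows about
    kubectl config get-contexts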

Deploy the Application Locally

The course guided us through deploying a simple application locally using Kubernetes. Here’s how you can do it:

  1. Define the Pod in pod.yml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: first-pod
      labels:
        project: qsk-course
    spec:
      containers:
        - name: web-ctr
          image: educative1/qsk-course:1.0
          ports:
            - containerPort: 8080
    
  2. Deploy the Pod:

    kubectl apply -f pod.yml  
    
  3. Verify the Pod is running:

    kubectl get pods  
    
  4. Forward the port to access the application:

    kubectl port-forward --address 0.0.0.0 first-pod 8080:8080    
    
  • Access the application at http://localhost:8080.
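With the port-forward running, a quick check from a second terminal (not part of the course steps above, but handy for troubleshooting):

    # Request the app through the forwarded port
    curl http://localhost:8080

    # If something looks off, inspect the Pod's events and logs
    kubectl describe pod first-pod
    kubectl logs first-pod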

Illustration: Kubernetes Pod Definition


Deploy the Application on Cloud

Deploying applications on the cloud using Kubernetes was another crucial part of the course. Here’s how you can do it:

  1. Copy the cluster’s kubeconfig into a local config file and point kubectl at it:

    export KUBECONFIG=/usercode/config  
    
  2. Deploy the Pod:

    kubectl apply -f pod.yml    
    
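It’s worth a quick check that kubectl is really talking to the LKE cluster and that the Pod landed on one of its worker nodes:

    # Confirm kubectl is pointed at the cloud cluster's nodes
    kubectl get nodes

    # The -o wide flag shows which worker node the Pod was scheduled on
    kubectl get pods -o wide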

Connect to the Application

Once the application is deployed, you’ll need to connect to it using a Kubernetes Service:

  1. Define a Service in svc-cloud.yml or svc-local.yml:

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-local
    spec:
      type: NodePort
      selector:
        project: qsk-course  # must match the label on the Pods it fronts
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080 
    
  2. Deploy the Service:

    kubectl apply -f svc-local.yml    
    
  3. Verify the Service:

    kubectl get svc   
    
  4. Access the application via the Service.
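Because the Service is of type NodePort, Kubernetes exposes it on a port in the 30000–32767 range on every node. Roughly, finding and using that port looks like this (the IP and port below are placeholders):

    # The PORT(S) column shows the mapping, e.g. 80:31234/TCP,
    # where the second number is the assigned NodePort
    kubectl get svc svc-local

    # Find a node's IP address (EXTERNAL-IP for cloud nodes)
    kubectl get nodes -o wide

    # Access the app via any node's IP and the NodePort
    curl http://<node-ip>:<node-port>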

Kubernetes Deployments

Kubernetes Deployments are crucial for managing applications at scale. The course demonstrated how to define and deploy a Deployment, and how Kubernetes automatically handles self-healing from failures:

  1. Define Deployment in deploy.yml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: qsk-deploy
    spec:
      replicas: 5
      selector:
        matchLabels:
          project: qsk-course
      template:
        metadata:
          labels:
            project: qsk-course
        spec:
          containers:
            - name: hello-pod
              image: educative1/qsk-course:1.0
              ports:
                - containerPort: 8080 
    
  2. Deploy the Deployment:

    kubectl apply -f deploy.yml   
    
  3. Self-healing from failures:

    • Monitor the Deployment and manually delete a Pod to watch Kubernetes recreate it automatically (a sketch of the commands follows this list).
    • Delete a worker Node and observe Kubernetes replace the lost Pods on the remaining Nodes.
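A sketch of the Pod-deletion experiment (the Pod name comes from your own kubectl get pods output):

    # Watch the Deployment's Pods in one terminal
    kubectl get pods --watch

    # In another terminal, delete one of the Pods by name
    kubectl delete pod <pod-name>

    # The Deployment notices the missing replica and starts a
    # replacement Pod to restore the desired count of 5
    kubectl get pods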

Illustration: Kubernetes Deployment


Scaling an Application

Scaling an application was another critical aspect covered in the course. Kubernetes makes it easy to scale up or down depending on demand:

  1. Scale up:

    • Edit deploy.yml to set replicas to 10.
    • Apply the changes and verify:

      kubectl apply -f deploy.yml
      kubectl get pods 
      
  2. Scale down:

    kubectl scale --replicas=5 deployment/qsk-deploy
    kubectl get pods   
    
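Either way, the Deployment object remains the source of truth for the desired replica count; a quick check:

    # READY shows how many replicas are available versus desired
    kubectl get deployment qsk-deploy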

Rolling Update

The course also covered rolling updates—a powerful feature of Kubernetes that allows you to update your applications without downtime:

  1. Perform a rolling update by modifying deploy.yml:
    • Set minReadySeconds, maxSurge, and maxUnavailable to control how quickly old Pods are replaced (a sketch of these fields follows this list).
    • Apply the updated manifest and monitor the rollout’s progress.
  2. Clean up resources after the update:

    kubectl delete deployment qsk-deploy
    kubectl delete svc svc-local
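The course’s exact deploy.yml changes aren’t reproduced above, so the following is only a sketch of where those fields live in a Deployment manifest (the field values and the 1.1 image tag are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: qsk-deploy
    spec:
      replicas: 5
      minReadySeconds: 10        # wait 10s after a Pod is Ready before continuing
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1            # at most 1 extra Pod above the desired count
          maxUnavailable: 0      # never drop below the desired count
      selector:
        matchLabels:
          project: qsk-course
      template:
        metadata:
          labels:
            project: qsk-course
        spec:
          containers:
            - name: hello-pod
              image: educative1/qsk-course:1.1   # updated tag (illustrative) triggers the rollout
              ports:
                - containerPort: 8080

Applying the modified manifest and watching the rollout might then look like:

    kubectl apply -f deploy.yml
    kubectl rollout status deployment/qsk-deploy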

Conclusion

The "Getting Started with Kubernetes" course provided a thorough introduction to Kubernetes, from understanding the basics of microservices and cloud-native applications to deploying and managing containerized applications. The course's practical exercises and hands-on labs were particularly helpful in solidifying the concepts. By following this recap and using the provided illustrations and code snippets, you should have a solid foundation to continue exploring and mastering Kubernetes.
