
Bernard Chika Uwaezuoke

Deploying a Simple Application in a Container with Minikube in a Docker runtime.

The objective of this article is to serve as a guide for beginners who want a basic understanding of containerization and container orchestration using Kubernetes.

Introduction:
Kubernetes, as a container orchestration platform, has become the de facto standard for managing containerized applications at scale. While deploying applications on a full Kubernetes cluster is ideal for production environments, developers often need a local environment for testing, development, and learning. Minikube comes to the rescue as a powerful tool that lets you run a single-node Kubernetes cluster on your local machine.

What is Minikube?
Minikube is an open-source tool that allows you to set up a single-node Kubernetes cluster on your local machine. It provides an environment where you can deploy, manage, and test Kubernetes applications without the need for a full-scale production cluster. Minikube is particularly useful for developers who want to experiment with Kubernetes features, test configurations, and develop applications before deploying them to a real cluster.

What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications inside lightweight, portable containers. Containers are a form of virtualization technology that allows you to package an application and its dependencies, including libraries and other configuration files, into a single unit called a container. Technically, Docker itself serves as a container runtime.
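
As a quick, optional illustration, the same redis image we will deploy later in this article can be pulled and run directly with Docker (the container name redis-test below is just an example):

docker run --rm -d --name redis-test redis   # start a redis container in the background
docker ps                                    # the redis-test container should appear in the list
docker stop redis-test                       # stop it; --rm removes the container once it stops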

Prerequisites:

  1. Install Docker in your local environment.
  2. Install Minikube.
  3. Install Vim.
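
Before starting, you can quickly confirm that the tools above are available (the exact versions on your machine will differ). Note that the walkthrough also uses kubectl, which you can install separately or invoke through Minikube with minikube kubectl --.

docker --version           # the container runtime we will use as the Minikube driver
minikube version           # the local single-node Kubernetes tool
kubectl version --client   # the Kubernetes command-line client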

Let us now begin the exercise proper.

Deploying a Simple Application

  • To deploy an application in a container, we will start by creating a new Deployment, which is a Kubernetes object. This requires a running cluster, so we first start Minikube with the Docker driver using the following command:

minikube start --driver=docker

Note that we do not require root privileges to run these commands.
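
To confirm the cluster is up before continuing, you can run a couple of quick checks:

minikube status    # the host, kubelet and apiserver should report Running
kubectl get nodes  # the single Minikube node should report a STATUS of Ready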

  • We can now go ahead and create the deployment. The name of this deployment is serve1 and the container image is redis. We then run the following command.

kubectl create deployment serve1 --image=redis

The output confirms that our deployment was created successfully.

  • We can view the deployment we made by running the following command.

kubectl get deployments


  • We can also proceed to view the deployment details with the following command.

kubectl describe deployment serve1


The output shows more detailed information about our deployment, including the creation date and time, image, ports, age, and more.

  • We can also view the cluster's event log, including events related to our deployment, by running this command.

kubectl get events
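
The events are not always listed in chronological order; if you prefer them sorted by creation time, you can use:

kubectl get events --sort-by=.metadata.creationTimestamp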


  • We can also view the deployment as YAML output, to see the full structure of how serve1 is currently deployed (you can also redirect this output to a file, as shown after the listing).

kubectl get deployments serve1 -o yaml

Below is the output.


donhadley23@donhadley:~$ kubectl get deployments serve1 -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2023-08-12T23:06:45Z"
  generation: 1
  labels:
    app: serve1
  name: serve1
  namespace: default
  resourceVersion: "5935"
  uid: f287eb3d-4c74-44e8-99be-6efe716dd594
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: serve1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: serve1
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-08-12T23:07:10Z"
    lastUpdateTime: "2023-08-12T23:07:10Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-08-12T23:06:45Z"
    lastUpdateTime: "2023-08-12T23:07:10Z"
    message: ReplicaSet "serve1-5455dc4fcc" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
donhadley23@donhadley:~$
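
If you want to keep a copy of this live configuration, you can redirect the output to a file (the file name below is just an example):

kubectl get deployment serve1 -o yaml > serve1-export.yaml   # save the live object as YAML
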
  • We can go ahead and create a service to access our newly created serve1 deployment, but we have to define a container port in order to achieve this. So, let's create a deployment file with this vim command:

vim deploymentfile.yml

The above command opens our deployment file in the editor. We can go ahead and edit the file, using the sample Deployment from the Kubernetes documentation at https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ as a starting point, and enable a port.
Please note that we replaced the app name with serve1, set the image to redis, and added the containerPort and protocol fields, as can be seen below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve1
  labels:
    app: serve1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: serve1
  template:
    metadata:
      labels:
        app: serve1
    spec:
      containers:
      - name: serve1
        image: redis
        ports:
        - containerPort: 80
          protocol: TCP

We save and exit the deployment file.
This change was necessary so that the service can be created successfully, since the Deployment now declares a container port.
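
Before replacing the live object, you can optionally preview what will change with kubectl diff (the output format varies by kubectl version, and a non-zero exit code simply means differences were found):

kubectl diff -f deploymentfile.yml   # compare the file against the deployment running in the cluster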

  • We can now run this command to replace the deployment with our new changes:

kubectl replace -f deploymentfile.yml


  • We can also view the Pod and Deployment. Take special notice of the AGE column, which shows when each pod was created.

kubectl get deploy,pod


  • We can now expose the deployment as a service. With a container port defined, this should work.

kubectl expose deployment serve1


  • Let us verify the service configuration.

kubectl get service serve1

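For more detail than this summary, kubectl describe shows the service's type, cluster IP, port, and current endpoints:

kubectl describe service serve1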

  • We can view the service's Endpoints, which kube-proxy uses to direct traffic to the backing pods. Take special notice of the current endpoint IP.

kubectl get ep serve1


  • Let us also look at all the pods created.

kubectl get pods


  • We can scale up the deployment. Let's start by checking the current deployment state.

kubectl get deployment serve1


Let's scale it up from 3 to 6 replicas.

kubectl scale deployment serve1 --replicas=6

Scaled successfully!
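
If you want to watch the new replicas come up in real time, you can optionally run the following and press Ctrl+C when done:

kubectl get pods --watch   # streams pod status changes as they happen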

  • Now that we have successfully scaled our deployment, let's check the replica count again.

kubectl get deployment serve1

We have six now!

View the current endpoints. There will be six now.

kubectl get ep serve1


  • We can also use the -o wide option to view the IP addresses of the running pods.

kubectl get pod -o wide
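
Since everything we created carries the app=serve1 label, you can also narrow the same wide output down to just this deployment's pods:

kubectl get pods -l app=serve1 -o wide   # -l filters by label selector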


  • Now that we are done with our journey of discovery on Kubernetes, it's time to clean up by deleting our deployment.

kubectl delete deployment serve1
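
Note that the kubectl expose step earlier also created a service named serve1; deleting the deployment does not remove it, so we can clean it up as well:

kubectl delete service serve1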


  • Let's verify!

kubectl get deployment serve1

Deleted!

We then stop Minikube.

minikube stop

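If you no longer need the local cluster at all, you can also remove it completely. Note that this deletes the cluster and everything in it, so only do this when you are finished experimenting:

minikube delete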

I hope this guide was simple enough and that you now have a better grasp of how Kubernetes functions.

Thank you for reading! Please do well to follow our page and subscribe too.
