Rahimah Sulayman
Kubernetes Building Blocks(1)

Introduction

If you liken Kubernetes to an ocean, those individual drops that make up the ocean are the core building blocks: Namespaces, Pods, ReplicaSets, Deployments, Labels, etc.
Our focus here will be on the first two mentioned.
As a professional working across Data Analytics and Cloud Engineering, I’ve found that the best way to master these concepts isn't just by reading documentation, but by using the Build, See, Destroy methodology. This approach allows you to experiment fearlessly, visualize the cluster's internal logic, and clean up after yourself.

In this post, we are going to move from a blank-slate Minikube cluster to an orchestrated environment, exploring both the fast-paced command line and the bird's-eye view of the Kubernetes Dashboard.

The Scenario: The Isolated Web Fleet
Imagine you are a DevOps Engineer tasked with deploying a fleet of Nginx web servers for a new project. However, the cluster is shared with other teams, so you can't just dump your resources into the default space.
Tasks

  • Carve out a Virtual Sandbox: Create a dedicated Namespace to keep our project isolated.

  • Deploy the Fleet: Use both Imperative (quick CLI) and Declarative (YAML/JSON) methods to launch five Nginx pods.

  • Inspect & Troubleshoot: Go under the hood to check IP addresses, node placements, and handle the inevitable errors that come with cluster management.

  • Visualize the Result: Launch the Minikube Dashboard to confirm 100% health of our fleet.

Namespaces

If multiple users and teams share the same Kubernetes cluster, we can partition it into virtual sub-clusters using Namespaces. The names of resources/objects created inside a Namespace must be unique within that Namespace, but not across Namespaces in the cluster.

Step 1: Checking if the Cluster is Ready
Before we can start building with Namespaces and Pods, we need to ensure our local environment is up and running. Using Minikube, we can quickly verify the health of our cluster by running minikube status.
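On a healthy cluster, minikube status typically reports something like the following (the exact layout varies slightly between Minikube versions):

```
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```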

To list all the Namespaces, we can run the following command:
$ kubectl get namespaces


Generally, Kubernetes creates four Namespaces out of the box: kube-system, kube-public, kube-node-lease, and default. The kube-system Namespace contains the objects created by the Kubernetes system, mostly the control plane agents. The default Namespace contains the objects and resources created by administrators and developers, and objects are assigned to it by default unless another Namespace name is provided by the user. kube-public is a special Namespace, which is unsecured and readable by anyone, used for special purposes such as exposing public (non-sensitive) information about the cluster.
The newest Namespace is kube-node-lease which holds node lease objects used for node heartbeat data.
Good practice, however, is to create additional Namespaces, as desired, to virtualize the cluster and isolate users, developer teams, applications, or tiers.

Step 2: Organizing Your Cluster by Creating Your Own Namespace
Think of Namespaces as virtual folders within your cluster. While Kubernetes provides a default namespace, it’s best practice to create your own to keep your projects isolated and organized.

In this step, I move from viewing the cluster to actively managing it by running the command:
$ kubectl create namespace new-namespace-name

$ kubectl create namespace rahimah
This is an imperative command: I'm telling the Kubernetes API directly to make this change right now. The confirmation namespace/rahimah created is the cluster acknowledging the new logical boundary.
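With the Namespace in place, any kubectl command can target it through the -n (or --namespace) flag. For example, using the names from this walkthrough:

```
$ kubectl run nginx-pod --image=nginx:1.22.1 --port=80 -n rahimah
$ kubectl get pods -n rahimah
```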

Namespaces are one of the most valued features of Kubernetes, as they provide a solution to the multi-tenancy requirements of today's enterprise development teams.


Verifying our work is key, so run $ kubectl get namespaces again. You can see the new namespace rahimah is now Active alongside the system-generated ones. It's now a ready-to-use sandbox where we can deploy our pods without cluttering the rest of the cluster.

Namespaces are great for multi-tenancy. If you are working in a team, giving each developer or project their own namespace prevents naming collisions, meaning you can have a pod named web-server in Namespace A and another web-server in Namespace B without any conflict!
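As a quick sketch of that collision-free behavior (team-a and team-b are hypothetical names for illustration, not part of this lab):

```
$ kubectl create namespace team-a
$ kubectl create namespace team-b
$ kubectl run web-server --image=nginx -n team-a
$ kubectl run web-server --image=nginx -n team-b   # same pod name, no conflict
```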


Pods

A Pod is the smallest Kubernetes workload object. It is the unit of deployment in Kubernetes, which represents a single instance of the application. A Pod is a logical collection of one or more containers, enclosing and isolating them to ensure that they:

  • Are scheduled together on the same host (node) as the Pod.
  • Share the same network namespace, meaning that they share a single IP address originally assigned to the Pod.
  • Have access to mount the same external storage (volumes) and other common dependencies.

Below is an example of a stand-alone Pod object's definition manifest in YAML format, without an operator. This represents the declarative method to define an object, and can serve as a template for a much more complex Pod definition manifest if desired:

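Based on the fields described in this post (API version v1, a Pod named nginx-pod, one container running nginx:1.22.1 on port 80), the manifest looks like this (the labels block is optional):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    run: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx:1.22.1
    ports:
    - containerPort: 80
```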

Step 1: Moving to Declarative by Defining Your First Pod
While kubectl run is great for quick tests, real-world Kubernetes relies on Manifests. These YAML files act as the source of truth for your infrastructure, allowing you to version control and share your configurations easily.

The Screenshot: Preparing the Manifest
In this sequence, I’m setting up the workspace and defining the blueprint for our application.

Step 2: Setting the Scene
I created a dedicated directory, cluster, and a new file, rahimah.yaml, to keep the project organized.

What the manifest contains:
apiVersion & kind: These tell Kubernetes we are creating a Pod object using API version v1.

metadata: This is where we name our resource (nginx-pod).

spec: This is the most critical part. It defines the desired state, specifically, that we want one container running the nginx:1.22.1 image on port 80.
KAMS is a handy mnemonic for the required parts of a manifest: kind, apiVersion, metadata, and spec.

Tip: YAML is extremely sensitive to indentation! If you're off by even one space, the Kubernetes API will reject your file. I always recommend using a code editor with a YAML linting extension to catch these invisible errors before you hit the terminal; in this walkthrough I used the vi editor.

The apiVersion field must specify v1 for the Pod object definition. The second required field is kind specifying the Pod object type. The third required field metadata, holds the object's name and optional labels and annotations. The fourth required field spec marks the beginning of the block defining the desired state of the Pod object (also named the PodSpec). Our Pod creates a single container running the nginx:1.22.1 image pulled from a container image registry, in this case from Docker Hub.

The above definition manifest, once stored in the rahimah.yaml file, can be loaded into the cluster to run the desired Pod and its associated container image.

Before creating the pod, I'll save the YAML file in a directory for organization, then open it in vi for verification.


Step 3: Reviewing the Manifest
Before we send our instructions to the Kubernetes API, it’s always a good habit to peek inside the file one last time. This ensures that our indentation is correct and that we are deploying the exact version of the image we intended.

I inspect the file with the cat command.
This is important because in a real-world DevOps environment, you might be managing dozens of YAML files. Running cat allows you to verify the metadata and spec fields without opening a full text editor.

NOTE: If you're following along, notice the containerPort: 80. This doesn't actually open the port to the outside world yet (you'll need a Service for that later) but it tells Kubernetes which port the application inside the container is listening on.
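For reference, a Service that would eventually expose this Pod might look like the sketch below. It is not part of this lab, and it assumes the Pod carries a run: nginx-pod label for the selector to match; the Service name is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc          # hypothetical name
spec:
  type: NodePort           # exposes the port on every node
  selector:
    run: nginx-pod         # must match the Pod's labels
  ports:
  - port: 80
    targetPort: 80
```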


Step 4: From Blueprint to Reality
This is where we move from local files to actual running resources. In this step, I demonstrate both the Declarative and Imperative methods of pod creation.

The screenshot shows the deployment of the Nginx pods.
In this image, I am populating the cluster with three distinct pods using two different techniques.

$ kubectl create -f rahimah.yaml: This is the Declarative approach. We are telling Kubernetes to create whatever is defined in this file. It’s the professional standard for production environments.
$ kubectl run nginx-pod2 --image=nginx:1.22.1 --port=80
$ kubectl run nginx-pod3 --image=nginx:1.22.1 --port=80

kubectl run nginx-pod2 & nginx-pod3: These are Imperative commands. They are fast, one-liners that are perfect for Build, See, Destroy learning or quick debugging.

Notice the consistent feedback from the cluster: pod/nginx-pod created.
Whether you use a complex JSON/YAML file or a simple CLI command, the Kubernetes API processes the request and schedules the workload onto a node.

Step 5: Verifying the Workload
The most satisfying part of any Kubernetes project is seeing that Running status. This is where we confirm that the cluster has successfully pulled the images, allocated resources, and started our containers.

The Screenshot shows Healthy Pod Fleet
In this final view, we run the ultimate "truth" command: kubectl get pods.
READY 1/1: This indicates that the container inside the pod has passed its readiness check and is prepared to handle traffic.

STATUS Running: This confirms the pod is active on a node. If you see ContainerCreating or ErrImagePull here, you know something went wrong during the startup process.

RESTARTS 0: A zero here is a sign of stability. It means your application hasn't crashed or entered a CrashLoopBackOff.

However, when in need of a starter definition manifest, knowing how to generate one can be a life-saver. The imperative command, with the additional --dry-run=client and -o yaml flags, generates the definition template instead of running the Pod; the template is then stored in the nginx-pod.yaml file. The following is a multi-line command that should be selected in its entirety for copy/paste (including the backslash character "\"):
$ kubectl run nginx-pod --image=nginx:1.22.1 --port=80 \
--dry-run=client -o yaml > nginx-pod.yaml
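The generated nginx-pod.yaml comes out roughly like this; the exact fields can differ between kubectl versions, and note the extra defaults kubectl fills in:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-pod
  name: nginx-pod
spec:
  containers:
  - image: nginx:1.22.1
    name: nginx-pod
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```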


I ran the ls command to confirm the file's presence in that location, then created a pod from the YAML file and ran kubectl get pods to list the pods present in the cluster.


The command above generates a definition manifest in YAML, but we can generate a JSON definition file just as easily with:

$ kubectl run nginx-pod --image=nginx:1.22.1 --port=80 \
--dry-run=client -o json > nginx-pod.json


Then I opened in vi the JSON file that created pod4 and edited it for pod5, and ran cat again to verify the changes took effect.
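If you prefer scripting the rename to editing in vi, the same pod4-to-pod5 change can be done with sed. This is a self-contained sketch; the file content here is a minimal stand-in, not the full manifest generated earlier:

```shell
# Create a minimal stand-in manifest (the real one has more fields).
cat > nginx-pod4.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "nginx-pod4" },
  "spec": { "containers": [ { "name": "nginx-pod4", "image": "nginx:1.22.1" } ] }
}
EOF

# Replace every occurrence of the old name with the new one.
sed 's/nginx-pod4/nginx-pod5/g' nginx-pod4.json > nginx-pod5.json

# Verify the change took effect.
grep '"name"' nginx-pod5.json
```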

I created the new pod using the apply verb, then ran the get pods and get pods -o wide commands.


NOTE: The difference between get pods and get pods -o wide: the -o wide output adds extra columns, including each pod's internal IP address and the node it is scheduled on.

Both the YAML and JSON definition files can serve as templates or can be loaded into the cluster, respectively, as such:

$ kubectl create -f nginx-pod.yaml
$ kubectl create -f nginx-pod.json

Step 6: Deep Dive
Sometimes, a simple kubectl get pods isn't enough. When you need to know exactly what is happening inside a Pod, like which node it's running on, its IP address, or its lifecycle events, you need to use the describe command.

The Screenshot shows the anatomy of a Running Pod
In this image, I’m running kubectl describe pods and the output is a goldmine of information.

Node Information: You can see this pod is scheduled on the minikube node at IP 192.168.49.2.

Container Details: It confirms we are using the nginx:1.22.1 image and that the container is officially in the Running state.

Conditions: Notice the True values for Initialized, Ready, and PodScheduled. This is the checklist Kubernetes uses to ensure the pod is healthy.

IP Address: Each pod gets its own unique internal IP (in this case, 10.244.0.3).

NOTE: If your pod is stuck in Pending or CrashLoopBackOff, always scroll to the very bottom of the describe output. The Events section will tell you the exact reason, whether it’s a failed pull, a lack of memory, or a configuration error.

Step 7: The Apply Warning
As you saw earlier, switching between kubectl run and kubectl apply can trigger a warning message.

Recall the screenshot showing the Missing Annotation Warning.
In this image, I used kubectl apply on a pod that was originally created with a simple run command.

What the Warning Means: Kubernetes is saying: "I don't see the 'last-applied-configuration' note on this pod." You don't actually have to do anything, Kubernetes automatically patches the pod by adding that annotation so it can track future declarative changes.

Hence, once you move to a file-based workflow (using YAML or JSON), stick with kubectl apply. It makes your deployments much more predictable and stable.
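In practice that file-based loop looks like this (edit, apply, repeat; note that for a live Pod only a few spec fields, such as the container image, can be patched in place):

```
$ kubectl apply -f rahimah.yaml    # first run creates the Pod
  ... edit rahimah.yaml ...
$ kubectl apply -f rahimah.yaml    # later runs patch the live object
```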


Step 8: Using the describe Command for Advanced Inspection
When a simple list isn't enough, we need to look closer. The kubectl describe command is your magnifying glass for everything happening inside a resource.

The screenshots below show the anatomy of a Running Pod as I inspect all the pods. The output provides critical data that isn't visible in a standard list:

Placement: You can see exactly which Node (in this case, our minikube VM) is hosting the pod.

Networking: Each pod is assigned its own internal IP address (like 10.244.0.3).

Lifecycle Conditions: Notice the "True" status for Initialized, Ready, and ContainersReady. This is the checklist Kubernetes uses to confirm the pod is healthy and capable of serving traffic.

Events: While not visible in every crop, the bottom of this output logs every action the cluster took, from pulling the image to starting the container.

Tip: If your pod status is Pending, use describe. It will often tell you if the cluster is out of memory or if it can't find a node that fits your requirements.


Always verify cluster health before deployment to minimize downtime.

Step 9: The GUI Perspective (Launching the Minikube Dashboard)
Sometimes a visual overview is exactly what you need to see the big picture of your cluster.

In the screenshot below, I am accessing the Dashboard by running the command minikube dashboard. This automates several complex steps for you:

Enabling the Addon: It ensures the dashboard components are active.
Launching the Proxy: It creates a secure tunnel between your local machine and the cluster.

Opening the UI: It provides a local URL that opens directly in your browser.
The dashboard isn't just for looking, you can use it to edit YAML files, scale your deployments, and view real-time logs without typing a single kubectl command.


Step 10: Exploring the Kubernetes Dashboard
After spending so much time in the terminal, there is nothing quite like seeing your cluster come to life in a web browser. The Minikube Dashboard provides a real-time, graphical view of your workloads, and it’s an incredible tool for both beginners and advanced users.

The solid green circle shows that 100% of our desired pods are healthy and running.

The Pod List shows all five of our Nginx pods (nginx-pod through nginx-pod5) lined up perfectly.

The Metadata at a Glance: Without typing a single command, we can see the internal IP addresses, the image versions (nginx:1.22.1), and even how long each pod has been alive.

The Sidebar: Notice the menu on the left. This is where you can explore more advanced building blocks like ConfigMaps, Secrets, and Storage Classes as you progress in your journey.

Step 11: Powering Down
Once you’ve finished your lab session, it’s best practice to stop your local cluster to save your machine's battery and CPU.
$ minikube stop

The screenshot shows the transition from an active environment to a clean stop.

$ minikube stop: This gracefully powers down the Minikube virtual machine.

$ minikube status: Confirming the shutdown. You can see the host, kubelet, and apiserver are all now in a Stopped state.

Your work isn't lost. The next time you run minikube start, your namespaces and manifests will be right where you left them.

Before advancing to more complex application deployment and management methods, become familiar with Pod operations with additional commands such as:

$ kubectl apply -f nginx-pod.yaml
$ kubectl get pods
$ kubectl get pod nginx-pod -o yaml
$ kubectl get pod nginx-pod -o json
$ kubectl describe pod nginx-pod
$ kubectl delete pod nginx-pod

Conclusion

Bridging the Gap Between Code and Infrastructure
Building this fleet of Nginx pods was more than just a technical exercise; it was a demonstration of how a structured "Build, See, Destroy" approach ensures infrastructure reliability. By moving from imperative CLI commands to declarative YAML and JSON manifests, we create a system that is version-controlled, repeatable, and scalable: the core pillars of a modern DevOps culture.

For me, the real power of Kubernetes lies in its self-healing nature and resource isolation. Whether it’s troubleshooting a missing client certificate or using the Dashboard to verify cluster health, the goal remains the same: ensuring high availability and operational excellence for the end user.

As the tech landscape continues to evolve, mastering these foundational building blocks is what allows us to build the resilient, sovereign digital infrastructures of tomorrow.
