Md Khurshid

Kubernetes 102: Setting Up Your First Cluster and Core Concepts 🚀

In the previous post Kubernetes 101, we learned what Kubernetes is, its features, and how it works behind the scenes.

Now it’s time to get hands-on! In this guide, we’ll:

1. Install a lightweight Kubernetes cluster (using K3s)
2. Explore the basic concepts of Kubernetes (nodes, pods, deployments, etc.)
3. Learn how to interact with Kubernetes using kubectl

Let’s get started.

⚙️ Installing Kubernetes (K3s)

There are multiple ways to install Kubernetes:

  • Minikube – runs a local single-node cluster in a VM or container
  • Kind – runs Kubernetes using Docker containers
  • MicroK8s – Canonical’s lightweight K8s for Linux
  • K3s – an ultra-lightweight distribution

👉 For this tutorial, we’ll use K3s because it’s super lightweight, easy to install, and comes with everything you need (including kubectl).

Step 1: Install K3s

Run this command to install K3s:

$ curl -sfL https://get.k3s.io | sh -

This will download and install the latest version of K3s, then start it as a system service.

Step 2: Configure kubectl

Now, copy the kubeconfig file so that kubectl can talk to your cluster:

$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER:$USER ~/.kube/config

Tell your shell to use this config:

$ export KUBECONFIG=~/.kube/config

👉 Add this line to ~/.bashrc or ~/.zshrc to make it permanent.

Step 3: Verify the Cluster

$ kubectl get nodes

You should see something like:

NAME       STATUS   ROLES                  AGE   VERSION
myhost     Ready    control-plane,master   2m    v1.31.0+k3s1

🎉 Congrats! Your Kubernetes cluster is up and running!

🔑 Kubernetes Basic Terms and Concepts

Before we deploy apps, let’s understand the building blocks of Kubernetes.

1. Nodes

Nodes are the machines (computers or VMs) that make up your Kubernetes cluster.

  • They actually run the containers for your apps.
  • Kubernetes keeps track of every node’s health and status.

2. Namespaces

Namespaces are like separate rooms inside the cluster.
They help organize and isolate resources, so names don’t clash.

  • Two Pods in the same namespace cannot have the same name.
  • But two Pods with the same name can exist in different namespaces.
  • Useful for teams, projects, or different environments (like dev, test, prod).
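
As a quick illustration (the name demo is just a placeholder), a Namespace can be declared with a tiny manifest like this:

apiVersion: v1
kind: Namespace
metadata:
  name: demo

Resources you create with kubectl -n demo then live inside that namespace.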

3. Pods

Pods are the smallest unit in Kubernetes.
They are like a wrapper around one or more containers that must run together on the same node.

  • Often, one Pod = one container.
  • But a Pod can have multiple containers that share storage and network.
  • Special containers (init or ephemeral) can be added for setup or debugging.
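
To make the multi-container idea concrete, here's a minimal sketch of a Pod where two containers share a volume and the same network namespace (the names, images, and sidecar command are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs        # shared between both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:latest
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-reader         # sidecar that tails the web server's logs
      image: busybox:latest
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs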

4. ReplicaSets

ReplicaSets make sure the right number of Pod copies (replicas) are always running.

  • If a Pod crashes or a Node fails, the ReplicaSet creates a new Pod automatically.
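
You'll rarely write a ReplicaSet yourself, because Deployments create and manage them for you, but for reference a minimal manifest looks roughly like this (names and labels are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3                  # keep three copies of the Pod running
  selector:
    matchLabels:
      app: nginx
  template:                    # the Pod to replicate
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest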

5. Deployments

A Deployment is a higher-level controller on top of ReplicaSets.

  • You declare how many Pods you want and which version of the app to run.
  • Kubernetes rolls out changes gradually, replacing old Pods with new ones.
  • You can pause, scale, or roll back a rollout easily.
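
Here's a minimal Deployment sketch for the same kind of app (names and labels are illustrative); changing the image tag and re-applying it would trigger a rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest  # change this tag and re-apply to roll out a new version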

6. Services

Services expose Pods to the network so other apps or users can reach them.

  • They provide a stable IP or DNS name even if Pods come and go.
  • Ingress works with Services to set up HTTP/HTTPS routes and load balancing.
  • You can also add TLS certificates for HTTPS.
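
As a sketch, a Service that routes traffic to Pods labelled app: nginx could look like this (names, labels, and ports are illustrative; later in this post we'll create a Service with kubectl expose instead):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx          # send traffic to Pods carrying this label
  ports:
    - port: 80          # port the Service listens on
      targetPort: 80    # port the container serves on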

7. Jobs

Jobs run one-time or batch tasks.

  • A Job creates Pods and waits until they finish.
  • If a Pod fails, it retries until the task completes.
  • CronJobs are like Jobs with a schedule (e.g., run every night at 1 a.m.).
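
For example, the "run every night at 1 a.m." case could be written as a CronJob roughly like this (the name, image, and command are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 1 * * *"            # every night at 1 a.m.
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure # retry the Pod if it fails
          containers:
            - name: report
              image: busybox:latest
              command: ["sh", "-c", "echo generating nightly report"]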

8. Volumes

Volumes are storage that lives outside a Pod’s life.

  • They let Pods store data that doesn’t disappear when a Pod restarts.
  • Good for databases or file servers.
  • Kubernetes works with many storage types (cloud disks, local disks, etc.).
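
Here's a minimal sketch of a PersistentVolumeClaim and a Pod that mounts it (names and sizes are illustrative). K3s ships with a built-in local-path storage class, so a simple claim like this should get a local volume provisioned automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi               # ask for 1 GiB of storage
---
apiVersion: v1
kind: Pod
metadata:
  name: web-with-storage
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc      # reuse the claim above
  containers:
    - name: web
      image: nginx:latest
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html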

9. Secrets and ConfigMaps

  • Secrets store sensitive data like passwords, API keys, or certificates.
  • ConfigMaps store normal configuration like app settings.
  • Both can be given to Pods as environment variables or as files mounted in a volume.
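
A small sketch of a ConfigMap injected into a Pod as an environment variable (names and values are illustrative; a Secret is consumed the same way via secretKeyRef):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:latest
      env:
        - name: APP_MODE              # exposed to the container as $APP_MODE
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_MODE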

10. DaemonSets

A DaemonSet makes sure one Pod runs on every Node in the cluster.

  • Useful for things that must run everywhere, like logging agents and monitoring tools.
  • When a new Node joins, Kubernetes automatically runs the DaemonSet Pod on it.
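
A minimal DaemonSet sketch that runs one copy of a (placeholder) logging agent on every Node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: busybox:latest    # stand-in for a real logging/monitoring agent
          command: ["sh", "-c", "while true; do echo collecting logs; sleep 60; done"]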

11. Network Policies

Network Policies are rules for traffic between Pods.

  • They control who can talk to whom inside the cluster.
  • Ingress rules control incoming traffic; Egress rules control outgoing traffic.
  • Once a policy selects a Pod, only the traffic the policy allows in that direction can get through.
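
As a sketch, a policy that only lets Pods labelled app: frontend reach Pods labelled app: backend on port 8080 could look like this (labels and port are illustrative; enforcement depends on your cluster's network plugin):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend             # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only these Pods may connect
      ports:
        - protocol: TCP
          port: 8080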

🛠️ Using kubectl to Interact with Kubernetes

Now that you’re familiar with the basics, you can start adding workloads to your cluster with kubectl. Here’s a quick reference for some key commands.

List Pods
This displays the Pods in your cluster:

$ kubectl get pods
No resources found in default namespace

Specify a namespace with the -n or --namespace flag:

$ kubectl get pods -n demo
No resources found in demo namespace

Alternatively, get Pods from all your namespaces by specifying --all-namespaces:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-b96499967-4xdpg                   1/1     Running     0          114m
...
This includes Kubernetes system components.

Create a Pod
Create a Pod with the following command:

$ kubectl run nginx --image nginx:latest
pod/nginx created

This starts a Pod called nginx that will run the nginx:latest container image.

Create a Deployment
Creating a Deployment lets you scale multiple replicas of a container:

$ kubectl create deployment nginx --image nginx:latest --replicas 3
deployment.apps/nginx created

You’ll see three Pods are created, each running the nginx:latest image:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7597c656c9-4qs55   1/1     Running   0          51s
nginx-7597c656c9-gdjl9   1/1     Running   0          51s
nginx-7597c656c9-7sxrc   1/1     Running   0          51s

Scale a Deployment
Now use this command to increase the replica count:

$ kubectl scale deployment nginx --replicas 5
deployment.apps/nginx scaled

Kubernetes has created two extra Pods to provide additional capacity:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7597c656c9-4qs55   1/1     Running   0          2m26s
nginx-7597c656c9-gdjl9   1/1     Running   0          2m26s
nginx-7597c656c9-7sxrc   1/1     Running   0          2m26s
nginx-7597c656c9-kwm6q   1/1     Running   0          2s
nginx-7597c656c9-nwf2s   1/1     Running   0          2s

Expose a Service
Now let’s make this NGINX server accessible.

Run the following command to create a service that’s exposed on a port of the Node running the Pods:

$ kubectl expose deployment/nginx --port 80 --type NodePort
service/nginx exposed

Discover the port that’s been assigned by running this command:

$ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        121m
nginx        NodePort    10.43.149.39   <none>        80:30226/TCP   3s

The assigned port is 30226. Visiting <node-ip>:30226 in your browser will show the default NGINX landing page.

You can use localhost as the node IP if you’ve been following along with the single-node K3s cluster created in this tutorial. Otherwise, find your Node’s internal IP with:

$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE    VERSION        INTERNAL-IP
myhost     Ready    control-plane,master   124m   v1.31.0+k3s1   192.168.122.210

Using port forwarding
You can access a service without binding it to a Node port by using kubectl’s integrated port-forwarding functionality. Delete your first service and create a new one without the --type flag:

$ kubectl delete service nginx
service/nginx deleted

$ kubectl expose deployment/nginx --port 80
service/nginx exposed

This creates a ClusterIP service that can be accessed on an internal IP, within the cluster.

Retrieve the service’s details by running this command:

$ kubectl get services
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx       ClusterIP   10.43.191.238   <none>        80/TCP    2s

The service can be accessed inside the cluster at 10.43.191.238:80.

You can reach this address from your local machine with the following command:

$ kubectl port-forward service/nginx 8080:80

Visiting localhost:8080 in your browser will display the NGINX landing page. kubectl is redirecting traffic to the service inside your cluster. You can press Ctrl+C in your terminal to stop the port forwarding session when you’re done.

Port forwarding works without services too. You can directly connect to a Pod in your deployment with this command:

$ kubectl port-forward deployment/nginx 8080:80

Visiting localhost:8080 will again display the NGINX landing page, this time without going through a service.

Apply a YAML file
Finally, let’s see how to apply a declarative YAML file to your cluster. First, write a simple Kubernetes manifest for your Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest

Save this manifest to nginx.yaml and run kubectl apply to automatically create your Pod:

$ kubectl apply -f nginx.yaml
pod/nginx created

You can repeat the command after you modify the file to apply any changes to your cluster.

Now you’re familiar with the basics of using kubectl to interact with Kubernetes!

🎯 Wrapping Up

Kubernetes is the leading container orchestrator, and in this post we set up a lightweight K3s cluster, walked through its core building blocks (Pods, Deployments, Services, and more), and used kubectl to deploy, scale, and expose an application. You now have the foundation to start experimenting and building with Kubernetes. 🚀

But this is just the beginning! In the upcoming blogs, we’ll dive into advanced Kubernetes topics — from real-world challenges and best practices to security, scaling, and hands-on deployments.

🙌 Stay tuned for the next part of the series where we’ll go beyond the basics and make Kubernetes work for you.

👉 Follow me here, or connect with me on LinkedIn and Twitter for updates, tips, and more developer content.
