Introduction
Creating your first Kubernetes cluster marks the beginning of your container orchestration journey. While production clusters require complex configuration and security considerations, getting started with a local development cluster is straightforward and essential for learning Kubernetes concepts.
This article walks you through setting up a local Kubernetes cluster using Minikube and deploying your first application, giving you practical experience with core Kubernetes objects and workflows.
Prerequisites
- Docker installed and running on your local machine
- Basic familiarity with command-line operations
- Text editor for creating YAML configuration files
- At least 4GB of RAM available for the virtual machine
Install Minikube and kubectl
Minikube runs a single-node Kubernetes cluster locally (in a virtual machine or a container, depending on the driver you choose), which makes it perfect for development and learning. kubectl is the command-line tool for interacting with any Kubernetes cluster.
Install Minikube on macOS
$ brew install minikube
Install Minikube on Linux
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
Install kubectl
The commands below are for Linux; on macOS you can install kubectl with Homebrew (brew install kubectl).
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Verify Installation
$ minikube version
$ kubectl version --client
Start Your First Kubernetes Cluster
Launch your Minikube cluster with sufficient resources for development work:
$ minikube start --memory=4096 --cpus=2
Minikube will download the Kubernetes components and start your cluster. This process takes several minutes on the first run.
Verify Cluster Status
$ kubectl cluster-info
$ kubectl get nodes
You should see output confirming your cluster is running and showing one node in the Ready state.
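The exact version and age will differ on your machine, but the node listing from kubectl get nodes should look roughly like this:
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   2m    v1.28.3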
Learn Kubernetes Objects Through Practical Deployment
Instead of covering theory, you'll learn Kubernetes objects by deploying a complete web application with a database backend.
Create a Namespace for Organization
Namespaces provide logical separation within your cluster:
apiVersion: v1
kind: Namespace
metadata:
  name: todo-app
Save this as namespace.yaml and apply it:
$ kubectl apply -f namespace.yaml
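Confirm the namespace exists:
$ kubectl get namespace todo-app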
Deploy a Database with Persistent Storage
Create a MySQL database with persistent storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: todo-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: todo-app
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "password123"
            - name: MYSQL_DATABASE
              value: "todoapp"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: todo-app
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
Save as mysql.yaml and deploy:
$ kubectl apply -f mysql.yaml
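A quick note on the password: the manifest above passes MYSQL_ROOT_PASSWORD as a plain-text value, which is fine for a disposable learning cluster but not a habit to keep. The usual approach is to store it in a Kubernetes Secret and reference it with secretKeyRef. Here is a minimal sketch (the secret name mysql-secret and key root-password are example names, not objects created elsewhere in this walkthrough):
$ kubectl create secret generic mysql-secret \
    --from-literal=root-password='password123' -n todo-app
The env entry in the mysql container spec would then become:
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysql-secret    # example Secret created above
        key: root-password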
Deploy the Web Application
Create a deployment for a Node.js web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-web
  namespace: todo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: todo-web
  template:
    metadata:
      labels:
        app: todo-web
    spec:
      containers:
        - name: todo-web
          image: node:16-alpine
          command: ["sh", "-c"]
          args:
            - |
              # Install dependencies, write a minimal Express app inline, then start it
              npm init -y
              npm install express mysql2
              cat > app.js << 'EOF'
              const express = require('express');
              const mysql = require('mysql2');
              const app = express();
              // Connects to the mysql-service Service created in mysql.yaml
              const db = mysql.createConnection({
                host: 'mysql-service',
                user: 'root',
                password: 'password123',
                database: 'todoapp'
              });
              // Used by the liveness and readiness probes below
              app.get('/health', (req, res) => {
                res.json({ status: 'healthy', timestamp: new Date() });
              });
              app.get('/', (req, res) => {
                res.json({
                  message: 'Todo App API',
                  database: 'connected',
                  replicas: process.env.HOSTNAME
                });
              });
              app.listen(3000, () => {
                console.log('Server running on port 3000');
              });
              EOF
              node app.js
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: todo-web-service
  namespace: todo-app
spec:
  selector:
    app: todo-web
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
Save as web-app.yaml and deploy:
$ kubectl apply -f web-app.yaml
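Before wiring up external access, confirm the rollout finished and all three replicas are ready (the first start can take a minute or two, since each container installs its npm dependencies before the server comes up):
$ kubectl rollout status deployment/todo-web -n todo-app
$ kubectl get pods -n todo-app -l app=todo-web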
Expose Your Application to External Traffic
Create an Ingress to make your application accessible from outside the cluster:
Enable Ingress in Minikube
$ minikube addons enable ingress
Create Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-ingress
  namespace: todo-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: todo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: todo-web-service
                port:
                  number: 80
Save as ingress.yaml and apply:
$ kubectl apply -f ingress.yaml
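Confirm the Ingress exists and has picked up an address (this can take a moment while the ingress controller registers it):
$ kubectl get ingress -n todo-app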
Access Your Application
Add the Minikube IP to your hosts file. (With the Docker driver on macOS or Windows, the minikube ip address may not be reachable directly; in that case, run minikube tunnel and map todo.local to 127.0.0.1 instead.)
$ echo "$(minikube ip) todo.local" | sudo tee -a /etc/hosts
Open http://todo.local in your browser to see your application running.
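You can also exercise the two routes the app defines straight from the terminal; / returns the JSON payload that includes the serving pod's name, and /health returns the probe endpoint's status:
$ curl http://todo.local/
$ curl http://todo.local/health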
Monitor and Debug Your Deployment
Check Pod Status
$ kubectl get pods -n todo-app
$ kubectl describe pod <pod-name> -n todo-app
View Application Logs
$ kubectl logs -f deployment/todo-web -n todo-app
$ kubectl logs -f deployment/mysql -n todo-app
Execute Commands in Containers
$ kubectl exec -it deployment/todo-web -n todo-app -- sh
$ kubectl exec -it deployment/mysql -n todo-app -- mysql -u root -p
Scale Your Application
Demonstrate horizontal scaling by adjusting replica count:
$ kubectl scale deployment todo-web --replicas=5 -n todo-app
$ kubectl get pods -n todo-app -w
Watch as Kubernetes creates additional pods and distributes traffic across them.
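Because the app echoes its pod name in the replicas field (it reads process.env.HOSTNAME), repeated requests through the Ingress make the load balancing visible: consecutive responses come back from different pods.
$ for i in 1 2 3 4 5; do curl -s http://todo.local/; echo; done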
Understand What You Built
Your deployment demonstrates several key Kubernetes concepts:
Pods and Deployments
Each application component runs in pods managed by deployments that ensure desired replica counts and handle updates.
Services and Load Balancing
Services provide stable network endpoints and distribute traffic across healthy pod replicas.
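You can see this in action: the Service maintains a list of endpoints, the IP addresses of the ready pods that match its selector, and updates it as pods come and go.
$ kubectl get endpoints todo-web-service -n todo-app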
Persistent Storage
PersistentVolumeClaims ensure database data survives pod restarts and rescheduling.
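To convince yourself, delete the MySQL pod and watch the Deployment recreate it; the new pod mounts the same PersistentVolumeClaim, so the data under /var/lib/mysql is preserved.
$ kubectl get pvc -n todo-app
$ kubectl delete pod -l app=mysql -n todo-app
$ kubectl get pods -n todo-app -l app=mysql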
Health Checks
Liveness and readiness probes help Kubernetes determine when pods are healthy and ready to receive traffic.
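When a probe fails, Kubernetes records it as events on the pod, so the quickest way to spot failed checks or container restarts is the namespace event stream:
$ kubectl get events -n todo-app --sort-by=.lastTimestamp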
Ingress and External Access
Ingress controllers provide HTTP routing and make services accessible from outside the cluster.
Conclusion
You have successfully deployed a multi-tier application on Kubernetes, demonstrating fundamental concepts like deployments, services, persistent storage, and ingress. This hands-on experience provides the foundation for understanding more advanced Kubernetes features and patterns.
The next article will explore Kubernetes networking in detail, covering how services, ingress controllers, and network policies work together to provide secure, scalable application connectivity.