Part 3: Production Namespace, App Deployment & HTTPS Configuration (4-Core, 8GB RAM, 120GB Storage VPS)
Now that your MicroK8s cluster is running, it's time to deploy production applications with proper security, resource management, and HTTPS configuration! 🚀
Your Server Specs:
- 💻 CPU: 4 cores
- 🧠 RAM: 8 GB
- 💾 Storage: 120 GB SSD
- 🐧 OS: Ubuntu 24.04+
- ☸️ Kubernetes: MicroK8s
In this guide, we'll deploy a complete production stack including PostgreSQL, Redis, and n8n (workflow automation platform) with proper security and monitoring.
📋 What You'll Accomplish in Part 3
✔️ Create a dedicated production namespace
✔️ Configure resource limits & quotas
✔️ Securely create Kubernetes secrets
✔️ Deploy PostgreSQL, Redis, and n8n
✔️ Configure Ingress + HTTPS (Let's Encrypt)
✔️ Enable basic monitoring & health checks
🧩 Step 1: Create a Production Namespace
Namespaces isolate applications and separate environments, making your cluster more organized and secure.
microk8s kubectl create namespace production
Verify the namespace was created:
microk8s kubectl get namespaces
You should see production in the list.
🧩 Step 2: Add Resource Quotas (CPU & Memory Protection)
Resource quotas prevent applications from consuming all available resources, protecting cluster stability.
Create production-limits.yaml:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-limits
  namespace: production
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
Apply the resource quota:
microk8s kubectl apply -f production-limits.yaml
Why This Matters
✔️ Prevents resource exhaustion — Apps can't consume all CPU/RAM
✔️ Protects cluster stability — Other workloads remain unaffected
✔️ Ensures predictable performance — Resources are distributed fairly
Verify the quota:
microk8s kubectl describe resourcequota production-limits -n production
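One side effect of a compute ResourceQuota is that new pods in the namespace must declare CPU and memory requests/limits, or the quota system rejects them. All manifests in this guide set resources explicitly, but if you plan to run ad-hoc workloads in this namespace too, a LimitRange can fill in defaults. A minimal sketch (the name and the values are just suggestions):
# limit-defaults.yaml (optional)
apiVersion: v1
kind: LimitRange
metadata:
  name: production-default-limits
  namespace: production
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 512Mi
Apply it with microk8s kubectl apply -f limit-defaults.yaml.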
🧩 Step 3: Securely Create Secrets (Best Practice)
⚠️ Security Warning: Never store passwords or tokens in Git repositories or plain YAML files. Always create secrets directly in Kubernetes.
🔐 Create PostgreSQL Password Secret
Generate a strong password and create the secret:
microk8s kubectl create secret generic postgres-secret \
--from-literal=password='CHANGE_ME_STRONG_PASSWORD' \
-n production
Important: Replace CHANGE_ME_STRONG_PASSWORD with a strong password:
- Minimum 20 characters
- Mix of uppercase, lowercase, numbers, and symbols
- Example generator:
openssl rand -base64 24
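Alternatively, you can generate the password and create the secret in one step so it never sits in a file. This is only a sketch; the PG_PASSWORD variable name is arbitrary, and the value may still end up in your shell history:
# Generate a password, store it in the secret, and print it once for your password manager
PG_PASSWORD="$(openssl rand -base64 24)"
microk8s kubectl create secret generic postgres-secret \
  --from-literal=password="$PG_PASSWORD" \
  -n production
echo "$PG_PASSWORD"
unset PG_PASSWORD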
🔐 Create n8n Encryption Key Secret
n8n requires an encryption key for securing stored credentials:
Generate a random encryption key:
openssl rand -base64 32
Create the secret with your generated key:
microk8s kubectl create secret generic n8n-secret \
--from-literal=encryption-key='YOUR_GENERATED_KEY_HERE' \
-n production
Verify Secrets
Check that both secrets were created:
microk8s kubectl get secrets -n production
You should see:
NAME TYPE DATA AGE
postgres-secret Opaque 1 30s
n8n-secret Opaque 1 15s
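Secrets are base64-encoded, not encrypted, so you can read a value back later if you need to confirm it. One way to decode the PostgreSQL password:
microk8s kubectl get secret postgres-secret -n production \
  -o jsonpath='{.data.password}' | base64 -d; echo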
🧩 Step 4: Deploy Core Applications
We'll deploy three interconnected services: PostgreSQL (database), Redis (cache), and n8n (automation platform).
4.1 📦 Deploy PostgreSQL (with Persistent Storage)
PostgreSQL will be our primary database for n8n.
Create postgres.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_DB
              value: n8n
            - name: POSTGRES_USER
              value: n8n
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - n8n
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - n8n
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: production
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres
Apply the configuration:
microk8s kubectl apply -f postgres.yaml
Verify deployment:
microk8s kubectl get pods -n production
microk8s kubectl get pvc -n production
Wait for PostgreSQL to be ready:
microk8s kubectl wait --for=condition=ready pod -l app=postgres -n production --timeout=120s
You should see: pod/postgres-xxxxx condition met
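As an optional smoke test, you can list the databases from inside the container; local socket connections in the official postgres image typically don't prompt for a password, so this should show the n8n database:
microk8s kubectl exec -n production deployment/postgres -- psql -U n8n -c '\l'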
4.2 🗄️ Deploy Redis (Cache & Queue)
Redis provides caching and queue management for n8n.
Create redis.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          command:
            - redis-server
            - --appendonly
            - "yes"
            - --save
            - "60"
            - "1"
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: production
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
Apply the configuration:
microk8s kubectl apply -f redis.yaml
Wait for Redis to be ready:
microk8s kubectl wait --for=condition=ready pod -l app=redis -n production --timeout=60s
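Optionally, confirm Redis answers reads and writes with a quick round trip (the smoke-test key is just an example and is removed afterwards):
microk8s kubectl exec -n production deployment/redis -- redis-cli set smoke-test ok
microk8s kubectl exec -n production deployment/redis -- redis-cli get smoke-test
microk8s kubectl exec -n production deployment/redis -- redis-cli del smoke-test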
4.3 ⚙️ Deploy n8n (Workflow Automation)
n8n is a powerful workflow automation tool, similar to Zapier or Make.com, but self-hosted.
Create n8n.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-pvc
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          env:
            - name: DB_TYPE
              value: postgresdb
            - name: DB_POSTGRESDB_HOST
              value: postgres.production.svc.cluster.local
            - name: DB_POSTGRESDB_PORT
              value: "5432"
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_POSTGRESDB_USER
              value: n8n
            - name: DB_POSTGRESDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
            - name: N8N_ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: n8n-secret
                  key: encryption-key
            - name: N8N_PROTOCOL
              value: "https"
            - name: N8N_HOST
              value: "n8n.yourdomain.com"
            - name: WEBHOOK_URL
              value: "https://n8n.yourdomain.com/"
            - name: EXECUTIONS_DATA_PRUNE
              value: "true"
            - name: EXECUTIONS_DATA_MAX_AGE
              value: "168"
          ports:
            - containerPort: 5678
              name: http
          volumeMounts:
            - name: data
              mountPath: /home/node/.n8n
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 5678
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /healthz
              port: 5678
            initialDelaySeconds: 30
            periodSeconds: 10
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: n8n-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: n8n
  namespace: production
spec:
  type: ClusterIP
  ports:
    - port: 5678
      targetPort: 5678
      name: http
  selector:
    app: n8n
Important: Replace n8n.yourdomain.com with your actual domain in the configuration above.
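If you prefer not to edit the file by hand, a substitution like this also works (n8n.example.org stands in for your real domain):
sed -i 's/n8n\.yourdomain\.com/n8n.example.org/g' n8n.yaml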
Apply the configuration:
microk8s kubectl apply -f n8n.yaml
Check deployment status:
microk8s kubectl get all -n production
Wait for n8n to be ready (this may take 2-3 minutes):
microk8s kubectl wait --for=condition=ready pod -l app=n8n -n production --timeout=180s
🧩 Step 5: Configure Ingress + HTTPS (Let's Encrypt)
Now we'll expose n8n to the internet with a valid HTTPS certificate.
5.1 Enable cert-manager
cert-manager automates TLS certificate management:
microk8s enable cert-manager
Wait for cert-manager to be ready:
microk8s kubectl wait --for=condition=ready pod -l app=cert-manager -n cert-manager --timeout=120s
5.2 Create DNS Records
Before proceeding, point your domain to your server:
DNS Record:
Type: A
Name: n8n (or your subdomain)
Value: your.server.ip.address
TTL: 300
Verify DNS propagation:
nslookup n8n.yourdomain.com
Or use:
dig n8n.yourdomain.com
Wait until the DNS record points to your server IP before continuing.
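If you want to wait automatically instead of re-running nslookup, a small loop like this polls until the record resolves to your server (replace the domain and IP placeholders; assumes dig is installed):
until [ "$(dig +short n8n.yourdomain.com)" = "your.server.ip.address" ]; do
  echo "Waiting for DNS to propagate..."
  sleep 10
done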
5.3 Create ClusterIssuer for Let's Encrypt
Create letsencrypt-issuer.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: public
Important: Replace your-email@example.com with your actual email address. Let's Encrypt will use this for certificate expiration notifications.
Apply the issuer:
microk8s kubectl apply -f letsencrypt-issuer.yaml
Verify the ClusterIssuer:
microk8s kubectl get clusterissuer
You should see:
NAME READY AGE
letsencrypt-prod True 30s
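Let's Encrypt rate-limits failed validation attempts against its production endpoint, so if you expect to experiment, it can be worth adding a second ClusterIssuer that points at the staging server and referencing it from the Ingress annotation until everything works. Staging certificates are not trusted by browsers; switch back to letsencrypt-prod once validation succeeds. A minimal sketch:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: public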
5.4 Create Ingress Configuration
Create n8n-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  ingressClassName: public
  rules:
    - host: n8n.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n
                port:
                  number: 5678
  tls:
    - hosts:
        - n8n.yourdomain.com
      secretName: n8n-tls
Important: Replace n8n.yourdomain.com with your actual domain (must match the DNS record you created).
Apply the Ingress:
microk8s kubectl apply -f n8n-ingress.yaml
5.5 Monitor Certificate Creation
Watch the certificate being issued (this takes 1-2 minutes):
microk8s kubectl get certificate -n production -w
Press Ctrl+C to stop watching once you see:
NAME READY SECRET AGE
n8n-tls True n8n-tls 2m
Check certificate details:
microk8s kubectl describe certificate n8n-tls -n production
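If the certificate stays in READY: False, the intermediate ACME resources that cert-manager creates usually explain why:
microk8s kubectl get certificaterequest,order,challenge -n production
microk8s kubectl describe challenge -n production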
5.6 Test Your Deployment
Open your browser and visit:
https://n8n.yourdomain.com
You should see:
- ✅ Valid HTTPS certificate (padlock icon in the address bar)
- ✅ n8n setup page or login screen
- ✅ No security warnings
If you see the n8n interface, congratulations! 🎉 Your production deployment is live!
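You can also verify the endpoint and the issued certificate from the command line (replace the domain; the openssl line prints the issuer and validity dates):
curl -sSI https://n8n.yourdomain.com | head -n 1
echo | openssl s_client -connect n8n.yourdomain.com:443 -servername n8n.yourdomain.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates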
🧩 Step 6: Basic Monitoring & Health Checks
6.1 Check Cluster Resource Usage
View real-time resource consumption:
# Node resources
microk8s kubectl top nodes
Expected output:
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
hostname 800m 20% 3000Mi 37%
Check pod resources:
# Pod resources in production namespace
microk8s kubectl top pods -n production
Expected output:
NAME CPU(cores) MEMORY(bytes)
n8n-xxxxxx 50m 800Mi
postgres-xxxxxx 20m 250Mi
redis-xxxxxx 5m 50Mi
6.2 Health Check Commands
Monitor your services with these commands:
# Check all pods status
microk8s kubectl get pods -n production
All pods should show STATUS: Running and READY: 1/1
# Check n8n logs (last 50 lines)
microk8s kubectl logs -n production deployment/n8n --tail=50
# Follow n8n logs in real-time
microk8s kubectl logs -n production deployment/n8n -f
Press Ctrl+C to stop following logs.
# Test PostgreSQL connection
microk8s kubectl exec -n production deployment/postgres -- \
psql -U n8n -d n8n -c "SELECT version();"
# Test Redis connection
microk8s kubectl exec -n production deployment/redis -- redis-cli ping
Expected output: PONG
# Check n8n health endpoint
curl https://n8n.yourdomain.com/healthz
Expected output: {"status":"ok"}
6.3 Install Netdata (Optional System Monitoring)
For comprehensive system monitoring with a beautiful dashboard:
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
During installation:
- Press Enter to accept defaults
- Installation takes 2-3 minutes
Access Netdata at:
http://your-server-ip:19999
Netdata provides:
- Real-time system metrics
- CPU, RAM, disk, network monitoring
- Container resource tracking
- Alert notifications
🎉 Part 3 Complete — Applications Deployed!
| Component | Status |
|---|---|
| ✅ Namespace created | Done |
| ✅ Resource limits applied | Done |
| ✅ Secrets secured | Done |
| ✅ PostgreSQL deployed | Done |
| ✅ Redis deployed | Done |
| ✅ n8n deployed | Done |
| ✅ HTTPS enabled | Done |
| ✅ Basic monitoring configured | Done |
📚 Quick Reference Commands
View All Resources
# View everything in production namespace
microk8s kubectl get all -n production
Manage Deployments
# Restart a deployment
microk8s kubectl rollout restart deployment/n8n -n production
# Scale a deployment (note: n8n needs queue mode before running more than one replica)
microk8s kubectl scale deployment n8n --replicas=2 -n production
# Check rollout status
microk8s kubectl rollout status deployment/n8n -n production
Logs & Debugging
# View logs
microk8s kubectl logs -n production deployment/n8n -f
# Get detailed pod information
microk8s kubectl describe pod -n production -l app=n8n
# Check recent events
microk8s kubectl get events -n production --sort-by='.lastTimestamp'
Certificate Management
# Check certificate status
microk8s kubectl get certificate -n production
# Describe certificate details
microk8s kubectl describe certificate n8n-tls -n production
# Check certificate renewal
microk8s kubectl get certificaterequest -n production
Storage Management
# Check persistent volumes
microk8s kubectl get pvc -n production
# Check storage usage
microk8s kubectl describe pvc postgres-pvc -n production
🔧 Troubleshooting Guide
Pods Not Starting
# Check pod status and events
microk8s kubectl describe pod -n production <pod-name>
# Check pod logs
microk8s kubectl logs -n production <pod-name>
# Check previous container logs (if pod restarted)
microk8s kubectl logs -n production <pod-name> --previous
Certificate Not Issuing
# Check cert-manager logs
microk8s kubectl logs -n cert-manager -l app=cert-manager
# Check certificate request status
microk8s kubectl get certificaterequest -n production
# Describe certificate for errors
microk8s kubectl describe certificate n8n-tls -n production
Common certificate issues:
- DNS not propagated (wait 5-10 minutes)
- Port 80/443 not accessible (check firewall)
- Invalid email in ClusterIssuer
Ingress Not Working
# Check ingress configuration
microk8s kubectl get ingress -n production
# Check ingress controller logs (the MicroK8s ingress addon labels its pods name=nginx-ingress-microk8s)
microk8s kubectl logs -n ingress -l name=nginx-ingress-microk8s
# Describe ingress for details
microk8s kubectl describe ingress n8n-ingress -n production
Database Connection Issues
# Check PostgreSQL logs
microk8s kubectl logs -n production deployment/postgres
# Test connection from n8n pod
microk8s kubectl exec -n production deployment/n8n -- \
nc -zv postgres.production.svc.cluster.local 5432
Out of Resources
# Check resource usage
microk8s kubectl top nodes
microk8s kubectl top pods -n production
# Check resource quotas
microk8s kubectl describe resourcequota -n production
# Free up space if needed (MicroK8s uses containerd, not Docker)
microk8s ctr images ls                       # list cached images
microk8s ctr images rm <unused-image-ref>    # remove ones you no longer need
💡 Production Tips
Security Best Practices
- Rotate secrets regularly (every 90 days). Note: POSTGRES_PASSWORD is only applied when the database is first initialized, so change the password inside PostgreSQL before updating the secret:
# 1. Change the password inside the running database
microk8s kubectl exec -n production deployment/postgres -- \
  psql -U n8n -d n8n -c "ALTER USER n8n WITH PASSWORD 'NEW_STRONG_PASSWORD';"
# 2. Recreate the Kubernetes secret with the same password
microk8s kubectl delete secret postgres-secret -n production
microk8s kubectl create secret generic postgres-secret \
  --from-literal=password='NEW_STRONG_PASSWORD' -n production
# 3. Restart n8n so it picks up the new credentials
microk8s kubectl rollout restart deployment/n8n -n production
- Apply Pod Security Standards to the namespace (PodSecurityPolicy was removed in Kubernetes 1.25)
- Use network policies to restrict pod-to-pod traffic (see the sketch after this list)
- Regular security updates:
sudo apt update && sudo apt upgrade -y
sudo snap refresh microk8s
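As an example of the network-policy item above, the sketch below only lets n8n pods reach PostgreSQL on port 5432. Recent MicroK8s releases use Calico as the default CNI, which enforces NetworkPolicy; adjust the labels if yours differ:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-n8n-only
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: n8n
      ports:
        - protocol: TCP
          port: 5432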
Performance Optimization
- Monitor resource usage regularly
- Adjust resource limits based on actual usage
- Enable horizontal pod autoscaling when needed (see the sketch after this list)
- Regular database maintenance:
microk8s kubectl exec -n production deployment/postgres -- \
psql -U n8n -d n8n -c "VACUUM ANALYZE;"
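For the autoscaling item above, here is a minimal HorizontalPodAutoscaler sketch for n8n. It assumes the metrics-server addon is enabled (microk8s enable metrics-server), and remember that n8n needs queue mode and suitable shared storage before it can usefully run more than one replica:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70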
Backup Reminders
- Take regular snapshots of persistent volumes
- Export n8n workflows regularly
- Backup PostgreSQL database daily
- Store backups in separate location
- Test restore procedures monthly
🔙 Previous Guide
👈 Back to Part 2: Installing MicroK8s
Need to review the MicroK8s installation or troubleshoot cluster issues? Check out Part 2!
🔜 What's Next?
Ready to protect your data? 💾
👉 Continue to Part 4: Automated Backups & Restore Procedures
In Part 4, you'll learn how to:
- 🔄 Automate daily backups — Set up cron jobs for PostgreSQL, Redis, and application data
- 💾 Backup persistent volumes — Protect your storage from data loss
- 🗄️ Export n8n workflows — Save your automation configurations
- 📤 Off-site backup storage — Send backups to cloud storage (S3, Backblaze, etc.)
- 🔧 Disaster recovery — Complete restore procedures from backup
- ⏰ Retention policies — Manage backup lifecycle and storage costs
- 🧪 Test restore procedures — Validate your backups work before disaster strikes
Never lose your data again! Let's build a bulletproof backup strategy! 🛡️