DEV Community

Latchu@DevOps


Part-84: 🚀Expose a Pod with Google Kubernetes Engine (GKE) LoadBalancer Service (Declarative Way)

In this guide, we’ll learn how to deploy a simple Pod in Google Kubernetes Engine (GKE) and expose it to the outside world using a LoadBalancer Service — all in a declarative way (using YAML manifests).


Step 01: Understanding Kubernetes YAML Top-Level Objects

Every Kubernetes resource is defined in a YAML manifest.
The top-level objects you’ll see in almost every YAML file are:

apiVersion:  # Defines the API version to use (e.g., v1, apps/v1)
kind:        # Type of Kubernetes resource (Pod, Service, Deployment, etc.)
metadata:    # Information about the resource (name, labels, namespace)
spec:        # Desired state (configuration details)

👉 Think of it like this:

  • apiVersion → which API group to use.
  • kind → what resource you want.
  • metadata → how Kubernetes identifies it.
  • spec → how it should behave.
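If you're ever unsure what goes under one of these keys, `kubectl explain` prints the schema straight from the API server — a quick sketch, assuming `kubectl` is already configured against your cluster:

```shell
# Print the documented schema for a resource or any nested field
kubectl explain pod                    # top-level Pod fields
kubectl explain pod.spec.containers    # drill into nested fields
kubectl api-resources                  # list every kind with its apiVersion
```

This is handy for finding the right `apiVersion` for a `kind` without leaving the terminal.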

Step 02: Create a Simple Pod Definition

Let’s create a Pod that runs a simple Nginx-based application.

📂 Create a directory for manifests:

mkdir kube-manifests
cd kube-manifests

📄 File: 01-pod-definition.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: stacksimplify/kubenginx:1.0.0
      ports:
        - containerPort: 80

👉 Key points:

  • The Pod is labeled app: myapp — the Service in the next step will select Pods by this label.
  • The container listens on port 80 (containerPort: 80).

✅ Apply it:

kubectl apply -f 01-pod-definition.yaml
kubectl get pods
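Before wiring up a Service, it's worth sanity-checking the Pod directly. A quick sketch (the port-forward runs in the foreground until you press Ctrl-C):

```shell
# Check the Pod is Running, and inspect events if it isn't
kubectl get pod myapp-pod -o wide
kubectl describe pod myapp-pod

# Temporarily tunnel local port 8080 to the container's port 80
kubectl port-forward pod/myapp-pod 8080:80
# ...then, in another terminal:
curl http://localhost:8080
```

If the curl returns the Nginx page here, any later problem is in the Service, not the Pod.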



Step 03: Create a LoadBalancer Service

Now that we have a Pod, let’s expose it using a LoadBalancer Service.
This Service will provision a GCP external load balancer and route traffic to our Pod.

📄 File: 02-pod-LoadBalancer-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp-pod-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: myapp   # matches our Pod label
  ports:
    - name: http
      port: 80       # Service port (external entry point)
      targetPort: 80 # Container port inside Pod

👉 Flow of traffic:

[Internet User] → [GCP LoadBalancer] → [K8s Service Port 80] → [Pod Container Port 80]

✅ Apply it:

kubectl apply -f 02-pod-LoadBalancer-service.yaml
kubectl get svc


🔎 Look at the EXTERNAL-IP column: it may show <pending> for a minute or two while GCP provisions the load balancer, then a real external IP appears.
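While you wait, two quick checks confirm everything is wired up — a sketch (Ctrl-C stops the watch):

```shell
# ENDPOINTS should list the Pod IP — an empty column means the
# Service selector didn't match any Pod labels
kubectl get endpoints myapp-pod-loadbalancer-service

# Watch the Service until EXTERNAL-IP flips from <pending> to an address
kubectl get svc myapp-pod-loadbalancer-service -w
```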

✅ Test it:

curl http://<Load-Balancer-External-IP>

You should see the Nginx response page 🎉
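You can also script this check by pulling the external IP out of the Service status with a JSONPath query — a sketch, assuming the GCP load balancer reports an `ip` field (rather than a `hostname`) in its ingress status:

```shell
# Extract the load balancer IP from the Service status
EXTERNAL_IP=$(kubectl get svc myapp-pod-loadbalancer-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Hit the app through the load balancer
curl -s "http://${EXTERNAL_IP}"
```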



Step 04: Clean Up

When done, delete the resources:

kubectl delete -f 01-pod-definition.yaml
kubectl delete -f 02-pod-LoadBalancer-service.yaml
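Since both manifests live in the same directory, you can also delete everything in one shot:

```shell
# From inside the kube-manifests directory, delete every resource it defines
kubectl delete -f .
```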



🎯 Summary

  • Pods are the smallest deployable unit in Kubernetes.
  • Services provide a stable endpoint to access Pods.
  • A LoadBalancer Service in GKE creates a GCP Load Balancer and exposes your app to the internet.

This is the simplest way to expose a single Pod to the outside world. In production you would typically put a Deployment behind the Service rather than a bare Pod, so Kubernetes can replace failed replicas automatically.


🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.


— Latchu | Senior DevOps & Cloud Engineer

☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions
