Part-122: 🚀Step-by-Step Guide: Create a GKE Private Cluster with Cloud NAT and Deploy an App

If you’re working with Google Kubernetes Engine (GKE) and want to securely run workloads without exposing nodes to the public internet, this tutorial is for you.

In this post, we’ll walk through:

  • Creating a GKE Private Cluster
  • Configuring Cloud NAT for outbound internet access
  • Deploying and testing a sample NGINX app

🧩 Step 01: Introduction

We’ll perform the following:

  1. Create a GKE Private Cluster
  2. Configure Cloud NAT
  3. Deploy and test a sample application

Private clusters ensure nodes do not have external IPs, enhancing security. Traffic between the control plane and nodes flows internally within your VPC using private IPs.


⚙️ Step 02: Create a Standard GKE Private Cluster

Navigate to:

Kubernetes Engine → Clusters → CREATE

Select:

GKE Standard → CONFIGURE


Cluster Basics

  • Name: standard-private-cluster-1
  • Location type: Regional
  • Region: us-central1
  • Node locations: us-central1-a, us-central1-b, us-central1-c

Leave the remaining options as defaults.


Fleet Registration

Review and leave the defaults.


Node Pools → default-pool

Node Pool Details

  • Name: default-pool
  • Number of Nodes (per zone): 1

Optional Cost-Saving Settings

  • ✅ Enable cluster autoscaler
  • Location policy: Balanced
  • Size limits (per zone): Min 0, Max 1

Nodes: Configure Node Settings

  • Machine series: General Purpose (E2)
  • Machine type: e2-small
  • Boot disk type: Balanced persistent disk
  • Boot disk size: 20 GB
  • Enable nodes on spot VMs: ✅ Checked

Leave all other settings as defaults.


Node Networking, Security, Metadata

Review and leave everything at its defaults.


Cluster Networking

  • Network: default
  • Node subnet: default
  • IPv4 network access: Private cluster
  • Access control plane using its external IP address: Checked (by default)
  • Control plane IP range: 172.16.0.0/28
  • Enable control plane global access: ✅ (optional)
  • Control plane authorized networks: Enabled

Access Security Levels

  • Least secure: public endpoint enabled, authorized networks disabled. Reachable from anywhere on the internet.
  • Medium secure (what we choose here): public endpoint enabled, authorized networks enabled. Reachable only from specific IP ranges (Cloud Shell/local machine).
  • Most secure: public endpoint disabled. Reachable only from a VM inside the GCP VPC or via an on-prem VPN.

Click CREATE to launch the cluster.
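
For reference, roughly the same cluster can be created from the command line. This is a sketch based on the values chosen above (adjust region, CIDR, and sizing to your environment):

gcloud container clusters create standard-private-cluster-1 \
    --region us-central1 \
    --node-locations us-central1-a,us-central1-b,us-central1-c \
    --num-nodes 1 \
    --machine-type e2-small \
    --disk-type pd-balanced \
    --disk-size 20 \
    --spot \
    --enable-autoscaling --min-nodes 0 --max-nodes 1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-authorized-networks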


🧠 Step 03: Access the Private Cluster via Cloud Shell

By default, you can’t access the cluster as your Cloud Shell IP isn’t authorized. Let’s fix that.


Configure kubectl credentials

gcloud container clusters get-credentials standard-private-cluster-1 --region us-central1 --project gcp-zero-to-hero-468909

Check node access

kubectl get nodes

Expected: Access fails — Cloud Shell IP not yet authorized.


List existing authorized networks

gcloud container clusters describe standard-private-cluster-1 \
  --format "flattened(masterAuthorizedNetworksConfig.cidrBlocks[])" \
  --location "us-central1"

Get your Cloud Shell public IP

dig +short myip.opendns.com @resolver1.opendns.com
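
If dig isn't available in your shell, any external "what's my IP" service works too, for example:

curl -s ifconfig.me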

Add the Cloud Shell IP to the authorized networks (replace 35.187.230.177/32 below with the IP returned in the previous step)

gcloud container clusters update standard-private-cluster-1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 35.187.230.177/32 \
    --location us-central1

Or manually:
Go to Cluster → Details → Networking → EDIT → Control Plane Authorized Networks


Validate

kubectl get nodes

✅ Expected: You can now see your private GKE nodes listed.
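
As an extra sanity check that the nodes really are private, the wide node listing should show no external IPs (the EXTERNAL-IP column stays empty):

kubectl get nodes -o wide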



📦 Step 04: Review Kubernetes Deployment Manifest

File: 01-kubernetes-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
        - name: myapp1-container
          image: ghcr.io/stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          imagePullPolicy: Always
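
If you keep both manifests in a kube-manifests/ directory, as the commands in the next steps assume, you can validate them client-side before touching the cluster:

kubectl apply --dry-run=client -f kube-manifests/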

🌐 Step 05: Review Service Manifest

File: 02-kubernetes-loadbalancer-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80
      targetPort: 80

🚫 Step 06: Deploy and Observe Initial Failure

kubectl apply -f kube-manifests/
kubectl get pods

Observation:

Pods fail with ImagePullBackOff because the private nodes have no outbound internet access yet and can’t reach the external container registry (ghcr.io).

kubectl describe pod <POD-NAME>
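
To scan the image-pull failures across all pods at once, the recent events are often quicker to read than per-pod descriptions:

kubectl get events --sort-by=.lastTimestamp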


Clean up before fixing networking:

kubectl delete -f kube-manifests/

☁️ Step 07: Create a Cloud NAT Gateway

Navigate:

Network Services → Cloud NAT → CREATE CLOUD NAT GATEWAY

Gateway Settings:

  • Name: gke-us-central1-default-cloudnat-gw
  • Network: default
  • Region: us-central1

Create Cloud Router

When prompted:

  • Name: gke-us-central1-cloud-router
  • Description: GKE Cloud Router for region us-central1
  • BGP keepalive: 20s (default)
  • Click CREATE

NAT Settings:

Leave Mapping, Destination, and Logging as defaults

✅ Enable Dynamic Port Allocation

Click CREATE
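
For reference, a roughly equivalent command-line setup is sketched below, using the router and gateway names chosen above:

gcloud compute routers create gke-us-central1-cloud-router \
    --network default \
    --region us-central1

gcloud compute routers nats create gke-us-central1-default-cloudnat-gw \
    --router gke-us-central1-cloud-router \
    --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges \
    --enable-dynamic-port-allocation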


✅ Step 08: Deploy and Verify Again

kubectl apply -f kube-manifests/
kubectl get pods

Expected:

Pods now pull images successfully and move to Running state.
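
To wait for the Deployment to finish rolling out instead of polling manually:

kubectl rollout status deployment/myapp1-deployment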


Check Services

kubectl get svc

Access your app using the external IP from the LoadBalancer:

http://<External-IP>
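
If you prefer testing from the terminal, a small convenience snippet (the Service name matches the manifest above) grabs the IP and curls it:

EXTERNAL_IP=$(kubectl get svc myapp1-lb-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$EXTERNAL_IP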



🧹 Step 09: Clean-Up

kubectl delete -f kube-manifests/
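
If you're completely done with the environment, you can optionally remove the cluster and the NAT resources as well (names assume the values used in this guide):

gcloud container clusters delete standard-private-cluster-1 --region us-central1

gcloud compute routers nats delete gke-us-central1-default-cloudnat-gw \
    --router gke-us-central1-cloud-router --region us-central1

gcloud compute routers delete gke-us-central1-cloud-router --region us-central1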

🎯 Summary

  1. Created a Private GKE Cluster
  2. Authorized Cloud Shell for access
  3. Deployed an app (initially failed)
  4. Configured Cloud NAT for outbound traffic
  5. Re-deployed and verified successful access

🧠 Key Takeaways

  • Private clusters improve security by removing public IPs on nodes.
  • Cloud NAT enables controlled outbound internet access.
  • Use authorized networks for granular access control to your GKE API endpoint.

💡 Pro Tip:

If you need even tighter security, disable public endpoint access completely and connect via a bastion host or VPN using private IPs.
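
As a sketch of that tighter setup, the public endpoint can be switched off on the existing cluster (verify the flag against the current gcloud docs before relying on it):

gcloud container clusters update standard-private-cluster-1 \
    --enable-private-endpoint \
    --region us-central1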


🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.


— Latchu | Senior DevOps & Cloud Engineer

☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions
