Part-121: 🔒Understanding Google Kubernetes Engine (GKE) Private Clusters

When you deploy workloads on Google Kubernetes Engine (GKE), you can choose between public and private clusters.
Private clusters give you better network isolation, controlled access, and enhanced security — perfect for production workloads in enterprise environments.

In this post, we’ll explore what a GKE Private Cluster is, how its network design works, and the different security levels you can configure for cluster access.


🧩 What is a GKE Private Cluster?


A GKE Private Cluster ensures that your Kubernetes nodes do not have public IP addresses.
Instead, communication between the nodes and the control plane happens entirely over internal IPs inside Google Cloud’s private network.
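For illustration, here's a minimal sketch of creating one with gcloud (the cluster name, zone, and CIDR below are placeholders, not values from this post):

```bash
# Minimal sketch: create a private GKE cluster.
# --enable-private-nodes -> nodes get internal IPs only.
# --master-ipv4-cidr     -> reserved range for the Google-managed control plane.
gcloud container clusters create my-private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28
```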


🏗️ GKE Private Cluster — Network Design

In a private cluster setup:

  • The Control Plane runs inside a Google-managed VPC network
  • Your VPC network hosts the GKE nodes
  • Both VPCs are connected using VPC Network Peering
  • All traffic between the control plane and the nodes uses internal IP addresses only

This ensures that no part of your cluster control traffic travels over the public internet.
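If you want to see that peering in your own project, something like this should work (assuming the placeholder cluster from the sketch above, on the default network):

```bash
# The peering GKE created toward the control plane's VPC:
gcloud container clusters describe my-private-cluster \
  --zone us-central1-a \
  --format "value(privateClusterConfig.peeringName)"

# It also shows up in the normal peering list for your network:
gcloud compute networks peerings list --network default
```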


⚙️ GKE Private Cluster Highlights

🔹 Deployment Types

You can create and manage Private Clusters in either:

  • Standard mode — full control over the cluster configuration
  • Autopilot mode — Google manages the infrastructure for you
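As a rough sketch, the two modes differ mainly in the create command (names and locations below are placeholders):

```bash
# Standard mode: you choose machine types, node pools, upgrades, etc.
gcloud container clusters create my-standard-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes

# Autopilot mode: Google manages the nodes; private nodes is one flag.
gcloud container clusters create-auto my-autopilot-cluster \
  --region us-central1 \
  --enable-private-nodes
```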

🔹 No External IPs

Nodes inside private clusters do not have external IPs, reducing the attack surface and improving network security.

🔹 Outbound Internet Access

  • If your workloads need internet access (for example, to pull Docker images or connect to APIs), you can enable Cloud NAT (Network Address Translation).
  • Cloud NAT allows outbound connections without exposing node IPs publicly.
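A minimal Cloud NAT setup might look like this (router and gateway names are placeholders; the region must match your cluster's subnet):

```bash
# 1. A Cloud Router in the cluster's region.
gcloud compute routers create nat-router \
  --network default \
  --region us-central1

# 2. A NAT gateway on that router; Google allocates the external IPs.
gcloud compute routers nats create nat-gateway \
  --router nat-router \
  --region us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```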

🔹 Private Google Access

Enabled by default on the subnet used by your GKE nodes.
This allows workloads to securely access Google APIs and services over the private network.

Examples:

  • Accessing container images from Artifact Registry
  • Sending logs to Cloud Logging
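If you ever need to turn it on for a subnet yourself (for example, an existing subnet you bring to the cluster), it's a one-line update (subnet name and region are placeholders):

```bash
gcloud compute networks subnets update my-subnet \
  --region us-central1 \
  --enable-private-ip-google-access
```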

🔐 Accessing a GKE Private Cluster using kubectl

When it comes to cluster access, Google Cloud offers three security levels — from least to most secure.


🟢 Option 1: Least Secure

  • Public endpoint: Enabled
  • Authorized networks: Disabled
  • Access: Anyone over the internet with valid cluster credentials

⚠️ Not recommended for production use.


🟡 Option 2: Medium Secure

  • Public endpoint: Enabled
  • Authorized networks: Enabled
  • Access: Only from trusted IP ranges

(Example: Google Cloud Shell, your corporate office IP, or local desktop)

✅ Ideal for controlled access in test environments.
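Enabling this on an existing cluster is a single update (the CIDR below is a documentation range; substitute your real office or VPN range):

```bash
gcloud container clusters update my-private-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/29
```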


🔴 Option 3: Highly Secure (Recommended)

  • Public endpoint: Disabled
  • Access: Only from within the VPC network or an on-premises network connected via Cloud VPN or Cloud Interconnect

🔐 Best choice for production environments.
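A sketch of creating such a cluster, and of connecting to it from inside the VPC (all names are placeholders):

```bash
# --enable-private-endpoint disables the public control-plane endpoint.
gcloud container clusters create my-private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-ipv4-cidr 172.16.0.0/28

# kubectl now only works from inside the VPC (e.g. a bastion VM)
# or from on-prem over Cloud VPN / Interconnect:
gcloud container clusters get-credentials my-private-cluster \
  --zone us-central1-a \
  --internal-ip
kubectl get nodes
```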


🚀 GKE Private Cluster — Pulling Docker Images from Docker Hub

When you run a Google Kubernetes Engine (GKE) Private Cluster, your nodes don’t have public IP addresses.
So, how can they still pull container images from Docker Hub or other public registries?
That’s exactly what this architecture explains.

(Diagram: a GKE Private Cluster pulling images from Docker Hub through Cloud NAT)


🧠 What’s Happening in This Architecture

This diagram shows how a GKE Private Cluster is designed to securely communicate with both the GKE control plane and the public internet (for pulling images).

Let’s break it down step by step 👇


🏗️ 1. Network Separation — Customer Project vs Google Managed Project

The Customer Project (your project) hosts:

  • The VPC network containing the GKE cluster nodes
  • The Load Balancer Service, Deployments, and Pods

The Google Managed Project hosts:

  • The GKE Control Plane (includes kube-apiserver, kube-scheduler, and kube-controller-manager)

These two environments are isolated for security but connected privately.


🔗 2. VPC Network Peering (Private Connectivity)

The Control Plane VPC and your VPC network are connected using VPC Network Peering.

This allows secure private communication between:

  • The GKE Control Plane (Google-managed)
  • The GKE Nodes (inside your project)

💡 All control traffic — scheduling, API calls, status updates — flows only through internal IPs, not over the public internet.


☁️ 3. GKE Nodes Inside Your Private VPC

  • Your cluster nodes are deployed in private subnets (no external IPs).
  • The subnet range might look like 10.128.0.0/20.
  • Each node runs multiple Pods, which run your containerized applications.
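A subnet like that could be created up front like this (network name and ranges are placeholders; GKE uses the secondary ranges for Pod and Service IPs):

```bash
gcloud compute networks subnets create gke-subnet \
  --network my-vpc \
  --region us-central1 \
  --range 10.128.0.0/20 \
  --secondary-range pods=10.4.0.0/14,services=10.8.0.0/20
```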

⚙️ 4. Load Balancer Service

  • The LoadBalancer Service exposes your application to users.
  • Depending on configuration, it can be provisioned as an external (public) or internal load balancer.
  • It directs incoming traffic to the Pods running inside your private cluster.
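For example, a Service manifest along these lines (the app name is a placeholder; the GKE-specific annotation makes the load balancer internal, and omitting it gives you an external one):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # port the load balancer listens on
    targetPort: 8080  # port your Pods serve on
EOF
```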

🌐 5. Accessing the Internet via Cloud NAT

Since nodes have no public IPs, they cannot directly access external services like Docker Hub.

That’s where Cloud NAT (Network Address Translation) comes in.

  • Cloud NAT enables outbound internet access from private nodes.
  • When a node tries to pull a Docker image (for example, from Docker Hub), the request goes out through Cloud NAT.
  • Docker Hub sees the request as coming from the Cloud NAT public IP, not from individual nodes.
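A quick way to see this in action: the image pull below itself goes through Cloud NAT, and checkip.amazonaws.com (an external echo service, used here purely for illustration) reports the caller's public IP:

```bash
kubectl run nat-test --image=busybox --restart=Never --rm -it \
  -- wget -qO- http://checkip.amazonaws.com
# Prints the Cloud NAT gateway's public IP, not a node IP.
```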

🧩 6. Control Plane Operations

On the Google-managed side, the GKE Control Plane manages:

  • Scheduling Pods (kube-scheduler)
  • Managing Deployments (kube-controller-manager)
  • Handling API requests (kube-apiserver)

🔁 7. Complete Flow — Putting It All Together

Here’s how everything connects:

  1. Pods are scheduled by the GKE Control Plane through private VPC peering.
  2. Nodes inside your private subnet receive instructions and try to pull images from Docker Hub.
  3. Since the nodes have no external IPs, the Cloud NAT gateway routes outbound requests securely.
  4. Docker Hub serves the image back through Cloud NAT, which passes it to the nodes.
  5. The Load Balancer Service exposes your app to users (internally or externally).
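You can verify the whole chain with a couple of commands (the cluster and Service names are the placeholders used in the earlier sketches):

```bash
gcloud container clusters get-credentials my-private-cluster \
  --zone us-central1-a

kubectl get nodes -o wide    # EXTERNAL-IP column shows <none>
kubectl get service my-app   # EXTERNAL-IP shows the load balancer address
```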

🧠 Summary

| Feature | Description |
| --- | --- |
| VPC Peering | Connects your VPC to the control plane's VPC privately |
| Internal IP Communication | Ensures traffic between nodes and control plane stays private |
| Cloud NAT | Enables internet egress for private nodes |
| Private Google Access | Lets workloads reach Google APIs securely |
| No Public IPs | Reduces exposure and enhances cluster security |

🌟 Thanks for reading! If this post added value, a like ❤️, follow, or share would encourage me to keep creating more content.


— Latchu | Senior DevOps & Cloud Engineer

☁️ AWS | GCP | ☸️ Kubernetes | 🔐 Security | ⚡ Automation
📌 Sharing hands-on guides, best practices & real-world cloud solutions
