Kristoffer Hatland
Subnet Planning for Kubernetes: Why Most Calculators Are Wrong

When planning networks for Kubernetes clusters, many engineers reach for a simple subnet calculator.

Something like:

10.0.0.0/24 → 256 addresses

Looks simple enough.

But in practice, cloud networking rarely behaves that way.

After running into subnet exhaustion issues multiple times while deploying Kubernetes clusters, I realized that traditional subnet calculators miss several critical details — especially in cloud environments like AWS, Azure, and Google Cloud.

Let’s walk through why.


The Hidden Problem With Subnet Planning

Most subnet calculators assume a very simple model:

Total IPs = 2^(32 - prefix)

So a /24 network gives you:

256 total IP addresses

Subtract the network and broadcast addresses:

254 usable addresses
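That traditional calculation is easy to sketch in a few lines of Python (the function name is just illustrative):

```python
# Traditional subnet math: total addresses minus the
# network and broadcast addresses.
def traditional_usable(prefix: int) -> int:
    total = 2 ** (32 - prefix)
    return total - 2  # subtract network + broadcast

print(traditional_usable(24))  # 254
```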

This model works fine in traditional networking.

But cloud providers reserve additional addresses, and Kubernetes can consume IP space far faster than expected.


Cloud Platforms Reserve More IPs

Each cloud platform reserves several IP addresses in every subnet.

For example, Azure reserves five IP addresses in every subnet:

Address   Purpose
.0        Network address
.1        Default gateway
.2        Azure DNS
.3        Reserved
Last      Broadcast

That means a /24 subnet actually provides:

256 - 5 = 251 usable addresses

AWS has a similar model and also reserves five addresses per subnet; GCP reserves four. The exact count varies by provider, but it is always less than the textbook number.

This already makes many traditional subnet calculators inaccurate for cloud deployments.
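The cloud-adjusted version of the calculation just swaps the fixed "minus 2" for a per-provider reservation count (five for Azure and AWS, four for GCP):

```python
# Cloud-adjusted usable addresses: providers reserve more
# than the classic network + broadcast pair.
def cloud_usable(prefix: int, reserved: int = 5) -> int:
    # reserved=5 matches Azure and AWS; use reserved=4 for GCP
    return 2 ** (32 - prefix) - reserved

print(cloud_usable(24))  # 251 usable in an Azure /24
```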


Kubernetes Changes the Game

The real challenge appears when running Kubernetes clusters.

Depending on the networking model, each pod may consume an IP address from the subnet.

Examples:

Platform     Networking model
AWS EKS      VPC CNI
Google GKE   VPC-native clusters
Azure AKS    Azure CNI

In all these cases, pod IPs come from the underlying network, which means subnets must handle both nodes and pods.

This causes IP consumption to grow quickly as clusters scale.


Example: AKS Subnet Planning

Let’s walk through a simple example.

Imagine an AKS cluster with:

Parameter       Value
Nodes           20
Pods per node   30

Required pod IPs:

20 × 30 = 600

Add node IPs:

20

Add Azure reserved addresses:

5

Total IPs required:

625

A /24 subnet provides only 251 usable addresses, so it would run out of space almost immediately.

Even a /23 (507 usable addresses) falls short of the 625 required.

This is why many teams allocate much larger ranges, such as a /20 or /19, especially when clusters are expected to grow.
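The arithmetic for this example can be checked directly:

```python
# Worked numbers for the hypothetical 20-node AKS cluster above.
nodes = 20
pods_per_node = 30
reserved = 5  # Azure reserves five addresses per subnet

required = nodes * pods_per_node + nodes + reserved  # pod + node + reserved IPs
usable_24 = 2 ** (32 - 24) - reserved
usable_23 = 2 ** (32 - 23) - reserved

print(f"required={required}, /24 usable={usable_24}, /23 usable={usable_23}")
```

Both the /24 (251 usable) and the /23 (507 usable) come up short of the 625 addresses needed.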


How Large Should a Kubernetes Subnet Be?

One of the most common questions engineers ask is:

How large should the subnet be for a Kubernetes cluster?

The answer depends on three main factors:

  • number of nodes
  • maximum pods per node
  • networking model (CNI plugin)

A simple formula is:

required_ips = (nodes × pods_per_node) + nodes + reserved_ips

Applying the formula to the same example cluster (20 nodes, 30 pods per node, 5 Azure-reserved addresses):

(20 × 30) + 20 + 5 = 625 IPs

This means the cluster would need at least a /22 subnet (1019 usable addresses under Azure's reservation model).

Typical recommendations are:

Subnet   Usable IPs   Typical use
/24      ~251         small clusters
/23      ~507         medium clusters
/22      ~1019        larger clusters
/20      ~4091        production environments

Allocating slightly larger ranges early is usually safer than resizing subnets later.
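The formula above, plus a tightest-fit prefix lookup, can be sketched as follows (function names are illustrative):

```python
import math

def required_ips(nodes: int, pods_per_node: int, reserved: int = 5) -> int:
    # required_ips = (nodes × pods_per_node) + nodes + reserved_ips
    return nodes * pods_per_node + nodes + reserved

def tightest_prefix(total_ips: int) -> int:
    # Largest prefix length (smallest subnet) whose block
    # still holds total_ips addresses.
    return 32 - math.ceil(math.log2(total_ips))

need = required_ips(20, 30)
print(need, f"/{tightest_prefix(need)}")  # 625 /22
```

In practice you would round up at least one prefix size beyond the tightest fit to leave room for growth.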


What Happens When Subnets Are Too Small

This is a surprisingly common failure scenario.

Everything works initially.

Then the cluster grows.

Suddenly:

  • pods fail to schedule
  • node pools cannot scale
  • networking errors appear

At that point the options are painful:

  • rebuild the cluster
  • migrate workloads
  • redesign the VNet or VPC
  • update firewall rules and routing

All of which are far harder than simply planning a larger subnet from the beginning.


Planning Subnets the Safe Way

When designing Kubernetes networking, it's often best to start from maximum expected scale, not the initial deployment.

A simple rule of thumb:

max_nodes × pods_per_node

Then add buffer capacity for:

  • future node pools
  • cluster upgrades
  • cloud platform reserved addresses

This approach avoids painful migrations later.
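A rough planning sketch along these lines, assuming an illustrative 25% headroom factor on top of the maximum expected scale (the factor and function name are assumptions, not a standard):

```python
import math

def planned_prefix(max_nodes: int, pods_per_node: int,
                   reserved: int = 5, headroom: float = 0.25) -> int:
    # Size from maximum expected scale, then add buffer capacity
    # for future node pools, upgrades, and reserved addresses.
    base = max_nodes * pods_per_node + max_nodes + reserved
    needed = math.ceil(base * (1 + headroom))
    return 32 - math.ceil(math.log2(needed))

# A cluster that may eventually reach 40 nodes × 30 pods
print(f"/{planned_prefix(40, 30)}")  # /21
```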


A Small Tool I Built to Help With This

After running into this issue several times, I built a small planner to estimate the subnet size required for Kubernetes clusters running on Azure.

You can try it here:

https://subnettool.com/tools/aks-subnet-planner/

It helps estimate how large your CIDR range should be based on:

  • node count
  • pods per node
  • Azure networking constraints

before deploying the cluster.


Final Thoughts

Subnet planning used to be straightforward.

But in modern cloud-native environments, the interaction between:

  • cloud platform IP reservations
  • Kubernetes networking models
  • cluster scaling

makes it much easier to underestimate address requirements.

Taking the time to plan subnet sizes properly can save a lot of painful networking migrations later.

If you want a deeper explanation of AKS networking models and subnet planning, see this detailed guide:
https://subnettool.com/learn/aks-networking


Related Tools

If you're working with subnet planning in cloud environments, these calculators may also be useful.

General subnet calculator

https://subnettool.com/

Kubernetes subnet planner

https://subnettool.com/tools/kubernetes-subnet-planner/

AKS subnet planner

https://subnettool.com/tools/aks-subnet-planner/

Usable IP calculator

https://subnettool.com/tools/usable-ip-calculator/
