CAST AI

Posted on • Originally published at cast.ai
Keep your AWS Kubernetes costs in check with intelligent allocation

Traditional cost allocation and Kubernetes are like oil and water. Sure, containerized environments make a lot of things easier. But not this one.

Luckily, there are a few things you can do to allocate AWS Kubernetes costs smarter and keep them in check.

Read on to find out what they are and finally take control of your cloud expenses.

What you’ll find inside:

  • You’re not the only one getting confused by Kubernetes costs, here’s why
  1. Calculating shared costs is a nightmare
  2. Containers are very dynamic
  3. Dealing with multiple cost centers is hard
  4. Autoscaling leads to more confusion
  • Allocating AWS Kubernetes costs, the smart way
  1. Use container classes
  2. Break costs down for labeling and tagging
  3. Establish labeling and namespace standards
  4. Split and allocate shared costs
  5. Count in cluster costs beyond the core
  • How to apply all of this and win the cost allocation game

You’re not the only one getting confused by Kubernetes costs, here’s why


Getting the hang of Kubernetes cost estimation, allocation, and reporting is something every team mindful of its expenses aspires to.

But why is it so hard? Here are 4 Kubernetes cost challenges we all know all too well.

1. Calculating shared costs is a nightmare

Kubernetes clusters are, in essence, shared services: multiple teams run multiple containers and apps on them. Once a team deploys a container, it uses some of the cluster’s resources – so you need to pay for each and every server instance that is part of your cluster.

This doesn’t sound so hard until you try making that work with, say, three teams working on ten unique applications.

Which application or project uses the biggest chunk of your cluster resources? You can’t really tell, because all of these projects use multiple containers.

Knowing how many resources an individual container uses from a specific server is next to impossible. And that’s what makes allocating Kubernetes costs so challenging.

2. Containers are very dynamic

The average container lives for about a day. Compare that to how long a typical virtual machine runs – a container’s life is a speck in time.

The dynamic character of your containerized environment makes calculating costs even more complex. You need to come up with a cost management system that can handle it.

3. Dealing with multiple cost centers is hard

It’s likely that not all development costs come from the DevOps budget and you have a number of cost centers running across your company.

While your product team develops core applications, another team might launch a shadow IT project that consumes resources. You need to consider this especially if your business has multiple digital services and each comes with its own teams and budgets.

When multiple teams use one cluster, identifying which one is responsible for which part of the bill is a hard nut to crack.

4. Autoscaling leads to more confusion

Teams often use the three built-in Kubernetes autoscaling mechanisms – Horizontal Pod Autoscaler, Vertical Pod Autoscaler, and Cluster Autoscaler – to reduce the waste (and cost) of running clusters. But autoscaling has an impact on your cost calculations.

For example, Vertical Pod Autoscaler (VPA) automatically adjusts the requests and limits configuration to eliminate overhead. It changes a container’s resource requests, increasing or reducing its resource allocation.

Horizontal Pod Autoscaler (HPA) focuses on scaling out based on CPU or RAM utilization. It changes the number of pod replicas all the time.

Why does it matter? Here’s an example scenario:

  • Imagine that you have three webserver containers running during the night. Everything works well.
  • But there are some peak hours during the day – so HPA scales from 3 to 50 containers.
  • When lunchtime comes and demand is lower, it scales down.
  • And then it brings the scale back up for the afternoon rush, only to settle at a low level as the day ends.

The number of containers and their sizes is very dynamic in this setup. This complicates the process of calculating and forecasting AWS Kubernetes costs even more.
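To see why forecasting gets hard, here’s a back-of-the-envelope sketch of that day. The replica counts and the per-replica hourly rate are made up for illustration:

```python
# Hypothetical day from the HPA scenario above: 3 replicas at night,
# 50 at the morning peak, a lunchtime dip, another afternoon rush,
# then winding down. Assume each replica costs $0.04/hour of node capacity.

hourly_replicas = [3] * 8 + [50] * 4 + [20] * 2 + [50] * 4 + [10] * 6  # 24 hours
cost_per_replica_hour = 0.04  # made-up rate

daily_cost = sum(r * cost_per_replica_hour for r in hourly_replicas)
peak_provisioned = 24 * 50 * cost_per_replica_hour  # sized for peak all day

print(f"autoscaled: ${daily_cost:.2f}/day, peak-provisioned: ${peak_provisioned:.2f}/day")
```

The autoscaled bill comes out at less than half the peak-provisioned one here – but it also changes with every day’s traffic pattern, which is exactly what makes it hard to forecast.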

Allocating AWS Kubernetes costs, the smart way


Take a look at your cloud bill. You get charged for every instance that makes up a cluster where containers are deployed. You need to pay for that resource, even if you’re not using it.

To allocate the individual costs of a container running on a given cluster, you need to discover how much of the server the container ended up consuming.

And then add the satellite AWS Kubernetes costs of a running cluster to that as well (from management nodes and software licensing to backups and disaster recovery).

How to do it? Here are some best practices for allocating Kubernetes costs.

1. Use container classes

You can set different resource guarantees on scheduled containers in Kubernetes. They’re called Quality of Service (QoS) classes. Here’s a quick introduction:

Guaranteed

These pods have top priority and are guaranteed not to get killed unless they exceed their limits. A pod is classified as Guaranteed when every container in it has requests and limits (not equal to 0) set for all resources, and the requests equal the limits.

Use this for critical service containers to make sure that a pod gets the vCPU and memory it needs at all times.

Burstable

If your workload experiences spikes, it should have access to more resources when it needs them. This setup allows the pod to use more resources than requested at first – as long as the capacity is available on the underlying instance.

This type of allocation works like burstable instances AWS offers (T-series) – they give you a base level of performance and allow the pod to burst when it requires more.

This is much more cost-effective than investing in an instance large enough to cover the spikes but way too large for regular operation.

BestEffort

These pods are the lowest priority and get killed first if your system runs out of memory. This allocation allows the pod to run while there’s excess capacity available and stops it when it’s not. It works like spot instances in AWS.

It’s a good idea to schedule a mix of pods with different QoS guarantees onto a server instance to increase its utilization.

For example, you can allocate a baseline of resources to guaranteed resource pods, add some burstable pods that use up to the remainder of resources, and best-effort pods that take advantage of any spare capacity.
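As a sketch, here’s how the three classes fall out of a pod’s `resources` section. The pod names, images, and resource values are hypothetical:

```yaml
# Guaranteed: requests equal limits for every resource in every container
apiVersion: v1
kind: Pod
metadata:
  name: critical-service
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests: {cpu: "500m", memory: "256Mi"}
        limits:   {cpu: "500m", memory: "256Mi"}
---
# Burstable: requests are set lower than limits, so the pod can burst
apiVersion: v1
kind: Pod
metadata:
  name: spiky-workload
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests: {cpu: "250m", memory: "128Mi"}
        limits:   {cpu: "1",    memory: "512Mi"}
---
# BestEffort: no requests or limits at all
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  containers:
    - name: app
      image: nginx
```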

2. Break costs down for labeling and tagging

Breaking costs into separate categories helps to make sense of them through labels and tagging. Here are a few categories that Kubernetes teams find useful:

  • Billing hierarchy – develop it and align it with your cloud cost structure (for example, projects, folders, or organizations)
  • Resources – compute cores, GPU, TPU, RAM, load balancers, custom machines, network egress, etc.
  • Namespaces – it’s good practice to label specific, isolated containers by namespace
  • Labels – come up with labels reflecting different cost centers, teams, application names, environments, etc.
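For example, a deployment might carry cost-allocation metadata along these lines (all names and label values here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: payments          # the namespace isolates and identifies the workload
  labels:
    team: payments-squad       # who owns it
    cost-center: cc-1042       # which budget pays for it
    app: checkout
    env: production
spec:
  selector:
    matchLabels: {app: checkout}
  template:
    metadata:
      labels: {app: checkout}
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.4.2
```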

3. Establish labeling and namespace standards

Develop and implement a labeling and namespace strategy – and stick to it when allocating cluster costs. That way, teams that use AWS can see which groups are driving costs in a given cluster.

Consider the proportional resources consumed by every group and use your findings to allocate cluster costs to these groups.

Here’s an example:

Let’s say that you have four namespaces in a cluster. Each of them consumes 25% of the cluster resources. One way to allocate costs would be taking 25% of the total cluster costs and allocating them to each namespace.

Naturally, this is an example scenario – don’t expect things to be so straightforward in the real world.

Before setting out to do that, establish how you’ll determine cluster resource utilization – by CPU, memory, or a combination of the two? And will you look at requests or actual consumption?

  • If you go for actual usage, each team will only pay for what it uses. But who will be covering the bill for idle time? How are you going to deal with overprovisioning?
  • If you allocate costs by resource requests, you can attribute 100% of the cluster’s costs and encourage teams to provision only what they need. But this might also lead teams to underestimate their requirements.
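As a sketch of the request-based approach, here’s how the four-namespace example could be computed. The namespace names and per-namespace request figures are hypothetical:

```python
# Allocate cluster cost to namespaces in proportion to their CPU requests.
# A single-dimension model for simplicity; real setups often blend CPU
# and memory. All figures are hypothetical.

def allocate_by_requests(cluster_cost, requests_by_ns):
    total = sum(requests_by_ns.values())
    return {ns: cluster_cost * req / total for ns, req in requests_by_ns.items()}

# Four namespaces, each requesting 25% of the cluster's CPU
cpu_requests = {"team-a": 10, "team-b": 10, "team-c": 10, "team-d": 10}  # cores
costs = allocate_by_requests(4000, cpu_requests)  # a $4,000 monthly cluster bill

print(costs)  # each namespace is allocated $1,000
```

Swapping `cpu_requests` for measured consumption gives you the usage-based variant – and immediately raises the question from above of who pays for the idle remainder.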

4. Split and allocate shared costs

Companies have unique ways to split infrastructure costs. These methods often get inherited when they start using Kubernetes.

Here’s a set of best practices if you’re looking for another approach:

a. Define what shared costs are

This depends on the maturity and size of your company. You share the cloud bill at the organizational level but need to allocate it either to a centralized budget or different cost centers. Still, your shared costs will be charged within one account, so understanding which AWS Kubernetes costs should be shared is challenging.

Here are a few examples of commonly shared costs:

  • Shared resources (network, storage like data lakes)
  • Platform services (Kubernetes, logging)
  • Enterprise-level support and discounts
  • Licensing and third-party costs

Take support charges as an example. They’re applied at the parent account level. While some businesses cover them with a central budget of the IT or cloud team, others go a step further and allocate this cost to customers like application owners or business units.

The rise of shared platforms where multiple teams use the same core resources complicates this – like Kubernetes systems that run on shared clusters.

b. Split your shared costs

Tagging helps to do that accurately, and you can choose from several techniques:

  • Proportional split – based on the relative percentage of direct costs
  • Even split – where you split the total amount evenly across all targets
  • Fixed proportion split – based on a user-defined coefficient

This is a bit abstract, so let’s walk through an example.

Imagine that you have three business units, each consuming a different share of cloud resources, with direct charges adding up to $100k per month.

You get a $15k enterprise support charge on top of that, so your final bill is $115k per month.

Here’s how this plays out in different splitting techniques.

Proportional split

In this model, you split the $15k enterprise support charge among your three business units based on the percentage of their spend in direct charges. So the sales operations team, which accounts for 50% of the direct charges, is also accountable for $7.5k ($15k × 50%) on top of its bill.

Even split

This model is simpler, so you can often find it among smaller companies with fewer business units. In this scenario, the $15k enterprise support charge is shared evenly by all business units – so $5k each.

Fixed proportion split

When using this method, you set a fixed percentage for attributing shared costs, usually based on past spend, to get a fair breakdown. So if you decide that the sales operations team’s shared cost allocation is 40%, it gets $6k ($15k × 40%) of the enterprise support fee allocated to it.
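All three techniques boil down to a few lines of arithmetic. Here’s a sketch with the numbers from the example above ($15k support on $100k of direct spend); only the sales operations figures come from the example, the other units’ shares are made up:

```python
# Three ways to split a $15k shared enterprise support charge across
# business units with $100k of direct charges in total.

shared = 15_000
direct = {"sales-ops": 50_000, "marketing": 30_000, "analytics": 20_000}

# Proportional split: by each unit's share of direct spend
proportional = {u: shared * d / sum(direct.values()) for u, d in direct.items()}

# Even split: the same amount for every unit
even = {u: shared / len(direct) for u in direct}

# Fixed proportion split: user-defined coefficients (e.g. sales ops at 40%)
coefficients = {"sales-ops": 0.40, "marketing": 0.35, "analytics": 0.25}
fixed = {u: shared * c for u, c in coefficients.items()}

for name, split in [("proportional", proportional), ("even", even), ("fixed", fixed)]:
    print(f"{name}: sales-ops pays ${split['sales-ops']:,.0f}")
```

The sales operations team ends up with $7.5k, $5k, or $6k of the same $15k charge, depending purely on which technique you pick – so agree on one before the first bill arrives.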

5. Count in cluster costs beyond the core

When allocating costs to cluster consumers, consider the satellite costs of operating this cluster like:

  • Management and operational costs – these are charged by AWS for managing the cluster for you. For example, EKS charges $0.10 per hour per Kubernetes cluster, which amounts to roughly $73 per month.

Learn more here: AWS EKS vs. ECS vs. Fargate: Where to manage your Kubernetes?

  • Storage – add the cost of storage consumed by the host OS on the nodes; any backup or data retrieval storage used to operate a production cluster can also be allocated back to the workloads running on it.
  • Licensing – these costs might be included in your AWS bill, but if you Bring Your Own License (BYOL), you need to allocate this cost from the external spend. Moreover, software packages running on the host OS might incur license fees too.
  • Observability – metrics and logs are transferred from the cluster to a service your teams use to monitor and visualize them. This cost might be charged by AWS or by a third-party SaaS solution.
  • Security – AWS offers a wealth of security features, but they come at an extra fee that needs to be allocated.
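A rough monthly tally of these satellite costs might look like this. Only the $0.10/hour EKS control-plane fee is an actual AWS price; every other figure is a placeholder:

```python
# Monthly satellite costs for one cluster, beyond the worker nodes.
# The control-plane fee is real EKS pricing; the rest are hypothetical.

hours_per_month = 730
control_plane = 0.10 * hours_per_month  # EKS management fee, ~$73/month

satellite = {
    "control_plane": control_plane,
    "node_os_storage": 25.0,   # hypothetical
    "backups": 40.0,           # hypothetical
    "observability": 120.0,    # hypothetical
    "security": 60.0,          # hypothetical
}

print(f"satellite costs: ${sum(satellite.values()):.2f}/month")
```

Whatever the real figures are in your account, this bucket needs to be allocated back to cluster consumers alongside the instance costs themselves.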

How to apply all of this and win the cost allocation game

Implementing all of these best practices at once is bound to overwhelm you. So start small and develop a process for allocating costs. Build an understanding of how these costs should be allocated in your company.

Or get a solution that keeps your AWS Kubernetes costs at bay. Analyzing and allocating costs is so much easier if you have access to a detailed overview like this:

Here’s how to get started: Analyze your cluster for free to see every single detail that increases your AWS bill.

Oldest comments (2)

snowboardeyes

A bit lengthy read but worth it, thanks guys 👍

CAST AI

Thanks for the feedback – next time we’ll try to make it shorter and straight to the point. And you always have another option: try the EKS analyzer, where we’ll do all the work for you :)