Michael Levan

KCSA Blog Series Part 1: Overview of Cloud Native Security

In part one of this blog series, you'll learn about the first domain objective for the Kubernetes and Cloud Security Associate (KCSA) certification.

Special Announcement: I will be officially creating the KCSA course for LinkedIn Learning! It’ll be released most likely in Q3 or Q4 of 2023, so although it’s a ways out, it makes sense from a scheduling perspective as the certification should be out of beta around that time.

Cloud Provider Security

When it comes to cloud security, there are two crucial points - what you’re responsible for and what the cloud provider is responsible for. This is called the Shared Responsibility Model.

Depending on which cloud provider you choose, your options will differ. For example, Azure and AWS approach security with similar methodologies, but the tools associated with each platform are different. However, each cloud vendor makes recommendations based on best practices (and those best practices largely come from the same place).

The cloud provider is responsible for the underlying hardware security. For example, the servers that Azure is running on aren’t your concern. However, the network/firewall rules that are connected to your virtual machines are your concern.

Think about it like this - if you can access it, it’s your concern. You can’t walk into an Azure datacenter and touch the server. You can, however, log into Azure and set up proper networking. This is the basis of the shared responsibility model.

Below are two links covering how AWS thinks about security and how Azure thinks about security.

Within the links above, you’ll see various options for how you can secure your cloud resources based on the tools that are available for you out of the box.

Infrastructure Security

When it comes to infrastructure security, it’s going to be similar to cluster security. The cloud is responsible for certain parts and you as the engineer are responsible for certain parts.

For example, one of the best infrastructure practices when it comes to Kubernetes is to encrypt data at rest in Etcd (the Kubernetes datastore). If you’re hosting Kubernetes yourself, you can configure this. If Kubernetes is hosted in the cloud with a Managed Kubernetes Service, you won’t have direct access to Etcd, which means securing it is the cloud provider’s responsibility.

However, even in the cloud, there are some infrastructure best practices you can follow.
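If you do run your own control plane, encryption at rest for Secrets is configured on the API server rather than on Etcd directly. A minimal sketch (the file path and key name are assumptions; the file is passed to the API server via the `--encryption-provider-config` flag):

```yaml
# Path is an assumption -- reference it with --encryption-provider-config
# on the kube-apiserver.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts Secrets at rest in Etcd using the key below
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      # identity is the fallback so existing unencrypted data stays readable
      - identity: {}
```

After enabling this, existing Secrets can be re-written (for example, with `kubectl get secrets -A -o json | kubectl replace -f -`) so they get encrypted with the new provider.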

  1. Ensure proper access to the API server. By default, many Managed Kubernetes Services expose the API server on a public IP address. You can disable the public endpoint or restrict which IP ranges can reach it.
  2. The VMs that your worker nodes are running on can be protected via network policies and controls. Remember, the worker nodes attached to your Managed Kubernetes Service cluster are just virtual machines, and virtual machines can be controlled.
  3. Least privilege to the API server. Not everyone on your team needs RBAC permissions to reach the API server, and some may only need certain access (like read-only).
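The third point above can be sketched with RBAC. Below is a minimal read-only ClusterRole plus a binding for a hypothetical engineering group (the group name is an assumption — it would map to whatever your identity provider uses):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-read-only
rules:
  # get/list/watch only -- no create, update, or delete verbs
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "deployments", "services", "jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-engineers
subjects:
  # Group name is hypothetical -- it comes from your identity provider
  - kind: Group
    name: eng-read-only
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-read-only
  apiGroup: rbac.authorization.k8s.io
```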

Cluster Security

When thinking about how to secure a cluster, it’ll be broken down into two pieces:

  • The cluster itself.
  • The applications inside of the cluster.

At the cluster level in the cloud, because the control plane is managed by the cloud provider, it largely comes down to granting proper permissions and access. Think about it in a least-privilege fashion. Not everyone needs access to scale the cluster or reach the API server. Some engineers may only need, for example, read access to the production clusters, while other engineers may need full permission to the production clusters. Cluster security in the cloud comes down to ensuring that whoever has access 1) Has only the access they need and 2) Has access to the proper resources and clusters.

A few other cluster-specific security practices to think about are:

  1. Limiting resource usage on the cluster.
  2. Security Contexts in your Pod spec.
  3. Restricting network access.
  4. Enabling audit logging.
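Items 1 and 2 above can be combined in a single Pod spec. A hedged sketch (the image and resource values are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app      # name is illustrative
spec:
  securityContext:
    runAsNonRoot: true    # refuse to start if the image runs as root
    runAsUser: 10001
  containers:
    - name: app
      image: nginx:1.25   # example image only
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]   # drop every Linux capability the app doesn't need
      resources:
        requests:         # what the scheduler reserves
          cpu: 100m
          memory: 128Mi
        limits:           # the hard cap the container can't exceed
          cpu: 250m
          memory: 256Mi
```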

From an application perspective, it comes down to three parts:

  • Access to the application.
  • What Pods have access to from a network perspective.
  • Proper encryption.

The first step is to think about RBAC. How are permissions assigned for your Pods? Is there a proper service account deploying your Pods, and does it have appropriate permissions? Do the users/engineers that need access to your Pods have proper access, defined with least privilege?
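Those questions can be answered in manifests. Below is a sketch of a least-privilege ServiceAccount for a hypothetical deploy workflow (the names and namespace are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer            # hypothetical name
  namespace: production         # namespace is an assumption
automountServiceAccountToken: false  # no API token unless the Pod needs it
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer-role
  namespace: production
rules:
  # Only what a deploy step needs -- nothing cluster-wide
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-deployer
    namespace: production
roleRef:
  kind: Role
  name: app-deployer-role
  apiGroup: rbac.authorization.k8s.io
```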

The second step is to confirm what your Pods have access to. By default, the Pod network is flat, meaning all Pods can talk to all other Pods regardless of the Namespace they’re in. Because of that, you want to ensure you have proper network policies in place, along with ensuring that the containers running inside of the Pods are configured with proper security contexts. Along with that, one of the biggest overall concerns (and security concerns) is resource constraints. Don’t let your Pods consume as many resources (CPU, memory, storage) as they want. Set requests and limits for CPU and memory.
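A common starting point for those network policies is a default-deny policy per Namespace, which turns the flat network into an allow-list. A minimal sketch (the namespace is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production  # namespace is an assumption
spec:
  podSelector: {}        # empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # No ingress or egress rules are listed, so all traffic is denied;
  # you then add narrower policies to allow only what's needed.
```

Note that NetworkPolicy objects only take effect if your CNI plugin enforces them (Cilium and Calico, mentioned below, both do).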

The third step is to ensure proper encryption. If you’re using an Ingress Controller, ensure TLS is enabled. For the Pod network, think about a security-centric CNI like Cilium or Calico. You can also implement a Service Mesh for mTLS capabilities. For Kubernetes Secrets, ensure they’re encrypted at rest in Etcd (by default they’re only base64-encoded, not encrypted), or use a third-party platform like HashiCorp Vault.
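The Ingress TLS piece can be sketched like this (the hostname, Secret, and Service names are all assumptions; the Secret would hold your certificate and key):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
    - hosts:
        - app.example.com        # hostname is an assumption
      secretName: app-tls-cert   # Secret of type kubernetes.io/tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service  # hypothetical Service
                port:
                  number: 443
```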

Container Security

When securing a Pod outside of the network layer, it comes down to the container image and the container itself.

The first step is to properly scan a container image. Before anything is deployed, you should always scan. A few good tools for this purpose are Kubescape and Checkov. Kubescape, for example, can scan container images for known vulnerabilities and scan clusters against frameworks like the CIS benchmarks.
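As one hedged example of where a scan like this could run, here is a sketch of a CI job that installs Kubescape and scans an image before it ships (the workflow syntax shown is GitHub Actions; the image name is a placeholder, and the install URL and exact Kubescape subcommands are assumptions that may vary by version):

```yaml
# Hypothetical GitHub Actions workflow -- verify the install script URL
# and subcommand against the Kubescape docs for your version.
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Kubescape
        run: curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
      - name: Scan the image before deploying it
        run: kubescape scan image myorg/myapp:latest  # image name is a placeholder
```

Failing the pipeline on scan findings is the point — a scan that only logs results doesn’t stop a vulnerable image from shipping.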

The next step would be to sign the container images so they’re trusted. You can read more about the full process here:

The third step is to ensure that proper access is given to containers. As mentioned a few times above, proper permissions and RBAC should be on top of your mind.

The last step is to ensure that you’re using a proper container runtime with isolation. Chances are you won’t see much of this in the cloud, but you can utilize a Kubernetes API to manage this.
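One such Kubernetes API is RuntimeClass, which lets a Pod opt into a sandboxed runtime. A sketch assuming nodes that already have gVisor’s `runsc` runtime installed (the class and Pod names are illustrative):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
# handler must match a runtime configured on the node (runsc is gVisor's)
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app   # illustrative name
spec:
  runtimeClassName: gvisor  # schedule this Pod onto the sandboxed runtime
  containers:
    - name: app
      image: nginx:1.25     # example image
```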

Code Security

When you’re securing code, you’ll most likely be doing that way before it ever gets to Kubernetes. As you’re doing unit tests, mock tests, and integration tests, you’ll be performing (well, hopefully you will) security scans against the code.

One of the most popular security scanning tools is SonarQube. This most likely won’t be on the exam, but it’s still good to know.

For code security when it comes to Kubernetes, think about the following:

  1. TLS access for any communication that needs to occur via TCP/IP.
  2. Ensure that the apps only have access to particular ports that are necessary.
  3. Scan your code with a third-party tool (like SonarQube) and utilize Static Code Analysis.
  4. Test out attacks against your code in a controlled environment using dynamic probing. You can learn about a few methods on how to do that here:
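Item 2 above can be enforced at the cluster level with a NetworkPolicy that only allows traffic on the port the app actually serves (the label and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-https-only
spec:
  podSelector:
    matchLabels:
      app: web         # label is an assumption
  policyTypes:
    - Ingress
  ingress:
    - ports:
        # Only the single port the app is meant to serve on
        - protocol: TCP
          port: 8443
```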
