The Hidden Risks of "Secure by Default": Why Security Contexts in Kubernetes Matter
Kubernetes says it's "secure by default." But defaults in this case are just starting points, and in many clusters they're dangerously permissive. A single missing security context can make the difference between isolation and compromise.
What "Secure by Default" Actually Means (and Doesn't)
When Kubernetes claims to be "secure by default," it means the platform provides security primitives out of the box: RBAC, network policies, secrets management, and pod security standards.
The security infrastructure is available. But here's the catch: available doesn't mean enabled or enforced.
By default, Kubernetes allows:
- Pods to run as root
- Containers to escalate privileges
- Processes to access the host filesystem
- Services to communicate freely across namespaces
These defaults prioritize compatibility and ease of getting started. For production workloads, they're a security liability. The burden falls on you to configure proper security boundaries.
Security Context Deep Dive
A security context defines privilege and access control settings for pods and containers. Think of it as the difference between giving someone full admin access versus a restricted user account.
Critical Security Context Fields
runAsUser / runAsGroup
```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
```
Forces the container to run as a non-root user. Without this, containers default to UID 0 (root), which means a compromised container has root-level access to do damage.
allowPrivilegeEscalation
```yaml
securityContext:
  allowPrivilegeEscalation: false
```
Prevents processes from gaining more privileges than their parent. Even if running as non-root, without this control, an attacker might find a way to escalate to root privileges.
readOnlyRootFilesystem
```yaml
securityContext:
  readOnlyRootFilesystem: true
```
Makes the container's root filesystem immutable. Attackers can't install malware, modify binaries, or leave persistence mechanisms. Any necessary writes go to mounted volumes.
capabilities
```yaml
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE
```
Linux capabilities provide fine-grained permissions. Drop all by default, then add back only what's truly needed. Most applications don't need the 40+ capabilities root has.
Don't know the capabilities by heart? I've got you covered: here's the list from the Linux man pages.
Want to learn a bit more about capabilities and seccomp? Sure, there you go: courtesy of Red Hat.
seccompProfile / seLinuxOptions / appArmorProfile
```yaml
securityContext:
  seccompProfile:
    type: RuntimeDefault
```
Applies kernel-level security profiles that restrict system calls and access patterns. These are your defense-in-depth layers.
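These layers can be combined in a single securityContext. A hedged sketch (the SELinux level shown is an arbitrary example that depends on your node's policy, and the appArmorProfile field is native only on Kubernetes 1.30+; older versions used an annotation instead):

```yaml
securityContext:
  seccompProfile:
    type: RuntimeDefault      # use the container runtime's default syscall filter
  seLinuxOptions:
    level: "s0:c123,c456"     # example MCS level; must match your node's SELinux policy
  appArmorProfile:
    type: RuntimeDefault      # native field since Kubernetes 1.30
```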
Common Pitfalls That Lead to Compromise
Pitfall 1: The Root Default
The scenario: You deploy a common web application without specifying runAsUser. The container's Dockerfile doesn't set a user, so it runs as root.
The impact: An attacker who exploits an application vulnerability (like a command injection) now has root access inside the container. They can:
- Read secrets mounted as files
- Access service account tokens
- Potentially break out of the container if other protections are weak
The fix: Always explicitly set runAsNonRoot: true and specify a user ID.
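A minimal sketch of that fix at the pod level (the pod name and UID here are arbitrary examples; pick a UID that matches your image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-root-app          # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true       # kubelet refuses to start the container if it would run as UID 0
    runAsUser: 10001         # explicit non-root UID, in case the image doesn't set one
  containers:
    - name: app
      image: myapp:latest
```

Note that with runAsNonRoot: true alone, a container whose image resolves to root will fail at startup; setting runAsUser as well avoids that surprise.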
Pitfall 2: Privilege Escalation Enabled
The scenario: Your pod runs as a non-root user, but allowPrivilegeEscalation is left at its default (true).
The impact: Vulnerabilities in setuid binaries or kernel exploits could allow privilege escalation to root, bypassing your runAsUser protection.
The fix: Explicitly set allowPrivilegeEscalation: false for all workloads.
Pitfall 3: HostPath Misuse
The scenario: A developer mounts /var/run/docker.sock or another host path into a pod for "convenience."
```yaml
volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
```
The impact: This is essentially handing over the keys to the kingdom. An attacker with access to this pod can manipulate the container runtime, escape the container, and compromise the entire node.
The fix: Avoid hostPath mounts entirely unless absolutely necessary. Use volume types like emptyDir, PersistentVolumes, or CSI drivers instead.
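A sketch of what the safer alternatives look like in a pod spec (the PVC name is a hypothetical example):

```yaml
volumes:
  # Ephemeral scratch space scoped to the pod, instead of a hostPath:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi         # optional cap on the scratch volume
  # Durable storage via a PersistentVolumeClaim, instead of a hostPath:
  - name: data
    persistentVolumeClaim:
      claimName: app-data    # hypothetical PVC name
```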
Pitfall 4: Overprivileged Service Accounts
The scenario: Pods use the default service account in a namespace, which has been granted broad RBAC permissions for convenience.
The impact: Compromised pods can interact with the Kubernetes API to discover other workloads, access secrets across the namespace, or even escalate privileges.
The fix:
- Create dedicated service accounts per application
- Apply least-privilege RBAC
- Set automountServiceAccountToken: false by default, so that when API access is genuinely needed, the workload mounts the token explicitly in its pod spec
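Putting those three fixes together, a least-privilege setup might look like this (all names here are hypothetical, and the Role grants only what this example app supposedly needs):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api              # hypothetical: one SA per application
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-api-read
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]        # only what the app actually needs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-api-read
subjects:
  - kind: ServiceAccount
    name: payments-api
roleRef:
  kind: Role
  name: payments-api-read
  apiGroup: rbac.authorization.k8s.io
```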
Real-World Breach Scenarios
Scenario 1: Lateral Movement via Shared Service Accounts
Attack chain:
- Attacker exploits a vulnerability in a public-facing API pod
- The pod runs as root with a permissive service account
- Attacker uses the service account token to query the K8s API
- Discovers credentials to a database pod in the same namespace
- Moves laterally and exfiltrates customer data
Prevention: Non-root execution + restricted service account + network policies
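The network-policy part of that prevention can be sketched like this, assuming the database pods carry an app: database label and the only legitimate client is labeled app: payments-api (both labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress-allowlist      # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: database               # assumed label on the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-api   # only this app may reach the database
      ports:
        - protocol: TCP
          port: 5432
```

With this in place, a compromised API pod elsewhere in the namespace can no longer open arbitrary connections to the database.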
Scenario 2: Container Escape Through Capabilities
Attack chain:
- Attacker compromises a container running with CAP_SYS_ADMIN
- Uses this capability to mount the host filesystem
- Escapes the container and gains node access
- Installs a cryptominer on the host
Prevention: Drop all capabilities except what's strictly needed
Enforcing Security Contexts: Beyond Manual Review
Manual code review doesn't scale, and humans forget. You need policy-as-code to enforce security contexts automatically.
Pod Security Standards (PSS)
Kubernetes' built-in solution offers three levels:
- Privileged: Unrestricted (use sparingly, for system pods only)
- Baseline: Prevents known privilege escalations
- Restricted: Deeply defensive, best practice for most workloads
Enable at the namespace level:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Policy Engines: OPA Gatekeeper and Kyverno
For more sophisticated policies, use admission controllers.
Example Gatekeeper Policy: Require runAsNonRoot
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequireNonRootUser
metadata:
  name: must-run-as-non-root
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - production
```
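A constraint like K8sRequireNonRootUser only works if a matching ConstraintTemplate has been installed first. A hedged sketch of what that template could look like (the Rego here is illustrative, not the official gatekeeper-library version):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequirenonrootuser
spec:
  crd:
    spec:
      names:
        kind: K8sRequireNonRootUser   # the kind referenced by constraints
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequirenonrootuser

        # Flag any container that does not explicitly set runAsNonRoot: true
        violation[{"msg": msg}] {
          c := input.review.object.spec.containers[_]
          not c.securityContext.runAsNonRoot
          msg := sprintf("container %v must set runAsNonRoot: true", [c.name])
        }
```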
Example Kyverno Policy: Enforce read-only filesystem
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ro-rootfs
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-readonlyrootfs
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Root filesystem must be read-only"
        pattern:
          spec:
            containers:
              - securityContext:
                  readOnlyRootFilesystem: true
```
These tools catch misconfigurations at admission time, before they reach your cluster.
Practical Checklist: Baseline Security Context
Integrate this into your CI/CD and make it the default for all deployments:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    fsGroup: 10001
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/cache
  volumes:
    - name: tmp
      emptyDir: {}
    - name: cache
      emptyDir: {}
  automountServiceAccountToken: false
```
Key elements:
- Non-root user at both pod and container level
- Privilege escalation blocked
- Read-only root filesystem (with emptyDir for any writable paths)
- All capabilities dropped
- Default seccomp profile applied
- Service account token not mounted (add back only if needed)
Integrating Into Your Workflow
1. Developer Guardrails
- Provide secure pod templates and Helm chart libraries
- Run policy checks in pre-commit hooks (pre-commit-opa, for example)
- Include security context validation in code review checklists
2. CI/CD Gates
- Scan manifests with tools like kubesec or Polaris
- Fail builds if security context is missing or weak
- Generate reports showing policy compliance trends
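One way to wire such a gate into CI is a manifest-scanning step. A hypothetical GitHub Actions sketch using kubesec's Docker image (the workflow name, manifest path, and failure behavior are assumptions; adapt them to your pipeline):

```yaml
# .github/workflows/manifest-scan.yaml (illustrative)
jobs:
  scan-manifests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan Kubernetes manifests with kubesec
        run: |
          # Fail the build if any manifest scores poorly
          for f in k8s/*.yaml; do
            docker run --rm -v "$PWD:/work" kubesec/kubesec:v2 scan "/work/$f" || exit 1
          done
```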
3. Runtime Enforcement
- Deploy Gatekeeper or Kyverno in audit mode first to measure impact (and brace yourself: flipping to enforce mode will have engineering teams crying out loud!)
- Gradually transition to enforce mode for critical namespaces
- Monitor for policy violations and adjust as needed
4. Continuous Monitoring
- Use tools like Falco to detect runtime anomalies
- Alert on unexpected privilege escalation attempts
- Correlate security context violations with CVE data
Beyond the Basics: Defense in Depth
Security contexts are foundational, but they're not the whole story. Layer them with:
- Network policies: Restrict pod-to-pod communication
- RBAC: Limit API server access
- Secrets management: Use external secret stores (Vault, AWS Secrets Manager)
- Image scanning: Catch vulnerabilities before deployment. And one important point here: please DO NOT scan your containers only at build time; scan them at deployment time too. You'd be surprised how many images sit untouched for weeks or months while new vulnerabilities are found in them
- Runtime security: Detect and respond to threats in real-time
Each layer compensates for potential weaknesses in others.
"Secure by default" isn't a guarantee: It's an invitation to configure responsibly.
Kubernetes gives you the tools, but security contexts won't configure themselves. The good news? Once you establish baseline policies and enforcement mechanisms, security becomes systematic rather than heroic.
Start small: audit your current deployments, identify the highest-risk workloads, and apply these principles there first. Then bake security contexts into your templates and pipelines so they become the path of least resistance.
Remember: in Kubernetes, privilege is easy to grant and hard to take back. Start restrictive, and relax only when you have a clear, documented reason. Your future incident response team will thank you.