DEV Community

Matheus

Posted on • Originally published at releaserun.com

Securing the Kubernetes Dashboard: A Hardening Guide for 2026

What Is the Kubernetes Dashboard?

The Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It lets users deploy containerized applications, troubleshoot running workloads, and manage cluster resources through a browser rather than the command line. For organizations with mixed-experience teams, where not everyone lives in kubectl all day, the dashboard has historically been the fastest way to give developers visibility into cluster state.

The dashboard shows pods, deployments, services, config maps, secrets (masked), node status, resource consumption, and logs. It also allows write operations: scaling deployments, editing manifests, and deleting resources. That combination of broad visibility and full mutability is exactly what makes it a security concern.

Important note for 2026: The Kubernetes Dashboard project was officially archived on January 21, 2026. The repository has been moved to kubernetes-retired/dashboard on GitHub and will receive no further security patches, bug fixes, or feature updates. The Kubernetes SIG UI group now recommends Headlamp as the successor project. Despite this, many production clusters still run the dashboard, some intentionally, some as legacy debt. If you are running it, hardening it is non-negotiable. If you are planning new infrastructure, you should evaluate the alternatives discussed later in this guide.

Why the Kubernetes Dashboard Is a Security Risk

The dashboard's attack surface is straightforward: it is a web application with broad API access running inside your cluster. If an attacker reaches it, they effectively have the same access as whatever service account the dashboard is bound to, which, in too many real-world deployments, means cluster-admin.

The Tesla Cryptojacking Incident

The most widely cited example of a dashboard breach is the 2018 Tesla cryptojacking incident. Researchers from RedLock, a cloud security firm, discovered that Tesla's Kubernetes dashboard was accessible from the public internet without any authentication. Attackers had already found it first. They used the exposed console to deploy cryptocurrency mining software that spoke the Stratum mining protocol across Tesla's cloud infrastructure, quietly mining approximately 0.9 Bitcoin over several weeks.

The attackers were sophisticated. They used an unlisted mining pool rather than a public one, routed traffic through Cloudflare to mask the pool server's address, and throttled CPU usage to avoid triggering resource-usage alerts. Beyond the compute theft, the attackers also gained access to private data stored in Amazon S3 buckets through credentials exposed in the dashboard environment.

Tesla's security team patched the issue within hours of RedLock's disclosure, and the company stated the impact was limited to internal engineering test systems with no customer data or vehicle safety compromised. But the incident became a defining case study in Kubernetes security, not because Tesla's cluster was uniquely vulnerable, but because the same misconfiguration was (and still is) widespread.

Default Configuration Problems

Several default behaviors and common deployment patterns make the dashboard dangerous out of the box:

  • Cluster-admin service account binding. Many installation tutorials, including some that circulated for years, instruct users to create a ClusterRoleBinding granting cluster-admin to the dashboard's service account. This gives the dashboard unrestricted access to every resource in every namespace. If the dashboard is compromised, the attacker has full cluster control.
  • Authentication disabled or bypassed. The dashboard supports a "skip" button on the login screen that was enabled by default in older versions. Clicking it grants access using the dashboard's own service account permissions. In environments where that service account has elevated privileges, this is equivalent to having no authentication at all.
  • Exposed via NodePort or LoadBalancer. Deploying the dashboard as a NodePort or LoadBalancer service makes it directly accessible on a routable IP address. Combined with missing authentication, this is the exact pattern that led to the Tesla breach. The dashboard should never be directly exposed to the internet.
  • No network segmentation. Without NetworkPolicies restricting traffic to the kubernetes-dashboard namespace, any pod in any namespace can reach the dashboard. A compromised application pod can pivot to the dashboard and escalate privileges.

Authentication Options

The Kubernetes Dashboard supports several authentication mechanisms. Choosing the right one depends on your organization's identity infrastructure and the dashboard's role in your operational workflows.

Token-Based Authentication

The simplest method is bearer token authentication tied to a Kubernetes ServiceAccount. You create a service account, bind it to a role with the appropriate permissions, generate a token, and paste it into the dashboard login screen.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-viewer
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-viewer-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-viewer
type: kubernetes.io/service-account-token
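The ServiceAccount by itself grants nothing; it needs a role binding before its token is useful. A minimal sketch binding it to the built-in view ClusterRole, which is read-only and notably excludes Secrets (the custom least-privilege role shown later in this guide is the better long-term option):

```yaml
# Grant the dashboard-viewer service account the built-in "view" ClusterRole.
# "view" allows read access to most namespaced resources but not Secrets.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-viewer-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: dashboard-viewer
    namespace: kubernetes-dashboard
```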

Generate and retrieve the token:

# For Kubernetes 1.24+ (including 1.35), create a long-lived token
# (the API server may cap the requested duration):
kubectl -n kubernetes-dashboard create token dashboard-viewer --duration=8760h

# Or retrieve from the Secret:
kubectl -n kubernetes-dashboard get secret dashboard-viewer-token \
  -o jsonpath='{.data.token}' | base64 --decode

Token-based auth works well for small teams and break-glass access. Its weakness is token management: tokens are static strings that can be shared, stolen, or forgotten. There is no audit trail of which human used a token, only that the associated service account was used. For production environments with more than a handful of users, OIDC or an auth proxy is a better choice.
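One mitigation worth building into your process is auditing each token's expiry before handing it out. A service account token is a JWT, so its claims segment can be decoded locally with standard tools. A sketch using a fabricated example token (never paste real tokens into shared terminals):

```shell
# Build a fabricated JWT so the example is self-contained; a real token would
# come from `kubectl create token`. JWT segments are base64url without padding.
HEADER=$(printf '%s' '{"alg":"RS256"}' | base64 | tr -d '=\n' | tr '/+' '_-')
CLAIMS=$(printf '%s' '{"exp":1798761600,"sub":"system:serviceaccount:kubernetes-dashboard:dashboard-viewer"}' \
  | base64 | tr -d '=\n' | tr '/+' '_-')
TOKEN="$HEADER.$CLAIMS.signature-goes-here"

# Extract the middle (claims) segment and restore base64 padding.
SEG=$(printf '%s' "$TOKEN" | cut -d. -f2)
while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="$SEG="; done

# Map base64url back to standard base64 and decode; the "exp" claim is the
# token's expiry as a Unix timestamp.
DECODED=$(printf '%s' "$SEG" | tr '_-' '/+' | base64 -d)
echo "$DECODED"
```

Checking exp against the current date tells you whether a token someone filed away months ago is still live.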

OIDC (OpenID Connect)

If your organization already uses an identity provider (Okta, Azure AD/Entra ID, Google Workspace, Keycloak), you can configure the Kubernetes API server to accept OIDC tokens. The dashboard can then use these tokens directly, tying dashboard access to your organization's single sign-on (SSO) system.

This approach requires configuring the API server with OIDC flags:

# API server flags for OIDC
--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=your-client-id.apps.googleusercontent.com
--oidc-username-claim=email
--oidc-groups-claim=groups

The advantage is identity-level audit logging: you know exactly which person performed each action. The disadvantage is complexity; you need a token refresh mechanism on the client side, and the API server configuration is cluster-wide.
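On the client side, teams commonly pair this with an exec credential plugin so kubectl (and dashboard login flows that accept pasted tokens) can obtain fresh OIDC tokens automatically. A hypothetical kubeconfig user entry using the kubelogin (kubectl oidc-login) plugin; issuer and client ID are placeholders:

```yaml
# kubeconfig fragment: delegate token acquisition and refresh to kubelogin.
# Install the plugin first (e.g. via krew: kubectl krew install oidc-login).
users:
  - name: oidc-user
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: kubectl
        args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://accounts.google.com
          - --oidc-client-id=your-client-id.apps.googleusercontent.com
```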

OAuth2 Proxy (Recommended for Most Teams)

The most practical approach for most organizations is to deploy the dashboard behind an authenticating reverse proxy such as oauth2-proxy or Pomerium. This keeps identity management outside the dashboard entirely. The proxy handles SSO, and the dashboard sees requests that have already been authenticated and authorized.

RBAC Configuration: Least Privilege Access

Regardless of which authentication method you choose, the RBAC configuration attached to dashboard users is where you enforce least privilege. The goal is simple: most people need read-only access, and nobody should get cluster-admin through the dashboard.

Read-Only ClusterRole

Create a ClusterRole that allows viewing resources across all namespaces but cannot modify anything:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dashboard-read-only
rules:
  - apiGroups: [""]
    resources:
      - pods
      - pods/log
      - services
      - endpoints
      - namespaces
      - nodes
      - events
      - configmaps
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - deployments
      - daemonsets
      - statefulsets
      - replicasets
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - ingresses
      - networkpolicies
    verbs: ["get", "list", "watch"]
  - apiGroups: ["gateway.networking.k8s.io"]
    resources:
      - gateways
      - httproutes
    verbs: ["get", "list", "watch"]

Note: this role deliberately excludes Secrets. Unless there is a specific operational need, dashboard users should not be able to view Secret values. Even though the dashboard masks Secret data by default, the underlying API call still retrieves the full base64-encoded content.

Namespace-Scoped Role for Development Teams

For teams that need write access within their own namespace but should not see other namespaces, use a namespace-scoped Role instead of a ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-team-alpha
  namespace: team-alpha
rules:
  - apiGroups: [""]
    resources:
      - pods
      - pods/log
      - pods/exec
      - services
      - configmaps
      - events
      - persistentvolumeclaims
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources:
      - deployments
      - statefulsets
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["apps"]
    resources:
      - deployments/scale
    verbs: ["update", "patch"]

Bind it to the team's service account:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-team-alpha-binding
  namespace: team-alpha
subjects:
  - kind: ServiceAccount
    name: dashboard-team-alpha
    namespace: kubernetes-dashboard
roleRef:
  kind: Role
  name: dashboard-team-alpha
  apiGroup: rbac.authorization.k8s.io

This pattern gives the team visibility and limited control within team-alpha while preventing any access to other namespaces. Scale this by creating one Role and RoleBinding per team. If you are running Kubernetes 1.35, make sure your RBAC policies account for any new resource types introduced in that release.
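Stamping out those per-team objects by hand gets tedious; a small generator keeps them consistent. A sketch with hypothetical team names; review the output before piping it to kubectl apply -f -:

```shell
# Generate one RoleBinding per team from a heredoc template.
# Team names here are hypothetical placeholders.
teams="team-alpha team-beta team-gamma"

MANIFESTS=$(
for team in $teams; do
cat <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-${team}-binding
  namespace: ${team}
subjects:
  - kind: ServiceAccount
    name: dashboard-${team}
    namespace: kubernetes-dashboard
roleRef:
  kind: Role
  name: dashboard-${team}
  apiGroup: rbac.authorization.k8s.io
EOF
done
)

printf '%s\n' "$MANIFESTS"
```

The same loop can emit the matching Role for each team; keeping both in one script makes it obvious when a team's permissions drift from the standard template.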

What to Never Do

Do not create a ClusterRoleBinding like this:

# DO NOT DO THIS -- grants full cluster-admin to the dashboard
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

This single manifest is the root cause of most Kubernetes Dashboard security incidents. If you find this in your cluster, remove it immediately and replace it with the least-privilege roles shown above.

Network Policies to Restrict Access

RBAC controls what an authenticated user can do. Network Policies control who can reach the dashboard in the first place. Even if your RBAC is perfect, a missing network policy means any compromised pod in the cluster can reach the dashboard service and attempt authentication attacks.

Default Deny for the Dashboard Namespace

Start by denying all ingress traffic to the kubernetes-dashboard namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: kubernetes-dashboard
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

This blocks all traffic into and out of the namespace. From here, you selectively open only the paths you need.

Allow Traffic Only from the Auth Proxy

If you are running the dashboard behind an auth proxy (recommended), allow ingress only from the proxy's namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-auth-proxy
  namespace: kubernetes-dashboard
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-dashboard
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: auth-proxy
        - podSelector:
            matchLabels:
              app: oauth2-proxy

Allow Dashboard Egress to the API Server

The dashboard needs to communicate with the Kubernetes API server. Allow that egress while blocking everything else:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
  namespace: kubernetes-dashboard
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: kubernetes-dashboard
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.96.0.1/32  # Replace with your API server ClusterIP
      ports:
        - protocol: TCP
          port: 443
    # Allow DNS resolution
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Replace 10.96.0.1/32 with the actual ClusterIP of your kubernetes service in the default namespace. You can find it with kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'. Be aware that some CNIs evaluate policy after kube-proxy has already translated the ClusterIP, in which case you must allow the API server's real endpoint addresses instead; check kubectl get endpoints kubernetes -n default and adjust the ipBlock and port (commonly 6443) accordingly.

These policies require a CNI plugin that supports NetworkPolicies (Calico, Cilium, Antrea, or similar). The default kubenet CNI does not enforce NetworkPolicies; the objects will be accepted by the API server but have no effect. For more on Kubernetes networking options, see our Gateway API vs Ingress guide.

Deploying Behind an Auth Proxy

An identity-aware proxy is the strongest practical approach to dashboard security. It moves authentication and authorization out of the dashboard entirely, placing it in a dedicated component that integrates with your organization's SSO. The dashboard becomes an internal-only service that only receives pre-authenticated requests.

oauth2-proxy Deployment

oauth2-proxy is a widely adopted reverse proxy that handles OAuth2/OIDC flows with providers like Google, GitHub, Azure AD, Okta, and Keycloak. Here is a deployment pattern for fronting the Kubernetes Dashboard:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: auth-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
        - name: oauth2-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.7.1
          args:
            - --provider=oidc
            - --oidc-issuer-url=https://accounts.google.com
            - --client-id=$(CLIENT_ID)
            - --client-secret=$(CLIENT_SECRET)
            - --cookie-secret=$(COOKIE_SECRET)
            - --email-domain=yourcompany.com
            - --upstream=https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
            - --http-address=0.0.0.0:4180
            - --cookie-secure=true
            - --cookie-httponly=true
            - --cookie-samesite=lax
            - --set-xauthrequest=true
            - --pass-access-token=true
            - --skip-provider-button=true
            - --silence-ping-logs=true
          envFrom:
            - secretRef:
                name: oauth2-proxy-secrets
          ports:
            - containerPort: 4180
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ping
              port: 4180
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 4180
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]

Key configuration points:

  • --email-domain=yourcompany.com restricts access to users with email addresses on your corporate domain.
  • --cookie-secure=true and --cookie-httponly=true protect the session cookie from interception and JavaScript access.
  • --pass-access-token=true forwards the OIDC token to the upstream dashboard, which can use it for Kubernetes API authentication if the API server is configured for OIDC.
  • The securityContext block follows container security best practices: non-root user, read-only filesystem, no privilege escalation, and all Linux capabilities dropped.
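The Deployment reads CLIENT_ID, CLIENT_SECRET, and COOKIE_SECRET from the oauth2-proxy-secrets Secret via envFrom. A sketch for generating a compliant cookie secret and creating that Secret; the key names are assumptions matching the $(...) env var references above, and the kubectl step (shown as a comment) requires a live cluster:

```shell
# oauth2-proxy requires a 16-, 24-, or 32-byte cookie secret; a base64-encoded
# 32-byte value is accepted. Use URL-safe characters to avoid quoting issues.
COOKIE_SECRET=$(head -c 32 /dev/urandom | base64 | tr -d '\n' | tr -- '+/' '-_')
echo "$COOKIE_SECRET"

# Then create the Secret the Deployment references (placeholders for the
# OAuth client credentials from your identity provider):
# kubectl -n auth-proxy create secret generic oauth2-proxy-secrets \
#   --from-literal=COOKIE_SECRET="$COOKIE_SECRET" \
#   --from-literal=CLIENT_ID='<your-oauth-client-id>' \
#   --from-literal=CLIENT_SECRET='<your-oauth-client-secret>'
```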

Exposing Through Gateway API

Regardless of which proxy you use, expose the proxy (not the dashboard) through your ingress layer. If you are using the Gateway API, route to the oauth2-proxy service:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: dashboard-route
  namespace: auth-proxy
spec:
  parentRefs:
    - name: internal-gateway
      namespace: gateway-system
  hostnames:
    - "dashboard.internal.yourcompany.com"
  rules:
    - backendRefs:
        - name: oauth2-proxy
          port: 4180

Use an internal-only gateway class so the dashboard is only accessible from your corporate network or VPN. Never attach it to a public-facing gateway.
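If you do not already have such a gateway, its shape looks roughly like this; the gateway class name and certificate Secret are placeholders for whatever your environment provides:

```yaml
# Sketch of an internal-only Gateway that accepts routes only from the
# auth-proxy namespace. "internal" and "internal-wildcard-cert" are
# placeholders, not real resource names.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: internal-gateway
  namespace: gateway-system
spec:
  gatewayClassName: internal   # an internal-only class from your provider
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.internal.yourcompany.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: internal-wildcard-cert
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              kubernetes.io/metadata.name: auth-proxy
```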

Dashboard Versions and the Archival

For teams still running the Kubernetes Dashboard, here is the version landscape as of early 2026:

  • Helm chart v7.x (latest: v7.14.0), the final series before archival. This introduced a new architecture using cert-manager and nginx-ingress-controller as default dependencies. Installation is Helm-only; manifest-based installation was dropped in v7.0.0.
  • Dashboard application v3.x, the underlying application version that ships with Helm chart v7.x. This is the version that runs inside the container images.
  • Compatibility: Helm chart v7.x supports Kubernetes 1.35 (Timbernetes, released December 2025). Earlier versions may work but are not tested against 1.35's API changes.
  • Archived January 21, 2026: The repository was moved to kubernetes-retired/dashboard. No further releases will be made. Any CVEs discovered after this date will not be patched.

If you are installing the dashboard for the first time in 2026, the Helm-based installation is the only supported method:

# Add the repo (still accessible despite archival)
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update

# Install with dependencies disabled if you already have cert-manager and nginx
helm upgrade --install kubernetes-dashboard \
  kubernetes-dashboard/kubernetes-dashboard \
  --create-namespace \
  --namespace kubernetes-dashboard \
  --set nginx.enabled=false \
  --set cert-manager.enabled=false \
  --set app.ingress.enabled=false

Disable the built-in ingress (app.ingress.enabled=false) because you will route traffic through your auth proxy instead.

Alternatives to the Kubernetes Dashboard

Given the archival, teams should evaluate modern alternatives. Here are the primary options, each suited to different use cases.

Headlamp (Official Successor)

Headlamp is now the officially recommended replacement by Kubernetes SIG UI. It is a CNCF Sandbox project, originally developed by Kinvolk (now part of Microsoft), and was formally adopted under SIG UI in 2025.

Key advantages over the legacy dashboard:

  • Native OIDC support: integrates directly with identity providers without needing an external auth proxy (though you can still use one).
  • Plugin architecture: extensible through plugins, allowing teams to add custom views, actions, and integrations.
  • Desktop and in-cluster modes: runs as a desktop application (using local kubeconfig), as an in-cluster web application, or both.
  • Fine-grained RBAC: respects Kubernetes RBAC natively, showing users only what their permissions allow.
  • Active maintenance: regular releases, an active contributor community, and backing from the Kubernetes project itself.

For most teams migrating from the Kubernetes Dashboard, Headlamp is the natural first choice.

k9s (Terminal UI)

k9s is a terminal-based Kubernetes UI that provides a fast, keyboard-driven interface for cluster management. It is the tool of choice for platform engineers and SREs who live in the terminal.

  • Security advantage: k9s runs locally, using the user's kubeconfig. There is no in-cluster web service to attack. The dashboard's entire attack surface disappears.
  • Minimal footprint: single binary, no server-side components, minimal memory usage.
  • Drawback: requires terminal access and comfort with keyboard navigation. Not suitable as a self-service tool for developers who primarily use web interfaces.

Lens (Desktop Application)

Lens is a desktop application that provides a rich graphical interface for Kubernetes cluster management. Note that the open-source OpenLens project is no longer actively maintained; the current Lens product is developed and maintained by Mirantis.

  • Security advantage: like k9s, Lens runs locally and uses kubeconfig for authentication. No in-cluster web surface.
  • Rich feature set: built-in metrics, Helm chart management, and an extension ecosystem.
  • Drawback: the free tier has limitations, and the commercial licensing model has changed several times.

When to Use What

  • You need a web-based dashboard for mixed-skill teams: use Headlamp.
  • Your team is all SREs and platform engineers: use k9s.
  • You want a desktop GUI with metrics integration: evaluate Lens.
  • You are running the legacy dashboard in production today: harden it using this guide while planning your migration.

Step-by-Step Hardening Checklist

Whether you are securing an existing dashboard deployment or performing a final audit before migrating away, work through this checklist systematically.

1. Remove Cluster-Admin Bindings

Search for any ClusterRoleBinding that grants cluster-admin to the dashboard's service account:

kubectl get clusterrolebindings -o json | \
  jq -r '.items[]
    | select(.roleRef.name == "cluster-admin")
    | select(any(.subjects[]?; .kind == "ServiceAccount" and .namespace == "kubernetes-dashboard"))
    | .metadata.name'

Delete any results and replace them with the least-privilege ClusterRole and RoleBindings shown above.

2. Disable the Skip Button

Ensure skip login is not enabled. The skip button only appears when the dashboard is started with the --enable-skip-login flag; it is off by default in recent versions, but verify that your deployment's container args do not set it.

3. Enforce HTTPS

The dashboard should only serve over TLS. In Helm chart v7.x, cert-manager handles certificate provisioning by default. Never run the dashboard over plain HTTP, even internally.

4. Deploy Behind an Auth Proxy

Follow the oauth2-proxy or Pomerium deployment outlined above. The dashboard's Service should be ClusterIP type only, not NodePort, not LoadBalancer:

kubectl -n kubernetes-dashboard get svc

# If it is NodePort or LoadBalancer, patch it:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "ClusterIP"}}'

5. Apply Network Policies

Deploy the three NetworkPolicies: default deny, allow from auth proxy, and allow API server egress.

6. Restrict Dashboard Namespace Access

Ensure that only platform administrators can manage resources in the kubernetes-dashboard namespace.

7. Enable Audit Logging

Configure the Kubernetes API server's audit policy to log all requests from the dashboard's service accounts.
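A minimal audit policy sketch. It relies on the fact that every service account in a namespace automatically belongs to the group system:serviceaccounts:&lt;namespace&gt;; the final catch-all rule is shown as None for brevity and should be merged with whatever baseline policy you already run:

```yaml
# Audit policy sketch: record full request and response bodies for any request
# made by a service account in the kubernetes-dashboard namespace. Wire it up
# with --audit-policy-file and --audit-log-path on the API server.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    userGroups: ["system:serviceaccounts:kubernetes-dashboard"]
  # Everything else: shown as None here for brevity; merge with your
  # existing cluster-wide audit rules instead of dropping them.
  - level: None
```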

8. Set Resource Limits

Prevent the dashboard from consuming excessive cluster resources (which could be a sign of abuse or a compromised container).
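One way to enforce this without editing the chart's Deployment is a LimitRange in the dashboard namespace, which applies defaults to containers that omit their own requests and limits. The values below are illustrative, not tuned recommendations:

```yaml
# Default CPU/memory requests and limits for containers in the
# kubernetes-dashboard namespace that do not declare their own.
apiVersion: v1
kind: LimitRange
metadata:
  name: dashboard-limits
  namespace: kubernetes-dashboard
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 100m
        memory: 200Mi
      default:
        cpu: 250m
        memory: 400Mi
```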

9. Run as Non-Root with Read-Only Filesystem

The dashboard container should run as a non-root user with a read-only root filesystem. In Helm chart v7.x, this is the default, but verify:

kubectl -n kubernetes-dashboard get deployment kubernetes-dashboard -o json | \
  jq '.spec.template.spec.containers[0].securityContext'

Expected output should include "runAsNonRoot": true, "readOnlyRootFilesystem": true, and "allowPrivilegeEscalation": false.

10. Plan Your Migration

The dashboard is archived and will receive no further security patches. Set a concrete migration deadline:

  1. Evaluate Headlamp, k9s, and Lens against your team's needs.
  2. Deploy the replacement in a staging environment.
  3. Migrate users in phases, starting with teams that have the lightest dashboard usage.
  4. Decommission the legacy dashboard and remove all associated RBAC, Services, and Deployments.
  5. Delete the kubernetes-dashboard namespace entirely once migration is confirmed complete.

Follow the Kubernetes upgrade checklist to ensure your replacement tooling is compatible with your target cluster version before cutting over.

Summary

The Kubernetes Dashboard has been a valuable tool for cluster management, but its security track record and recent archival make it a liability in production environments. If you are still running it, the hardening steps in this guide (least-privilege RBAC, network policies, auth proxy deployment, audit logging, and container security) will reduce your attack surface to the minimum practical level.

But hardening is a stopgap. The dashboard will not receive security patches going forward. The right long-term move is migrating to Headlamp (for web-based access), k9s (for terminal-based workflows), or another actively maintained tool that fits your team's operational model. Start that migration now, and use this guide to keep your existing deployment safe until the cutover is complete.
