
Rahul Singh

Posted on • Originally published at aicodereview.cc

How to Deploy SonarQube on Kubernetes with Helm

Deploying SonarQube on Kubernetes gives your team a scalable, resilient code quality platform that fits naturally into a cloud-native stack. Instead of managing a bare VM or relying on Docker Compose, you get self-healing pods, declarative configuration, rolling upgrades, and integration with your existing ingress controllers and certificate management.

This guide walks through the entire Helm-based deployment from scratch: adding the official chart, writing a production-ready values.yaml, connecting to PostgreSQL, configuring persistent volumes, setting up an ingress, applying resource limits, and integrating quality gates into CI/CD pipelines running inside the cluster. By the end, you will have a working SonarQube instance that is ready for your team to use.

If you would rather start with a simpler setup, see our guides on SonarQube with Docker and SonarQube Docker Compose for approaches that do not require a Kubernetes cluster. For CI/CD integration details, the SonarQube GitHub Actions guide covers PR decoration and pipeline scanning in depth.

SonarQube screenshot

Prerequisites

Before you begin, make sure you have the following ready:

  • A Kubernetes cluster (1.25 or later) - any distribution works: EKS, GKE, AKS, k3s, or a local kind/minikube cluster
  • kubectl configured to communicate with your cluster
  • Helm 3.10 or later installed locally (helm version to verify)
  • A StorageClass that supports dynamic provisioning (check with kubectl get storageclass)
  • An Ingress controller installed if you want to expose SonarQube via a hostname (Nginx ingress is the most common choice)
  • At least 4 GB of allocatable RAM on the node(s) where SonarQube will run

On every Kubernetes node that might host the SonarQube pod, the Linux kernel parameter vm.max_map_count must be at least 524288. SonarQube embeds Elasticsearch, which refuses to start without this setting. You can handle this via an init container in the Helm chart (covered in the configuration section below) or by applying it at the node level using a DaemonSet.

Step 1 - Add the SonarSource Helm repository

SonarSource publishes an official Helm chart for SonarQube. Add the repository and update your local chart index:

helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
helm repo update

Verify the chart is available:

helm search repo sonarqube/sonarqube

You should see output listing the chart version and the app version (the SonarQube version it deploys). To see all available chart versions:

helm search repo sonarqube/sonarqube --versions

Create a namespace for SonarQube to keep its resources separate from other workloads:

kubectl create namespace sonarqube

Step 2 - Create your values.yaml

Rather than installing with inline --set flags, define all configuration in a values.yaml file. This gives you a repeatable, version-controlled deployment. Start by pulling the chart's default values as a reference:

helm show values sonarqube/sonarqube > sonarqube-defaults.yaml

Now create your own values.yaml with the settings that matter for your deployment. Here is a production-oriented starting point covering all the key areas:

# values.yaml - SonarQube on Kubernetes

# Edition: community, developer, enterprise, datacenter
edition: "community"

# Image configuration
image:
  repository: sonarqube
  tag: lts-community
  pullPolicy: IfNotPresent

# Replica count (Data Center Edition supports >1, others must stay at 1)
replicaCount: 1

# -------------------------------------------------------
# PostgreSQL - use an external database for production
# -------------------------------------------------------
postgresql:
  enabled: false  # Disable the bundled subchart for production

# External database connection
jdbcOverwrite:
  enable: true
  jdbcUrl: "jdbc:postgresql://your-postgres-host:5432/sonarqube"
  jdbcUsername: "sonar"
  jdbcSecretName: "sonarqube-db-secret"
  jdbcSecretPasswordKey: "jdbc-password"

# -------------------------------------------------------
# Persistence
# -------------------------------------------------------
persistence:
  enabled: true
  storageClass: "gp3"   # Replace with your cluster's StorageClass name
  size: 20Gi
  accessMode: ReadWriteOnce

# -------------------------------------------------------
# Resource limits
# -------------------------------------------------------
resources:
  requests:
    cpu: "500m"
    memory: "2Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"

# -------------------------------------------------------
# JVM tuning
# -------------------------------------------------------
env:
  - name: SONAR_WEB_JAVAOPTS
    value: "-Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError"
  - name: SONAR_CE_JAVAOPTS
    value: "-Xmx1024m -Xms128m -XX:+HeapDumpOnOutOfMemoryError"
  - name: SONAR_SEARCH_JAVAOPTS
    value: "-Xmx512m -Xms512m -XX:+HeapDumpOnOutOfMemoryError"

# -------------------------------------------------------
# Init container to set vm.max_map_count
# -------------------------------------------------------
initSysctl:
  enabled: true
  vmMaxMapCount: 524288
  securityContext:
    privileged: true

# -------------------------------------------------------
# Ingress
# -------------------------------------------------------
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - name: sonar.yourcompany.com
      path: /
  tls:
    - secretName: sonarqube-tls
      hosts:
        - sonar.yourcompany.com
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

# -------------------------------------------------------
# Service
# -------------------------------------------------------
service:
  type: ClusterIP
  port: 9000

# -------------------------------------------------------
# Security context
# -------------------------------------------------------
securityContext:
  fsGroup: 1000

containerSecurityContext:
  runAsUser: 1000
  runAsNonRoot: true

# -------------------------------------------------------
# Pod scheduling
# -------------------------------------------------------
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux

# -------------------------------------------------------
# Liveness and readiness probes
# -------------------------------------------------------
livenessProbe:
  initialDelaySeconds: 120
  periodSeconds: 30
  failureThreshold: 6

readinessProbe:
  initialDelaySeconds: 60
  periodSeconds: 10
  failureThreshold: 6

This file handles the core scenarios: external PostgreSQL via a Kubernetes Secret, persistent storage, resource limits, JVM tuning, the sysctl init container, and TLS ingress with cert-manager. Walk through each section below to understand what each block does and how to adapt it for your environment.

Step 3 - Set up PostgreSQL and the database Secret

For production deployments, use an external PostgreSQL instance rather than the bundled subchart. The subchart is convenient for development but lacks production features like automated backups, point-in-time recovery, and connection pooling.

Good options for managed PostgreSQL:

  • Amazon RDS for PostgreSQL (15 or 16) on EKS
  • Cloud SQL for PostgreSQL on GKE
  • Azure Database for PostgreSQL on AKS
  • A PostgreSQL Operator (CloudNativePG or Zalando) running inside the cluster

Create the database and user in PostgreSQL before installing the chart:

CREATE DATABASE sonarqube;
CREATE USER sonar WITH ENCRYPTED PASSWORD 'a_strong_password_here';
GRANT ALL PRIVILEGES ON DATABASE sonarqube TO sonar;
ALTER DATABASE sonarqube OWNER TO sonar;

Store the database password as a Kubernetes Secret so it is not hardcoded in values.yaml:

kubectl create secret generic sonarqube-db-secret \
  --namespace sonarqube \
  --from-literal=jdbc-password='a_strong_password_here'

The jdbcOverwrite section in values.yaml references this Secret by name (sonarqube-db-secret) and key (jdbc-password). Helm will mount the password into the SonarQube pod as an environment variable without it appearing in plaintext in any Kubernetes resource.
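If you keep manifests in Git, the same Secret can be created declaratively. Here is a sketch using stringData, which lets Kubernetes handle the base64 encoding; the password value is a placeholder:

```yaml
# sonarqube-db-secret.yaml - declarative equivalent of the kubectl command above
apiVersion: v1
kind: Secret
metadata:
  name: sonarqube-db-secret
  namespace: sonarqube
type: Opaque
stringData:
  jdbc-password: a_strong_password_here  # placeholder - inject via a secret manager in practice
```

Apply it with kubectl apply -f sonarqube-db-secret.yaml. Avoid committing real passwords to Git; tools like Sealed Secrets or External Secrets Operator keep the plaintext out of the repository.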

If you want to use the bundled PostgreSQL subchart for a development cluster (where data loss is acceptable), replace the PostgreSQL block in values.yaml with:

postgresql:
  enabled: true
  postgresqlUsername: sonar
  postgresqlPassword: sonar_dev_password
  postgresqlDatabase: sonarqube
  persistence:
    enabled: true
    size: 10Gi

Do not use this in production. The subchart does not configure backups and a single pod failure can result in data loss.

Step 4 - Configure persistent volumes

SonarQube needs persistent storage for its data directory (where Elasticsearch indices live), its extensions directory (installed plugins), and its logs. The Helm chart creates a single PersistentVolumeClaim mapped to /opt/sonarqube/data by default, with extensions and logs also under /opt/sonarqube.

The key settings in values.yaml are:

persistence:
  enabled: true
  storageClass: "gp3"      # Must match an existing StorageClass in your cluster
  size: 20Gi               # 20 GB minimum for production; scale up for many projects
  accessMode: ReadWriteOnce

To see what StorageClasses are available in your cluster:

kubectl get storageclass

On AWS EKS with the EBS CSI driver, gp3 is the right choice. On GKE, use standard-rwo. On AKS, use managed-premium. On a local cluster, standard or local-path (from k3s) works for development.
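If your cluster does not already define a gp3 class, here is a minimal sketch of one for the AWS EBS CSI driver (the name and parameters are illustrative; check your driver's documentation):

```yaml
# gp3-storageclass.yaml - example for the AWS EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer  # bind the volume in the same AZ as the scheduled pod
allowVolumeExpansion: true               # lets you grow the PVC later without recreating it
```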

If you need a larger data directory than the chart's default layout allows, you can define explicit PersistentVolumeClaims and use extraVolumes and extraVolumeMounts in values.yaml to mount them at specific paths:

extraVolumes:
  - name: sonarqube-extensions
    persistentVolumeClaim:
      claimName: sonarqube-extensions-pvc

extraVolumeMounts:
  - name: sonarqube-extensions
    mountPath: /opt/sonarqube/extensions

Create the PVC separately in that case:

# extensions-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-extensions-pvc
  namespace: sonarqube
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 5Gi

kubectl apply -f extensions-pvc.yaml

Step 5 - Install SonarQube with Helm

With values.yaml ready and the database Secret created, install the chart:

helm install sonarqube sonarqube/sonarqube \
  --namespace sonarqube \
  --values values.yaml \
  --timeout 10m

The --timeout 10m flag is important. SonarQube takes 90 to 180 seconds to initialize on first boot because it runs database migrations and starts Elasticsearch. Without a longer timeout, Helm may report a failure even though the pod is still starting.

Watch the pod come up:

kubectl get pods -n sonarqube -w

Once the pod reaches Running status, check the logs to confirm SonarQube is operational:

kubectl logs -n sonarqube -l app=sonarqube -f

Look for the line SonarQube is operational in the output. After that, the readiness probe will start passing and Kubernetes will mark the pod as ready.

To verify the deployment and service:

kubectl get all -n sonarqube
kubectl describe ingress -n sonarqube

If you configured an ingress, SonarQube should be accessible at https://sonar.yourcompany.com once DNS resolves and cert-manager has issued the TLS certificate.

For local testing without an ingress, port-forward directly to the pod:

kubectl port-forward -n sonarqube svc/sonarqube-sonarqube 9000:9000

Then open http://localhost:9000 and log in with admin/admin. SonarQube will prompt you to change the password immediately.

Step 6 - Ingress and TLS configuration

The values.yaml ingress section configures an Nginx ingress resource with TLS. Here is what each annotation does:

ingress:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  • proxy-body-size: "64m" - SonarQube scanner uploads analysis reports to the server. The default Nginx limit of 1 MB is too small for large codebases. 64 MB handles most projects comfortably; increase to 128 MB or higher for monorepos.
  • proxy-read-timeout: "600" - SonarQube's compute engine processes analysis reports asynchronously, but the initial upload can take time. A 10-minute timeout prevents Nginx from cutting off large analysis uploads.
  • cert-manager.io/cluster-issuer - If you have cert-manager installed with a Let's Encrypt ClusterIssuer named letsencrypt-prod, cert-manager will automatically provision and renew a TLS certificate for your hostname.
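If you have not yet created the letsencrypt-prod ClusterIssuer referenced by that annotation, a typical HTTP-01 issuer looks roughly like this (the email and issuer name are placeholders; assumes cert-manager is installed and an nginx ingress class exists):

```yaml
# letsencrypt-clusterissuer.yaml - ACME HTTP-01 issuer for cert-manager
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@yourcompany.com            # placeholder - expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # cert-manager stores the ACME account key here
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```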

After installing cert-manager and creating a ClusterIssuer, the certificate is issued automatically. Check its status:

kubectl get certificate -n sonarqube
kubectl describe certificate sonarqube-tls -n sonarqube

If you are using a pre-existing TLS certificate instead of cert-manager, create the Secret manually:

kubectl create secret tls sonarqube-tls \
  --namespace sonarqube \
  --cert=path/to/fullchain.pem \
  --key=path/to/privkey.pem

After the ingress is working, update SonarQube's base URL so it generates correct links in emails and PR comments. In the Helm values, add:

sonarProperties:
  sonar.core.serverBaseURL: "https://sonar.yourcompany.com"

Then apply the change with a Helm upgrade (covered in the production tips section).

Step 7 - Resource limits and production tuning

Getting resource limits right is critical for stable SonarQube operation on Kubernetes. Too low and the pod gets OOMKilled; too high and you waste cluster resources.

Memory breakdown

SonarQube's memory consumption comes from three JVM processes:

  • Web server - 512 MB heap. Handles HTTP requests and the UI.
  • Compute engine - 1024 MB heap. Processes scanner reports; increase for large repositories.
  • Elasticsearch - 512 MB heap. Stores analysis indices; keep the heap at about half the memory available to Elasticsearch.

The total JVM heap is around 2 GB. Add overhead for the JVM itself, OS processes, and Kubernetes overhead, and you arrive at the 4 GB container memory limit in values.yaml.

For Kubernetes resource requests, set them lower than the limits to allow the scheduler some flexibility:

resources:
  requests:
    cpu: "500m"
    memory: "2Gi"
  limits:
    cpu: "2000m"
    memory: "4Gi"

Pod disruption budget

For production clusters where you run planned maintenance or node upgrades, add a PodDisruptionBudget to prevent the SonarQube pod from being evicted during busy periods:

# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sonarqube-pdb
  namespace: sonarqube
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: sonarqube

kubectl apply -f pdb.yaml

Since SonarQube Community Edition runs as a single replica, this PDB blocks voluntary eviction of the lone pod: a kubectl drain will stall instead of silently killing SonarQube mid-analysis, giving you the chance to move the pod deliberately (for example, by deleting it during a quiet window so the Deployment reschedules it on another node).

Horizontal Pod Autoscaler

SonarQube Community and Developer editions do not support running multiple replicas. Do not configure an HPA for these editions - it will not work as expected. The Data Center Edition supports horizontal scaling, but that is a separate architecture requiring a shared NFS or object storage backend.

Step 8 - Quality gates and CI/CD integration

Deploying SonarQube on Kubernetes is the infrastructure side of the story. The value comes from running analysis as part of your CI/CD pipelines and enforcing quality gates on every pull request.

Creating a project and token

After SonarQube is running, create a project and generate an authentication token:

  1. Log in at https://sonar.yourcompany.com
  2. Navigate to Projects and click Create Project
  3. Choose Manually and enter your project key and name
  4. Under Set Up, select With Jenkins, GitHub Actions, or Locally depending on your CI
  5. Generate a User Token (or Project Token for tighter scope) and store it as a Kubernetes Secret or CI secret

For GitHub Actions pipelines, store the token and SonarQube URL as repository secrets. For pipelines running as Kubernetes Jobs, store them as Secrets and mount them into the job pod.

For a detailed GitHub Actions workflow, see our SonarQube GitHub Actions guide.
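As a minimal sketch, a workflow using those repository secrets might look like this (the action version and secret names are assumptions; adjust them to your repository):

```yaml
# .github/workflows/sonar.yml - minimal PR analysis sketch
name: sonar-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history gives SonarQube accurate new-code detection
      - uses: sonarsource/sonarqube-scan-action@v4
        env:
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```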

Running sonar-scanner as a Kubernetes Job

If your CI system is also running on Kubernetes (Tekton, Argo Workflows, GitHub Actions self-hosted runners on k8s), you can run the scanner as a Job:

# scanner-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sonarqube-scan
  namespace: sonarqube
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: scanner
          image: sonarsource/sonar-scanner-cli:latest
          env:
            - name: SONAR_HOST_URL
              value: "http://sonarqube-sonarqube.sonarqube.svc.cluster.local:9000"
            - name: SONAR_TOKEN
              valueFrom:
                secretKeyRef:
                  name: sonarqube-scanner-token
                  key: token
          volumeMounts:
            - name: source
              mountPath: /usr/src
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: source-code-pvc

Inside the cluster, reach SonarQube via its internal service DNS name: sonarqube-sonarqube.sonarqube.svc.cluster.local. This avoids going through the ingress for internal traffic.
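The scanner expects a sonar-project.properties file at the root of the mounted source. A minimal example, where the key, name, and paths are placeholders for your own project:

```properties
# sonar-project.properties - at the root of the code mounted into /usr/src
sonar.projectKey=my-service
sonar.projectName=My Service
sonar.sources=src
sonar.tests=tests
sonar.sourceEncoding=UTF-8
```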

Enforcing quality gates

Quality gates are defined in the SonarQube web UI under Quality Gates. The default "Sonar way" gate is a reasonable starting point. You can customize it to require:

  • Coverage on new code above 80%
  • No new blocker or critical bugs
  • No new critical vulnerabilities
  • Duplication on new code below 3%

To enforce the quality gate in a CI pipeline, the scanner exits with a non-zero status code when the analysis fails the quality gate. Add the sonar.qualitygate.wait=true property to your sonar-project.properties (or pass it as -Dsonar.qualitygate.wait=true) to make the scanner block until the compute engine finishes processing and then fail the build if the gate fails.

sonar.qualitygate.wait=true
sonar.qualitygate.timeout=300

For pull request decoration (adding inline comments to PRs with SonarQube findings), configure the GitHub or GitLab integration under Administration, then DevOps Platform Integrations. Note that PR analysis and decoration require the Developer Edition or higher. You will need a GitHub App or personal access token with permission to write to pull request comments.

Step 9 - Upgrading SonarQube with Helm

Upgrading SonarQube on Kubernetes is simpler than on a VM because Helm manages the rollout and Kubernetes handles pod replacement. The general process is:

  1. Update the image tag in values.yaml:

image:
  tag: 10.8-community  # or whatever the new version is

  2. Run the Helm upgrade:

helm upgrade sonarqube sonarqube/sonarqube \
  --namespace sonarqube \
  --values values.yaml \
  --timeout 10m

  3. Watch the rollout:

kubectl rollout status deployment/sonarqube-sonarqube -n sonarqube

SonarQube handles database migrations automatically on startup. For major version upgrades (for example, 9.x to 10.x), read the SonarQube upgrade guide first and back up your database before running the upgrade.

To back up PostgreSQL before upgrading:

kubectl exec -n sonarqube \
  $(kubectl get pod -n sonarqube -l app=postgresql -o jsonpath='{.items[0].metadata.name}') \
  -- pg_dump -U sonar sonarqube > sonarqube_backup_$(date +%Y%m%d).sql

If you are using an external managed database, trigger a snapshot or point-in-time backup through your cloud provider's console before upgrading.
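For an in-cluster or self-managed PostgreSQL, the one-off pg_dump above can be scheduled as a nightly CronJob. This is a sketch; the database host, Secret name, and backup PVC are assumptions to adapt:

```yaml
# pg-backup-cronjob.yaml - nightly logical backup (illustrative names)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sonarqube-db-backup
  namespace: sonarqube
spec:
  schedule: "0 2 * * *"  # 02:00 every night
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: sonarqube-db-secret
                      key: jdbc-password
              command:
                - sh
                - -c
                - pg_dump -h your-postgres-host -U sonar sonarqube > /backup/sonarqube_$(date +%Y%m%d).sql
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: sonarqube-backup-pvc  # create separately
```

Prune old dumps and ship them off-cluster; a PVC on the same cluster is not a disaster-recovery plan.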

To also upgrade the Helm chart itself (separate from the SonarQube image version):

helm repo update
helm upgrade sonarqube sonarqube/sonarqube \
  --namespace sonarqube \
  --values values.yaml

Troubleshooting common Kubernetes deployment issues

Pod stays in Pending state

The pod cannot be scheduled because no node has enough resources or the PVC cannot be bound.

kubectl describe pod -n sonarqube -l app=sonarqube

Look for Insufficient memory, Insufficient cpu, or pod has unbound immediate PersistentVolumeClaims in the Events section. Reduce resource requests or fix the StorageClass name in values.yaml.

CrashLoopBackOff on startup

Usually caused by the vm.max_map_count issue or a database connection failure.

kubectl logs -n sonarqube -l app=sonarqube --previous

If you see max virtual memory areas vm.max_map_count in the logs, the init container is not running or is not privileged. Verify initSysctl.enabled: true in values.yaml and that the init container's securityContext.privileged is set to true.

If you see Connection refused or FATAL: password authentication failed, check the database Secret and verify the JDBC URL points to the correct host and port.

Init container fails

The sysctl init container requires privileged access. If your cluster has a PodSecurity admission policy that blocks privileged containers, either set the sonarqube namespace to privileged policy level or configure vm.max_map_count at the node level with a DaemonSet:

# node-sysctl-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: set-vm-max-map-count
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: set-vm-max-map-count
  template:
    metadata:
      labels:
        name: set-vm-max-map-count
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      initContainers:
        - name: set-sysctl
          image: alpine
          securityContext:
            privileged: true
          command: ["sysctl", "-w", "vm.max_map_count=524288"]
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9  # gcr.io/google-containers is deprecated
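The first option mentioned above, allowing privileged pods in the sonarqube namespace under Pod Security admission, comes down to labeling the namespace (shown here declaratively; weigh the security trade-off of relaxing the policy for the whole namespace):

```yaml
# Allow privileged pods in the sonarqube namespace (Pod Security admission)
apiVersion: v1
kind: Namespace
metadata:
  name: sonarqube
  labels:
    pod-security.kubernetes.io/enforce: privileged
```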

SonarQube is slow to process analyses

If the compute engine queue backs up, it is usually a memory constraint. Increase SONAR_CE_JAVAOPTS to give the compute engine more heap:

env:
  - name: SONAR_CE_JAVAOPTS
    value: "-Xmx2048m -Xms256m"

And increase the container memory limit accordingly:

resources:
  limits:
    memory: "6Gi"

Is self-hosting SonarQube on Kubernetes right for you?

Running SonarQube on Kubernetes is the right choice when you need data residency (analysis results stay on your infrastructure), air-gap support (no internet access), or deep customization of plugins and configuration. It also makes sense if your team already operates a Kubernetes cluster and has the expertise to maintain it.

That said, the operational overhead is real. You are responsible for capacity planning, node configuration, certificate management, database backups, upgrades, and troubleshooting. If those responsibilities are a poor fit for your team, consider the managed alternatives.

SonarQube Cloud (formerly SonarCloud) is the SaaS version of SonarQube - same analysis engine, zero infrastructure. It is free for public repositories and starts at $14/month for private ones. See our SonarQube alternatives guide and the SonarQube tool page for a full feature breakdown.

CodeAnt AI is a modern alternative worth evaluating if you want AI-powered code review alongside static analysis. At $24 to $40 per user per month, CodeAnt AI connects directly to your repositories, reviews pull requests automatically, and flags security issues without any infrastructure to manage. For teams that find themselves spending engineering time on SonarQube Kubernetes maintenance rather than shipping code, CodeAnt AI offers a compelling trade-off.

Conclusion

Deploying SonarQube on Kubernetes with Helm is a well-supported path for teams that want a self-hosted code quality platform in a cloud-native environment. The official SonarSource Helm chart handles most of the heavy lifting: Deployments, Services, PVCs, and Ingress resources are all managed through values.yaml.

Here is a summary of what this guide covered:

  • Adding the SonarSource Helm repo and creating a dedicated namespace
  • Writing a production values.yaml with PostgreSQL, persistence, resource limits, and ingress
  • Storing database credentials as Kubernetes Secrets
  • Configuring persistent volumes with the right StorageClass
  • Setting vm.max_map_count via the init container
  • Exposing SonarQube with Nginx ingress and TLS from cert-manager
  • Enforcing quality gates in CI/CD pipelines running on Kubernetes
  • Upgrading with helm upgrade and backing up PostgreSQL beforehand
  • Troubleshooting Pending pods, CrashLoopBackOff, and slow analysis queues

For teams that prefer a simpler starting point before committing to Kubernetes, the Docker Compose guide gets you to a production-grade setup with less infrastructure complexity. And for integrating your SonarQube instance - however it is deployed - into GitHub-based CI/CD workflows, the SonarQube GitHub Actions guide covers the full pipeline setup.


Frequently Asked Questions

How do I deploy SonarQube on Kubernetes?

The recommended way to deploy SonarQube on Kubernetes is with the official Helm chart maintained by SonarSource. Add the Helm repo with 'helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube', customize a values.yaml file, and install with 'helm install sonarqube sonarqube/sonarqube -f values.yaml'. The chart handles the Deployment, Service, PersistentVolumeClaims, and optional Ingress resources automatically.

Which Helm chart should I use for SonarQube?

Use the official SonarSource Helm chart from the sonarqube/sonarqube chart repository. The chart at 'sonarqube/sonarqube' deploys the Community, Developer, Enterprise, or Data Center editions depending on the image tag you specify in values.yaml. Avoid third-party charts - they are often outdated and miss critical configuration options added in recent SonarQube releases.

Does SonarQube work with Kubernetes built-in PostgreSQL?

Kubernetes does not ship a built-in PostgreSQL. You have two options: enable the bundled PostgreSQL subchart in the Helm chart values (fine for development, not recommended for production), or point SonarQube at an external PostgreSQL instance such as Amazon RDS, Cloud SQL, or a separately managed PostgreSQL StatefulSet. For production, always use an external, managed database with backups and automated failover.

What are the minimum resource requirements for SonarQube on Kubernetes?

SonarQube Community Edition requires at least 2 vCPUs and 4 GB of RAM for the SonarQube pod. The embedded Elasticsearch needs vm.max_map_count set to 524288 on every Kubernetes node it might be scheduled on. For production workloads analyzing multiple projects concurrently, allocate 4 vCPUs and 8 GB. Set resource requests and limits in values.yaml to ensure the scheduler places the pod on a node with sufficient capacity.

How do I configure persistent storage for SonarQube on Kubernetes?

The Helm chart creates PersistentVolumeClaims for SonarQube's data, extensions, and logs directories. Set the storageClass in values.yaml to match your cluster's available StorageClass (for example, gp3 on EKS or standard-rwo on GKE). Set persistence.size to at least 20Gi for production deployments. The PostgreSQL subchart also creates its own PVC for database storage.

How do I expose SonarQube externally on Kubernetes?

Enable ingress in values.yaml by setting ingress.enabled to true and providing your hostname. If your cluster uses an Nginx ingress controller, set ingress.ingressClassName to nginx. For TLS, reference a Kubernetes Secret containing your certificate in ingress.tls. Alternatively, change service.type to LoadBalancer to expose SonarQube directly via a cloud load balancer, though Ingress with cert-manager is the cleaner production approach.

How do I set the vm.max_map_count for SonarQube on Kubernetes?

SonarQube's embedded Elasticsearch requires vm.max_map_count of at least 524288 on the host node. In the Helm chart, enable the initSysctl init container by setting initSysctl.enabled to true in values.yaml. This runs a privileged init container that sets vm.max_map_count before the main SonarQube container starts. Alternatively, configure this at the node level using a DaemonSet that applies the sysctl setting to every node.

Can I run SonarQube on Kubernetes in production with the Community Edition?

Yes. The Community Edition is fully supported on Kubernetes and is free. The main limitations compared to paid editions are that you can only analyze a single branch per project - multi-branch and pull request analysis require the Developer Edition or higher - and some enterprise features are unavailable. For a single team or small organization, Community Edition on Kubernetes is a practical and cost-effective production setup.

How do I configure quality gates in a Kubernetes-based SonarQube deployment?

Quality gates are configured through the SonarQube web interface, not through Kubernetes resources. After deploying with Helm, log in, navigate to Quality Gates, and define your thresholds for coverage, duplication, bugs, and vulnerabilities. To enforce quality gates in CI/CD pipelines running on Kubernetes, configure your pipeline steps to call 'sonar-scanner' and fail the build if the SonarQube analysis does not pass the quality gate. The scanner polls SonarQube's API and returns a non-zero exit code on failure.

How do I upgrade SonarQube deployed with Helm?

Update the image tag in your values.yaml to the new SonarQube version, then run 'helm upgrade sonarqube sonarqube/sonarqube -f values.yaml'. SonarQube handles database migrations automatically on startup. For major version upgrades, check the SonarQube upgrade notes for any Helm chart version requirements and back up your database before running the upgrade command.

What is the difference between SonarQube Kubernetes deployment and SonarQube Cloud?

A Kubernetes deployment is self-managed - you control the infrastructure, data, and updates. SonarQube Cloud (formerly SonarCloud) is a fully managed SaaS platform from SonarSource with no infrastructure to operate. Cloud is free for public repositories and starts at $14 per month for private ones. Kubernetes gives you data residency, air-gap support, and full customization. Cloud gives you zero operational overhead and instant setup.

Are there managed alternatives to running SonarQube on Kubernetes?

Yes. SonarQube Cloud eliminates all Kubernetes infrastructure. CodeAnt AI at $24 to $40 per user per month provides AI-powered code review and static analysis with no infrastructure to manage. Both options are worth evaluating if your team spends significant time maintaining SonarQube's Kubernetes deployment rather than writing code.


