Migrating DolphinScheduler into K8s: A Field Report on Pitfalls and Lessons Learned from 900 Days of Qihoo 360’s Practice

👋 Hi, community, I’m Yuanpeng. Over the past 3 years, our team gradually migrated part of our scheduling workloads from Azkaban to DolphinScheduler and containerized them on K8s. Today I’m dumping every pitfall and lesson in one place—bookmark this!

About the Author

Yuanpeng Wang

Data Expert, Shanghai Qihoo Technology Co., Ltd.

Core member of Commercial SRE & Big-Data teams

Long-term responsible for DolphinScheduler production deployment and optimization, with deep expertise in containerization and big-data scheduling.

In our day-to-day big-data job orchestration, Apache DolphinScheduler has become one of our most critical tools. We used to run it on bare-metal (v3.1.9 still sat on physical machines), but that approach exposed gaps in elastic scaling, resource isolation, and operational efficiency. As the company’s cloud-native strategy accelerated, we finally upgraded DolphinScheduler to 3.2.2 in 2025 and partially migrated it to Kubernetes.

The motivation was crystal-clear: first, elastic scaling—K8s can spin up extra Worker pods at peak load; second, resource isolation so jobs don’t clobber each other; third, automated rollout & rollback, slashing maintenance costs; and finally, and most importantly, alignment with our cloud-native direction.

Image Build: From Source to Modules

The first step of the migration was image construction.


We prepared a base image containing Hadoop, Hive, Spark, Flink, Python, etc., then built DolphinScheduler’s base image on top, bundling re-compiled modules and the MySQL driver.

(Image: base image build)

Note: MySQL stores DolphinScheduler’s metadata, so the driver JAR must be symlinked into every module: dolphinscheduler-tools, dolphinscheduler-master, dolphinscheduler-worker, dolphinscheduler-api, and dolphinscheduler-alert-server.
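To make the layering concrete, here is a minimal sketch of what such a base-image Dockerfile can look like; the base-image name, package and driver versions, and the directory layout are illustrative placeholders rather than our exact build:

# Illustrative sketch: DolphinScheduler base image on top of an in-house big-data base image
FROM my.private.repo/bigdata-base:latest

# Unpack the (re-compiled) DolphinScheduler binary package
ADD apache-dolphinscheduler-3.2.2-bin.tar.gz /opt/
RUN ln -s /opt/apache-dolphinscheduler-3.2.2-bin /opt/dolphinscheduler

# Symlink the MySQL driver into every module's libs/ (directory names follow the 3.2.x binary-package layout)
COPY mysql-connector-j-8.0.33.jar /opt/drivers/
RUN for m in tools master-server worker-server api-server alert-server; do \
      ln -sf /opt/drivers/mysql-connector-j-8.0.33.jar /opt/dolphinscheduler/${m}/libs/; \
    done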

(Image: worker image build)

Module images are customized on top of the base DS image, mainly tweaking ports and configs. To minimize later changes, we kept the image names identical to the official ones. You can build a single module:

./build.sh worker-server

(Image: single-module build)

or batch-build everything:

./build-all.sh

(Image: batch build)

Typical headaches: huge base image → slow builds; refactored JARs not overwriting old ones; mismatched port configs & start scripts across modules. Overlook any of these and you’ll suffer later.

Issue → Fix
  • Huge base image, slow builds → Split common layers, use multi-stage build cache
  • MySQL driver not found → Symlink the driver into every module's lib/
  • Custom JARs not overriding → Add find -name "*.jar" -delete in build.sh
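The last fix in that list is worth spelling out: before copying freshly compiled JARs into a module's lib directory, delete the stale copies so the old version can never shadow the new one on the classpath. A simplified excerpt of what that looks like in build.sh (paths and artifact name are illustrative):

# Ensure rebuilt JARs actually replace the old ones (illustrative paths/artifact)
MODULE_LIB=/opt/dolphinscheduler/worker-server/libs

# Drop stale copies of the patched artifact first
find "$MODULE_LIB" -name "dolphinscheduler-task-api-*.jar" -delete

# Then copy in the freshly compiled JAR
cp dist/dolphinscheduler-task-api-3.2.2.jar "$MODULE_LIB/"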

Deployment: From Hand-Rolled YAML to Official Helm Chart

Early on we hand-wrote YAMLs—painful for config sprawl and upgrades. We switched to the official Helm chart for centralized configs and smoother upgrades.

Our K8s cluster runs v1.25. First create the namespace and pull the chart:

kubectl create ns dolphinscheduler
helm pull oci://registry-1.docker.io/apache/dolphinscheduler-helm --version 3.2.2
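helm pull saves the chart as a local .tgz named after the chart and version; unpack it and edit values.yaml before installing:

tar -xzf dolphinscheduler-helm-3.2.2.tgz   # file and directory names follow the chart name; adjust if yours differ
cd dolphinscheduler-helm
vim values.yaml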

values.yaml is where the dragons hide. Key snippets:

1. Image

image:
  registry: my.private.repo
  repository: dolphinscheduler
  tag: 3.2.2
  pullPolicy: IfNotPresent

💡 Pre-push utility images to your private repo to avoid network hiccups.
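For example, retagging and pushing a module image plus any utility images into the private registry (names follow the image: section above and are specific to our environment):

# Push a locally built module image to the private registry
docker tag dolphinscheduler-worker:3.2.2 my.private.repo/dolphinscheduler/dolphinscheduler-worker:3.2.2
docker push my.private.repo/dolphinscheduler/dolphinscheduler-worker:3.2.2

# Mirror helper images the chart may pull (busybox shown only as an example)
docker pull busybox:1.36
docker tag busybox:1.36 my.private.repo/library/busybox:1.36
docker push my.private.repo/library/busybox:1.36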

2. External MySQL

mysql:
  enabled: false   # disable embedded MySQL
externalMysql:
  host: mysql.prod.local
  port: 3306
  username: ds_user
  password: ds_password
  database: dolphinscheduler

💡 Always disable the built-in DB; prod uses external MySQL (future plan: migrate to PostgreSQL).
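The database and account referenced above must exist before the first install; creating them is plain MySQL administration (host, user, and password here are the placeholder values from values.yaml):

mysql -h mysql.prod.local -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8mb4 DEFAULT COLLATE utf8mb4_general_ci;
CREATE USER IF NOT EXISTS 'ds_user'@'%' IDENTIFIED BY 'ds_password';
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'ds_user'@'%';
FLUSH PRIVILEGES;
SQL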

3. LDAP Auth

ldap:
  enabled: true
  url: ldap://ldap.prod.local:389
  userDn: cn=admin,dc=company,dc=com
  password: ldap_password
  baseDn: dc=company,dc=com

💡 Single sign-on via corporate LDAP simplifies permission management.
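Before wiring LDAP into the chart, it's worth a quick bind test from any box with the OpenLDAP client tools installed (DN, password, and filter below are the placeholders from the snippet above):

ldapsearch -x -H ldap://ldap.prod.local:389 \
  -D "cn=admin,dc=company,dc=com" -w ldap_password \
  -b "dc=company,dc=com" "(uid=some.user)" dn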

4. Shared Storage

sharedStoragePersistence:
  enabled: true
  storageClassName: nfs-rwx
  size: 100Gi
  mountPath: /dolphinscheduler/shared

💡 storageClassName must support ReadWriteMany (RWX), or multiple Workers can’t share the same volume.

(Image: shared storage configuration)
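A simple way to confirm the storage class really supports RWX is to create a throwaway PVC and check that it binds (names here are arbitrary):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-probe
  namespace: dolphinscheduler
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-rwx
  resources:
    requests:
      storage: 1Gi
EOF

kubectl -n dolphinscheduler get pvc rwx-probe   # expect Bound (may stay Pending until first mount if the class uses WaitForFirstConsumer)
kubectl -n dolphinscheduler delete pvc rwx-probe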

5. HDFS

hdfs:
  defaultFS: hdfs://hdfs-nn:8020
  path: /dolphinscheduler
  rootUser: hdfs

💡 Ensure big-data paths like /opt/soft exist beforehand.
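Concretely, that means pre-creating the resource-center root on HDFS and double-checking that the client directories the image expects (in our case under /opt/soft) are really there; roughly:

# Resource-center root on HDFS, owned by the configured rootUser
hdfs dfs -mkdir -p /dolphinscheduler
hdfs dfs -chown hdfs:hdfs /dolphinscheduler

# Inside the worker image: clients where dolphinscheduler_env.sh expects them (layout is ours)
ls /opt/soft        # e.g. spark, hive, flink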

6. Zookeeper

zookeeper:
  enabled: false   # disable embedded ZK
externalZookeeper:
  quorum: zk1.prod.local:2181,zk2.prod.local:2181,zk3.prod.local:2181

💡 When using external ZK, disable the built-in one and verify version compatibility.
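With values.yaml settled, the rollout itself is a single command (release name and namespace are ours):

# Install or upgrade the release from the unpacked chart directory
helm upgrade --install dolphinscheduler . -n dolphinscheduler -f values.yaml

# Watch the pods come up
kubectl -n dolphinscheduler get pods -w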


Pitfalls & Maintenance Battles

We stepped on plenty of rakes:

  • Image issues

    • Base image too fat → slow CI
    • Module deps diverged → duplicate installs
    • MySQL driver path wrong → startup failures
    • Custom JARs not overwriting old ones → runtime exceptions
    • Port & script mismatches between modules
  • Helm values.yaml gotchas

    • sharedStoragePersistence.storageClassName must be RWX-capable
    • Storage size, mountPath, and config path indentation errors
    • Disable defaults you don’t need (e.g., built-in ZK) and mind version requirements
  • Upgrade & maintenance cost


    Every new DolphinScheduler release forces us to diff our custom patches, rebuild every module image, and re-test. Version drift in config keys makes upgrades and rollbacks fragile, stretching release cycles and burning team hours.


Roadmap & Thoughts

To cut long-term OPEX we’re standardizing:

  • Migrate metadata DB from MySQL → PostgreSQL
  • Move to vanilla community images instead of custom ones
  • Shift remaining prod workloads to K8s
  • Introduce full CI/CD with Prometheus + Grafana observability

K8s gives DolphinScheduler far better elasticity, isolation, and portability than bare metal ever could. Custom images and configs did hurt, but as we converge on community releases and standardized ops, pain will fade and velocity will rise.

Our end goal: a highly-available, easily-extensible, unified scheduling platform that truly unlocks cloud-native value. If you’re also considering moving your scheduler to K8s, hit the comments or join the DolphinScheduler community—let’s dig together!
