Journey with Apache Flink on AWS EKS

When building streaming pipelines, one of the biggest challenges isn't just processing the data; it's keeping jobs running reliably while data continuously flows. Apache Flink addresses this challenge effectively.

Discovering Flink

Apache Flink is a distributed stream-processing framework built for stateful, real-time data processing. Unlike batch-first frameworks, Flink treats streams as the core abstraction, with batch as just a special case.

Flink stands out due to its handling of state, time, and fault tolerance:

  • Exactly-once processing guarantees
  • Native state management for long-running jobs
  • Event-time processing with watermarks
  • Checkpointing that makes failures recoverable

This made Flink feel less like a tool and more like a platform for always-on systems.
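
Under the hood, most of these behaviors are driven by a handful of Flink configuration settings. Here is a minimal sketch in flink-conf.yaml style; the interval, state backend, and bucket path are illustrative examples, not production recommendations:

# Illustrative Flink configuration; values are examples only.
execution.checkpointing.interval: 60 s          # periodic checkpoints make failures recoverable
execution.checkpointing.mode: EXACTLY_ONCE      # exactly-once processing guarantees
state.backend: rocksdb                          # durable local state for long-running jobs
state.checkpoints.dir: s3://example-bucket/checkpoints   # hypothetical bucket
pipeline.auto-watermark-interval: 200 ms        # how often watermark generators emit watermarks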

Why We Chose Flink

Streaming pipelines evolve over time: logic changes, schemas expand, and performance tuning becomes necessary. Flink’s checkpoint-based execution model allowed us to:

  • Restart jobs without breaking downstream systems
  • Roll out logic or configuration changes safely
  • Treat streaming jobs as living systems, not one-off deployments

Its combination of stateful processing, exactly-once semantics, and safe evolution made Flink the clear choice for our production pipelines.

Running Flink on AWS EKS

Once we decided on Flink, we had to figure out where and how to run it reliably. AWS EKS provided:

  • A managed Kubernetes control plane
  • Native integration with AWS services
  • Consistent environments across dev, test, and prod

To make Flink truly Kubernetes-native, we adopted the Flink Kubernetes Operator.

Once installed via Helm, the operator adds the FlinkDeployment and FlinkSessionJob CRDs. From that point, our deployments became fully declarative: we define the desired state in YAML, and the operator continuously reconciles the cluster to match.

The operator:

  • Launches the JobManager pod (hosting the Flink UI)
  • Scales TaskManagers as needed
  • Configures networking, volumes, and service accounts
  • Manages job restarts, upgrades, and recovery

This drastically reduced operational overhead and made Flink cloud-native and production-ready.

Deploying Flink Jobs

In practice, we separate cluster management from job management:

FlinkDeployment defines the session cluster (image, resources, Flink config, EFS mounts)

FlinkSessionJob defines the actual streaming job (entrypoint, arguments, parallelism, upgrade mode)

Most of our deployments happen via Terraform, which renders the YAML templates and applies them with the kubernetes_manifest resource.
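
For context, a trimmed-down FlinkDeployment for the session cluster might look like this; the image tag, resource sizes, and checkpoint path are illustrative, while the name and namespace match the job example below:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-session-v2              # the cluster the session job attaches to
  namespace: flink
spec:
  image: flink:1.17                   # illustrative image tag
  flinkVersion: v1_17
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
    state.checkpoints.dir: "s3://example-bucket/checkpoints"   # hypothetical bucket
  serviceAccount: flink
  jobManager:
    resource:
      cpu: 1
      memory: "2048m"
  taskManager:
    resource:
      cpu: 2
      memory: "4096m"
  # No job section here: this is a session cluster, and
  # FlinkSessionJobs bind to it by deploymentName.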

Here’s a simplified example of a FlinkSessionJob:

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: flink-job-gg-pending-orders
  namespace: flink
spec:
  deploymentName: flink-session-v2
  job:
    entryClass: "org.apache.flink.client.python.PythonDriver"
    args:
      - "-py"
      - "/opt/flink/lib/src/s3_to_iceberg.py"
    parallelism: 1
    upgradeMode: stateless

💡 Tip: To rerun a job from the last checkpoint, switch upgradeMode to last-state.

Operating Jobs Safely

This is where Flink’s strength shines. For initial runs, we typically start jobs stateless. But for restarts, backfills, or upgrades, we rely on checkpoint-based recovery using upgradeMode: last-state.

This ensures:

  • Jobs resume from the latest successful checkpoint
  • Downstream systems remain stable
  • Minimal gaps or duplicates, even for CDC sources
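
Concretely, this is a one-line change to the job block from the earlier example:

job:
  entryClass: "org.apache.flink.client.python.PythonDriver"
  args:
    - "-py"
    - "/opt/flink/lib/src/s3_to_iceberg.py"
  parallelism: 1
  upgradeMode: last-state   # resume from the latest checkpoint instead of starting fresh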

Evolving Jobs Over Time

We handle changes carefully:

Configuration Changes (parallelism, resources, checkpoint tuning; see the sketch after this list):

  • Update the job or cluster spec
  • Apply via Terraform or kubectl
  • Operator restores state automatically
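
As a sketch of such a change, here is an illustrative FlinkDeployment fragment that tightens checkpointing and scales up TaskManager resources; the specific values are examples, not our production settings:

spec:
  flinkConfiguration:
    execution.checkpointing.interval: "30 s"    # checkpoint more frequently
    execution.checkpointing.timeout: "10 min"   # abort checkpoints that run too long
  taskManager:
    resource:
      cpu: 4              # more CPU per TaskManager
      memory: "8192m"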

Business Logic Changes (Python, SQL, JARs):

  • Push updated code to S3
  • Sync to EFS via AWS DataSync
  • Verify files in Flink containers
  • Perform a rolling, stateful upgrade

This process allows us to iterate safely, even on critical streaming pipelines.
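
The EFS step works because the code volume is mounted directly into the Flink pods. Here is a sketch of that mount as a podTemplate fragment on the FlinkDeployment, assuming a PersistentVolumeClaim backed by EFS; the claim name is hypothetical, while the mount path matches the job entrypoint above:

spec:
  podTemplate:
    spec:
      containers:
        - name: flink-main-container        # the operator's main container name
          volumeMounts:
            - name: job-code
              mountPath: /opt/flink/lib/src # where the job reads s3_to_iceberg.py
      volumes:
        - name: job-code
          persistentVolumeClaim:
            claimName: flink-efs-pvc        # hypothetical EFS-backed PVC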

Lessons Learned

From choosing Flink for its stateful streaming model, to running it on AWS EKS, and finally operating jobs safely with the Flink Kubernetes Operator, this journey has shaped how we build and maintain streaming pipelines.

Flink is more than a processing engine; it’s a platform for evolving, long-running data pipelines. And running it Kubernetes-native on AWS lets us balance operational safety, scalability, and flexibility.
