matthieucham for Stack Labs


Orchestrate Dataflow pipelines easily with GCP Workflows

Dataflow pipelines are rarely on their own. Most of the time, they are part of a larger process. For example: one pipeline collects events from the source into Bigtable, then a second pipeline computes aggregated data from Bigtable and stores it into BigQuery.

[Diagram: two chained Dataflow pipelines, from the event source to Bigtable, then from Bigtable to BigQuery]

Of course, each pipeline could be scheduled independently with Cloud Scheduler.
But if these pipelines need to be linked somehow, such as launching the second pipeline when the first is done, then orchestration is required.

Until recently, GCP had one tool in the box for this kind of purpose: Cloud Composer, a (slightly) managed Apache Airflow. Despite its rich functionality and broad community, this service has several drawbacks for the kind of simple orchestration I was after:

  • it's not fully integrated: you need to manage costly resources such as a GKE cluster and a CloudSQL instance

  • it pushes Python into your codebase; there is no other choice

  • any change in the setup (like, adding an environment variable) is painfully slow to propagate

  • the wide variety of operators in the ecosystem can lead to a poor separation of concerns between orchestration and business processes

And I won't even talk about the Airflow UI... (I've heard that some people like it)

Because of all this, orchestrating with Composer was overly difficult. Yet, as is often the case with the GCP platform, if you face too many difficulties doing something that should be simple, you're probably not doing it right. This proved true once again: Cloud Composer wasn't the right product for my need...

Enter GCP Workflows !

Workflows is a new service: it was promoted out of beta very recently. Luckily, it already offers most of the functionality needed to orchestrate GCP services' jobs, and to do it simply:

  • it is fully managed and serverless, which means you don't pay when you don't use it

  • it does only one job and does it well : orchestrating HTTP calls

  • all is configured in YAML files, whose syntax is short and easy to learn

  • the UI is neatly integrated and feels more "part of GCP" than Composer's (although there are still quite a few display bugs at the moment)

With this new product it becomes really easy to write a Workflow which chains multiple Dataflow jobs like in the diagram above.

A sample workflow for Dataflow jobs

Workflow files are written in YAML. The syntax is simple and straightforward:

    main:
      steps:
        - init:
            assign:
              - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
              - region: "europe-west1"
              - topic: "myTopic"
        - firstPipeline:
            call: LaunchDataflow
            args:
              project: ${project}
              region: ${region}
              template: "first"
            result: firstJobId
        - waitFirstDone:
            call: DataflowWaitUntilStatus
            args:
              project: ${project}
              region: ${region}
              jobId: ${firstJobId}
              status: "JOB_STATE_DONE"
        - secondPipeline:
            call: LaunchDataflow
            args:
              project: ${project}
              region: ${region}
              template: "second"
            result: secondJobId
        - waitSecondDone:
            call: DataflowWaitUntilStatus
            args:
              project: ${project}
              region: ${region}
              jobId: ${secondJobId}
              status: "JOB_STATE_DONE"
        - publish:
            call: googleapis.pubsub.v1.projects.topics.publish
            args:
              topic: ${"projects/" + project + "/topics/" + topic}
              body:
                messages:
                  - data: ${base64.encode(text.encode("{\"message\":\"workflow done\"}"))}

Let's break it down. The sample workflow has the following steps:

  • init: preprocessing stage, where workflow variables are initialized.
  • firstPipeline: Launch the first Dataflow job
  • waitFirstDone: Wait until the first Dataflow job is completed
  • secondPipeline: Launch the second Dataflow job
  • waitSecondDone: Wait until the second Dataflow job is completed
  • publish: Push a sample Pub/Sub notification at the end of the workflow
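The control flow above can be sketched as plain sequential calls. The helpers below are hypothetical stand-ins for the actual Dataflow and Pub/Sub interactions (in the real workflow these are HTTP calls made by the Workflows runtime); the sketch only makes the ordering explicit:

```python
# Sketch of the sample workflow's control flow, with stand-in helpers.

def launch_dataflow(template: str) -> str:
    """Stand-in for the LaunchDataflow subworkflow: returns a fake job id."""
    return f"job-{template}"

def wait_until_status(job_id: str, expected: str = "JOB_STATE_DONE") -> str:
    """Stand-in for DataflowWaitUntilStatus: pretends the job finished."""
    return expected

def orchestrate() -> list:
    """Run the five steps of the sample workflow in order."""
    trace = []
    first_job = launch_dataflow("first")            # firstPipeline
    trace.append(wait_until_status(first_job))      # waitFirstDone
    second_job = launch_dataflow("second")          # secondPipeline
    trace.append(wait_until_status(second_job))     # waitSecondDone
    trace.append("workflow done")                   # publish notification
    return trace
```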

As you may have noticed, firstPipeline and secondPipeline call a custom routine, a subworkflow, which is defined in the same file:

    LaunchDataflow:
      params: [project, region, template]
      steps:
        - launch:
            call: http.post
            args:
              url: ${"https://dataflow.googleapis.com/v1b3/projects/"+project+"/locations/"+region+"/flexTemplates:launch"}
              auth:
                type: OAuth2
              body:
                launchParameter:
                  jobName: ${"workflow-" + template}
                  environment:
                    numWorkers: 1
                    maxWorkers: 8
                  containerSpecGcsPath: ${template}
            result: dataflowResponse
            next: jobCreated
        - jobCreated:
            return: ${dataflowResponse.body.job.id}

This subworkflow calls the Dataflow REST API to launch a job (here, a flex template). With Workflows you can easily call any GCP service's API or any external HTTP endpoint.
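As an illustration of what that HTTP call looks like, here is a small Python helper that assembles the flexTemplates:launch URL and request body. The helper name is mine, and deriving the job name from the template's file name is just one plausible choice:

```python
def build_flex_template_launch_request(project: str, region: str, template: str):
    """Assemble the URL and JSON body of a Dataflow flexTemplates:launch call.

    `template` is the GCS path of the flex template's container spec.
    """
    url = (
        "https://dataflow.googleapis.com/v1b3/projects/"
        f"{project}/locations/{region}/flexTemplates:launch"
    )
    body = {
        "launchParameter": {
            # Derive a job name from the template file name (illustrative choice)
            "jobName": f"workflow-{template.rsplit('/', 1)[-1]}",
            "containerSpecGcsPath": template,
            "environment": {"numWorkers": 1, "maxWorkers": 8},
        }
    }
    return url, body
```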

Similarly, waitFirstDone and waitSecondDone call another subworkflow:

    DataflowWaitUntilStatus:
      params: [project, region, jobId, status]
      steps:
        - init:
            assign:
              - currentStatus: ""
              # terminal states that should fail the workflow
              - failureStatuses: ["JOB_STATE_FAILED", "JOB_STATE_CANCELLED", "JOB_STATE_DRAINED"]
        - check_condition:
            switch:
              - condition: ${currentStatus in failureStatuses}
                next: exit_fail
              - condition: ${currentStatus != status}
                next: iterate
            next: exit_success
        - iterate:
            steps:
              - sleep30s:
                  call: sys.sleep
                  args:
                    seconds: 30
              - getJob:
                  call: http.get
                  args:
                    url: ${"https://dataflow.googleapis.com/v1b3/projects/"+project+"/locations/"+region+"/jobs/"+jobId}
                    auth:
                      type: OAuth2
                  result: getJobResponse
              - getStatus:
                  assign:
                    - currentStatus: ${getJobResponse.body.currentState}
              - log:
                  call: sys.log
                  args:
                    text: ${"Current job status="+currentStatus}
                    severity: "INFO"
            next: check_condition
        - exit_success:
            return: ${currentStatus}
        - exit_fail:
            raise: ${"Job in unexpected terminal status "+currentStatus}

This subworkflow also calls the Dataflow REST API, this time in a loop, until the job reaches a terminal status. In case of an unexpected terminal state, an exception is raised and the workflow stops, marked as failed. Otherwise, it proceeds to the next step.
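The same poll-until-terminal logic, sketched in Python for readers who prefer code to YAML. Here `get_status` stands in for the authenticated GET on the Dataflow jobs endpoint, and the failure-status set is an assumption mirroring the subworkflow's failureStatuses:

```python
import time

# Assumed set of terminal states that should fail the workflow
FAILURE_STATUSES = {"JOB_STATE_FAILED", "JOB_STATE_CANCELLED", "JOB_STATE_DRAINED"}

def wait_until_status(get_status, expected="JOB_STATE_DONE",
                      poll_seconds=30, sleep=time.sleep):
    """Poll get_status() until it returns `expected` or a failure status."""
    while True:
        current = get_status()
        if current in FAILURE_STATUSES:
            # maps to the subworkflow's exit_fail step
            raise RuntimeError(f"Job in unexpected terminal status {current}")
        if current == expected:
            # maps to exit_success
            return current
        sleep(poll_seconds)  # wait before polling again, like sleep30s
```

In the YAML above, the loop body corresponds to the iterate step and the two `if` branches to the check_condition switch.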

Finally, just deploy this workflow, via the UI or gcloud for example:

#! /bin/bash
# WORKFLOW, REGION, SERVICE_ACCOUNT and SOURCE are expected to be set in the environment
localDir=$(dirname "$0")

DESCRIPTION="Sample workflow"

gcloud beta workflows deploy "${WORKFLOW}" --location="${REGION}" --service-account="${SERVICE_ACCOUNT}" --source="${localDir}/${SOURCE}" --description="${DESCRIPTION}"

Breaking news: Workflows resources are now available in Terraform, for you IaC freaks!

Once deployed, the workflow can be launched, for example from Cloud Scheduler, by POSTing to this endpoint: projects/${PROJECT}/locations/${REGION}/workflows/${WORKFLOW}/executions
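If you build that POST yourself (from Cloud Scheduler's HTTP target or from a script), the full endpoint of the v1 Workflow Executions API can be assembled like this; authentication with an OAuth2 bearer token is left out of the sketch:

```python
def workflow_executions_url(project: str, region: str, workflow: str) -> str:
    """Build the Workflow Executions endpoint to POST a new execution to."""
    return (
        "https://workflowexecutions.googleapis.com/v1/"
        f"projects/{project}/locations/{region}/workflows/{workflow}/executions"
    )
```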


Thanks to Workflows, with just a relatively small YAML file we were able to chain two Dataflow jobs the easy way: serverlessly.

Thanks for reading! I’m Matthieu, data engineer at Stack Labs.
If you want to discover the Stack Labs Data Platform or join an enthusiastic Data Engineering team, please contact us.
