Use DolphinScheduler to schedule Spark jobs

DolphinScheduler is a distributed and extensible open-source workflow orchestration platform with powerful Directed Acyclic Graph (DAG) visual interfaces. DolphinScheduler can help you efficiently execute and manage workflows for large amounts of data. You can create, edit, and schedule Spark jobs of AnalyticDB for MySQL on the DolphinScheduler web interface.

Prerequisites

  • An AnalyticDB for MySQL Enterprise Edition, Basic Edition, or Data Lakehouse Edition cluster is created.

  • A job resource group or a Spark interactive resource group is created for the AnalyticDB for MySQL cluster.

  • Java Development Kit (JDK) V1.8 or later is installed.

  • DolphinScheduler is installed.

  • The IP address of the server that runs DolphinScheduler is added to an IP address whitelist of the AnalyticDB for MySQL cluster.

Schedule Spark SQL jobs

AnalyticDB for MySQL allows you to execute Spark SQL in batch or interactive mode. The scheduling procedure varies based on the execution mode.

Batch mode

  1. Install the spark-submit command-line tool and specify the relevant parameters.

Note: You need to specify only the following parameters: keyId, secretId, regionId, clusterId, and rgName. A hedged configuration sketch follows.
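
As a rough illustration, the toolkit setup might look like the sketch below. This is not the official template: the install path and the spark-defaults.conf location are assumptions, and all values are placeholders; only the five parameter names come from the note above. Check the configuration template bundled with the toolkit for the exact file format.

```bash
# Minimal sketch: configure the AnalyticDB spark-submit toolkit.
# Assumed install path and config file; all values are placeholders.
cd /opt/adb-spark-toolkit-submit

# Append the five required parameters (verify the key/value syntax
# against the template shipped with your toolkit version).
cat >> conf/spark-defaults.conf <<'EOF'
keyId     <your-AccessKey-ID>
secretId  <your-AccessKey-secret>
regionId  cn-hangzhou
clusterId amv-bp1xxxxxxxxxxxxx
rgName    <your-job-resource-group>
EOF
```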

  2. Create a project.

    1. Access the DolphinScheduler web interface. In the top navigation bar, click Project.
    2. Click Create Project.
    3. In the Create Project dialog box, configure the parameters such as Project Name and Owned Users.
  3. Create a workflow.

    1. Click the name of the created project. In the left-side navigation pane, choose Workflow > Workflow Definition to go to the Workflow Definition page.
    2. Click Create Workflow to go to the workflow DAG edit page.
    3. In the left-side list of the page, select SHELL and drag it to the right-side canvas.
    4. In the Current node settings dialog box, configure the parameters that are described in the following table; a sketch of the script that such a SHELL node might run appears after this list. Note: For information about other parameters, see DolphinScheduler Task Parameters Appendix.
    5. Click Confirm.
    6. In the upper-right corner of the page, click Save. In the Basic Information dialog box, configure the parameters such as Workflow Name. Click Confirm.
  4. Run the workflow.

    1. Find the created workflow and click the icon in the Operation column to publish the workflow.
    2. Click the run icon in the Operation column.
    3. In the Please set the parameters before starting dialog box, configure the parameters.
    4. Click Confirm to run the workflow.
  5. View the details about the workflow.

    1. In the left-side navigation pane, choose Task > Task Instance.
    2. Find the tasks of the workflow and click the icon in the Operation column to view the execution results and logs of the workflow.
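
For reference, the SHELL node's script body might look like the following sketch. The toolkit path, the resourceSpec settings, and the -f option for submitting a SQL file are assumptions based on the toolkit being positioned as compatible with open-source spark-submit; verify the supported options against your toolkit version.

```bash
#!/bin/bash
# Hypothetical SHELL-node script for a batch Spark SQL job.
# Assumes the toolkit from step 1 is installed at /opt/adb-spark-toolkit-submit
# and accepts a SQL file via -f; all paths and names are placeholders.
/opt/adb-spark-toolkit-submit/bin/spark-submit \
  --name ds-batch-sql-demo \
  --conf spark.driver.resourceSpec=medium \
  --conf spark.executor.resourceSpec=medium \
  -f oss://your-bucket/sql/job.sql
```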

Interactive mode

  1. Obtain the connection URL of the Spark interactive resource group.

    1. Log on to the AnalyticDB for MySQL console. In the upper-left corner of the console, select a region. In the left-side navigation pane, click Clusters. On the Enterprise Edition, Basic Edition, or Data Lakehouse Edition tab, find the cluster that you want to manage and click the cluster ID.
    2. In the left-side navigation pane, choose Cluster Management > Resource Management. On the page that appears, click the Resource Groups tab.
    3. Find the Spark interactive resource group that you created and click Details in the Actions column to view the internal or public connection URL of the resource group. You can click the copy icon next to the corresponding port number to copy the connection URL. You must click Apply for Endpoint next to Public Endpoint to manually apply for a public endpoint in the following scenarios:
    * The client tool that is used to submit a Spark SQL job is deployed on premises.
    
    * The client tool that is used to submit a Spark SQL job is deployed on an Elastic Compute Service (ECS) instance that resides in a different virtual private cloud (VPC) from your AnalyticDB for MySQL cluster.
    
  2. Create a data source.

    1. Access the DolphinScheduler web interface. In the top navigation bar, click Datasource.
    2. Click Create DataSource.
    3. In the Create DataSource dialog box, configure the parameters that are described in the following table.
    4. Click Test Connect. After the test is successful, click Confirm.

Note: For information about other optional parameters, see MySQL. A hedged connectivity check is shown below.
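
Before wiring the data source into a workflow, it can help to sanity-check the endpoint from the DolphinScheduler host. The sketch below assumes the interactive resource group's endpoint accepts MySQL-protocol clients, which is what choosing the MySQL data source type implies; the host, port, and account are placeholders to be replaced with the values copied from the console.

```bash
# Placeholder endpoint and credentials: use the connection URL copied from
# the resource group's Details panel and a database account of the cluster.
mysql -h amv-xxxxxxxx.ads.aliyuncs.com -P 3306 \
      -u your_db_account -p \
      -e 'SELECT 1;'
```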

  3. Create a project.

    1. Access the DolphinScheduler web interface. In the top navigation bar, click Project.
    2. Click Create Project.
    3. In the Create Project dialog box, configure the parameters such as Project Name and Owned Users.
  4. Create a workflow.

    1. Click the name of the created project. In the left-side navigation pane, choose Workflow > Workflow Definition to go to the Workflow Definition page.
    2. Click Create Workflow to go to the workflow DAG edit page.
    3. In the left-side list of the page, select SQL and drag it to the right-side canvas.
    4. In the Current node settings dialog box, configure the parameters that are described in the following table.
    5. Click Confirm.
    6. In the upper-right corner of the page, click Save. In the Basic Information dialog box, configure the parameters such as Workflow Name. Click Confirm.
  5. Run the workflow.

    1. Find the created workflow and click the icon in the Operation column to publish the workflow.
    2. Click the run icon in the Operation column.
    3. In the Please set the parameters before starting dialog box, configure the parameters.
    4. Click Confirm to run the workflow.
  6. View the details about the workflow.

    1. In the left-side navigation pane, choose Task > Task Instance.
    2. Find the tasks of the workflow and click the icon in the Operation column to view the execution results and logs of the workflow.

Schedule Spark JAR jobs

  1. Install the spark-submit command-line tool and specify the relevant parameters.

Note: You need to specify only the following parameters: keyId, secretId, regionId, clusterId, and rgName. If your Spark JAR package is stored on your on-premises device, you must also specify Object Storage Service (OSS) parameters such as ossUploadPath, as in the sketch below.
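
If the JAR lives on the DolphinScheduler host rather than in OSS, the toolkit uploads it at submit time once the OSS parameters are configured. A sketch, reusing the assumed config file from the batch SQL section; ossUploadPath comes from the note above, while the bucket path and file syntax are placeholders to verify against the toolkit's template.

```bash
# Append the OSS upload location so locally stored JARs can be uploaded
# when the job is submitted. The bucket and prefix are placeholders.
cat >> /opt/adb-spark-toolkit-submit/conf/spark-defaults.conf <<'EOF'
ossUploadPath oss://your-bucket/spark-jars/
EOF
```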

  2. Create a project.

    1. Access the DolphinScheduler web interface. In the top navigation bar, click Project.
    2. Click Create Project.
    3. In the Create Project dialog box, configure the parameters such as Project Name and Owned Users.
  3. Create a workflow.

    1. Click the name of the created project. In the left-side navigation pane, choose Workflow > Workflow Definition to go to the Workflow Definition page.
    2. Click Create Workflow to go to the workflow DAG edit page.
    3. In the left-side list of the page, select SHELL and drag it to the right-side canvas.
    4. In the Current node settings dialog box, configure the parameters that are described in the following table. A sketch of a possible node script appears at the end of this section.
    5. Click Confirm.
    6. In the upper-right corner of the page, click Save. In the Basic Information dialog box, configure the parameters such as Workflow Name. Click Confirm.

Note: For information about other parameters, see DolphinScheduler Task Parameters Appendix.

  4. Run the workflow.

    1. Find the created workflow and click the icon in the Operation column to publish the workflow.
    2. Click the run icon in the Operation column.
    3. In the Please set the parameters before starting dialog box, configure the parameters.
    4. Click Confirm to run the workflow.
  5. View the details about the workflow.

    1. In the left-side navigation pane, choose Task > Task Instance.
    2. Find the tasks of the workflow and click the icon in the Operation column to view the execution results and logs of the workflow.
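
For reference, the SHELL node's script for a JAR job might look like the following sketch. The --class, --name, and --conf flags are standard spark-submit options; the toolkit path, resourceSpec values, class name, and OSS paths are placeholder assumptions to adapt to your own job.

```bash
#!/bin/bash
# Hypothetical SHELL-node script for a Spark JAR job. Everything below
# (paths, main class, application arguments) is a placeholder.
/opt/adb-spark-toolkit-submit/bin/spark-submit \
  --class com.example.SparkWordCount \
  --name ds-jar-demo \
  --conf spark.driver.resourceSpec=medium \
  --conf spark.executor.resourceSpec=medium \
  oss://your-bucket/jars/spark-wordcount-1.0.jar \
  oss://your-bucket/input/ \
  oss://your-bucket/output/
```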
