
Michael Levan

Running Machine Learning Workflows On Kubeflow

Data is mere bytes until we do something with it. Whether we structure it, expose it, export it, ingest it, or view it, the data needs to come from somewhere, and it needs to be prepared in a particular way before we can view it or use it the way we need to.

When it comes to AI (regardless of whether it’s GenAI or not), the same rules apply.

Machine Learning is one method of training on data so you can extract what you need from it.

In this blog post, you’ll learn how to run ML workflows with Kubeflow, a platform for running ML on Kubernetes.

What Is Kubeflow

Kubeflow is an open-source platform that allows you to ingest data, train and experiment with it, and build Models from it. The data could be anything from three rows and three columns to thousands upon thousands of data sets.

Now the question is - why Kubeflow?

First and foremost, a lot of environments are running on Kubernetes now (not all, but a good amount), and Kubernetes allows you to shrink the footprint of application stacks. For example, engineers used to have to spin up VMs just to get application stacks running. Now, engineers can create a small container image and deploy it as a container.

The same rules apply to ML and AI.

Yes, we still need GPU and CPU power, but we can decouple the workloads into Pods so we have a smaller footprint than massive bare-metal boxes or VMs running ML workloads.

If you want to dive deeper into the “why” and “how”, I wrote a blog post on it: https://dev.to/thenjdevopsguy/from-theory-to-installation-kubeflow-10nj

Kubeflow Components

Kubeflow is made up of several pieces that aren’t all covered here (PyTorch support, for example). Instead of going into each component, let’s talk about the primary sections of Kubeflow you’ll need to get a workflow running.

First, there are Models. A Model is a collection of data that has been trained for a specific purpose. The goal with a Model is to take a ton of structured data and turn it into something useful to use at a later time. The “useful thing” could be anything from forecasting Kubernetes resource usage to feeding GenAI real information (instead of pulling information from somewhere like random comments on the internet).

Next, Experiments. Experiments allow you to test multiple scenarios of your data. You can have one Model and run as many tests against it as you’d like. Think of it like a Workspace/storage area.

Lastly, Runs. Runs are what run the actual Model to see what comes out the other end. If you create a Model, you’re expecting something to occur at the end. The Run ensures that it’s working as you expected. Think of it like running code. You write code and then you run it. Sometimes after running it, it’s not what you were expecting.


Next, let’s see how to run Models, Experiments, and Runs.

Running Pipelines

There are three steps to running Models:

  1. Create the Pipeline.
  2. Create an Experiment.
  3. Create a Run.


The first step is to create a Pipeline. The Pipeline is code that you or someone else wrote to create a Model. The Pipeline automates the run of the Model. Think of it like a CI/CD pipeline.

To create the pipeline, you’ll need some code.

Compile The Pipeline

  1. Install the Kubeflow Python library.
pip install kfp

💡 If you don’t have Python installed, check out this documentation: https://www.python.org/downloads/
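
If you want to confirm the SDK installed correctly before moving on, a quick optional check (not part of the original steps) is to import it and print its version from Python:

import kfp

# If this prints a version such as 2.x.x, the SDK is ready to use.
print(kfp.__version__)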

  2. Run a Python script to create a YAML-based Pipeline.
from kfp import compiler, dsl

# A lightweight component that takes a name and returns a greeting string.
@dsl.component
def say_hello(name: str) -> str:
    hello_text = f'Hello, {name}!'
    print(hello_text)
    return hello_text

# The pipeline wires the component together and surfaces its output.
@dsl.pipeline
def hello_pipeline(recipient: str) -> str:
    hello_task = say_hello(name=recipient)
    return hello_task.output

# Compile the pipeline definition to pipeline.yaml in the current directory.
compiler.Compiler().compile(hello_pipeline, 'pipeline.yaml')

The code is simple in terms of its logic: it just prints a greeting with someone’s name to the terminal. Nothing too complex if this is your first time building a Model. Save the script and run it with Python to compile the Pipeline.

You should see a YAML configuration similar to the one below. It’ll be saved in your current directory and the file will be named pipeline.yaml.

# PIPELINE DEFINITION
# Name: hello-pipeline
# Inputs:
#    recipient: str
# Outputs:
#    Output: str
components:
  comp-say-hello:
    executorLabel: exec-say-hello
    inputDefinitions:
      parameters:
        name:
          parameterType: STRING
    outputDefinitions:
      parameters:
        Output:
          parameterType: STRING
deploymentSpec:
  executors:
    exec-say-hello:
      container:
        args:
        - --executor_input
        - '{{$}}'
        - --function_to_execute
        - say_hello
        command:
        - sh
        - -c
        - "\nif ! [ -x \"$(command -v pip)\" ]; then\n    python3 -m ensurepip ||\
          \ python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1\
          \ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.7.0'\
          \ '--no-deps' 'typing-extensions>=3.7.4,<5; python_version<\"3.9\"' && \"\
          $0\" \"$@\"\n"
        - sh
        - -ec
        - 'program_path=$(mktemp -d)

          printf "%s" "$0" > "$program_path/ephemeral_component.py"

          _KFP_RUNTIME=true python3 -m kfp.dsl.executor_main                         --component_module_path                         "$program_path/ephemeral_component.py"                         "$@"

          '
        - "\nimport kfp\nfrom kfp import dsl\nfrom kfp.dsl import *\nfrom typing import\
          \ *\n\ndef say_hello(name: str) -> str:\n    hello_text = f'Hello, {name}!'\n\
          \    print(hello_text)\n    return hello_text\n\n"
        image: python:3.7
pipelineInfo:
  name: hello-pipeline
root:
  dag:
    outputs:
      parameters:
        Output:
          valueFromParameter:
            outputParameterKey: Output
            producerSubtask: say-hello
    tasks:
      say-hello:
        cachingOptions:
          enableCache: true
        componentRef:
          name: comp-say-hello
        inputs:
          parameters:
            name:
              componentInputParameter: recipient
        taskInfo:
          name: say-hello
  inputDefinitions:
    parameters:
      recipient:
        parameterType: STRING
  outputDefinitions:
    parameters:
      Output:
        parameterType: STRING
schemaVersion: 2.1.0
sdkVersion: kfp-2.7.0

Create The Pipeline

  1. Take the YAML pipeline that you created and upload it to Kubeflow.


  2. Give your Pipeline a Name and add the pipeline.yaml in the Upload a file section. After that, click the blue Create button (or, if you prefer, upload it from code as sketched below).

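If you’d rather not click through the UI, the kfp SDK also exposes a client for this. Below is a minimal sketch, assuming a Kubeflow Pipelines endpoint reachable at http://localhost:8080 (for example, via a port-forward) and the hello-pipeline name from earlier; adjust both for your environment.

import kfp

# Assumed endpoint; point this at your own Kubeflow Pipelines host.
client = kfp.Client(host='http://localhost:8080')

# Upload the compiled pipeline.yaml under a human-readable name.
client.upload_pipeline(
    pipeline_package_path='pipeline.yaml',
    pipeline_name='hello-pipeline',
)

Either way, the Pipeline shows up in the Pipelines list once it’s created.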

Create An Experiment

Now that the Pipeline is uploaded, you can create an Experiment.

  1. Click the blue + Create experiment button.


  2. Give the Experiment a name and click the blue Next button.


  3. On the next screen, you’ll be automatically brought to the Runs page. Choose the Experiment you created.

Under the Run parameters section, enter any name you’d like. This is the recipient parameter that the Python script asks for.

Once complete, click the blue Start button.


You should see the Run either in progress or completed.

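The same Experiment-plus-Run flow can also be scripted with the kfp client. This is a hedged sketch rather than the only way to do it; the host, Experiment name, and recipient value are placeholders.

import kfp

# Assumed endpoint; point this at your own Kubeflow Pipelines host.
client = kfp.Client(host='http://localhost:8080')

# Create an Experiment to group Runs together (the name is illustrative).
client.create_experiment(name='hello-experiment')

# Start a Run of the compiled pipeline, passing the same "recipient"
# parameter the UI asks for under Run parameters.
run = client.create_run_from_pipeline_package(
    'pipeline.yaml',
    arguments={'recipient': 'Kubeflow'},
    experiment_name='hello-experiment',
)
print(run.run_id)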

Create Runs

You can also create another Run if you’d like.

  1. Go to the Runs page and click the blue + Create run button.


  2. You’ll see the same screen that you saw previously after creating an Experiment. Type in a new name for the new Run and choose the same Experiment.

You should now see the new Run.

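If you’re scripting Runs instead of using the UI, a second Run against the same Experiment looks much the same. The sketch below (again with placeholder names and an assumed 10-minute timeout) also waits for the Run to finish before returning.

import kfp

client = kfp.Client(host='http://localhost:8080')  # assumed endpoint

# Kick off another Run with a different parameter value.
second_run = client.create_run_from_pipeline_package(
    'pipeline.yaml',
    arguments={'recipient': 'Second run'},
    run_name='hello-run-2',
    experiment_name='hello-experiment',
)

# Block until the Run reaches a terminal state (or the timeout expires).
client.wait_for_run_completion(second_run.run_id, timeout=600)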
