Building scalable ML workflows

A little while back, I wrote a post introducing Tork, an open-source project I've been developing. In a nutshell, Tork is a general-purpose, distributed workflow engine suitable for various workloads. At my work, we primarily use it for CPU/GPU-heavy tasks such as processing digital assets (3D, video, images, etc.), and it also serves as the CI/CD tool for our internal PaaS.

Recently, I've been thinking about how Tork could be leveraged to run machine learning workloads. I was particularly inspired by the Ollama project and wanted to see if I could do something similar, but using plain old Docker images rather than Ollama's Modelfile.

Given that ML workloads often consist of distinct, interdependent stages—such as data preprocessing, feature extraction, model training, and inference—it’s crucial to have an engine that can orchestrate these steps. These stages frequently require different types of compute resources (e.g., CPUs for preprocessing, GPUs for training) and can benefit greatly from parallelization.
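
To make that a bit more concrete, here's a rough sketch of what a multi-stage pipeline could look like as a Tork job. The images, scripts and queue names below are made up for illustration, and the parallel/queue fields reflect my reading of the Tork docs rather than a job I've actually run:

name: Example training pipeline
tasks:
  # CPU-bound preprocessing on general-purpose workers
  - name: Preprocess the dataset
    image: example/preprocess:latest
    queue: cpu
    run: python3 preprocess.py
  # independent feature-extraction steps can fan out in parallel
  - name: Extract features
    parallel:
      tasks:
        - name: Extract text features
          image: example/features:latest
          queue: cpu
          run: python3 extract_text_features.py
        - name: Extract image features
          image: example/features:latest
          queue: cpu
          run: python3 extract_image_features.py
  # training is routed to workers that have a GPU attached
  - name: Train the model
    image: example/train:latest
    queue: gpu
    run: python3 train.py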

Moreover, resiliency is a critical requirement when running machine learning workflows. Interruptions, whether due to hardware failures, network issues, or resource constraints, can result in significant setbacks, especially for long-running processes like model training.
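
Tork helps on this front with task-level retries, so a transient failure doesn't have to sink the whole job. A minimal sketch, assuming the retry syntax from the Tork docs (double-check it against the version you're running):

name: Example training job with retries
tasks:
  - name: Train the model
    image: example/train:latest
    queue: gpu
    run: python3 train.py
    retry:
      # re-attempt the task up to 3 times before failing the job
      limit: 3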

These requirements are very similar to those of my other, non-ML workloads, so I decided to put all my theories to the test and see what it would take to execute a simple ML workflow on Tork.

The experiment

For this first experiment, let's try to execute a simple sentiment analysis inference task:

Download the latest Tork binary and untar it.

tar xvzf tork_0.1.109_darwin_arm64.tgz

Start Tork in standalone mode:

./tork run standalone

If all goes well, you should see something like this:

...
10:36PM INF Coordinator listening on http://localhost:8000
...


Since Tork tasks typically execute within a Docker container, we next need to build a Docker image that contains the model and the necessary inference script.

inference.py

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import os

MODEL_NAME = os.getenv("MODEL_NAME")

def load_model_and_tokenizer(model_name):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    return tokenizer, model

def predict_sentiment(text, tokenizer, model):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

    with torch.no_grad():
        outputs = model(**inputs)

    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_label = torch.argmax(predictions, dim=1).item()
    confidence = predictions[0][predicted_label].item()

    return predicted_label, confidence

if __name__ == "__main__":
    tokenizer, model = load_model_and_tokenizer(MODEL_NAME)
    text = os.getenv("INPUT_TEXT")
    label, confidence = predict_sentiment(text, tokenizer, model)
    sentiment_map = {0: "Negative", 1: "Positive"}
    sentiment = sentiment_map[label]
    print(f"{sentiment}")


Dockerfile

FROM huggingface/transformers-pytorch-cpu:latest

WORKDIR /app

COPY inference.py .

# Pre-load the model during image build
RUN python3 -c "from transformers import AutoTokenizer, AutoModelForSequenceClassification; AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english'); AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')"

Build the image:

docker build -t sentiment-analysis .

Next, let's create the Tork job to run the inference.

sentiment.yaml

name: Sentiment analysis example
inputs:
  input_text: Today is a lovely day
  model_name: distilbert-base-uncased-finetuned-sst-2-english
output: "{{trim(tasks.sentimentResult)}}"
tasks:
  - name: Run sentiment analysis
    var: sentimentResult
    # the image we created in the previous step, 
    # but can be any image available from Docker hub
    # or any other image registry
    image: sentiment-analysis:latest
    run: |
      python3 inference.py > $TORK_OUTPUT
    env:
      INPUT_TEXT: "{{inputs.input_text}}"
      MODEL_NAME: "{{inputs.model_name}}"

Submit the job. Tork jobs execute asynchronously; once a job is submitted, you get back a job ID to track its progress:

JOB_ID=$(curl -s \
  -X POST \
  -H "content-type:text/yaml" \
  --data-binary @sentiment.yaml \
  http://localhost:8000/jobs | jq -r .id)

Poll the job's status and wait for it to complete:

while true; do 
  state=$(curl -s http://localhost:8000/jobs/$JOB_ID | jq -r .state)
  echo "Status: $state"
  if [ "$state" = "COMPLETED" ]; then; 
     break 
  fi 
  sleep 1
done

Inspect the job results:

curl -s http://localhost:8000/jobs/$JOB_ID | jq -r .result
Positive

Try changing the input_text in sentiment.yaml and re-submitting the job to see different results.
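
Since the job is plain YAML, fanning this out over a whole batch of inputs should also be doable with Tork's each task, which runs a copy of a task per item in a list. The sketch below is untested: the each/item.value syntax is based on my reading of the Tork docs, and the hard-coded list is just for illustration:

name: Batch sentiment analysis example
inputs:
  model_name: distilbert-base-uncased-finetuned-sst-2-english
tasks:
  - name: Run sentiment analysis on each input
    each:
      # spawns one inference task per element of the list
      list: "{{ ['Today is a lovely day', 'Worst service ever', 'Not bad at all'] }}"
      task:
        name: Analyze one input
        image: sentiment-analysis:latest
        run: |
          python3 inference.py > $TORK_OUTPUT
        env:
          INPUT_TEXT: "{{item.value}}"
          MODEL_NAME: "{{inputs.model_name}}"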

Next steps

Now that I've got this basic proof of concept working on my machine, the next step is to push the Docker image to a registry so that it's available to the Tork workers on my production cluster. Overall, this seems like a viable approach.

The code for this article can be found on GitHub.

If you're interested in learning more about Tork:

Documentation: https://www.tork.run
Backend: https://github.com/runabol/tork
Web UI: https://github.com/runabol/tork-web
