Jesse Williams for KitOps

Originally published at jozu.com

How to Turn Your OpenShift Pipelines Into an MLOps Pipeline

Note: this post was updated to resolve technical inaccuracies in the original post, published on December 3, 2025.

Data scientists and machine learning engineers often face significant challenges in taking models from experimentation to production. Machine learning is an iterative process: collecting data from various sources, processing it, training models, tuning hyperparameters, evaluating, and deploying. Performing these operations manually every time you change your model or its dependencies (code, datasets, or configurations) leads to inconsistent results because:

  • There is no continuous integration and continuous deployment system to automate this workflow.
  • Limited version control makes it difficult to track changes across model iterations.
  • It is difficult to monitor the performance of your models in real-time.
  • Deployment is inconsistent across environments, leading to unpredictable results.

For these reasons, the demand for MLOps will keep expanding, and organizations will keep adopting it. The rapid growth of MLOps stems from organizations needing to reduce the friction between DevOps and machine learning teams. Organizations using MLOps pipelines gain a competitive edge by streamlining model deployment, monitoring, and scalability. Building an MLOps pipeline does not have to be tedious. With KitOps and OpenShift pipelines, you can quickly build an ML pipeline to get your AI models to production. This article will teach you how to easily build and deploy your machine learning model using KitOps and OpenShift.

TL;DR

  • MLOps pipelines improve your machine learning workflows by incorporating automation.
  • KitOps enables various teams to easily unpack models and their dependencies, such as code and datasets, into different directories.
  • OpenShift Pipelines and an OCI registry such as Quay or Jozu Hub make it easy to run KitOps-packaged AI projects.

Steps to building an MLOps pipeline with OpenShift Pipelines and KitOps

Prerequisites
To follow along with this tutorial, you will need the following:

- **A container registry:** You can use [Jozu Hub](https://jozu.ml), the [GitHub Packages](https://docs.github.com/en/packages/learn-github-packages/introduction-to-github-packages) registry, or [DockerHub](https://hub.docker.com/). This guide will use Jozu Hub.
- **A code hosting platform:** [Create a GitHub account](https://docs.github.com/en/get-started/start-your-journey/creating-an-account-on-github).
- A [HuggingFace](https://huggingface.co/login) account.
- **KitOps:** Here's a [guide to installing KitOps](https://kitops.ml/docs/cli/installation.html).
- **OpenShift Pipelines:** Create a [developer sandbox](https://developers.redhat.com/developer-sandbox) account.

Step 1: Install KitOps
First, you must make sure you have the Kit CLI installed locally. Once installed, run the command below to verify the installation:

kit version
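If you don't have the CLI yet, one quick route on Linux is to grab the latest release tarball, the same asset the pipeline downloads later in this tutorial. This is a minimal sketch, assuming Linux x86_64 and that /usr/local/bin is on your PATH; on macOS or Windows, follow the installation guide linked above instead:

# Download and extract the latest Kit CLI release for Linux x86_64
wget https://github.com/jozu-ai/kitops/releases/latest/download/kitops-linux-x86_64.tar.gz
tar -xzvf kitops-linux-x86_64.tar.gz

# Move the binary onto your PATH and confirm the installation
sudo mv kit /usr/local/bin/
kit version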

Step 2: Create a Jozu Hub repository
Log in to your Jozu Hub account and create a repository. In this guide, you'll create an empty repository called qwen-openshift.

Create Jozu Hub repo

To authenticate your local terminal to Jozu Hub, run the command:

kit login jozu.ml

This prompts for your username and password. Your username is the email address you used to create your Jozu Hub account, and the password is your Jozu Hub password. After authenticating, you'll download a model from HuggingFace.

Step 3: Download a model from HuggingFace
Head over to the Qwen2-0.5B-Instruct-GGUF model on HuggingFace. You will see a list of files, including the model weights, a LICENSE, and a README.md. You can download these files by running the following commands in your local terminal:

curl -L -O https://huggingface.co/Qwen/Qwen2-0.5B-Instruct-GGUF/resolve/main/LICENSE
curl -L -O https://huggingface.co/Qwen/Qwen2-0.5B-Instruct-GGUF/resolve/main/README.md
wget https://huggingface.co/Qwen/Qwen2-0.5B-Instruct-GGUF/resolve/main/qwen2-0_5b-instruct-q2_k.gguf

This downloads the files into your current working directory. Let's make the directory structure more readable.

Once organized, your directory structure should look like this:

|-- Kitfile
|-- models
  |-- qwen2-0_5b-instruct-q2_k.gguf
|-- docs
  |-- LICENSE
  |-- README.md
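One way to get there from the flat download, assuming all of the files currently sit in your working directory, is:

# Create the folders and move the downloaded files into place
mkdir models docs
mv qwen2-0_5b-instruct-q2_k.gguf models/
mv LICENSE README.md docs/

# Create an empty Kitfile to fill in next
touch Kitfile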

Author a Kitfile and organize your files into two new folders, models and docs, as shown above: the Qwen model goes into the models folder, and the license and README go into the docs folder. Then copy the following code into your Kitfile:

manifestVersion: 1.0.0
package:
  name: qwen2-0.5B
  version: 2.0.0
  description: The instruction-tuned 0.5B Qwen2 large language model.
  authors: [Emmanuel]
model:
  name: qwen2-0_5b-instruct-q2_k
  path: models/qwen2-0_5b-instruct-q2_k.gguf
  description: The model downloaded from hugging face
code:
  - path: docs/LICENSE
    description: License file.
  - path: docs/README.md
    description: Readme file.

Step 4: Pack your ModelKit
The next thing you want to do is to pack your ModelKit. To do that, run the command:

kit pack . -t jozu.ml/<your-Jozu-username>/<your-Jozu-repository-name>:latest

Here, <your-Jozu-repository-name> is the repository you created in Step 2 above (qwen-openshift), and the :latest suffix tags the ModelKit as latest.

After executing the kit pack command, you should see output similar to the one below:

Packing a ModelKit
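Before pushing, you can sanity-check that the ModelKit was created locally. The Kit CLI ships a list command for this (the exact output columns may differ between versions):

# List ModelKits in the local store; the qwen-openshift:latest tag should appear
kit list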

Step 5: Push your ModelKit
To push the ModelKit, run the command:

kit push jozu.ml/<your-Jozu-username>/<your-Jozu-repository-name>:latest

After executing the command, you should see an output:

Pushing your ModelKit

With a successful push to your remote repository, you can view the packages you have uploaded to Jozu Hub’s container registry.
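This is also where KitOps' selective unpacking helps teammates: anyone with access to the repository can pull just the parts they need. A rough sketch is below; the filtering and directory flags are taken from the Kit docs, so run kit unpack --help to confirm the options available in your version:

# Unpack only the model from the ModelKit into a local directory
kit unpack jozu.ml/<your-Jozu-username>/qwen-openshift:latest --model -d ./qwen-model

# Or unpack everything (model, code, docs) into the current directory
kit unpack jozu.ml/<your-Jozu-username>/qwen-openshift:latest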

Your ModelKit in Jozu Hub

After pushing your model dependencies to Jozu Hub, deploy the model using OpenShift Pipelines.

Step 6: Create an OpenShift pipeline
RedHat OpenShift is a platform that simplifies the building, testing, and deployment of applications at scale. With OpenShift Pipelines, you get a CI/CD framework that’s built into Kubernetes, allowing each step of the pipeline to run in its own container for better scalability.

Deploying your Qwen ModelKit with OpenShift Pipelines is easy and can be done in a few steps. The first step is to create a developer sandbox account.

After creating your account, visit your dashboard, as shown in the image below.

OpenShift dashboard

The next step is to create a Persistent Volume Claim (PVC) so that data can be shared across the tasks in your pipeline. In the top left corner, switch to the Administrator perspective; this gives you sufficient permissions to create the PVC.

Administrator page

After switching to the administrator profile, navigate to the Storage section, and click on Create Persistent Volume Claim.

Create persistent volume claim

At this point, you can configure your persistent volume claim and give it a storage size.

Configure PVC
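If you prefer the CLI to the console form, a roughly equivalent PVC can be created with oc; the name and size below are placeholders, and the Developer Sandbox assigns a default storage class for you:

oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kit-pipeline-pvc   # placeholder name; reuse it when starting the pipeline
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # enough headroom for the GGUF model plus the Kit CLI
EOF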

After creating your persistent volume claim, switch back to the Developer profile located in the top left corner. From there, click on Create a Pipeline to start building an OpenShift pipeline.
You have two options for creating the pipeline: using the Pipeline Builder on the OpenShift web console or taking a programmatic approach with YAML. In this tutorial, you will use the YAML view.
To proceed, open the YAML View in the Pipeline Builder and paste the following code into it:

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: kit-pipeline
spec:
  finally:
    - name: login-pack-push-kit
      params:
        - name: SCRIPT
          value: './kit login jozu.ml -u $(params.EMAIL) -p $(params.PASSWORD) && ./kit pack . -t jozu.ml/emmanueloffisongetim/qwen-openshift:latest && ./kit push jozu.ml/emmanueloffisongetim/qwen-openshift:latest'
        - name: VERSION
          value: latest
      taskRef:
        kind: ClusterTask
        name: openshift-client
      workspaces:
        - name: manifest-dir
          workspace: workspace
        - name: kubeconfig-dir
          workspace: workspace
  params:
    - default: <your-password-for-jozu>
      name: PASSWORD
      type: string
    - default: <your-email-address-for-jozu>
      name: EMAIL
      type: string
  tasks:
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/Techtacles/kitops-openshift'
        - name: revision
          value: master
        - name: refspec
          value: ''
        - name: submodules
          value: 'true'
        - name: depth
          value: '1'
        - name: sslVerify
          value: 'true'
        - name: crtFileName
          value: ca-bundle.crt
        - name: subdirectory
          value: ''
        - name: sparseCheckoutDirectories
          value: ''
        - name: deleteExisting
          value: 'true'
        - name: httpProxy
          value: ''
        - name: httpsProxy
          value: ''
        - name: noProxy
          value: ''
        - name: verbose
          value: 'true'
        - name: gitInitImage
          value: 'registry.redhat.io/openshift-pipelines/pipelines-git-init-rhel8@sha256:48daa3092248256fd0538e4ecbecf2dfe5aff0373c4eed52601f11a9035f872f'
        - name: userHome
          value: /home/git
      taskRef:
        kind: ClusterTask
        name: git-clone
      workspaces:
        - name: output
          workspace: workspace
    - name: install-kit
      params:
        - name: SCRIPT
          value: 'wget https://github.com/jozu-ai/kitops/releases/latest/download/kitops-linux-x86_64.tar.gz && tar -xzvf kitops-linux-x86_64.tar.gz && ls && ./kit version'
        - name: VERSION
          value: latest
      runAfter:
        - git-clone
      taskRef:
        kind: ClusterTask
        name: openshift-client
      workspaces:
        - name: manifest-dir
          workspace: workspace
        - name: kubeconfig-dir
          workspace: workspace
    - name: unpack-kit
      params:
        - name: SCRIPT
          value: 'mkdir models && wget https://huggingface.co/Qwen/Qwen2-0.5B-Instruct-GGUF/resolve/main/qwen2-0_5b-instruct-q2_k.gguf && mv qwen2-0_5b-instruct-q2_k.gguf models'
        - name: VERSION
          value: latest
      runAfter:
        - install-kit
      taskRef:
        kind: ClusterTask
        name: openshift-client
      workspaces:
        - name: manifest-dir
          workspace: workspace
        - name: kubeconfig-dir
          workspace: workspace
  workspaces:
    - name: workspace

In the pipeline parameters, replace <your-email-address-for-jozu> with your Jozu email address and <your-password-for-jozu> with your Jozu password (outside of a sandbox, prefer referencing a Kubernetes Secret over a plain-text default). After making these changes, click on Create to generate the pipeline. Once the process is complete, the pipeline view should match the image shown below.

Pipeline view

With your pipeline now created, the next step is to run it. To begin, click on Start Pipeline. In the Workspace section, select PersistentVolumeClaim and choose the volume you created earlier. Once this is done, click Start to run your pipeline.

Selecting PersistentVolumeClaim
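Behind the scenes, clicking Start creates a Tekton PipelineRun. If you'd rather trigger it declaratively from a terminal, an equivalent run could look roughly like this, assuming the placeholder PVC name kit-pipeline-pvc from earlier:

oc create -f - <<EOF
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: kit-pipeline-run-
spec:
  pipelineRef:
    name: kit-pipeline
  params:
    - name: EMAIL
      value: <your-email-address-for-jozu>
    - name: PASSWORD
      value: <your-password-for-jozu>
  workspaces:
    - name: workspace
      persistentVolumeClaim:
        claimName: kit-pipeline-pvc
EOF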

Once the pipeline run completes, you can view the logs and a status graph indicating that the run succeeded.

Successful pipeline run

After the pipeline deployment is complete, navigate to the repository you created on Jozu Hub. There, you will find the new image that your pipeline deployed.

Image in Jozu Hub

The deployment was successful, and the newly deployed image was verified on Jozu Hub.

The YAML above creates an OpenShift pipeline named kit-pipeline. This pipeline performs several tasks: it clones the Git repository, installs the Kit CLI, downloads the model from HuggingFace, and then, in the finally block, logs in to Jozu Hub, packs the ModelKit, and pushes it to the Jozu Hub registry. When you navigate to the OpenShift web console, you'll see an output similar to the image below.

Create OpenShift pipeline

You can trigger this pipeline manually or integrate it with GitHub using webhooks. To run the pipeline manually, start it from the console as you did above, or use the Tekton CLI as sketched after the image below. Once executed, you will see output as shown below:

Run the OpenShift pipeline
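If you have the Tekton CLI (tkn) installed, the same manual trigger works from a terminal as well; this is a sketch using the placeholder names from earlier:

# Start the pipeline, bind the PVC-backed workspace, and stream the task logs
tkn pipeline start kit-pipeline \
  --param EMAIL=<your-email-address-for-jozu> \
  --param PASSWORD=<your-password-for-jozu> \
  --workspace name=workspace,claimName=kit-pipeline-pvc \
  --showlog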

At this point, you've successfully built your ML pipeline. In the next section, you'll deploy and interact with the chatbot you imported from HuggingFace.

Step 7: Validating your deployment
By default, OpenShift creates two projects for you. On the developer portal, click on +Add, then choose Container images. Give your deployment a name; in this case, the deployment is named llama-cpp. Also, expose the port where the application will run.

Create OpenShift deployment

Select the Qwen ModelKit you want to deploy from Jozu Hub and click on Create.

Deploy Qwen model
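If you'd rather script this step than click through the console, something like the following should produce a comparable deployment. The image reference below is a placeholder, so copy the exact container reference shown on your Jozu Hub repository page, and adjust the port to whatever the container serves on (8080 is the llama.cpp server default):

# Create a deployment from the container image (placeholder reference; copy the real one from Jozu Hub)
oc new-app --image=jozu.ml/<your-Jozu-username>/qwen-openshift:latest --name=llama-cpp

# Expose the service so the chatbot gets a public route, then print the route
oc expose service/llama-cpp --port=8080
oc get route llama-cpp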

Once created, a domain will be assigned to your deployment. Behind the scenes, Kubernetes deployments, pods, service accounts, autoscalers, and services are created for you. On the RedHat OpenShift dashboard, navigate to the Topology view to see a diagram representing your deployment.

Visual representation of the deployment

Next, if you open the assigned domain, you'll see the deployed llama-cpp container. This includes a web UI where you can send prompts and view responses.

Result

In the image above, the Qwen model was prompted to discuss the “French Revolution.” With a few steps, you've successfully built an MLOps pipeline for a chatbot.
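You can also query the model programmatically instead of through the web UI. Assuming the container exposes the standard llama.cpp server HTTP API on the assigned domain (check the container's documentation for the exact endpoints), a request could look like this:

# Placeholder hostname; find yours under the Routes section or in the Topology view
curl -s https://<your-route-hostname>/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell me about the French Revolution", "n_predict": 128}'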

Conclusion

Building an effective MLOps pipeline can be simple with the right tools. With KitOps and OpenShift Pipelines, you can automate the building and deployment of your machine learning applications, and the pipeline views make it easy to see the status of every run at a glance.

KitOps plays a key role in packaging models and managing dependencies. OpenShift helps you automate deployment whenever a change is made. This results in faster, more reliable deployments and improved team collaboration.

If you have questions about integrating KitOps with your team, join the conversation on Discord and start using KitOps today!
