DEV Community

Michael Levan

Boosting ML And AI Workloads (Neuroblade)

Whether you're building Large Language Models (LLMs) or going the more traditional ML/AI route to train workloads, the goal is to do it as fast as possible. What does that mean in practice? It means beefing up the performance of data analytics.

Neuroblade has a method of doing this with an SPU.

SPU

An SPU (SQL Processing Unit) is a piece of hardware that plugs into a PCIe slot on your motherboard. The goal of the hardware is to augment the compute you're currently getting from your GPU to create and train models.

The enhancement numbers appear to be around 30%, based on Neuroblade's data.

AI and ML Workloads

You'll typically see two types of "AI data build" methods:

  1. The standard route: training ML models on the data and feeding the results to AI workloads.
  2. The LLM route, which removes the need for separate ML models since LLMs train themselves (whether the data is correct or not is another story).

Either option requires powerful servers, which is why you're seeing a huge spike in GPU costs right now.

With an SPU, the goal is to raise the power within these servers by 30%, which in turn increases the rate at which models or LLMs can be trained.
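One thing worth keeping in mind: a 30% boost to the data-analytics stage doesn't automatically mean a 30% faster training run end to end. A quick Amdahl's-law sketch shows how the overall speedup depends on how much of the pipeline the accelerated stage actually occupies. The 50% analytics fraction below is a made-up assumption for illustration, not a Neuroblade figure.

```python
# Rough estimate (Amdahl's law) of end-to-end speedup when only the
# data-analytics stage of a training pipeline is accelerated.
# Assumption: the 1.3x boost is the ~30% figure from the article;
# the 0.5 analytics fraction is purely hypothetical.

def effective_speedup(analytics_fraction: float, analytics_boost: float) -> float:
    """Overall speedup when `analytics_fraction` of total runtime
    runs `analytics_boost` times faster."""
    return 1.0 / ((1.0 - analytics_fraction) + analytics_fraction / analytics_boost)

# If half of a training run is data prep/analytics and the SPU makes
# that stage 1.3x faster, the whole run is only about 1.13x faster.
print(f"{effective_speedup(0.5, 1.3):.2f}x")
```

In other words, the SPU pays off most when data analytics dominates the pipeline; for GPU-bound training loops the end-to-end gain will be smaller.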
