DEV Community


Running Local LLMs, CPU vs. GPU - a Quick Speed Test

Maxim Saplin on March 11, 2024

This is the 1st part of my investigations of local LLM inference speed. Here are the 2nd and 3rd ones. May 12 Update: Putting together a table wit...
Maciej Wakuła

This depends a lot on the settings. I tried the same model and example query, "tell me about Mars", on a Ryzen 3900 PRO CPU (12 cores, 24 threads; I got it for less than half the price of a 3900X) and an AMD RX 6700 (without the X), which I also got cheap. RAM is pretty cheap as well, so 128GB is within reach for most. Using koboldcpp-rocm:

(14 GPU layers, 14 CPU threads): 6 T/s
(28, 14): 15 T/s
(30, 24): 4.43 T/s
(35, 24): 34.61 T/s, consuming 7.3GB of VRAM in total

I'm writing this to show that results depend very much on the settings.
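The T/s figures quoted throughout this thread are just generated-token count over wall-clock time. A minimal sketch of that measurement, with the backend call stubbed out (the stub and names are illustrative, not any tool's real API):

```python
import time

def tokens_per_second(generate, prompt, max_tokens):
    """Time a generation call and return throughput in tokens/s.

    `generate` stands in for whatever backend you use (koboldcpp,
    LM Studio, llama.cpp, transformers); it just has to return the
    generated tokens.
    """
    start = time.perf_counter()
    tokens = generate(prompt, max_tokens)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed

# Stub backend so the sketch runs without a model; a real run would
# call into your inference engine here.
def fake_generate(prompt, max_tokens):
    time.sleep(0.05)                 # pretend decoding takes 50 ms
    return ["tok"] * max_tokens

rate = tokens_per_second(fake_generate, "tell me about Mars", 32)
```

Note that prompt-processing speed and generation speed are usually reported separately; the numbers in this thread are generation throughput.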

Maxim Saplin

JIC, I tested the pure cases: 100% CPU and 100% offloading to GPU.

Orlando Arroyo

How did you get it to use 100% of the CPU? Which config or settings did you use?

Maciej Wakuła • Edited

You can offload all layers to the GPU (CUDA, ROCm) or use a CPU implementation (e.g. HIPS). Just run LM Studio for your first steps. Run koboldcpp or koboldcpp-ROCm next. Then try Python and transformers. From there you should know enough about the basics to choose your direction. And remember that offloading everything to the GPU still consumes CPU.
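To make the "offload N layers" idea concrete, here is a sketch of a per-layer device assignment: the first N layers go to the GPU, the rest stay on the CPU. The "model.layers.N" key names mimic a transformers-style device_map but are purely illustrative; the real names depend on the model architecture.

```python
def split_layers(n_layers, n_gpu_layers):
    """Assign the first n_gpu_layers transformer layers to the GPU
    and the rest to the CPU. Key names are illustrative, mimicking a
    transformers-style device_map."""
    if not 0 <= n_gpu_layers <= n_layers:
        raise ValueError("n_gpu_layers must be between 0 and n_layers")
    return {
        f"model.layers.{i}": ("cuda:0" if i < n_gpu_layers else "cpu")
        for i in range(n_layers)
    }

# e.g. 28 of 32 layers on the GPU, the remaining 4 on the CPU
device_map = split_layers(32, 28)
```

Tools like koboldcpp expose the same idea as a single "GPU layers" number rather than an explicit map.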

[Screenshot] This is a peak when using full ROCm (GPU) offloading. See the CPU usage on the left: the initial CPU load is from starting the tools, and the LLM run is the peak at the end. There is GPU usage, but the CPU is used as well.

[Screenshot] And this is Windows; ROCm is still very limited on other operating systems :/

Orlando Arroyo • Edited

Just for fun, here are some additional results:

iPad Pro M1 256GB, using LLM Farm to load the model: 12.05 tok/s
Asus ROG Ally Z1 Extreme (CPU): 5.25 tok/s using the 25W preset, 5.05 tok/s using the 15W preset

Update:
Asked a friend with an M3 Pro (12-core CPU, 18GB). Running on CPU: 17.93 tok/s; GPU: 21.1 tok/s

Maxim Saplin

The CPU result for the ROG is close to the one from the 7840U; after all, they are almost identical CPUs.

clegger

The ROG Ally has a Ryzen Z1 Extreme, which appears to be nearly identical to the 7840U, but from what I can discern, the NPU is disabled. So if/when LM Studio gets around to implementing support for that AI accelerator, the 7840U should be faster at inferencing workloads.

Maxim Saplin

AMD GPUs seem to be the underdog in the ML world compared to Nvidia... I doubt that AMD's NPU will see better compatibility with the ML stack than its GPUs.

Ricardo Meleschi • Edited

If you let me know what settings/template you used for this test, I'll run a similar test on my M4 iPad with 16GB RAM. I get wildly different tok/s depending on which LLM and which template I'm using.

As of right now, with the fine-tuned LLM and the "TinyLLaMa 1B" template, I get the following:

M4 iPad with 16GB RAM / 2TB storage: 15.52 t/s

Red Book • Edited

I came across your benchmark. It's very useful. Here is a result from my machine:

Ryzen 5 7600 with 128GB RAM + MSI RX 7900 XTX: 70.1 tok/s

Total system power draw was 478 watts under load, 95 watts idle.

using Mistral Orca Dpo V2 Instruct v0.2 Slerp 7B Q6_K
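Numbers like these also give an energy cost per token; a quick back-of-envelope from the figures above (assuming the 70.1 tok/s run is what drew 478 W):

```python
# Energy per generated token from the reported figures.
power_load_w = 478.0    # total system draw under load
power_idle_w = 95.0     # idle draw
tokens_per_s = 70.1

joules_per_token = power_load_w / tokens_per_s                       # ~6.8 J/token
marginal_j_per_token = (power_load_w - power_idle_w) / tokens_per_s  # ~5.5 J/token above idle
```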

Best,

PS: I've been thinking of getting the M4 Pro 96GB when it's available, just to run 70B models.

This benchmark shows a difference.
twitter.com/ronaldmannak/status/17...

Bharath B

Intel i7 14700K: 9.82 tok/s with no GPU offloading (peaked at 35% CPU usage in LM Studio; guessing an issue with multithreading)
Zotac Trinity non-OC 4080 Super: 71.61 tok/s with max GPU offloading

All numbers measured on a non-overclocked, factory-default setup.

Maxim Saplin

Thanks for sharing the numbers!

Orlando Arroyo

Indeed, there's something odd with the multithreading of the CPUs.

Orlando Arroyo

Adding some info here:

Running on a Razer Blade 2021 with a Ryzen 5900HX, a GeForce 3070 Ti and 16GB RAM, I got 41.75 tok/s. I used the same test as you, asking about Mars on the same model.

Hope that adds information to this very interesting topic.

Maxim Saplin

Thanks for the contribution! I assume you used 100% GPU off-loading, right? Just checking :)

Orlando Arroyo

Indeed, 100% GPU off-loading.

I also tested a Ryzen 7950X with 0% offloading, but there's something odd: I set 32 threads, but CPU use doesn't go beyond 60% and it only gets 7 tok/s. Any thoughts on the possible cause?

Just for fun, I’ll check with an Asus ROG Ally later (Z1 Extreme version).

Maxim Saplin

It seems the threads param is ignored; I saw the same behaviour when testing CPU inference.
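A common explanation (an assumption here, not something measured in this thread) is that batch-1 CPU decoding is memory-bandwidth bound: every generated token has to stream the full weight set from RAM, so once bandwidth is saturated extra threads add nothing and CPU utilization stalls below 100%. A rough ceiling estimate with illustrative numbers:

```python
# Rough upper bound on batch-1 CPU decode speed:
#     max tok/s ≈ memory bandwidth / model size
# Both figures below are assumptions for illustration only.
model_size_gb = 4.1        # ~7B parameters at Q4 quantization
mem_bandwidth_gbs = 60.0   # roughly dual-channel DDR5-5600

ceiling_tok_s = mem_bandwidth_gbs / model_size_gb   # ~14.6 tok/s
```

That ceiling lines up with the order of magnitude of the CPU results reported in this thread, regardless of thread count.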

Orlando Arroyo

Just a quick update: an RTX 4070 Super gets 58.2 tok/s

Oleksandr Davyskyba

And an RTX 4070 Ti Super gets 62 tok/s

Maxim Saplin

Is that a desktop card?

Nicolay • Edited

On my RTX 3050 the speed was 28.6 tok/s.
Based on the comments above, I made a table.

RTX 3050      8GB    28.6 tok/s
RTX 3070 Ti   8GB    41.75 tok/s
RTX 4060      8GB    37.9 tok/s
RTX 4070      12GB   58.2 tok/s
RTX 4080      8GB    78.1 tok/s

Maxim Saplin

Are all those videocards desktop ones?

Melroy van den Berg

Thank you for testing! It helped me a lot! The AMD RX 7900 XTX is doing well..!

Melroy van den Berg

Anybody with an AMD W7900?

Oliver Stutz

78.51 tok/s with an AMD 7900 XTX on the ROCm-supported version of LM Studio with Llama 3,
33 GPU layers (all while sharing the card with screen rendering)

Jared Goodpasture

Thanks so much for keeping this post up to date 🙏

Oleksandr Davyskyba

Got a ThinkPad P14s with a 7840U and 64GB LPDDR5X; with mistral-7b-finetuned-orca-dpo-v2-Mistral-7B-Instruct-v0.2-slerp-GGUF I got 15 T/s


Luk

@maximsaplin It seems your table could use an update.

You mention the AMD Radeon 780M iGPU. I have it in my Ryzen 7840HS, paired with 32GB of LPDDR5x-6400, and for the same model as you I am consistently getting ~10 T/s with 100% GPU offload. @oleksandr_davyskyba_5a399 is getting 15 T/s, which means there is a big variance for this iGPU.

Aside from that, I also tested an AMD RX 6800 XT 16GB GPU in a Razer Core X Chroma connected to the same laptop via USB4/Thunderbolt, and I am consistently getting ~50 T/s on it, so only a very small difference compared to the result you posted, despite going through Thunderbolt.

Maxim Saplin

Thanks! Thunderbolt vs PCI-E doesn't seem that important for the GPU; for LLMs, most of the memory-intensive operations happen inside VRAM, with little communication with the outside world (CPU, system RAM).
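A rough back-of-envelope supports this; both bandwidth figures below are approximations assumed for illustration:

```python
# Why an eGPU loses so little: during decoding the weights stay in
# VRAM and are re-read every token, while only small activation
# tensors cross the external link.
vram_bandwidth_gbs = 512.0   # e.g. RX 6800 XT GDDR6 memory bandwidth
thunderbolt_gbs = 40 / 8     # Thunderbolt 3/4: 40 Gbit/s ≈ 5 GB/s

ratio = vram_bandwidth_gbs / thunderbolt_gbs   # VRAM ~100x faster
```

The link would matter far more for workloads that constantly shuttle tensors between system RAM and VRAM, e.g. partial offloading.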

Andy Harris

Here's some additional config data for the list.

Laptop: 7940HS + 32GB RAM + RTX 4070 (8GB)
GPU only - RTX 4070 mobile (8GB): 30.69 T/s
CPU only - 7940HS + 32GB RAM: 8.28 T/s
Note: I'm not sure why the 4070 is posting lower than the 4060 mobile.

Desktop: R5 3600X + 80GB RAM + RX 6800 XT (16GB)
GPU only - Radeon RX 6800 XT (16GB): 52.92 T/s
CPU only - R5 3600X + 80GB RAM: 4.07 T/s

Maxim Saplin

There are different power levels for 4xxx mobile GPUs, 40-140W. The 4070 might be coming in a thinner laptop with a TGP around 40W. My 4060 Mobile has a 105W TGP.

Andy Harris

Good point! I'll check later and post an update.

Alex • Edited

Hi,
using LM Studio 3.5 with your moon question I got 76.46 T/s averaged over 3 runs with a stock RTX 3090.
Using LM Studio 3.4 I got 74.36.

On the Ryzen 5900X in 65W mode (24 threads) I get ~9 T/s, with 3200MHz DDR4 CL22 (LM Studio 3.5).

These results were with the Q4 model...

Q6 model:
Vega 8 on 5800HS: 1.92 T/s
5800HS CPU: 5.87 T/s
RTX 3090: 64.49 T/s
Ryzen 5900X: 6 T/s

Cheers.

Correction: I was using the Q4 model. I will update the comment later with my Q6 results.

clegger

In these tests is the 7840U utilizing the integrated NPU to accelerate the workload?

Maxim Saplin

The result for the "780M iGPU" is indeed coming from the GPU integrated into the 7840U APU.

clegger • Edited

@maximsaplin GPU != NPU
They are distinct accelerators.

Maxim Saplin

The NPU is not mentioned anywhere.

Yağız

Is there a GitHub repo?

thearcticman

M1 Max 32GB RAM, 100% offload to GPU: ~35 tok/s

Roberto Piombi

RTX 3090 gets 58.66 tok/s
Ryzen 5800X gets 4.47 tok/s

RAM: 32GB 3600MHz CL16
Llama 3.1 Instruct 8B