DEV Community

Discussion on: Running Local LLMs, CPU vs. GPU - a Quick Speed Test

Maxim Saplin

The CPU result for the ROG is close to the one from the 7840U; after all, they are almost identical CPUs.

clegger

The ROG Ally has a Ryzen Z1 Extreme, which appears to be nearly identical to the 7840U, but from what I can discern, the NPU is disabled. So if/when LM Studio gets around to implementing support for that AI accelerator, the 7840U should be faster at inference workloads.

Maxim Saplin

AMD GPUs seem to be the underdog in the ML world compared to Nvidia... I doubt that AMD's NPU will see better compatibility with the ML stack than its GPUs.