
Discussion on: Rise of Local LLMs?

Mandar Vaze

What configuration do you suggest for running local LLMs?
I have a Windows 10 machine with 16GB RAM and run Ollama inside WSL2.
I tried several 7B models, including codellama, and the responses are VERY slow.
Some 3B models are only slightly better.
This Windows PC does not have a GPU.

OTOH, my work MacBook Pro M2 with 16GB RAM has respectable response times.
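
If it helps to put numbers on "slow", here is a minimal sketch for timing a single prompt, assuming the `ollama` Python client and an already-pulled tag such as `codellama:7b` (the model name and prompt are just placeholders):

```python
import time

import ollama  # pip install ollama -- talks to the local Ollama server

# Placeholder prompt purely for benchmarking; any coding question works.
prompt = "Write a Python function that reverses a string."

start = time.time()
response = ollama.chat(
    model="codellama:7b",  # assumption: this tag is pulled; swap in a smaller/quantized tag to compare
    messages=[{"role": "user", "content": prompt}],
)
elapsed = time.time() - start

print(response["message"]["content"])
print(f"Wall-clock time: {elapsed:.1f}s")
```

Running the same script on both machines would make the Windows-vs-M2 gap concrete.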