What configuration do you suggest for running local LLMs?
I have a Windows 10 machine with 16GB of RAM and run Ollama inside WSL2.
I tried several 7B models, including codellama, and the responses are VERY slow.
Some 3B models are only slightly better.
This Windows PC does not have a GPU.
OTOH, my work MacBook Pro M2 with 16GB of RAM has respectable response times.
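To make "slow" concrete, here is a minimal timing sketch against Ollama's local REST API (assuming the default endpoint on port 11434 and a `codellama:7b` tag that has already been pulled; swap in whatever tag you are testing). Running the same script on both machines should give comparable tokens/sec numbers:

```python
import json
import time
import urllib.request

# Time one non-streaming completion against Ollama's local REST API.
# Assumes the default endpoint (http://localhost:11434) and that the
# model tag below has already been pulled with `ollama pull`.
MODEL = "codellama:7b"  # substitute whichever tag you are testing
PROMPT = "Write a Python function that reverses a string."

payload = json.dumps({"model": MODEL, "prompt": PROMPT, "stream": False}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.time()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
elapsed = time.time() - start

# eval_count is Ollama's own count of tokens generated for this response.
tokens = body.get("eval_count", 0)
print(f"wall time: {elapsed:.1f}s, tokens: {tokens}, {tokens / elapsed:.1f} tokens/sec")
```

Because `eval_count` comes from Ollama's own response metadata, the tokens/sec figure reflects actual generation speed rather than request overhead.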