DEV Community

Discussion on: How to run Llama 2 on anything

Chandler • Edited

I would start by making sure your source code is up to date and recompiling, as well as double-checking the path to your model file.

I don't know much about getting Llama.cpp working on Intel Macs, but I'd try running it without Metal enabled (set `-ngl` to 0) and see if that works. The only reports I can find of Llama.cpp on Intel Macs are a few issues saying it doesn't work, and in those the users were trying to get it running on a different GPU in their system (both were iMacs with additional GPUs installed). If the above advice doesn't help, I'd advise opening an issue on the Llama.cpp GitHub.
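To put the above steps together, something like this is what I mean — the clone location and model filename are placeholders, so swap in your own paths:

```shell
# Update and rebuild llama.cpp from source (adjust the path to your checkout)
cd ~/llama.cpp
git pull
make clean && make   # plain CPU build, no Metal flags

# Sanity-check that the model file actually exists at the path you pass in
ls -lh ./models/llama-2-7b.Q4_K_M.gguf

# Run with zero layers offloaded to the GPU, i.e. CPU only
./main -m ./models/llama-2-7b.Q4_K_M.gguf -ngl 0 -p "Hello"
```

If the CPU-only run works, the problem is likely Metal on the Intel GPU rather than the model or the build itself.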