I would start by making sure your source code is up to date and recompiling, as well as double-checking the path to your model file.
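Something like this, as a rough sketch — this assumes you cloned the repo and are using the stock Makefile build (adjust if you built with CMake), and the model path is just a placeholder:

```sh
cd llama.cpp
git pull                    # bring the source up to date
make clean && make          # full recompile from scratch

# Verify the model file actually exists at the path you're passing in
# (hypothetical path -- substitute your own):
ls -lh ./models/your-model.gguf
```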
I don't know much about getting Llama.cpp working on Intel Macs, but I'd try running it without Metal enabled (set `-ngl` to 0) and see if that works. The only reports I can find about Llama.cpp on Intel Macs are a couple of issues saying it doesn't work, from people trying to get it running on a discrete GPU (both were iMacs with additional GPUs installed). If you can't get it working with the above advice, I'd suggest opening an issue on the Llama.cpp GitHub.
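For reference, a CPU-only run looks something like this — `-ngl 0` offloads zero layers to the GPU, so Metal never gets involved. The binary name and model path here are assumptions (older builds ship `./main`, newer ones `llama-cli`):

```sh
# -m: model path (placeholder), -p: prompt, -n: tokens to generate,
# -ngl 0: keep all layers on the CPU, bypassing Metal entirely
./main -m ./models/your-model.gguf -p "Hello" -n 32 -ngl 0
```

If that runs cleanly, the problem is likely in the Metal backend on your Intel GPU rather than in your build or model file.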