This space is crazy. A bit more than a month after you published this post, and GGML models are no longer supported by llama.cpp.
Not sure if TheBloke has already released the new GGUF models, but he said he would soon.
The quickest way for me to test was to roll back to the recommended commit discussed here: huggingface.co/TheBloke/Llama-2-13...
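For anyone else hitting this, a minimal sketch of what that rollback looks like. The commit hash below is a placeholder, not the actual one — substitute the commit recommended on the model card linked above:

```shell
# Sketch: build llama.cpp at an older commit that still loads GGML models.
# COMMIT is a placeholder -- replace it with the hash from the model card.
COMMIT=<commit-from-model-card>

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout "$COMMIT"   # detach HEAD at the pre-GGUF commit
make                     # build with the legacy GGML loader
```

Pinning to a commit like this keeps old GGML files working until the GGUF versions of the models you need are published.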
Really appreciate your post. It helped me run Llama 2 locally for the first time ever. If you find the time, it would be helpful for others if you updated the model references.
TheBloke hasn't updated all the models yet; this one worked for me: huggingface.co/substratusai/Llama-...
I will update the article right now. Thanks for reaching out and reminding me to update this!