Discussion on: How to run Llama 2 on anything

U J (undraftedjogger)

This space is crazy. A bit more than a month after you published this post, GGML models are no longer supported by llama.cpp.

Not sure whether TheBloke has released the new GGUF models yet, but he said he would soon.

The quickest way for me to test was to roll back to the recommended commit discussed here: huggingface.co/TheBloke/Llama-2-13...

Really appreciate your post. It helped me run Llama 2 locally for the first time ever. If you find the time, it would help others if you updated the model references.

Chandler (chand1012), TimeSurge Labs

TheBloke hasn't updated all the models yet, but this one worked for me: huggingface.co/substratusai/Llama-...
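
If anyone wants a quick sanity check once they have a GGUF file downloaded, here's a minimal llama-cpp-python sketch (the model filename below is just a placeholder for whichever GGUF file you grab; a recent llama-cpp-python release is assumed, since older ones only read GGML):

```python
# Minimal sketch: load a GGUF model with llama-cpp-python and run one prompt.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_K_M.gguf",  # placeholder path to your GGUF file
    n_ctx=2048,  # context window size
)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],  # stop generating at the next question or newline
)
print(output["choices"][0]["text"])
```

If loading fails with a format error, upgrading llama-cpp-python (or rebuilding llama.cpp from a current commit) is usually the fix, since GGUF support landed after the GGML cutoff.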

I'll update the article right now. Thanks for reaching out and reminding me to update it!