Discussion on: Rise of Local LLMs?

Chris Dawkins

I've been using Ollama + ROCm with my fairly underpowered RX 580 (8 GB) and have had a lot of success with different models. I'm surprised at how well everything works, and I can see myself building a home server dedicated to AI workloads in the relatively near future.
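
For anyone curious what "using Ollama" looks like beyond the CLI, here's a minimal sketch of talking to a locally running Ollama server from Python. It assumes Ollama is listening on its default port (11434) and that you've already pulled a model; the model name below is just a placeholder, swap in whatever you've downloaded:

```python
import json
import requests

# Ollama's default local endpoint; adjust if you've changed OLLAMA_HOST.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    """Stream a completion from a locally running Ollama server."""
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt},
        stream=True,
        timeout=300,
    )
    response.raise_for_status()

    parts = []
    # Ollama streams newline-delimited JSON objects, one chunk per line,
    # each carrying a "response" fragment and a "done" flag at the end.
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))
```

Streaming matters more on modest GPUs like the RX 580, since you start seeing tokens immediately instead of waiting for the full generation to finish.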