Thanks to Ollama, DeepSeek R1 can now run locally on a Mac without a dedicated GPU. This guide is written for Mac users, but the same steps work on Windows, Linux, and any other platform Ollama supports.
Steps to Use the DeepSeek R1 Model on a Mac
- Install Ollama using Homebrew
Open your terminal and run:
brew install ollama
For other operating systems, see the Ollama downloads page (https://ollama.com/download).
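As a quick sanity check after installing, you can confirm the CLI is available and start the background server. The brew services option assumes you installed via Homebrew as above; running ollama serve directly in a terminal works too.

# Verify the install (prints the Ollama version)
ollama --version

# Option A: run Ollama as a background service managed by Homebrew
brew services start ollama

# Option B: run the server manually in a dedicated terminal window
ollama serve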
- Run DeepSeek R1
Once Ollama is installed, start DeepSeek R1 with:
ollama run deepseek-r1:8b
The model will be downloaded the first time you run it (approximately 4.9 GB). Subsequent runs skip the download and start much faster.
To exit the session, type /bye.
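Beyond the interactive session, a couple of variations can be handy. The one-shot form below passes the prompt as a command-line argument, and the curl call talks to the local REST API that Ollama exposes on port 11434; the prompt text here is just a placeholder.

# One-shot prompt: print the answer and exit instead of opening a chat session
ollama run deepseek-r1:8b "Explain the difference between a process and a thread."

# The same request over the local HTTP API (Ollama listens on port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain the difference between a process and a thread.",
  "stream": false
}'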
Usage Recommendations
- Choose a model size appropriate for your Mac's specifications (see the example commands after this list)
- Start with smaller models first to test performance
- Monitor system resources during initial usage
- Ensure adequate free storage space for model downloads
- Keep Ollama running in the background
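To act on these recommendations, note that the deepseek-r1 family on Ollama ships in several parameter sizes, so you can step up gradually, and ollama ps shows what is currently loaded and how much memory it is using. The download sizes and RAM figures in the comments are approximate.

# Pull progressively larger variants as your hardware allows
ollama run deepseek-r1:1.5b   # roughly 1 GB download, runs on almost any Mac
ollama run deepseek-r1:8b     # the ~4.9 GB model used in this guide
ollama run deepseek-r1:14b    # needs roughly 16 GB of RAM or more

# See which models are loaded and their current memory footprint
ollama ps

# Check free disk space before pulling a large model
df -h ~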
Notes and Observations
• Session Context: The model maintains a deep conversational context within a session. It is difficult to fully reset this context through prompting alone, so starting a fresh session, or restarting the Ollama service, is the most reliable way to switch topics (see the commands after these notes).
• Model Size: While DeepSeek R1 (8B) is practical for most users, larger models like DeepSeek V3 (671B) are far less feasible due to their size and resource requirements; anything above roughly 70B parameters generally requires a multi-machine cluster rather than a single Mac.
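A minimal way to handle the session-context issue from the first note, assuming you are inside the interactive prompt: /clear wipes the current session's context, and restarting the service gives you a completely clean slate.

# Inside the interactive session:
/clear                        # drop the accumulated session context
/bye                          # exit the session entirely

# From a regular shell, restart the background server for a clean slate
brew services restart ollama  # if installed via Homebrew as above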
Troubleshooting
- For large models, consider closing other resource-intensive applications
- If a model fails to load, check your available system memory
- If experiencing issues, try restarting Ollama (see the commands below)
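A few concrete commands for the checks above; the log path is the default location on macOS and may differ depending on how you run Ollama.

# Check how much memory the loaded model is using
ollama ps

# Restart the Ollama service (Homebrew install)
brew services restart ollama

# Inspect the server log for load failures (default macOS location)
tail -n 50 ~/.ollama/logs/server.log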
Conclusion
Remember that model downloads are persistent — once downloaded, you can use them offline in future sessions.
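To see what is already on disk, and to reclaim space when a model is no longer needed, the following commands help; downloaded models live under ~/.ollama/models by default.

# List downloaded models and their sizes
ollama list

# Remove a model you no longer need to free disk space
ollama rm deepseek-r1:8b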