
NAEEM HADIQ

Posted on • Originally published at Medium

Running DeepSeek Models on a Mac Quickly

DeepSeek R1 can now run seamlessly on a Mac without requiring a GPU, thanks to Ollama. This solution is tailored for Mac users but also works on Windows, Linux, and other platforms supported by Ollama.

Steps to Use the DeepSeek R1 Model on a Mac

  1. Install Ollama using Homebrew

Open your terminal and run:

brew install ollama

For other operating systems, visit: Ollama Downloads.
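If the server does not start automatically after installation, you can run Ollama as a background service with Homebrew. A minimal sketch, assuming the Homebrew install above:

# start Ollama as a background service
brew services start ollama

# confirm the CLI is on your PATH
ollama --version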

  2. Run DeepSeek R1

Once Ollama is installed, start DeepSeek R1 with:

ollama run deepseek-r1:8b

The model will be downloaded the first time you run it (approximately 4.9 GB). Subsequent uses will be faster.

To exit the session, type /bye.
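`ollama run` also accepts a prompt as an argument, which is handy for a quick one-off test without entering the interactive session (the prompt text here is just an example):

# run a single prompt and print the response
ollama run deepseek-r1:8b "Explain the quicksort algorithm in two sentences."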

Usage Recommendations

  1. Choose a model size appropriate for your Mac’s specifications
  2. Start with smaller models first to test performance
  3. Monitor system resources during initial usage (see the commands below)
  4. Ensure adequate free storage space for model downloads
  5. Keep Ollama running in the background
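
Ollama can also report on itself, which helps with the monitoring mentioned above. Both commands are part of the standard Ollama CLI:

# list downloaded models and their size on disk
ollama list

# show which models are currently loaded in memory
ollama ps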

Notes and Observations

Session Context: The model maintains a deep context within an interactive session, and it is hard to make it abandon that context through prompting alone. If you need a clean slate, it is simplest to end the session and start a new one.
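
For finer control over context, Ollama also exposes a local HTTP API (on port 11434 by default), where each /api/generate request is independent unless you explicitly pass the previous context back. A minimal sketch, assuming the default port and the model tag used above:

# one-shot request that carries no session context
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'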

Model Size: While DeepSeek R1 (8B) is practical for most users, larger models like DeepSeek V3 (671B) are far less feasible due to their size and resource requirements. Which model fits depends mainly on your Mac’s unified memory; as a rough guide, anything above 70B would require a cluster of machines rather than a single MacBook.
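
DeepSeek R1 is published on Ollama in several distilled sizes, so you can scale down or up to match your hardware. The tags below are as listed on the Ollama model page and may change over time:

# smaller distills for lower-memory Macs
ollama run deepseek-r1:1.5b
ollama run deepseek-r1:7b

# larger distills for high-memory machines
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b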

Troubleshooting

  • For large models, consider closing other resource-intensive applications
  • If a model fails to load, check your available system memory
  • If experiencing issues, try restarting Ollama (see below)
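
How you restart Ollama depends on how it was started; if you installed it with Homebrew as in step 1, the background service can be restarted with brew services:

# restart the Ollama background service
brew services restart ollama

# verify it is running again
brew services list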

Conclusion

Remember that model downloads are persistent — once downloaded, you can use them offline in future sessions.
