If you want to run DeepSeek R1 locally on your own system, there's no need to worry. This guide walks you, step by step, through getting it running with Ollama and ChatboxAI.
🖥️ System Requirements (Based on GPU/RAM)
Each model has different hardware requirements, so first, check which model your system can support:
Model | GPU Required | VRAM (GPU Memory) | RAM (System Memory) | Storage (Disk)
---|---|---|---|---
DeepSeek R1 1.5B | No GPU / integrated GPU | 4GB+ | 8GB+ | 10GB+
DeepSeek R1 7B | GTX 1650 / RTX 3050 | 6GB+ | 16GB+ | 30GB+
DeepSeek R1 14B | RTX 3060 / RTX 4060 | 12GB+ | 32GB+ | 60GB+
DeepSeek R1 32B | RTX 4090 / A100 | 24GB+ | 64GB+ | 100GB+
- 📌 If your system has a GTX 1650 or lower, you can only run DeepSeek R1 1.5B, or 7B at most.
- 📌 For 7B, at least 16GB of RAM is required.
- 📌 If your GPU is below a GTX 1650 (or integrated), stick to 1.5B to avoid crashes.
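Not sure which row of the table applies to you? On machines with an NVIDIA card, the `nvidia-smi` tool (it ships with the NVIDIA driver) reports your GPU model and total VRAM; for AMD or integrated graphics, check Task Manager's Performance tab on Windows instead:

```
# Reports GPU name, driver version, and total/used VRAM (NVIDIA GPUs only)
nvidia-smi
```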
⚙️ Step-by-Step Installation Guide
1️⃣ Install Ollama (Runs LLMs Locally)
Ollama is a lightweight tool for running LLMs (Large Language Models) on your own machine. Install it first:
👉 For Windows Users:
- Download the installer from ollama.com/download and run it (just click Next → Next).
- Open CMD and verify the installation by running:
ollama --version
If this prints a version number, the installation is complete.
👉 For Mac/Linux Users:
- On macOS, download the installer from ollama.com/download and run it.
- On Linux, open a terminal and run the official install script:
curl -fsSL https://ollama.com/install.sh | sh
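Whichever OS you're on, you can confirm everything is wired up before pulling any models. `ollama list` talks to the local Ollama server, so on a fresh install it should print an empty table rather than an error:

```
# Succeeds (with an empty model table) if the Ollama server is reachable
ollama list
```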
2️⃣ Download the DeepSeek R1 Model
Use the following command to pull the model:
ollama pull deepseek-r1:7b
- 📌 If you want to run 1.5B instead of 7B, use:
ollama pull deepseek-r1:1.5b
⏳ The download may take some time depending on your internet speed. Once it finishes, you can run the model with Ollama.
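Once the pull finishes, you can inspect what you downloaded; `ollama show` prints the model's metadata, such as architecture, parameter count, and quantization:

```
# Display details of the downloaded model
ollama show deepseek-r1:7b
```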
3️⃣ Install ChatboxAI (Optional GUI for a Better Experience)
If you prefer a Graphical User Interface (GUI) over the terminal, ChatboxAI is an easy way to chat with local models: install the desktop app, then select Ollama as the model provider in its settings so it talks to your local Ollama server.
🔗 ChatboxAI download: https://chatboxai.app
If you'd rather have a browser-based GUI, the steps below set up text-generation-webui instead:
- Ensure Python 3.10+ is installed.
- Open Command Prompt (CMD) and run:
git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
- Start the server:
python server.py
- Open your browser, go to localhost:7860, and select your model.
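Whichever GUI you choose, it talks to the Ollama server over its local HTTP API, which listens on port 11434 by default. Before configuring a GUI, you can confirm the server is reachable (assuming the default port):

```
# Returns a JSON list of your local models if the Ollama server is running
curl http://localhost:11434/api/tags
```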
🚀 Running DeepSeek R1 (Final Step)
Once everything is installed, it's time to run the model:
👉 Open CMD and run:
ollama run deepseek-r1:7b
👉 If 7B won't run on your hardware, try 1.5B:
ollama run deepseek-r1:1.5b
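You can also skip the interactive chat and pass a prompt directly as an argument; Ollama prints one response and returns to the shell:

```
# One-shot prompt: generates a reply and exits
ollama run deepseek-r1:1.5b "Write a Python function that reverses a string."
```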
👉 If you set up a GUI, open ChatboxAI (or the web UI in your browser) and chat with the model there.
Now you can use DeepSeek R1 for coding, AI chat, and optimizing your workflow! 🚀🔥
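If you want to call the model from scripts or your own tools, Ollama also exposes a simple REST API. A minimal sketch against the /api/generate endpoint (the prompt here is just an example):

```
# Non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Explain the difference between RAM and VRAM in two sentences.",
  "stream": false
}'
```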
🛠️ Common Problems & Solutions
❌ 1️⃣ Model crashes due to low VRAM?
✅ Try 1.5B instead of 7B.
✅ Increase the Windows pagefile (Virtual Memory settings).
❌ 2️⃣ Model responses too slow?
✅ Use an SSD instead of an HDD.
✅ Close background applications.
✅ Free up RAM.
❌ 3️⃣ "Command not found" error in CMD?
✅ Check that Ollama is installed correctly and that its folder is on your PATH (see the check below).
✅ For the web UI, make sure Python and the dependencies are installed.
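A quick way to tell whether your shell can find the Ollama binary at all (a missing PATH entry is the usual culprit):

```
# Windows (CMD): prints the binary's location, or an error if it's not on PATH
where ollama

# macOS / Linux equivalent
which ollama
```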
🤩 Conclusion
If you followed this guide correctly, you can now run DeepSeek R1 locally without relying on third-party APIs. This is a privacy-friendly and cost-effective solution, perfect for developers and freelancers.
If you face any issues, drop a comment, and you'll get help! 🚀🔥