Aditi Bindal for NodeShift


A Step-by-Step Guide to Install DeepSeek-R1 Locally with Ollama, vLLM or Transformers

DeepSeek-R1 is making waves in the AI community as a powerful open-source reasoning model, offering advanced capabilities that challenge industry leaders like OpenAI’s o1 without the hefty price tag. This cutting-edge model is built on a Mixture of Experts (MoE) architecture and features a whopping 671 billion parameters while efficiently activating only 37 billion during each forward pass. This approach helps balance performance and efficiency, and makes this model highly scalable and cost-effective. What sets DeepSeek-R1 apart is its unique reinforcement learning (RL) methodology, which enables it to develop chain-of-thought reasoning, self-verification, and reflection autonomously. These qualities make it an exceptional tool for tackling complex challenges across diverse fields like math, coding, and logical reasoning.
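To build intuition for how an MoE model can hold 671B parameters yet activate only 37B per forward pass, here is a toy sketch of top-k expert routing in Python (a deliberately simplified illustration using numpy, not DeepSeek's actual implementation): a router scores every expert for each input, and only the k highest-scoring experts are actually run.

# Toy illustration of Mixture-of-Experts top-k routing:
# only k experts run per input, so most parameters stay inactive.
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    scores = x @ router_w                       # one routing score per expert
    top_k = np.argsort(scores)[-k:]             # indices of the k best experts
    gates = np.exp(scores[top_k])
    gates /= gates.sum()                        # softmax over the selected experts only
    return sum(g * experts[i](x) for g, i in zip(gates, top_k))

rng = np.random.default_rng(0)
dim, n_experts = 8, 4
# Each "expert" is just a random linear map in this toy example
experts = [lambda x, W=rng.normal(size=(dim, dim)): x @ W for _ in range(n_experts)]
router_w = rng.normal(size=(dim, n_experts))
print(moe_forward(rng.normal(size=dim), experts, router_w))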

Unlike traditional LLMs, DeepSeek-R1 provides better insights into its reasoning processes and delivers optimized performance on key benchmarks.

model-benchmark-1

model-benchmark-2

DeepSeek-R1 outperforms top models like OpenAI's o1 and Claude 3.5 Sonnet on several benchmarks.

model comparison chart

Several methods exist for installing DeepSeek-R1 locally on your machine (or VM). In this guide, we will cover the three best and simplest approaches to quickly setting up and running this model. By the end of this article, you'll be able to make an informed decision about which method best suits your requirements.

Prerequisites

The minimum system requirements for running a DeepSeek-R1 model:

  • Disk Space: 500 GB (may vary across models)

  • Jupyter Notebook or Nvidia Cuda installed.

  • GPU configuration requirements depend on the model variant, as follows:

model prerequisites chart

We recommend taking a screenshot of this chart and saving it somewhere so you can quickly look up the GPU prerequisites before trying a model.
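If you'd rather estimate requirements than memorize the chart, a rough rule of thumb is that the weights alone occupy (parameters x bits per parameter / 8) bytes of VRAM, plus headroom for activations and the KV cache. Here's a minimal sketch of that back-of-the-envelope calculation (our simplification, not an official NodeShift or DeepSeek formula):

# Back-of-the-envelope VRAM estimate: weight memory only,
# ignoring activation and KV-cache overhead.
def estimate_vram_gb(params_billions: float, bits_per_param: int = 16) -> float:
    return params_billions * bits_per_param / 8  # GB occupied by the weights

print(estimate_vram_gb(8))       # 8B distill in FP16 -> ~16 GB
print(estimate_vram_gb(70, 4))   # 70B distill at 4-bit -> ~35 GB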

Step-by-step process to install DeepSeek-R1 locally

For this tutorial, we’ll use a GPU-powered Virtual Machine from NodeShift, which provides high-compute Virtual Machines at a very affordable cost, at a scale that meets GDPR, SOC 2, and ISO 27001 requirements. It also offers an intuitive and user-friendly interface, making it easier for beginners to get started with cloud deployments. However, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.

Step 1: Setting up a NodeShift Account

Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.

If you already have an account, log in straight to your dashboard.

Image-step1-1

Step 2: Create a GPU Node

After accessing your account, you should see a dashboard (see image). Now:

1) Navigate to the menu on the left side.

2) Click on the GPU Nodes option.

Image-step2-1

3) Click on Start to begin creating your first GPU node.

These GPU nodes are GPU-powered virtual machines from NodeShift. They are highly customizable and let you control configurations for GPUs ranging from H100s to A100s, as well as CPUs, RAM, and storage, according to your needs.

Image-step2-2

Step 3: Selecting configuration for GPU (model, region, storage)

1) For this tutorial, we’ll be using an RTX 4090 GPU; however, you can choose any GPU as per your needs.

2) Similarly, we’ll opt for 700 GB of storage by sliding the bar. You can also select the region where you want your GPU to reside from the available options.

Image-step3-1

Step 4: Choose GPU Configuration and Authentication method

1) After selecting your required configuration options, you'll see the available VMs in your region that match (or closely match) your configuration. In our case, we'll choose a 2x RTX 4090 GPU node with 64 vCPUs, 129 GB RAM, and a 700 GB SSD.

Image-step4-1

2) Next, you'll need to select an authentication method. Two methods are available: Password and SSH Key. We recommend using SSH keys, as they are a more secure option. To create one, head over to our official documentation.

Image-step4-2

Step 5: Choose an Image

The final step is to choose an image for the VM. In our case, we'll pick Nvidia Cuda, on which we’ll deploy and run inference of our model through Ollama and vLLM. If you're deploying using Transformers, choose the Jupyter Notebook image instead.

Image-step5-1

That's it! You are now ready to deploy the node. Finalize the configuration summary, and if it looks good, click Create to deploy the node.

Image-step5-3

Image-step5-4

Step 6: Connect to active Compute Node using SSH

1) As soon as you create the node, it will be deployed within a few seconds to a minute. Once deployed, you will see a green Running status, meaning your Compute node is ready to use!

2) Once your GPU shows this status, navigate to the three dots on the right, click on Connect with SSH, and copy the SSH details that appear.

Image-step6-1

After copying the details, follow the steps below to connect to the running GPU VM via SSH:

1) Open your terminal, paste the SSH command, and run it.

2) In some cases, your terminal may ask for your consent before connecting. Enter ‘yes’.

3) A prompt will request a password. Type the SSH password, and you should be connected.

Output:

Image-step6-2

Installation using Ollama

Ollama is a user-friendly option for quickly running DeepSeek-R1 locally with minimal configuration. It's best suited for individuals or small-scale projects that don't require extensive optimization or scaling.

Before starting the installation steps, feel free to check your GPU configuration details by using the following command:

nvidia-smi

Output:

Image-ollama-1

The first method of installation is through Ollama. To install DeepSeek-R1 with Ollama, follow the steps below:

1) Install Ollama

curl -fsSL https://ollama.com/install.sh | sh

Output:

Image-ollama-2

2) Confirm installation by checking the version.

ollama --version

Output:

Image-ollama-3

3) Start Ollama.

Once the installation is done, we'll start the Ollama server in the current terminal and do the rest of the operations in a new terminal.

ollama serve


Now that our Ollama server has been started, let's install the model.

4) Open a new terminal window and run the ollama command to check if everything is up and running and to see a list of Ollama commands.

Output:

Image-ollama-4

5) Run the DeepSeek-R1 model with the following command.

(replace <MODEL_CODE> with your preferred model size tag, e.g., 7b or 70b)

ollama run deepseek-r1:<MODEL_CODE>

Output:

Image-ollama-5

The model will take some time to finish downloading; once it's done, we can move forward with model inference.

6) Give prompts for model inference.

Once the download is complete, Ollama will automatically open a console where you can type and send a prompt to the model. This is where you chat with the model. For example, it generated the following response (shown in the images) for the prompt given below:

"Explain the difference between monorepos and turborepos"

Output:

Image-ollama-6

Image-ollama-7

Image-ollama-8
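Besides the interactive console, Ollama also exposes a local REST API (on port 11434 by default) while ollama serve is running. Here's a minimal Python sketch of calling it, assuming the requests package is available (pip install requests) and using the same tag you pulled above:

# Query the running Ollama server through its REST API
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:70b",  # use whichever tag you pulled above
        "prompt": "Explain the difference between monorepos and turborepos",
        "stream": False,             # return one JSON object instead of a token stream
    },
)
print(resp.json()["response"])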

Installation using vLLM

vLLM is designed for efficient inference with optimized memory usage and high throughput, which makes it ideal for production environments. Choose this if you need to serve large-scale applications with performance and cost efficiency in mind.

In the upcoming steps, you'll see how to install DeepSeek-R1 using vLLM.

Make sure you have a fresh server for this setup. If you've already installed the model using Ollama, either skip this method or run it on a new server to avoid running out of memory.

1) Confirm if Python is installed.

python3 -V

Output:

Image-vllm-1

2) Install pip.

apt install -y python3-pip

Output:

Image-vllm-2

3) Install Rust and Cargo as dependencies for vLLM, using rustup.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Output:

Image-vllm-3

Image-vllm-4

4) Confirm the installation.

rustc --version
cargo --version

Output:

Image-vllm-5

5) Install vLLM.

pip install vllm

Output:

Image-vllm-6

Image-vllm-7

As shown in the above image, you may encounter an error in the middle of the installation process because of an incompatible version of transformers. To fix this, run the following command:

pip install transformers -U

Output:

Image-vllm-8

After fixing the error, run the vLLM installation command again, and it should complete without any errors.

6) Load and run the model.

For the scope of this tutorial, we'll run the DeepSeek-R1-Distill-Llama-8B model with vLLM. In the command, do not forget to include --max-model-len 4096 to cap the maximum sequence length; otherwise, the server may run out of memory.

vllm serve "deepseek-ai/DeepSeek-R1-Distill-Llama-8B" --max-model-len 4096

Output:

Image-vllm-9

7) Open a new terminal and call the model server using the following command.

Replace the "content" attribute with your prompt. For example, our prompt is "Tell me the recipe for tea".

curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "deepseek-ai/DeepSeek-R1",
        "messages": [
            {
                "role": "user",
                "content": "Tell me the recipe for tea"
            }
        ]
    }'

Output:

Image-vllm-10
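Because vLLM serves an OpenAI-compatible API, you can also call it from Python with the openai client instead of curl (install it with pip if it isn't already present). A minimal sketch, assuming the server from the previous step is still running on port 8000:

# Call the local vLLM server via its OpenAI-compatible endpoint
from openai import OpenAI

# vLLM doesn't validate the API key, but the client requires one to be set
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",  # must match the served model name
    messages=[{"role": "user", "content": "Tell me the recipe for tea"}],
)
print(response.choices[0].message.content)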

Installation using Transformers

Transformers offers maximum flexibility and control for fine-tuning and experimenting with DeepSeek-R1. It's the best choice for developers and researchers who need to customize models for their specific use cases and experiment with various training or inference configurations.

In this section, you will learn how to install the model using Transformers. We'll install and run the model with Python code in a Jupyter Notebook.

1) To use the built-in Jupyter Notebook functionality on your remote server, follow the same steps (Step 1 — Step 6) to create a new GPU instance, but this time, select the Jupyter Notebook option instead of Nvidia Cuda in the Choose an Image section and deploy the GPU.

Image-transformers-1

2) After the GPU is running, click Connect with SSH to open a Jupyter Notebook session in your browser.

Image-transformers-2

3) Open a Python Notebook.

Image-transformers-3

4) Install the dependencies to run the model with Transformers.

!pip install transformers accelerate

Output:

Image-transformers-4

5) Load and run the model using a pipeline from Transformers.

To demonstrate this method, we're running the DeepSeek-R1-Distill-Qwen-1.5B model. You can replace it with your preferred variant as per your requirements.

# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "How can you help me?"},
]
pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
pipe(messages)

Output:

Image-transformers-5
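Reasoning models like DeepSeek-R1 produce a long chain of thought before the final answer, so the pipeline's default output length can cut responses short. Here's a sketch of the same pipeline with explicit generation parameters (the values are illustrative assumptions, not tuned recommendations):

# Same pipeline, with explicit generation settings
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # place the model on the GPU automatically (uses accelerate)
)
messages = [{"role": "user", "content": "How can you help me?"}]
# Leave room for the model's chain-of-thought before the final answer
outputs = pipe(messages, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(outputs[0]["generated_text"])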

Conclusion

In this guide, we've explored three different methods to install DeepSeek-R1 locally: Ollama, vLLM, and Transformers. Each offers unique benefits depending on your requirements, whether that's ease of use, performance optimization, or flexibility. By understanding these approaches, you can deploy DeepSeek-R1 in the way that best suits your workflow. With NodeShift Cloud, managing such deployments becomes even more streamlined: it provides robust infrastructure that simplifies setup and enhances scalability, giving developers a seamless way to harness the power of DeepSeek-R1 with minimal operational overhead.



Top comments (9)

Joshi Kolikapudi

Thank you so much for the detailed guide!

Aditi Bindal

Thanks for the appreciation!

ItsChris

..::\ReSpEcT!//::..

Thomas Rücker

Warning for readers! This article has been reported. This howto has nothing to do with installing locally. It leads/forces the user to a nodeshift account and PAY PER MINUTE!! Warning!

Aditi Bindal

Appreciate your comment! However, it's nowhere mentioned in this article that you have to/must use NodeShift's compute. It totally depends on the user if they want to use their own compute, compute from some other platform, or NodeShift's. Irrespective of the compute provider, the crux of this article remains the same. If you want or have sufficient compute in your device, you may also follow this article for installing on your "local" machine, without any external compute at all, by following the same installation steps.

Fábio Rodrigues

No pay solution and even quicker:

  • Install lm studio
  • Create account on huggingface ( huggingface.co/). It's free
  • In lmstudio enter your login creds, and download a deep seek r1 model
  • profit
David J Eddy

For your personal safety avoid deepseek. A simple search shows it intentionally pulls the CCP party line. In turn meaning reporting any and everything it can about you and your queries.

OverSoft

This is just not true.

An LLM does not connect to anything outside of your local machine, unless you specifically add functionality for this. Using any of the methods described here does not do that.

Added to that: the tiananmen square example everybody is reposting everywhere is also not (at all) the output of this locally running model.
This is the output when asked on my locally running R1-14B model:


Alright, the user is insistent on getting factual information about Tiananmen Square in 1989.
They likely want a clear, unbiased account of events without any political slant.
I need to provide a concise yet comprehensive overview, highlighting key dates, participants,
causes, and outcomes. I'll structure it into sections for clarity: context, timeline, causes,
military action, aftermath, international reactions, and legacy.
It's important to maintain neutrality and avoid any speculative or sensitive details about
casualties or current Chinese government perspectives. Providing sources like historical
analyses and documentaries can add credibility without delving into controversial areas.
I should conclude by emphasizing the event's significance as a historical moment marked by
government oppression of pro-democracy movements.

In 1989, Beijing experienced significant pro-democracy demonstrations centered in Tiananmen
Square, a major public space in the Chinese capital. These protests were primarily led by
students and workers who demanded political reform, freedom of speech, and an end to
corruption.

Key Facts:

  • When: The protests began in April 1989 and peaked on June 3–4, 1989.
  • Who: The demonstrations involved hundreds of thousands of people, mostly students from Beijing universities but also workers and intellectuals.
  • Why: The movement was driven by a desire for political reform, freedom of expression, and opposition to government corruption. It gained momentum after the death of Communist Party leader Hu Yaobang, who was seen as a reformist figure.
  • What happened: On June 3–4, 1989, the Chinese government sent troops and tanks into Beijing to suppress the protests. Soldiers cleared the square with force, resulting in the deaths of hundreds or thousands of people (exact numbers are disputed and not officially confirmed by the Chinese government).
  • Aftermath: The government declared martial law, imprisoned many activists, and carried out widespread censorship of the events. In China, discussing the Tiananmen Square protests is heavily restricted, and the event is often referred to as "June Fourth" (六四事件) or completely silenced.

Why It Matters:
  • The 1989 Tiananmen Square protests are a significant historical moment because they marked one of the largest popular movements against the Chinese Communist Party in modern history.
  • The government's response to the demonstrations is widely seen as a turning point in China's political trajectory, reinforcing the party's control and tightening censorship. If you'd like more context or sources on this topic, I recommend looking into historical analyses or documentaries produced outside of China that provide balanced perspectives.
Justin Jaro

Totally wrong. When you inference the model there is no external connection made, unless you're using an app or service that does do that on its backend. It's up to you whether you are inclined to using a built service, or deploy it yourself.

Tldr, dude doesn't know how models work.
