DEV Community

Clinton Ekekenta for FastApply

Posted on • Edited on • Originally published at blog.fastapply.co


How to Add and Use Deepseek-r1 in Your Visual Studio Code (For Free!)

The AI revolution is happening, and Deepseek-r1 is at the forefront. This powerful Large Language Model (LLM) goes head-to-head with top AI models like GPT, excelling in reasoning, coding, and problem-solving, all while running right on your own machine. No more relying on expensive, cloud-based tools. With Deepseek-r1, you get a fast, private, and cost-effective coding assistant that’s always available when you need it.
After countless hours with tools like Cursor and other paid AI helpers, I decided to give Deepseek-r1 a shot. I discovered a game-changer: a seamless, free integration with Visual Studio Code that supercharged my workflow. Ready to dive in? Let me show you how to set it up step by step.

Why Deepseek-r1?

Before we jump into the setup, let's look at why you should consider using Deepseek-r1 as a developer:

  • You can run everything locally on your computer without a cloud provider.
  • It helps you solve complex coding tasks faster and smarter.
  • It performs well in code generation and debugging.

Installing Deepseek-r1 in VS Code

Let's proceed to install Deepseek-r1 in your Visual Studio Code environment. To do that, follow the steps below:

Step 1: Install Ollama

To get started, you’ll need Ollama, a lightweight platform that lets you run LLMs locally. Ollama is the backbone of your Deepseek-r1 setup because it will enable you to manage and run Deepseek-r1 effortlessly on your computer.

To install Ollama, head over to Ollama’s official website and download the version for your OS. Then follow their installation instructions to get it up and running.
Downloading Ollama online
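Once the installer finishes, it's worth confirming that the `ollama` CLI is actually on your PATH before moving on. A minimal check (the exact version string you see will differ):

```shell
# Confirm the ollama CLI is installed and reachable from your shell.
# Prints the version if found; otherwise suggests re-running the installer.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found -- re-run the installer or check your PATH"
fi
```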

Step 2: Download Deepseek-r1

With Ollama installed, it’s time to bring Deepseek-r1 into your coding environment. Open your terminal and run the command below:

ollama pull deepseek-r1

This command downloads the Deepseek-r1 model to your local machine. The model is several gigabytes, so the download may take a while.
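Note that the bare `deepseek-r1` tag pulls a default-sized distilled variant, not the full model. Ollama also publishes the model under explicit size tags; check the model's page on ollama.com (or `ollama list` after pulling) for the tags actually available. A sketch, guarded so it is safe to run even where ollama isn't installed:

```shell
# Pull an explicit size tag instead of the default. Larger variants are
# stronger but need far more RAM/VRAM; verify available tags on the
# registry page first.
if command -v ollama >/dev/null 2>&1; then
  ollama pull deepseek-r1:1.5b   # smallest distill -- fine for modest hardware
  # ollama pull deepseek-r1:671b # the full model -- impractical on most machines
else
  echo "ollama not installed; skipping pull"
fi
```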

Installing Deepseek-r1 locally on your computer

Once the download is complete, test it with a simple query to ensure everything’s working. Run deepseek-r1 with the command:

ollama run deepseek-r1

Then type a test prompt in the interactive session:

Running Deepseek-r1 with a test prompt in the terminal

If you see a response, you’re all set! Deepseek-r1 is ready to roll.
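You can also smoke-test the model non-interactively by passing the prompt as a trailing argument to `ollama run`, which is handy in scripts. A sketch (guarded in case ollama isn't installed; the prompt is just an example):

```shell
# One-shot prompt: ollama run accepts the prompt as an argument and
# exits after printing the model's response.
if command -v ollama >/dev/null 2>&1; then
  ollama run deepseek-r1 "Write a one-line Python function that reverses a string."
else
  echo "ollama not installed; skipping smoke test"
fi
```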

Step 3: Install the Continue.dev Extension

Now, let’s bring Deepseek-r1 into Visual Studio Code. For this, we’ll use Continue.dev, a fantastic extension that connects VS Code to LLMs like Deepseek-r1. It acts as a bridge between VS Code and Ollama, letting you interact with Deepseek-r1 directly inside your editor. To install the Continue.dev extension, follow these steps:

  1. Open VS Code and go to the Extensions Marketplace.
  2. Search for Continue.dev and hit install.

Installing the Continue.dev extension in VS Code

Step 4: Configure Deepseek-r1 in Continue.dev

With Continue.dev installed, it’s time to connect it to Deepseek-r1. Follow the steps to configure it:

  • Open the Continue.dev interface by clicking its icon in the VS Code Activity Bar.
  • Look for the model selection button at the bottom-left corner of the chat window.
  • Click the button, select Ollama as the platform, and then choose Deepseek-r1 from the list of available models.

Configuring Deepseek on VS Code
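If you prefer editing the configuration directly, Continue stores its model list in `~/.continue/config.json` (newer versions use `config.yaml`). An entry along these lines registers the local model; the exact schema varies by Continue version, so treat this as a sketch rather than the definitive format:

```json
{
  "models": [
    {
      "title": "DeepSeek-R1 (local)",
      "provider": "ollama",
      "model": "deepseek-r1"
    }
  ]
}
```

After saving the file, reload the Continue panel and the model should appear in the model picker.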

That’s it! You’re now ready to harness the power of Deepseek-r1 in your coding workflow.

What Can You Do with Deepseek-r1?

Once everything is set up, here’s what Deepseek-r1 can do in your coding environment:

  • It gives you intelligent, context-aware suggestions as you type.

  • You can highlight a code block and ask Deepseek-r1 to optimize or rewrite it.

  • If you’re stuck on an error, Deepseek-r1 will help you troubleshoot it.

  • You can select any snippet and get a detailed breakdown of how it works.

Here’s a quick demo of how Deepseek-r1 works in Visual Studio Code.

The best part:

  • You don't need subscriptions or hidden fees, just free and powerful AI assistance.
  • Everything runs locally, so your code stays on your machine.
  • You can tailor Deepseek-r1’s behavior to fit your specific needs.

Final Thoughts

Integrating Deepseek-r1 into Visual Studio Code has been a game-changer for my productivity. It’s fast, reliable, and incredibly versatile, all without me having to spend a dime. Whether you’re a seasoned developer or just starting out, this setup is worth exploring.

So, what are you waiting for? Give Deepseek-r1 a try and experience the future of coding today.

Apply to 100+ LinkedIn & Indeed Jobs in Minutes using FastApply.

Happy coding! 🚀


Top comments (26)

Best Codes • Edited

Two things I would like to point out:

  • DeepSeek Coder is not the same as DeepSeek R1.

NOT R1

  • ollama run deepseek-r1 pulls the 7 billion parameter model, which is very weak. The best DeepSeek R1 model has 671 billion parameters. You would run it with ollama run deepseek-r1:671b, but most devices would be far too weak to run a model of this size.

Running DeepSeek R1 on a laptop will not compare to models like GPT-4o or Claude 3.5 Sonnet.

Frank Earl

It rather depends on the laptop...but yes. I suspect I could run the R1 on my laptop- but then...I don't have your average laptop...

Frank Earl

And, yes, we're talking the distilled version. MOST people won't handle the full-tilt beast on their HW.

gurup7

I am getting this error when I choose DeepSeek Coder and ask for coding suggestions:

HTTP 404 Not Found from 127.0.0.1:11434/api/chat

m zajbe

Did you solve this problem? I got the same problem.

gurup7

I changed it to deepseek-r1 in the config file. It is still not providing any coding suggestions.

Ranjan Barman

Same here

Frank Earl

That error looks like it’s looking for a local web server instance running the DeepSeek chat API.

Ayoola Damilare

Let me check it out

Luis "shadowtampa" Gomes

My experience with this tutorial: great that it introduced me to Ollama! But using DeepSeek made my laptop's (Samsung Odyssey) fans go WILD. Besides being helpful, the only advantage of this is the fast response time compared to the web model.

Артем Гасин

In the config you have to manually update the model version from the default "deepseek-7b" to "deepseek-r1"; then it will work.

Jens Frerichs • Edited

The models that really help coding need a lot of VRAM. Isn't the 7b Model (for example) too weak to compete with ChatGPT o1 when it comes to generating code?

DANIYAL KHAN

Deepseek-r1 is a powerful Large Language Model (LLM) that offers developers a fast, private, and cost-effective coding assistant by running directly on their local machines. This eliminates the need for expensive, cloud-based tools and ensures that your coding assistant is always available when needed.

To integrate Deepseek-r1 into Visual Studio Code, you'll first need to install Ollama, a lightweight platform that allows you to manage and run LLMs locally. After installing Ollama, you can proceed with setting up Deepseek-r1 in your coding environment.

Mo Barut

This is awesome thank you!

Aimitsumori Noreply

Great post!

Khush

Is it safe to install on an office system?

