
Using Deepseek r1 in Visual Studio Code for free.

Varnit Sharma on January 26, 2025

With the rapid advancement in Large Language Models (LLMs), Deepseek-r1 has emerged as a top contender. Matching the performance of the GPT-o1 mode...
Yair Even Or

DeepSeek's cut-off date is the end of 2023, which is ancient compared to Gemini 2.0 (August 2024).

It is imperative to have a recent cut-off date for front-end related work or when querying anything about recent things.

This is why I will NOT use it.

R Steadman

On the other hand, Gemini is rubbish, so there's that.

Amjad Abujamous

Agreed, based on personal experience.

Thread Thread
 
Brandon • Edited

What can you expect? It's free lol

Reid Lai

Gemini is BERT, which is a bidirectional model, so it's not the best for Q&A usage compared to decoder-only models, e.g. GPT, LLaMA, etc.

Thread Thread
 
Yair Even Or • Edited

So how do you explain that thousands of people have rated it so highly (for coding)?
I don't think they are all Google workers trying to boost their own product by rating it.

Thread Thread
 
Julien ✏️

Lol. Well, reconsider that. Why would Google want to be rated at the top of the leaderboard of a highly competitive field where perceived performance translates almost directly to investor money and market cap? 🤔🤔🤔🤔

Thread Thread
 
R Steadman

You'd almost think Yair is one of the shareholders :D Otherwise, no idea why he'd be so frantic about selling Gemini.

In his focus on some leaderboards, he also seems to forget that most coders are rubbish and wouldn't recognise bad code if it slapped them in the face. So their upvoting of a rubbish LLM doesn't hold any value.

Thread Thread
 
Yair Even Or

I haven't even been using Gemini since Claude Sonnet came out. I am just a Google-loving guy sitting at home, loving all their products so much and everything about that company. I never worked for them, nor will I ever, because I think the pressure of such a workplace is poison to the soul, and I lead a relaxed life... hiking and chilling.

You have made a very good point: 99% of coders are rubbish and this "contaminates" the leaderboard. But in terms of coding the voting is quite simple, because you can ask it to write code and either the output works as intended or it doesn't. It is rare that both AIs pitted against each other in the battle arena produce good working code.

Yair Even Or

Rubbish?? It's at the very TOP of the leaderboard:

lmarena.ai/?leaderboard

Thread Thread
 
R Steadman

It could be on top of every leaderboard in existence; it's still inferior for coding tasks to, say, Claude (and since this is dev.to, that's the relevant bit).

Thread Thread
 
Yair Even Or • Edited

It's at the top for coding on the world's most famous AI leaderboard website. It has been ranked there by thousands of developers, so that completely contradicts your one and only personal experience.

[Screenshot: lmarena.ai coding leaderboard rankings]

Thread Thread
 
R Steadman

Whatever you say buddy.

Julien ✏️

Honestly, that highly depends on how you work and develop. If you build for the long run, 1 or 2 years of "new things" missing from the model is not really a problem. You still have a brain to do that fine polishing if need be. My code will be out there in 10 years, so a few months of training data is not really what will keep the tool from helping me build something.

Pravin Jadhav • Edited

For Windows, follow these steps:

Setting Up Deepseek-r1 (Using Windows CMD)

  1. Install Ollama

    • Download Ollama from the official website.
    • Run the downloaded file and complete the setup.
  2. Download the Deepseek-r1 Model

    • Open CMD and run this command:
      ollama pull deepseek-r1
    • To test if the model is working, use this command:
      curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:latest\", \"prompt\": \"Why is the sky blue?\"}"
    • If you see output in the terminal, Deepseek-r1 is ready to go! (An alternative check using Ollama's own CLI is sketched just after this list.)
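As an alternative quick check, Ollama's own CLI works too (a minimal sketch, assuming the pull above completed; the commands below are standard Ollama CLI subcommands):

      REM See which models are installed locally - deepseek-r1 should be listed
      ollama list

      REM Chat with the model interactively right in CMD (type /bye to exit)
      ollama run deepseek-r1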


Setting Up the Continue.dev Extension

  1. Install Visual Studio Code (VS Code)

  2. Install the Continue.dev Extension

    • Open VS Code and go to the Extensions Marketplace.
    • Search for "Continue.dev" and install it.
  3. Connect Deepseek-r1 to Continue.dev

    • Open the Continue.dev extension.
    • Click the model selection button at the bottom-left corner.
    • Select "Ollama" and choose the "Deepseek-r1" model. (A quick way to confirm Ollama is exposing the model is sketched just after this list.)
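Before picking the model inside Continue.dev, it can help to confirm that the Ollama server is running and actually exposes the deepseek-r1 tag (a small sketch, assuming Ollama's default port 11434):

      REM Returns the locally installed models as JSON; deepseek-r1 should appear under "models"
      curl http://localhost:11434/api/tags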

Using Deepseek-r1

Now you can use Deepseek-r1’s features directly in VS Code:

  • Autocomplete: Get smart suggestions while writing code.
  • Code Refactoring: Ask the AI to optimize or rewrite your code.
  • Code Explanations: Understand what your code does with AI help.

Why Choose Deepseek-r1?

  • Logical Reasoning: Makes smarter decisions using logical tree reasoning.
  • Transformer Technology: Excels in code generation tasks.
  • Local Execution: Runs on your machine for better privacy and faster responses.

WhatsApp Status Message

"Deepseek-r1: Smarter than GPT, excelling in reasoning and code generation. 🚀

DeepseekR1 #AI #Coding #WindowsCMD"


Start using Deepseek-r1 in your workflow and enjoy smarter, faster coding! 🥂
(hashnode.com/@askpravinjadhav)

Tiago Rangel

This will not run Deepseek r1, but rather Deepseek coder. Even if you wanted to run Deepseek r1, you would need a lot of processing power — r1 can't just run on a laptop or PC.

Best Codes

r1 comes with a lot of variants:

[Screenshot: table of available deepseek-r1 variants and their sizes]

The 1.5b variant could very easily run on a weak or older device, and the 8b variant works fine on my device.
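If you want a specific size, the variant tag goes after the colon in the pull command (tags as listed on Ollama's registry for deepseek-r1; availability and exact sizes may change):

      REM Small distilled variant - fine for weak or older machines
      ollama pull deepseek-r1:1.5b

      REM Mid-size variant - a reasonable default for a typical desktop
      ollama pull deepseek-r1:8b

      REM Remove a variant you no longer need to free up disk space
      ollama rm deepseek-r1:1.5b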

Elliot Brenya sarfo

I am running deepseek-r1 on my old Intel MacBook.

Red Ochsenbein (he/him)

Which variant? I doubt the full 671b variant works on a MacBook. And everything under 32b is really only useful with fine-tuning for specialized tasks.

Thread Thread
 
Nguyễn Anh Nguyên

Yes

Red Ochsenbein (he/him)

Okay, I just checked: 'deepseek-r1:latest' is actually 'only' the 7b model. So, yeah, there's that.
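For anyone who wants to check this themselves, a reasonably recent Ollama can print a model's metadata, parameter count included:

      REM Shows architecture, parameter count and quantization for the tag
      ollama show deepseek-r1:latest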

Yair Even Or

There was a better guide a few days ago already:

dev.to/shayy/run-deepseek-locally-...

leob

Most obvious question: why would you use this instead of Copilot? Or am I comparing apples and oranges ;-)

Red Ochsenbein (he/him)

Maybe to avoid sending data to a company you may or may not trust.

leob

"A company you cannot trust" - DeepSeek is Chinese, does your "data" go straight to the Communist Party? ;-)

Thread Thread
 
Red Ochsenbein (he/him)

Not if you run their models locally...

keyru Nasir Usman • Edited

What I don't understand is how a small startup is able to build an LLM that beats ChatGPT. Even tech giants like Google or Elon Musk didn't build an LLM that can beat ChatGPT. All of a sudden, a small Chinese company comes up with an LLM superior to OpenAI's ChatGPT. Did they use an LLM recipe that OpenAI does not know? Guys, if there is something I missed here, please enlighten me 😊

Red Ochsenbein (he/him)

This is what you get when enthusiastic, smart people try a different angle instead of just chasing investors' money. Yes, DeepSeek R1 is trained differently. They used a rule/heuristics-based reward system and automated benchmarking during the training process... nothing new, but apparently nobody else thought of combining this with LLM training.

Vinayak Mishra

Hey Varnit, I had a question for you after seeing this. How good is DeepSeek in terms of hallucinations? Last night I was reading about LLM hallucination detection.

PatriciaThompson

Deepseek R1 in VS Code for free? That’s a game-changer! 🚀 Have you tried it yet? Curious to know how it compares to other AI coding assistants like Copilot or Codeium.

Reid Burton

Don't you have to have a PC with decent memory & processing capabilities to do this?

Varnit Sharma

Not really, there are variants with fewer parameters; the ones trained below 16B params can be handled by most domestic machines.
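One way to sanity-check what your machine copes with is to watch how much memory a loaded model actually takes (the ps subcommand exists in recent Ollama releases):

      REM Lists currently loaded models and how much RAM/VRAM they occupy
      ollama ps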

Reid Burton

Huh... I did not know that.

Siraw Tadesse • Edited

How can we use it???

MAJD 333

thank u 💖

Muhammed Rahoof VP

Nice🔥

keyru Nasir Usman

Is it completely free?

Arun Muthupalaniappan

Yes

Taichi-S

For me, "Cursor" is still the better option for implementing software.
That autocomplete experience is wonderful.

But if I were building some new software from scratch, then maybe this could work better than "Cursor".

Taichi-S • Edited

I mean reactivity is needed when working on existing software.
Considering the article below, I think Cursor is a hybrid AI agent, both reactive and deliberative.
geeksforgeeks.org/reactive-vs-deli...