<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aiman.eth</title>
    <description>The latest articles on DEV Community by Aiman.eth (@aimaneth).</description>
    <link>https://dev.to/aimaneth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2777925%2Fcf99629b-0a71-4fc1-83c1-236a091f16ba.png</url>
      <title>DEV Community: Aiman.eth</title>
      <link>https://dev.to/aimaneth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aimaneth"/>
    <language>en</language>
    <item>
      <title>How to Use DeepSeek R1 for Free in Visual Studio Code with Cline or Roo Code</title>
      <dc:creator>Aiman.eth</dc:creator>
      <pubDate>Tue, 28 Jan 2025 15:51:02 +0000</pubDate>
      <link>https://dev.to/aimaneth/how-to-use-deepseek-r1-for-free-in-visual-studio-code-with-cline-or-roo-code-1ia3</link>
      <guid>https://dev.to/aimaneth/how-to-use-deepseek-r1-for-free-in-visual-studio-code-with-cline-or-roo-code-1ia3</guid>
      <description>&lt;p&gt;If you're looking for an AI that excels in reasoning and is also free because it's open source, the newly launched DeepSeek R1 is a great choice. It competes with and outperforms models like GPT-4, o1-mini, Claude 3.5, among others. I tested it and have nothing but praise!&lt;/p&gt;

&lt;p&gt;If you want to run it directly in your Visual Studio Code as a code agent similar to GitHub Copilot, without spending a dime, come along as I show you how to do this using tools like LM Studio, Ollama, and Jan.&lt;/p&gt;




&lt;h2&gt;&lt;strong&gt;Why is DeepSeek R1 so talked about these days?&lt;/strong&gt;&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It's free and open source&lt;/strong&gt;: Unlike many models that charge a fortune, you can use it without paying anything. It's even available for chat at &lt;a href="https://chat.deepseek.com" rel="noopener noreferrer"&gt;https://chat.deepseek.com&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: It competes with and outperforms other models in tasks involving logic, mathematics, and even code generation (which is my favorite part).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple versions&lt;/strong&gt;: To run it locally, there are distilled versions ranging from 1.5B to 70B parameters, so you can choose what works best depending on your hardware.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy to integrate&lt;/strong&gt;: You can connect it to VSCode using extensions like Cline or Roo Code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No costs&lt;/strong&gt;: If you run it locally, you don't pay for tokens or APIs. A graphics card is recommended, as running it solely on the CPU is slower.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;&lt;strong&gt;Important Tips Before You Start&lt;/strong&gt;&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Save resources&lt;/strong&gt;: If your PC isn't very powerful, stick with the smaller models (1.5B or 7B parameters) or quantized versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RAM Calculator&lt;/strong&gt;: Use LLM Calc to find out the minimum RAM you'll need.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Privacy&lt;/strong&gt;: Running it locally means your data stays on your PC and doesn't go to external servers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No costs&lt;/strong&gt;: Running it locally is free, but if you want to use the DeepSeek API, you'll need to pay for tokens. The good news is that their price is much lower than competitors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;&lt;strong&gt;Which Model to Choose? It Depends on Your PC!&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;DeepSeek R1 has several versions, and the choice depends on your hardware:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;1.5B Parameters&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM required&lt;/strong&gt;: ~4 GB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU&lt;/strong&gt;: Optional; an entry-level dedicated card (like an NVIDIA GTX 1050) or a modern CPU is enough.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What for?&lt;/strong&gt;: Simple tasks and modest PCs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;7B Parameters&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM required&lt;/strong&gt;: ~8-10 GB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU&lt;/strong&gt;: Dedicated (like NVIDIA GTX 1660 or better).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What for?&lt;/strong&gt;: Intermediate tasks and PCs with better hardware.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;70B Parameters&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM required&lt;/strong&gt;: ~40 GB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU&lt;/strong&gt;: High-end (like NVIDIA RTX 3090 or higher).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What for?&lt;/strong&gt;: Complex tasks and super powerful PCs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
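&lt;p&gt;You can sanity-check the RAM figures above with a rough rule of thumb: a model needs about (parameters × bits per weight ÷ 8) of memory for the weights, plus runtime overhead for the KV cache and the runtime itself. Here's a minimal sketch; the 20% overhead factor and the quantization levels are assumptions for illustration, not exact numbers:&lt;/p&gt;

```python
def estimate_ram_gb(params_billion, bits_per_weight=8, overhead=1.2):
    """Rough RAM estimate: weight size (params * bits / 8) plus ~20% runtime overhead (assumed)."""
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits is about 1 GB
    return round(weights_gb * overhead, 1)

# Rough figures only; real usage varies with quantization and context length.
print(estimate_ram_gb(1.5, 8))  # about 1.8 GB for the 1.5B model at 8-bit
print(estimate_ram_gb(7, 8))    # about 8.4 GB, in line with the 8-10 GB above
print(estimate_ram_gb(70, 4))   # about 42 GB at 4-bit, close to the ~40 GB above
```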




&lt;h2&gt;&lt;strong&gt;How to Run DeepSeek R1 Locally&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Using LM Studio&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Download and install LM Studio&lt;/strong&gt;: Just go to the &lt;a href="https://lmstudio.ai/" rel="noopener noreferrer"&gt;LM Studio&lt;/a&gt; website and download the version for your system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Download the DeepSeek R1 model&lt;/strong&gt;: In LM Studio, go to the &lt;strong&gt;Discover&lt;/strong&gt; tab, search for "DeepSeek R1," and select the version most compatible with your system. If you're using a MacBook with Apple processors, keep the &lt;strong&gt;MLX&lt;/strong&gt; option selected next to the search bar (these versions are optimized for Apple hardware). For Windows or Linux, choose the &lt;strong&gt;GGUF&lt;/strong&gt; option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load the model&lt;/strong&gt;: After downloading, go to &lt;strong&gt;Local Models&lt;/strong&gt;, select DeepSeek R1, and click &lt;strong&gt;Load&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start the local server&lt;/strong&gt;: In the &lt;strong&gt;Developer&lt;/strong&gt; tab, enable &lt;strong&gt;Start Server&lt;/strong&gt;. It will start running the model at &lt;code&gt;http://localhost:1234&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Proceed to step 4, &lt;strong&gt;Integrating with VSCode&lt;/strong&gt;!&lt;/li&gt;
&lt;/ul&gt;
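&lt;p&gt;LM Studio's local server speaks the OpenAI-compatible chat completions API, so once it's running you can also query it from a script, not just from VSCode. Here's a minimal sketch using only the Python standard library; the model ID is a hypothetical example, so copy the exact ID shown in LM Studio:&lt;/p&gt;

```python
import json
import urllib.request

def build_request(prompt, model="deepseek-r1-distill-qwen-7b"):
    """Build an OpenAI-style chat payload. The model ID here is an assumed example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def ask_local_model(prompt, base_url="http://localhost:1234/v1"):
    """POST the payload to the local LM Studio server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

&lt;p&gt;If the call fails, check that the server is enabled in the &lt;strong&gt;Developer&lt;/strong&gt; tab and that the port matches.&lt;/p&gt;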




&lt;p&gt;&lt;strong&gt;2. Using Ollama&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install Ollama&lt;/strong&gt;: Download it from the &lt;a href="https://ollama.ai/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; website and install it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Download the model&lt;/strong&gt;: In the terminal, run*:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama pull deepseek-r1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;*This pulls the default variant; if you want smaller or larger models, go to &lt;a href="https://ollama.com/library/deepseek-r1" rel="noopener noreferrer"&gt;https://ollama.com/library/deepseek-r1&lt;/a&gt; and see which command to run in the terminal.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start the server&lt;/strong&gt;: In the terminal, execute:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ollama serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command starts the Ollama server at &lt;code&gt;http://localhost:11434&lt;/code&gt;; the model itself is loaded into memory on the first request.&lt;/p&gt;

&lt;p&gt;Proceed to step 4, &lt;strong&gt;Integrating with VSCode&lt;/strong&gt;!&lt;/p&gt;
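&lt;p&gt;Ollama also has its own HTTP API on that port, so you can script against the same model outside of VSCode. Here's a minimal sketch using only the Python standard library; it assumes &lt;code&gt;ollama serve&lt;/code&gt; is running and the &lt;code&gt;deepseek-r1&lt;/code&gt; model has been pulled:&lt;/p&gt;

```python
import json
import urllib.request

def build_payload(prompt, model="deepseek-r1"):
    """Build the request body for Ollama's /api/generate endpoint (streaming disabled)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, base_url="http://localhost:11434"):
    """POST to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```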




&lt;p&gt;&lt;strong&gt;3. Using Jan&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Download and install Jan&lt;/strong&gt;: Choose the version for your system on the &lt;a href="https://jan.ai/" rel="noopener noreferrer"&gt;Jan&lt;/a&gt; website.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Download the model&lt;/strong&gt;: I couldn't find DeepSeek R1 directly in Jan. So, I went to the &lt;a href="https://huggingface.co/models?sort=trending&amp;amp;search=unsloth+gguf+deepseek+r1" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt; website and manually searched for "unsloth gguf deepseek r1." I found the desired version, clicked the "Use this model" button, and selected Jan as the option. The model automatically opened in Jan, and I then downloaded it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load the model&lt;/strong&gt;: After downloading, select the model and click &lt;strong&gt;Load&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start the server&lt;/strong&gt;: Jan automatically starts the server, usually at &lt;code&gt;http://localhost:1337&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Proceed to step 4, &lt;strong&gt;Integrating with VSCode&lt;/strong&gt;!&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;4. Integrating with VSCode&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install the extension&lt;/strong&gt;: In VSCode, open the Extensions tab and install Cline or Roo Code.&lt;/li&gt;
&lt;/ul&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configure the extension for Jan or LM Studio&lt;/strong&gt;: The configuration for both &lt;strong&gt;Cline&lt;/strong&gt; and &lt;strong&gt;Roo Code&lt;/strong&gt; is practically identical. Follow the steps below:

&lt;ul&gt;
&lt;li&gt;Click on the extension and access &lt;strong&gt;"Settings"&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;API Provider&lt;/strong&gt;, select &lt;strong&gt;"LM Studio"&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Base URL&lt;/strong&gt; field, enter the URL configured in Jan or LM Studio.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Model ID&lt;/strong&gt; field will be automatically filled if you only have one model available. Otherwise, manually select the &lt;strong&gt;DeepSeek&lt;/strong&gt; model you downloaded.&lt;/li&gt;
&lt;li&gt;Finish by clicking &lt;strong&gt;"Done"&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configure the extension for Ollama&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Click on the extension and access &lt;strong&gt;"Settings"&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;API Provider&lt;/strong&gt;, select &lt;strong&gt;"Ollama"&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Base URL&lt;/strong&gt; field, enter the URL configured in Ollama.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Model ID&lt;/strong&gt; field will be automatically filled if you only have one model available. Otherwise, manually select the &lt;strong&gt;DeepSeek&lt;/strong&gt; model you downloaded.&lt;/li&gt;
&lt;li&gt;Finish by clicking &lt;strong&gt;"Done"&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Integration complete; now just enjoy the features of Cline or Roo Code.&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;DeepSeek R1 is a lifesaver for those who want a powerful AI without spending anything. With &lt;strong&gt;LM Studio&lt;/strong&gt;, &lt;strong&gt;Ollama&lt;/strong&gt;, or &lt;strong&gt;Jan&lt;/strong&gt;, you can run it locally and integrate it directly into &lt;strong&gt;Visual Studio Code&lt;/strong&gt;. Choose the model that fits your PC and start using it today!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vscode</category>
      <category>deepseekr1</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
