
The AI revolution is happening, and Deepseek-r1 is at the forefront. This powerful Large Language Model (LLM) goes head-to-head with top AI models ...
Two things I would like to point out:
`ollama run deepseek-r1` pulls the 7-billion-parameter model, which is very weak. The best DeepSeek R1 model has 671 billion parameters. You would run it with `ollama run deepseek-r1:671b`, but most devices would be far too weak to run a model of this size.

Running DeepSeek R1 on a laptop will not compare to models like GPT-4o or Claude 3.5 Sonnet.

It rather depends on the laptop... but yes. I suspect I could run the R1 on my laptop, but then, I don't have your average laptop...
And, yes, we're talking the distilled version. MOST people won't handle the full-tilt beast on their HW.
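For reference, the tag after the colon selects which distilled size Ollama pulls; a sketch of the common variants (tags as listed in the Ollama model library, and availability may change):

```shell
# Default tag: the 7B distilled model
ollama run deepseek-r1

# Explicit tags select other distilled sizes
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b

# The full 671B model -- far beyond most consumer hardware
ollama run deepseek-r1:671b
```

Larger tags need proportionally more RAM/VRAM, which is why most laptops stick to the smaller distilled sizes.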
I am getting this error when I choose DeepSeek Coder and ask for coding suggestions:
HTTP 404 Not Found from 127.0.0.1:11434/api/chat
Did you solve this problem? I got the same one.
I changed the model to deepseek-r1 in the config file, but it is still not providing any coding suggestions.
Same here
That error suggests the extension is looking for a local web server instance running the DeepSeek chat endpoint.
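A quick sanity check for that 404 (assuming Ollama's default port 11434; `/api/tags` is Ollama's endpoint for listing locally installed models):

```shell
# Verify the Ollama server is reachable and see which models it has
curl http://127.0.0.1:11434/api/tags

# If the request fails outright, the server isn't running -- start it:
ollama serve
```

If the server responds but your model isn't in the list, pull it first with `ollama pull deepseek-r1`; a chat request against a model name the server doesn't know can also come back as a 404.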
Let me check it out
My experience with this tutorial: great that it introduced me to Ollama! But running DeepSeek made my laptop's (Samsung Odyssey) fans go wild. Besides being helpful, the only advantage over the web model is the fast response time.
In the config you have to manually update the model version from the default "deepseek-7b" to "deepseek-r1"; then it will work.
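A minimal sketch of that change, assuming the Continue extension's `config.json` (typically under `~/.continue/`; the `title` value is arbitrary):

```json
{
  "models": [
    {
      "title": "DeepSeek R1 (local)",
      "provider": "ollama",
      "model": "deepseek-r1"
    }
  ]
}
```

The `model` field must match the tag of a model Ollama actually has installed, which is why the default "deepseek-7b" entry fails.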
The models that really help with coding need a lot of VRAM. Isn't the 7B model (for example) too weak to compete with ChatGPT o1 when it comes to generating code?
Deepseek-r1 is a powerful Large Language Model (LLM) that offers developers a fast, private, and cost-effective coding assistant by running directly on their local machines. This eliminates the need for expensive, cloud-based tools and ensures that your coding assistant is always available when needed.
To integrate Deepseek-r1 into Visual Studio Code, you'll first need to install Ollama, a lightweight platform that allows you to manage and run LLMs locally. After installing Ollama, you can proceed with setting up Deepseek-r1 in your coding environment.
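The setup described above boils down to two commands on Linux (on macOS or Windows, use the installer from ollama.com instead of the script):

```shell
# Install Ollama via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model and start an interactive session
ollama run deepseek-r1
```

The first `ollama run` downloads the model weights, so expect a several-gigabyte download before the prompt appears.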
This is awesome thank you!
Great post!
Is it safe to install on an office system?
How do I do this over the network? I have the container running on my PC, and I want to code on my laptop.
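One way to sketch this, using Ollama's documented `OLLAMA_HOST` environment variable (the IP address below is a hypothetical example for your PC):

```shell
# On the PC running Ollama: bind to all interfaces instead of localhost only
OLLAMA_HOST=0.0.0.0 ollama serve

# On the laptop: point your editor extension (or a test request) at the PC
curl http://192.168.1.50:11434/api/tags
```

In the VS Code extension's config, replace `127.0.0.1:11434` with the PC's address. Note this exposes the API to your LAN, so only do it on a trusted network.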
Nice and cool. Very soon the standard for a home will be having its own model running and managing itself. lol, just waiting on those damn powerful chips.
I have poked around DeepSeek in regard to some coding. It is far behind other AI platforms in that area. It is fast, though! ha
Nice tutorial, but it feels like you’re just showing us how to install something that should’ve been obvious already. Guess some people need step-by-step hand-holding. 🤷♂️
Continue.dev not found!
You can also use the CodeGPT plugin instead of Continue.dev: marketplace.visualstudio.com/items...