
Aditya


How to Run DeepSeek Locally with Ollama

DeepSeek is a powerful open-source language model, and Ollama makes running it locally effortless. In this guide, I'll walk you through installing Ollama and running DeepSeek-r1:1.5b from your command line.


Step 1: Install Ollama

Ollama provides a simple way to run and manage AI models locally. You can install it using the following commands:

For macOS (via Homebrew)

brew install ollama

For Linux

curl -fsSL https://ollama.com/install.sh | sh

For Windows

Download and run the official installer from https://ollama.com/download/windows, then follow the setup prompts.

Once installed, restart your terminal or command prompt.

Step 2: Pull the DeepSeek Model

Now that Ollama is installed, you need to download the DeepSeek-r1:1.5b model. Run:

ollama pull deepseek-r1:1.5b

This fetches the model from Ollama's registry. The first run downloads the model weights, so it may take a while depending on your internet speed.
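To confirm the download succeeded, you can ask the local Ollama server which models it has via its REST API's /api/tags endpoint (the server listens on port 11434 by default). A minimal Python sketch, assuming the server is running (`ollama serve`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def has_model(tags_response: dict, name: str) -> bool:
    """Check whether a model name appears in the /api/tags listing."""
    return any(m.get("name", "").startswith(name)
               for m in tags_response.get("models", []))

try:
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
        tags = json.load(resp)
    print("deepseek-r1:1.5b downloaded:", has_model(tags, "deepseek-r1:1.5b"))
except OSError:
    print("Could not reach the Ollama server -- is `ollama serve` running?")
```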

Step 3: Run DeepSeek in Terminal

Once the model is downloaded, you can start an interactive session with it:

ollama run deepseek-r1:1.5b

This will launch the DeepSeek model, allowing you to input prompts and receive AI-generated responses directly from your terminal.
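If you'd rather get a single response without the interactive session, you can call Ollama's REST /api/generate endpoint directly. A sketch assuming the server is on its default port (the prompt text is just a placeholder):

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint (streaming disabled)."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_payload("deepseek-r1:1.5b",
                                 "Explain recursion in one sentence.")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=120) as resp:
        # With stream disabled, the full completion arrives as one JSON object
        print(json.load(resp)["response"])
except OSError:
    print("Ollama server not reachable -- start it with `ollama serve`.")
```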

Step 4: Using DeepSeek in a Script

If you want to integrate DeepSeek into a script, you can use Ollama's Python library (install it first with `pip install ollama`):

import ollama

# Send a single-turn chat request to the locally running model
response = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[{"role": "user", "content": "Hello, how can you assist me?"}],
)
print(response["message"]["content"])
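The library call above maps onto Ollama's REST /api/chat endpoint, so if you'd rather avoid the extra dependency you can stream the same chat completion over plain HTTP. With streaming on, Ollama sends one JSON object per line; a sketch assuming the default port (the haiku prompt is just an example):

```python
import json
import urllib.request

def extract_content(chunk: dict) -> str:
    """Pull the text delta out of one streamed /api/chat JSON chunk."""
    return chunk.get("message", {}).get("content", "")

body = json.dumps({
    "model": "deepseek-r1:1.5b",
    "messages": [{"role": "user", "content": "Write a haiku about terminals."}],
    "stream": True,  # Ollama emits one JSON object per line
}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=body,
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=120) as resp:
        for line in resp:  # each line is a complete JSON chunk
            print(extract_content(json.loads(line)), end="", flush=True)
    print()
except OSError:
    print("Ollama server not reachable -- start it with `ollama serve`.")
```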


Conclusion

Running DeepSeek locally is now easier than ever with Ollama. Whether you're using it for personal projects, AI research, or development, this guide should help you get started. Let me know whether you're sticking with ChatGPT or switching over to DeepSeek!

Happy coding! πŸš€

Let's connect on LinkedIn and check out my GitHub repos!

Thank you
