CodeStreet

Posted on • Edited on

How To Use DeepSeek With .NET 9 | Hands-On With DeepSeek R1, Semantic Kernel & C#

In this post, we will integrate DeepSeek R1 into a .NET 9 console application using Semantic Kernel. If you want to get started with DeepSeek models locally, this hands-on guide is for you.

What You Will Learn

  • How to get started with DeepSeek R1
  • How to use Ollama to run local models
  • How to install and run the DeepSeek R1 model
  • How to use Semantic Kernel in C#

1. Prerequisites

  • Visual Studio 2022 or later, with the .NET 9 SDK installed. If .NET 9 is still in preview for you, make sure you have the preview SDK installed.
  • Ollama (for managing and running local models)
  • DeepSeek R1 1.5b model

2. Installing Ollama

Ollama is a tool for running large language models (LLMs) locally. It simplifies downloading, managing, and running open-source models such as LLaMA, Phi, and DeepSeek R1.

To install Ollama, visit the official website at https://ollama.com/download and install it on your machine.

3. Installing DeepSeek R1

DeepSeek R1 is DeepSeek's first generation of reasoning models, with performance comparable to OpenAI o1. The family includes six dense models distilled from DeepSeek-R1, based on Llama and Qwen.

On the Ollama website, click Models, select deepseek-r1, and choose the 1.5b parameter option.

How To Use DeepSeek With .NET 9 - deepseek-r1:1.5b

Open Command Prompt and run the command below:

ollama run deepseek-r1:1.5b

This downloads the model and starts it automatically.

Once done, verify that the model is available:

ollama list

That’s it! We’re ready to integrate DeepSeek locally.

4. Creating .NET Console Application

  1. Launch Visual Studio and make sure the .NET 9 SDK is installed.
  2. Create a new project: File → New → Project… and pick Console App targeting .NET 9.
  3. Name your project, e.g. DeepSeekDemoApp or any name you prefer.
  4. Check the target framework: right-click your project → Properties and set Target Framework to .NET 9.
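If you prefer the command line over Visual Studio, the same project can be scaffolded with the `dotnet` CLI (the project name here is just an example):

```shell
# Create a console project targeting .NET 9 and enter its folder
dotnet new console -n DeepSeekDemoApp -f net9.0
cd DeepSeekDemoApp
```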

5. Integrating DeepSeek R1 with Semantic Kernel

While you could call DeepSeek via direct HTTP requests to Ollama, using Semantic Kernel offers a powerful abstraction for prompt engineering, orchestration, and more.
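For comparison, here is a rough sketch of what the direct-HTTP route looks like, using Ollama's REST API on its default port 11434 (the prompt text is just an example):

```csharp
using System.Net.Http.Json;
using System.Text.Json;

// Sketch: talk to Ollama's REST API directly, without Semantic Kernel.
// Assumes the Ollama server is listening on its default port, 11434.
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

var payload = new
{
    model = "deepseek-r1:1.5b",
    prompt = "Why is the sky blue?",
    stream = false // return one complete JSON object instead of a token stream
};

var reply = await http.PostAsJsonAsync("/api/generate", payload);
reply.EnsureSuccessStatusCode();

// Non-streaming responses carry the generated text in the "response" field.
using var doc = JsonDocument.Parse(await reply.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("response").GetString());
```

This works, but you end up hand-rolling request payloads and response parsing, which is exactly what Semantic Kernel abstracts away.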

  1. Add Necessary NuGet Packages
<ItemGroup>
  <PackageReference Include="Codeblaze.SemanticKernel.Connectors.Ollama" Version="1.3.1" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.35.0" />
</ItemGroup>
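The same references can also be added from the command line (these are the versions at the time of writing; newer versions may exist):

```shell
dotnet add package Codeblaze.SemanticKernel.Connectors.Ollama --version 1.3.1
dotnet add package Microsoft.SemanticKernel --version 1.35.0
```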

6. Complete code

The Semantic Kernel can use a custom connector to talk to local endpoints. For simplicity, we’ll outline a sample approach:

Program.cs:

using Codeblaze.SemanticKernel.Connectors.Ollama;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;

// Point the Ollama connector at the local server and the model we pulled earlier.
var builder = Kernel.CreateBuilder()
    .AddOllamaChatCompletion("deepseek-r1:1.5b", "http://localhost:11434");

// The Ollama connector resolves an HttpClient from the service collection.
builder.Services.AddScoped<HttpClient>();

var kernel = builder.Build();

while (true)
{
    Console.WriteLine("Ask anything to DeepSeek");
    string? input = Console.ReadLine();

    if (string.IsNullOrWhiteSpace(input))
        continue;

    var response = await kernel.InvokePromptAsync(input);
    Console.WriteLine($"\nDeepSeek: {response}\n");
}
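One thing to be aware of: DeepSeek R1 models emit their chain-of-thought wrapped in `<think>…</think>` tags before the final answer. If you only want the answer, a small helper (a sketch of mine, not part of the connector API) can strip that block before printing:

```csharp
using System.Text.RegularExpressions;

// DeepSeek R1 prefixes its answer with reasoning inside <think>...</think>.
// This helper removes that block and trims the remaining answer text.
static string StripThinking(string raw) =>
    Regex.Replace(raw, @"<think>[\s\S]*?</think>", string.Empty).Trim();
```

In the loop above you would then print `StripThinking(response.ToString())` instead of `response`.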


7. Running & Testing

  1. Ensure Ollama is Running

Some systems start Ollama automatically; otherwise, start the server yourself:

ollama serve

  2. Run Your .NET App

Hit F5 (or Ctrl+F5) in Visual Studio and watch the console output.

Running DeepSeek on .NET

Support me!

If you found this guide helpful, make sure to check out the accompanying YouTube video tutorial where I walk you through the process visually. Don’t forget to subscribe to my channel for more amazing tutorials!

I'd appreciate it if you could buy me a coffee.
Buy me a coffee

Feel free to leave your questions, comments, or suggestions below. Happy coding!


Top comments (5)

Spyros Ponaris

Thanks for sharing 🙏.

CodeStreet

I am glad you like it.
thanks

Граф Безымянный

Hi, what are the hardware requirements?

CodeStreet • Edited

Here are my system specifications:

OS Name: Microsoft Windows 11 Pro
Version: 10.0.26100 Build 26100
System Model: MS-7D77
System Type: x64-based PC
Processor: AMD Ryzen 7 7700X 8-Core Processor, 4501 MHz, 8 Core(s), 16 Logical Processor(s)
SMBIOS Version: 3.5
BaseBoard Product: PRO B650M-A WIFI (MS-7D77)
Installed Physical Memory (RAM): 32.0 GB
Available Virtual Memory: 18.8 GB
Page File Space: 5.25 GB
Virtualization-based Security: Running
Virtualization-based Security Available Security Properties: Base Virtualization Support, Secure Boot, DMA Protection, UEFI Code Readonly, SMM Security Mitigations 1.0, Mode Based Execution Control

You can check this: dev.to/nodeshiftcloud/a-step-by-st...

Граф Безымянный

Thanks!
