Hi. I'm going to write a series of tutorials on Microsoft Semantic Kernel, and this is the first part. This tutorial will guide you through integrating Ollama with Microsoft Semantic Kernel and using it with your own internal data. We'll use the llama3.1 model as an example, but you can choose any model that fits your needs.
Prerequisites
Before starting, ensure you have the following:
- .NET SDK Installed: Make sure you have .NET 8 or later installed. If not, download it from https://dotnet.microsoft.com/.
- Ollama Installed: Download and install Ollama from https://ollama.com/download.
- A Model Downloaded: For this tutorial, we'll use the llama3.1 (8B) model, which you can download from https://ollama.com/library/llama3.1. Alternatively, you can use any model compatible with Ollama.
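If you haven't pulled the model yet, you can download it and confirm it's available from the terminal:
ollama pull llama3.1
ollama list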
Project Initialization
1. Create a New .NET Console Project
Run the following command to create a new .NET console project:
dotnet new console -o OllamaKernel
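Then move into the project folder so the subsequent commands run against it:
cd OllamaKernel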
2. Install Microsoft Semantic Kernel Packages
Add the required packages to your project by running the following commands:
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --version 1.34.0-alpha
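After this, your OllamaKernel.csproj should contain package references along these lines (the exact Microsoft.SemanticKernel version may differ on your machine):
<ItemGroup>
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.34.0" />
  <PackageReference Include="Microsoft.SemanticKernel.Connectors.Ollama" Version="1.34.0-alpha" />
</ItemGroup>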
Creating the Kernel
We will start by creating a kernel that connects to the llama3.1 model.
- Create a new file named OllamaKernel.cs in the root folder.
- Update the file with the following code:
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Ollama;

namespace OllamaKernel;

public class OllamaKernel
{
    public static Kernel GetKernel(string modelId, string endpoint)
    {
        // The Ollama connector is still experimental, so its APIs are gated
        // behind the SKEXP0070 diagnostic.
#pragma warning disable SKEXP0070
        IKernelBuilder builder = Kernel.CreateBuilder();
        builder.AddOllamaChatCompletion(modelId, new Uri(endpoint));
#pragma warning restore SKEXP0070
        Kernel kernel = builder.Build();
        return kernel;
    }
}
This file contains the logic for connecting your kernel to the llama3.1 model via Ollama.
Adding a Custom Plugin
Now, let's create a plugin the LLM can call. For simplicity, we'll use hardcoded data in this example.
- Create a new file named CustomPlugin.cs in the root folder.
- Update the file with the following code:
using System.ComponentModel;
using Microsoft.SemanticKernel;

namespace OllamaKernel;

public class CustomPlugin
{
    // [KernelFunction] exposes this method to the kernel, and the
    // [Description] helps the model decide when to call it. The method
    // must be public so AddFromType can discover it.
    [KernelFunction]
    [Description("Get user city by user name")]
    public static string GetUserCity(string name)
    {
        return name switch
        {
            "Tareq" => "Sylhet",
            "Kevin" => "Kuala Lumpur",
            "Beau" => "New York",
            "Santos" => "Mexico",
            "Robert" => "Swansea",
            _ => "Not Defined",
        };
    }
}
The GetUserCity function takes a username as input and returns the corresponding city. If the username is not listed, it returns "Not Defined."
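Once the plugin is registered (we'll do that in the next section), you can sanity-check it without involving the model by invoking the function directly through the kernel. A minimal sketch; it assumes the plugin was registered via AddFromType&lt;CustomPlugin&gt;(), which uses the class name as the plugin name:
// Invoke the plugin function directly, bypassing the LLM entirely.
var city = await kernel.InvokeAsync(
    pluginName: "CustomPlugin",
    functionName: "GetUserCity",
    arguments: new KernelArguments { ["name"] = "Tareq" });
Console.WriteLine(city); // prints "Sylhet"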
Testing the Integration
Now that we have both the kernel and the plugin, let’s set up the main program to test our application.
- Update Program.cs with the following code:
using OllamaKernel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Ollama;

#pragma warning disable SKEXP0070

// Initialize the kernel.
Kernel kernel = OllamaKernel.OllamaKernel.GetKernel(modelId: "llama3.1", endpoint: "http://localhost:11434");

// Register our custom plugin with the kernel.
kernel.Plugins.AddFromType<CustomPlugin>();

// Configure execution settings: let the model call our functions automatically.
OllamaPromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(), Temperature = 0 };

while (true)
{
    try
    {
        Console.WriteLine("Ask your question:");
        Console.Write(">>> ");
        string input = Console.ReadLine() ?? string.Empty;
        if (input == "/bye")
        {
            Console.WriteLine("Goodbye!");
            break;
        }
        var result = await kernel.InvokePromptAsync(input, new(settings));
        Console.WriteLine($"Result: {result}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Failed to invoke prompt: {ex.Message}");
    }
}
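The prompt loop above is stateless: every question stands alone. If you want the model to remember earlier turns, one option is to switch to the chat completion service and keep a ChatHistory. A minimal sketch of the idea, reusing the kernel and settings from above (add the extra using at the top of Program.cs):
using Microsoft.SemanticKernel.ChatCompletion;

var chatService = kernel.GetRequiredService<IChatCompletionService>();
ChatHistory history = new();

// Each turn is appended to the history, so the model sees the full conversation.
history.AddUserMessage("Where does Tareq live?");
var reply = await chatService.GetChatMessageContentAsync(history, settings, kernel);
history.AddMessage(reply.Role, reply.Content ?? string.Empty);
Console.WriteLine(reply.Content);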
Notes
- The default port for Ollama is 11434. You can change this by setting the OLLAMA_HOST environment variable, e.g., OLLAMA_HOST='http://localhost:{PORT}'.
- The modelId is set to llama3.1. Update this if you use a different model.
Running the Application
To test your app:
- Open your terminal.
- Run the following command:
dotnet build
Note: You may see warnings due to the prerelease version of the Ollama connector, but these can be ignored for now.
- Once the build completes, run the application.
dotnet run
When you run the app, the program correctly identifies Tareq's city as "Sylhet." Similarly, it outputs "Not Defined" for Sam, since Sam is not listed in the GetUserCity function.
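For reference, a session looks something like this (the model's exact wording will vary from run to run):
Ask your question:
>>> Where does Tareq live?
Result: Tareq lives in Sylhet.
Ask your question:
>>> Where does Sam live?
Result: Sam's city is Not Defined.
Ask your question:
>>> /bye
Goodbye!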
You'll find the full source code on GitHub.
Final Thoughts
This tutorial demonstrates the basics of integrating an LLM with Microsoft Semantic Kernel with the help of Ollama. With this setup, you can further expand the functionality, such as:
- Connecting the plugin to external data sources, such as databases or APIs (see the sketch after this list).
- Experimenting with different execution settings to fine-tune the model’s responses.
- Implementing more advanced kernel functions to address complex use cases.
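As a taste of the first idea, here's a sketch of a plugin backed by an external API. The endpoint URL and JSON shape are hypothetical, made up purely for illustration; substitute your own service:
using System;
using System.ComponentModel;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

namespace OllamaKernel;

public class UserDirectoryPlugin
{
    private static readonly HttpClient Http = new();

    [KernelFunction]
    [Description("Get user city by user name")]
    public static async Task<string> GetUserCityAsync(string name)
    {
        // Hypothetical endpoint; replace with your real user-directory API.
        var user = await Http.GetFromJsonAsync<UserRecord>(
            $"https://example.com/api/users/{Uri.EscapeDataString(name)}");
        return user?.City ?? "Not Defined";
    }

    private sealed record UserRecord(string Name, string City);
}
Register it the same way as CustomPlugin, with kernel.Plugins.AddFromType<UserDirectoryPlugin>().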
Feel free to explore the Semantic Kernel documentation to unlock more features and capabilities for your applications. Happy coding!