<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamil Riyas</title>
    <description>The latest articles on DEV Community by Kamil Riyas (@kamilriyas).</description>
    <link>https://dev.to/kamilriyas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F372782%2Fb72300d9-c244-4b8a-9140-b2ae7b20b51c.jpg</url>
      <title>DEV Community: Kamil Riyas</title>
      <link>https://dev.to/kamilriyas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kamilriyas"/>
    <language>en</language>
    <item>
      <title>Function Calling with Semantic Kernel in ASP.NET Core Web API</title>
      <dc:creator>Kamil Riyas</dc:creator>
      <pubDate>Sun, 16 Mar 2025 08:26:05 +0000</pubDate>
      <link>https://dev.to/kamilriyas/function-calling-with-semantic-kernel-in-astnet-core-web-api-28j1</link>
      <guid>https://dev.to/kamilriyas/function-calling-with-semantic-kernel-in-astnet-core-web-api-28j1</guid>
      <description>&lt;p&gt;Impatient? &lt;a href="https://github.com/KamilRiyas/sk-getting-started" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my last post we saw how to integrate Semantic Kernel and invoke its chat service in a very basic example. Now we'll take that idea to the next level by implementing &lt;a href="https://huggingface.co/docs/hugs/en/guides/function-calling" rel="noopener noreferrer"&gt;Function Calling&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this context, if the LLM needs to know something on demand, we enable it to call actual code (which can be a library, business logic, or a wrapper around an external API), thus providing up-to-date information to the user. This opens up new possibilities and transforms your LLM from a chatbot into an &lt;a href="https://wotnot.io/blog/ai-agent-vs-chatbot" rel="noopener noreferrer"&gt;AI Agent&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I ran this app on nothing more than a laptop CPU, so don't worry about the hardware. Install the following to get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dotnet 8.0 and above&lt;/li&gt;
&lt;li&gt;local &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;ollama&lt;/a&gt; instance with an SLM like &lt;a href="https://ollama.com/library/llama3.2" rel="noopener noreferrer"&gt;llama3.2&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
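
&lt;p&gt;If you haven't pulled the model yet, a quick way to get a local llama3.2 ready (assuming Ollama is already installed) is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the llama3.2 model to the local Ollama store
ollama pull llama3.2

# Start the Ollama server if it isn't already running
ollama serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On desktop installs the Ollama server usually starts in the background automatically, so &lt;code&gt;ollama serve&lt;/code&gt; may be unnecessary.&lt;/p&gt;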

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a plugin class called LightsPlugin.cs. What differentiates it from a regular domain class is the presence of the &lt;code&gt;[KernelFunction("get_lights")]&lt;/code&gt; and &lt;code&gt;[Description("Gets a list of lights and their current state")]&lt;/code&gt; annotations, which describe the behavior of each method. The LLM uses these annotations to decide whether to answer from its existing knowledge or invoke a plugin.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class LightsPlugin
{
    private readonly List&amp;lt;LightModel&amp;gt; _lights;
    private readonly ILogger&amp;lt;LightsPlugin&amp;gt; _logger;

    public LightsPlugin(ILogger&amp;lt;LightsPlugin&amp;gt; logger)
    {
        _lights = new()
           {
              new LightModel { Id = 1, Name = "Table Lamp", IsOn = false, Brightness = Brightness.Medium, Color = "#FFFFFF" },
              new LightModel { Id = 2, Name = "Porch light", IsOn = false, Brightness = Brightness.High, Color = "#FF0000" },
              new LightModel { Id = 3, Name = "Chandelier", IsOn = true, Brightness = Brightness.Low, Color = "#FFFF00" }
           };

        _logger = logger;
    }

    [KernelFunction("get_lights")]
    [Description("Gets a list of lights and their current state")]
    public async Task&amp;lt;List&amp;lt;LightModel&amp;gt;&amp;gt; GetLightsAsync()
    {
        _logger.LogInformation("getting List of Lights");
        return _lights;
    }

    [KernelFunction("change_state")]
    [Description("Changes the state of the light")]
    public async Task&amp;lt;LightModel?&amp;gt; ChangeStateAsync(LightModel changeState)
    {
        // Find the light to change
        var light = _lights.FirstOrDefault(l =&amp;gt; l.Id == changeState.Id);

        // If the light does not exist, return null
        if (light == null)
        {
            return null;
        }

        // Update the light state
        light.IsOn = changeState.IsOn;
        light.Brightness = changeState.Brightness;
        light.Color = changeState.Color;

        return light;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
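
&lt;p&gt;The plugin above references a &lt;code&gt;LightModel&lt;/code&gt; class and a &lt;code&gt;Brightness&lt;/code&gt; enum that aren't shown in this post. A minimal sketch consistent with the properties used above (the exact shape in the repo may differ) could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Assumed model types; check the linked GitHub repo for the actual definitions.
public enum Brightness
{
    Low,
    Medium,
    High
}

public class LightModel
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public bool IsOn { get; set; }
    public Brightness Brightness { get; set; }
    public string Color { get; set; } = string.Empty;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;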



&lt;ul&gt;
&lt;li&gt;Inject the Kernel, the Chat Completion Service, and the Plugin. I used an extension method to keep my Program.cs file clean.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Add Kernel and Chat Service
services.AddKernel()
    .AddOllamaChatCompletion("llama3.2", httpClient);
// Add Plugins
services.AddSingleton&amp;lt;KernelPlugin&amp;gt;(sp =&amp;gt; KernelPluginFactory.CreateFromType&amp;lt;LightsPlugin&amp;gt;(serviceProvider: sp));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
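
&lt;p&gt;The extension mentioned above isn't shown in the post. A hypothetical version that wraps the same registrations (the name &lt;code&gt;AddSemanticKernelServices&lt;/code&gt; is assumed; the actual extension in the repo may differ) might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical extension method to keep Program.cs clean
public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddSemanticKernelServices(this IServiceCollection services)
    {
        var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

        // Add Kernel and Chat Service
        services.AddKernel()
            .AddOllamaChatCompletion("llama3.2", httpClient);

        // Add Plugins
        services.AddSingleton&amp;lt;KernelPlugin&amp;gt;(sp =&amp;gt; KernelPluginFactory.CreateFromType&amp;lt;LightsPlugin&amp;gt;(serviceProvider: sp));

        return services;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Program.cs then only needs a single &lt;code&gt;builder.Services.AddSemanticKernelServices();&lt;/code&gt; call.&lt;/p&gt;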



&lt;ul&gt;
&lt;li&gt;In your controller, use the injected &lt;code&gt;Kernel&lt;/code&gt; to invoke the prompt.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ChatController : ControllerBase
{
    private readonly Kernel _kernel;
    private readonly ILogger&amp;lt;ChatController&amp;gt; _logger;

    public ChatController(Kernel kernel, ILogger&amp;lt;ChatController&amp;gt; logger)
    {
        _kernel = kernel;
        _logger = logger;
    }

    [HttpPost]
    public async Task&amp;lt;string?&amp;gt; GetResponse(UserChatRequest chatRequest)
    {
        if (chatRequest.query != null)
        {
            PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
            var chatResult = await _kernel.InvokePromptAsync(chatRequest.query, new(settings));
            Console.WriteLine(chatResult.ToString());
            return chatResult.ToString();
        }
        else
        {
            return null;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
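
&lt;p&gt;&lt;code&gt;UserChatRequest&lt;/code&gt; isn't defined in the post; given the &lt;code&gt;chatRequest.query&lt;/code&gt; access in the controller, a minimal shape (assumed; the repo's version may differ) is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Assumed request DTO; the lowercase property name matches the controller's usage
public class UserChatRequest
{
    public string? query { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;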



&lt;h3&gt;
  
  
  Execution
&lt;/h3&gt;

&lt;p&gt;This is the fun part. Now, instead of just asking a general question, we can ask the LLM to "Get the list of lights", for which we expect it to call the plugin and fetch the list from native C# code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@sk_functional_calling_webapi_HostAddress = http://localhost:5279

POST {{sk_functional_calling_webapi_HostAddress}}/api/chat/
Content-Type: application/json

{
  "query": "Get the list of lights"
}

###
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Status: 200 OKTime: 10333.09 msSize: 184 bytes
FormattedRawHeadersRequest
Body
text/plain; charset=utf-8, 184 bytes
Here is the list of lights:

1. Table Lamp - Not on, Medium brightness, White color
2. Porch light - Not on, High brightness, Red color
3. Chandelier - On, Low brightness, Yellow color
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thus we can see how to integrate LLMs into our apps in a way that actually makes sense. This feature is what inspired me to dive deep into LLM integration. I hope it has the same effect on you.&lt;/p&gt;

&lt;p&gt;Have a good day. &lt;/p&gt;

</description>
      <category>semantickernel</category>
      <category>functioncalling</category>
      <category>webapi</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Local AI WebAPI with Semantic Kernel and Ollama</title>
      <dc:creator>Kamil Riyas</dc:creator>
      <pubDate>Mon, 20 Jan 2025 15:50:42 +0000</pubDate>
      <link>https://dev.to/kamilriyas/local-ai-webapi-with-semantic-kernel-and-ollama-3ojj</link>
      <guid>https://dev.to/kamilriyas/local-ai-webapi-with-semantic-kernel-and-ollama-3ojj</guid>
      <description>&lt;p&gt;Impatient? &lt;a href="https://github.com/KamilRiyas/sk-getting-started" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/skriyas/local-ai-apps-with-c-semantic-kernel-and-ollama-4p7e"&gt;last post&lt;/a&gt; we saw how to get started with a local SLM using Ollama and Semantic Kernel, where we called the Llama3.2 model from a console application.&lt;/p&gt;

&lt;p&gt;In this write-up we'll see how to integrate Semantic Kernel with an ASP.NET Web API.&lt;/p&gt;

&lt;p&gt;Please note that this is just a barebones demo, &lt;strong&gt;not the standard way&lt;/strong&gt; to use Semantic Kernel with a Web API. I'm planning to showcase that in a future post.&lt;/p&gt;

&lt;p&gt;Make sure you have the following available locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dotnet 8.0 and above&lt;/li&gt;
&lt;li&gt;local &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;ollama&lt;/a&gt; instance with an SLM like &lt;a href="https://ollama.com/library/llama3.2" rel="noopener noreferrer"&gt;llama3.2&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;

&lt;p&gt;We'll initialize a barebones ASP.NET Web API application and install the packages below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new webapi -n sk-webapi -o sk-webapi
cd sk-webapi\
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --prerelease
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;*At the time of this writing, Semantic Kernel's Ollama connector is still in preview, which is why the &lt;code&gt;--prerelease&lt;/code&gt; flag is needed; once the connector reaches general availability you can drop it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding Time
&lt;/h2&gt;

&lt;p&gt;In your Program.cs file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Just like any other WebAPI apps, create the WebApplication builder.&lt;/li&gt;
&lt;li&gt;Create an HttpClient object with the local ollama instance uri.&lt;/li&gt;
&lt;li&gt;Inject &lt;code&gt;AddOllamaChatCompletion("llama3.2", httpClient)&lt;/code&gt; to the service collection.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllers();

var httpClient = new HttpClient() { 
        BaseAddress = new Uri("http://localhost:11434")
};

#pragma warning disable SKEXP0070 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
builder.Services.AddOllamaChatCompletion("llama3.2", httpClient);
#pragma warning restore SKEXP0070 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.        

var app = builder.Build();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Endpoint Setup
&lt;/h2&gt;

&lt;p&gt;Create a controller and, just like any other injected service, consume the chat completion service through &lt;code&gt;IChatCompletionService&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ChatController : ControllerBase
    {
        public readonly IChatCompletionService _chatCompletionService;
        public ChatController(IChatCompletionService chatCompletionService)
        {
            _chatCompletionService = chatCompletionService;
        }

        [HttpGet]
        public async Task&amp;lt;string?&amp;gt; GetCharResponseAsync(string input)
        {
            if (input != null)
            {
                var chatResult = await _chatCompletionService.GetChatMessageContentsAsync(input);
                return chatResult[0].ToString();
            }
            else
            {
                return null;
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
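
&lt;p&gt;To try the endpoint, you can send a GET request with the &lt;code&gt;input&lt;/code&gt; query string parameter. The port below is a placeholder (use whatever your launchSettings.json assigns), and the &lt;code&gt;api/chat&lt;/code&gt; path assumes the usual &lt;code&gt;api/[controller]&lt;/code&gt; route convention:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical port; replace with the one your app is listening on
curl "http://localhost:5000/api/chat?input=Tell%20me%20a%20dad%20joke"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;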



&lt;p&gt;That's it. In my next post I'll implement a sample showcasing Function Calling.&lt;/p&gt;

</description>
      <category>ollama</category>
      <category>csharp</category>
      <category>dotnet</category>
      <category>semantickernel</category>
    </item>
    <item>
      <title>Local AI apps with C#, Semantic Kernel and Ollama</title>
      <dc:creator>Kamil Riyas</dc:creator>
      <pubDate>Wed, 01 Jan 2025 13:24:14 +0000</pubDate>
      <link>https://dev.to/kamilriyas/local-ai-apps-with-c-semantic-kernel-and-ollama-4p7e</link>
      <guid>https://dev.to/kamilriyas/local-ai-apps-with-c-semantic-kernel-and-ollama-4p7e</guid>
      <description>&lt;p&gt;Welcome to my first post ever! Enough talk. Let's get started.&lt;/p&gt;

&lt;p&gt;Impatient? &lt;a href="https://github.com/KamilRiyas/sk-getting-started" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Make sure you have the following available locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dotnet 8.0 and above&lt;/li&gt;
&lt;li&gt;local &lt;a href="https://ollama.com/download" rel="noopener noreferrer"&gt;ollama&lt;/a&gt; instance with an SLM like &lt;a href="https://ollama.com/library/llama3.2" rel="noopener noreferrer"&gt;llama3.2&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;

&lt;p&gt;This is going to be a quick console app. Make sure to install the NuGet packages mentioned below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new console -n sk-console -o sk-console
cd sk-console\
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --prerelease
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;*At the time of this writing, Semantic Kernel's Ollama connector is still in preview, which is why the &lt;code&gt;--prerelease&lt;/code&gt; flag is needed; once the connector reaches general availability you can drop it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding Time
&lt;/h2&gt;

&lt;p&gt;In your Program.cs file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add the necessary &lt;code&gt;using&lt;/code&gt; directives&lt;/li&gt;
&lt;li&gt;create a builder for Semantic Kernel using the &lt;code&gt;Kernel&lt;/code&gt; class and register the &lt;code&gt;AddOllamaChatCompletion()&lt;/code&gt; service.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();
var uri = new Uri("http://localhost:11434");

#pragma warning disable SKEXP0070 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
builder.Services.AddOllamaChatCompletion("llama3.2", uri);
#pragma warning restore SKEXP0070 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After building the kernel, get the chat completion service, which is used to invoke the chat.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var kernel = builder.Build();
var chatCompletionService = kernel.GetRequiredService&amp;lt;IChatCompletionService&amp;gt;();

try
{
    ChatMessageContent chatMessage = await chatCompletionService
                                    .GetChatMessageContentAsync("Hi, can you tell me a dad joke");
    Console.WriteLine(chatMessage.ToString());
}
catch (HttpRequestException ex)
{
    // Reached when the local Ollama instance isn't running or is unreachable
    Console.WriteLine($"Could not reach Ollama: {ex.Message}");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my &lt;a href="https://dev.to/skriyas/local-ai-webapi-with-semantic-kernel-and-ollama-3ojj"&gt;next post&lt;/a&gt;, I'll use the same implementation in a Web API project.&lt;/p&gt;

&lt;p&gt;That's it. Good day!&lt;/p&gt;

</description>
      <category>ollama</category>
      <category>semantickernel</category>
      <category>csharp</category>
      <category>dotnet</category>
    </item>
  </channel>
</rss>
