Kamil Riyas

Function Calling with Semantic Kernel in ASP.NET Core Web API

Impatient? The full code is on GitHub.

In my last post we saw how to integrate Semantic Kernel and invoke its chat service in a very basic example. Now we'll take that idea to the next level by implementing Function Calling.

In this context, if we need the LLM to know something on demand, we enable it to call actual code (which can be a library, business logic, or a wrapper around an external API), thus providing up-to-date information to the user. This opens up new possibilities and transforms your LLM from a chatbot into an AI Agent.

I ran this app using only a laptop CPU, so don't worry about the hardware. Install the following to get started:

  • .NET 8.0 or above
  • a local Ollama instance with an SLM such as llama3.2 (see the setup commands after this list)
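If you haven't set these up yet, the commands below are roughly what's involved; the exact package names and versions depend on the Semantic Kernel release you target (the Ollama connector was still a prerelease package at the time of writing):

# Pull a small model for local, CPU-only inference
ollama pull llama3.2

# Add Semantic Kernel and its Ollama connector to the Web API project
dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --prerelease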

Implementation

  • Create a plugin class called LightsPlugin.cs. What differentiates this from a regular domain class is the presence of the [KernelFunction("get_lights")] and [Description("Gets a list of lights and their current state")] annotations, which describe the behavior of each method. The LLM understands these annotations and decides whether to use its existing knowledge or invoke the plugin to provide the answer to the user.
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class LightsPlugin
{
    private readonly List<LightModel> _lights;
    private readonly ILogger<LightsPlugin> _logger;

    public LightsPlugin(ILogger<LightsPlugin> logger)
    {
        _lights = new()
           {
              new LightModel { Id = 1, Name = "Table Lamp", IsOn = false, Brightness = Brightness.Medium, Color = "#FFFFFF" },
              new LightModel { Id = 2, Name = "Porch light", IsOn = false, Brightness = Brightness.High, Color = "#FF0000" },
              new LightModel { Id = 3, Name = "Chandelier", IsOn = true, Brightness = Brightness.Low, Color = "#FFFF00" }
           };

        _logger = logger;
    }

    [KernelFunction("get_lights")]
    [Description("Gets a list of lights and their current state")]
    public async Task<List<LightModel>> GetLightsAsync()
    {
        _logger.LogInformation("getting List of Lights");
        return _lights;
    }

    [KernelFunction("change_state")]
    [Description("Changes the state of the light")]
    public async Task<LightModel?> ChangeStateAsync(LightModel changeState)
    {
        // Find the light to change
        var light = _lights.FirstOrDefault(l => l.Id == changeState.Id);

        // If the light does not exist, return null
        if (light == null)
        {
            return null;
        }

        // Update the light state
        light.IsOn = changeState.IsOn;
        light.Brightness = changeState.Brightness;
        light.Color = changeState.Color;

        return light;
    }
}
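The LightModel and Brightness types referenced above aren't shown in the post; here is a minimal sketch of what they could look like, with the property names inferred from the initializer and optional JSON attributes so the model sees predictable field names:

using System.Text.Json.Serialization;

// Simple domain model the plugin exposes to the LLM. Property names are
// inferred from the initializer above; the JSON attributes are optional but
// give the model stable, lowercase field names to work with.
public class LightModel
{
    [JsonPropertyName("id")]
    public int Id { get; set; }

    [JsonPropertyName("name")]
    public string Name { get; set; } = string.Empty;

    [JsonPropertyName("is_on")]
    public bool IsOn { get; set; }

    [JsonPropertyName("brightness")]
    public Brightness Brightness { get; set; }

    [JsonPropertyName("color")]
    public string? Color { get; set; }
}

// Serialize the enum as a string ("Low"/"Medium"/"High") instead of a number
[JsonConverter(typeof(JsonStringEnumConverter))]
public enum Brightness
{
    Low,
    Medium,
    High
}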
  • Inject the Kernel, the Chat Completion Service and the Plugin. I have used an extension method to keep my Program.cs file clean (a rough sketch of it follows the snippet below).
// Add Kernel and Chat Service
services.AddKernel()
    .AddOllamaChatCompletion("llama3.2", httpClient);
// Add Plugins
services.AddSingleton<KernelPlugin>(sp => KernelPluginFactory.CreateFromType<LightsPlugin>(serviceProvider: sp));
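For reference, here is a rough sketch of what that extension method might look like. The method name and the HttpClient setup are my own assumptions (Ollama listens on http://localhost:11434 by default); only the registration lines mirror the post:

using Microsoft.SemanticKernel;

// Hypothetical extension method that bundles the Semantic Kernel wiring
// so Program.cs stays clean.
public static class SemanticKernelServiceExtensions
{
    public static IServiceCollection AddSemanticKernelWithOllama(this IServiceCollection services)
    {
        // Ollama's default endpoint; adjust if your instance runs elsewhere.
        var httpClient = new HttpClient
        {
            BaseAddress = new Uri("http://localhost:11434"),
            Timeout = TimeSpan.FromMinutes(2) // CPU-only inference can be slow
        };

        // Add Kernel and Chat Service
        services.AddKernel()
            .AddOllamaChatCompletion("llama3.2", httpClient);

        // Add Plugins
        services.AddSingleton<KernelPlugin>(sp =>
            KernelPluginFactory.CreateFromType<LightsPlugin>(serviceProvider: sp));

        return services;
    }
}

Program.cs then only needs a single builder.Services.AddSemanticKernelWithOllama(); call.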
  • In your controller, use the injected Kernel to invoke the prompt. (The UserChatRequest DTO it binds is sketched after the controller snippet.)
using Microsoft.AspNetCore.Mvc;
using Microsoft.SemanticKernel;

[ApiController]
[Route("api/[controller]")]
public class ChatController : ControllerBase
{
    private readonly Kernel _kernel;
    private readonly ILogger<ChatController> _logger;

    public ChatController(Kernel kernel, ILogger<ChatController> logger)
    {
        _kernel = kernel;
        _logger = logger;
    }

    [HttpPost]
    public async Task<string?> GetResponse(UserChatRequest chatRequest)
    {
        if (chatRequest.query != null)
        {
            // Auto lets the model decide whether to answer directly or call a plugin function
            PromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
            var chatResult = await _kernel.InvokePromptAsync(chatRequest.query, new(settings));

            _logger.LogInformation("Chat result: {ChatResult}", chatResult.ToString());
            return chatResult.ToString();
        }
        else
        {
            return null;
        }
    }
}
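The UserChatRequest DTO isn't shown in the post; something as small as the record below works, relying on ASP.NET Core's default case-insensitive JSON binding to map the "query" field from the request body:

// Request DTO bound from the JSON body; the lowercase property name matches
// the chatRequest.query usage in the controller above.
public record UserChatRequest(string? query);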

Execution

This is the fun part. Now, instead of just asking a general question, we can ask the LLM to "Get the list of lights", for which we expect it to call the plugin and fetch the list from native C# code.

@sk_functional_calling_webapi_HostAddress = http://localhost:5279

POST {{sk_functional_calling_webapi_HostAddress}}/api/chat/
Content-Type: application/json

{
  "query": "Get the list of lights"
}

###

Output

Status: 200 OK | Time: 10333.09 ms | Size: 184 bytes
Body (text/plain; charset=utf-8, 184 bytes):
Here is the list of lights:

1. Table Lamp - Not on, Medium brightness, White color
2. Porch light - Not on, High brightness, Red color
3. Chandelier - On, Low brightness, Yellow color

Thus we can see how to integrate LLMs into our apps in a way that actually makes sense. This feature is what inspired me to dive deep into LLM integration. I hope it has the same effect on you.

Have a good day.
