Dmitry Bogomolov
Building FatAdvisor: A .NET Nutrition AI Agent. Part 1: Building the Foundation


AI agents

AI is a pretty big deal nowadays, right? Large Language Models (LLMs) help us every day with various tasks. Popular models are already replacing search engine interfaces. Also, if you're a programmer, you probably already use AI-powered features in your IDEs.

For a user, an LLM feels a bit like a huge, smart database you can query in natural language. But there's an intriguing question: how can we make it not only answer our questions, but actually do things? That's where AI agents come in.

AI agent basic scheme

There are many well-written articles and courses about AI agents, so I won't repeat definitions or theory here too much. We're here for some practice, aren't we? The only thing I think is crucial to understand is that a "Tool" can be basically anything that provides data for an LLM (for example, an external database) or the ability to take actions (e.g. a third-party API that allows modifying data in an external system).

What excites me is that an agentic AI can decide by itself whether to use any of the available tools, in what order, and for what purposes. As developers we just provide some "handlebars", but it's up to the LLM how and when to use them.

Driven by a desire to build an agent myself for educational purposes, I looked for a personally useful use case... and that's where FatAdvisor comes in.

The project

Ok, FatAdvisor might sound a bit rude, I admit it. But let me explain! This agent will use the API of the FatSecret app. FatSecret is an amazing app for those who try to track their food intake. I mean, come on — I'm in my mid-30s. I want... no, I have to care about what I consume!

In general, it's great to see your calories, fats, carbs and proteins if you have specific goals — whether it’s losing some weight, gaining muscle mass, or just being curious (and nerdy) enough to track your diet no matter what.

Thus, I've been using the app for quite a while, and I became curious: how could I use this data that’s already being collected there?

So the idea was to give an AI model access to my FatSecret profile and ask it for advice about my nutritional habits. And that’s how the name FatAdvisor appeared.

The first use case scenario for it would be:

  • retrieve my food diary for the last couple of days,
  • retrieve my weight diary for the same period,
  • analyze them and answer my questions like: “What should I eat for dinner today?” or “What should I change in my habits?”

Fortunately, FatSecret provides an HTTP REST API that can provide this data.

Fat Secret API integration

It goes technical

Choosing the technologies

Ok, now let's move on to the real world and choose the technologies for our project.

I am a .NET developer. I love it so much that writing C# for eight hours a day at work isn’t enough — I still want to keep coding in my personal time!

But seriously, I just feel like I can focus more on the agentic AI part of my project if I use a language and framework I’m already comfortable with. Besides, .NET is a very stable and mature stack for building maintainable, production-ready solutions.

.NET already provides several great tools for integrating AI into applications. For this project, I chose Semantic Kernel — an SDK designed for creating and orchestrating AI agents. Sounds exactly like what we need!

Adopting Semantic Kernel

With the help of Semantic Kernel connectors we can use different AI model providers.
Speaking of which, I'm currently playing with GitHub Models. Free usage is available, and that is good for a Proof-of-Concept type of project.

For prototyping goals I decided not to implement a web UI yet and instead start with a simple console app.

In this first article, we’ll focus on wiring up Semantic Kernel with GitHub Models in a minimal console application. Later, we’ll expand the solution with a plugin for the FatSecret API.

Solution structure

Our system design will be very simple — I don’t want to overcomplicate things at this prototype stage. Let’s create a Visual Studio solution with three projects:

  • FatAdvisor.Console - a console application that will act as a very basic "UI" for user interaction.
  • FatAdvisor.Ai - the core project of the solution. It contains all the logic for working with Semantic Kernel and other AI-related things.
  • FatAdvisor.FatSecretApi - let's locate all the details of working with FatSecret API inside this project. You know, DTOs for HTTP requests/responses, HTTP clients, well, all the things you do when integrating with a third-party API.

Let's create the solution, add the projects, and set up the references between them:

dotnet new sln -n FatAdvisor

dotnet new console -o FatAdvisor.Console
dotnet new classlib -o FatAdvisor.Ai
dotnet new classlib -o FatAdvisor.FatSecretApi

dotnet sln FatAdvisor.sln add ./FatAdvisor.Console/FatAdvisor.Console.csproj
dotnet sln FatAdvisor.sln add ./FatAdvisor.Ai/FatAdvisor.Ai.csproj
dotnet sln FatAdvisor.sln add ./FatAdvisor.FatSecretApi/FatAdvisor.FatSecretApi.csproj

dotnet add ./FatAdvisor.Console/FatAdvisor.Console.csproj reference ./FatAdvisor.Ai/FatAdvisor.Ai.csproj
dotnet add ./FatAdvisor.Console/FatAdvisor.Console.csproj reference ./FatAdvisor.FatSecretApi/FatAdvisor.FatSecretApi.csproj

Next, let’s install the required NuGet packages:

dotnet add FatAdvisor.Console package Autofac
dotnet add FatAdvisor.Console package Autofac.Extensions.DependencyInjection
dotnet add FatAdvisor.Console package Microsoft.Extensions.Configuration.EnvironmentVariables
dotnet add FatAdvisor.Console package Microsoft.Extensions.Configuration.UserSecrets
dotnet add FatAdvisor.Console package Microsoft.Extensions.Hosting
dotnet add FatAdvisor.Console package Microsoft.Extensions.Logging
dotnet add FatAdvisor.Console package Microsoft.Extensions.Logging.Console
dotnet add FatAdvisor.Console package Microsoft.SemanticKernel

dotnet add FatAdvisor.FatSecretApi package Autofac
dotnet add FatAdvisor.FatSecretApi package Microsoft.Extensions.Configuration.Abstractions
dotnet add FatAdvisor.FatSecretApi package Microsoft.Extensions.Http

dotnet add FatAdvisor.Ai package Autofac
dotnet add FatAdvisor.Ai package Microsoft.Extensions.Configuration.Abstractions
dotnet add FatAdvisor.Ai package Microsoft.SemanticKernel

Note: The solution was tested with the following versions:

  • Autofac – 8.1.0
  • Autofac.Extensions.DependencyInjection – 10.0.0
  • Microsoft.Extensions.* – 9.0.8
  • Microsoft.SemanticKernel – 1.62.0

AI model

Now we need to choose what we'll use as the "brain" of our system. Since this project is more about self-education than building a real production-ready app, I was looking for a free, easy-to-use solution — and chose GitHub Models. This platform gives us a chance to play with various LLMs. You need to create one single token, and then you can quickly switch between models whenever you want.

For more information about GitHub Models, including limits and quick-start examples, please visit the GitHub Docs page.

First we need to create a token. There are two options: fine-grained tokens and classic tokens. GitHub recommends switching to fine-grained tokens when possible. The difference is that a classic token has access to all your repositories, while for a fine-grained one you can restrict which repositories it can access, or even grant no repository access at all (which is what we need here).

  1. Go to https://github.com/settings/personal-access-tokens
  2. Select "Fine-grained tokens" and click "Generate new token".
  3. Choose any name for your token, set an expiration time, and adjust the settings as you prefer.
  4. Under “Permissions”, click “Add permissions”, then choose “Models”.

Adding permissions to the token

  5. Click “Generate token”. After the token is generated, copy it and store it securely, because you won’t be able to view it again later.

Finally, we need to pick which model to use. Here's the Marketplace for GitHub Models where you can see what is available. For my project, I'll start with GPT‑4o mini.

Starting coding!

Dependency Injection (DI)

First, let's create an Autofac module in both the Ai and Api projects. We use Autofac for dependency injection across the solution. Initially, the DI configuration will be pretty small, but in the next chapters the modules will be useful for registering new components.

By the way, if you're not familiar with the concept of modules in Autofac — they are classes that help you break down dependency injection configuration into logically isolated files. You can read more about them in the Autofac documentation.

FatAdvisor.FatSecretApi/FatSecretApiModule.cs

using Autofac;
using Microsoft.Extensions.Configuration;

namespace FatAdvisor.FatSecretApi
{
    public class FatSecretApiModule : Module
    {
        private readonly IConfiguration _config;

        public FatSecretApiModule(IConfiguration config) => _config = config;

        protected override void Load(ContainerBuilder builder)
        {
        }
    }
}


That's it for the Api module for now. It's empty because we don't have anything to register yet.

The Ai module is a bit more interesting:

FatAdvisor.Ai/FatSecretAiModule.cs

using Autofac;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

namespace FatAdvisor.Ai
{
    public class FatSecretAiModule : Module
    {
        private readonly IConfiguration _config;

        public FatSecretAiModule(IConfiguration config) => _config = config;

        protected override void Load(ContainerBuilder builder)
        {
            var apiKey = _config["GitHubModels:ApiKey"]
                ?? throw new InvalidOperationException(
                    "Missing configuration value: GitHubModels:ApiKey");
            var endpoint = _config["GitHubModels:Endpoint"]
                ?? throw new InvalidOperationException(
                    "Missing configuration value: GitHubModels:Endpoint");
            var modelId = "gpt-4o-mini";

            builder.Register(ctx =>
            {
                var loggerFactory = ctx.Resolve<ILoggerFactory>();

                var kernelBuilder = Kernel.CreateBuilder();

                kernelBuilder.Services.AddSingleton(loggerFactory);

                kernelBuilder.AddOpenAIChatCompletion(
                    modelId: modelId,
                    apiKey: apiKey,
                    endpoint: new Uri(endpoint));

                return kernelBuilder.Build();
            }).As<Kernel>()
              .SingleInstance();
        }
    }
}


Here, we use configuration values to locate our GitHub Models settings (ApiKey and Endpoint). For now, the model ID is hardcoded to "gpt-4o-mini", though we could easily move it to configuration later.
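As a quick sketch of that follow-up, moving the model ID into configuration would only take a null-coalescing fallback (the GitHubModels:ModelId key is my own naming suggestion, not something this article sets up):

```csharp
// Hypothetical tweak: read the model ID from configuration,
// falling back to the hardcoded default when the key is absent.
var modelId = _config["GitHubModels:ModelId"] ?? "gpt-4o-mini";
```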

Next, we register an instance of the Kernel class. This is a core class of Semantic Kernel, and we will interact with the AI models through it. Later, we will also register our agentic plugins here.

Program.cs

Now let's create an entry point for the console app — Program.cs.
I'll describe what happens in the code through inline comments.

FatAdvisor.Console/Program.cs:

using Microsoft.Extensions.Configuration;
using FatAdvisor.FatSecretApi;
using FatAdvisor.Ai;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Autofac.Extensions.DependencyInjection;
using Autofac;
using Microsoft.Extensions.Hosting;

namespace FatAdvisor.Console
{
    internal class Program
    {
        private static async Task Main(string[] args)
        {
            var host = Host.CreateDefaultBuilder(args)

                    // Adding Autofac for dependency injection
                    .UseServiceProviderFactory(
                        new AutofacServiceProviderFactory()) 

                    // Configure app settings (User Secrets 
                    // + Environment Variables)
                    .ConfigureAppConfiguration((context, config) =>
                    {
                        config.AddUserSecrets<Program>();
                        config.AddEnvironmentVariables();
                    })

                    // We want to log a lot of things
                    // to trace how it actually works
                    .ConfigureLogging(logging =>
                    {
                        logging.ClearProviders();
                        logging.AddConsole();
                        logging.SetMinimumLevel(LogLevel.Debug);
                    })

                    // Register Autofac modules and application services
                    .ConfigureContainer<ContainerBuilder>((context, builder) =>
                    {
                        // Register your modules here
                        builder.RegisterModule(
                            new FatSecretAiModule(context.Configuration));
                        builder.RegisterModule(
                            new FatSecretApiModule(context.Configuration));

                        builder.RegisterType<ConsoleAppRunner>().AsSelf();
                    })
                    .Build();

            // Create a scope and run the application
            using var scope = host.Services.CreateScope();
            var app = scope.ServiceProvider
                .GetRequiredService<ConsoleAppRunner>();
            await app.RunAsync();
        }
    }
}

ConsoleAppRunner

This is probably the most interesting part of the first draft. Inside this class, we'll actually run our model — providing a system message, simulating user input, and observing the AI’s response through the chat completion service.

FatAdvisor.Console/ConsoleAppRunner.cs:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

namespace FatAdvisor.Console
{
    public class ConsoleAppRunner
    {
        private readonly Kernel _kernel;

        // Injection is controlled by Autofac
        public ConsoleAppRunner(Kernel kernel) 
        {
            _kernel = kernel;
        }

        public async Task RunAsync()
        {
            var chatHistory = new ChatHistory();
            chatHistory.AddSystemMessage(
                "You are a nutrition and training assistant. " +
                "Although you don’t have access to any external data, " +
                "pretend that you’re analyzing the user’s recent food " +
                "and training habits based on their description. " +
                "Provide practical, personalized advice."
            );

            var userMessage =
                "Imagine I’ve been eating quite a lot of carbs " +
                "recently and training hard at the gym. " +
                "Please evaluate how that could affect my weight " +
                "and energy levels, and suggest what I could adjust.";

            chatHistory.AddUserMessage(userMessage);

            var chatCompletion = _kernel
                .GetRequiredService<IChatCompletionService>();

            var settings = new OpenAIPromptExecutionSettings()
            {
                FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
            };

            var response = await chatCompletion
                .GetChatMessageContentAsync(chatHistory, settings, _kernel);

            System.Console.WriteLine(response.Content);

            if (response.Metadata is not null)
                foreach (var kvp in response.Metadata)
                    System.Console.WriteLine($"{kvp.Key}: {kvp.Value}");
        }
    }
}

In RunAsync(), we:

  1. Create a chat history:
    It contains both a system message (instructions for the assistant’s role and behavior) and a user message (the question or request). Together, they define the full context the model receives.

  2. Resolve the chat completion service
    From the Kernel instance, we obtain an implementation of IChatCompletionService. Since we registered the OpenAI chat completion service in FatSecretAiModule, this interface is automatically resolved to that implementation.

  3. Configure prompt execution behavior
    In the prompt execution settings we set only one option, but a pretty important one:

    var settings = new OpenAIPromptExecutionSettings()
    {
        FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
    };
    

    What does it mean? We discussed the concept of AI agents at the beginning of this article. And though we haven't created an agentic plugin yet, we can already instruct the model how to behave when plugins are present.

    The available options are:

    • None – well, our agent won't call any functions but may describe which one it would call (good for testing purposes and for research).
    • Auto – the model decides on its own whether to call a function and how to use it.
    • Required – with this option we could actually force the agent to call some functions.

    In this exercise we'll use Auto mode and let our model shine, choosing its own way to use the available functions.

  4. Print the results
    Finally, we output the model’s text response, and then iterate over the response metadata to print diagnostic details such as token usage:

    foreach (var kvp in response.Metadata)
        System.Console.WriteLine($"{kvp.Key}: {kvp.Value}");
    
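For comparison with the Auto mode used above, here is roughly how the other two behaviors from step 3 would look in code (a sketch based on the Semantic Kernel FunctionChoiceBehavior API; check the docs for your SK version for the exact overloads):

```csharp
// "None": the model may describe a function call, but nothing is invoked.
var noneSettings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.None(),
};

// "Required": the model is forced to call one of the available functions.
var requiredSettings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Required(),
};
```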

Running

For storing tokens and other sensitive settings, we'll use User Secrets instead of embedding them into appsettings.json or environment files. From your terminal, navigate to the FatAdvisor.Console folder and run the following commands:

dotnet user-secrets init
dotnet user-secrets set "GitHubModels:ApiKey" "<your GitHub models token>"
dotnet user-secrets set "GitHubModels:Endpoint" "https://models.inference.ai.azure.com"

Now run the console app. Because we set up a pretty verbose log level, we will see some technical information along with the response from the AI model:

dbug: Microsoft.SemanticKernel.Connectors.OpenAI.OpenAIChatCompletionService[0]

      Function choice behavior configuration: Choice:auto, AutoInvoke:True, AllowConcurrentInvocation:False, AllowParallelCalls:(null) Functions:None (Function calling is disabled)

info: Microsoft.SemanticKernel.Connectors.OpenAI.OpenAIChatCompletionService[0]

      Prompt tokens: 114. Completion tokens: 485. Total tokens: 599.

When you've been consuming a lot of carbohydrates and training hard at the gym, several factors come into play regarding your weight and energy levels:



### Effects on Weight:

1. **Glycogen Storage**: Carbohydrates are stored in your muscles and liver as glycogen. For every gram of glycogen, about 3 grams of water are stored as well. This can cause temporary weight gain as your body increases its glycogen stores.

...


Ok, I think there's no need to copy the whole answer here. We still aren't providing any personal data from FatSecret, so the AI output is mostly a general nutrition explanation.

Let’s highlight a couple of interesting details from the log:

Function choice behavior configuration: 
    Choice:auto,
    AutoInvoke:True,
    AllowConcurrentInvocation:False,
    AllowParallelCalls:(null)
    Functions:None (Function calling is disabled)

Here we can see that function calling is configured for “Auto” mode, but no functions are actually registered yet — which is expected at this stage of the project.

And here:

Prompt tokens: 114. Completion tokens: 485. Total tokens: 599.

we see that we sent 114 input tokens to the model and received 485 in response (599 is the sum of input and output). Tracking these stats will help us later optimize costs and usage when we integrate more complex prompts or larger models.
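If we later want these numbers programmatically instead of scraping logs, the metadata dictionary we already print is the place to look. The exact key depends on the connector; for the OpenAI connector it appears as "Usage" at the time of writing, but treat this as an assumption to verify against your Semantic Kernel version:

```csharp
// Assumed metadata key "Usage" (OpenAI connector); verify for your SK version.
if (response.Metadata is not null &&
    response.Metadata.TryGetValue("Usage", out var usage))
{
    System.Console.WriteLine($"Token usage: {usage}");
}
```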

Conclusions

I'd like to stop here so as not to overload the journey with too much information at once. We now have something we can improve and build upon: the system already has a basic structure and working AI features.

In the next chapters we will:

  • create a plugin for Semantic Kernel containing functions that will allow our AI agent to access food and weight logs,
  • implement API integration with FatSecret to retrieve this data — including building an HTTP client, handling authentication, and adding a simple local token storage,
  • improve the prompts to make our FatAdvisor smarter and more helpful.
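As a teaser, the plugin from the first bullet above might take roughly this shape (the class name, method, and the IFatSecretClient abstraction are my guesses for illustration, not the final implementation):

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Hypothetical client abstraction for the FatSecret API; not the final design.
public interface IFatSecretClient
{
    Task<IReadOnlyList<string>> GetFoodEntriesAsync(string date);
}

// Hypothetical sketch of a Semantic Kernel plugin; names are illustrative.
public class FoodDiaryPlugin
{
    private readonly IFatSecretClient _client;

    public FoodDiaryPlugin(IFatSecretClient client) => _client = client;

    // [KernelFunction] exposes the method to the model; [Description] helps
    // the LLM decide when to call it and what arguments to pass.
    [KernelFunction, Description("Returns the user's food diary for a date.")]
    public async Task<string> GetFoodDiaryAsync(
        [Description("Date in yyyy-MM-dd format")] string date)
    {
        var entries = await _client.GetFoodEntriesAsync(date);
        return string.Join("\n", entries);
    }
}
```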

Thanks for reading — and see you in the next part!
