Gunnar Grosch for AWS

DEV Track Spotlight: Building .NET AI Applications with Semantic Kernel and Amazon Bedrock (DEV302)

The .NET developer community got a special treat at AWS re:Invent 2025 with DEV302, where AM Grobelny (AWS Principal Developer Advocate) and Nicki Stone (Amazon Principal Engineer) delivered a code-first deep dive into building AI applications with .NET. What made this session particularly interesting was the timing: Microsoft had just announced their shift from Semantic Kernel to the new Microsoft Agent Framework, forcing the speakers to pivot their entire talk to cover both the established and the emerging.

"We were going to do a Semantic Kernel talk," AM explained at the start. "And then, as many of you in the audience probably are following along, Microsoft said goodbye to Semantic Kernel and brought in Agent Framework. So we will have some Semantic Kernel in here, but we are also going to bring everybody into the future with Agent Framework as well."


The Foundation: What Makes an Agent?

Before diving into frameworks and code, AM and Nicki established a clear definition of what constitutes an agent. Drawing from AWS Distinguished Engineer Mark Brooker, who co-opted a definition from Simon Willison, they explained that an agent has two core requirements:

An LLM that exists in a loop - The system must be able to continuously process and respond to inputs

Tools to extend functionality - The LLM needs the ability to call external functions to accomplish tasks beyond its base capabilities

"That's it," AM emphasized. "But this is more of a philosophical thing than it is a technology thing."

The speakers highlighted real-world use cases where LLM applications shine: summarizing meeting notes and extracting action items, personalizing user experiences, and creating support chatbots with access to documentation. These practical applications set the stage for understanding why the frameworks they were about to demonstrate matter.

Microsoft.Extensions.AI: The Abstraction Layer

At the foundation of everything discussed in this session sits Microsoft.Extensions.AI, a library that abstracts away the complexity of working with different LLM providers. As AM explained, this space moves extremely fast, and rather than learning each individual method of invocation for different providers, Microsoft built an abstraction layer.

"Because you're just trying to typically build something like a chat client, right?" Nicki added. "So you can have an abstract chat client that then is built into a concrete class by a provider like Anthropic, or in our case AWS."

The beauty of this approach is its simplicity. With just a few lines of code, developers can add a chat client and request responses without worrying about provider-specific implementation details. This abstraction becomes the foundation for both Semantic Kernel and the newer Microsoft Agent Framework.
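To make the abstraction concrete, here is a minimal sketch of what provider-agnostic calling code looks like with Microsoft.Extensions.AI. The `AskAsync` helper is hypothetical; the point is that it depends only on the `IChatClient` interface, never on a specific provider:

```csharp
using Microsoft.Extensions.AI;

// The calling code depends only on the IChatClient abstraction; any
// provider's concrete client (Anthropic, AWS, OpenAI, ...) plugs in here.
static async Task<string> AskAsync(IChatClient client, string question)
{
    ChatResponse response = await client.GetResponseAsync(question);
    return response.Text;
}
```

Swapping providers then means changing only the line that constructs the concrete client, not any of the code that talks to it.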

Semantic Kernel: The Established Framework

Semantic Kernel implements the Microsoft.Extensions.AI interfaces and provides a complete framework for building LLM applications. The framework consists of three main components:

Plugins - Function calls and tools that give the LLM access to capabilities like telling the date, checking weather, or performing web searches and RAG (Retrieval Augmented Generation)

AI Models - Access to various AI models or the ability to specify which models to use

Hooks and Filters - Middleware to intercept prompts before they reach the LLM or responses after they return, enabling data transformation, logging, observability, and permission-based security
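As a sketch of the hooks-and-filters idea, Semantic Kernel's `IFunctionInvocationFilter` interface lets middleware wrap every function or tool invocation. This minimal logging filter assumes the Semantic Kernel 1.x filter API; the log messages are illustrative:

```csharp
using Microsoft.SemanticKernel;

// Runs before and after every function/tool invocation the kernel makes,
// which is where logging, observability, or permission checks can live.
public class LoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking {context.Function.Name}");
        await next(context); // run the actual function
        Console.WriteLine($"Result: {context.Result}");
    }
}
```

A filter like this could also short-circuit the call (by not invoking `next`) to enforce permission-based security before a tool ever runs.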

Nicki drew a helpful parallel to JavaScript frameworks: "You remember Ajax? A lot of these, prior to things like React and Angular, you would make the Ajax calls, you would redraw the UI, right? You would write that all in JavaScript code. Then they started building frameworks that did this for you, right? So it's exactly the same idea."

The code demonstration showed how straightforward Semantic Kernel makes building an agent. With about 10 lines of code, developers can establish a Bedrock Runtime client, build the Semantic Kernel, add Bedrock as the completion service, and start making chat completion calls. The framework includes native integration with Amazon Bedrock, making it seamless to work with AWS services.
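The setup described above looks roughly like the following sketch. The model ID is illustrative, and the exact extension method shape comes from the Semantic Kernel Amazon Bedrock connector as I understand it, so treat the details as assumptions:

```csharp
using Amazon.BedrockRuntime;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Build the kernel and register Bedrock as the chat completion service.
var builder = Kernel.CreateBuilder();
builder.AddBedrockChatCompletionService(
    "amazon.nova-pro-v1:0",              // illustrative model ID
    new AmazonBedrockRuntimeClient());   // uses the default AWS credentials
Kernel kernel = builder.Build();

// Make a chat completion call through the registered service.
var chat = kernel.GetRequiredService<IChatCompletionService>();
var reply = await chat.GetChatMessageContentAsync("Say hello in one sentence.");
Console.WriteLine(reply.Content);
```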

One key feature Nicki highlighted was the plugin system. By adding a simple DatePlugin (because LLMs cannot tell time), the agent gained the ability to provide current date information. The plugin uses annotations to describe its functionality, helping the LLM understand when and how to use each tool.
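A date plugin of the kind described might be sketched like this, using Semantic Kernel's attribute-based annotations (the method name and description text are illustrative):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// The [Description] annotation tells the LLM what the tool does and when
// to call it; the model cannot determine the current date on its own.
public class DatePlugin
{
    [KernelFunction]
    [Description("Returns the current date in ISO 8601 format.")]
    public string GetCurrentDate() => DateTime.UtcNow.ToString("yyyy-MM-dd");
}
```

Registering it is a one-liner, e.g. `kernel.Plugins.AddFromType<DatePlugin>();`, after which the LLM can decide on its own when the tool is needed.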

Microsoft Agent Framework: The Future

The announcement of Microsoft Agent Framework represented a significant shift in the .NET AI landscape. As AM noted, this framework combines the best aspects of AutoGen and Semantic Kernel into something much simpler.

"It feels like Semantic Kernel was basically like, let's get something out the door really fast because this space is moving really quickly," Nicki observed. "And then it feels like this one, they really thought out and they thought, how would a customer be using this? What easy code, what's ease of use basically for writing code for an LLM app."

The code comparison was striking. Where Semantic Kernel required about 10 lines of code, Agent Framework accomplished the same functionality with significantly less code. The framework offers:

  • Tool calling capabilities
  • The essential agent loop
  • Multi-turn conversations
  • Memory support (both short-term and long-term)
  • Chat history management
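A sketch of the Agent Framework style, showing the tool calling, agent loop, and multi-turn pieces from the list above in one place. Agent Framework is in preview, so names may shift; the model ID is illustrative, and the Bedrock wiring assumes the `AsIChatClient` extension from AWSSDK.Extensions.Bedrock.MEAI (covered below):

```csharp
using Amazon.BedrockRuntime;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

// A tool the agent can call; the framework supplies the loop around it.
static string GetCurrentDate() => DateTime.UtcNow.ToString("yyyy-MM-dd");

IChatClient chatClient = new AmazonBedrockRuntimeClient()
    .AsIChatClient("amazon.nova-pro-v1:0");          // illustrative model ID

AIAgent agent = chatClient.CreateAIAgent(
    instructions: "You are a helpful assistant.",
    tools: [AIFunctionFactory.Create(GetCurrentDate,
        description: "Returns today's date in ISO 8601 format.")]);

// A thread carries chat history, so follow-up calls are multi-turn.
AgentThread thread = agent.GetNewThread();
Console.WriteLine(await agent.RunAsync("What day is it today?", thread));
```

Compared with the Semantic Kernel version, there is no kernel, no plugin class, and no explicit completion service: the agent loop, tool registration, and history management are all handled by the framework.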

However, there is a critical caveat: Agent Framework is still in preview, and unlike Semantic Kernel, it does not have native support for AWS services yet. This is where the AWS .NET team's work becomes crucial.

AWSSDK.Extensions.Bedrock.MEAI: Bridging the Gap

To enable .NET developers to use Agent Framework with Amazon Bedrock, AWS built AWSSDK.Extensions.Bedrock.MEAI. This package implements the Microsoft.Extensions.AI interfaces specifically for Amazon Bedrock, providing:

  • Embedding generator support
  • Image generation capabilities
  • Seamless integration with both Semantic Kernel and Agent Framework

"While there's not native support like there is in Semantic Kernel, you can still use Bedrock very easily," AM explained, demonstrating how developers can pull in the AWSSDK.Extensions.Bedrock.MEAI package as a dependency and immediately start working with Amazon Bedrock models through Agent Framework.
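As I understand the package surface, AWSSDK.Extensions.Bedrock.MEAI exposes the Bedrock Runtime client through the Microsoft.Extensions.AI interfaces via extension methods. A sketch of the wiring, with illustrative model IDs:

```csharp
using Amazon.BedrockRuntime;
using Microsoft.Extensions.AI;

var bedrock = new AmazonBedrockRuntimeClient();

// Chat completion through the shared Microsoft.Extensions.AI abstraction.
IChatClient chat = bedrock.AsIChatClient("amazon.nova-pro-v1:0");

// Embedding generation from the same package.
IEmbeddingGenerator<string, Embedding<float>> embeddings =
    bedrock.AsIEmbeddingGenerator("amazon.titan-embed-text-v2:0");
```

Because both objects implement the shared abstractions, they can be handed directly to Semantic Kernel or Agent Framework without either framework knowing it is talking to Bedrock.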

The speakers made an important ask of the community: provide feedback on how you want to use Agent Framework with AWS. Since the framework is open source and still evolving, community input will shape how AWS services integrate with these emerging tools.

Amazon Bedrock: The AI Service Foundation

Before diving into the infrastructure components, Nicki provided context on Amazon Bedrock itself: "Amazon Bedrock is our AWS cloud service that provides fully functioning models and optimizations to app developers. If I just want an LLM today, I don't wanna train anything. I wanna use one of Anthropic's models, or maybe I wanna use Nova, Amazon's model, where am I gonna go? I'm gonna go to Amazon Bedrock."

This distinction from Amazon SageMaker is important. While SageMaker serves traditional ML use cases with model training and data transformation, Amazon Bedrock focuses on providing ready-to-use foundation models for application developers who want to integrate AI capabilities quickly.

Amazon Bedrock AgentCore: A Composable Agentic Platform

Amazon Bedrock AgentCore is a comprehensive platform for building, deploying, and managing production-ready agentic applications. Rather than a single service, AgentCore provides a suite of composable, serverless components that work together to handle the complex infrastructure requirements of AI agents.

The platform consists of multiple specialized services:

Runtime - Deploys and executes containerized agentic applications with automatic scaling, session isolation, and maintenance

Gateway - Converts REST APIs with OpenAPI specifications into MCP (Model Context Protocol) servers, enabling seamless tool integration

Memory - Provides both short-term and long-term memory capabilities for agents to maintain context across conversations

Identity - Manages authentication and authorization for agent interactions

Browser - Enables agents to interact with web content programmatically

Code Interpreter - Provides isolated sandboxes for secure execution of LLM-generated code

Observability - Offers monitoring, logging, and tracing capabilities for production agents

Evaluations - Facilitates testing and validation of agent behavior

Policy - Enforces governance and compliance rules for agent operations

The key insight is that these components are composable. Developers can use just Runtime for basic deployment, add Gateway for tool integration, include Code Interpreter for data analysis capabilities, and layer on Memory for conversational context, all without managing the underlying infrastructure.

AgentCore Runtime in Detail

Runtime handles the deployment and execution of LLM applications in containers. Developers provide an Amazon ECR (Elastic Container Registry) image, and Runtime handles everything else: maintenance, scaling, and crucially, session isolation.

"You don't want questions from user Nicki leaking into user AM session," AM joked, highlighting the importance of proper session management in production LLM applications.

Runtime operates on Firecracker, AWS's microVM technology, providing isolated execution environments. Applications must implement two specific endpoints:

  • /ping - Health check endpoint
  • /invocations - The actual agent invocation endpoint

The architecture is elegant: users don't call Runtime directly. Instead, they use the AWS SDK to call InvokeAgent, which then routes the prompt to the /invocations endpoint in the Runtime container. This abstraction handles session management, scaling, and isolation automatically.

AgentCore Gateway: Automatic MCP Server Creation

Gateway emerged as one of the session's most exciting features. As AM enthusiastically explained: "You don't have to build an MCP server, right? That's the premise of this."

If you have a REST API with an OpenAPI spec, Gateway can automatically convert it into an MCP (Model Context Protocol) server. You can also use Lambda functions as targets. A single Gateway endpoint can serve multiple MCP servers, and it handles API key management for your REST APIs.

"It's really, really easy," Nicki confirmed. "We actually used it in our example and we're gonna show you how it works."

AgentCore Code Interpreter: Secure Code Execution

Code Interpreter provides an isolated sandbox for LLM-generated code execution. Nicki provided a clear use case: "Let's say I tell the LLM, Hey, I want you to read this CSV file and I want you to calculate the highest revenue in XYZ category, which is relevant to the CSV file. Well, my LLM needs to go write some code to extract the contents of that CSV and then run the calculation."

The Code Interpreter spins up an isolated container with no outbound internet access, ensuring security. The LLM can write code, execute it, and return results. The container supports Python, JavaScript, and TypeScript, and you can assign IAM permissions to allow access to other AWS resources.

AM highlighted an important architectural decision: "You may be thinking, why don't I just implement, you know, in C#, a tool that does the thing that I'm trying to do in Code Interpreter? Well, number one, Code Interpreter supports Python, JavaScript, TypeScript currently. So maybe you need to do something with a Python library, right?"

The ability to include files alongside the code makes Code Interpreter particularly powerful for data analysis scenarios using Python's rich ecosystem of statistical and data science libraries.

The Live Demo: Horoscope Agent

The speakers demonstrated their concepts with a practical horoscope agent application, showcasing the differences between Semantic Kernel and Agent Framework implementations, both deployed to AgentCore Runtime.

The Application Architecture

The demo project consisted of four main components:

HoroscopeAPI - A Lambda function deployed using AWS SAM (AWS Serverless Application Model) with API Gateway and DynamoDB. The API generates horoscopes using Amazon Bedrock and caches them in DynamoDB to avoid regenerating the same horoscope multiple times.

HoroscopeUI - An MVC application providing a user interface that interacts with AgentCore Runtime via the AWS SDK.

Semantic Kernel Implementation - An LLM application with limited capabilities (daily horoscopes only) deployed to AgentCore Runtime.

Agent Framework Implementation - A more feature-rich LLM application supporting daily, weekly, and monthly horoscopes, also deployed to AgentCore Runtime.

Semantic Kernel in Action

The Semantic Kernel implementation demonstrated two key plugins:

DatePlugin - A simple tool that returns the current date, solving the fundamental problem that LLMs cannot tell time.

HoroscopePlugin - A tool that makes direct API calls to the HoroscopeAPI to retrieve daily horoscopes.

AM highlighted an important limitation: "I'm gonna have to build a tool for each of these endpoints, right? I'm gonna have to build a tool for daily and build a tool for weekly and build a tool for monthly. That's not true if I'm using an MCP server, for example."

When asked for a weekly horoscope, the Semantic Kernel implementation could only provide a daily one because it lacked the necessary tools.

Agent Framework with MCP Integration

The Agent Framework implementation showcased the power of MCP server integration through AgentCore Gateway. Instead of manually creating tools for each API endpoint, the application connected to an MCP server that automatically exposed all the HoroscopeAPI endpoints (daily, weekly, monthly) as tools.

The code was remarkably clean. Using the standard MCP client library for .NET, the application called ListToolsAsync on the Gateway endpoint, which returned all available tools from the OpenAPI spec. Agent Framework then automatically registered these as function calls the LLM could use.
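The pattern described might be sketched as follows using the MCP C# SDK (the `ModelContextProtocol` package); the Gateway URL is hypothetical, and the transport type reflects the SDK as I understand it:

```csharp
using ModelContextProtocol.Client;

// Connect to the Gateway's MCP endpoint (URL is hypothetical).
var transport = new SseClientTransport(new SseClientTransportOptions
{
    Endpoint = new Uri("https://my-gateway.example.amazonaws.com/mcp")
});
var mcpClient = await McpClientFactory.CreateAsync(transport);

// Every operation in the OpenAPI spec comes back as a callable tool:
// daily, weekly, and monthly horoscope endpoints, with no hand-written
// tool classes.
var tools = await mcpClient.ListToolsAsync();
foreach (var tool in tools)
    Console.WriteLine($"{tool.Name}: {tool.Description}");
```

Since the returned tools are `AIFunction` instances, the list can be passed straight to an Agent Framework agent as its tool set.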

When asked for a weekly horoscope, the Agent Framework implementation successfully retrieved and displayed the information, demonstrating the flexibility of the MCP approach.

"This week brings a powerful wave of intuitive energy your way, dear Pisces," the LLM responded, calculating the appropriate date range through the API.

Invoking AgentCore Runtime

Nicki walked through the specific code needed to invoke agents deployed to AgentCore Runtime, emphasizing an important detail: "It's not the Bedrock SDK or the Bedrock Runtime SDK. It is literally Bedrock Runtime AgentCore. It's super specific. You can easily get it wrong."

The invocation code creates a RuntimeRequest object with three key components:

  • RuntimeArn - The Amazon Resource Name identifying the deployed agent
  • SessionId - Enables session isolation; new IDs create new sessions, reused IDs continue existing conversations
  • Payload - The actual prompt being sent to the agent

The code then calls InvokeAgentRuntimeAsync and processes the returned message. This abstraction handles all the complexity of routing requests to the containerized agent, managing sessions, and returning responses.
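Putting the three components together, the invocation might look like the sketch below. The exact type and property names are assumptions based on the talk's description of the AgentCore SDK for .NET, and the ARN is a truncated placeholder:

```csharp
using System.Text;
using Amazon.BedrockAgentCore;
using Amazon.BedrockAgentCore.Model;

// Note Nicki's warning: this is the AgentCore client, not the Bedrock
// or Bedrock Runtime client.
var client = new AmazonBedrockAgentCoreClient();

var response = await client.InvokeAgentRuntimeAsync(new InvokeAgentRuntimeRequest
{
    AgentRuntimeArn = "arn:aws:bedrock-agentcore:...",  // deployed agent's ARN
    RuntimeSessionId = Guid.NewGuid().ToString(),       // new ID = new session
    Payload = new MemoryStream(Encoding.UTF8.GetBytes(
        """{"prompt":"What is my weekly horoscope?"}"""))
});
// The agent's reply comes back as a stream on the response object.
```

Reusing the same `RuntimeSessionId` on a later call continues the existing conversation instead of starting a fresh one.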

Container Requirements for Runtime

For a container to work in AgentCore Runtime, it must meet specific requirements:

  • Run on port 8080
  • Implement a /ping endpoint for health checks
  • Implement an /invocations endpoint for processing agent requests

Inside the /invocations endpoint, developers can implement any logic they want. In this demo, the endpoint called into either Semantic Kernel or Agent Framework to process the request using the respective framework's capabilities.
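A minimal ASP.NET Core `Program.cs` satisfying that contract might look like this sketch; the handler body is a stand-in for the real framework call, and the response shape is illustrative:

```csharp
// Minimal container entry point for AgentCore Runtime:
// port 8080, /ping for health checks, /invocations for agent requests.
var builder = WebApplication.CreateBuilder(args);
builder.WebHost.UseUrls("http://0.0.0.0:8080");
var app = builder.Build();

app.MapGet("/ping", () => Results.Ok(new { status = "Healthy" }));

app.MapPost("/invocations", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    string payload = await reader.ReadToEndAsync();
    // Hand the prompt to Semantic Kernel or Agent Framework here;
    // this sketch just echoes the incoming payload back.
    return Results.Ok(new { result = payload });
});

app.Run();
```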

Important Caveats and Community Feedback

Throughout the session, AM repeatedly emphasized that all the code shown was experimental and should not be deployed to production. The Agent Framework is still in preview, and the .NET support for AgentCore lacks some of the conveniences available in Python.

"All of this is still in preview. Things break, things don't stay the same, right? So all of this code is experimental, the best kind of code, right? The stuff that you deploy out and you go, man, maybe this will work one day," AM joked.

The speakers highlighted several areas where .NET support needs improvement:

Decorator Support - Python has decorators that automatically set up entry points for AgentCore Runtime. .NET developers must implement these manually.

Header Inspection - Session IDs and other metadata come through HTTP headers, requiring manual extraction.

Credential Retrieval - Firecracker uses MMDS (MicroVM Metadata Service), which mimics IMDSv1, but IMDSv1 support has been removed from AWS SDK for .NET version 4. AM provided a custom MMDSCredentials class in the demo code to work around this limitation.

The speakers made a strong call to action for community feedback. With Agent Framework being open source and still evolving, and with AWS actively developing .NET support for AgentCore, now is the time for developers to share their needs and use cases.

"This is my ask to you here is that we as a community can go in, this is an open source project. We can request, 'Hey, we want Bedrock support, we want this support, we want that support,'" AM urged. "That's what I want you all to be thinking as we continue in this talk. Like how would you want to use Agent Framework with AWS and what should we take as feedback from you all?"

Key Takeaways

Microsoft Agent Framework represents the future - While still in preview, it offers a simpler, more thoughtful approach than Semantic Kernel for building LLM applications.

AWSSDK.Extensions.Bedrock.MEAI bridges the gap - Until native AWS support arrives in Agent Framework, this package enables seamless integration with Amazon Bedrock.

Amazon Bedrock AgentCore provides composable infrastructure - Rather than a monolithic service, AgentCore offers specialized components (Runtime, Gateway, Memory, Code Interpreter, and more) that developers can combine based on their needs.

MCP servers simplify tool integration - AgentCore Gateway can convert REST APIs with OpenAPI specs into MCP servers automatically, eliminating the need to manually create tools for each endpoint.

Community feedback shapes the future - With both Agent Framework and AgentCore .NET support actively evolving, developer input will directly influence how these tools develop.

Start experimenting now - While not production-ready, the code and patterns demonstrated provide a foundation for understanding where .NET AI development is heading.

AM and Nicki's advice was clear: start learning these frameworks now, provide feedback to shape their development, and prepare for a future where building sophisticated AI agents in .NET becomes increasingly straightforward. The combination of Microsoft's frameworks and AWS's infrastructure creates a powerful platform for .NET developers entering the AI space.


About This Series

This post is part of DEV Track Spotlight, a series highlighting the incredible sessions from the AWS re:Invent 2025 Developer Community (DEV) track.

The DEV track featured 60 unique sessions delivered by 93 speakers from the AWS Community - including AWS Heroes, AWS Community Builders, and AWS User Group Leaders - alongside speakers from AWS and Amazon. These sessions covered cutting-edge topics including:

  • πŸ€– GenAI & Agentic AI - Multi-agent systems, Strands Agents SDK, Amazon Bedrock
  • πŸ› οΈ Developer Tools - Kiro, Kiro CLI, Amazon Q Developer, AI-driven development
  • πŸ”’ Security - AI agent security, container security, automated remediation
  • πŸ—οΈ Infrastructure - Serverless, containers, edge computing, observability
  • ⚑ Modernization - Legacy app transformation, CI/CD, feature flags
  • πŸ“Š Data - Amazon Aurora DSQL, real-time processing, vector databases

Each post in this series dives deep into one session, sharing key insights, practical takeaways, and links to the full recordings. Whether you attended re:Invent or are catching up remotely, these sessions represent the best of our developer community sharing real code, real demos, and real learnings.

Follow along as we spotlight these amazing sessions and celebrate the speakers who made the DEV track what it was!
