This is the third part of my Practical .NET Guide to AI & LLM series. If you’ve followed along, you have:
- Learned about Microsoft.Extensions.AI (ME.AI): Practical .NET Guide to AI & LLM: Introduction
- Learned how to integrate LLMs correctly into your .NET applications and build robust AI features: How to Correctly Build AI Features in .NET
In this post, we will focus on orchestrating multiple tools (tool chaining) using Microsoft.Extensions.AI. First, we will explore various patterns for chaining tools. Next, we will discuss how to design multi-tool workflows and implement these patterns using familiar .NET dependency injection. Finally, we will cover strategies for robust error handling and debugging your tool chains, as well as how to avoid common pitfalls.
What Is Tool Chaining and Why Should You Care?
Tool chaining, sometimes called function calling or tool invocation in the LLM world, is the process of linking discrete AI-powered tools or agents into a pipeline, where the output of one tool becomes the input of the next. This approach allows complex workflows to be constructed by composing simple, well-defined units of behavior, each focused on a particular subtask.
By breaking down workflows into composable, swappable components, tool chaining enables high scalability, maintainability, and adaptability. Imagine a user query that needs to be classified, enriched with retrieved data, summarized, and then sent as a formatted email, with each step performed by a specialized tool.
This modular approach stands in stark contrast to monolithic workflows, where a single large model handles every stage, usually leading to rigid, hard-to-maintain solutions. There are many reasons to use tool chaining, but to summarize:
- Scalability: Isolate performance hotspots at the tool level.
- Maintainability: Swap or upgrade individual tools without rewriting the workflow.
- Flexibility: Adapt quickly to evolving requirements or new AI capabilities.
- Robustness: Failures can be contained at tool boundaries.
- Interoperability: Bring together LLMs, APIs, data services, and more, using standard .NET patterns.
Multi-Tool Workflows
Multi-tool workflows extend tool chaining with orchestration logic: coordinating more than two tools in complex sequences, handling branching or looping, and aggregating results. Let us discuss common approaches seen in AI orchestration.
Sequential Pipelines (“Chain of Responsibility”)
In a sequential pipeline, tools are arranged in a linear sequence, with each tool passing its result to the next. This is very similar to the classic Chain of Responsibility design pattern that many developers are familiar with.
This approach is best for linear, staged transformations (e.g., preprocess → enrich → postprocess), and you can structure it using interfaces and dependency injection (DI) to construct a pipeline of handlers.
Agent-Oriented (“Swarm”) Models
Inspired by agent frameworks and multi-agent systems, these workflows delegate subtasks to specialized agents, which may run in parallel or communicate iteratively.
This model is well-suited for complex tasks that benefit from specialized agents working together, such as multi-step reasoning or collaborative problem-solving.
Branching/Conditional Workflows
Sometimes, which tool(s) are engaged depends on input or intermediate results (e.g., classify intent, then route to a specific downstream workflow).
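To make this concrete, here is a minimal, hypothetical router built on the IAIWorkflowStep abstraction introduced later in this post; the route table and fallback step are illustrative assumptions, not a prescribed design:
C#
// Hypothetical branching step: pick a downstream sub-workflow based on the classified intent.
public class IntentRouterStep : IAIWorkflowStep
{
    private readonly IReadOnlyDictionary<string, IAIWorkflowStep> _routes;
    private readonly IAIWorkflowStep _fallback;

    public IntentRouterStep(IReadOnlyDictionary<string, IAIWorkflowStep> routes, IAIWorkflowStep fallback)
    {
        _routes = routes;
        _fallback = fallback;
    }

    public Task<AIWorkflowResult> ExecuteAsync(AIWorkflowContext context, CancellationToken token = default)
    {
        // Route on the intent set by an earlier classification step; fall back when it is unknown.
        var target = context.Intent is { } intent && _routes.TryGetValue(intent, out var step)
            ? step
            : _fallback;
        return target.ExecuteAsync(context, token);
    }
}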
Parallel (“Fan-out/Fan-in”) Workflows
In AI-powered systems, not every task needs to be carried out in a strict sequence. Some tasks can and should run in parallel. This is where the Fan-out/Fan-in pattern is particularly effective. It is a workflow model in which a single input is sent to multiple independent tools (fan-out), and their outputs are later collected and combined (fan-in).
This pattern is especially beneficial when tasks are non-blocking, stateless, and do not rely on each other’s outputs. You can think of it like a parallel assembly line. Each station works on the same raw material, and the final product is assembled at the end.
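A minimal sketch, assuming three hypothetical analysis tools (_sentimentTool, _keywordTool, _summaryTool) injected into the class, shows how naturally fan-out/fan-in maps onto Task.WhenAll:
C#
// Hypothetical analysis tools fan out over the same input; Task.WhenAll is the fan-in point.
public async Task<string> AnalyzeAsync(string input, CancellationToken token = default)
{
    // Fan-out: start independent, stateless analyses in parallel.
    Task<string> sentiment = _sentimentTool.RunAsync(input, token);
    Task<string> keywords  = _keywordTool.RunAsync(input, token);
    Task<string> summary   = _summaryTool.RunAsync(input, token);

    await Task.WhenAll(sentiment, keywords, summary);

    // Fan-in: all tasks have completed, so reading .Result does not block.
    return $"Sentiment: {sentiment.Result}; Keywords: {keywords.Result}; Summary: {summary.Result}";
}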
Implementing Tool Chains in .NET
Modern .NET applications thrive on dependency injection (DI). As we discussed in previous posts, Microsoft.Extensions.AI is designed from the ground up to support a DI-friendly architecture for AI features. With DI, your tools and orchestrators become cleanly swappable, testable, and maintainable components.
Building the Chain
Let’s start by defining a common interface that all tools in the chain will implement. This interface should have a method for executing the tool’s logic and returning a result.
C#
// Every tool in the chain implements this single execution contract.
public interface IAIWorkflowStep
{
    Task<AIWorkflowResult> ExecuteAsync(AIWorkflowContext context, CancellationToken token = default);
}

// Shared context that flows through the chain; each step reads from and writes to it.
public class AIWorkflowContext
{
    public string Input { get; set; } = string.Empty;
    public string? Intent { get; set; }
    public string? Location { get; set; }
    public object? WeatherForecast { get; set; }
    public IDictionary<string, object?> Metadata { get; } = new Dictionary<string, object?>();
    public DateTimeOffset CreatedAt { get; } = DateTimeOffset.UtcNow;
}
Implement Individual Tools as Services
For example, a classifier:
C#
public class IntentClassifierStep : IAIWorkflowStep
{
    private readonly IChatClient _chatClient;

    public IntentClassifierStep(IChatClient chatClient)
    {
        _chatClient = chatClient;
    }

    public async Task<AIWorkflowResult> ExecuteAsync(
        AIWorkflowContext context,
        CancellationToken token = default)
    {
        // Ask the model to classify the input; the system prompt keeps the reply machine-readable.
        List<ChatMessage> chatMessages =
        [
            new(ChatRole.System, "Classify the user's intent. Reply with a single intent name, e.g. GetWeather."),
            new(ChatRole.User, context.Input)
        ];
        var response = await _chatClient.GetResponseAsync(chatMessages, cancellationToken: token);
        context.Intent = response.Text.Trim();
        return AIWorkflowResult.Success(context);
    }
}
And a weather lookup tool:
C#
public class WeatherLookupStep : IAIWorkflowStep
{
    private readonly IWeatherApi _weatherApi;

    public WeatherLookupStep(IWeatherApi weatherApi)
    {
        _weatherApi = weatherApi;
    }

    public async Task<AIWorkflowResult> ExecuteAsync(
        AIWorkflowContext context,
        CancellationToken token = default)
    {
        // Only run when the classifier decided this is a weather question.
        if (context.Intent != "GetWeather") return AIWorkflowResult.Skipped(context);
        if (string.IsNullOrWhiteSpace(context.Location))
            return AIWorkflowResult.Failure("No location available for weather lookup.");

        context.WeatherForecast = await _weatherApi.GetForecastAsync(context.Location, token);
        return AIWorkflowResult.Success(context);
    }
}
Compose the Chain in DI Configuration
You can build a list of IAIWorkflowStep implementations, or, for more advanced scenarios, use a pipeline builder:
C#
services.AddTransient<IAIWorkflowStep, IntentClassifierStep>();
services.AddTransient<IAIWorkflowStep, WeatherLookupStep>();
services.AddTransient<IAIWorkflowStep, ResponseGeneratorStep>();
Or, for true “Chain of Responsibility” style, have each step reference the next (constructor injection or factory):
C#
// pseudo-code: each step receives a reference to the next step in the chain
services.AddTransient<IIntentClassifier, IntentClassifierStep>();
services.AddTransient<IWeatherLookup>(sp =>
{
    var next = sp.GetRequiredService<IResponseGenerator>();
    return new WeatherLookupStep(next);
});
services.AddTransient<IResponseGenerator, ResponseGeneratorStep>();
This structure matches the advanced DI-chain patterns seen in production .NET systems.
Orchestrating the Execution
At runtime, either iterate through the list of steps or recursively call the next step.
C#
public class ToolChainOrchestrator
{
private readonly IEnumerable<IAIWorkflowStep> _steps;
public ToolChainOrchestrator(IEnumerable<IAIWorkflowStep> steps)
{
_steps = steps;
}
public async Task<AIWorkflowResult> ExecuteAsync(string input, CancellationToken token = default)
{
var context = new AIWorkflowContext { Input = input };
foreach(var step in _steps)
{
var result = await step.ExecuteAsync(context, token);
if (result.IsFailure)
{
// Handle failure (log, abort, etc.)
return result;
}
}
return AIWorkflowResult.Success(context);
}
}
Tool Registration Patterns
- Explicit ordering: Ensures pipeline steps run in the correct sequence (use IEnumerable<> registration order).
- Conditional execution: Allow steps to skip themselves using flags in the context, or based on prior results.
- Extensibility: Add new steps or tools without refactoring the pipeline or orchestrator.
Tool Calling and Function Invocation with Microsoft.Extensions.AI
The newest versions of Microsoft.Extensions.AI (ME.AI) provide first-class abstractions like IChatClient and the ability to wire function/tool calling directly into your pipelines, supporting LLMs from OpenAI, Azure, Ollama, and custom plugins.
How Function Calling Works
The mechanism centers on models and frameworks that support function calling: structured invocations where an LLM is told it can select and call a function, passing JSON arguments described via JSON Schema. The Microsoft.Extensions.AI wiring allows both the LLM and your server to negotiate which tool should be invoked and when.
Example: Registering Tool Functions
C#
// Sketch of the ME.AI function-calling pattern; innerClient and weatherApiClient are assumed objects.
var client = innerClient.AsBuilder()
    .UseFunctionInvocation() // middleware that executes requested tool calls automatically
    .Build();

var options = new ChatOptions
{
    Tools = [AIFunctionFactory.Create(
        (string location) => weatherApiClient.GetForecastAsync(location),
        "GetWeather",
        "Gets the weather forecast for a location.")]
};
- Tools are described with a signature and argument schema; AIFunctionFactory derives both from the delegate.
- The LLM chooses when to call a tool, and Microsoft.Extensions.AI orchestrates invocation and result passing.
Tool Invocation Orchestration
This mechanism naturally extends to multiple tools, with ME.AI acting as both protocol and execution layer.
ℹ️ Models must support function calling, and your orchestration must register tool/function schemas for the LLM to invoke.
Error Handling Strategies Across Chained Tools
Multi-tool workflows introduce new failure modes, so a robust AI-enabled application must coordinate error handling, graceful degradation, retries, and clear logging across tool boundaries.
Common Error Handling Patterns
- Try/Catch in Every Tool Step: Each workflow step should be prepared to catch and log its own exceptions, returning error context in the result.
C#
public async Task<AIWorkflowResult> ExecuteAsync(
    AIWorkflowContext context,
    CancellationToken token = default)
{
    try
    {
        // Tool logic goes here
        return AIWorkflowResult.Success(context);
    }
    catch (Exception ex)
    {
        // Optionally, log at step level
        return AIWorkflowResult.Failure("Weather lookup failed", ex);
    }
}
- Pipeline-Level Error Interception: The orchestrator can wrap the entire pipeline with an error boundary (try/catch), ensuring that unhandled exceptions bubble up in a controlled and observable manner.
- Middleware Pattern for Cross-Cutting Concerns: Microsoft.Extensions.AI allows for UseXxx() middleware additions on your ChatClientBuilder, similar to ASP.NET Core's HTTP pipeline, e.g. for telemetry, logging, or custom concerns such as retries and timeouts:
C#
builder.UseOpenTelemetry(loggerFactory)
       .UseLogging(loggerFactory);
// Retries and timeouts are not built in; layer them in with a custom Use(...) delegate
// or a resilience handler on the underlying HTTP client.
- Unified Error/Result Object: Define a common "Result" record or class that carries either value or error details, instead of raw exceptions.
C#
public record AIWorkflowResult(
    bool IsSuccess,
    string? FailureReason = null,
    Exception? Exception = null,
    AIWorkflowContext? Context = null)
{
    public bool IsFailure => !IsSuccess;

    // Factory helpers used by the steps throughout this post.
    public static AIWorkflowResult Success(AIWorkflowContext context) => new(true, Context: context);
    public static AIWorkflowResult Skipped(AIWorkflowContext context) => new(true, Context: context);
    public static AIWorkflowResult Failure(string reason, Exception? ex = null) => new(false, reason, ex);
}
Best Practices for Error Handling in Tool Chains
- Fail Fast and Isolate: If a critical step fails, return promptly and don’t proceed to dependent steps.
- Surface Contextual Error Messages: Distinguish between user errors (bad input) and system errors (time-outs, network failures).
- Centralize Logging and Telemetry: Prefer structured logs with clear step boundaries and correlation IDs.
- Fallback Paths: For some steps, return a default value or send an apology message instead of failing completely (see the sketch after this list).
- Do not silently swallow errors: Partial failures should be traceable and observable, especially in complex orchestration scenarios.
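For example, a minimal fallback sketch reusing the weather step from earlier (the fallback message is an illustrative assumption) can degrade gracefully while keeping the failure observable:
C#
public async Task<AIWorkflowResult> ExecuteAsync(AIWorkflowContext context, CancellationToken token = default)
{
    try
    {
        context.WeatherForecast = await _weatherApi.GetForecastAsync(context.Location!, token);
    }
    catch (Exception ex)
    {
        // Log the failure so it stays observable, then continue with a safe default.
        _logger.LogWarning(ex, "Weather lookup failed; using fallback message.");
        context.WeatherForecast = "The forecast is temporarily unavailable.";
    }
    return AIWorkflowResult.Success(context);
}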
Debugging Multi-Tool Workflows
Debugging multi-tool workflows is a challenge: errors can propagate, data can transform across steps, and failures may occur in "foreign" code (e.g., LLM output, external API responses). You can streamline this process by leveraging the .NET ecosystem and AI-powered tooling.
End-to-End Logging and Correlation
Instrument your pipeline to capture input, output, and exceptions at each tool boundary. Use correlation IDs, structured logs (ILogger<T>), and telemetry to make traces easy to filter and analyze.
C#
public class LoggingAIWorkflowStep : IAIWorkflowStep
{
    private readonly IAIWorkflowStep _inner;
    private readonly ILogger<LoggingAIWorkflowStep> _logger;

    public LoggingAIWorkflowStep(IAIWorkflowStep inner, ILogger<LoggingAIWorkflowStep> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task<AIWorkflowResult> ExecuteAsync(AIWorkflowContext ctx, CancellationToken token = default)
    {
        _logger.LogDebug("Executing {Step} with context: {Context}", _inner.GetType().Name, ctx);
        var result = await _inner.ExecuteAsync(ctx, token);
        if (result.IsFailure)
        {
            _logger.LogError(
                "Step {Step} failed: {Reason}",
                _inner.GetType().Name, result.FailureReason);
        }
        return result;
    }
}
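To apply the decorator, one option (a sketch of the composition root, not the only approach) is to wrap each concrete step at registration time:
C#
// Wrap a concrete step with the logging decorator when registering it.
services.AddTransient<IAIWorkflowStep>(sp => new LoggingAIWorkflowStep(
    new IntentClassifierStep(sp.GetRequiredService<IChatClient>()),
    sp.GetRequiredService<ILogger<LoggingAIWorkflowStep>>()));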
Stepwise Pipeline Testing
Write unit tests and integration tests for each tool separately. Then, add scenario tests for common input flows. Use mocks/stubs for slow or external dependencies.
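A sketch of what a step-level unit test can look like (xUnit style; FakeChatClient is a hypothetical hand-rolled IChatClient stub that always returns a canned response):
C#
[Fact]
public async Task IntentClassifier_SetsIntentFromModelResponse()
{
    // FakeChatClient (hypothetical) implements IChatClient and always replies with the given text.
    var step = new IntentClassifierStep(new FakeChatClient(" GetWeather \n"));
    var context = new AIWorkflowContext { Input = "What's the weather in Oslo?" };

    var result = await step.ExecuteAsync(context);

    Assert.True(result.IsSuccess);
    Assert.Equal("GetWeather", context.Intent);
}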
Common Pitfalls
Not all tool chains are created equal. AI-powered workflows, particularly when utilizing LLMs, introduce new possibilities for subtle bugs and architectural vulnerabilities.
- Treating Probabilistic LLMs as Deterministic Engines: LLMs generate plausible, not guaranteed, results. Don't use them for tasks requiring strict correctness (e.g., math, database queries) unless the output is verified by another step.
- Unbounded Sequence Depth: Overly long or deeply nested tool chains are hard to debug, maintain, and optimize. Keep chains focused, and favor composition over nesting.
- Ignoring Error Handling at Boundaries: Every external call (to an LLM, API, or DB) is a potential failure point.
- Overfitting to a Specific Model or Provider: Hardwiring tools tightly to OpenAI/GPT, for example, creates vendor lock-in. Favor abstractions such as Microsoft.Extensions.AI and dependency injection for portable, flexible code.
- Measuring Success with Technology Metrics, Not Business Outcomes: The ultimate metric is improved business value, not the number of chained tools or response speed.
Anti-Patterns to Watch For
- All-in-One LLM: Overloading a single model with all instructions and tools (a monolith). Prefer clear separation of concerns.
- Precision Anti-Pattern: Expecting LLM-based steps to provide mathematically precise answers on every run; instead, layer in validation and post-processing as needed (a small validation sketch follows this list).
- Overuse of Side Effects: Tool steps should avoid mutating shared state unexpectedly; rely on explicit context passing and immutability where possible.
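As a small guardrail sketch against the precision anti-pattern (the intent whitelist is an illustrative assumption), never route on free-form LLM output without normalizing it first:
C#
// Validate a probabilistic classifier's output against a known set before routing on it.
private static readonly HashSet<string> KnownIntents =
    new(StringComparer.OrdinalIgnoreCase) { "GetWeather", "SmallTalk" };

public static string NormalizeIntent(string? raw)
{
    var candidate = raw?.Trim() ?? string.Empty;
    return KnownIntents.Contains(candidate) ? candidate : "Unknown";
}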
Conclusion
Orchestrating multi-tool workflows is essential for creating robust, scalable, and maintainable AI-powered .NET applications. By adopting the right mindset and focusing on modular pipelines, effective dependency injection, clear error boundaries, and comprehensive diagnostics, you can transform complex business challenges into manageable and testable code structures.
Microsoft.Extensions.AI offers foundational abstractions and patterns for these workflows, and modern .NET practices, such as middleware, dependency injection, and logging, keep your code testable and adaptable. Avoid common pitfalls, such as assuming that large language models (LLMs) are infallible or building monolithic structures; instead, build workflows that can evolve alongside AI tools and requirements.
As a next step, select a workflow relevant to your domain. Break it down into distinct tool stages. Connect these stages using dependency injection. Register the appropriate function calling schemas and implement end-to-end logging and visualization. Remember to iterate and refactor. Your AI features and your users will appreciate it.