I had a .NET library that wrapped Azure OpenAI and Azure AI Search into
clean domain services for travel applications. It worked, but it was a
monolith - one process doing everything synchronously. When the AI call
took 30 seconds, the HTTP request sat open for 30 seconds.
This is how I fixed that.
## The problem with synchronous AI calls
The original architecture was simple:
```
POST /api/itinerary/generate
  → calls Azure OpenAI GPT-4o directly
  → waits 10-30 seconds
  → returns itinerary
```
That works for a demo. It does not work in production. One slow AI call
blocks a thread. Under load, you run out of threads. The service falls over.
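To make the problem concrete, the original endpoint would have looked something like this. This is a sketch reconstructed from the description, not the actual code; `ItineraryRequest` and the `GenerateAsync` parameters are assumptions:

```csharp
// Hypothetical sketch of the original synchronous endpoint.
app.MapPost("/api/itinerary/generate",
    async (ItineraryRequest request,
           IItineraryGenerationService aiService,
           CancellationToken ct) =>
{
    // The HTTP request stays open for 10-30 seconds while GPT-4o responds.
    var itinerary = await aiService.GenerateAsync(
        request.TravellerName, request.Destination,
        request.Departure, request.ReturnDate,
        request.AdditionalInstructions, ct);

    return Results.Ok(itinerary);
});
```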
## The solution - async messaging with RabbitMQ
I split the system into three independent services:
- **TravelAI.Api** - HTTP gateway, publishes messages
- **TravelAI.SearchWorker** - consumes search requests
- **TravelAI.AiWorker** - consumes AI generation requests
Now the API publishes a message to RabbitMQ and returns immediately. The HTTP response comes back in under 100ms, and the AI work happens independently. If the AI service is slow, requests queue up in RabbitMQ rather than blocking threads in the API.
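A sketch of what the new gateway endpoint might look like (the route and `ItineraryRequest` type are my assumptions; `IPublishEndpoint` is MassTransit's standard publish abstraction):

```csharp
app.MapPost("/api/itinerary/generate",
    async (ItineraryRequest request, IPublishEndpoint publisher) =>
{
    var correlationId = Guid.NewGuid();

    // Publishing to RabbitMQ takes milliseconds; no waiting on the AI call.
    await publisher.Publish(new ItineraryRequested(
        correlationId,
        request.TravellerName, request.TravellerEmail,
        request.Destination, request.Departure, request.ReturnDate,
        request.AdditionalInstructions));

    // 202 Accepted with a status URL the client can poll.
    return Results.Accepted($"/api/itinerary/{correlationId}",
        new { correlationId });
});
```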
## The message contracts
I defined the messages as simple C# records in the core library so all
three services share the same types:
```csharp
public record ItineraryRequested(
    Guid CorrelationId,
    string TravellerName,
    string TravellerEmail,
    string Destination,
    DateOnly Departure,
    DateOnly ReturnDate,
    string? AdditionalInstructions);

public record ItineraryGenerated(
    Guid CorrelationId,
    object Itinerary,
    bool Success,
    string? ErrorMessage);
```
Using records keeps contracts immutable. The correlation ID lets you trace
a request across all three services even in distributed logs.
## MassTransit for the messaging layer
Rather than talking to RabbitMQ directly, I used MassTransit as an
abstraction. This means I can swap RabbitMQ for Azure Service Bus later
without touching the consumer code.
A consumer looks like this:
```csharp
public class ItineraryRequestedConsumer(
    IItineraryGenerationService aiService,
    ILogger<ItineraryRequestedConsumer> logger)
    : IConsumer<ItineraryRequested>
{
    public async Task Consume(ConsumeContext<ItineraryRequested> context)
    {
        var msg = context.Message;

        // Rebuild the domain traveller from the message fields
        // (Traveller stands in for the library's domain type here).
        var traveller = new Traveller(msg.TravellerName, msg.TravellerEmail);

        var itinerary = await aiService.GenerateAsync(
            traveller, msg.Destination,
            msg.Departure, msg.ReturnDate,
            msg.AdditionalInstructions,
            context.CancellationToken);

        await context.Publish(new ItineraryGenerated(
            msg.CorrelationId, itinerary, Success: true, ErrorMessage: null));
    }
}
```
MassTransit handles retries, error queues, and fault handling for you. With a retry policy configured, a failed Azure OpenAI call is retried with exponential backoff before the message is moved to the error queue.
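The registration that wires all this up looks roughly like this (host credentials and retry values are illustrative, not the project's actual settings):

```csharp
builder.Services.AddMassTransit(x =>
{
    x.AddConsumer<ItineraryRequestedConsumer>();

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq", "/", h =>
        {
            h.Username("guest");
            h.Password("guest");
        });

        // Retry up to 5 times with exponential backoff before the
        // message lands in the _error queue.
        cfg.UseMessageRetry(r => r.Exponential(
            5,
            TimeSpan.FromSeconds(1),
            TimeSpan.FromSeconds(30),
            TimeSpan.FromSeconds(5)));

        cfg.ConfigureEndpoints(context);
    });
});
```

`ConfigureEndpoints` creates a receive endpoint per registered consumer with conventional queue names, which is why the consumer code never mentions a queue directly.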
## Structured logging with Serilog
Every service logs in structured JSON using Serilog. The correlation ID
flows through every log entry so you can trace a single request across
all three services in a centralised log system.
```csharp
logger.LogInformation(
    "Generating itinerary {CorrelationId} for {Traveller} to {Destination}",
    msg.CorrelationId, msg.TravellerName, msg.Destination);
```
In production you would ship these logs to Seq, Azure Log Analytics, or
Datadog. Locally they stream to the console in readable format.
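The Serilog wiring is roughly the following (the exact sinks are an assumption; `UseSerilog` comes from the Serilog.AspNetCore package):

```csharp
builder.Host.UseSerilog((context, config) => config
    .ReadFrom.Configuration(context.Configuration)
    .Enrich.FromLogContext()       // carries the correlation ID into each entry
    .WriteTo.Console());           // swap for Seq/Log Analytics in production
```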
## OpenTelemetry distributed tracing
The API is instrumented with OpenTelemetry so every request generates a
trace that shows exactly where time was spent:
```csharp
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService("TravelAI.Api"))
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter());
```
In production you would export traces to Jaeger, Zipkin, or Azure Monitor
instead of the console exporter.
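Swapping exporters is a small change; for example, shipping traces over OTLP (which Jaeger and most backends accept on port 4317 - the endpoint here is an assumption):

```csharp
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService("TravelAI.Api"))
        .AddAspNetCoreInstrumentation()
        // OTLP instead of the console exporter.
        .AddOtlpExporter(otlp =>
            otlp.Endpoint = new Uri("http://localhost:4317")));
```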
## Docker Compose for local development
The whole system starts with one command:
```bash
docker-compose up --build
```
The compose file uses health checks to ensure RabbitMQ is ready before
the services start connecting to it - a detail that matters because
services connecting to a broker that is still starting up will crash and
require manual restarts.
```yaml
rabbitmq:
  healthcheck:
    test: ["CMD", "rabbitmq-diagnostics", "ping"]
    interval: 10s
    retries: 5

travelai-api:
  depends_on:
    rabbitmq:
      condition: service_healthy
```
## What I learned
**Async messaging changes how you think about APIs.** Instead of returning data, you return a correlation ID and a status URL. The client polls or subscribes for the result. This feels strange at first, but it is the right model for any operation that takes more than a second.
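The polling side can be sketched like this, using a hypothetical `IItineraryStore` that the `ItineraryGenerated` consumer writes results into (the store and route are illustrations, not the project's actual API):

```csharp
app.MapGet("/api/itinerary/{correlationId:guid}",
    (Guid correlationId, IItineraryStore store) =>
{
    // IItineraryStore is a hypothetical abstraction: the consumer of
    // ItineraryGenerated saves results here, keyed by correlation ID.
    return store.TryGet(correlationId, out var itinerary)
        ? Results.Ok(new { status = "completed", itinerary })
        : Results.Ok(new { status = "pending" });
});
```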
**MassTransit v9 requires a commercial license.** I hit this when upgrading - v8 is fully open source and supports RabbitMQ without any license. Worth knowing before you add it to a project.
**Solution build ordering matters in .NET.** The solution file needs explicit Build.0 entries for each project in each configuration. When these are missing, dependent projects compile before their references are built, and you get confusing type-not-found errors even though the project reference is correctly set.
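For reference, each project needs both an ActiveCfg and a Build.0 line per configuration in the .sln file; the GUID below is a placeholder for the project's GUID:

```
{PROJECT-GUID}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{PROJECT-GUID}.Debug|Any CPU.Build.0 = Debug|Any CPU
{PROJECT-GUID}.Release|Any CPU.ActiveCfg = Release|Any CPU
{PROJECT-GUID}.Release|Any CPU.Build.0 = Release|Any CPU
```

The ActiveCfg line alone maps the configuration; without the Build.0 line the project is silently excluded from the build.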
## The stack
| Layer | Technology |
|---|---|
| Framework | .NET 10 |
| AI | Azure OpenAI (GPT-4o) |
| Search | Azure AI Search |
| Messaging | RabbitMQ via MassTransit 8 |
| Logging | Serilog |
| Tracing | OpenTelemetry |
Source code: https://github.com/aftabkh4n/TravelAI.Core
If you found this useful or have questions, drop a comment below.