Most order processing systems I have worked with are synchronous. The API receives the request, does the work, and returns the result. That works fine until it does not. The database is slow, a third-party service is down, or you have 500 orders arriving at the same time. Everything backs up and the API starts timing out.
This project is the async version of that. Orders come in through a REST API, get saved to PostgreSQL, and get published to Kafka. A separate consumer picks them up, processes them, and publishes a fulfilment event to Azure Service Bus. The API never waits for any of that.
The architecture
There are three projects in the solution.
OrderPipeline.Api is an ASP.NET Core 10 API. It does two things when an order arrives: saves it to PostgreSQL and publishes an event to Kafka. That is it. The processing happens somewhere else.
OrderPipeline.Consumer is a .NET Worker Service. It runs continuously, reading from the Kafka orders topic. When it picks up an order event, it updates the status to Processing, does the fulfilment work, marks it as Fulfilled in PostgreSQL, and publishes a fulfilment event to Azure Service Bus.
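The consume loop itself boils down to a few lines. A minimal sketch of what that looks like inside the worker's `ExecuteAsync`, assuming a hypothetical `ProcessOrderAsync` helper that wraps the status updates and the Service Bus publish (the helper name is mine, not from the repo):

```csharp
// Sketch only: _consumer is an IConsumer<string, string> built from the
// ConsumerConfig shown later; ProcessOrderAsync is a hypothetical helper.
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    _consumer.Subscribe("orders");

    while (!stoppingToken.IsCancellationRequested)
    {
        // Consume blocks the calling thread; a production worker would
        // typically run this loop via Task.Run or a dedicated thread.
        var result = _consumer.Consume(stoppingToken);

        var orderEvent = JsonSerializer.Deserialize<OrderEvent>(result.Message.Value);
        await ProcessOrderAsync(orderEvent, stoppingToken);

        // Commit only after the work succeeded (EnableAutoCommit = false)
        _consumer.Commit(result);
    }
}
```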
OrderPipeline.Core holds the shared models. Order, OrderItem, OrderEvent, and the interfaces for the repository and publisher.
The order flow
POST an order to the API:
curl -X POST http://localhost:5125/api/orders \
  -H "Content-Type: application/json" \
  -d "{\"customerName\": \"John Smith\", \"customerEmail\": \"john@example.com\", \"items\": [{\"productName\": \"Laptop\", \"quantity\": 1, \"unitPrice\": 999.99}]}"
The API responds immediately with a 201. In the background:
Order e6e20d66 created in database
Order e6e20d66 published to Kafka topic orders at offset 1
Then the consumer picks it up:
Consumed message from partition [0] at offset 1
Processing order e6e20d66 for customer John Smith
Fulfilment event published to Service Bus for order e6e20d66
Order e6e20d66 fulfilled successfully
The client does not wait for any of that. It can poll GET /api/orders/{id} to check the status whenever it wants.
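Polling is just a GET against the same resource:

```shell
# Replace {id} with the order ID returned in the 201 response
curl http://localhost:5125/api/orders/{id}
```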
Publishing to Kafka
The publisher uses the Confluent.Kafka producer. The order ID is the message key, which ensures all events for the same order land on the same partition.
var message = new Message<string, string>
{
    Key = order.Id.ToString(),
    Value = JsonSerializer.Serialize(orderEvent)
};

var result = await _kafkaProducer.ProduceAsync(_kafkaTopic, message);

_logger.LogInformation(
    "Order {OrderId} published to Kafka topic {Topic} at offset {Offset}",
    order.Id, _kafkaTopic, result.Offset);
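The snippet above assumes an already-built producer. Wiring one up is a few lines; a sketch, with the configuration key mirroring the one the consumer reads:

```csharp
// Sketch: in a real API this would be registered as a singleton in DI,
// since a Confluent.Kafka producer is designed to be long-lived.
var producerConfig = new ProducerConfig
{
    BootstrapServers = configuration["Kafka:BootstrapServers"],
    Acks = Acks.All // wait for acknowledgement before ProduceAsync completes
};

using var producer = new ProducerBuilder<string, string>(producerConfig).Build();
```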
Consuming from Kafka
The consumer uses AutoOffsetReset.Earliest so it picks up messages from the beginning of the topic on first start. Manual commit means a message only gets marked as processed after the work is done, not when it is received.
var config = new ConsumerConfig
{
    BootstrapServers = _configuration["Kafka:BootstrapServers"],
    GroupId = "order-pipeline-consumer",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnableAutoCommit = false
};
If the consumer crashes mid-processing, the offset was never committed, so it re-reads the message from the last committed offset when it restarts. An order might be processed twice, but it is never lost: at-least-once delivery rather than at-most-once.
Publishing to Azure Service Bus
Once an order is fulfilled, the consumer publishes a fulfilment event to a Service Bus queue. Downstream systems subscribe to that queue to handle shipping, notifications, invoicing, or whatever comes next.
var message = new ServiceBusMessage(JsonSerializer.Serialize(orderEvent))
{
    MessageId = orderEvent.EventId.ToString(),
    Subject = "OrderFulfilled",
    ContentType = "application/json"
};
await _sender.SendMessageAsync(message);
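The downstream side is out of scope for this solution, but a subscriber is not much code. A sketch using `ServiceBusProcessor`; the queue name is an assumption, not taken from the repo:

```csharp
// Hypothetical downstream consumer; "order-fulfilments" is an assumed queue name.
await using var client = new ServiceBusClient(connectionString);
await using var processor = client.CreateProcessor("order-fulfilments");

processor.ProcessMessageAsync += async args =>
{
    var orderEvent = JsonSerializer.Deserialize<OrderEvent>(args.Message.Body.ToString());
    // shipping, notifications, invoicing...
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    // Log and carry on; the processor keeps running
    Console.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
```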
For local development, this uses the official Microsoft Azure Service Bus emulator running in Docker. The connection string just needs UseDevelopmentEmulator=true and it works identically to the real service.
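For reference, the emulator connection string looks like this. The `ServiceBus:ConnectionString` key is my assumption about where it lives in appsettings; the SAS values are the emulator's well-known defaults, not secrets:

```json
{
  "ServiceBus": {
    "ConnectionString": "Endpoint=sb://localhost;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;"
  }
}
```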
The database
The Consumer and API share the same PostgreSQL database, but each accesses it independently through EF Core. The API writes new orders; the Consumer reads and updates them. There is no coupling between the two services beyond the database schema itself.
order.Status = OrderStatus.Processing;
await db.SaveChangesAsync();
// do the fulfilment work
order.Status = OrderStatus.Fulfilled;
order.ProcessedAt = DateTime.UtcNow;
await db.SaveChangesAsync();
Running it locally
Everything runs with Docker Compose. One command starts PostgreSQL, Kafka, Zookeeper, Kafka UI, and the Service Bus emulator.
git clone https://github.com/aftabkh4n/order-pipeline.git
cd order-pipeline
docker-compose up -d
Then start the API and Consumer in separate terminals and send a test order. The Kafka UI at http://localhost:8080 lets you watch messages flow through the topic in real time.
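Assuming the two projects sit under their project names at the repository root, starting them looks like:

```shell
# Terminal 1
dotnet run --project OrderPipeline.Api

# Terminal 2
dotnet run --project OrderPipeline.Consumer
```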
What I learned
Kafka startup takes time. On the first request after starting the containers, the producer waits while Kafka finishes initialising. Subsequent requests are fast. Worth knowing when you are testing and wondering why the first call takes 60 seconds.
Manual offset commit is important. With auto-commit enabled, Kafka commits offsets on a timer shortly after messages are received, regardless of whether they have been processed. If the consumer crashes before finishing the work, the committed offset has already moved past that message, so it is silently skipped on restart. Manual commit means you commit only after the work succeeds.
The Service Bus emulator is genuinely useful. I expected it to be a rough approximation but it behaves exactly like the real service for basic queue operations. No need to touch a real Azure subscription during development.
Source code: https://github.com/aftabkh4n/order-pipeline
If you have questions or run into issues getting it running, drop a comment below.
Top comments (1)
This is a clean, textbook async pipeline — separating ingestion from processing is exactly how you stop APIs from falling over under load. One thing I'd watch: you now have two sources of truth in flight, PostgreSQL and Kafka. If the API crashes after writing to Postgres but before publishing, you've got an order that never gets processed. Have you considered using an outbox pattern or transactional writes to close that gap?