Building a Production-Grade Microservices E-Commerce Platform with .NET

A deep dive into Clean Architecture, Vertical Slices, YARP, and Event-Driven Design.

Microservices are challenging. They promise scalability and agility, but often deliver complexity and distributed headaches. Over the past few months, I’ve been building an e-commerce platform, a comprehensive reference architecture, to demonstrate how to tame this complexity using modern .NET technologies.

In this post, I’ll walk you through the architecture, the design choices, and the code that powers this system.

The High-Level Architecture

The platform isn’t just a “Hello World” demo; it’s designed to mimic real-world requirements. It is composed of 9 independent microservices (Catalog, Basket, Order, Inventory, etc.) and utilizes a Polyglot Persistence strategy — meaning I use the right database for the right job (PostgreSQL, MongoDB, SQL Server, Elasticsearch, and Redis).

Architecture Diagram

Core technologies include:

  • .NET 8 & Minimal APIs for high-performance services.
  • YARP (Yet Another Reverse Proxy) as the sophisticated API Gateway.
  • RabbitMQ & MassTransit for robust asynchronous messaging.
  • gRPC for low-latency inter-service communication.
  • IdentityServer/Keycloak for centralized authentication.

Vertical Slice Architecture: The “Secret Sauce”

One of the biggest pitfalls in .NET development is over-engineered layering (Controller -> Service -> Manager -> Repository -> DAO…). It scatters logic across 5 different files for a single feature.

For this project, I adopted Vertical Slice Architecture (VSA). Instead of organizing by technical layers, I organize by Features.

A “Feature” contains everything needed to execute a specific business request: the API endpoint, the request/response DTOs, the validation logic, and the handler.

Vertical Slice Architecture

Code Spotlight: Creating a Product

Here is what the CreateProduct feature looks like in the Catalog Service. Notice how the Command, Validator, and Handler live together. The request logic is cohesive, not scattered.

public record CreateProductCommand(CreateProductDto Dto, Actor Actor) : ICommand<Guid>;

public class CreateProductCommandValidator : AbstractValidator<CreateProductCommand>
{
    public CreateProductCommandValidator()
    {
        RuleFor(x => x.Dto).NotNull();
        RuleFor(x => x.Dto.Name).NotEmpty().WithMessage(MessageCode.ProductNameIsRequired);
        RuleFor(x => x.Dto.Price).GreaterThan(1);
    }
}

public class CreateProductCommandHandler(
    IMapper mapper,
    IDocumentSession session,
    IMinIOCloudService minIO,
    ISender sender) : ICommandHandler<CreateProductCommand, Guid>
{
    public async Task<Guid> Handle(CreateProductCommand command, CancellationToken cancellationToken)
    {
        var dto = command.Dto;
        await session.BeginTransactionAsync(cancellationToken);

        // Domain Logic: Create Entity
        var entity = ProductEntity.Create(
            id: Guid.NewGuid(),
            name: dto.Name!,
            sku: dto.Sku!,
            // ... (other properties)
            price: dto.Price,
            performedBy: command.Actor.ToString());

        // External Infrastructure: Upload Images (MinIO)
        await UploadImagesAsync(dto.UploadImages, entity, cancellationToken);

        // Persistence: Store in Marten (PostgreSQL JSON)
        session.Store(entity);
        await session.SaveChangesAsync(cancellationToken);

        // Event: Publish domain event if needed
        if (entity.Published)
        {
            await sender.Send(new PublishProductCommand(entity.Id, command.Actor), cancellationToken);
        }

        return entity.Id;
    }
}
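
The endpoint itself is the final piece of the slice. Below is a minimal sketch of how the command could be exposed as a Minimal API endpoint and dispatched through MediatR's ISender; the route, the Actor.FromClaims helper, and the response shape are illustrative assumptions, not the repository's exact code.

// Hedged sketch of the endpoint half of the CreateProduct slice.
// Actor.FromClaims is a hypothetical helper; the repo may build the Actor differently.
app.MapPost("/products", async (
        CreateProductDto dto,
        ClaimsPrincipal user,
        ISender sender,
        CancellationToken ct) =>
    {
        var actor = Actor.FromClaims(user); // hypothetical helper
        var id = await sender.Send(new CreateProductCommand(dto, actor), ct);
        return Results.Created($"/products/{id}", new { id });
    })
    .RequireAuthorization();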

This approach makes code navigation instant. If there’s a bug in “Create Product”, you know exactly where to look.

The Gateway: YARP

To expose these microservices to the frontend (React), I didn’t want to expose 9 different ports. I used YARP (Yet Another Reverse Proxy) to create a unified API Gateway.

It handles routing, load balancing, and can even terminate authentication at the edge.

"ReverseProxy": {
  "Routes": {
    "catalog-route": {
      "ClusterId": "catalog-cluster",
      "Match": {
        "Path": "/catalog-service/{catch-all}"
      },
      "Transforms": [ { "PathPattern": "{catch-all}" } ]
    },
    // ... other routes for Basket, Order, etc.
  },
  "Clusters": {
    "catalog-cluster": {
      "Destinations": {
        "destination1": {
          "Address": "http://catalog-api:8080" // Internal Docker DNS
        }
      }
    }
  }
}
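
Wiring this up in the gateway's Program.cs takes only a few lines. A minimal sketch, assuming the routes and clusters live under the "ReverseProxy" configuration section shown above:

// Gateway Program.cs sketch: load routes/clusters from configuration
// and map the YARP proxy pipeline.
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

app.MapReverseProxy();

app.Run();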

Asynchronous Communication

Services interact in two ways:

  1. Synchronous (gRPC): For real-time data needs (e.g., Aggregator requests); a client registration sketch follows below.
  2. Asynchronous (Messaging): For side effects (e.g., “OrderPlaced” -> “SendEmail”).
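
For the synchronous path, gRPC clients are registered through the client factory so that channel management comes for free. A minimal sketch, assuming a generated InventoryGrpc.InventoryGrpcClient from a hypothetical inventory.proto (the actual proto and client names in the repo may differ):

// Hedged sketch: typed gRPC client for synchronous calls
// (e.g., an aggregator asking the Inventory service for stock on hand).
// InventoryGrpc.InventoryGrpcClient is a hypothetical generated client;
// the address uses internal Docker DNS, like the YARP cluster addresses.
builder.Services.AddGrpcClient<InventoryGrpc.InventoryGrpcClient>(options =>
{
    options.Address = new Uri("http://inventory-api:8080");
});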

I use RabbitMQ with MassTransit. The Outbox Pattern is implemented to ensure that a database transaction and a message publication happen atomically — no more “zombie” data if the message broker is down!
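
Here is what that wiring can look like with MassTransit's transactional outbox. This is a minimal sketch, assuming the Order service persists through EF Core on SQL Server; OrderDbContext is an illustrative name, not necessarily the repository's type.

// MassTransit transactional outbox sketch (EF Core + SQL Server assumed).
builder.Services.AddMassTransit(x =>
{
    x.AddConsumers(typeof(Program).Assembly);

    // Published messages are written to outbox tables inside the same
    // DB transaction as the business data; a background delivery service
    // relays them to RabbitMQ, so a broker outage cannot create "zombie" data.
    x.AddEntityFrameworkOutbox<OrderDbContext>(o =>
    {
        o.UseSqlServer();
        o.UseBusOutbox();
    });

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq", "/", h =>
        {
            h.Username("guest");
            h.Password("guest");
        });
        cfg.ConfigureEndpoints(context);
    });
});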

Intelligent CI/CD with GitHub Actions

In a microservices repo with 9+ services, you don’t want to rebuild everything when you only change one line in the Catalog service.

GitHub Actions Flows

I implemented Path Filtering in GitHub Actions. The workflow detects exactly which service changed and only builds/tests that specific service.

# .github/workflows/_ci.yml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    # Expose the filter results so downstream jobs can read them
    outputs:
      catalog: ${{ steps.filter.outputs.catalog }}
      basket: ${{ steps.filter.outputs.basket }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            catalog:
              - 'src/Services/Catalog/**'
            basket:
              - 'src/Services/Basket/**'
            # ... other services

  build-services:
    needs: detect-changes
    if: needs.detect-changes.outputs.catalog == 'true'
    # Only runs if the Catalog service changed

This saves massive amounts of CI minutes and speeds up feedback loops significantly.

Observability: Seeing Inside the Box

With 9 microservices, you can’t just “tail the logs”. I set up a full OpenTelemetry observability stack:

  • Logs: Serilog pushes structured logs to Loki (viewed in Grafana).
  • Traces: OpenTelemetry pushes to Tempo.
  • Metrics: Prometheus scrapes endpoints; visualized in Grafana.

Every request generates a TraceId that propagates through YARP to the downstream services (Catalog, Inventory, etc.), allowing me to visualize the entire request waterfall.

services.AddOpenTelemetry()
    .WithTracing(tracing =>
    {
        tracing.AddAspNetCoreInstrumentation()
               .AddHttpClientInstrumentation()
               .AddOtlpExporter(opt => 
               {
                   opt.Endpoint = new Uri(otlpEndpoint);
               });
    });
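
The logging side is configured once per service. A minimal sketch, assuming the Serilog.Sinks.Grafana.Loki sink and a Docker Compose service named loki (adjust the URL and labels to your own environment):

// Serilog -> Loki sketch; the "service" label makes filtering in Grafana easy.
builder.Host.UseSerilog((context, configuration) =>
{
    configuration
        .ReadFrom.Configuration(context.Configuration)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.GrafanaLoki("http://loki:3100", labels: new[]
        {
            new LokiLabel { Key = "service", Value = "catalog-api" }
        });
});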

Logs

Monitoring

Conclusion

Building microservices is about trade-offs. This project attempts to balance strict architectural purity with practical maintainability.

👉 Check out the full source code on GitHub:
https://github.com/huynxtb/progcoder-shop-microservices

If you found this helpful, please give the repo a ⭐!
