
From Client Request to 24 .NET Microservices: My Solo Migration Journey

Backstory: the client's request

It all started with a fairly typical client request:

"I have a MySQL database with tables and data in Excel. I want a robust mega-system that scales, runs without downtime, and grows with the business. Ready to build it step by step, but need the right architecture from the start."

At that point, the client had MySQL tables, Excel data, a general vision of how everything should work, and a strong desire to "build it properly." There were no specific requirements for development speed, load, or SLA; just the understanding that they needed a scalable architecture that wouldn't require a complete rebuild in a year or two.

Stage 1: MVP with Web API + console apps

First stage — quickly build a working prototype:

  • Migrated data from MySQL and Excel to MSSQL (the client chose SQL Server as a better fit for the .NET ecosystem);
  • Built one Web API on .NET with core CRUD operations;
  • Added several console applications for background tasks (processing, import, reports).

Initial architecture looked like this:
Console Apps (batch jobs)
          │
Web API (.NET + EF Core)
          │
MSSQL (data from MySQL + Excel)

This approach delivered a working product to the client in 2-3 weeks. The system processed data, showed results, and ran on the server.
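To give a sense of what stage 1 looked like in code, here's a minimal sketch of a CRUD endpoint over EF Core, assuming .NET 8 with implicit usings. The Product entity, AppDbContext, and routes are illustrative placeholders, not the actual project code:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Illustrative entity and DbContext; the real project used its own domain model.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db;
    public ProductsController(AppDbContext db) => _db = db;

    [HttpGet]
    public async Task<ActionResult<List<Product>>> GetAll() =>
        await _db.Products.AsNoTracking().ToListAsync();

    [HttpGet("{id:int}")]
    public async Task<ActionResult<Product>> GetById(int id)
    {
        var product = await _db.Products.FindAsync(id);
        if (product is null) return NotFound();
        return product;
    }

    [HttpPost]
    public async Task<ActionResult<Product>> Create(Product product)
    {
        _db.Products.Add(product);
        await _db.SaveChangesAsync();
        return CreatedAtAction(nameof(GetById), new { id = product.Id }, product);
    }
}
```

Nothing clever, and that was the point: ship something working fast, then evolve it.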

Stage 2: monolith pain points

The client kept adding requirements and complicating the logic, and I realized that continuing to pile everything into one service would only delay the inevitable rebuild. Classic monolith problems emerged:

  • Every change = restart of the entire service;
  • Impossible to scale selectively;
  • Downtime risk — a bug in one area could crash the whole system;
  • Maintenance complexity — all business logic in one place.

Solution: gradual migration to microservices

Once the final logic had taken shape and it became clear how the end system should work, I decided to migrate to a microservices architecture gradually, keeping the system operational at every step. The guiding principle was the strangler pattern: functionality moves out into new services piece by piece, and the monolith gradually gets "strangled."
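In practice that meant the monolith kept exposing its old routes, but some of them started delegating to the newly extracted services. Here's a minimal sketch of the idea; the AuthService, the "Services:Auth:BaseUrl" setting, and the route names are hypothetical, not taken from the repo:

```csharp
using System.Net.Http.Json;
using Microsoft.AspNetCore.Mvc;

// Inside the monolith: the public route stays the same,
// but the work is now forwarded to the extracted AuthService.
[ApiController]
[Route("api/auth")]
public class AuthProxyController : ControllerBase
{
    private readonly HttpClient _authService;

    // Registered elsewhere via:
    // builder.Services.AddHttpClient("auth", c =>
    //     c.BaseAddress = new Uri(builder.Configuration["Services:Auth:BaseUrl"]!));
    public AuthProxyController(IHttpClientFactory factory) =>
        _authService = factory.CreateClient("auth");

    [HttpPost("login")]
    public async Task<IActionResult> Login([FromBody] LoginRequest request)
    {
        // Old clients keep calling the monolith; the monolith delegates.
        var response = await _authService.PostAsJsonAsync("api/auth/login", request);
        var payload = await response.Content.ReadAsStringAsync();
        return StatusCode((int)response.StatusCode, payload);
    }
}

public record LoginRequest(string Email, string Password);
```

Once all consumers call the new service directly, the proxy route gets deleted and that slice of the monolith is retired.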

Key migration principles:

  • Start with business boundaries;
  • Each new service must be fully autonomous (Docker, tests, health checks);
  • Keep system operational after each step.

Migration result: 24 microservices

I single-handedly transformed the monolith into 24 independent Web API microservices:

Project description

Migration of a monolithic .NET application to a microservices architecture: 24 Web API microservices extracted (Auth, Application, Job, Mask, User, etc.), each with its own bounded context and dedicated controllers. Inter-service communication is implemented via RabbitMQ (an event bus for async task-related events and entities) and HTTP (typed HttpClient + IHttpClientFactory, with service URLs and routes externalized to appsettings.json).
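For the HTTP side, a typed client per downstream service keeps URLs out of the code. A minimal sketch of the registration and client shape; the UserService, the IUserServiceClient/UserServiceClient types, and the Services:UserService:BaseUrl key are illustrative assumptions (see the repo for the real contracts):

```csharp
using System.Net.Http.Json;

var builder = WebApplication.CreateBuilder(args);

// Base URL lives in appsettings.json, e.g.
// { "Services": { "UserService": { "BaseUrl": "http://localhost:5002/" } } }
builder.Services.AddHttpClient<IUserServiceClient, UserServiceClient>(client =>
    client.BaseAddress = new Uri(builder.Configuration["Services:UserService:BaseUrl"]!));

var app = builder.Build();
app.Run();

// Typed client: callers depend on the interface, not on raw HttpClient calls.
public interface IUserServiceClient
{
    Task<UserDto?> GetUserAsync(int id, CancellationToken ct = default);
}

public sealed class UserServiceClient : IUserServiceClient
{
    private readonly HttpClient _http;
    public UserServiceClient(HttpClient http) => _http = http;

    public Task<UserDto?> GetUserAsync(int id, CancellationToken ct = default) =>
        _http.GetFromJsonAsync<UserDto>($"api/users/{id}", ct);
}

public sealed record UserDto(int Id, string UserName);
```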

Data is stored in MSSQL via EF Core, API access is secured with JWT authentication, and health checks, Swagger, and centralized logging (Serilog, NLog) are configured.
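These cross-cutting pieces are wired up per service at startup. A simplified sketch of what such a Program.cs can look like, assuming the Serilog.AspNetCore, Swashbuckle.AspNetCore, and Microsoft.AspNetCore.Authentication.JwtBearer packages and Jwt:* keys in configuration (not copied from the repo):

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using Serilog;

var builder = WebApplication.CreateBuilder(args);

// Centralized logging: Serilog reads sinks and levels from appsettings.json.
builder.Host.UseSerilog((ctx, cfg) => cfg.ReadFrom.Configuration(ctx.Configuration));

// JWT bearer authentication; key, issuer, and audience come from configuration.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = builder.Configuration["Jwt:Issuer"],
            ValidAudience = builder.Configuration["Jwt:Audience"],
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };
    });

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();          // Swagger/OpenAPI
builder.Services.AddHealthChecks();        // liveness endpoint for each service

var app = builder.Build();

app.UseSwagger();
app.UseSwaggerUI();
app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();
app.MapHealthChecks("/health");            // what Prometheus/Grafana will scrape later

app.Run();
```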

Full project code: https://github.com/belochka1-04/WebApi_microservices

Status and completed tasks

What I've already done:

  • ✅ Extracted 24 Web API services with separate bounded contexts and contracts
  • ✅ Added unit/integration tests + Docker support to ApplicationService
  • ✅ Implemented health checks for all microservices (ready for Prometheus/Grafana)
  • ✅ Set up RabbitMQ and typed HttpClient communication
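On the messaging side, producers publish task-related events to RabbitMQ and interested services consume them. A minimal publisher sketch, assuming the RabbitMQ.Client 6.x API and a hypothetical application-events queue; the real project wraps this behind an event-bus abstraction:

```csharp
using System.Text;
using System.Text.Json;
using RabbitMQ.Client;

// Hypothetical event type; the real project publishes its own task/entity events.
public record ApplicationCreatedEvent(int ApplicationId, DateTime CreatedAtUtc);

public static class EventPublisher
{
    public static void Publish(ApplicationCreatedEvent evt)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Durable queue so events survive a broker restart.
        channel.QueueDeclare(queue: "application-events",
                             durable: true, exclusive: false, autoDelete: false);

        var body = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(evt));
        var props = channel.CreateBasicProperties();
        props.Persistent = true;

        channel.BasicPublish(exchange: string.Empty,
                             routingKey: "application-events",
                             basicProperties: props,
                             body: body);
    }
}
```

Consumers subscribe to the same queue, deserialize the event, and acknowledge the message once their local work is done.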

In progress/planned:

  • 🔄 Expand test coverage to the remaining services (see the test sketch after this list)
  • 🔄 Standardize Docker containers across all 24 services
  • 🔄 Integrate Prometheus + Grafana for full observability
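As a taste of what the per-service test coverage looks like, here's a minimal integration test sketch using xunit and Microsoft.AspNetCore.Mvc.Testing. The /health route matches the health-check setup above; the service's Program class must be visible to the test project (e.g. via `public partial class Program { }` when using top-level statements):

```csharp
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Spins up the service in-memory and hits its health endpoint.
public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthEndpointTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Health_endpoint_returns_success()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/health");

        response.EnsureSuccessStatusCode();
    }
}
```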

Tech stack

Complete project stack:

Platform: .NET 8, ASP.NET Core Web API
Language: C#
Architecture: microservices (24 services), REST API, bounded contexts, SOLID, DI/IoC
Data access: Entity Framework Core, SQL Server (MSSQL), migrations
Communication: HTTP (typed HttpClient), RabbitMQ (event bus)
Security: JWT authentication
Logging: Serilog, NLog, health checks
API docs: Swagger/OpenAPI
Containerization: Docker (Dockerfile, docker-compose)
Testing: unit/integration tests

Key migration lessons (from a solo developer)

What worked great:

  • Gradual approach kept the system running
  • Docker + health checks from day one saved tons of time
  • DTOs and explicit contracts prevented mass refactoring during DB changes (see the sketch after this list)
  • RabbitMQ was perfect for async scenarios
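On the DTO point: each service exposes its own contract types rather than its EF entities, so a schema change stays inside the owning service. A tiny illustration with hypothetical names:

```csharp
// EF entity: internal to the owning service, free to change with the schema.
public class JobEntity
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public int StatusCode { get; set; }          // stored as an int in MSSQL
}

// DTO: the published contract that other services and clients depend on.
public record JobDto(int Id, string Title, string Status);

public static class JobMapping
{
    public static JobDto ToDto(this JobEntity entity) =>
        new(entity.Id, entity.Title, entity.StatusCode == 1 ? "Active" : "Closed");
}
```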

What was trickier:

  • Splitting shared MSSQL tables across services (ownership boundaries)
  • Managing 24 Docker containers locally (docker-compose to the rescue)
  • Realizing microservices = code + DevOps (monitoring, logs, deployment)

What's next

The system is now production-ready, but full observability is still to come:

  • Prometheus + Grafana for health check and performance metrics
  • Unified docker-compose for all 24 services + MSSQL + RabbitMQ
  • CI/CD pipeline for automated deployments

In the next article, I'll cover setting up this monitoring stack on Windows with Telegram alerts.
