In our last post, we unpacked how FinWiseNest embraced a real-time, event-driven architecture to power its growing suite of microservices—from portfolio handling to transaction tracking and live market data. But as our backend matured, our local dev environment started to feel… less elegant.
Imagine opening five terminals, running `dotnet run` in each one, then spinning up RabbitMQ and SQL Server separately—all before you even wrote a line of code. That’s not development. That’s sysadmin cosplay.
So we took a pause from cranking out features and tackled a new challenge: making local development a joy again.
GitHub link
The Problem: An Orchestra with No Conductor
As our ecosystem grew to five .NET microservices, RabbitMQ, and SQL Server, starting a dev session became a ritual:
- Fire up RabbitMQ.
- Launch SQL Server.
- Open five terminal windows.
- Navigate into each microservice.
- Run `dotnet run` in every single one.
It worked—but it was messy. It was fragile. And let’s be honest: it didn’t spark joy.
We needed a conductor for our orchestra.
The Fix: Compose Yourself
Enter Docker Compose—our backstage pass to a cleaner, one-command local dev experience.
Our goal was simple but ambitious:
“Start the entire FinWiseNest backend with one command—no manual setup, no terminal gymnastics.”
We achieved it in two key steps:
Step 1: Containerize All the Things
Each .NET microservice got its own multi-stage Dockerfile, creating lightweight, production-like containers. These images are optimized, portable, and consistent across machines.
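As a rough sketch of what such a multi-stage Dockerfile looks like for one of the services (the project name comes from the compose file below; the .NET 8 image tags are an assumption, not necessarily what the repo uses):

```dockerfile
# Build stage: restore and publish using the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src

# Copy the project file first so restored packages are cached as a layer
COPY ["PortfolioService/PortfolioService.csproj", "PortfolioService/"]
RUN dotnet restore "PortfolioService/PortfolioService.csproj"

# Copy the rest of the source and publish a release build
COPY . .
RUN dotnet publish "PortfolioService/PortfolioService.csproj" -c Release -o /app/publish

# Runtime stage: only the published output on the slim ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "PortfolioService.dll"]
```

The two-stage split is what keeps the final image small: the SDK, NuGet caches, and intermediate build artifacts never make it into the runtime image.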
Step 2: One File to Rule Them All
We leveled up our `docker-compose.yml` to define the entire stack:
```yaml
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2022-latest
    container_name: mssql_dev
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=YourStrong!Passw0rd
      - MSSQL_PID=Developer
    volumes:
      - mssql_data:/var/opt/mssql
    restart: unless-stopped
    networks:
      - backend_network

  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: rabbitmq_dev
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
    networks:
      - backend_network

  portfolioservice:
    build:
      context: .
      dockerfile: PortfolioService/Dockerfile
    ports:
      - "8081:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
      - RabbitMQ__ConnectionString=amqp://user:password@rabbitmq_dev
      - ASPNETCORE_ENVIRONMENT=Development
      - MarketDataService__BaseUrl=http://marketdataservice:8080
    depends_on:
      - rabbitmq
      - sqlserver
      - marketdataservice
    restart: unless-stopped
    networks:
      - backend_network

  transactionservice:
    build:
      context: .
      dockerfile: TransactionService/Dockerfile
    ports:
      - "8082:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
      - RabbitMQ__ConnectionString=amqp://user:password@rabbitmq_dev
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - rabbitmq
      - sqlserver
    restart: unless-stopped
    networks:
      - backend_network

  marketdataservice:
    build:
      context: .
      dockerfile: MarketDataService/Dockerfile
    ports:
      - "8083:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
    restart: unless-stopped
    networks:
      - backend_network

  swisshubservice:
    build:
      context: .
      dockerfile: SwissHubService/Dockerfile
    ports:
      - "8084:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
    restart: unless-stopped
    networks:
      - backend_network

  taxservice:
    build:
      context: .
      dockerfile: TaxService/Dockerfile
    ports:
      - "8085:8080"
    environment:
      - ASPNETCORE_URLS=http://+:8080
      - ConnectionStrings__DefaultConnection=Server=mssql_dev;Database=FinWiseDb;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
    depends_on:
      - sqlserver
    restart: unless-stopped
    networks:
      - backend_network

volumes:
  mssql_data:

networks:
  frontend_network:
    driver: bridge
  backend_network:
    driver: bridge
```

Note that .NET configuration keys passed as environment variables must use `__` (double underscore) as the section separator—`:` works in `appsettings.json` but not in Linux environment variable names.
This one file now defines:

- The five .NET services
- RabbitMQ
- SQL Server

It handles image builds, port mappings, inter-service networking—everything needed to orchestrate the system locally.
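With the file in place, the day-to-day workflow shrinks to a handful of commands (a sketch using standard Docker Compose CLI options; service names match the compose file above):

```shell
# Build all images and start the whole stack in the background
docker-compose up --build -d

# Tail the logs of a single service
docker-compose logs -f portfolioservice

# Tear everything down; add -v to also wipe the SQL Server volume
docker-compose down
```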
The Wins: Why This Matters
This wasn’t just a nice-to-have. The shift brought real impact to our project:
- One Command to Start It All: Just run `docker-compose up` and you’re in business. No manual steps. No forgotten dependencies.
- Zero “Works on My Machine” Moments: Everyone gets the same setup, every time. It’s like giving your team matching superhero suits.
- Clean Isolation: No more stepping on each other’s toes. Each service runs in its own neatly contained box, dependency conflicts be gone.
- Smooth On-Ramp to Production: These same container images will power our future Azure Kubernetes Service (AKS) deployment. We’re building dev environments that scale.
In Docker Desktop, the whole stack now appears as a single grouped Compose project rather than a scatter of unrelated containers.
Tip: I strongly recommend designating one service as the owner responsible for initializing the database. Since SQL Server now runs in a Docker container with its own local volume, removing that volume, recreating containers, or switching machines can leave you with a fresh, empty database.
To handle this gracefully, the owning service should automatically create the database, apply migrations, and optionally seed it with sample data at startup. (This is ideal for development only; it isn’t recommended in production, and we’ll cover why in a future article.)
In our setup, the owning service applies any pending migrations as part of its startup sequence, so a freshly created container comes up with the schema (and sample data) already in place.
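A minimal sketch of that pattern with EF Core in ASP.NET Core looks like the following (`FinWiseDbContext` and `SeedData` are illustrative names, not the project’s actual types):

```csharp
// Program.cs — apply EF Core migrations on startup (development pattern).
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Register the DbContext using the same connection string the compose file injects.
builder.Services.AddDbContext<FinWiseDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

var app = builder.Build();

// Create the database if it doesn't exist and apply any pending migrations.
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<FinWiseDbContext>();
    db.Database.Migrate();

    if (app.Environment.IsDevelopment())
    {
        SeedData.Initialize(db); // optional: load sample data for local work
    }
}

app.Run();
```

Because `Database.Migrate()` is idempotent—it only applies migrations that haven’t run yet—restarting the container against an existing volume is harmless.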
Wrapping Up
We pressed pause on building features to fix the way we build them—and it was worth it. With Docker and Docker Compose in place, our team now enjoys a repeatable, scalable, and, dare we say it, pleasant development experience.
We’ve swapped chaos for clarity.
And with this solid foundation under us, it’s full steam ahead—onto building features that truly move the needle.