A friend mentioned he was planning to migrate a Classic ASP application to .NET. In the same breath he described his current setup: one full VM per client, Hyper-V, the works.
The migration made complete sense. What caught my attention was that the deployment topology wasn't part of the plan.
That pattern — one VM per client — isn't wrong in a Classic ASP context. When your application is tightly coupled to IIS, when containerization isn't on the table, when the tooling assumes you're thinking in servers, it's just what the stack demands. It works well enough that nobody questions it.
But a .NET migration changes the calculus entirely. The hardware overhead that used to be unavoidable becomes optional. And if you're already touching the application, it's worth asking whether the infrastructure assumptions underneath it still hold.
This is what that looks like when you pull the thread.
The Actual Cost of "One VM Per Client"
Before talking about the alternative, it's worth being honest about what this pattern actually costs — because the costs are easy to normalize when you've been living with them long enough.
Every Hyper-V guest VM needs:
- A full OS kernel sitting in memory doing nothing most of the time
- A Windows Server license (per guest, not per host)
- Its own IIS instance, its own patch cycle, its own failure surface
- Manual intervention every time something needs to change across all clients
A core bug fix means touching every VM. A configuration change means touching every VM. A new client means provisioning an entire VM from scratch — OS installation, IIS setup, application deployment, firewall rules, the works.
The resource math is brutal too. Ten clients means ten full Windows Server instances competing for RAM and disk on the same host. The overhead isn't proportional; it's compounding.
When the app is Classic ASP, you live with this. When you're migrating to .NET, you don't have to anymore.
The Core Idea
One codebase. Multiple tenants. Every client isolated. One door in.
That's it. Everything else is just implementation detail.
The architecture that gets you there looks like this:
- Docker containers — one per client, lightweight, isolated, disposable
- A per-client docker-compose — each tenant gets their own database and cache, no sharing
- Nginx as the single entry point — all traffic hits port 443, nginx decides where it goes
- A provisioning panel — so spinning up a new client isn't a half-day manual process
Let's walk through each piece.
The Containers
Each client runs in their own Docker container. Inside that container lives the application. But the container isn't alone — each client gets their own docker-compose stack containing:
- The application container
- A dedicated database instance
- A dedicated cache instance
No shared database. No shared cache. No "oops, a bad query from client A just affected client B" scenarios. Full data sovereignty per tenant, by design, not by convention.
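A per-tenant stack along these lines can be sketched in a single compose file. Everything here is illustrative, not the author's actual configuration: the service names, images, environment variables, and the port number are all assumptions. The one structural point that matters is that the app's port binds to localhost only, so nginx can reach it but the public internet can't.

```yaml
# docker-compose.yml for one tenant (illustrative sketch)
services:
  app:
    image: myapp:latest              # the shared application image, one codebase
    ports:
      - "127.0.0.1:8960:8080"        # internal port unique to this tenant, localhost only
    environment:
      - ConnectionStrings__Default=Host=db;Database=tenant1
    depends_on: [db, cache]
    networks: [tenant1_net]
  db:
    image: postgres:16               # dedicated database, nothing shared
    environment:
      - POSTGRES_DB=tenant1
      - POSTGRES_PASSWORD=changeme
    volumes:
      - tenant1_db:/var/lib/postgresql/data
    networks: [tenant1_net]
  cache:
    image: redis:7                   # dedicated cache instance
    networks: [tenant1_net]
networks:
  tenant1_net: {}                    # private network, only these three services
volumes:
  tenant1_db: {}                     # data survives container replacement
```

In practice this file would be generated from a template with the tenant name and port substituted in, which is exactly what the provisioning step later in the post automates.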
Compared to the Hyper-V setup, the resource footprint per client drops dramatically. A container shares the host kernel — it doesn't carry its own OS. A lightweight database instance adds maybe a few hundred megabytes. That's noise compared to a full Windows Server VM sitting in memory.
The counterintuitive result: you get better isolation than shared-database multi-tenancy AND better resource efficiency than the VM approach. Those two things usually trade off against each other. Here they don't, because the baseline was so inefficient to begin with.
The Single Entry Point
This is the part that ties everything together — and it's also the part that handles security almost automatically.
Nginx sits at the edge. It listens on port 443, holds the TLS certificates, and routes incoming requests to the right container based on the subdomain.
```
tenant1.site.com → internal port 8960
tenant2.site.com → internal port 8961
tenant3.site.com → internal port 8962
```
The containers themselves are never directly reachable from the public internet. They live on an internal Docker network. The only thing that can reach them is nginx, from the inside.
This isn't an accident — it's the point. One door means one attack surface. You're not managing firewall rules per container, not worrying about exposed ports, not hoping someone didn't accidentally bind a service to 0.0.0.0. The security boundary is structural, not procedural.
TLS management lives in one place. Certbot touches nginx and nothing else. A container getting compromised has no direct path to the outside world.
The nginx configuration itself is handled through per-client config fragments in /etc/nginx/conf.d/ — one file per tenant, generated from a template. Adding or removing a client means dropping or deleting a fragment and reloading nginx. The core configuration never changes.
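A generated fragment might look something like the following. This is a sketch assuming nginx runs on the host and each tenant's app publishes its port on localhost only; the certificate paths and port number are illustrative.

```nginx
# /etc/nginx/conf.d/tenant1.conf -- generated from a template, one file per tenant
server {
    listen 443 ssl;
    server_name tenant1.site.com;

    # certificates live in one place, managed by certbot
    ssl_certificate     /etc/letsencrypt/live/site.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/site.com/privkey.pem;

    location / {
        # route to this tenant's container, bound to localhost only
        proxy_pass http://127.0.0.1:8960;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Adding a tenant means writing one such file and running `nginx -s reload`; removing one means deleting the file and reloading again.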
The Two Actors
Something worth making explicit: there are two completely separate interaction flows in this system, and they never intersect at runtime.
The user from the internet hits nginx on 443. Nginx routes them to their tenant's container based on the subdomain mapping. The user never knows containers exist. They never see a port number. They just get a response.
The admin uses the provisioning panel to create and configure tenants. The panel talks to the Docker host and the nginx config directory. Once a tenant is provisioned and the nginx fragment is written, the admin panel is completely out of the picture for that tenant's runtime traffic.
This separation matters more than it might seem. A problem with the provisioning panel — a bug, a restart, maintenance — affects nobody currently using the system. The panel does its job once per tenant and steps aside. It is not a single point of failure for the running system.
The Provisioning Panel and CLI
New client onboarding goes from a half-day manual process to a form submission.
The admin panel collects the basics — client name, subdomain, initial configuration. The CLI does the rest:
- Pulls the next available port from a ports table in the database
- Generates a docker-compose from the base template with the assigned port and client config
- Spins up the container stack with `docker compose up -d`
- Writes the nginx config fragment for the subdomain mapping
- Reloads nginx
New client is live. No SSH sessions, no manual IIS configuration, no OS installation. The whole process takes seconds.
The same CLI handles the inverse just as cleanly. Decommission a client: stop the containers, delete the fragment, reload nginx. Done.
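The steps above can be sketched as a small POSIX shell script. This is a hypothetical reconstruction, not the author's actual tooling: the paths, the template placeholders, and the flat-file stand-in for the ports table are all assumptions. The demo at the bottom runs against temporary directories with `DRY_RUN` set, so neither Docker nor nginx is touched.

```shell
#!/bin/sh
# Hypothetical provisioning sketch. Paths, placeholders, and the flat-file
# ports registry are assumptions made for illustration.
set -eu

provision_tenant() {
    tenant="$1"
    subdomain="$tenant.site.com"

    # 1. Pull the next free internal port (base 8960 matches the mapping above)
    last=$(sort -n "$PORTS_FILE" 2>/dev/null | tail -n 1)
    port=$(( ${last:-8959} + 1 ))
    echo "$port" >> "$PORTS_FILE"

    # 2. Render the per-tenant compose file from the base template
    mkdir -p "$BASE_DIR/$tenant"
    sed "s/{{TENANT}}/$tenant/g; s/{{PORT}}/$port/g" \
        "$BASE_DIR/docker-compose.template.yml" \
        > "$BASE_DIR/$tenant/docker-compose.yml"

    # 3. Write the nginx fragment mapping the subdomain to the port
    cat > "$NGINX_DIR/$tenant.conf" <<EOF
server {
    listen 443 ssl;
    server_name $subdomain;
    location / { proxy_pass http://127.0.0.1:$port; }
}
EOF

    # 4. Bring the stack up and reload nginx (skipped in dry runs)
    if [ -z "${DRY_RUN:-}" ]; then
        docker compose -f "$BASE_DIR/$tenant/docker-compose.yml" up -d
        nginx -s reload
    fi
}

# Demo run against temporary directories; no Docker or nginx involved
BASE_DIR=$(mktemp -d); NGINX_DIR=$(mktemp -d)
PORTS_FILE="$BASE_DIR/ports.txt"
printf 'app:\n  ports: ["{{PORT}}:80"]  # {{TENANT}}\n' \
    > "$BASE_DIR/docker-compose.template.yml"
DRY_RUN=1
provision_tenant tenant1
provision_tenant tenant2
cat "$PORTS_FILE"    # prints 8960 then 8961
```

Decommissioning is the mirror image: `docker compose down`, delete the fragment, free the port, reload nginx.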
What the Architecture Actually Looks Like
Here's the diagram — work in progress, but the bones are there:
Each container is a complete isolated tenant stack. Nginx is the single public door. The admin panel is an external actor that provisions and then gets out of the way. The user from the internet sees none of the plumbing.
What You Walk Away With
If you're planning a Classic ASP to .NET migration — or already in the middle of one — this is worth considering before you finalize the deployment plan. The path forward isn't a complete rewrite of your infrastructure. It's a change in deployment topology, and it's easier to do alongside the application migration than after.
The practical starting point:
- Containerize your application with Docker
- Define a docker-compose template that includes app, database, and cache per tenant
- Set up nginx as your reverse proxy with per-client config fragments in `conf.d/`
- Build a simple ports registry to manage internal port assignments
- Wire up a basic provisioning script that generates configs and spins containers
The provisioning panel is a second step, not a prerequisite. A well-written CLI script gets you most of the value immediately.
The security properties, the resource efficiency, the operational sanity — those come with the architecture. You don't have to engineer them separately. Design the system this way and they're just there.
What's Next
The architecture above handles the deployment and routing layer cleanly, but there's another piece worth covering separately: what happens when different clients need different behavior from the same codebase.
The answer isn't forking the repository per client. That path leads somewhere painful and I've seen where it ends up.
The answer is a plugin model — a per-client override DLL that the application detects and loads at runtime if present, falling back to base behavior if not. One codebase, client-specific behavior as an optional injectable layer.
That's the next post.
Programmer-analyst specializing in legacy modernization and systems architecture. I map things before I build them, rescue things before I rewrite them, and document things so the next person doesn't start from zero. Legacy modernization is where I live — occasionally I write it all down.
