The .NET Architecture Pattern That Looks Professional but Scales Like Trash (and What to Do Instead)
TL;DR — The “enterprise-clean” layering stack (Controller → Application Service → Use Case/Handler → Port → Adapter → Repository → ORM) wins design reviews because it looks disciplined. At scale, it quietly taxes throughput: deep call stacks, excessive allocations, container‑resolved object graphs, ORM leakage hidden behind interfaces, async theater, and cross‑cutting decorators multiplying per‑request work.
The fix is not “no architecture.” The fix is: make costs visible, keep hot paths honest, and introduce abstractions only where change/volatility is real.
This is written for systems that already crossed the “it works” phase and entered the part that matters: SLOs, p99 latency, GC pressure, query plans, and cloud cost.
The pattern that gets applause (and then invoices)
Here’s the “looks professional” request path most .NET teams ship when they want to look serious:
// Controller
public async Task<IActionResult> GetOrder(Guid id)
    => Ok(await _getOrderUseCase.ExecuteAsync(id));

// Use case
public async Task<OrderDto> ExecuteAsync(Guid id)
{
    var order = await _orderRepository.GetByIdAsync(id);
    return _mapper.Map<OrderDto>(order);
}

// Repository
public Task<Order?> GetByIdAsync(Guid id)
    => _db.Orders
        .Include(o => o.Items)
        .FirstOrDefaultAsync(o => o.Id == id);
On paper: clean boundaries, test seams, separation of concerns.
In production: you’ve created a system where costs are hidden by design.
At scale, bottlenecks rarely announce themselves as “architecture.” They show up as:
- p95/p99 latency drift that “doesn’t correlate to CPU” until it does.
- Gen0/Gen1 GC rising with traffic even when business logic is simple.
- “EF Core is slow” becoming the scapegoat for a pipeline of allocations and indirection.
- Debugging a missing index requiring a tour through five interfaces and two mappers.
The architecture didn’t fail because someone implemented it wrong.
It failed because it optimized for professional aesthetics instead of runtime reality.
1) Abstraction inflation: forwarding chains that pay rent per request
A common hallmark: multiple types exist to forward one call.
public interface IOrderService
{
    Task<OrderResult> GetOrderAsync(Guid id);
}

public sealed class OrderService : IOrderService
{
    private readonly IOrderUseCase _useCase;

    public OrderService(IOrderUseCase useCase) => _useCase = useCase;

    public Task<OrderResult> GetOrderAsync(Guid id) => _useCase.HandleAsync(id);
}

public interface IOrderUseCase
{
    Task<OrderResult> HandleAsync(Guid id);
}
Three types. One behavior. Zero additional business meaning.
At 5 requests/second? Nobody cares.
At 5,000 requests/second? You are paying for:
- method call indirection
- object graph resolution
- allocations (often accidental: closures, iterators, mapping, async state machines)
- deeper stacks (harder profiling, harder inlining, harder to see hot paths)
The cost isn’t one layer. The cost is multiplication across a codebase with dozens of “just in case” abstractions.
Scalable systems don’t avoid abstraction. They avoid unearned abstraction.
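When a chain adds no behavior, collapsing it is usually a mechanical change. A sketch of the merged version, with AppDbContext and OrderResult as illustrative stand-ins for the real types:

```csharp
// One type replaces IOrderService + OrderService + IOrderUseCase.
// The behavior is unchanged; the forwarding hops are gone.
public sealed class GetOrderHandler
{
    private readonly AppDbContext _db;

    public GetOrderHandler(AppDbContext db) => _db = db;

    public Task<OrderResult?> HandleAsync(Guid id, CancellationToken ct) =>
        _db.Orders
            .AsNoTracking()
            .Where(o => o.Id == id)
            .Select(o => new OrderResult(o.Id, o.Status))
            .FirstOrDefaultAsync(ct);
}
```

Tests that used to mock IOrderUseCase now exercise the handler against a real (or in-memory) context, which also covers the query translation.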
2) “Clean” repositories that hide the real problem: query shape
Repositories often push teams toward entity-returning APIs that look pure but are operationally blunt instruments.
public Task<Order?> GetByIdAsync(Guid id)
    => _db.Orders
        .Include(o => o.Items)
        .ThenInclude(i => i.Product)
        .FirstOrDefaultAsync(o => o.Id == id);
This is how you accidentally ship:
- wide joins
- large graphs for small responses
- tracking when you don’t need it
- serialization bloat
- N+1 when navigation usage escapes into other layers
Then you “fix performance” by adding caching or more instances—because the architecture hides the query cost until it’s a fire.
Reads at scale are shaped for usage, not purity. The correct model for reads is often not your domain entity.
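One cheap way to see the query cost before it becomes a fire is EF Core's ToQueryString() (available since EF Core 5), which renders the SQL a LINQ query will execute:

```csharp
// Inspect the SQL the repository query actually produces.
// ToQueryString() is an EF Core 5+ extension on IQueryable<T>.
var query = _db.Orders
    .Include(o => o.Items)
    .ThenInclude(i => i.Product)
    .Where(o => o.Id == id);

// Reveals the join width, selected columns, and parameters.
Console.WriteLine(query.ToQueryString());
```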
3) ORM leakage: DbContext behavior bleeds through the boundary anyway
The architecture claims “infrastructure is isolated,” but EF behavior leaks:
public sealed class Order
{
    public ICollection<OrderItem> Items { get; set; } = new List<OrderItem>();
}

// Somewhere later:
var total = order.Items.Sum(i => i.Price);
If Items is not loaded the way you think, you get:
- N+1
- big object graphs
- lazy loading surprises
- accidental tracking
- query explosions
And because you buried EF behind an interface, you made it harder to reason about.
In production you don’t need a boundary that pretends EF isn’t there.
You need a boundary that makes the data shape explicit.
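When a code path genuinely needs the related rows, one option is EF Core's explicit loading API, which keeps the extra query visible at the point of use (a sketch; db is the DbContext):

```csharp
var order = await db.Orders.FirstOrDefaultAsync(o => o.Id == id, ct);
if (order is not null)
{
    // The collection load is explicit: no lazy-loading surprise,
    // and the second round trip is visible right here.
    await db.Entry(order)
        .Collection(o => o.Items)
        .LoadAsync(ct);

    var total = order.Items.Sum(i => i.Price);
}
```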
4) DI worship: when “testability” becomes a per-request allocation tax
DI is not the villain. Blind DI is.
services.AddScoped<IOrderService, OrderService>();
services.AddScoped<IOrderUseCase, OrderUseCase>();
services.AddScoped<IOrderRepository, OrderRepository>();
services.AddScoped<IMapper, Mapper>();
services.AddScoped<IValidator<OrderQuery>, OrderQueryValidator>();
// ...x50 more per feature
At high throughput, no single registration is expensive; the total object graph is:
- constructor chains grow
- transients multiply
- GC runs more often
- cold start worsens
- container diagnostics become part of your performance story
Under load, you end up with a “clean” system where your hottest code path is:
resolve dependencies → forward calls → allocate DTOs → map objects → serialize.
Rule that scales: Use DI for volatility and cross-cutting services.
Do not DI‑wrap every tiny behavior “because architecture.”
If a class only forwards calls, it isn’t a seam. It’s a toll booth.
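A pared-down composition root might look like this; the provider and type names (Npgsql, IPaymentGateway, GetOrderHandler) are illustrative, not prescribed:

```csharp
// Real volatility / lifetime needs: register it.
builder.Services.AddDbContext<AppDbContext>(o =>
    o.UseNpgsql(builder.Configuration.GetConnectionString("Default")));

// Stateless cross-cutting service: one instance, resolved once.
builder.Services.AddSingleton<IPaymentGateway, StripePaymentGateway>();

// Plain logic with a single implementation: register the concrete
// type, skip the interface ceremony.
builder.Services.AddScoped<GetOrderHandler>();
```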
5) Async everywhere, throughput nowhere: async theater
Async is for I/O boundaries.
Async for CPU/in-memory work is overhead disguised as modernity.
public async Task<OrderSummary> CalculateAsync(Order order)
{
    return await Task.FromResult(new OrderSummary(order.Items.Sum(i => i.Price)));
}
You added:
- async state machine
- scheduling overhead
- cognitive noise
- no benefit
At scale, the overhead compounds across layers because now everything is Task<T> by default, even when the work is immediate.
Rule that scales: Make sync code sync. Make I/O async. Be honest.
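Applied to the calculation above, the honest version is simply synchronous; if an awaitable signature is forced by an interface, ValueTask.FromResult avoids the Task allocation for already-completed work:

```csharp
// Honest: CPU-bound, in-memory work stays synchronous.
public OrderSummary Calculate(Order order) =>
    new(order.Items.Sum(i => i.Price));

// If an awaitable signature is unavoidable, a completed ValueTask
// skips both the async state machine and the Task allocation.
public ValueTask<OrderSummary> CalculateAsync(Order order) =>
    ValueTask.FromResult(new OrderSummary(order.Items.Sum(i => i.Price)));
```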
6) Cross-cutting decorators that multiply work per request
“Clean” architectures often implement logging/metrics/retries as decorators:
public sealed class LoggingOrderUseCase : IOrderUseCase
{
    private readonly IOrderUseCase _inner;
    private readonly ILogger<LoggingOrderUseCase> _logger;

    public LoggingOrderUseCase(IOrderUseCase inner, ILogger<LoggingOrderUseCase> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task<OrderResult> HandleAsync(Guid id)
    {
        _logger.LogInformation("Start {OrderId}", id);
        var result = await _inner.HandleAsync(id);
        _logger.LogInformation("End {OrderId}", id);
        return result;
    }
}
Add metrics, caching, retries, validation, auth—now your “simple request” hits 5–10 wrappers before the database.
This isn’t always wrong. It’s just rarely measured.
You’re stacking costs without owning the budget.
Rule that scales: Centralize cross-cutting concerns where possible (middleware / interceptors / OTel), and measure them.
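Centralized in ASP.NET Core middleware, timing and logging run once per request instead of once per wrapper. A minimal sketch (Stopwatch.GetElapsedTime requires .NET 7+):

```csharp
using System.Diagnostics;

// Registered once: app.UseMiddleware<RequestTimingMiddleware>();
public sealed class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestTimingMiddleware> _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var start = Stopwatch.GetTimestamp();
        await _next(context);
        _logger.LogInformation("{Path} took {Elapsed}",
            context.Request.Path, Stopwatch.GetElapsedTime(start));
    }
}
```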
What actually scales: boring, direct, honest design
The alternative isn’t “spaghetti.” It’s truthful composition:
- Make reads explicit.
- Keep write paths intentional.
- Introduce abstractions when change/duplication is proven.
- Treat performance as a first-class API constraint.
Here’s the “boring” version of GetOrder that scales better because it is explicit about shape and cost:
public sealed record OrderDto(Guid Id, decimal Total);

public async Task<IResult> GetOrder(Guid id, AppDbContext db, CancellationToken ct)
{
    var dto = await db.Orders
        .AsNoTracking()
        .Where(o => o.Id == id)
        .Select(o => new OrderDto(
            o.Id,
            o.Items.Sum(i => i.Price)))
        .FirstOrDefaultAsync(ct);

    return dto is null ? Results.NotFound() : Results.Ok(dto);
}
No repository. No mapper. No fake isolation.
Just a query shaped to the response.
This doesn’t forbid architecture. It forbids hiding the cost.
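As a minimal API endpoint, the handler above wires up in one line; ASP.NET Core binds AppDbContext from DI and the CancellationToken from the request:

```csharp
app.MapGet("/orders/{id:guid}", GetOrder);
```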
A pragmatic 2026 model: “thin architecture, thick measurement”
If your system has to scale, you don’t need more layers. You need better contracts.
Reads
- Use projection (Select) into DTOs.
- Prefer AsNoTracking() on read endpoints.
- Consider compiled queries for hot paths.
- Treat N+1 as a bug, not a style discussion.
Example: compiled query for a known hot endpoint:
public static class Queries
{
    public static readonly Func<AppDbContext, Guid, CancellationToken, Task<OrderDto?>>
        GetOrderDto =
            EF.CompileAsyncQuery((AppDbContext db, Guid id, CancellationToken ct) =>
                db.Orders.AsNoTracking()
                    .Where(o => o.Id == id)
                    .Select(o => new OrderDto(
                        o.Id,
                        o.Items.Sum(i => i.Price)))
                    .FirstOrDefault());
}
Usage:
var dto = await Queries.GetOrderDto(db, id, ct);
Writes
- Keep domain logic where it matters, but don’t force every write through 7 layers.
- Use explicit transaction boundaries where needed.
- Don’t hide EF tracking behind “repositories” that pretend it isn’t tracking.
public async Task<IResult> UpdateOrderStatus(
    Guid id,
    UpdateStatusRequest req,
    AppDbContext db,
    CancellationToken ct)
{
    var order = await db.Orders.FirstOrDefaultAsync(o => o.Id == id, ct);
    if (order is null) return Results.NotFound();

    order.Status = req.Status;
    order.UpdatedAtUtc = DateTime.UtcNow;

    await db.SaveChangesAsync(ct);
    return Results.NoContent();
}
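Where a write spans more than one SaveChangesAsync, make the transaction boundary explicit rather than hiding it in a unit-of-work wrapper. A sketch; the OutboxMessage type and OrderStatus value are illustrative:

```csharp
// The boundary is visible, scoped, and easy to audit.
await using var tx = await db.Database.BeginTransactionAsync(ct);

order.Status = OrderStatus.Shipped;
await db.SaveChangesAsync(ct);

db.OutboxMessages.Add(new OutboxMessage(order.Id, "OrderShipped"));
await db.SaveChangesAsync(ct);

await tx.CommitAsync(ct);
```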
Abstractions
Introduce them for volatility, not for vibes:
- multiple persistence models
- external systems that change (payments, providers)
- domain policies that evolve
- shared behavior that is actually shared
If you only have one implementation and it never changes, that interface is a costume.
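An interface that earns its existence usually has more than one real implementation. A sketch of a volatility-driven boundary (all names illustrative):

```csharp
public sealed record PaymentResult(bool Success);

// Two real implementations; the boundary reflects genuine volatility.
public interface IPaymentGateway
{
    Task<PaymentResult> ChargeAsync(Guid orderId, decimal amount, CancellationToken ct);
}

public sealed class StripeGateway : IPaymentGateway
{
    public Task<PaymentResult> ChargeAsync(Guid orderId, decimal amount, CancellationToken ct)
        => /* real HTTP call to the provider */ throw new NotImplementedException();
}

public sealed class FakeGateway : IPaymentGateway
{
    // Deterministic stand-in for tests and local development.
    public Task<PaymentResult> ChargeAsync(Guid orderId, decimal amount, CancellationToken ct)
        => Task.FromResult(new PaymentResult(Success: true));
}
```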
The “seam” you actually want: feature boundaries, not forwarding layers
Instead of Controller → Service → UseCase → Repo, consider a feature folder boundary with explicit query/command handlers (but not ceremonial forwarding). Example layout:
/Features/Orders
    GetOrder.cs
    UpdateOrderStatus.cs
    OrdersEndpoints.cs
    OrdersMapping.cs   (if needed)
GetOrder.cs holds the read model and query.
UpdateOrderStatus.cs holds the command logic.
EF usage is explicit, so cost stays visible.
This scales better organizationally too: a developer can open one folder and see the full behavior end-to-end.
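A GetOrder.cs in that layout can hold the read model, the query, and the handler together. A sketch, not a prescribed shape:

```csharp
// /Features/Orders/GetOrder.cs — the full read behavior in one file.
public static class GetOrder
{
    public sealed record Response(Guid Id, decimal Total);

    public static async Task<IResult> Handle(Guid id, AppDbContext db, CancellationToken ct)
    {
        var dto = await db.Orders
            .AsNoTracking()
            .Where(o => o.Id == id)
            .Select(o => new Response(o.Id, o.Items.Sum(i => i.Price)))
            .FirstOrDefaultAsync(ct);

        return dto is null ? Results.NotFound() : Results.Ok(dto);
    }
}
```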
“But what about testability?”
Testability that ignores production reality is theater.
The pattern usually gives you unit tests that mock the world:
_mockRepo.Setup(r => r.GetByIdAsync(It.IsAny<Guid>()))
    .ReturnsAsync(new Order());
This doesn’t test:
- query shape
- indexes
- serialization size
- tracking behavior
- N+1
- real latency paths
For scalable systems, prefer:
- integration tests for data access (real DB in CI if possible)
- contract tests for API responses
- load tests for hot paths
- profiling gates for regressions
You want tests that keep you honest, not tests that keep you comfortable.
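A data-access test along those lines, sketched with xUnit and the EF Core SQLite provider in in-memory mode (assuming the Order entity exposes Id and Items, and OrderItem a Price):

```csharp
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;
using Xunit;

public sealed class GetOrderTests
{
    [Fact]
    public async Task Returns_projected_total_for_existing_order()
    {
        // SQLite in-memory exercises the real LINQ-to-SQL translation.
        await using var conn = new SqliteConnection("DataSource=:memory:");
        await conn.OpenAsync();

        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlite(conn)
            .Options;

        await using var db = new AppDbContext(options);
        await db.Database.EnsureCreatedAsync();

        var orderId = Guid.NewGuid();
        db.Orders.Add(new Order
        {
            Id = orderId,
            Items = { new OrderItem { Price = 10m }, new OrderItem { Price = 5m } }
        });
        await db.SaveChangesAsync();

        var dto = await db.Orders
            .AsNoTracking()
            .Where(o => o.Id == orderId)
            .Select(o => new OrderDto(o.Id, o.Items.Sum(i => i.Price)))
            .FirstOrDefaultAsync();

        Assert.Equal(15m, dto!.Total);
    }
}
```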
How to refactor the “professional” pattern without blowing up the codebase
You don’t delete all layers in a weekend. You remove cost where it matters.
Step 1 — Identify hot paths
Start with evidence:
- p95/p99 endpoints
- top allocation sites
- slow SQL and query plans
- GC pressure under load
Tooling that pays back quickly:
- dotnet-counters (GC, exceptions, thread pool)
- dotnet-trace (CPU sampling)
- OpenTelemetry traces (request → DB timings)
Step 2 — Collapse read paths first
Reads dominate most systems. Replace “repository returns entity graph” with explicit projections for the top endpoints.
Step 3 — Reduce indirection where it adds no behavior
If a type only forwards a call, merge it.
Step 4 — Contain EF, don’t disguise it
Keep EF usage explicit in read/write handlers. If you need a boundary, make it a query boundary, not an entity-returning repository boundary.
Step 5 — Keep DI, but stop worshipping it
Compose the real dependencies. Do not register a class just because the pattern wants another layer.
The actual definition of “professional” in 2026
Professional architecture is not the number of folders named Domain.
Professional architecture is:
- predictable p99 latency,
- understandable runtime paths,
- cheap enough to run,
- safe enough to evolve,
- and honest enough to optimize.
If your architecture looks great in diagrams but becomes untraceable under load, it isn’t professional.
It’s decorative.
Final take
If your system feels “senior” on day one but fragile on year three, you’re not crazy.
You’ve probably shipped an architecture optimized for approval, not survival.
The uncomfortable question that saves systems is always the same:
Where does the cost show up — and can we see it without digging through five interfaces?
— Written by Cristian Sifuentes
Full‑stack engineer · .NET architect · performance‑first systems thinker
