Introduction
In object-oriented programming, Dependency Injection is one of the most important design techniques for building scalable, maintainable applications. While it is often associated with OOP, the underlying idea is not limited to object-oriented languages and can also be applied in functional programming environments.
To truly understand Dependency Injection, we must first understand the principle behind it. Dependency Injection is not the goal itself—it is a technique used to achieve the Dependency Inversion Principle, one of the five SOLID principles.
DIP — According to Robert C. Martin
The Dependency Inversion Principle (DIP) tells us that the most flexible systems are those in which source code dependencies refer only to abstractions, not to concretions.
In a statically typed language, like Java, this means that the use, import, and include statements should refer only to source modules containing interfaces, abstract classes, or some other kind of abstract declaration. Nothing concrete should be depended on.
This is the formal definition of the Dependency Inversion Principle. In simpler terms, it means that we should depend on abstractions (interfaces or abstract classes) rather than concrete implementations.
❌ Violates DIP (high-level depends on low-level)
public class EmailSender
{
    public void Send(string to, string message)
    {
        Console.WriteLine($"Email to {to}: {message}");
    }
}

public class OrderService
{
    private readonly EmailSender _emailSender = new EmailSender(); // concrete dependency

    public void PlaceOrder(string customerEmail)
    {
        // order logic...
        _emailSender.Send(customerEmail, "Your order has been placed.");
    }
}
✅ Applies DIP + DI (depend on abstraction, inject implementation)
public interface INotifier
{
    void Notify(string to, string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string to, string message)
    {
        Console.WriteLine($"Email to {to}: {message}");
    }
}

public class OrderService
{
    private readonly INotifier _notifier;

    public OrderService(INotifier notifier) // injected abstraction
    {
        _notifier = notifier;
    }

    public void PlaceOrder(string customerEmail)
    {
        // order logic...
        _notifier.Notify(customerEmail, "Your order has been placed.");
    }
}
// Program.cs (Composition - Manual DI)
var notifier = new EmailNotifier();
var service = new OrderService(notifier);
service.PlaceOrder("nahid@example.com");
Understanding Dependency Injection
Now that we understand the Dependency Inversion Principle, let’s look at Dependency Injection.
As the example code block shows, manually injecting dependencies requires initialization logic. OrderService needs an INotifier, so an EmailNotifier must be created beforehand. Now imagine that EmailNotifier depends on an HttpService, which must also be created and passed in. Then HttpService depends on a SocketService and a CryptoService, which must also be instantiated and supplied.
The chain continues.
Managing this object graph manually quickly becomes complex and hard to maintain. Dependency Injection tools are designed to solve this problem—and more. At its core, a DI container helps us organize and resolve dependencies so we don’t have to manually construct and pass every dependency through every layer of the application.
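To make that concrete, here is a minimal sketch using Microsoft.Extensions.DependencyInjection (the built-in .NET container, available as a NuGet package). The HttpService and SocketService types are hypothetical stand-ins for the chain described above; each type is registered once and the container builds the whole graph on demand.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Register each type once; the container resolves the whole chain
// OrderService -> INotifier (EmailNotifier) -> HttpService -> SocketService.
var services = new ServiceCollection();
services.AddSingleton<SocketService>();
services.AddSingleton<HttpService>();
services.AddSingleton<INotifier, EmailNotifier>();
services.AddTransient<OrderService>();

using var provider = services.BuildServiceProvider();

// No manual `new` chain: the container wires the constructors for us.
var orderService = provider.GetRequiredService<OrderService>();
orderService.PlaceOrder("nahid@example.com");

public class SocketService { }

public class HttpService
{
    public HttpService(SocketService socket) { }
}

public interface INotifier
{
    void Notify(string to, string message);
}

public class EmailNotifier : INotifier
{
    public EmailNotifier(HttpService http) { }

    public void Notify(string to, string message) =>
        Console.WriteLine($"Email to {to}: {message}");
}

public class OrderService
{
    private readonly INotifier _notifier;
    public OrderService(INotifier notifier) => _notifier = notifier;

    public void PlaceOrder(string email) =>
        _notifier.Notify(email, "Your order has been placed.");
}
```

If a new dependency appears anywhere in the chain, only its registration line changes; no call site needs to be updated.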
Dependency Injection: Beyond the new Keyword
So far, we’ve discussed how Dependency Injection solves the problem of building and managing complex object graphs. But I also said it solves more than that.
You might be thinking: “My application is small. I don’t need a complex setup to manage my objects. Using the new operator in my controller, service, or repository layer is enough.”
But stop.
The real power of Dependency Injection goes beyond object creation. It directly impacts your application's performance and maintainability.
In the following sections, I will practically demonstrate how mismanaging—or completely ignoring—Dependency Injection can make your application worse than you expect. You’ll also see how proper use of DI can significantly improve both performance and long-term maintainability.
Object Creation Is Not Free
In object-oriented programming, object creation and lifecycle management are among the most critical aspects that require careful consideration.
According to the Gang of Four (GoF) design patterns, Creational Design Patterns (such as Singleton, Factory, etc.) primarily focus on managing object creation in a controlled and structured way.
So why should we care about object creation?
Because every object has both time and space complexity associated with it. Each object allocated in memory consumes space in the CLR heap (.NET), and its creation requires CPU cycles. Modern computers are fast, but at scale, these costs add up significantly.
To demonstrate this, I conducted a benchmark on the following machine:
- CPU: Intel(R) Core(TM) i7-8700 @ 3.20GHz
- Memory: 16 GB
- OS: Windows 11
- Runtime: .NET 9
The benchmark measured the overhead of creating an Entity Framework DbContext with three DbSets and tracking one active entity.
Benchmark Details:
- Iterations: 100,000 (create context + touch model + track 1 entity)
- Total time: 2.94 seconds
- Average time per iteration: 29.39 µs (~0.029 ms), i.e. ~34,000 iterations per second
- Total managed allocations: ~2.04 GB (≈ 1.90 GiB)
- Average allocation per iteration: ~20.38 KB (≈ 19.9 KiB)
Even though each iteration seems inexpensive in isolation, 100,000 iterations resulted in over 2 GB of managed allocations. That is a significant amount of memory pressure, which directly impacts garbage collection and overall application performance.
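A simplified version of such a measurement can be sketched with Stopwatch and GC.GetTotalAllocatedBytes. This is not the exact EF Core benchmark above; CreateWorkload is a hypothetical placeholder that allocates roughly the per-iteration figure (~20 KB), and in a real test you would swap in your own factory (e.g. constructing a DbContext).

```csharp
using System;
using System.Diagnostics;

const int Iterations = 100_000;

long bytesBefore = GC.GetTotalAllocatedBytes(precise: true);
var sw = Stopwatch.StartNew();

for (int i = 0; i < Iterations; i++)
{
    object workload = CreateWorkload();
    GC.KeepAlive(workload); // keep the allocation observable
}

sw.Stop();
long allocated = GC.GetTotalAllocatedBytes(precise: true) - bytesBefore;

Console.WriteLine($"Total: {sw.Elapsed.TotalSeconds:F2} s, " +
                  $"avg {sw.Elapsed.TotalMilliseconds * 1000 / Iterations:F2} µs/iter, " +
                  $"allocated ~{allocated / (1024.0 * 1024.0):F0} MB");

// Placeholder workload: allocate ~20 KB, mirroring the per-iteration figure above.
static object CreateWorkload() => new byte[20 * 1024];
```

Even with a trivial workload, the total allocation quickly reaches the gigabyte range, which is exactly the GC pressure the numbers above describe.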
The Performance Implications of Dependency Injection
The performance implications of Dependency Injection are largely determined by object (or service) lifecycles. A DI container forces us to think explicitly about how long each object should live within an application.
To understand its impact on performance, we must first understand the different lifetimes an object can have. I will use .NET as a practical example.
Think of an application like a hotel.
Different things inside a hotel have different lifetimes. The building is shared by everyone, all day, every day. It is not rebuilt for each guest. In a .NET application, this is like a Singleton service — created once and shared for the entire lifetime of the application process. Services such as logging, configuration, or caching usually belong to this category.
A hotel room, however, is assigned to one guest for the duration of their stay. It exists only during that stay and is reset afterward. In ASP.NET Core, this is similar to a Scoped service — created once per HTTP request and disposed at the end of that request. A DbContext is a perfect example of this lifetime.
Finally, consider a dental kit provided to a guest. It is created for immediate use and discarded afterward. This is like a Transient service — a new instance is created every time it is requested. Transient services are typically lightweight and stateless.
In a real OOP application, objects behave the same way. Some should live for the entire application lifetime, some should exist only within a request boundary, and others should be created per use. Choosing the correct lifetime is not just an architectural preference — it directly affects memory usage, garbage collection, concurrency safety, and overall performance.
Object Lifetime Management in ASP.NET Core DI
Let’s walk through a practical demonstration of how the .NET DI container manages object lifetimes — and how incorrect lifetime configuration can significantly degrade application performance.
// Program.cs - Correct lifetimes in .NET DI (baseline)
// app-wide
builder.Services.AddSingleton<ICache, MemoryCacheService>();
// per request
builder.Services.AddScoped<IUnitOfWork, UnitOfWork>();
// per request (typical)
builder.Services.AddScoped<AppDbContext>();
// per use
builder.Services.AddTransient<IJsonFormatter, JsonFormatter>();
Now let’s look at an example of inconsistent use of Dependency Injection.
public interface ITokenKeyStore
{
    byte[] GetKey();
}

public sealed class TokenKeyStore : ITokenKeyStore
{
    private readonly byte[] _keyMaterial;

    public TokenKeyStore()
    {
        // Simulate expensive initialization + memory allocation
        _keyMaterial = new byte[10 * 1024 * 1024]; // 10 MB
        new Random(42).NextBytes(_keyMaterial);
    }

    public byte[] GetKey() => _keyMaterial;
}
// ❌ BAD: creates a new 10MB object graph frequently → allocations + GC pressure
builder.Services.AddTransient<ITokenKeyStore, TokenKeyStore>();
// ✅ GOOD: create once, reuse safely (if thread-safe / immutable)
builder.Services.AddSingleton<ITokenKeyStore, TokenKeyStore>();
In this example, TokenKeyStore performs an expensive initialization. During construction, it allocates 10MB of memory and fills it with key material. This means that every time a new instance is created, the runtime must allocate a large object on the heap and eventually reclaim it during garbage collection.
If this service is registered as Transient, a new 10MB instance will be created every time it is requested from the container.
Under light development traffic, this may go unnoticed. But under real production load, frequent large allocations can:
- Increase garbage collection frequency
- Add CPU overhead
- Reduce throughput
- Introduce latency spikes
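You can observe the difference directly by resolving the service under both lifetimes and comparing instance identity. This sketch uses Microsoft.Extensions.DependencyInjection and repeats a slimmed-down TokenKeyStore so the snippet compiles on its own:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

using var transientProvider = new ServiceCollection()
    .AddTransient<ITokenKeyStore, TokenKeyStore>()
    .BuildServiceProvider();

using var singletonProvider = new ServiceCollection()
    .AddSingleton<ITokenKeyStore, TokenKeyStore>()
    .BuildServiceProvider();

// Transient: two resolutions create two different 10 MB instances.
bool transientShared = ReferenceEquals(
    transientProvider.GetRequiredService<ITokenKeyStore>(),
    transientProvider.GetRequiredService<ITokenKeyStore>());

// Singleton: every resolution returns the same instance.
bool singletonShared = ReferenceEquals(
    singletonProvider.GetRequiredService<ITokenKeyStore>(),
    singletonProvider.GetRequiredService<ITokenKeyStore>());

Console.WriteLine($"Transient shared: {transientShared}");  // False
Console.WriteLine($"Singleton shared: {singletonShared}");  // True

public interface ITokenKeyStore
{
    byte[] GetKey();
}

public sealed class TokenKeyStore : ITokenKeyStore
{
    private readonly byte[] _keyMaterial = new byte[10 * 1024 * 1024]; // 10 MB per instance
    public byte[] GetKey() => _keyMaterial;
}
```

Under the Transient registration, every resolution pays the 10 MB allocation again; under Singleton, it is paid exactly once.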
How DI Protects You from Circular Dependency Pitfalls
Let’s look at a simple example and understand circular dependency:
public class ServiceA
{
    private ServiceB _serviceB;

    public ServiceA()
    {
        _serviceB = new ServiceB();
    }
}

public class ServiceB
{
    private ServiceA _serviceA;

    public ServiceB()
    {
        _serviceA = new ServiceA();
    }
}
Now when we try to run:
var serviceA = new ServiceA();
What happens?
ServiceA creates ServiceB.
ServiceB creates ServiceA.
That new ServiceA creates another ServiceB.
And the cycle continues…
Until the runtime gives up with:
Stack overflow.
Repeated 12002 times...
This is a circular dependency—two services depending on each other in a loop. It becomes even more problematic when dealing with real-world, enterprise-grade applications that involve hundreds of objects interacting at the same time.
When using a DI container, this problem is detected early. Instead of crashing with a stack overflow, the container throws a clear exception saying a circular dependency was detected.
And that’s a good thing.
Because circular dependencies don’t just break your application — they signal that your design is fighting itself.
You can try it yourself:
// CircularService.cs
public class ServiceA
{
    public ServiceA(ServiceB serviceB) { }

    public void DoSomething()
    {
        Console.WriteLine("ServiceA is doing something.");
    }
}

public class ServiceB
{
    public ServiceB(ServiceA serviceA) { }

    public void DoSomething()
    {
        Console.WriteLine("ServiceB is doing something.");
    }
}
// Program.cs - Testing Circular Dependency
builder.Services.AddScoped<ServiceA>();
builder.Services.AddScoped<ServiceB>();
When we run this application using dotnet run, it throws an error during application startup:
Unhandled exception. System.AggregateException: Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: ServiceA Lifetime: Scoped ImplementationType: ServiceA': A circular dependency was detected for the service of type 'ServiceA'.)
This is one of the real strengths of Dependency Injection. It prevents catastrophic runtime failures and forces architectural issues to surface early, before they evolve into performance bottlenecks or production outages.
Is There Any Performance Implication of Manual DI?
Yes — but not because manual DI is slow.
Manual DI is like driving a manual car. It’s perfectly fine. In fact, in the right hands, it can perform beautifully. But it requires discipline. You control when objects are created, how long they live, and when they are disposed.
And that’s exactly where things go wrong.
In small applications, manual DI works great. You create objects carefully, reuse what needs to be reused, and everything behaves. But as the application grows and more developers touch the code, calls to new start appearing everywhere. Heavy objects get recreated unnecessarily. Expensive resources are not reused. Disposal is forgotten.
Suddenly, memory usage increases, garbage collection runs more frequently, CPU usage spikes, and production starts behaving very differently from your local machine.
So no — manual DI isn’t slow.
But manual DI without strict lifetime discipline?
That’s how performance slowly degrades, one innocent new at a time.
The Maintainability Advantages of Dependency Injection
Performance matters. Stability matters.
But in real-world software, one thing matters even more:
Change.
- Requirements change.
- Business rules change.
- Technologies change.
- Teams change.
And the real question is not:
“Does my code work today?”
It’s:
“How painful will it be to change tomorrow?”
Let’s look at a simple example.
public class OrderService
{
    private readonly EmailService _emailService;

    public OrderService()
    {
        _emailService = new EmailService();
    }

    public void PlaceOrder()
    {
        // Business logic
        _emailService.SendReceipt();
    }
}
At first glance, this looks harmless.
But now imagine:
- You want to switch from EmailService to SmsService
- You want to mock notifications for testing
- You want to introduce retry logic
- You want to log notification failures
Now you must modify OrderService.
Your business logic is tightly coupled to a concrete implementation.
Now let’s refactor:
public interface INotificationService
{
    void SendReceipt();
}

public class EmailNotificationService : INotificationService
{
    public void SendReceipt()
    {
        Console.WriteLine("📧 Receipt sent via Email.");
        // real-world: SMTP/SendGrid/etc.
    }
}

// SMS implementation
public class SmsNotificationService : INotificationService
{
    public void SendReceipt()
    {
        Console.WriteLine("📱 Receipt sent via SMS.");
        // real-world: Twilio/etc.
    }
}

public class OrderService
{
    private readonly INotificationService _notificationService;

    public OrderService(INotificationService notificationService)
    {
        _notificationService = notificationService;
    }

    public void PlaceOrder()
    {
        _notificationService.SendReceipt();
    }
}
Now OrderService does not care:
- Whether it’s email
- Whether it’s SMS
- Whether it’s a mock
- Whether it’s a composite service
To switch implementations:
services.AddScoped<INotificationService, EmailNotificationService>();
// or
services.AddScoped<INotificationService, SmsNotificationService>();
No changes to business logic.
That’s maintainability.
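The same seam is what makes unit testing painless. A hand-rolled fake can stand in for the real notifier (the RecordingNotifier name here is hypothetical; a mocking library such as Moq would fill the same role). The snippet repeats the interface and OrderService so it compiles on its own:

```csharp
using System;

// Exercise OrderService without touching any email or SMS infrastructure.
var fake = new RecordingNotifier();
var service = new OrderService(fake);

service.PlaceOrder();
Console.WriteLine(fake.ReceiptsSent); // 1

public interface INotificationService
{
    void SendReceipt();
}

// Test double: records calls instead of sending anything.
public class RecordingNotifier : INotificationService
{
    public int ReceiptsSent { get; private set; }
    public void SendReceipt() => ReceiptsSent++;
}

public class OrderService
{
    private readonly INotificationService _notificationService;

    public OrderService(INotificationService notificationService) =>
        _notificationService = notificationService;

    public void PlaceOrder() => _notificationService.SendReceipt();
}
```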
And this is just the beginning.
So far, we only used DI to make one dependency replaceable. But once your codebase grows, DI unlocks a few other quiet advantages that most teams don’t fully take advantage of.
For example, DI makes it easy to add cross-cutting concerns—like logging, retries, caching, metrics, and tracing—without polluting your business logic. Instead of sprinkling try/catch, logger.Log(...), and retry loops inside every service, you can wrap behavior using decorators and keep the core logic clean.
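As a sketch of that idea, a decorator (the LoggingNotificationService name is hypothetical) can add logging around any INotificationService without touching OrderService or the email implementation:

```csharp
using System;

// Wrap the real implementation; callers still depend only on INotificationService.
INotificationService notifier =
    new LoggingNotificationService(new EmailNotificationService());

notifier.SendReceipt();

public interface INotificationService
{
    void SendReceipt();
}

public class EmailNotificationService : INotificationService
{
    public void SendReceipt() => Console.WriteLine("Receipt sent via Email.");
}

// Decorator: adds a cross-cutting concern (logging) around the inner service.
public class LoggingNotificationService : INotificationService
{
    private readonly INotificationService _inner;

    public LoggingNotificationService(INotificationService inner) => _inner = inner;

    public void SendReceipt()
    {
        Console.WriteLine("[log] sending receipt...");
        _inner.SendReceipt();
        Console.WriteLine("[log] receipt sent.");
    }
}
```

With a DI container, libraries such as Scrutor can register decorators like this declaratively, but the pattern itself needs nothing beyond the interface.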
DI also helps you catch design problems early (like circular dependencies) and gives you a single place to manage object lifetimes and disposal—both of which become huge for long-term stability.
I won’t dive deep into those topics here, because this article would turn into a book. But keep them in mind: once you start treating DI as a design tool—not just a convenience tool—it changes how you build and evolve systems.
Conclusion
Dependency Injection isn’t just about avoiding the new keyword. It’s about making your software behave like a well-managed hotel instead of a chaotic hostel where everyone builds their own room, shares the same toothbrush, and somehow the building gets rebuilt every time a guest checks in.
Used correctly, DI gives you control over lifetimes, protects you from circular dependency disasters, and makes change feel like a refactor—not a horror movie. Used incorrectly, it quietly burns CPU cycles, inflates memory, and makes production behave like it’s possessed.
So the next time you reach for new inside a controller, just remember: you’re not just creating an object, you might be creating a future outage.
Happy injecting — and may your dependencies never loop back to haunt you.😄