Dependency Injection (DI) is a way to structure code so that functions and classes receive the objects they need (dependencies) instead of creating them internally.
This keeps your code modular, testable, and easy to rewire.
A dependency is anything your code relies on: a logger, database client, repository, HTTP client, configuration, etc.
Step 1 — Start with explicit dependencies (baseline)
Without DI, code often looks like this:
- A function creates A() inside itself.
- A service creates its own DB client.
- Every call creates new objects, making it hard to test and control lifetimes.
DI flips the direction: construction happens outside, usage happens inside.
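A minimal sketch of that flip (the `Database` and `Service` names are made up for illustration):

```python
class Database:
    def query(self) -> str:
        return "rows"

# Without DI: the service builds its own dependency internally.
class TightlyCoupledService:
    def __init__(self) -> None:
        self.db = Database()  # hard-wired; impossible to swap in tests

# With DI: construction happens outside, usage happens inside.
class Service:
    def __init__(self, db: Database) -> None:
        self.db = db  # whoever calls us decides which Database to use

service = Service(Database())  # wiring happens at the call site
print(service.db.query())  # rows
```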
Step 2 — Inject into functions via a decorator (simple DI)
A very approachable Python DI technique is:
- Store factories in a registry (dependencies).
- Wrap a function with a decorator @inject().
- At call time, inspect the function’s annotations.
- Create required objects and pass them as keyword arguments.
inject_by_parameter_name.py (idea)
In the simplest version, the registry can be keyed by parameter name:
- `"a"` -> a factory returning `A()`
- `"b"` -> a factory returning `B()`
The decorator:
- reads annotations,
- for each parameter name, finds a factory,
- injects the created object into kwargs.
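A compact sketch of this name-keyed decorator (the `A`/`B` classes are hypothetical; a real implementation would also handle positional arguments and defaults more carefully):

```python
import functools
import inspect

class A: ...
class B: ...

# Registry keyed by parameter name, as described above.
dependencies = {"a": A, "b": B}

def inject():
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # At call time, look at the function's parameters and
            # inject a freshly created object for each known name.
            for name in inspect.signature(fn).parameters:
                if name in dependencies and name not in kwargs:
                    kwargs[name] = dependencies[name]()
            return fn(*args, **kwargs)
        return wrapper
    return deco

@inject()
def handler(a: A, b: B) -> str:
    return f"{type(a).__name__}{type(b).__name__}"

print(handler())  # AB
```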
Pros
- Dead simple, great for learning.
Cons
- Renaming a parameter can break injection.
- String keys give weak tooling and refactor support.
- Not scalable for larger systems.
Step 3 — Inject by type
A more Pythonic way
Instead of using strings, use types as keys:
```python
# di_registry_by_type_example.py
dependencies = {
    A: lambda: A(),
    B: create_b,  # or B (the class itself)
}
```
Now injection is driven by type hints:
- If a function requests b: B, DI looks up factory for B.
- If a factory itself needs dependencies (e.g. create_b(a: A) -> B), DI can resolve those too.
This is where DI starts to feel powerful: you’re building a dependency graph.
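A rough sketch of type-keyed injection that also resolves one level of a factory's own dependencies (`create_b` follows the example above; a real container would recurse fully, as Step 5 shows):

```python
import functools
import inspect

class A: ...

class B:
    def __init__(self, a: A) -> None:
        self.a = a

def create_b(a: A) -> B:
    return B(a)

# Registry keyed by type, not by parameter name.
dependencies = {A: A, B: create_b}

def inject(deps):
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            for name, param in inspect.signature(fn).parameters.items():
                factory = deps.get(param.annotation)
                if factory is not None and name not in kwargs:
                    # Resolve the factory's own annotated dependencies
                    # (one level deep in this sketch).
                    sub = {
                        n: deps[p.annotation]()
                        for n, p in inspect.signature(factory).parameters.items()
                        if p.annotation in deps
                    }
                    kwargs[name] = factory(**sub)
            return fn(**kwargs)
        return wrapper
    return deco

@inject(dependencies)
def handler(b: B) -> str:
    return type(b.a).__name__

print(handler())  # A
```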
Pros
- Refactor-friendly: parameter names don’t matter.
- IDE + type checker become useful.
- Natural fit with Python annotations.
Cons
- You quickly need recursion, cycle detection, caching (lifetimes), and clearer control.
Step 4 — Mark “injectable” parameters explicitly
Sometimes you want to distinguish “regular parameters” from those that should be resolved by DI.
A common trick is to introduce a marker generic:
```python
# inject.py
# Marker that means: "this parameter should be injected"
# (the `type` alias statement requires Python 3.12+)
type Inject[T] = T
```
Then you can write:
```python
# inject.py
@inject(deps)
def handler(a: Inject[A], b: Inject[B]) -> None:
    ...
```
At runtime you “unwrap” the marker type and extract the real type A / B from the annotation (using typing.get_args() or get_type_hints(include_extras=True) depending on the approach).
This improves readability and avoids accidental injection into values that should be provided by the caller.
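One version-agnostic way to sketch the unwrapping is `typing.Annotated` with a sentinel marker instead of the `type Inject[T] = T` alias (the `INJECT` sentinel and the `deps` registry here are illustrative):

```python
import functools
import inspect
from typing import Annotated, get_args, get_origin

INJECT = object()  # sentinel: "this parameter should be injected"

class A: ...
class B: ...

deps = {A: A, B: B}

def inject(providers):
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for name, param in inspect.signature(fn).parameters.items():
                ann = param.annotation
                # Only parameters marked Annotated[T, INJECT] are resolved.
                if get_origin(ann) is Annotated and INJECT in get_args(ann):
                    real_type = get_args(ann)[0]  # unwrap the marker
                    kwargs.setdefault(name, providers[real_type]())
            return fn(*args, **kwargs)
        return wrapper
    return deco

@inject(deps)
def handler(user_id: int, a: Annotated[A, INJECT]) -> str:
    # user_id stays caller-provided; only `a` is injected.
    return f"{user_id}:{type(a).__name__}"

print(handler(7))  # 7:A
```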
Step 5 — Resolve dependencies recursively
Build the object graph
The heart of DI resolution is:
- You want B.
- Find a provider/factory for B.
- Inspect what the factory needs (constructor params or function args).
- Resolve those first.
- Call the factory with resolved arguments.
- Return the built instance.
Factory can be a class or function
A DI system often supports both:
- a class as provider: `factory = SomeClass` (resolve its `__init__` dependencies)
- a function as provider: `factory = create_something` (resolve its function parameters)
That makes DI flexible and allows:
- pure factory functions,
- classes as providers,
- easy composition.
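The steps above can be sketched as a small recursive resolver (the `Config`/`Engine`/`Service` types are invented for the example; no caching or cycle detection yet):

```python
import inspect

def resolve(dep_type, providers):
    """Recursively build an instance of dep_type from a {type: factory} map."""
    factory = providers[dep_type]
    # inspect.signature works on both classes (via __init__) and functions.
    kwargs = {
        name: resolve(param.annotation, providers)
        for name, param in inspect.signature(factory).parameters.items()
        if param.annotation in providers
    }
    return factory(**kwargs)

class Config: ...

class Engine:
    def __init__(self, config: Config) -> None:
        self.config = config

class Service:
    def __init__(self, engine: Engine) -> None:
        self.engine = engine

def create_service(engine: Engine) -> Service:
    return Service(engine)

# Both classes (Config, Engine) and a function (create_service) as providers.
providers = {Config: Config, Engine: Engine, Service: create_service}

svc = resolve(Service, providers)
print(type(svc.engine.config).__name__)  # Config
```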
Step 6 — Add scopes (lifetimes): transient, singleton, request, session, app
Once DI works, you’ll notice a real-world problem: should every `resolve(A)` create a new `A`, or reuse one?
That’s what scopes (lifetimes) solve.
Common scopes
- Transient: new instance every time you resolve.
- Singleton: one instance per container.
- Request: one instance per request context (web request, job, message).
- Session: one instance per session block (e.g., batch operation).
- App: one instance shared application-wide.
Why request scope often needs context-local storage
In concurrent environments (threads/async), “request scoped” objects must not leak between requests. A context-local mechanism (e.g., ContextVar) makes each request have its own cache.
Step 7 — Add safety: circular dependency detection
As graphs grow, cycles happen:
- `A` needs `B`
- `B` needs `A`
A DI container should detect it early and produce a helpful error.
Typical technique: keep a stack (resolution path) and if the same type appears again, raise an error with the chain.
Step 8 — Where to assemble everything: the Composition Root
A key DI concept: the Composition Root is the single place where you wire dependencies:
- register providers,
- choose scopes,
- configure settings,
- build the container.
The rest of your code should mostly request types, not manually construct them.
This gives you:
- clean separation between “object graph assembly” and “business logic”
- easy test overrides (swap provider with fake/mock)
Step 9 — Testing with DI (the biggest win)
With DI, testing becomes straightforward:
- In production: register real implementations (DB, HTTP clients).
- In tests: register fakes/mocks for the same types.
Because consumers only depend on interfaces/types, you can swap implementations without touching the consumer code.
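A toy illustration of swapping a provider (the `FakeUserRepo` and the manual `build_service` helper are assumptions, standing in for a container's override mechanism):

```python
class UserRepo:
    def get_user(self, user_id: int) -> dict:
        raise RuntimeError("real DB not available in tests")

class FakeUserRepo(UserRepo):
    def get_user(self, user_id: int) -> dict:
        return {"id": user_id, "name": "test-user"}

class UserService:
    def __init__(self, repo: UserRepo) -> None:
        self.repo = repo
    def get_profile(self, user_id: int) -> dict:
        return {"user": self.repo.get_user(user_id)}

# Production wiring registers the real repo under the UserRepo key...
providers = {UserRepo: UserRepo, UserService: UserService}
# ...and a test overrides the same key with a fake. UserService is untouched.
providers[UserRepo] = FakeUserRepo

def build_service(p):  # minimal manual resolution for the sketch
    return p[UserService](repo=p[UserRepo]())

svc = build_service(providers)
print(svc.get_profile(1))  # {'user': {'id': 1, 'name': 'test-user'}}
```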
Production-style example (what it roughly looks like)
Below is a compact “production-ish” sketch showing how DI is typically used in a web service:
- One composition root creates and configures the container.
- A request scope is opened per incoming HTTP request.
- A handler function receives dependencies via Inject[...].
- A repository depends on a DB session, a service depends on a repo, etc.
```python
# production_di_example.py
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Callable

# Imagine these come from your DI library / container module

type Inject[T] = T

class Scope:
    APP = "app"
    SINGLETON = "singleton"
    TRANSIENT = "transient"
    REQUEST = "request"

class Container:
    def register(self, dep_type: type, factory: Callable[..., Any], scope: str) -> None: ...
    def resolve(self, dep_type: type) -> Any: ...
    def request(self): ...  # context manager

def inject(container: Container):
    def deco(fn):
        def wrapper(*args, **kwargs):
            # simplified: DI inspects annotations, resolves Inject[T]
            return fn(*args, **kwargs)
        return wrapper
    return deco

# ----- Domain / infrastructure -----

@dataclass(frozen=True)
class Settings:
    db_dsn: str
    api_key: str

class DbEngine:
    def __init__(self, settings: Inject[Settings]) -> None:
        self.dsn = settings.db_dsn

class DbSession:
    """Request-scoped session/transaction handle."""
    def __init__(self, engine: Inject[DbEngine]) -> None:
        self.engine = engine
    def close(self) -> None:
        pass

class UserRepo:
    def __init__(self, session: Inject[DbSession]) -> None:
        self.session = session
    def get_user(self, user_id: int) -> dict[str, Any]:
        # query using self.session
        return {"id": user_id, "name": "Alice"}

class UserService:
    def __init__(self, repo: Inject[UserRepo]) -> None:
        self.repo = repo
    def get_profile(self, user_id: int) -> dict[str, Any]:
        user = self.repo.get_user(user_id)
        return {"user": user, "features": ["basic"]}

# ----- Web layer (handlers) -----
# In production, a framework (FastAPI/Starlette/etc.) calls this per request.

def create_app_container() -> Container:
    c = Container()
    # APP/SINGLETON: long-lived stuff
    c.register(Settings, lambda: Settings(db_dsn="postgres://...", api_key="secret"), scope=Scope.APP)
    c.register(DbEngine, DbEngine, scope=Scope.SINGLETON)
    # REQUEST: per-request objects
    c.register(DbSession, DbSession, scope=Scope.REQUEST)
    c.register(UserRepo, UserRepo, scope=Scope.REQUEST)
    c.register(UserService, UserService, scope=Scope.REQUEST)
    return c

container = create_app_container()

@inject(container)
def get_user_handler(user_id: int, service: Inject[UserService]) -> dict[str, Any]:
    # Handler does not create UserService/UserRepo/DbSession.
    return service.get_profile(user_id)

# ----- "Framework glue" (very simplified) -----

def handle_http_request(user_id: int) -> dict[str, Any]:
    # Each HTTP request gets its own request scope
    with container.request():
        result = get_user_handler(user_id)
    return result

if __name__ == "__main__":
    print(handle_http_request(123))
    print(handle_http_request(456))
```
What’s “production-like” here
- The composition root is `create_app_container()`:
  - it decides wiring and lifetimes (scopes),
  - it is the only place that knows which implementation is used.
- The handler stays clean: it receives `UserService` and focuses on business logic.
- The request scope ensures `DbSession` and related objects don’t leak between requests.
Quick summary
DI in production usually means:
- assemble dependencies in one place (composition root),
- inject dependencies into services/handlers via type hints,
- control lifetimes with scopes (singleton vs request),
- make testing easy by swapping providers.