Overview
This article explores whether dependency injection (DI) can exist in Rust without sacrificing the language’s core philosophy of zero-cost abstractions.
We will approach the question from three angles:
- Why dependency injection still matters in Rust, even for systems built with zero-sized types and compile-time guarantees.
- How DI evolved in other ecosystems, using Java as a reference point.
- A practical Rust-oriented approach to implementing DI with compile-time guarantees.
We’ll also show how Rust traits enable DI patterns that scale across crates, preserving zero-cost guarantees.
All Rust source code used in this article is available in the repository:
https://github.com/amidukr/rust-dependency-injection-example
Rust DI
The Problem Rust Hasn’t Solved Yet
Rust has solved problems most languages haven’t even dared to touch: memory safety without a garbage collector, fearless concurrency, and powerful zero-cost abstractions.
But there is a class of problems Rust hasn’t fully confronted yet.
Not because Rust is incapable — but because these problems exist above the machine level. They are not about memory safety or performance. They are about composition, modularity, and architectural correctness in large systems.
Managing dependencies between dozens or hundreds of components is fundamentally different from managing memory or threads. Rust gives us powerful primitives, but the question remains:
How do we scale composition safely and maintainably?
What “Enterprise” Really Means in Rust Terms
When Rust developers hear enterprise, they often think:
slow, over-engineered, bloated
But that perception is misleading.
Enterprise systems are not bloated by accident. They are complex because composition eventually stops being trivial.
The complexity comes from business requirements, not from the technology stack.
Enterprise: The Burden We Can’t Avoid
When a company reaches a certain scale, several things inevitably happen:
- Products serve thousands or millions of users
- Systems integrate with vendors, partners, and third-party services
- Teams work independently on modules and features
- Software must evolve continuously without stopping the business
These realities create architectural pressure.
From a technical perspective, systems must support:
Scalability
At multiple levels, in terms of both users and data: hundreds, thousands, millions, or even billions of concurrent users, as well as functional modules interacting across teams.
Reliability
Systems run 24/7. Services must handle failures, because dependencies on vendors, partners, or third-party services mean that failures are inevitable, and the system must continue operating despite them.
Modularity
Independent teams need to work on isolated components without breaking other parts of the system.
Flexibility
Infrastructure choices may change. Databases, messaging systems, or integrations might need to be swapped without rewriting the entire application.
Observability
To detect and respond to performance bottlenecks, integration failures, or unexpected behaviors quickly.
Extensibility
New products, markets, and regulations require systems to evolve incrementally rather than being rebuilt from scratch.
Maintainability
Every business decision introduces new dependencies, and every dependency increases the complexity of the system’s composition.
The system must not become so convoluted that small changes introduce cascading errors.
Even with Rust’s ownership model and strong type system, manually managing this dependency graph eventually becomes impractical.
These pressures are not theoretical — they define the daily reality of enterprise software engineering. Every design decision must balance immediate business needs with long-term sustainability, especially under high concurrent load.
Where Dependency Injection Becomes Relevant
This is exactly where dependency injection becomes useful.
DI allows systems to manage complexity by separating what components need from how those dependencies are created and connected.
In practice, this means:
- Components declare their dependencies without constructing them directly
- Dependencies are provided externally, keeping components isolated
- Systems evolve gradually without breaking existing modules
- Optional features and plugins can be integrated without tightly coupling the system
DI is not just a convenience. It is a structured approach to handling inevitable architectural complexity.
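The shape of this idea can be sketched in a few lines of Rust. The names here (Mailer, SignupService, SmtpMailer) are hypothetical and exist only to illustrate the pattern: the component declares a capability it needs, and the composition site supplies a concrete implementation.

```rust
// The component declares the capability it needs as a trait.
trait Mailer {
    fn send(&self, to: &str, body: &str);
}

// SignupService receives its dependency instead of constructing it.
struct SignupService<M: Mailer> {
    mailer: M,
}

impl<M: Mailer> SignupService<M> {
    fn register(&self, email: &str) {
        // ...create the user, then notify...
        self.mailer.send(email, "Welcome!");
    }
}

// A concrete implementation, supplied externally at the composition site.
struct SmtpMailer;

impl Mailer for SmtpMailer {
    fn send(&self, to: &str, body: &str) {
        println!("SMTP -> {}: {}", to, body);
    }
}

fn main() {
    // Wiring happens in one place; the component stays isolated.
    let service = SignupService { mailer: SmtpMailer };
    service.register("user@example.com");
}
```

Swapping SmtpMailer for another implementation requires no change to SignupService, which is exactly the isolation property described above.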
Enterprise Isn’t Just Complexity — It’s Heterogeneity
Large systems are rarely uniform.
They typically contain:
- Independent components with their own dependency trees
- Stateful infrastructure such as databases, caches, and message brokers
- Optional features and plugin-style modules
- Multiple implementations of the same interface
This heterogeneity appears naturally over time.
Systems accumulate tools built years apart, libraries maintained by different teams, and components that survive long after their original authors have moved on.
Enterprise systems grow gradually, and they rarely get the chance to start over.
Rust does not eliminate these pressures. Any real system eventually faces them.
Java’s Historical Perspective: DI Was Inevitable
Java did not adopt dependency injection because it was fashionable.
It adopted DI because large systems were becoming impossible to manage without it.
Without DI, developers quickly ran into familiar problems:
- Tight coupling between components
- Fragile initialization order
- Hard-coded dependencies scattered across the codebase
- Changes in one module unexpectedly breaking another
Dependency injection emerged as a discipline for managing complexity.
Components declare what they depend on, and the system provides those dependencies when constructing the application.
This separation allows systems to evolve without collapsing under their own architecture.
DI in a Nutshell
You can think of dependency injection as a kind of runtime composition system.
If your application contains many services, modules, plugins, or optional components, something must assemble them and ensure they are wired correctly. That role belongs to the DI system.
DI is conceptually similar to package managers such as Cargo or Maven, but it operates at a different level:
- Package managers resolve dependencies between libraries at build time.
- Dependency injection resolves dependencies between components at runtime.
Loading executable code into memory is easy — the operating system handles that.
What is harder is creating objects, initializing them correctly, and ensuring that all components interact with the right dependencies.
This becomes increasingly difficult as systems grow.
Dependency injection addresses this problem directly.
How Dependency Injection Is Typically Solved in Java
Java provides one of the most mature ecosystems for dependency injection. Frameworks such as Spring or Guice automate object creation and dependency wiring almost entirely.
Let’s look at a concrete example: a simple User Management API.
We have two controllers:
- ReadController — retrieves users from a database
- WriteController — creates users and publishes events to a message broker
Both controllers depend on infrastructure services that must be created and wired correctly.
Without Dependency Injection
In a traditional manual setup, object creation and wiring might look like this:
public class Application {
    public static void main(String[] args) {
        Database database = new PostgresDatabase();
        MessageBroker broker = new KafkaBroker();

        ReadController readController = new ReadController(database);
        WriteController writeController = new WriteController(database, broker);

        // start application
    }
}
At first glance this appears manageable. But as the application grows, the initialization code expands rapidly:
- multiple infrastructure services
- optional modules
- configuration logic
- conditional wiring depending on environment
The main method eventually becomes responsible for constructing the entire dependency graph of the application.
This approach becomes difficult to maintain and extremely fragile as the system evolves.
Dependency Injection with Spring
Dependency injection frameworks solve this by moving the responsibility of object creation and wiring to a container.
Components simply declare what they need.
@Service
public class Database {
}

@Service
public class KafkaBroker implements MessageBroker {
}

@RestController
public class ReadController {
    private final Database database;

    @Autowired
    public ReadController(Database database) {
        this.database = database;
    }
}
Dependencies are declared in constructors, and the DI container automatically provides the correct instances.
The application no longer manually constructs the object graph. Instead, the framework scans components and resolves dependencies automatically.
Polymorphism in Java DI
Java DI frameworks also support multiple implementations of the same interface.
For example, an application may support several message brokers simultaneously:
@Service
public class KafkaBroker implements MessageBroker {
}
@Service
public class RabbitBroker implements MessageBroker {
}
A controller can receive all implementations at once:
@RestController
public class WriteController {
    private final List<MessageBroker> brokers;

    @Autowired
    public WriteController(List<MessageBroker> brokers) {
        this.brokers = brokers;
    }
}
The DI container automatically collects all implementations of MessageBroker and injects them into the controller. This makes the system highly extensible:
- new brokers can be added
- existing ones can be removed
- the controller remains unchanged
The Cost of Traditional DI
Java DI frameworks provide powerful capabilities, but they come with trade-offs:
- dependency resolution happens at runtime
- reflection is heavily used
- errors may only appear during application startup
- dependency graphs are not always fully visible to the compiler
This runtime flexibility works well for the Java ecosystem, but it introduces overhead and reduces compile-time guarantees. Rust, on the other hand, encourages a different philosophy:
If something can be verified at compile time, it should be.
This raises an interesting question:
Can Rust achieve the same flexibility of dependency injection while preserving compile-time guarantees and zero runtime cost?
Journey into Rust Coding
Let’s try to build a dependency injection approach in Rust gradually.
We will follow the same conceptual example used in the Java section:
- a ReadController
- a WriteController
- multiple implementations of a MessageBroker
- an abstraction for database connectivity
All code used in this article can be found in the repository:
https://github.com/amidukr/rust-dependency-injection-example
Rust Without Dependency Injection
In the first example we will implement a small Rust application without dependency injection.
However, we will introduce use-traits, which will later allow us to transition naturally to a dependency injection model.
1. Defining Database Interfaces
First, let’s define the interface used to access the database.
1.1 DatabaseConnection Trait
This trait represents an abstraction for database connectivity that can support multiple implementations (Postgres, MySQL, etc.).
trait DatabaseConnection {
    fn read_query(&self, query: &str);
    fn write_query(&self, query: &str);
}
1.2 UseDatabaseConnection Trait
Next, we define a trait that allows components to request a database connection from a context.
trait UseDatabaseConnection {
    type T: DatabaseConnection;
    fn database_connection(&self) -> &Self::T;
}
This trait will later be used as the foundation of dependency resolution.
Instead of components knowing the entire application context, they simply declare that they require a DatabaseConnection.
This keeps components decoupled from the full application structure.
2. Database Implementation
Now we provide a concrete implementation of DatabaseConnection.
#[derive(Default)]
struct PostgresDatabaseConnection {}

impl DatabaseConnection for PostgresDatabaseConnection {
    fn read_query(&self, query: &str) {
        println!("Reading from Postgres DB: {}", query)
    }

    fn write_query(&self, query: &str) {
        println!("Writing into Postgres DB: {}", query)
    }
}
For simplicity, this example only prints messages instead of connecting to a real database.
In a real system this could be implemented using any production database library.
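Because components only see the DatabaseConnection trait, swapping in a test double requires no changes to the controllers. A minimal sketch of such a double (the mock type is invented here; the trait is repeated so the snippet compiles on its own):

```rust
use std::cell::RefCell;

// Repeated from the article so the snippet stands alone.
trait DatabaseConnection {
    fn read_query(&self, query: &str);
    fn write_query(&self, query: &str);
}

// Hypothetical test double: records queries instead of executing them.
#[derive(Default)]
struct MockDatabaseConnection {
    queries: RefCell<Vec<String>>,
}

impl DatabaseConnection for MockDatabaseConnection {
    fn read_query(&self, query: &str) {
        self.queries.borrow_mut().push(format!("READ: {}", query));
    }
    fn write_query(&self, query: &str) {
        self.queries.borrow_mut().push(format!("WRITE: {}", query));
    }
}

fn main() {
    let db = MockDatabaseConnection::default();
    db.read_query("SELECT * FROM users");
    // The recorded queries can later be inspected in a test.
    assert_eq!(db.queries.borrow()[0], "READ: SELECT * FROM users");
}
```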
3. Controllers
Now we define the controllers responsible for performing application logic.
3.1 Controller Structs
#[derive(Default)]
struct ReadController {}
#[derive(Default)]
struct WriteController {}
Rust allows structs with no fields.
These zero-sized types have no runtime cost, but they still represent concrete types at compile time and can participate in abstractions.
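The zero-cost claim can be checked directly: std::mem::size_of reports zero bytes for these structs.

```rust
// Both controllers are zero-sized: they exist only at the type level.
#[derive(Default)]
struct ReadController {}

#[derive(Default)]
struct WriteController {}

fn main() {
    // A zero-sized type occupies no memory at runtime.
    assert_eq!(std::mem::size_of::<ReadController>(), 0);
    assert_eq!(std::mem::size_of::<WriteController>(), 0);
    println!("controllers carry no runtime state");
}
```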
3.2 Controller Use Traits
Next we define traits that expose controllers to other components.
trait UseReadController {
    fn read_controller(&self) -> &ReadController;
}

trait UseWriteController {
    fn write_controller(&self) -> &WriteController;
}
These traits allow components to access controllers without knowing anything about the application context.
3.3 Controller Context
Now we combine the previously defined traits into a context trait.
trait ControllerContext:
    UseDatabaseConnection + UseReadController + UseWriteController
{
}
This context describes the minimal environment required for controllers to function.
Controllers will depend only on this trait instead of the full application context.
3.4 Controller Implementation
Now we implement the controller logic.
impl ReadController {
    fn do_something<C: ControllerContext>(&self, ctx: &C, argument: &str) {
        ctx.database_connection()
            .read_query(format!("SELECT * FROM table WHERE id = '{}'", argument).as_str());
    }
}

impl WriteController {
    fn do_something<C: ControllerContext>(&self, ctx: &C, argument: &str) {
        ctx.database_connection().write_query(
            format!("UPDATE table SET value = 'new' WHERE id = '{}'", argument).as_str(),
        );
    }
}
Notice something important here:
The controllers do not know about the full application context.
They only know about the traits they depend on.
This means the controller and database code could already be extracted into separate crates, reusable by any application implementing the required use-traits.
4. Wiring the Application
Now we wire all components together.
4.1 Application Context
We define a struct that holds all application components.
#[derive(Default)]
struct ApplicationContext {
    read_controller: ReadController,
    write_controller: WriteController,
    postgres_database_connection: PostgresDatabaseConnection,
}
This struct acts as the composition root of the application.
4.2 Implement Use Traits
Next we implement the previously defined traits.
impl UseReadController for ApplicationContext {
    fn read_controller(&self) -> &ReadController {
        &self.read_controller
    }
}

impl UseWriteController for ApplicationContext {
    fn write_controller(&self) -> &WriteController {
        &self.write_controller
    }
}

impl UseDatabaseConnection for ApplicationContext {
    type T = PostgresDatabaseConnection;
    fn database_connection(&self) -> &Self::T {
        &self.postgres_database_connection
    }
}
By implementing these traits, ApplicationContext becomes capable of providing dependencies to components.
4.3 Controller Context Implementation
impl ControllerContext for ApplicationContext {}
Since ApplicationContext already implements the required traits, it automatically satisfies ControllerContext.
4.4 Running the Application
Finally we run the application.
pub fn run() {
    let ctx = ApplicationContext::default();

    ctx.read_controller().do_something(&ctx, "argument");
    ctx.write_controller().do_something(&ctx, "argument");
}
Key characteristics of this approach:
- No dyn traits
- No Arc or Rc
- No runtime dependency container
All wiring is resolved at compile time through generics and monomorphization.
Multi-Threading
An attentive reader may ask:
Will this approach work in multi-threaded environments?
In Rust, thread safety is typically ensured using the Send and Sync traits.
These traits are automatically implemented by the compiler if all fields of a struct are also Send + Sync.
More details can be found in the Rust documentation:
https://doc.rust-lang.org/nomicon/send-and-sync.html
We can verify thread safety with a compile-time assertion:
const _: () = {
    const fn assert_send_sync<T: Send + Sync>() {}
    assert_send_sync::<ApplicationContext>();
};
If this compiles, the entire application context can safely be shared between threads.
In real systems, some components (such as database connections) may not be inherently thread-safe. In such cases, a connection pool or synchronization mechanisms such as Mutex are required.
This limitation is not related to the dependency injection approach itself, but rather to shared resource management in concurrent systems.
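As a sketch of that last point (the types here are invented for illustration, not part of the example repository), a non-thread-safe connection can be wrapped in a Mutex so that the surrounding context remains Send + Sync:

```rust
use std::sync::Mutex;

// Hypothetical: a connection whose internal state needs exclusive access.
#[derive(Default)]
struct RawConnection {
    queries_sent: u32,
}

// Wrapping the raw connection in a Mutex makes the field Sync,
// so a context containing it can still be shared across threads.
#[derive(Default)]
struct SharedDatabaseConnection {
    inner: Mutex<RawConnection>,
}

impl SharedDatabaseConnection {
    fn write_query(&self, query: &str) {
        let mut conn = self.inner.lock().unwrap();
        conn.queries_sent += 1;
        println!("Writing: {}", query);
    }
}

// Same compile-time assertion as for ApplicationContext.
const _: () = {
    const fn assert_send_sync<T: Send + Sync>() {}
    assert_send_sync::<SharedDatabaseConnection>();
};

fn main() {
    let db = SharedDatabaseConnection::default();
    db.write_query("UPDATE table SET value = 'new'");
}
```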
What the Compiler Actually Generates
If we inspect the compiled output with:
cargo asm rust_di_example::main
...
26 │ lea rbx, [rsp, +, 32]
27 │ mov rdx, rbx
28 │ call qword, ptr, [rip, +, _ZN5alloc3fmt6format12format_inner17he42ed4cf3cdc276bE@GOTPCREL]
29 │ movups xmm0, xmmword, ptr, [rsp]
30 │ mov rax, qword, ptr, [rsp, +, 16]
31 │ movups xmmword, ptr, [rsp, +, 48], xmm0
32 │ mov qword, ptr, [rsp, +, 64], rax
33 │ mov qword, ptr, [rsp, +, 32], r14
34 │ mov qword, ptr, [rsp, +, 40], 18
35 │ mov qword, ptr, [rsp], rbx
36 │ mov qword, ptr, [rsp, +, 8], r13
37 │ lea rdi, [rip, +, .Lanon.63c02f0152e6743e61fdeaf76f1d4051.26]
38 │ mov rsi, rsp
39 │ call qword, ptr, [rip, +, _ZN3std2io5stdio6_print17hba8f5eda1e4e495eE@GOTPCREL]
40 │ lea rax, [rip, +, .Lanon.63c02f0152e6743e61fdeaf76f1d4051.27]
41 │ mov qword, ptr, [rsp, +, 32], rax
42 │ mov qword, ptr, [rsp, +, 40], 19
43 │ mov qword, ptr, [rsp], rbx
44 │ mov qword, ptr, [rsp, +, 8], r13
45 │ lea r14, [rsp, +, 48]
46 │ mov qword, ptr, [rsp, +, 16], r14
47 │ lea r15, [rip, +, _ZN60_$LT$alloc..string..String$u20$as$u20$core..fmt..Display$GT$3fmt17h9d11f1d81b352ac8E]
48 │ mov qword, ptr, [rsp, +, 24], r15
49 │ lea rdi, [rip, +, .Lanon.63c02f0152e6743e61fdeaf76f1d4051.7]
50 │ mov rsi, rsp
51 │ call qword, ptr, [rip, +, _ZN3std2io5stdio6_print17hba8f5eda1e4e495eE@GOTPCREL]
52 │ lea rax, [rip, +, .Lanon.63c02f0152e6743e61fdeaf76f1d4051.28]
53 │ mov qword, ptr, [rsp, +, 32], rax
54 │ mov qword, ptr, [rsp, +, 40], 21
55 │ mov qword, ptr, [rsp], rbx
56 │ mov qword, ptr, [rsp, +, 8], r13
57 │ mov qword, ptr, [rsp, +, 16], r14
58 │ mov qword, ptr, [rsp, +, 24], r15
59 │ lea rdi, [rip, +, .Lanon.63c02f0152e6743e61fdeaf76f1d4051.8]
60 │ mov rsi, rsp
61 │ call qword, ptr, [rip, +, _ZN3std2io5stdio6_print17hba8f5eda1e4e495eE@GOTPCREL]
...
We see extremely flat assembly code: a series of calls to _ZN3std2io5stdio6_print17hba8f5eda1e4e495eE@GOTPCREL, which is simply the printing subroutine of the Rust standard library.
There are no runtime dependency resolution mechanisms, no dynamic dispatch, and no container logic.
The generated code mostly contains calls to standard library functions such as printing.
This demonstrates that the abstractions introduced here do not introduce runtime overhead.
Why Use-Traits Matter
At first glance, the use-trait might look like unnecessary indirection.
Why not simply pass ApplicationContext directly to every component?
The reason is crate-level decoupling.
Enterprise applications often grow into multiple crates. Controllers, database access layers, messaging integrations, and domain logic are very often implemented as reusable libraries. For example, a Spring Boot actuator–style module may contain all layers down to the database, provide REST API endpoints, and integrate with a monitoring aggregator service; it acts as a standalone sub-program.
However, if a component directly depends on ApplicationContext, it becomes tied to the executable crate that defines it.
That creates an architectural problem:
- Libraries would depend on the application crate
- The application crate would depend on the libraries
This circular dependency makes reuse impossible.
Use-traits solve this by defining capability-based interfaces.
Instead of depending on the application context, components depend only on the capabilities they require.
Example:
trait UseDatabaseConnection {
    type T: DatabaseConnection;
    fn database_connection(&self) -> &Self::T;
}
A controller does not know anything about the application structure.
It simply requires that the context provides access to a database connection.
impl ReadController {
    fn do_something<C: ControllerContext>(&self, ctx: &C, argument: &str) {
        ctx.database_connection()
            .read_query(format!("SELECT * FROM table WHERE id = '{}'", argument).as_str());
    }
}
Because of this design:
- ReadController can live in its own crate
- The crate only exports traits describing the capabilities it needs
- Any application can use the controller by implementing those traits
The application context becomes an adapter, wiring together independent components.
Application
├── implements UseDatabaseConnection
├── implements UseReadController
└── implements UseWriteController
This pattern enables a powerful architectural property:
Components become fully reusable libraries, while the application remains responsible only for wiring them together.
In other words, use-traits allow dependency injection to cross crate boundaries while preserving Rust’s compile-time guarantees.
Without this indirection, the system collapses into a monolithic application context that cannot be decomposed into reusable modules.
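To make the crate-boundary point concrete, here is a hypothetical second application that reuses the same use-trait with a different backend. MySqlDatabaseConnection and MySqlApplicationContext are invented for this sketch, and the trait definitions are repeated so the snippet stands alone; in practice the traits would live in library crates.

```rust
// Repeated trait definitions so the snippet compiles on its own.
trait DatabaseConnection {
    fn read_query(&self, query: &str);
}

trait UseDatabaseConnection {
    type T: DatabaseConnection;
    fn database_connection(&self) -> &Self::T;
}

// Hypothetical alternative backend, as it might live in its own crate.
#[derive(Default)]
struct MySqlDatabaseConnection {}

impl DatabaseConnection for MySqlDatabaseConnection {
    fn read_query(&self, query: &str) {
        println!("Reading from MySQL DB: {}", query);
    }
}

// A different composition root provides the same capability,
// so library components that require UseDatabaseConnection work unchanged.
#[derive(Default)]
struct MySqlApplicationContext {
    mysql_database_connection: MySqlDatabaseConnection,
}

impl UseDatabaseConnection for MySqlApplicationContext {
    type T = MySqlDatabaseConnection;
    fn database_connection(&self) -> &Self::T {
        &self.mysql_database_connection
    }
}

fn main() {
    let ctx = MySqlApplicationContext::default();
    ctx.database_connection().read_query("SELECT * FROM users");
}
```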
Limitations of This Approach
Although this example demonstrates many useful properties, it is not yet a complete dependency injection system.
The main limitation is that ApplicationContext still has too much knowledge about component internals.
In real DI frameworks, modules often contain many components, initialization logic, and internal dependencies.
For example, consider a Spring Boot module such as Spring Data.
When you add the dependency to your project, it automatically provides:
- database driver integration
- connection pooling
- repository interfaces
- transaction management
- entity scanning
- metrics integration
- health check integration
All of this functionality is assembled automatically by the DI framework.
From the application developer’s perspective, only minimal configuration is required.
Real dependency injection modules therefore consist of entire subgraphs of components, not just individual services.
In our example we intentionally introduced two controllers to demonstrate that even a simple module may contain multiple cooperating components.
A complete dependency injection framework must also manage:
- module composition
- initialization lifecycle
- dependency resolution
- optional components
- multiple implementations
This is where the real challenge begins.
Rust With Dependency Injection
To implement dependency injection in Rust, we will build iteratively. We start from the previous “no DI” approach and gradually close the gap toward a complete DI system.
The good news is that we already have use-traits, and our components are decoupled. We can extract certain code into reusable modules.
What’s missing for a true dependency injection system:
- ApplicationContext still has too much knowledge about the components it uses.
- Some wiring and initialization steps are still manual.
Our goal is to move the wiring into DI modules, giving each component full control over how it is connected.
Because we are still targeting compile-time injection, we cannot rely on runtime reflection (like Java DI frameworks do). Instead, we will push this logic into Rust macros, allowing compile-time wiring while preserving zero-cost abstractions.
The final code for this stage is here: Rust DI Example — DI Without Manual Initialization
1. Registering Components in ApplicationContext
In traditional DI, the application knows which modules it depends on (like Spring Data). But modules themselves should control which components they export.
In our previous example, ApplicationContext was a struct, and registering a component meant adding a field manually. This ties the application to module internals. We need a way to add fields to ApplicationContext automatically, without putting module-specific code into the executable.
We can achieve this using the combine-structs crate, which provides macros to embed multiple structs into one.
https://lib.rs/crates/combine-structs
Each module defines an embeddable struct as a context extension. When imported, ApplicationContext automatically merges all fields from these extensions.
1.1 Context Extension for PostgreSQL
#[allow(dead_code)]
#[derive(Fields)]
struct PostgresDatabaseContextExtension {
    postgres_database_connection: PostgresDatabaseConnection,
}
The Fields derive macro allows this struct to be merged into ApplicationContext.
1.2 Context Extension for Controllers
#[allow(dead_code)]
#[derive(Fields)]
struct ControllerContextExtension {
    read_controller: ReadController,
    write_controller: WriteController,
}
The controller module exports two controllers. More components can be added without touching the main executable.
1.3 Embedding Context Extensions
#[combine_fields(PostgresDatabaseContextExtension, ControllerContextExtension)]
#[derive(Default)]
struct ApplicationContext {}
The combine_fields macro merges all fields from the context extensions. ApplicationContext now has all components automatically wired.
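Conceptually, the expansion is equivalent to the ApplicationContext struct we wrote by hand earlier. This sketch spells out the merged result (field order is illustrative; the exact output of the macro may differ):

```rust
// Conceptual expansion only: combine_fields generates a struct
// equivalent to this hand-written one.
#[derive(Default)]
struct ReadController {}

#[derive(Default)]
struct WriteController {}

#[derive(Default)]
struct PostgresDatabaseConnection {}

#[derive(Default)]
struct ApplicationContext {
    postgres_database_connection: PostgresDatabaseConnection,
    read_controller: ReadController,
    write_controller: WriteController,
}

fn main() {
    let _ctx = ApplicationContext::default();
    // All fields are zero-sized here, so the merged context is too.
    assert_eq!(std::mem::size_of::<ApplicationContext>(), 0);
}
```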
2. Providing Use-Traits
Previously, wiring was done via use-traits. Now that ApplicationContext doesn’t know which components exist, modules must export use-trait implementations via macros.
2.1 Macro for Database Connectivity
macro_rules! inject_postgres_impl {
    () => {
        impl UseDatabaseConnection for ApplicationContext {
            type T = PostgresDatabaseConnection;
            fn database_connection(&self) -> &Self::T {
                &self.postgres_database_connection
            }
        }
    };
}
2.2 Macro for Controllers
macro_rules! inject_controller_impl {
    () => {
        impl UseReadController for ApplicationContext {
            fn read_controller(&self) -> &ReadController {
                &self.read_controller
            }
        }

        impl UseWriteController for ApplicationContext {
            fn write_controller(&self) -> &WriteController {
                &self.write_controller
            }
        }

        impl ControllerContext for ApplicationContext {}
    };
}
2.3 Injecting Components
#[combine_fields(PostgresDatabaseContextExtension, ControllerContextExtension)]
#[derive(Default)]
struct ApplicationContext {}
inject_postgres_impl!();
inject_controller_impl!();
The executable only calls these macros. Components remain isolated from the main application, and the wiring happens automatically.
3. Intermediate Conclusion
At this stage:
- No component code has been changed.
- Modules can add or remove components freely.
- Components are decoupled from each other and from the container.
- Wiring happens automatically through macros and use-traits.
This gives us a bare-minimum dependency injection system: application components are decoupled, wiring is automatic, and no single component needs full knowledge of the application.
4. Limitations
Even though we now have a working DI mechanism, it isn’t fully production-ready:
- Initialization: components may require setup before wiring.
- Lifecycle management: controlling initialization order, cleanup, or optional components can be challenging.
Next, we will explore a Rust DI framework capable of automating component initialization and lifecycle management, moving closer to a complete solution.
Dependency Injection and Initialization Cycle in Rust
So far, we have built a dependency injection (DI) container where all components are stored as fields in ApplicationContext. The next challenge is initializing these components.
The goal is to:
- Enumerate the fields of ApplicationContext.
- Identify which fields require initialization.
- Call an initialization method for each such component.
Since we want everything to happen at compile time, we need a macro to generate a Rust method that calls init() on every tagged component without runtime loops or collections.
I could not find an existing macro for this, so I implemented one myself. If you want the details, check the implementation here: di_macro/src/lib.rs.
(https://github.com/amidukr/rust-dependency-injection-example/blob/main/di_macro/src/lib.rs)
In this article, we will focus on how to use this macro, not how it works internally.
Macro Example: Enumerating Tagged Fields
Full example code: struct_enumerator.rs
1. Define a Struct with Tagged Fields
#[allow(dead_code)]
#[derive(Debug, FieldEnumerator, Default)]
pub struct MyStruct {
    #[tag(init_listener)]
    field_1: i32,

    #[tag(init_listener)]
    #[tag(start_listener)]
    field_1_2: i32,

    field_2: i32,

    #[tag(start_listener)]
    field_3: i32,
}
- FieldEnumerator is our custom derive macro.
- Fields can have one or more tags (init_listener, start_listener).
2. Define a Callback Macro
macro_rules! my_callback {
    ($struct_name:ident, $field_name:ident, $listener_type:ident) => {
        println!(
            "struct = {}, field = {}, type = {}",
            stringify!($struct_name),
            stringify!($field_name),
            stringify!($listener_type),
        )
    };
}
- For every tagged field, the callback macro is called at compile time.
- Arguments passed: struct_name, field_name, and listener_type.
3. Invoke the Field Enumerator
pub fn run() {
    let my_struct = MyStruct::default();
    println!("my_struct = {:?}", my_struct);

    enumerate_tags_MyStruct_init_listener!(my_callback);
    enumerate_tags_MyStruct_start_listener!(my_callback);
}
- enumerate_tags_MyStruct_init_listener! and enumerate_tags_MyStruct_start_listener! are generated automatically by the FieldEnumerator macro.
- The macro expands into a flat sequence of println!() calls.
Macro Example Output:
// enumerate_tags_MyStruct_init_listener!(my_callback);
// my_callback!(MyStruct, field_1, init_listener)
println!("struct = {}, field = {}, type = {}", "MyStruct", "field_1", "init_listener")
// my_callback!(MyStruct, field_1_2, init_listener)
println!("struct = {}, field = {}, type = {}", "MyStruct", "field_1_2", "init_listener")
// enumerate_tags_MyStruct_start_listener!(my_callback)
// my_callback!(MyStruct, field_1_2, start_listener)
println!("struct = {}, field = {}, type = {}", "MyStruct", "field_1_2", "start_listener")
// my_callback!(MyStruct, field_3, start_listener)
println!("struct = {}, field = {}, type = {}", "MyStruct", "field_3", "start_listener")
Notice: No vectors, arrays, loops, or runtime collections — everything happens at compile time.
Rust Dependency Injection with Initialization
We can now use the same macro to enumerate all fields in ApplicationContext and initialize them.
Code reference: di_init.rs
We introduce a Configuration component to demonstrate how initialization can depend on runtime data.
1. Configuration Module
#[derive(Default)]
struct Configuration {
    run_arguments: &'static str,
}

#[allow(dead_code)]
#[derive(Fields, Default)]
struct ConfigurationContextExtension {
    configuration: Configuration,
}

trait UseConfiguration {
    fn configuration(&self) -> &Configuration;
    fn configuration_mut(&mut self) -> &mut Configuration;
}

macro_rules! inject_configuration_impl {
    () => {
        impl UseConfiguration for ApplicationContext {
            fn configuration(&self) -> &Configuration {
                &self.configuration
            }
            fn configuration_mut(&mut self) -> &mut Configuration {
                &mut self.configuration
            }
        }
    };
}
Steps:
- Define the component struct (Configuration).
- Define a context extension for ApplicationContext.
- Define a use-trait (UseConfiguration) for wiring.
- Provide a macro to implement the trait on ApplicationContext.
Note: Configuration is no longer zero-sized; it contains runtime data (run_arguments).
2. Database Connection Initialization
2.1 Update PostgresDatabaseConnection
#[derive(Default)]
struct PostgresDatabaseConnection {
    connection_string: String,
}
- Now contains runtime data.
- Initialization depends on configuration.
2.2 Tag Component for Initialization
#[allow(dead_code)]
#[derive(Fields, ContextExtension)]
struct PostgresDatabaseContextExtension {
    #[tag(init_listener)]
    postgres_database_connection: PostgresDatabaseConnection,
}
- init_listener signals that the component requires initialization.
2.3 Define Initializable Trait
trait Initializable<C> {
fn init(ctx: &mut C);
}
- Components implementing this trait can be initialized automatically.
2.4 Implement Initialization
impl<C: UseConfiguration + UsePostgresDatabaseConnection> Initializable<C>
for PostgresDatabaseConnection
{
fn init(ctx: &mut C) {
println!("Init sequence = {}", ctx.configuration().run_arguments);
ctx.postgres_database_connection_mut().connection_string =
format!("Postgres DB on {}", ctx.configuration().run_arguments);
}
}
- Accesses ApplicationContext mutably, so initialization can read and modify any component in the context.
2.5 Prepare ApplicationContext
#[combine_fields(
ConfigurationContextExtension,
PostgresDatabaseContextExtension,
ControllerContextExtension
)]
#[derive(Default, FieldEnumerator)]
struct ApplicationContext {}
inject_postgres_impl!();
inject_controller_impl!();
inject_configuration_impl!();
- Added FieldEnumerator for tag enumeration.
- Included the Configuration module bindings.
2.6 Initialization Sequence
impl ApplicationContext {
fn init(&mut self) {
fn call_init<T: Initializable<ApplicationContext>, F: Fn(ApplicationContext) -> T>(
ctx: &mut ApplicationContext,
_closure: F,
) {
T::init(ctx);
}
macro_rules! init_callback {
($struct_name:ident, $field_name:ident, $listener_type:ident) => {
call_init(self, |x| x.$field_name);
};
}
enumerate_tags_ApplicationContext_init_listener!(init_callback);
}
}
How it works
- call_init function
  - This helper takes a generic type T that implements Initializable&lt;ApplicationContext&gt;.
  - It also takes a closure _closure of type Fn(ApplicationContext) -&gt; T.
  - The trick: the Rust compiler monomorphizes the closure to the actual type of the field passed in, so T::init(ctx) is called with the concrete type.
- init_callback! macro
  - The macro expands once for each field tagged with init_listener.
  - It calls call_init with the corresponding field from self, ensuring the proper Initializable implementation is invoked.
- enumerate_tags_ApplicationContext_init_listener! macro
  - It iterates over all fields in ApplicationContext that are marked with #[tag(init_listener)].
  - For each field, it invokes init_callback!, which triggers Initializable::init for that specific component.
Key Rust trick
By using the Fn trait and generics in call_init, the compiler resolves the actual type of the field at compile time.
This avoids any runtime type checks and ensures zero-cost initialization while keeping strong type safety.
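The trick can be isolated in a few lines. In this hypothetical, self-contained sketch the closure is never actually called; its return type alone tells the compiler which `Initializable` implementation to monomorphize:

```rust
// Minimal sketch of the closure-driven type-inference trick (hypothetical names).
struct ComponentA;

struct Ctx {
    a: ComponentA,
    log: Vec<String>,
}

trait Initializable<C> {
    fn init(ctx: &mut C);
}

impl Initializable<Ctx> for ComponentA {
    fn init(ctx: &mut Ctx) {
        ctx.log.push("ComponentA initialized".to_string());
    }
}

// `_closure` is never invoked: it exists only so the compiler infers `T`
// from the field's type and dispatches to the matching `Initializable` impl.
fn call_init<T: Initializable<Ctx>, F: Fn(Ctx) -> T>(ctx: &mut Ctx, _closure: F) {
    T::init(ctx);
}

fn run_init() -> Vec<String> {
    let mut ctx = Ctx { a: ComponentA, log: Vec::new() };
    call_init(&mut ctx, |x| x.a); // what init_callback! would expand to for field `a`
    ctx.log
}

fn main() {
    println!("{:?}", run_init());
}
```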
2.7 Running the Application
pub fn run() {
let mut ctx = ApplicationContext::default();
ctx.configuration_mut().run_arguments = "DB_URL=127.0.0.1:5555";
ctx.init();
ctx.read_controller().do_something(&ctx, "argument");
ctx.write_controller().do_something(&ctx, "argument");
}
Sample Output:
Init sequence = DB_URL=127.0.0.1:5555
Reading from Postgres DB on DB_URL=127.0.0.1:5555: SELECT * FROM table WHERE id = 'argument'
Writing into Postgres DB on DB_URL=127.0.0.1:5555: UPDATE table SET value = 'new' WHERE id = 'argument'
- run_arguments successfully propagated into runtime data.
Performance Considerations
In this demo, some structs now hold runtime data — but this is intentional. It’s added to demonstrate initialization, just like in real applications where components manage runtime state.
The wiring mechanism itself remains zero-cost:
- All bindings are resolved at compile time through monomorphization.
Even with the initialization sequence broadcasting multiple init calls, the compiler generates a flat sequence of calls.
No loops, no runtime collections, no dynamic dispatch — everything happens at compile time, efficiently.
Limitations
- The approach now covers wiring, decoupling, and initialization well.
- It does not yet address advanced topics such as polymorphism and more complex runtime behaviors; we explore these next.
Dependency Injection and Polymorphism
This is the final example of the article and introduces what I would consider an advanced topic for the core engine of any dependency injection framework: polymorphism.
Many DI frameworks handle basic dependency wiring well. For example, Java Spring Boot provides a very mature implementation. However, in many other DI implementations, one important capability is often missing — the ability to handle multiple implementations of the same abstraction in a flexible and compile-time-safe way.
Let’s extend our example with a new requirement.
New Requirement
Our application should support multiple message brokers, for example:
- Kafka
- RabbitMQ
After writing data to the database, the controller should publish a message to one or more brokers.
However:
- The component does not know which brokers exist
- The container may contain multiple brokers
- The DI framework must maintain this one-to-many relationship: one component should be able to call many broker implementations without knowing which ones exist.
To make things even more interesting, we introduce the concept of profiles.
Each profile represents a different configuration of the application context.
Example:
Profile1
- PostgreSQL database
- Kafka broker
- RabbitMQ broker
Profile2
- Oracle database
- RabbitMQ broker only
Complete example:
https://github.com/amidukr/rust-dependency-injection-example/blob/main/di_example/src/examples/di_polymorphism.rs
1. Injection Macros and Profiles
First, we slightly modify our injection macros so they accept the application context type as an argument.
macro_rules! inject_configuration_impl {
($ctx:ident) => {
impl UseConfiguration for $ctx {
fn configuration(&self) -> &Configuration {
&self.configuration
}
fn configuration_mut(&mut self) -> &mut Configuration {
&mut self.configuration
}
}
};
}
This change is necessary because:
The DI module does not know which profile will be used.
Each executable can choose a different application context profile, and the macros must work with whichever profile is selected.
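Stripped to its essence, parameterizing the macro by the context type lets one module implement its trait for whichever profile the executable selects. A hypothetical sketch:

```rust
// Hypothetical sketch: one injection macro, two profiles.
trait Greet {
    fn greet(&self) -> String;
}

macro_rules! inject_greet_impl {
    ($ctx:ident) => {
        impl Greet for $ctx {
            fn greet(&self) -> String {
                format!("hello from {}", stringify!($ctx))
            }
        }
    };
}

struct ProfileA;
struct ProfileB;

// The module defining the macro never needs to know these types exist;
// each profile opts in by invoking the macro with its own context type.
inject_greet_impl!(ProfileA);
inject_greet_impl!(ProfileB);

fn main() {
    println!("{}", ProfileA.greet());
    println!("{}", ProfileB.greet());
}
```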
2. Oracle Database Component
Now we introduce a new database implementation.
#[allow(dead_code)]
#[derive(Fields, ContextExtension)]
struct OracleDatabaseContextExtension {}
And the injection macro:
macro_rules! inject_oracle_impl {
($ctx: ident) => {
impl DatabaseConnection for $ctx {
fn read_query(&self, query: &str) {
println!("Reading from Oracle DB: {}", query)
}
fn write_query(&self, query: &str) {
println!("Writing into Oracle DB: {}", query)
}
}
impl UseDatabaseConnection for $ctx {
type T = $ctx;
fn database_connection(&self) -> &Self::T {
self
}
}
};
}
Here we apply a small trick.
Instead of defining a separate struct for the database connection, we implement the trait directly on the application context.
This approach avoids additional boilerplate and works well when we know there will only be one database implementation per profile.
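The "implement the trait directly on the context" trick can be shown in isolation. In this hypothetical sketch, the context is its own database connection, and `database_connection()` simply returns `self`:

```rust
// Hypothetical sketch: the context itself acts as the component.
trait DatabaseConnection {
    fn read_query(&self, query: &str) -> String;
}

trait UseDatabaseConnection {
    type T: DatabaseConnection;
    fn database_connection(&self) -> &Self::T;
}

struct AppCtx;

// No separate connection struct: the trait is implemented on the context.
impl DatabaseConnection for AppCtx {
    fn read_query(&self, query: &str) -> String {
        format!("Oracle: {}", query)
    }
}

impl UseDatabaseConnection for AppCtx {
    type T = AppCtx;
    fn database_connection(&self) -> &Self::T {
        self // the context hands out itself
    }
}

fn run_query(ctx: &impl UseDatabaseConnection) -> String {
    ctx.database_connection().read_query("SELECT 1")
}

fn main() {
    println!("{}", run_query(&AppCtx));
}
```

Consumers still go through `UseDatabaseConnection`, so swapping back to a dedicated struct later would not change their code.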
3. Defining Message Brokers
Now we define the abstraction for message brokers.
3.1 Broker Interface
trait BrokerSender {
fn send_to_broker(&self, value: &str);
}
3.2 RabbitMQ Broker
#[allow(dead_code)]
#[derive(Default, Fields, ContextExtension)]
struct RabbitMqContextExtension {
#[tag(broker)]
rabbit_mq: RabbitMq,
}
#[derive(Default)]
struct RabbitMq;
impl BrokerSender for RabbitMq {
fn send_to_broker(&self, value: &str) {
println!("{} sent to RabbitMq", value);
}
}
Notice the important detail:
#[tag(broker)]
This tag allows the DI framework to enumerate all brokers automatically using the same mechanism we previously used for initialization.
3.3 Kafka Broker
Kafka is implemented in exactly the same way.
#[allow(dead_code)]
#[derive(Default, Fields, ContextExtension)]
struct KafkaContextExtension {
#[tag(broker)]
kafka: Kafka,
}
#[derive(Default)]
struct Kafka;
impl BrokerSender for Kafka {
fn send_to_broker(&self, value: &str) {
println!("{} sent to Kafka", value);
}
}
4. Publisher — Compile-Time Polymorphism
Now comes the most interesting part.
We define a Publisher component that sends messages to all available brokers.
trait Publisher {
fn publish(&self, value: &str);
}
Injection macro:
macro_rules! inject_publisher_impl {
($ctx:ident) => {
impl Publisher for $ctx {
fn publish(&self, value: &str) {
macro_rules! broker_callback {
($struct_name:ident, $field_name:ident, $listener_type:ident) => {
self.$field_name.send_to_broker(value);
};
}
enumerate_tags!($ctx, broker, broker_callback);
}
}
impl UsePublisher for $ctx {
type T = $ctx;
fn publisher(&self) -> &Self::T {
self
}
}
};
}
The key idea:
The publisher does not know which brokers exist.
Instead, the FieldEnumerator macro generates code that calls send_to_broker for each tagged broker.
This gives us:
- one-to-many relationship
- compile-time wiring
- no dynamic dispatch
- no runtime overhead
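For intuition, here is roughly what the enumeration expands to for a context with two tagged brokers. This is a hand-written, hypothetical sketch (the `Vec` exists only so the demo has an observable result; the real generated code would just make the two calls):

```rust
// Hand-written sketch of the code the tag-enumeration macro would generate
// for a context containing two #[tag(broker)] fields.
trait BrokerSender {
    fn send_to_broker(&self, value: &str) -> String;
}

struct Kafka;
struct RabbitMq;

impl BrokerSender for Kafka {
    fn send_to_broker(&self, value: &str) -> String {
        format!("{} sent to Kafka", value)
    }
}

impl BrokerSender for RabbitMq {
    fn send_to_broker(&self, value: &str) -> String {
        format!("{} sent to RabbitMq", value)
    }
}

struct Ctx {
    kafka: Kafka,
    rabbit_mq: RabbitMq,
}

impl Ctx {
    // A flat sequence of statically dispatched calls, one per tagged field.
    fn publish(&self, value: &str) -> Vec<String> {
        vec![
            self.kafka.send_to_broker(value),
            self.rabbit_mq.send_to_broker(value),
        ]
    }
}

fn main() {
    let ctx = Ctx { kafka: Kafka, rabbit_mq: RabbitMq };
    println!("{:?}", ctx.publish("msg"));
}
```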
4.1 Helper Macro for Tag Enumeration
macro_rules! enumerate_tags {
($ctx:ident, $tag:ident, $callback:ident) => {
paste! {
[<enumerate_tags_ $ctx _ $tag >]!($callback)
}
};
}
This macro simply dispatches to the procedural macro generated earlier.
5. Application Profiles
Now we define two different application contexts.
5.1 Profile 1
#[combine_fields(
ConfigurationContextExtension,
PostgresDatabaseContextExtension,
ControllerContextExtension,
PublisherExtension,
RabbitMqContextExtension,
KafkaContextExtension
)]
#[derive(Default, FieldEnumerator)]
struct ApplicationProfile1 {}
Profile1 includes:
- PostgreSQL
- RabbitMQ
- Kafka
5.2 Profile 2
#[combine_fields(
ConfigurationContextExtension,
OracleDatabaseContextExtension,
ControllerContextExtension,
PublisherExtension,
RabbitMqContextExtension
)]
#[derive(Default, FieldEnumerator)]
struct ApplicationProfile2 {}
Profile2 includes:
- Oracle database
- RabbitMQ broker
- no Kafka
6. Initialization Macro for Context
We move the previously used initialization logic into a reusable macro:
macro_rules! application_context {
($ctx: ident) => {
const _: () = {
const fn assert_send_sync<T: Send + Sync>() {}
assert_send_sync::<$ctx>();
};
impl Initializable<$ctx> for $ctx {
fn init(ctx: &mut $ctx) {
fn call_init<T: Initializable<$ctx>, F: Fn($ctx) -> T>(
ctx: &mut $ctx,
_closure: F,
) {
T::init(ctx);
}
macro_rules! init_callback {
($struct_name:ident, $field_name:ident, $listener_type:ident) => {
call_init(ctx, |x| x.$field_name);
};
}
enumerate_tags!($ctx, init_listener, init_callback);
}
}
};
}
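One detail worth highlighting is the `const _: () = { ... }` block at the top of the macro: it is a compile-time assertion, producing no runtime code, that the assembled context is thread-safe. A standalone sketch (the `NotThreadSafe` struct is a hypothetical counter-example):

```rust
use std::rc::Rc;

struct AppCtx {
    name: String,
}

// Compile-time assertion: this block emits no runtime code, but the
// program only compiles if AppCtx is Send + Sync.
const _: () = {
    const fn assert_send_sync<T: Send + Sync>() {}
    assert_send_sync::<AppCtx>();
};

// Putting a field like `handle` below into AppCtx and asserting on it
// would turn the assertion into a compile error, because Rc is neither
// Send nor Sync.
#[allow(dead_code)]
struct NotThreadSafe {
    handle: Rc<String>,
}

fn main() {
    println!("{}", AppCtx { name: "ok".into() }.name);
}
```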
7. Wiring Profiles
7.1 Profile1
application_context!(ApplicationProfile1);
inject_postgres_impl!(ApplicationProfile1);
inject_controller_impl!(ApplicationProfile1);
inject_configuration_impl!(ApplicationProfile1);
inject_publisher_impl!(ApplicationProfile1);
inject_rabbit_mq_impl!(ApplicationProfile1);
inject_kafka_impl!(ApplicationProfile1);
7.2 Profile2
application_context!(ApplicationProfile2);
inject_oracle_impl!(ApplicationProfile2);
inject_controller_impl!(ApplicationProfile2);
inject_configuration_impl!(ApplicationProfile2);
inject_publisher_impl!(ApplicationProfile2);
inject_rabbit_mq_impl!(ApplicationProfile2);
8. Running the Example
fn do_run<T: Initializable<T> + Default + UseConfiguration + ControllerContext>() {
let mut ctx = T::default();
ctx.configuration_mut().run_arguments = "DB_URL=127.0.0.1:5555";
T::init(&mut ctx);
ctx.read_controller().do_something(&ctx, "argument");
ctx.write_controller().do_something(&ctx, "argument");
}
pub fn run() {
println!("Running Profile1");
do_run::<ApplicationProfile1>();
println!();
println!("Running Profile2");
do_run::<ApplicationProfile2>();
}
Example Output
Running Profile1
Configuration = DB_URL=127.0.0.1:5555
PostgresDB connection init sequence = DB_URL=127.0.0.1:5555
Reading from Postgres DB...
Writing into Postgres DB...
WriteController 'argument' sent to RabbitMq
WriteController 'argument' sent to Kafka
Running Profile2
Configuration = DB_URL=127.0.0.1:5555
Reading from Oracle DB...
Writing into Oracle DB...
WriteController 'argument' sent to RabbitMq
Final Result
With this approach we achieved:
- compile-time polymorphism
- one-to-many dependency injection
- profile-based application configuration
- no dynamic dispatch
- no runtime container
- fully monomorphized wiring
Everything is resolved at compile time while still supporting flexible application configurations.
Conclusion: Can Rust Have Zero-Cost Dependency Injection?
Throughout this article we explored whether Dependency Injection can exist in Rust without introducing runtime overhead.
Traditional DI frameworks in languages such as Java rely heavily on reflection, runtime containers, dynamic dispatch, and runtime graph construction. These features make frameworks like Spring Boot extremely flexible, but they also introduce runtime complexity and performance costs.
Rust approaches the problem differently.
Instead of relying on runtime containers, the examples in this article demonstrate how compile-time composition can be used to build a dependency injection system. Using traits, generics, procedural macros, and compile-time code generation, we can construct an application context where:
- component wiring happens at compile time
- dependencies are resolved through traits and generics
- initialization logic can be generated statically
- polymorphism can be implemented without dynamic dispatch
Because Rust performs monomorphization during compilation, every dependency binding is resolved into concrete function calls. This means the final binary contains no reflection, no dynamic lookup tables, and no runtime dependency container.
In other words, dependency injection becomes a compile-time architectural pattern rather than a runtime framework.
We also demonstrated several important features typically expected from mature DI systems:
- modular component composition through context extensions
- controlled initialization sequences
- one-to-many polymorphism for components such as brokers
- configurable application profiles
- and all of this without introducing runtime cost or dynamic dispatch
The result is a system where flexibility and performance are not in conflict.
Rust’s type system and macro system allow us to design architectures that remain fully decoupled, while still producing simple, predictable, zero-cost binaries.
This raises an interesting conclusion.
Rust may never have a DI framework that looks like Spring Boot — and it probably shouldn’t. But Rust does allow dependency injection to exist in a different form, one that embraces the language’s philosophy:
compile-time guarantees, explicit composition, and zero-cost abstractions.
Future Directions
The examples in this article intentionally keep the framework small in order to focus on the core ideas. However, a production-ready system would likely evolve further. For example, initialization often requires explicit ordering between components, where some services must be initialized before others. The current example also contains a fair amount of boilerplate, which could be significantly reduced with a more advanced procedural macro design. Heavier use of derive and attribute macros could also improve IDE code completion and developer ergonomics while keeping the system fully type-safe.
Beyond the core container mechanics, several practical features naturally follow from this model: improved testing support, built-in mechanisms for mocking and stubbing components, and the ability to override components in derived profiles — a common requirement when building test environments or specialized deployments.
Finally, dependency injection frameworks rarely exist in isolation. Systems such as Spring Boot succeeded not only because of their DI container, but because they provided a standard foundation for an ecosystem of reusable modules. A similar approach in Rust could allow libraries to integrate around a shared compile-time DI model, enabling a broader ecosystem of interoperable components while preserving Rust’s philosophy of explicit composition and zero-cost abstractions.
Curiosity about AI technologies led me to use ChatGPT to help improve the clarity and flow of this article. All technical ideas, architectural decisions, and Rust code examples are entirely my own. I am not affiliated with OpenAI, and this mention of ChatGPT is not sponsored, promoted, or intended as advertising — it is simply a note of gratitude for the support it provided in refining my writing.