Implementing A Repository Layer With Inverted Dependency

The Continuing Story Of How Dependency Injection Was Implemented In A Large Brownfield Platform

This is the second journal entry in a series that documents the conversion of a brownfield monolith towards SOLID programming principles. See the original post for the story of our motivation.

The last post covered how to take code written directly against a volatile dependency (database) and break out a Service class that implements an Interface so that the consuming code can work against that abstraction as opposed to the implementation.

This is useful because it creates seams in your code that make it easier (possible) to do unit testing and to substitute new implementations later on.

If you have a good understanding of the previous post, then what comes next will be easier to follow.

This post continues our implementation of “The first six chapters” of the book Dependency Injection – Patterns Principles And Practices by Steven Van Deursen and Mark Seemann. Some of what follows will seem a bit radical but for reasons outlined in the last post, I found I had a high degree of confidence in what that book was telling me.

As I joked to my boss later on, “I did 90% of what the book advised and then for the other 10% I did it my way and the result is I love 90% of the work I've done and hate the other 10%”.

A Refresher...

At the end of the last post, the code related to Company was organized like this in the business (domain) layer project:

[Image: the Company-related files (interfaces and services) in the business layer project]

The Load method in the CompanyRepositoryService class looks like so:

[Image: the Load method of CompanyRepositoryService, which calls BuildClassFromProcResults with CompanySP.GetCompanyInfoSP(companyID)]

The BuildClassFromProcResults method is something we wrote internally to handle the common task of calling a stored procedure and marshalling the return data into a class instance.

The fragment CompanySP.GetCompanyInfoSP(companyID) is a call to a method in our DataAccessLayer project where we define all these stored procs.
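
Since that screenshot doesn't reproduce here, this is a minimal sketch of what a Load method of that shape might look like. CompanySP, GetCompanyInfoSP, and BuildClassFromProcResults come from the text above; the ReaderTools class name, the Company type, and the exact signatures are assumptions for illustration:

```csharp
// Hypothetical sketch only - the helper class, return type, and signatures are assumed.
public class CompanyRepositoryService : ICompanyRepositoryService
{
    public Company Load(int companyID)
    {
        // CompanySP.GetCompanyInfoSP(companyID) builds the stored proc call
        // (defined in the DataAccessLayer project); BuildClassFromProcResults
        // executes it and marshals the returned row(s) into a Company instance.
        return ReaderTools.BuildClassFromProcResults<Company>(
            CompanySP.GetCompanyInfoSP(companyID));
    }
}
```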

The first problem with our solution was that our DataAccessLayer project didn't really represent a layer in the architectural sense. It wasn’t doing much at all except holding stored proc definitions. Your situation is probably different but the point here is that “Projects that are named like Architectural layers” can be deceptive.

Anyway, the Business Layer has a dependency on the DataAccessLayer (the latter must build before the former can build). The Dependency Injection book makes the argument for inverting that dependency to make loose coupling a part of your development process. You can read the authors' arguments for this on pages 48-49 of the book.

Our Reasoning For An Inverted Dependency

For us, the decision to follow the authors' lead was made in spite of our view that we would not likely have use for the options it creates:

We knew that we were going through the entire business layer anyway and decided it would be best to add 10% more work and risk to avoid the hazard of realizing later that we should have gotten fully modern when we had the chance.

When we thought about the testing requirements that we were signing up for by rifling through the critical business layer code, it seemed like any errors introduced by reworking the related database code would be discovered by the manual testing we already had to do anyway.

I'm happy to report that this was a solid assumption on our part. In fact once we got going, the conversions were quite smooth:

Conversions of monster classes typically create between zero and two bugs, which are fairly obvious, quickly discovered by testers or other devs, and easily fixed. No customer-facing blowups have occurred because of DI work as yet.

Creating The SqlDataRepository Project

  • Create a new SqlDataRepository project for the solution.

  • Now move the CompanyRepositoryService.cs file (see the code organization figure above) into the new project.

Remember that the class in this file implements an ICompanyRepositoryService interface. The definition of that interface stays in the business layer project. This moves us towards the goal of the business layer no longer needing an implementation of the ICompanyRepositoryService methods in order to build.
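
For illustration, the interface left behind in the business layer might look something like this (the Load signature and Company type are the assumed ones from the earlier sketch):

```csharp
// Lives in the business (domain) layer project;
// the implementation now lives in the SqlDataRepository project.
public interface ICompanyRepositoryService
{
    Company Load(int companyID); // signature assumed for illustration
}
```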

At this point, we took the opportunity to make a new implementation of the ReaderTools code noted earlier. That code uses DbDataReader internally, and because we have a goal of removing all "database" references from the business layer, we decided to wrap it with a more generic "RepositoryReader" service.

The particulars are outside the scope of this blog, but this snippet shows a piece of the code for illustration:

[Image: a piece of the RepositoryReader wrapper code]

We did something similar with a general ManagedTransaction service to handle SQL transactions. The result is that the business layer only knows there is an IManagedTransaction interface defining the methods of some transaction manager.
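
As a rough idea of the shape these abstractions take, here is a hedged sketch; the member names below are assumptions, not our exact code:

```csharp
// Defined in the business layer; the SQL-specific implementations live in SqlDataRepository.
public interface IRepositoryReader : IDisposable
{
    bool Read();                      // advance to the next record
    T GetValue<T>(string columnName); // typed access without exposing DbDataReader
}

public interface IManagedTransaction : IDisposable
{
    void Commit();
    void Rollback();
}
```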

Your situation and goals will determine how far you want to go with this, but in our case, the interfaces are all defined in the business layer and the implementations all live inside the SqlDataRepository layer. In principle, we could move over to an Oracle database or an Azure data store and the business layer would be indifferent.

Finally, you could define all these repository interfaces in a third project in the hopes of making your business and repository layers build in parallel. Where you land on the tradeoff between solution complexity and build speed (not to mention "coolness") is up to you, but hopefully you can see how this inverted dependency approach creates all sorts of opportunities.

DI Containers

“OK, but once you’ve built the solution, how do you connect the code in the business layer that uses repository interfaces to their implementation?”

Another question you might have had reading part 1 is “How/where do you manage all these dependencies that you are passing into these services you are creating?”

This is where a DI Container comes in. During the .NET Rocks interview with Steven Van Deursen that I mentioned in the introductory post, he advises trying something called "Pure DI" at first. This is the approach we took in part 1: you manage creating all these dependencies by hand and see how it goes until you start to see why you (probably) need a DI container.
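
Pure DI just means composing the object graph yourself at the entry point. Assuming the constructor shapes from part 1 (repository into service, service into checker), it looks something like this:

```csharp
// Pure DI: new up the whole graph by hand, no container involved.
ICompanyRepositoryService repository = new CompanyRepositoryService();
ICompanyService companyService = new CompanyService(repository);
ICompanyCheckerService checkerService = new CompanyCheckerService(companyService);
```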

SimpleInjector

We selected SimpleInjector as our DI Container. Steven is the maintainer of SimpleInjector but the Dependency Injection book also covers several alternatives in detail in the later chapters.

The thing I liked most about SimpleInjector is that while it could be made to do almost anything that competing containers could do, it came with limited out-of-the-box capabilities. This is no accident: SimpleInjector is a tool used by people who’ve “Tried it all” and have a good idea about what works and what gets you into trouble.

For example, Property Injection is not supported by default. You can enable this, but the book will explain why you probably shouldn’t. Other DI Containers will just cheerfully show you how to do things you wouldn’t likely do if you knew more. This gets back into that earlier .NET Rocks interview that so impressed me.

The idea is that the tool pushes the developer to "fall into the pit of success".

If You Decide To Try SimpleInjector...

Grab the SimpleInjector NuGet package and add it to your solution. It will need to be in the projects that instantiate services (likely your business (domain) layer and your UI layer).

[Image: the SimpleInjector NuGet package in the package manager]

Make your SqlRepository project depend on the Business Layer project

  • Right click on your Solution in Solution Explorer and select Common Properties -> Project Dependencies. Select the SqlRepository project and click the checkbox so that it depends on BusinessLayer.

[Image: the Project Dependencies dialog with BusinessLayer checked for the SqlRepository project]

  • Then inside the SqlRepository project add a reference to the business layer project

[Image: adding a reference to the business layer project]

Pro-tip: Configure VS to build all projects.

A Minor Pitfall

By default, VS only builds the startup project and any projects it depends on. Since SqlRepository is neither of these, it will often not get built when you "Build" or even "Rebuild Solution". This can lead to frustrating runtime errors.

Did I mention that with DI you trade compile time errors for runtime errors? This is a result of coding against an abstraction (interface) instead of an implementation. The DI Container doesn't provide the implementation until runtime.

To configure VS to build all projects:

  • Go to Tools > Options > Projects and Solutions

  • Uncheck "Only build startup projects and dependencies on Run"

This should not adversely affect build times because VS already checks whether anything in a project changed before it triggers a build.

We added a post-build step to our Repository project so that it copies the built DLL into the bin folder where the other DLLs go:

[Image: the post-build event command line in the project properties]

For reference, the command is: copy /Y "$(TargetPath)" "$(SolutionDir)\bin\$(TargetFileName)"

Now for the last big setup step: we need to set up our DI Container with registrations for the services we built in Part 1.

Here is an example class including a GetInstance() method that returns an object implementing a specified interface using a DI Container:

[Image: a DI container class exposing a GetInstance() method]

First, notice there is a "container" property backed by a static variable. This class uses a singleton pattern: the container will be set up once by the first consumer to ask for it, and after that the same container is always used.

Notice the get accessor calls lazy.Value. The Lazy<> field just below it ensures that the setup runs only once and only on one thread (Lazy blocks all other threads that make a concurrent call until the setup is complete).

The CreateContainerAndRegisterDependencies() method is where the setup action is. First we instantiate our container, and then call some methods to actually register types. Then we call Verify() which is a facility that SimpleInjector offers to validate your setup.
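
A minimal sketch of that class, assuming SimpleInjector and deferring the actual registrations to the methods discussed next (the class and method names come from the text; everything else is assumed):

```csharp
using System;
using SimpleInjector;

public static class DIContainer
{
    // Lazy<T> guarantees the setup delegate runs exactly once,
    // even if several threads ask for the container at the same time.
    private static readonly Lazy<Container> lazy =
        new Lazy<Container>(CreateContainerAndRegisterDependencies);

    private static Container container => lazy.Value;

    // Consumers ask for an interface and receive the registered implementation.
    public static TService GetInstance<TService>() where TService : class
    {
        return container.GetInstance<TService>();
    }

    private static Container CreateContainerAndRegisterDependencies()
    {
        var newContainer = new Container();
        RegisterDomainDependencies(newContainer);
        RegisterRepositoryDependencies(newContainer);
        newContainer.Verify(); // SimpleInjector validates the whole configuration up front
        return newContainer;
    }

    private static void RegisterDomainDependencies(Container container) { /* shown below */ }
    private static void RegisterRepositoryDependencies(Container container) { /* shown below */ }
}
```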

Now let's take a look inside the RegisterDomainDependencies() method:

[Image: the RegisterDomainDependencies() method]

These lines register (in the DI container) an implementation for each of the ICompanyService and ICompanyCheckerService interfaces. A consumer asking the DIContainer for an ICompanyService will get a CompanyService instance, and a request for ICompanyCheckerService will return a CompanyCheckerService instance.

The Lifestyle specification at the end refers to the length of time the object should live. See the Dependency Injection book or the SimpleInjector documentation for more on this. We use Singleton to indicate that the same instance should always be returned to each caller for maximum performance. Transient could be used to have the container create a new instance for each caller.
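
In SimpleInjector terms, those registrations look roughly like this (a sketch consistent with the description above, not our exact code):

```csharp
private static void RegisterDomainDependencies(Container container)
{
    // Ask for the interface, receive the concrete type; Singleton means one shared instance.
    container.Register<ICompanyService, CompanyService>(Lifestyle.Singleton);
    container.Register<ICompanyCheckerService, CompanyCheckerService>(Lifestyle.Singleton);
}
```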

Next, let's look inside RegisterRepositoryDependencies():

[Image: the RegisterRepositoryDependencies() method loading SqlRepository.dll at runtime]

This one is a bit more involved. Remember how we set the business layer to build without building SqlRepository first? We could do that because we inverted the dependency. But that means that we need to examine the compiled SqlRepository.dll at runtime and find the implementation of our interface.

Your paths may differ, but the example above is consistent with the post-build step we added to copy the DLL into the bin folder.
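
A hedged sketch of the idea (the assembly path follows the post-build step above; the reflection details are assumptions, not our exact code):

```csharp
// Requires System.IO, System.Linq, System.Reflection.
private static void RegisterRepositoryDependencies(Container container)
{
    // The business layer has no compile-time reference to SqlRepository, so we load
    // the compiled assembly from the bin folder and locate the implementation by reflection.
    string assemblyPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "SqlRepository.dll");
    Assembly repositoryAssembly = Assembly.LoadFrom(assemblyPath);

    Type implementation = repositoryAssembly.GetExportedTypes()
        .Single(t => typeof(ICompanyRepositoryService).IsAssignableFrom(t)
                     && t.IsClass && !t.IsAbstract);

    container.Register(typeof(ICompanyRepositoryService), implementation, Lifestyle.Singleton);
}
```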

“This doesn’t look very scalable!”

You may notice the code above has us hand coding various type and implementation pairings. For real world scenarios, you can express logic that helps SimpleInjector perform auto-registration of items in bulk.

The particulars will vary for your application, but on our platform we handled bulk repository registrations by creating a base IRepositoryService interface from which every other repository interface inherits. Then we wrote a LINQ query to get a list of all the derived interface/class pairings and fed that to SimpleInjector all at once.

For example, here is the code we use to handle the Repository registrations in all its intimidating glory:

[Image: the LINQ-based bulk repository registration code]

There are other ways to do this. You can check the documentation for SimpleInjector or the Dependency Injection book when designing your own auto registration code. Chapter 14 covers auto-registration with SimpleInjector.

We chose to use interfaces (instead of a naming convention for services) because the interfaces are code, which gives you compiler checking along with the "referenced by" tooling in VS that lets you trace every service's place in the DI system. We have a base IDomainService for business layer services to distinguish them from repository services.
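
Our exact query isn't reproduced here, but in spirit the bulk registration does something like this (IRepositoryService is the marker interface described above; the path and lifestyle are assumptions):

```csharp
// Pair every interface deriving from IRepositoryService with the concrete class
// in the repository assembly that implements it, then register them in bulk.
Assembly repositoryAssembly = Assembly.LoadFrom(
    Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "SqlRepository.dll"));

var pairings =
    from serviceType in typeof(IRepositoryService).Assembly.GetExportedTypes()
    where serviceType.IsInterface
          && serviceType != typeof(IRepositoryService)
          && typeof(IRepositoryService).IsAssignableFrom(serviceType)
    from implementationType in repositoryAssembly.GetExportedTypes()
    where implementationType.IsClass
          && !implementationType.IsAbstract
          && serviceType.IsAssignableFrom(implementationType)
    select new { serviceType, implementationType };

foreach (var pair in pairings)
    container.Register(pair.serviceType, pair.implementationType, Lifestyle.Singleton);
```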

That AttemptSafeRegistration() method contains code that examines the service being registered and compares it to the Lifestyle requested to see if they match. It's not even close to failsafe, but it can at least stop developers from registering a service with public properties under a Singleton lifestyle (which wouldn't play well with concurrent usage).

Payday finally comes:

Finally, we can see an example of how the DI Container provides us, at runtime, with an implementation of the interface we coded against.

Here is a snippet of how you get the CompanyRepositoryService by specifying an ICompanyRepositoryService at runtime:

[Image: resolving an ICompanyRepositoryService from the container]

Of course the only reason to have one of these around in our example is to make a CompanyService and the reason that exists is to make a CompanyCheckerService. Thanks to the DI Container, we can get this all done in one line:

[Image: resolving a CompanyCheckerService in a single line]

That one-liner is considerably more interesting! The DIContainer didn't just make a CompanyCheckerService for you. It saw that the lone constructor for CompanyCheckerService required a CompanyService, so it made one of those. Then it made a CompanyRepositoryService, since one of those is required to construct the CompanyService.

So the DI Container discovers and manages all these mappings for you. This is how a few lines of code can create complex objects at runtime while the code stays testable: the lone constructors mentioned earlier all take interfaces, which can be mocked at test time.
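
In code, the two resolutions above amount to something like this (the DIContainer.GetInstance name comes from the container class sketched earlier):

```csharp
// Resolving just the repository implementation:
ICompanyRepositoryService repository = DIContainer.GetInstance<ICompanyRepositoryService>();

// The one-liner: the container walks the constructor chain
// (CompanyCheckerService -> CompanyService -> CompanyRepositoryService)
// and builds the whole object graph for you.
ICompanyCheckerService checkerService = DIContainer.GetInstance<ICompanyCheckerService>();
```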

A tortured word about Service Locator

The authors of the Dependency Injection book have a lot to say about the Service Locator Pattern which they consider to be an Anti-Pattern. I will give my “Real World Brownfield” opinion on the matter.

The first and ultimate truth for us as a brownfield platform is that "Service Locator is already everywhere". Converting portions of the platform to DI makes this more obvious, but on its own it neither fixes the problem nor makes it worse.

The second problem we have is that we happen to be a classic ASP.NET WebForms based platform, and WebForms was clearly not designed with DI in mind. The first edition of the Dependency Injection book has a chapter about applying the concept of Composition Root to a WebForms application, and it's almost comically difficult to implement. We would never put that much effort into hundreds of Web Forms when that effort could instead go into converting the web forms into something more modern.

The Dependency Injection Book makes additional arguments about Service Locator leading to code that hides its dependencies. This can be especially troublesome if you are creating a redistributable library since your users have no way of knowing what services are required when they use it.

Our Approach

We are building a platform where all devs have access to all code, so hidden dependencies are more of an annoyance than a devastating problem. Furthermore, devs can (and do) make an effort to push their service locator instances up the stack towards the main entry point of the code. In this way, things slowly evolve in a better direction until we move on to a more modern request handler than WebForms.

In the end, it's important to remember that implementing DI in a brownfield application doesn't create the Service Locator so much as reveal it.

Stumbling point – Unit Testing

Since the purpose of the DI Container is to supply your application with development or production versions of volatile dependencies, you probably don't want container code running during your unit tests and building those very dependencies. This can easily happen in a platform full of legacy code, where team members are already taxed trying to develop good test-writing skills as they rework legacy code to make it testable.

We briefly tried out the idea of having the DI Container return "default testing mocks" when this situation came up. Even as we created that system, it felt like the real answer was just to get better at method and test writing so that we didn't need to do this. Indeed, that bit of Lazy<> code above was first developed to deal with the multithreaded nature of the test runner in Visual Studio.

Our "Probably Wrong Turn" With Default Testing Dependencies

We used the Moq library and reflection-based code to mock up every interface for which the DI Container would register a production implementation, and then used Moq again to define default return values for various common return types.

The resulting tests didn't run dreadfully slowly, but you could feel the extra 20% time lag by the time we hit 500 tests, and we knew we didn't want to pay that penalty at 5,000 tests. It also made the tests harder to read and reason about, since they counted on secret behind-the-scenes behavior that developers had to keep in mind.

Finally, when we looked into removing the default testing mocks, we found that only a few of our tests relied on them, and by then we knew how to rewrite them into better tests anyway.

So, I would suggest you don't use the DI Container to manage the creation of default testing mocks. As always, your mileage may vary. Perhaps you have mission-critical monster methods that you need to wrap in big, ugly, acceptance-style unit tests before you feel brave enough to start refactoring them into something more manageable. Just be sure you have a good reason before you do it, is my advice.
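
For contrast, the approach we ended up preferring is for each test to mock exactly what it needs. A sketch with Moq (the constructor and Load signatures are the assumed ones from the earlier examples, and the test framework shown is arbitrary):

```csharp
using Moq;
using Xunit;

public class CompanyCheckerServiceTests
{
    [Fact]
    public void Checker_uses_the_repository_it_is_given()
    {
        // The test owns its own mock instead of relying on container-supplied defaults.
        var repository = new Mock<ICompanyRepositoryService>();
        repository.Setup(r => r.Load(42)).Returns(new Company());

        var companyService = new CompanyService(repository.Object);
        var checkerService = new CompanyCheckerService(companyService);

        // ...exercise checkerService and assert on its behaviour here...
    }
}
```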

Test Environment Detection

In any event, you will probably want to have some code in your DI Container setup to detect that you are being asked to provide dependencies in a test environment.

A snippet from our platform code is:

[Image: the test environment detection code in the container setup]

The worker process name for a test environment will differ depending on what you use. The first two constants refer to the VS Test Runner and the last refers to a command line test runner that our Continuous Integration server uses for running tests.

In our case, we throw an exception when running under a test environment since unit tests should be mocking their own dependencies.
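
A hedged sketch of that check; the process names below are placeholders, since the real ones depend on which test runners you use:

```csharp
// Requires System, System.Diagnostics, System.Linq.
// The constants are illustrative; substitute the worker process names of your own test runners.
private static readonly string[] TestRunnerProcessNames =
    { "testhost", "vstest.executionengine", "nunit3-console" };

private static bool IsRunningUnderTestRunner()
{
    string processName = Process.GetCurrentProcess().ProcessName;
    return TestRunnerProcessNames.Any(name =>
        processName.IndexOf(name, StringComparison.OrdinalIgnoreCase) >= 0);
}

// Inside the container setup:
// if (IsRunningUnderTestRunner())
//     throw new InvalidOperationException("Unit tests should mock their own dependencies.");
```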

Conclusion

Between this and the previous post, we have a big picture overview of how to convert the business (domain) and repository layers of a brownfield application to feature Dependency Injection for the handling of dependencies like Database access.

The resulting repository service is a true architectural layer, and the traditional dependence of the domain layer upon it is inverted: the repository layer now depends on the domain layer.

This state of affairs gives the platform new code seams that make unit testing practicable and allows developers to substitute new dependency implementations with little to no change required in consumer code.
