Automated Unit Tests, or AUT, are a concept that most developers do not initially see as beneficial. When I was introduced to AUT, my reaction was, “I’m going to write buggy code, to test my buggy code.” It takes time to see the true benefit of AUT. As time has gone on, I've become a huge proponent of AUT.
The true purpose of AUT is to allow the developer to be sure the code behaves as expected.
Burn that sentence into your head because it will be the theme of everything here.
On the surface, it sounds like I just said the same thing about buggy code testing buggy code. But that is not really what is happening. I am able to test what happens inside my code. I am able to go through different scenarios in milliseconds. I am able to verify success cases, failure cases, edge cases, and exception paths without waiting on a database, a file system, another service, or human interaction. That is where the value starts to show up.
A few years ago, a friend reached out to me saying that he needed to write AUT for his coding assessment, but he did not fully understand the true purpose of AUT. “Why do I need to write code to show that 2 == 2?” I’m a huge proponent of AUT, and my friend did not see the value. We went back and forth over some of the challenges that come with testing. Some of my questions were:
How do you test your classes based on what is returned from a third party service? What if that service is down?
How do you simulate exceptions?
How do you test interacting with the System namespace?
When do you test?
What if you are maintaining code that you did not write?
I got some rather lame answers, like, “It doesn’t make sense to test for a service being down, what are the chances it is going to be down?” or “Why test exceptions? I’d just write a catch for it so it doesn’t get thrown to the end user.” If you have ever worked in a distributed environment, you know systems are unavailable from time to time and it is out of your control. Networks fail. DNS changes go uncommunicated. Tokens expire or APIs throttle. Databases go offline or move. File shares disappear. Cats and dogs living together!!! Somebody rotates a secret and forgets to tell you. I am in awe that it works at all sometimes. But it was clear that an example was needed.
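These failure modes are exactly what test doubles let you simulate. This post is .NET-flavored, but here is a minimal Python sketch of the idea; the `get_price` call and the `ServiceUnavailableError` type are made up for illustration. A mock raises the same error a dead service would, and the test proves the calling code degrades gracefully, with no network involved.

```python
from unittest.mock import Mock

class ServiceUnavailableError(Exception):
    """Stand-in for whatever your HTTP client raises on an outage."""

def get_price_or_default(client, sku, default=0.0):
    """Return the live price, or a safe default if the service is down."""
    try:
        return client.get_price(sku)
    except ServiceUnavailableError:
        return default

# Simulate the outage: no network, no real service, just a mock that raises.
dead_client = Mock()
dead_client.get_price.side_effect = ServiceUnavailableError("503")

assert get_price_or_default(dead_client, "SKU-1", default=9.99) == 9.99
```

The “what are the chances it is going to be down?” question never comes up, because the test does not need the service to actually go down.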
What are Automated Unit Tests?
You do not test a bridge by driving a single car over it right down the middle on a clear, calm day. You do extreme things. You add load. You check wind conditions. You make sure the supports are deep enough. You make sure parts do not shift when the weather changes, especially where there are freezing temperatures. You verify those little reflectors don't pop out of the road when hit by a truck. You look for the weak points before the public uses the bridge.
That is what unit testing is doing for your code. You are not proving your code works in one happy path under perfect conditions. You are deliberately looking for the places it can bend, crack, or do something you did not intend.
What I’m highlighting are edge cases. Edge cases are the conditions that sit at the extremes of what your code is supposed to handle. They are the places most likely to show breakdowns, different behavior, and exceptions in your code. You are simulating stress on your solution, much like you were simulating stress on your bridge.
A unit test is meant to exercise a small piece of code, avoid external infrastructure, and run fast enough that developers can execute it frequently. These unit tests shouldn't depend on databases, file systems, or network resources. Fowler similarly describes unit tests as small in scope and fast enough to run constantly while coding.
These tests should be automated, quick, repeatable, and consistent. You are testing the smallest practical path in your codebase. It may be an orchestrating method. It could be a validation rule. It may be a helper method. The point is that you can go through a single path and hit the cases you need to hit.
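To make “smallest practical path” concrete, here is a tiny Python sketch of a validation rule (the rule itself is invented for illustration). One function, no dependencies, and the asserts walk the happy path, the edge cases, and the failure cases in milliseconds.

```python
def is_valid_username(name):
    """A small validation rule: 3-20 chars, letters/digits/underscore only."""
    return (isinstance(name, str)
            and 3 <= len(name) <= 20
            and all(c.isalnum() or c == "_" for c in name))

# Happy path, edge cases, and failure cases all run in milliseconds.
assert is_valid_username("sam_42")
assert not is_valid_username("ab")          # too short
assert not is_valid_username("a" * 21)      # too long
assert not is_valid_username("bad name!")   # illegal characters
assert not is_valid_username(None)          # wrong type entirely
```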
It also has a great side effect. Writing unit testable code tends to push you toward better design. If a class is miserable to test, that is often a design smell. Maybe it has too many responsibilities. Maybe it reaches into too many dependencies. Maybe it hides logic behind framework calls or static state. Testing has a way of shining a light on that.
What good unit tests look like
Automated
Automated Unit Tests are automated - it is right there in the name. They require no manual intervention, no setup gymnastics, no babysitting, and no clicking around a UI. Kick off your tests and get a result.
That matters because the real value is not just writing the test once. The real value is rerunning it every time you make a change. A test that needs a human to prepare the environment is already losing its value.
Quick
A good unit test should execute in milliseconds. There's no network traffic, no real database work, no disk operations, and no waiting on external systems. I can be cut off from the VPN, the LAN, and the internet and still execute my tests. Good unit tests are fast, isolated, and repeatable, and Fowler makes the same case for keeping them small and fast enough to run constantly during development.
That speed changes developer behavior. If your tests run in milliseconds, you will actually use them. If they take twenty minutes, your team will start asking questions about whether you really need to run them. That is where quality and value start slipping.
Repeatable
Because unit tests do not interact with unstable outside systems, they become repeatable. Data does not need to be manually configured. A network does not need to be available. The same test can run over and over and produce the same result.
That is a massive benefit during refactoring. When the result changes, you know it is because something changed in the code or the test, not because the planets aren't in line or a shared environment had a bad morning.
Consistent
Consistency is what separates a useful test suite from a noisy one. A flaky test that passes on one run and fails on the next without any meaningful change is not a safety net. It is background noise. Non-deterministic tests become effectively useless because teams stop trusting failures once they become unreliable.
That is why isolating system behavior matters so much. If your code depends on the current date, abstract the clock. If it depends on file access, wrap the file system. If it depends on an external service, introduce an interface. Then your tests can simulate exactly what you need, and they can do it the same way every time.
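Abstracting the clock is the classic version of this. Here is a minimal Python sketch (the class and function names are mine, not from any particular library): production code takes a clock it can ask for the time, and the test hands it a fixed clock, so the result can never drift with the real date.

```python
from datetime import datetime, timezone

class SystemClock:
    """Production clock: the one place that touches the real system time."""
    def now(self):
        return datetime.now(timezone.utc)

class FixedClock:
    """Test clock: always returns the instant you hand it."""
    def __init__(self, instant):
        self._instant = instant
    def now(self):
        return self._instant

def is_expired(token_expiry, clock):
    """The code under test depends on a clock abstraction, not on 'now'."""
    return clock.now() >= token_expiry

# The test controls time completely, so it passes the same way every run.
expiry = datetime(2024, 1, 1, tzinfo=timezone.utc)
before = FixedClock(datetime(2023, 12, 31, tzinfo=timezone.utc))
after = FixedClock(datetime(2024, 1, 2, tzinfo=timezone.utc))
assert not is_expired(expiry, before)
assert is_expired(expiry, after)
```

The same shape works for the file system and external services: wrap the dependency behind an interface, and let the test supply a predictable implementation.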
This allows red flags to be raised immediately when a test fails. I can't tell you how many times I've seen teams write poor tests, watch them fail, and just continue on because failure is expected.
Why we use Automated Testing
Unit tests make sure the developer understands the behavior of the code. In a good codebase, the business rules are not just buried in the implementation. They are also reflected in tests that sit side by side with the code and explain what is expected to happen.
The first time I really saw value in an Automated Unit Test was when I was writing validation logic. A client would pass in details to the back end that needed to be verified based on existing information in the database. If the submission was valid, process the update. If not, return an error and do not update the database.
I wrote the code and it worked. Then I went back and refactored the conditionals. In the process, I introduced a bug that always updated the database, even when the request was invalid. Had I had unit tests verifying that invalid requests never call the update method, I would have caught it immediately.
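The test that would have saved me looks something like this. It is a Python sketch, and the `process_submission` shape is a simplification of what I was building, but the key assertion is real: when the request is invalid, the update method must never be called.

```python
from unittest.mock import Mock

def process_submission(submission, validator, repository):
    """Update the database only when the submission is valid."""
    if not validator.is_valid(submission):
        return "error"
    repository.update(submission)
    return "ok"

# Invalid request: the repository's update must never be called.
validator = Mock()
validator.is_valid.return_value = False
repository = Mock()

assert process_submission({"id": 1}, validator, repository) == "error"
repository.update.assert_not_called()

# Valid request: the update must happen exactly once.
validator.is_valid.return_value = True
assert process_submission({"id": 1}, validator, repository) == "ok"
repository.update.assert_called_once()
```

My refactored conditional would have tripped `assert_not_called` the moment I ran the suite, instead of shipping a bug that always wrote to the database.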
That is a big part of the real value. Tests are there to catch the moment you accidentally break behavior that used to work. Regression protection is one of the major benefits of unit tests, and it will help you when you come back to a module, or when someone new starts tinkering around.
They also serve as living documentation. A well named test tells the next developer, including future you, what the code is supposed to do. Sometimes reading a good test is faster than tracing through production code. If you ever get the chance, check out the names of some of the methods for my unit tests.
When do we execute our tests?
Run your unit tests all the time. The sooner you get feedback, the sooner you can correct the problem. Unit tests should be part of your normal workflow, not a special event. One day, I'm going to write about continuous testing in dotCover.
During development
As you build a feature, run your tests. Fowler’s guidance on unit tests is very direct here. Fast tests are valuable because they can be run constantly while programming, often after every meaningful change.
The faster the feedback, the easier it is to locate the defect. If I break something and find out thirty seconds later, I know roughly where to look because it is fresh in my mind. I'm not going to remember what I did two weeks ago, or even yesterday.
During refactoring
If you are making changes to the codebase, you need a quick way to verify that you did not break behavior. That is where a good test suite is valuable. And don't act like you're going back to write unit tests for large chunks of code after it is in production.
Refactoring without tests is like a surgeon operating without vital-sign monitors. They may make all the right moves, but they have no idea if they broke something in the process.
In Continuous Integration
The whole point of Continuous Integration is fast feedback. When code is pushed, you want an automated compilation and a test run telling you whether the application still behaves as expected. The commit suite that runs as part of CI commonly includes all unit tests, because their speed and scope make them ideal for that layer of feedback. Keep in mind, these are not integration tests; those are slower and serve a different purpose than what you're trying to do here. Because unit tests are cheap and fast, they belong in CI. They catch regressions while the change is still fresh in the engineer’s mind.
Misconceptions about Unit Tests
I grow frustrated when people get the wrong idea of unit tests. Unit tests are a tool, and not a silver bullet. I've found myself fighting some of the same battles over and over.
1. Unit tests result in bug free code
You're still going to have bugs in your code. Unit tests reduce risk. They increase confidence. They catch regressions. They absolutely do not guarantee bug free software. A unit test only tells you whether a specific behavior matched a specific expectation under a specific condition. It is a big reason why you need many tests.
We still need integration tests, system tests, exploratory testing, and plain old human judgment.
2. Unit tests are difficult to maintain
Bad unit tests are difficult to maintain. Good unit tests are usually a reflection of good design.
When production code follows sane design principles, especially explicit dependencies and separation of concerns, the tests are easier to write and easier to keep. Microsoft’s ASP.NET Core testing guidance leans on dependency injection and explicit dependencies specifically because those patterns make code testable.
Honestly, if you follow the SOLID Principles, AUT is really easy.
3. AUT is the same thing as Test Driven Development
This statement can make me red in the face. TDD is a development practice. Unit testing is a testing technique. They overlap, but they are not the same thing. Fowler has also written about how people often confuse self testing code with TDD, even though TDD is only one path to getting there.
You can write good unit tests without following a strict red, green, refactor cycle. From my experience, there aren't many shops following TDD as designed. You can tell when someone is doing TDD because their code looks a bit different.
4. More tests means more quality
Garbage tests are a waste of time. Garbage tests are often a result of a misunderstanding of what unit tests are supposed to do, or a need to fill a vanity metric.
A hundred fragile, shallow, badly named tests do not make a codebase healthy. They make it noisy. Quality comes from meaningful tests that verify behavior people actually care about.
I fully support a mandate around Unit Test Code Coverage. Coverage is useful as a signal, but it is not the goal. You can hit a coverage number and still miss the real business rules, but you cannot have a good test suite with a low coverage percentage.
5. Unit tests eliminate the need for manual testing
They do not.
Manual testing, especially exploratory testing, still matters. Unit tests are excellent at fast, repeatable checks of expected behavior. They are not great at discovering confusing workflows, odd usability problems, or the kinds of real world chaos people create the minute your software lands in front of them.
Common challenges
1. Writing testable code
This is usually the first real hurdle. If your code reaches directly into the current time, the file system, static state, configuration, HTTP calls, and database access all in one method, testing it is going to hurt.
That pain is often telling you something useful about the design. Take a step back and redesign your code.
2. Legacy code
Legacy code often means tightly coupled code with very few seams. That makes introducing tests harder, but also more valuable.
You may not be able to drop in perfect unit coverage on day one. Sometimes the first step is characterization testing, writing tests around current behavior so you can make changes without guessing.
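A characterization test can be sketched in a few lines. Suppose you inherit a discount calculation nobody fully understands (this `legacy_discount` function is invented for illustration). You do not assert what the code *should* do; you pin down what it does today, so any refactor that changes an answer gets flagged.

```python
def legacy_discount(total):
    """Imagine this is inherited code nobody fully understands."""
    if total > 100:
        return total * 0.9
    if total > 50:
        return total * 0.95
    return total

# Characterization tests: record today's behavior, boundaries included,
# so a refactor that changes any of these answers fails loudly.
assert legacy_discount(200) == 180.0
assert legacy_discount(100) == 95.0   # boundary: falls in the > 50 branch
assert legacy_discount(50) == 50
assert legacy_discount(0) == 0
```

Notice the test at 100: writing it forces you to learn that 100 does not get the big discount, which is exactly the kind of buried rule these tests surface.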
3. Mocking and dependency injection
There is a learning curve here. Developers who are new to interfaces, dependency injection, and test doubles often feel like this is extra ceremony with no benefit.
In practice, these design patterns let you replace unstable collaborators with predictable ones. That is exactly why you need to isolate your unit tests. Dependency injection allows swapping implementations for testing, including mocked services in controller tests.
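Here is the smallest version of that idea I can sketch, in Python with invented names. The service takes its collaborator through the constructor, so the test can hand it a fake that records messages instead of sending them.

```python
class EmailSender:
    """Production implementation would talk to SMTP; omitted here."""
    def send(self, to, body):
        raise NotImplementedError("real network call lives here")

class FakeEmailSender:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class WelcomeService:
    def __init__(self, sender):
        # The dependency is injected, not constructed internally.
        self._sender = sender

    def welcome(self, user_email):
        self._sender.send(user_email, "Welcome aboard!")

# Swap the unstable collaborator for a predictable one.
fake = FakeEmailSender()
WelcomeService(fake).welcome("dev@example.com")
assert fake.sent == [("dev@example.com", "Welcome aboard!")]
```

The “ceremony” pays off the first time you need to verify behavior without an SMTP server, and the same constructor slot is where the real sender goes in production.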
4. Balance
How many tests do we write? How many assertions in one method are too many? What do we verify?
Those are real questions, and there is no magic number. I would rather have one sharp test that verifies a meaningful rule than five vague tests that mostly restate the code.
5. Continuous Integration
Not every shop is doing Continuous Integration, and it can be hard to bring to the team if your build process is convoluted. It can be hard to introduce testing into an existing CI process, especially if teams are already used to slower, unstable test suites. That said, this is exactly where unit tests shine, because they are the cheapest automated feedback you can add to the pipeline.
6. Skill gap
Effective unit testing requires shared understanding. The team needs to agree on what a unit test is, what belongs in one, what does not, and what “good” looks like. Otherwise, one person writes isolated tests, another person writes mini integration tests and calls them unit tests, and then everybody argues about testing while the code rots.
Getting everyone on the same page is the biggest challenge.
7. Knowing what to test
What if you're coding and you don't have the true requirements defined yet? Sometimes the logic is unclear. Sometimes the code is unclear. Sometimes product doesn't even know what is supposed to happen.
Another hidden benefit of testing brought to light: writing tests forces clarity, and engineers have to ask questions. “What is this thing actually supposed to do?” If you can't get answers, you're in bad shape.
Hands On is the way to be
Look, I’ve been writing unit tests for over 15 years. I’ve helped others start writing unit tests. I’ve struggled, and I’ve seen others struggle to write them too. It isn’t about 2 == 2. It’s about giving the developer confidence that the code behaves as expected. You’re testing happy paths, failures, edge cases, and exception handling in milliseconds. You’re building confidence in your codebase.
Focus on the most concerning areas first. You’re going to have to isolate dependencies, and if you’ve thrown things together quickly, it will take time to get it right. Go incrementally, and as you do, treat your tests as documentation for what you’ve built.
In an upcoming post, I’m going to walk through some code samples and patterns that I’ve used. I’ll cover testing exceptions, code coverage, mocking, and all the good stuff. Hands-on is the way to go.