Lars Richter

Is testability a reason to change your design?

The following discussion is something I experience on a regular basis:

Person A: Why do you extract an interface for this? Can't you just make a normal class for this?
Person B: Of course I could just make a class. But it will be hard to test, because I cannot replace the implementation easily.
Person A: So you're changing your design just for testability? Why would you do that? It would still be possible to test it if you implemented it as a simple class. Maybe it's simpler with the abstraction, but you're writing more code.
Person B: Isn't testability a good enough reason to make that design choice?
Person A: I don't know. I think I would not do it. It's just testability, you know?

Almost every time, I'm "Person B". For me, the testability of a feature or application is a big win. And if I can improve testability by adding a level of abstraction, that's a good thing.
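To make that concrete, here is a minimal sketch of the kind of abstraction I mean (TypeScript for illustration; all names here are hypothetical, not from any real project):

```typescript
// Without the interface, ReportService would construct SmtpMailSender
// itself, and every test would try to send real mail.
interface MailSender {
  send(to: string, body: string): Promise<void>;
}

class SmtpMailSender implements MailSender {
  async send(to: string, body: string): Promise<void> {
    // the real SMTP call lives here
  }
}

class ReportService {
  constructor(private readonly mail: MailSender) {}

  async sendReport(to: string): Promise<void> {
    await this.mail.send(to, "monthly report");
  }
}

// In a test, a hand-rolled fake replaces the implementation.
class FakeMailSender implements MailSender {
  sent: Array<{ to: string; body: string }> = [];
  async send(to: string, body: string): Promise<void> {
    this.sent.push({ to, body });
  }
}

(async () => {
  const fake = new FakeMailSender();
  await new ReportService(fake).sendReport("a@example.com");
  console.assert(fake.sent.length === 1, "exactly one mail sent");
})();
```

The production code only gains one small indirection, but the test no longer needs a mail server at all.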

Of course, there are times when testability isn't that important. For "toy projects" or small pieces of code, it might not be relevant. But in most cases I'm working on complex projects with a large amount of legacy code. In those projects, in my opinion, testability is pretty valuable.

I would love to hear your opinion on that topic. Would you change your design just to achieve/improve testability?

Top comments (33)

craser

Code design is always a game of trade-offs between competing concerns. But yes, testing/testability is an important consideration. It may help you and Person A find common ground if you're clearer about what concrete benefits you hope to achieve by testing. It's easy to write off testing as "just testing", but it's harder to dismiss the usefulness of those tests in supporting refactoring efforts, preventing regressions, etc.

I love Sarah Mei's note on Five Factor Testing. It breaks down the goals of testing, and offers some great insight into how to write better tests and better code if you're clear about which of those goals is most important to you.

Eljay-Adobe

Thanks for the link to Sarah Mei's article on Five Factor Testing.

The one thing I'd change a little bit in Sarah's article is where she talks about integration tests. I don't think developers should write any integration tests (or system tests, functional tests, acceptance tests, performance tests, and...); that is what the quality engineers should create. Otherwise, the chance that an integration test has a "blind spot" matching the implementation's own "blind spot" approaches unity.

On a previous project, the unit test suite (~70% code coverage) took about a second to run. Unit tests are what the developers should create. They are the proof of basic correctness, provide design-in-the-small guidance, act as a refactoring safety net, catch regressions that violate basic correctness, and document functionality via code.

Unit tests make sure the nut passes all its requirements and the bolt passes all its requirements, but they say nothing about the nut and the bolt working together. Effectively, unit tests fill the gap for languages that do not provide facilities for design-by-contract -- which is (unfortunately) most of them.

On that same project, the integration test suite took over 600 hours to run. The integration test suite answered the question: "when you put this nut and this bolt together, do they work together correctly?"

Integration tests (and system tests, and acceptance tests, and performance tests) serve a very different purpose than unit tests.
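To make the nut-and-bolt distinction concrete, here is a toy sketch (hypothetical functions, TypeScript just for illustration):

```typescript
// "The nut": parse a raw price string.
function parsePrice(raw: string): number {
  const value = Number(raw);
  if (Number.isNaN(value)) throw new Error(`bad price: ${raw}`);
  return value;
}

// "The bolt": apply a percentage discount.
function applyDiscount(price: number, pct: number): number {
  return price * (1 - pct / 100);
}

// Unit tests: each part against its own requirements.
console.assert(parsePrice("10.00") === 10);
console.assert(applyDiscount(100, 25) === 75);

// Integration test: do the parts work *together*?
console.assert(applyDiscount(parsePrice("10.00"), 50) === 5);
```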

J.B. Rainsberger has a good presentation, Integrated Tests Are A Scam, where he argues passionately that integration tests are no substitute for unit tests. I think the title is intentionally a bit inflammatory, to pique curiosity.

Also, for Behavior-Driven Development (BDD) style stories that are written such that they can be executed -- for example, using Cucumber as the story executor and Gherkin as the story language -- those should be written by the product owner, perhaps with the assistance of the business analysts. If they are being written by testers or by developers, it's being done wrong.

craser

All excellent points. For a large, fully functional development organization, I wholeheartedly agree with everything you've pointed out.

I think Mei and Lars (the original poster) are in similar situations, in that they are either working on small teams where roles blur, or with company/team cultures that don't fully value automated testing. In those circumstances, it's a victory just to have automated unit and integration tests, regardless of who's writing them. As they say, "Perfect is the enemy of good."

Lars Richter

Hey Chris,

Thanks a lot for the tip with the "Five Factor Testing". The article is very good and pretty insightful.
Valuable stuff.
Thanks.

Vinay Pai

It's always hard to answer these questions in general. Obviously you need to weigh how complicated the code is, how long it's likely to survive, how often it's going to get worked on, and how much extra work is needed to make it testable.

That said, if your code is hard to test, that is often because it's poorly structured to begin with. Patterns like loose coupling and well-designed interfaces are usually pretty well correlated with testability.

Lars Richter

Thanks for your feedback, Vinay.
It's true that you should take all these things into consideration. As I wrote in the post, everyone should determine whether testability is an important goal for the particular project. Is it just a toy project or a throw-away project? Then why would you care about testing? For me, testability is pretty important in complex systems. In those projects, tests are an important safety net.

Jan van Brügge

For me, it's a 100% yes for testability -- but not for actually writing tests; that's just a nice bonus. Ask yourself: why does this change make the code more testable? Most of the time the answer is: because I can inject/mock side effects, e.g. a database call. This means your design change separates the logic from the side effects, and testability is just a result of that.
I am one of the maintainers of Cycle.js, and we design our code to be testable and visualizable. This naturally leads to side effects that are clearly separated from the app logic, with the app logic being a pure function. As we all know, pure functions are far easier to test than side-effectful functions, so our architecture results in testable code.
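A generic sketch of that separation (this is not Cycle.js's actual API; hypothetical names, TypeScript for illustration):

```typescript
type User = { id: number; name: string };

// Pure core: easy to test, no database or network involved.
function greetingFor(user: User, hourOfDay: number): string {
  return hourOfDay < 12 ? `Good morning, ${user.name}` : `Hello, ${user.name}`;
}

// Impure shell: performs the side effect, then delegates to the pure core.
async function greetUser(
  fetchUser: (id: number) => Promise<User>, // injected side effect
  id: number
): Promise<string> {
  const user = await fetchUser(id);
  return greetingFor(user, new Date().getHours());
}

// Testing the core needs no mocks at all:
console.assert(greetingFor({ id: 1, name: "Ada" }, 9) === "Good morning, Ada");
```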

Lars Richter

Why does this change make the code more testable? Most of the time the answer is: because I can inject/mock side effects, e.g. a database call. This means your design change separates the logic from the side effects, and testability is just a result of that.

I agree. In most cases, testable code also pushes your design towards the single responsibility principle (and other SOLID principles, like dependency inversion). And that's a good thing.

edA‑qa mort‑ora‑y

Having a testable design is important. However, I'm against unnecessary abstractions for the purpose of testing. It often leads to the false-abstraction anti-pattern. There are many ways to test code without adding much complexity. I think mocking as a means of testing has run wild on many projects.

Lars Richter

I agree that having a testable design is important. ☺️
But I can see your point. If the code gets overly complex just for the purpose of testing, it's not a good thing. I really don't want to promote a "testability and abstractions are the cure for everything" thought. Always use the right tool for the right job. That's important.

BUT: I don't think a single interface increases the complexity of a software system very much. It might, however, increase testability a lot.

In the end it is a matter of your personal priorities and opinions.

For me, testability is important. I have seen the same bug get into the code over and over again. A good set of tests can prevent that.

Mykezero

I think it's in "Clean Architecture" where Bob Martin says that a lot of programmers believe that the true value of the system is in its behavior.

Yes, it's the behavior which businesses value, but as programmers - the people developing the software - we need to be aware of the maintenance cost that comes with choosing a design that's too locked down.

Maybe I have a misguided view of software development. I know I can easily fix code that behaves incorrectly but has tests and is verifiable. What I can't do is fix locked-down code which has neither tests nor logging.

That makes the software a black box where I cannot even begin to reason about what the software is doing in a production environment.

If the cost for an extra layer of verifiability is an interface, then give me the damn interface!

The case where I see a need for an interface is when testing manager classes.

Even though my component class is a simple domain logic class which does not use outside resources, my manager doesn't care what the implementation of that component is.

Why should I complicate my tests with the extra setup data needed to test-drive the manager class by making it depend on the concrete implementation of a component class? The component could be very complicated in nature, requiring a very complicated data setup.

Of course, nobody but that one class will ever use that interface, but the interface here will lower the amount of work needed to create the test in order to verify that the system works as intended.

That is more than enough benefit to warrant the interface's creation.
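A minimal sketch of what I mean (hypothetical names, TypeScript for illustration):

```typescript
// The interface exists so the manager's tests don't inherit the
// component's complicated setup.
interface PriceCalculator {
  totalFor(orderId: string): number;
}

class OrderManager {
  constructor(private readonly calc: PriceCalculator) {}

  isFreeShipping(orderId: string): boolean {
    return this.calc.totalFor(orderId) >= 50;
  }
}

// The real PriceCalculator might need tax tables, exchange rates, etc.
// The test stub needs none of that:
class StubCalculator implements PriceCalculator {
  constructor(private readonly total: number) {}
  totalFor(_orderId: string): number { return this.total; }
}

console.assert(new OrderManager(new StubCalculator(60)).isFreeShipping("o1"));
console.assert(!new OrderManager(new StubCalculator(20)).isFreeShipping("o1"));
```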

Lars Richter

What I can't do is fix locked-down code which has neither tests nor logging.

That makes the software a black box where I cannot even begin to reason about what the software is doing in a production environment.

If the cost for an extra layer of verifiability is an interface, then give me the damn interface!

I could not have said it any better.

Thanks for your feedback, Mykezero.

Ben Halpern

I think of Conway's law

Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations

It refers to communication structures, but I think you can apply this way of thinking to any number of things that are naturally part of the design process. Trying too hard not to let something like testability impact the design could be a fool's errand.

Jason C. McDonald

We should remember: testing exists to produce better code. There may be cases where the code needs to be reasonably refactored to enable testing, but we must keep priorities straight. Don't modify the horse to fit the cart.

Lars Richter

I agree, although I'd say that testing exists to produce correct code. But obviously, correct code is better than incorrect code. ☺️

Don't modify the horse to fit the cart.

That's true. Testability is not the most important thing. Working software should be the main goal.

Discussions like this always remind me of the "Is TDD Dead?" videos. And to be clear: it's an important discussion. That's why I am posting questions like this. People need to see both sides of this discussion.

Jason C. McDonald

Funny thing is, you'd think all this would be obvious... but our industry has a strange habit of adopting methodologies for their own sake, instead of for how they can benefit our projects.

Auxiliary point: "TDD" drives me a little crazy, because it is a particular methodology of programming that doesn't work for all projects. I've worked on a few where TDD would have been more of an obstacle than an asset. We still do testing on those projects, but it isn't "TDD" per se. In short, Testing != TDD. :)

Lars Richter

Auxiliary point, "TDD" drives me a little crazy, because it is a particular methodology of programming that doesn't work for all projects.

It's very important to stress that. I don't think there are many practices or methodologies that work for every project. Every project is different. They use different languages, frameworks, and libraries. There are really big projects and very small ones, from a few hundred lines of code to millions.

I said it before and I will say it again: Use the right tool (or framework or methodology) for the right job. Don't be dogmatic.

Blaine Osepchuk

Uncle Bob said (I'm paraphrasing) that if he had to choose between having a complete test suite and the code it tested, he'd prefer the tests, because he could use the tests to recreate the implementation, but he couldn't do much with a pile of code without tests.

I agree with the point he was trying to make.

Q: If you get hit by a bus tomorrow, what would the next guy or gal who has to maintain your code want to see? Clean code following SOLID principles, with "good" tests? That would be my hope if I were that next guy.

The longer I do this (programming), the less patience I have for code without tests.

Jason C. McDonald

The "bus factor" is my motive for leaving extensive intent inline comments and external documentation. Tests shouldn't have to be used to recreate intent, which is the ingredient from which we recreate code. In fact, I'd even say that having to recreate intent from tests is only slightly less soul-sucking than recreating from raw code. Therefore, I'd say it's a terrible motive to writing tests.

That said, yes, tests are virtually always something you should have as part of your code base.

Dave Cridland

I think that "BECAUSE TEST!" is roughly the same as "BECAUSE SECURITY!" or the nebulous "BECAUSE UX!". What we're after is greater confidence in the software's quality, and quality is measured along many axes, often with trade-offs to be made. Focusing exclusively on testing as an end goal is deceptive, because software can be well tested and completely useless.

So for some of my own green-field projects, I do very heavy automated testing - but I didn't need to have that influence the architecture to do so. It did influence the implementation, though - Spiffing, for example, is carefully written to avoid "bushy" branching, reducing the test effort required. The test framework is written to be data-driven, too, so that users can work with their own test data as well as mine.
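For instance, a data-driven test in miniature might look like this (a generic sketch, not Spiffing's actual framework; names are hypothetical, TypeScript for illustration):

```typescript
type Case = { input: string; expected: string };

// The function under test.
function normalizeLabel(s: string): string {
  return s.trim().toLowerCase();
}

// The cases live in data, so users can add their own without new code.
const cases: Case[] = [
  { input: "  Admin ", expected: "admin" },
  { input: "USER", expected: "user" },
  { input: "guest", expected: "guest" },
];

for (const c of cases) {
  const actual = normalizeLabel(c.input);
  console.assert(actual === c.expected,
    `${c.input} -> ${actual}, expected ${c.expected}`);
}
```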

On the other hand, some projects don't lend themselves well to automated testing at all - I've never seen good tests for the server-to-server portions of an XMPP server. Maybe it's possible with significant work, but I suspect it's one of those things that is more effective to write and then test manually and heavily. The bugs are complex sequential issues, difficult to replicate in any useful way in automated tests without writing half a simulated network stack. So instead, my effort goes into manual testing, and support for that.

Small pieces of code don't get tested, not because they're unimportant, but because one can (hopefully) prove them correct manually.

So I'd note that:

a) Testing is a crutch we use to avoid provability. If we could usefully prove code, then testing it would be superfluous.

b) Testing only works if the tests themselves are correct. Testing is only useful if the tests are testing that which might fail.

c) The goal is not test. The goal is confidence.

Lars Richter

I think that "BECAUSE TEST!" is roughly the same as "BECAUSE SECURITY!" or the nebulous "BECAUSE UX!".

It sounds so negative when you say it like that. :-) But to be serious: in general, you are right. Testing/testability shouldn't be the main goal. No doubt here. Nevertheless, sometimes I make decisions, like the one mentioned in the post (introducing an interface), just to make something testable. Nothing more. Just to make it testable (or, as you like to say, "BECAUSE TEST!" ;-) ).

c) The goal is not test. The goal is confidence.

And we should always keep in mind: A working test suite gives a lot of confidence.

Dave Cridland

Absolutely - a working test suite is a great way to get confidence. A working and audited test suite even more so.

Eytan Sankin

Agreed. Testable code will make things easier down the road, and I think it should be a prime consideration. I'm always trying to use more and more functional programming principles to make my code easier to test. Untestable code will cause development to slow down.

Shi Ling

Don't test for the sake of 100% code coverage.

Not everything is worth the time or effort to test.

For example, IMHO, CRUD operations aren't quite worth the effort, because they are straightforward enough that a fellow engineer can easily spot a mistake during a code review, and because they will be used often enough in various parts of the application that mistakes would be obvious and emerge very quickly.

I automate tests (and refactor for testability) when the logic is complex enough that mistakes would be difficult to spot during code reviews, for example where there are computations and decision trees.
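As a toy example of the kind of logic I do automate (a hypothetical decision tree, TypeScript for illustration):

```typescript
// Small decision tree whose mistakes are easy to miss in code review.
function shippingCost(weightKg: number, express: boolean, member: boolean): number {
  if (member && !express) return 0;
  const base = weightKg <= 1 ? 4 : weightKg <= 5 ? 7 : 12;
  return express ? base * 2 : base;
}

// Table-driven test: [weightKg, express, member, expected cost].
const table: Array<[number, boolean, boolean, number]> = [
  [0.5, false, true, 0],  // member, standard shipping: free
  [0.5, true, true, 8],   // member, express: pays double base
  [3, false, false, 7],   // mid weight tier
  [10, true, false, 24],  // heavy + express
];

for (const [w, ex, mem, expected] of table) {
  console.assert(shippingCost(w, ex, mem) === expected,
    `shippingCost(${w}, ${ex}, ${mem}) !== ${expected}`);
}
```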