Earlier, @ben wrote these two awesome posts:
What was your TDD aha moment?
When Test-driven Development Clicks
In case you don't know ...
I don't mean to be a dick, but if you're making HTTP requests in your tests, you're almost certainly doing something wrong. Try dependency injection. If you use the `new` keyword inside a function, you make that function untestable. Once you learn how to properly test and refactor an existing codebase, then you might have the experience necessary for TDD.
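The `new` point can be illustrated with a short sketch (TypeScript here; the `Mailer` class and `notify` functions are hypothetical). A function that constructs its own dependency can't have it swapped out in a test, while an injected one can:

```typescript
// A class that talks to the outside world; constructing it inside a
// function means every caller (including tests) gets the real thing.
class Mailer {
  send(to: string, body: string): void {
    throw new Error("would talk to a real SMTP server");
  }
}

// Hard to test: a test has no way to stop the real Mailer being used.
function notifyHardToTest(user: string): void {
  new Mailer().send(user, "Welcome!");
}

// Testable: the dependency is injected, so a test can pass in
// anything with a compatible send() method.
interface MailSender {
  send(to: string, body: string): void;
}

function notify(user: string, mailer: MailSender): void {
  mailer.send(user, "Welcome!");
}

// In a test, a tiny fake records the call instead of sending mail.
const sentTo: string[] = [];
notify("alice", { send: (to, _body) => { sentTo.push(to); } });
```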
Not a problem as long as you're right 😉
So I will throw in my two cents about DI, mocks, and fakes. I will start by saying they are all great tools, but only if they are written against YOUR OWN code.
Lemme mention a simpler example: when you do frontend dev and you need some API, do you WRITE YOUR OWN mock server from scratch, or do you USE an already built mock server (like Postman), create the necessary endpoints and responses, and plug it into your code (replace the URL)?
I believe it's not your job to mock the world, your job is to mock the services you create, not other people's services.
WHY?
Because the moment you start doing this: stackoverflow.com/a/36425948/4565520
It means you're testing what the mock is supposed to do, and NOT what the service as a whole is supposed to do... you're basically cheating yourself with a passing test for mock code that's guaranteed to work.
So, to keep your tests more "honest" and not cheat yourself, the person who wrote that service (say HTTPClient) should write its mock as well, so that you don't WRITE your own mock, but USE an already built one.
Actually, if you find yourself mocking AuthService or HTTPClient that are supposed to be PART OF THE FRAMEWORK you use, then I guess it's time to start questioning your framework!
Some mature frameworks like Django provide you with the necessary tools to make your tests as easy as a breeze, like calling your controllers without going through the hassle of DI and the Repository pattern and preparing a db to run your tests on... bla bla bla!
Of course you can create mocks for each and every service you face, but aside from the fact that "it's not my business"... I don't have the luxury of time to do that, and I'm sure you don't either.
Ah... btw, the controller mentioned above has a constructor full of injected services, but that does NOT solve the problem I've been describing from the very beginning, so my point about "you end up testing the framework" still holds.
I disagree about not testing the UI, but it does depend on the ability to test it.
I really like Jest's snapshot tests for that use case. They allow you to verify that the UI components haven't changed unexpectedly, which is usually enough for just the visible components of the UI, and Jest can be used with React or Vue. It's there to catch unexpected breakages, and if it breaks when expected you just overwrite the snapshots.
Regarding testing the controller and service, I'm not familiar with .NET so I can't be specific, but it looks to me like the problem is that you're instantiating the things your controller method needs inside it, when instead you should be using dependency injection to resolve them. That way, when you write a test for your controller, you can inject a mock for that dependency into the controller.
While I can't give a specific example of this for .NET, I've done this in PHP many times, and hopefully this example makes sense. Here's a typical controller I might write to do something similar:
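A rough sketch of the shape in TypeScript (the `UserService` name and methods are illustrative): the controller takes its dependency through the constructor and merely delegates to it:

```typescript
// The controller depends on this interface, not on a concrete class.
interface UserService {
  register(email: string): { id: number; email: string };
}

// The controller only wires the request into the service and the
// service's result into a response; all real work happens elsewhere.
class UserController {
  constructor(private readonly users: UserService) {}

  store(request: { email: string }): { status: number; body: object } {
    const user = this.users.register(request.email);
    return { status: 201, body: user };
  }
}
```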
I might test this as follows using Mockery and PHPUnit:
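The shape of such a test, sketched in TypeScript with a hand-rolled fake standing in for a Mockery mock (the controller is repeated so the snippet is self-contained):

```typescript
interface UserService {
  register(email: string): { id: number; email: string };
}

class UserController {
  constructor(private readonly users: UserService) {}
  store(request: { email: string }): { status: number; body: object } {
    return { status: 201, body: this.users.register(request.email) };
  }
}

// The "mock": records how it was called and returns a canned value,
// so the test fully controls the controller's input.
const registeredEmails: string[] = [];
const fakeService: UserService = {
  register(email: string) {
    registeredEmails.push(email);
    return { id: 1, email };
  },
};

// Exercise the controller and inspect both the response and the
// calls made against the fake.
const response = new UserController(fakeService).store({
  email: "alice@example.com",
});
```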
Testing the service would be similar. The service class might look something like this:
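A TypeScript sketch of such a service (the `HttpClient` shape and endpoint are assumptions): it makes the right call on an injected client and translates the raw response into domain data:

```typescript
interface HttpClient {
  get(url: string): { status: number; body: string };
}

// The service's job is just to build the request and translate the
// response; the client that does the actual HTTP work is injected.
class WeatherService {
  constructor(private readonly client: HttpClient) {}

  currentTemperature(city: string): number {
    const response = this.client.get(`/weather?city=${city}`);
    return JSON.parse(response.body).temperature;
  }
}
```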
And the test might look something like this:
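And the matching test, sketched in TypeScript: a fake client records the URL it was asked for and returns a canned response, and the assertions cover only the service's behaviour (the service is repeated so the snippet stands alone):

```typescript
interface HttpClient {
  get(url: string): { status: number; body: string };
}

class WeatherService {
  constructor(private readonly client: HttpClient) {}
  currentTemperature(city: string): number {
    return JSON.parse(this.client.get(`/weather?city=${city}`).body).temperature;
  }
}

// The fake plays the role of the mocked HTTP client: it records the
// requested URL and returns a canned response the test controls.
const requestedUrls: string[] = [];
const fakeClient: HttpClient = {
  get(url: string) {
    requestedUrls.push(url);
    return { status: 200, body: '{"temperature": 21}' };
  },
};

const temperature = new WeatherService(fakeClient).currentTemperature("Oslo");
```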
If most of the actual work is done in other classes and you use dependency injection to fetch those other classes, it's relatively simple to replace those dependencies with mocks and test your controllers in isolation. Controllers shouldn't do much anyway - they should just get the input, defer to other classes to do the heavy lifting, and put together the response.
I agree entirely with you that you should not be testing that the framework is working as expected - it has its own tests for that and you shouldn't have to duplicate them. Instead you should mock out those dependencies. Your unit tests shouldn't fail because the third-party service whose API you use has gone down.
I mentioned this part in the opinionated tips cuz they're just my own opinions, but I will try to convince you for sure 😄
(First, what I'm about to say is not valid if your job is a tester)
Forget frameworks, forget programming, and forget anything as a developer.
If you have a program, what makes its UI count as "valid"?
The answer is simple: just try things out in the program by clicking around, and if it works, then yeah, it's okay to deploy it.
That's why I mentioned uilicious.com: you don't test individual components of the UI and waste your time thinking "ah, this widget should count X, and this part of the page should drop the Y menu when I put in Z data."
Instead you just write the flow of the user:
Which is a very abstract way of testing the product generally. That's why I believe automating the user behavior in such a way is more practical than unit testing the components.
Dependency Injection and mocks are two great tools, but when you say that they make testing easier, then heck no... they make your tests more abstract, and you end up actually testing whether the language is going to call X function and return Y value, which it will for sure!
Your example is exactly showing that:
You are asking whether it's gonna call that mock function and return true, it will for sure, this is guaranteed by PHP!
And again look what you're doing here:
This is a wrapper function and you're testing whether `client->get` would work, but that's tested already in your framework, right?
That's not a job for unit tests. This is the domain of high-level acceptance testing. I've used several of the Cucumber derivatives in the past (as I now work primarily with PHP I use Behat), and they're the same. They're useful for testing that the application's behaviour is acceptable to the end user, but they're on a different level to unit tests. For instance, here's an example of a Behat or Cucumber feature for the same use case you mentioned:
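For instance, a login flow (the wording here is illustrative, not taken from any real project) might read:

```gherkin
Feature: User login
  In order to use my account
  As a registered user
  I want to log in to the application

  Scenario: Logging in with valid credentials
    Given I am on the login page
    When I fill in "Email" with "user@example.com"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "Welcome back"
```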
I'd argue this is better than the syntax you mentioned because it's more completely decoupled from the actual underlying implementation, to the point that it's very easy for even non-technical stakeholders to understand it. Each line in the test maps to a method, and anything in quotes is passed through as an argument, making it easy to reuse the steps. But you get the point - it's testing the application from an end-user's point of view, which is great, but in my experience is not sufficient in isolation. This kind of automated acceptance test tends to be comparatively slow, typically taking a minute or two to run, and that's too slow to enable real TDD, which realistically needs a test suite that runs in ten seconds or less.
Also, this makes it very hard to test a wide range of scenarios. If you have two classes where one depends on the other, and both have two paths through the class, a unit test for each one should test both paths. But if you write a higher-level test for them, to have the same level of coverage you need to test four paths through the whole thing, and with more options it quickly gets out of hand to the point that you can't test all the possibilities. Don't get me wrong, this kind of high level acceptance testing is a very useful tool that has a place in my toolkit, but it's too slow to actually practice TDD, and can't cover as many scenarios as low-level unit tests can. High-level acceptance tests were my introduction to testing as a discipline, and they are the easiest way I know of to get started, but they cannot enable the sort of TDD workflow that unit tests can.
Something like a React or Vue component can be unit tested in isolation, and since that component will be used over and over again, it makes sense to do so. For instance, if you build a React component for a date picker, you don't want to have to test it over and over - you want one test file for the component that tests it in isolation. But more than anything else, if a component forms part of your UI, the thing you want to watch out for is unexpected changes in how the component renders. Using Jest snapshots, it's really, incredibly straightforward to check for that. All you have to do is render the component, and set up an expectation that it matches the snapshot, as in this example:
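In Jest itself that's a call to `expect(tree).toMatchSnapshot()`; the underlying mechanism can be sketched in a few lines of TypeScript (the `renderDatePicker` function is a stand-in for rendering a real component):

```typescript
// Stand-in "component": renders some UI state to a string, the way
// a snapshot serialiser renders a component tree.
function renderDatePicker(selected: string): string {
  return `<div class="datepicker"><span class="selected">${selected}</span></div>`;
}

// The stored snapshot: the output a human last verified as correct.
// Jest writes these to __snapshots__ files automatically.
const storedSnapshot =
  '<div class="datepicker"><span class="selected">2019-01-01</span></div>';

// The "test": re-render and compare with the stored snapshot. A
// mismatch means either an unintended change (a bug) or an intended
// one, in which case you overwrite the snapshot.
const matchesSnapshot = renderDatePicker("2019-01-01") === storedSnapshot;
```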
If I'm working with React, I generally make a point of doing snapshot tests as a bare minimum for my components, since they're really easy to set up and will catch any unexpected changes to how my UI components render. After that, it's a judgement call. React and Vue are easy to test, but other UIs may not be so straightforward. Snapshot tests don't test that a UI component is valid, but that it hasn't changed since it was last verified to be valid - as such they don't replace human eyes on the UI, but complement them by letting you know when the UI changes.
Of course it is, which is why it's mocked out. We aren't testing our HTTP client, we're testing that we make the right calls to our HTTP client, and that's the critical difference. To use a common metaphor, unit tests are like double-entry bookkeeping in that they should express the inverse logic of the code under test. So for a service class that makes an HTTP request, we don't actually make a request, we just verify that our service calls the right methods on the mock, with the right arguments. A good unit test doesn't test anything other than that class.
In all fairness, controllers are an example of something that should be anaemic enough that testing them in isolation may not be worth the bother - if your controller is complex enough that it's worth unit testing, it may be a sign that it's doing more than is ideal.
I would typically pull other things out and test them with unit tests - for example, my database queries would be pulled out into separate repository classes, and things like API requests into separate service classes. I would then test that those made the right queries to the ORM, or made the right calls to the HTTP client, as appropriate, but if you have higher-level functional tests as well, then your controllers may not need separate unit tests.
It's actually very difficult to start practising TDD on an existing code base because they tend not to be sufficiently amenable to unit testing. I currently maintain a legacy Zend 1 application where most of the functionality is jammed into fat controllers, and I haven't got a hope in hell of being able to write unit tests for that until such time as I've refactored the functionality out of those controllers and into other classes. High level acceptance testing can be done on virtually any code base, but as stated isn't much use for TDD.
That's why I believe you shouldn't unit test your UI.
I completely agree, the Gherkin language is just better for testing in general.
I will go more in depth with that in the coming post.
I understand what you mean, but really, does the user care about covering all the small details in the background, or do they want an overall working product?
Yeah, I really love the part that they took these concerns in mind when building these tools!
Can you please check my take with mocks and DI here: dev.to/0xrumple/comment/75bb
Exactly, I agree with you: the controller shouldn't handle domain logic, the model should instead.
After all, you know that it comes down to a matter of cost versus value.
By definition, you can't unit test your UI, because it's not a single unit of code. A unit test is a test for a single unit of code, be it a function, a class or other logical unit, and the UI can't be described as a single unit of code. You can, and should, write higher-level tests for it, to make sure it fits together as expected, but you can't write a unit test for your UI.
However, you can and should write unit tests for individual components of your UI. In the example I gave of a React-based datepicker, you should write tests to ensure that it renders consistently based on given input, and it reacts in the correct way to various events. Or in the case of a jQuery plugin, you should test that it makes the appropriate changes to the DOM.
Typically, the end-users aren't the ones you're held accountable by, the stakeholders are. And they're the ones who'll be affected if it turns out there's a fencepost error in a class that means clients have been undercharged. Unit tests tend to catch a different class of error than higher-level tests, because they're for testing that a class or function behaves as expected.
Plus, if there's a component you build and plan to reuse on other projects, such as a shopping cart, by definition you can't really test it alone by any means other than unit tests.
Don't get me wrong, unit tests alone won't catch every issue, but neither will high-level acceptance tests. There's a reason why Google advocate the so-called Testing Pyramid model, with unit tests making up 70% of the tests, functional tests another 20%, and high level acceptance tests the remaining 10%.
In addition, the other main benefits of unit tests are that they drive a better, more decoupled design, and tend to encourage the mental "flow" state that is most productive for developers.
No, that is categorically NOT the case. At no point in the example I gave does it make any assertions whatsoever about the response received from the mock. And it's quite patently wrong to state that it's "guaranteed to work". It's guaranteed to receive the specified input - you're simply testing how the class under test reacts to that input. To write good unit tests, you should treat the class under test as a "black box" - your test should pass in the input and verify it receives the correct output in response.
The whole reason to mock out dependencies in this context is so that we have absolute control over all the input to our class. We're setting up our expectations on our mock so that we know exactly what input the class under test receives, and can verify exactly how it reacts to that input.
For instance, say we have the following code for an API wrapper:
In this case, there are three possible paths through the method, depending on the HTTP status code returned. I'd test these as follows:
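A self-contained TypeScript sketch of both the wrapper and the three tests (the endpoint and any detail beyond the 402/401 cases are assumptions):

```typescript
class PaymentRequired extends Error {}
class RequestNotAuthorized extends Error {}

interface HttpClient {
  get(url: string): { status: number; body: string };
}

class ApiWrapper {
  constructor(private readonly client: HttpClient) {}

  // Three paths through this method, keyed off the HTTP status code.
  fetchReport(id: string): object {
    const response = this.client.get(`/reports/${id}`);
    if (response.status === 402) throw new PaymentRequired();
    if (response.status === 401) throw new RequestNotAuthorized();
    return JSON.parse(response.body);
  }
}

// One test per path: a stub client forces each status code, and the
// assertions are only about how the wrapper reacts, never about the
// stubbed response itself.
function throwsOn(status: number, expected: Function): boolean {
  const wrapper = new ApiWrapper({ get: () => ({ status, body: "" }) });
  try {
    wrapper.fetchReport("123");
    return false;
  } catch (e) {
    return e instanceof expected;
  }
}

const throws402 = throwsOn(402, PaymentRequired);
const throws401 = throwsOn(401, RequestNotAuthorized);
const success = new ApiWrapper({
  get: () => ({ status: 200, body: '{"total": 42}' }),
}).fetchReport("123");
```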
A 402 response should throw a PaymentRequired exception, and a 401 response should throw a RequestNotAuthorized exception.

There are no assertions whatsoever made about the response received from the mocked HTTP client - indeed, no assertions should be made about the response from the mock, because we already know it, as we set it earlier in the test. Every single assertion is about how the wrapper class handles the response from the mock. As mentioned, it's about having complete control of the input to the class being tested, so that you can be absolutely certain that under a specific set of circumstances, the class will behave in the predicted fashion.
This is absolutely not true, because that part of the framework is mocked. At no time does the class under test interact with the real framework class, only with the mock. And where possible I would mock the interface that the class implements, not the actual concrete class.
I generally work with Laravel (which should feel somewhat familiar as it takes some inspiration from .NET), and in that it's commonplace to create an interface for a dependency and resolve that interface into a concrete class using dependency injection. So, for instance, if I had an application that needed to fetch weather reports for a particular geographical location, I might write the following interface:
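In TypeScript terms the idea looks like this (the interface, method, and class names are illustrative):

```typescript
// Consumers depend only on this interface; the DI container decides
// which concrete class satisfies it.
interface WeatherProvider {
  currentConditions(lat: number, lon: number): {
    temperature: number;
    summary: string;
  };
}

// One concrete implementation. Migrating to another provider means
// writing a new class and changing only the container binding; no
// consumer of WeatherProvider has to change.
class YahooWeather implements WeatherProvider {
  currentConditions(lat: number, lon: number) {
    // A real implementation would call the remote API; stubbed here.
    return { temperature: 20, summary: `conditions at ${lat},${lon}` };
  }
}
```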
Then I might create a class at `App\Services\Weather\YahooWeather` to implement that interface, and have the container resolve the interface to that class. If I need to migrate to OpenWeather, I simply create a new class implementing that same interface and change the resolution, but no classes that use that service should need to change. My unit tests for the class that uses that service would mock the interface for the service, rather than any one implementation of that interface, to ensure it would work consistently with all of them.

It has to be said, HTTP clients are a particularly tricky example in this regard. In the PHP world, PSR-18 has recently been accepted as a standard interface for HTTP clients, but it's a long way from widespread adoption. Once it is more widely adopted by client libraries, it'll be easy to have API clients specify only that interface and have it resolve to a concrete class through DI, but until then HTTPlug is the best option in PHP. Technically it's bad practice to mock a concrete class rather than an interface, but sometimes it's just not practical to do otherwise. In a context where you can rely on that class being there and remaining consistent, such as when it's part of the framework, sometimes it's just not worth the hassle of wrapping that class, and it makes more sense to mock the concrete class than to go down the rabbit hole of creating an abstraction on top of it that implements a specific interface and mocking that interface.
If you're manually creating a lot of mocks, then yes, that is a chore, but that's no reason not to test, but a reason to look at how you're testing. And if the alternative is testing manually, I sure as hell don't have time to do that when I could just run the test suite in a matter of seconds. Plus, it's really, really scary how often there will be small changes made that are only caught by good unit tests, and if not caught could cause problems.
I think you might benefit from taking a look at some of the spec-style testing frameworks. Personally, I find that xUnit-style testing frameworks such as PHPUnit and NUnit don't enable the best workflow for TDD, partly because they require you to manually create your mock objects. I believe NSpec is the most prominent one of these in the .NET world.
A year or two ago I started using PHPSpec for an API client, and I've found that to be the best TDD experience I've ever had. A typical test method in PHPSpec looks like this:
This is about as complex as test methods ever get in PHPSpec (and most of that is because setting up requests with HTTPlug can be rather verbose), and most are far simpler. The `beConstructedWith()` method is only ever required when you want to override the default dependencies passed to the constructor. Most of the time, your test will define a single `let()` method that specifies the default dependencies used to construct the class. The typehinted dependencies are mocked out automatically, so you need only typehint a specific class to get a mock of it, and you can then set any expectations you need on it. This results in more concise tests, and you have to write less boilerplate than for an xUnit-style test.
It's not a matter of writing a mock. I don't know enough about .NET to comment on how easy it is to mock dependencies, but I use Mockery in PHP if I write a PHPUnit test, and it's really trivial to mock a class with that, and for most purposes it wouldn't be useful to provide a mock version of that class, since you'd still need to set the expectations for it. If I mock a class with Mockery, I'm typically looking at one line of code to mock the class, and another line per expectation, which isn't exactly onerous. I have never needed to create any kind of mock server for this kind of use case.
HTTPlug is one of the few libraries I can think of that does have this sort of behaviour, as it has a mock driver that collects requests and returns specified responses, but it wouldn't require much more code to replace that driver with a mock of the driver interface.
We're exactly on the same page mate.
And all the things you mentioned (Cucumber, NSpec and high level acceptance tests) they are just what the next post is about, to be specific: BDD.
As for this code:
I still can't accept testing this piece of code, since it's a mere bunch of if-statements that will work FOR SURE, as long as the other part (the HTTPClient) responds with 402 and 401 when needed.
Though, I would surely mock it if I have some logic inside the case of 402 to ensure my logic works as expected.
I agree with you, but the high level acceptance tests would keep both the stakeholders and the end-users so happy that "everything is just working"... rather than dropping TDD because of a looming deadline.
I really enjoyed and benefited from the discussion with you... I promise, you will like the coming post ;)
@matthewbdaly , I've written the second part and I would be happy to know your opinion about the discussed approach:
dev.to/0xrumple/bdd-rather-than-td...
Just like you mentioned, I use TDD only when I know every possible outcome beforehand, e.g. in domain logic.
With integration or UI/end-to-end tests, my goal is generally to avoid regression, so I write few broad tests after the code has been written. The tricky part however is ensuring that these tests fail when they should -- I find it hard to tell if there are false positives without TDD.
Exactly... the worst part is that you never know in future whether new requirements are gonna make them red or not.
I'll discuss those in the coming article and what approach is actually suitable for such cases.
Just like you write application code and need to refactor it if requirements change, you need to refactor unit tests too. However, if you have a good unit test suite and you realize that part of the code can be refactored into something better, the unit tests will quickly show you whether you broke something during refactoring or everything works as it should, and all of that without manual clicking and "quickly testing" by hand.
Yep, but TDD isn't the answer for a quick refactor; maybe something else can be better, more descriptive, and more time-saving.
actually it is... ¯\_(ツ)_/¯
I know how it feels @vbjelak
Have a look at this great video: youtube.com/watch?v=qkblc5WRn-U
It's a good start ;)
I have taught TDD a lot. I've even gotten old-timer COBOL programmers excited about it. These concepts help most:
1) Your unit tests should describe the behavior of the code. If you start with only the unit tests of the system, you should be able to rebuild the code and the system will behave the same as with the original code.
2) Unit tests should be quick to write and easy to understand. When your unit tests are complicated, your code needs refactoring, decoupling, and/or architecture needs refining.
3) Name tests well, to make them manageable. I prefer snake case (e.g. "returns_an_error_when_the_input_is_empty()").
4) Red/Green/Refactor is important. All tests MUST initially fail. If they don't, you either have a duplicate test or there is a bug in the code. I have caught bugs a few times, via initial Green tests.
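Points 3 and 4 might look like this in TypeScript (plain assertions stand in for a test framework; the names and numbers are illustrative):

```typescript
// The unit under test, written AFTER the test below was seen to fail.
function totalWithTax(net: number, rate: number): number {
  return net * (1 + rate);
}

// Point 3: a snake_case name that describes the behaviour.
function adds_tax_to_the_net_price(): void {
  // Point 4: run this before totalWithTax exists (or with a wrong
  // body) and it MUST go red; only then implement until it's green.
  const result = totalWithTax(100, 0.2);
  if (Math.abs(result - 120) > 1e-9) {
    throw new Error(`expected 120, got ${result}`);
  }
}

adds_tax_to_the_net_price();
```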
Is red a test failure or an error, or both?
Compiler / Interpreter errors = RED.
It is also red when the test simply fails.
Oh, that changes everything. I understood TDD as starting with a test case that fails its assertions only and getting the function set up properly beforehand (mocks, injected dependencies, etc.) was the difficult part. If it's just getting the task requirements in as test cases at the start, that's what every single TDD explanation is missing.
Oh, that is terrible news! I highly recommend looking into Roy Osherove's book, "The Art of Unit Testing", which is fantastic. Bob Martin also has some good TDD information, but I recommend reading his "Clean Code" book to get a look at how to better organize code, which is essential for good TDD. I generally start my students (mostly very junior developers) with these two books.
You can immediately find beneficial help by looking up some "Red Green Refactor" kata videos. Pluralsight may also have something useful.
Love those points, they touch the crux of the matter!
You're spoiling the fun of the next post 😁
Oops!
TDD is a massive shift in thinking for most people. It requires practice and patience. It took me about 5 days before the concept clicked, and I suddenly understood the intent. Everything suddenly made sense and I saw the dangers of traditional practices.
This forced me to learn decoupling architectures and mocking frameworks. My code vastly improved.
I found this to be a really entertaining article. Nicely written 👍.
I will challenge you on something though: "never test your UI".
I really hope you mean "never unit test your UI", because without some level of automated end-to-end or user journey tests I'd be very nervous about making changes to a product. How can I be confident that I haven't significantly broken some part of the overall application behavior without spending way more time than I need to poking around the UI for myself? I'd much rather my test suite did that for me.
For my part, when it comes to TDD and the UI, if I'm adding something new to a user journey, the first thing I'm doing is altering the end-to-end test for that behavior to include what I want my change to look like. It will fail initially, and continue to fail until I'm done implementing my work. Once it's passing, I know for certain that the change I made plays nicely in the context of the overall application.
With e2e tools like Cypress at our disposal, there are fewer excuses than ever to avoid writing tests for our UI's.
Oops... it's been edited, thanks for the notice 😅
Yeah, there must be some automation to test user stories in an abstract manner. I recently found another very simple yet SO POWERFUL tool here:
uilicious.com
TDD has its place and I've found it only really works when you have very specific things to test. I've tried TDD when there's a lot of design required and I have never managed to make it work better than TLD.
The worst part about TDD are its advocates.
Exactly man... I hope they read that and understand what you mean
Testing is an art of its own. Knowing how to develop is a different thing from knowing how to test. Ideally you have to be as good a developer as you are a tester. But it is a different skillset. Sometimes you cannot test because you do not have the knowledge of how to do so, or you cannot see the value. These things become clear with experience, and when I say experience I mean the failures.
The sad part is that you need some technical (and architectural) experience in order to be able to test, but that experience comes after you have written some untested code (because your technical skills aren't there yet - it is a vicious cycle).
I have a very simple rule that I use: just test the behavior of the things you will expose. If the behavior is tested, then the most important step is done. How deep you go after that depends on a variety of factors.
Yep, behavior is king... I've written the second part here:
dev.to/0xrumple/bdd-rather-than-td...
Core domain logic, that's the strong suit of TDD and unit tests. Don't have a particularly complex domain? Then don't write so many unit tests!
TDD + DDD is really a good match!
"...Rather than testing your UI..." use Cypress.io or uiliciouse.
dev.to/simoroshka/the-worst-develo...
I agree with what @simoroshka mentioned... but when I use any fairly-documented library, I start reading through the tests to learn how I should utilize that library without hacking it.
Take this example: I recently used this tool for i18n:
github.com/raphaelm/django-i18nfield
But it has really thin docs; it doesn't mention how I should go back and forth with the JSON data. However, they have these nice tests:
github.com/raphaelm/django-i18nfie...
So, TDD might not be important in every single project, but testing really is important!