When I was in university, I had a lecturer who didn't like unit tests. He was an elderly man who worked at IBM and gave lectures about mainframes. H...
Probably yes (sorry!)
It's a classic issue of writing tests that are too coupled to implementation detail. People then get frustrated at tests because they can no longer refactor without changing everything
I'm going to speak in terms of TDD, and TDD does not prescribe writing tests for every class/function/whatever. It prescribes them for every behaviour. So you may write a thing that does X; internally it may have a few collaborators, but don't make the mistake of writing tests for implementation detail. These are still unit tests.
Ask yourself: if I were to refactor this code, would I have to change lots of tests? The very definition of refactoring is changing the code without changing behaviour, so in theory you should be able to refactor without changing tests.
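As a made-up Python sketch (nothing from any real codebase), a behaviour-level test pins down only the observable result, so the internals stay free to change:

```python
def total_price(prices, tax_rate):
    # The internals are free to change (a loop, sum(), a helper class...)
    # as long as the observable behaviour stays the same.
    return round(sum(prices) * (1 + tax_rate), 2)


def test_total_price_includes_tax():
    # Asserts the result the caller cares about, not which internal
    # collaborators were used to compute it.
    assert total_price([10.0, 20.0], tax_rate=0.10) == 33.0
```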
I would suggest looking into Kent Beck's book on test-driven development. It's an easy read and quite short. Or if you like Go and don't want to pay any money, have a look at my book. This video covers some of the main issues you talked about and probably explains what I've typed a lot better: infoq.com/presentations/tdd-original
Writing tests effectively takes a while to get proficient at, but the fastest way to get there is to study and retrospect the effect tests had on your codebase
My 2¢ about this discussion.
First of all, I think that this quote is fundamental to understand why we test our code:
“Testing shows the presence, not the absence of bugs” ~ E. W. Dijkstra
It means that our tests can't prove the correctness of our code, they can only prove that our code is safe against the bugs that we are looking for.
Having 100% code coverage doesn't guarantee that our code is 100% correct and bug-free.
It only means that our code is 100% safe against the bugs that we are looking for.
There may still be bugs we aren't looking for, even with 100% code coverage and passing tests.
Tests show the presence, not the absence of bugs.
Chris James says: "the very definition of refactoring is changing the code without changing behavior."
The behavior refactoring refers to is external behavior, that is, the expected outcome of a piece of code, not how the code behaves internally.
When we write a test, we can make assertions about internal behavior but it can change without modifying the expected output.
That's the very definition of refactoring.
When we make assertions about the internal behavior, we are coupling our test to an implementation: changes to the internal behavior will likely force us to change the test.
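A made-up Python illustration of the difference (the functions and test names are invented):

```python
from unittest import mock


def load_name():
    # Imagine this gathers data from a collaborator in real code.
    return "world"


def greet():
    return f"Hello, {load_name()}!"


def test_coupled_to_internal_behavior():
    # Asserts HOW greet works (that it calls load_name): this breaks if
    # greet is refactored to obtain the name another way, even though
    # its output is unchanged.
    with mock.patch(f"{__name__}.load_name", return_value="world") as loader:
        greet()
    loader.assert_called_once()


def test_coupled_to_expected_output():
    # Asserts WHAT greet returns: this survives refactoring.
    assert greet() == "Hello, world!"
```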
That's why I like what Michał T. says: "code that is perfectly suited for unit tests are things that have predictable inputs and outputs, and don't have dependencies or global effects."
The assertions about the behavior of our code will likely depend on the behavior of our dependencies.
Indeed, we mock external dependencies because we don't want our code to be affected by their potentially buggy output.
Thus, we set up our environment to have a predictable output.
That's why even if external dependencies have bugs, our unit tests can pass. And that's why unit tests aren't enough to save us from having issues.
Reducing external dependencies will make our code easier to test and less prone to side effects coming from the outside.
My last thought, starting with this quote from connectionist: "code changes happen all the time and unit tests have to change with them. It's unpleasant but necessary."
Software, by definition, is soft to adapt to changes.
Otherwise, it would have been "hard" ware.
We have to deal with it. It should not be unpleasant but the opposite: it's the ability to change that proves the real value of software.
The frustration that we feel when we have to change our software comes from the fact that as we add code we tend to reduce its flexibility (we add accidental complication).
Thus, adapting to changes becomes frustrating.
But it's not software's fault.
It's not our customers' fault.
It's our fault.
It's only by making our code better over time that we can reduce that frustration.
And we can make it better by performing refactoring on a regular basis.
Everything that encourages refactoring should be welcome.
I warmly recommend watching this: vimeo.com/78898380
Cheers
In addition to testing the code's behavior, test that the code implements the requirements: those things the end user, legal, or marketing has to have. Then you get into tracing requirements to exact lines of code, and anything else can get deleted.
Thanks, I'm going to read this book :D
One of the important things that unit tests will do is to get you focused on SOLID, most notably single responsibility. It reduces the temptation to write "Swiss army knife" functions or massive blocks of if..else or switch..case code. When you work in short blocks of testable code it makes debugging so much easier. Likewise, if you find tests becoming elaborate, maybe some refactoring is needed.
When you're working on a team, having the unit test gives other developers a guide as to how a particular function should work. If they come up with use cases you didn't anticipate, it provides an easy way for them to communicate it. When you're primarily working on the backend, it gives you something to demo in the sprint retrospective/demo.
When debugging issues unit tests make it easier to locate problem areas both in integration testing and in production. Without having this testing you can spin your wheels trying to find bugs.
Alternatives to unit tests? I've had to do these when working with legacy code where there were no tests originally written. Usually, these tests were in the form of one-off sandbox applications that would exercise a particular function or set of functions, trying to track down a bug. I've found this to be more inefficient than writing tests to begin with, particularly when trying to deal with critical production problems.
Thanks for posting your experiences. ❤️ I have similar history with unit tests.
Nowadays, I no longer bother to test everything. I do not believe there is enough ROI in doing so for most of our apps. I mainly test business logic. And when I say that, I mean only business logic. I practice Dependency Rejection, so business logic code is purely logic and is very easy to test. I will highlight the difference with a couple of images.
This kind of design is what you normally see exemplified in unit test demos, with interfaces being injected into business code. This makes the "business code" responsible not only for business calculations but also for handling IO. Despite those things being represented as interfaces, the code will likely need to know specifics such as which exceptions are thrown or other effect details unique to the type of integration. So it has the appearance of decoupling while potentially still being quite coupled.
This kind of design also creates a lot of work in unit tests, since you have to create behavioral mocks of the injected components. The fact that you need a framework to dull the pain is a hint that it is not an optimal strategy.
Instead, I do this.
Here, the business logic code (inner circle) has no knowledge of other components outside of its purview... not even their interfaces. It only takes data in and returns other data. If IO is necessary to fetch the data, the logic does not care about it and is not responsible for it. Only once the data is fetched do you run the logic code. It is also fair to give the logic code an object representing no data (e.g. Null Object Pattern or a Maybe). This is ridiculously easy to test, since all you have to do is pass in some data and check that the output matches what you expect.
For example, I might have some logic code like this:
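(The original showed an image; here is a rough Python stand-in with invented names.)

```python
from dataclasses import dataclass


@dataclass
class Order:
    subtotal: float
    customer_is_vip: bool


def calculate_discount(order: Order) -> float:
    # Pure business logic: data in, data out. No IO, no injected interfaces.
    if order.customer_is_vip and order.subtotal >= 100:
        return round(order.subtotal * 0.10, 2)
    return 0.0
```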
Then have a test like this:
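(Again a sketch, in the same invented terms.)

```python
def test_vip_order_over_threshold_gets_ten_percent_discount():
    order = Order(subtotal=200.0, customer_is_vip=True)
    assert calculate_discount(order) == 20.0


def test_regular_customer_gets_no_discount():
    order = Order(subtotal=200.0, customer_is_vip=False)
    assert calculate_discount(order) == 0.0
```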
How do I handle IO? I have an outer piece of code (which I call a use case handler, or just "handler") that is responsible for tying the logic to the other integrations (database, API calls, etc.) needed for the use case. Sometimes logic steps are interleaved with IO, and so the logic has different functions/methods for each step. The handler must check the logic response from the first step and perform the appropriate IO before calling the next step.
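Sketched in the same invented terms, a handler might look roughly like this (the repository object and its methods are hypothetical):

```python
def apply_discount_handler(order_id, order_repository):
    # IO: fetch the data the logic needs.
    order = order_repository.get_order(order_id)

    # Pure logic: it has no idea where the data came from.
    discount = calculate_discount(order)

    # IO: persist the decision and report it.
    order_repository.save_discount(order_id, discount)
    return discount
```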
This design draws a very fine line between which type of testing is appropriate for which part. Unit testing (even property-based) is appropriate for business logic code. Integration testing is appropriate for the integration libraries used by the handler. End-to-end testing is appropriate for the handler code itself, since it may deal with multiple integrations. But the main win, and the most important thing to the business, is the business code -- that its decisions are correct. And this is now the easiest piece to test. The other parts are no harder to test than they were before, but still not worth the ROI for us yet.
Ah, yeah I read about these things.
But all the examples were in FP languages I didn't know, so I didn't take much from it.
You might want to search for the "humble object pattern" if you want to learn more about Kasey's testing strategy.
(I'm a unit-test-addict :) )
Did I do unit tests wrong?
In my opinion unit tests are documentation, so if your product changes, your unit tests must be rewritten. If you had to rewrite too many tests for a little change, maybe you should make your tests more flexible, or use them only to test the "frozen part of your code" (utility functions and algorithms).
Is there an alternative?
In the case of an API (constantly evolving), some tools create tests directly from the spec (Swagger, maybe).
Are integration tests enough?
It's difficult to test only a function with an integration test; the scope is not the same. But testing that "GET .../user/1" returns the right object could be OK. I highly recommend using unit tests to deal with user inputs (POST requests), because you can test the endpoint with a lot of bad entries (and check for security issues, malformed data, bad types, ...).
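For instance (a rough pytest sketch; the validator and the cases are invented), you can hammer one validation function with a batch of bad inputs:

```python
import pytest


def parse_age(raw):
    # Hypothetical validator for a field coming from a POST request.
    if not isinstance(raw, str) or not raw.isdigit():
        raise ValueError("age must be a non-negative integer string")
    age = int(raw)
    if age > 150:
        raise ValueError("age is out of range")
    return age


@pytest.mark.parametrize("bad_input", [
    "", "abc", "-1", "12.5", None, "9999", "1; DROP TABLE users",
])
def test_parse_age_rejects_malformed_input(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)
```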
Is TDD a placebo?
Personally, it's a safety net I love to have :)
> maybe you should make your tests more flexible
How? :)
> or use them only to test the "frozen part of your code"
Isn't this against the TDD philosophy?
> I highly recommend using unit tests to deal with user inputs
How does this eliminate the problem that I only test what I had in mind anyway when writing the functionality in the first place?
Like, when I test my software I find fewer bugs than when someone else tests it. etc.
More flexibility:
It's recommended to test only one case per test; I have the bad habit of putting all my test cases into an array:
for (arg1, arg2, result) in [(1, 2, 3), (-1, -3, -4)]:
    assert my_sum_function(arg1, arg2) == result
It's bad practice, but you can cover a lot of cases and change the function name easily.
Maintaining a few tests is always better than having no tests at all. To encourage your team to add tests, it should be easy ;). So testing the frozen functions is a good start.
I'd love to write an article about "unexpected testing cases"; I have this list of error cases:
The main thing with TDD from my understanding is that tests are the requirements, so anything that falls outside of the tests is by definition irrelevant. Most of the "test everything" recommendations come from the TDD mindset, so if you try to apply that outside of the TDD framework it can get messy.
This perspective helps limit the scope and coupling of your tests, since there is typically an astronomical number of tests that you could do, but a very finite number of testable requirements. Refactoring should not generally break tests, but if refactoring occurs across/between several modules then you will probably have some rework, but I would argue that that is more of a "redesign" than a "refactor".
One good reason to test every module/class is to reduce the scope of any bugs you do come across. If I have a suite of tests that demonstrate my module's behavior then I know where not to look for the bug. With integration/system tests alone you will have some searching to do.
I always have the feeling that this is still a problem.
I get rather high-level requirements, but they are implemented by many parts of the code. So simply writing a "Req1 passes" test would require implementing many, many things until the requirement is met.
I'm bookmarking this to read later (so many good comments!) but I'll chime in with a QA perspective:
If you're working at a place with a formal QA step, test your implementation, not your requirements
I've noticed in my devs' specs they'll have tests for things like "this has been called once", "all parts of this `if` can be hit", yada yada yada, and then there will be things like "it returns all this info", "the info is sorted properly", "the info is in the right format", etc. Then if you look at my tests, they're "it returns all this info", "the info is sorted properly", "the info is in the right format", etc... things a user would see and that are in the story's acceptance criteria for the feature. Where I am, QA automation (end-to-end with a hint of integration testing) is a part of the formal definition of done, so a feature isn't considered done until both of us have written the same thing, just at two different levels.
I haven't written any unit tests for Web APIs yet but here's my take on TDD:
For my part, I don't recommend writing unit tests for every class. Only for classes that change behavior based on various arguments and conditions.
Writing unit tests helps me in various ways:
It validates my understanding of the requirement. There's a tendency for us developers to jump right into coding without fully grasping the requirement. Writing unit tests forces us to think and ask questions even before the actual coding, which eventually saves us more time than rewriting code built on wrong assumptions.
It helps me make design decisions. That is, if a class is hard to test, it can probably be broken down into smaller testable classes, thereby enforcing SRP (the Single Responsibility Principle).
It acts as a harness after refactoring and bug fixing. Tests should still be green after code changes; it's a quality layer that signals to me that I didn't break anything.
Like @JeffD said, they're also documentation. I've written and deleted a lot of unit tests. Requirements may or may not change in the future. You don't know when or if they will, but while they hold, it's better to write unit tests than to write none in anticipation that they will just be deleted or changed later.
Hopefully, these insights helped you.
You're probably right.
I often read unit tests of libraries I used to understand them, but on the other hand I don't write libraries myself. They feel like they would lend themselves rather well to unit-testing, like APIs and such. UIs feel different somehow.
If you haven't already, you should read Joel Spolsky's excellent article Five Worlds. To sum it up: great programmers sometimes come up with tips and best practices that make perfect sense in the area they work in, but are not very useful and maybe even harmful in other areas.
I believe unit testing is one of these best practices. When it comes to library development, for example, unit tests are great. In other areas their ROI is too low to be useful, and other kinds of tests should be preferred.
In my opinion, automated tests should be an automation of manual tests. It's usually easy to decide how to test something manually. For example, a library function can be checked with a small `main` that prints its output for some hard-coded inputs. These workflows are intuitive: you wouldn't manually test a full application feature with a custom `main`. I mean you can - but that would be a lot of work to re-create the environment needed to test that feature, and in the end it won't be very effective, because that temporary environment may be different than what you use in the actual program.
Since the manual testing strategy is so clear, the automated testing strategy should mimic it. Use unit tests for the library function and integration tests for the feature. Some people will insist on unit tests for the feature, but that has the exact same drawbacks as manually testing it with a custom `main`.
I second this!
When working with a mature framework, or using a good library, features should usually come in the form of extensions. The purest extensions are those that are almost entirely declarative, i.e. you are just picking which pieces of functionality offered by the framework to compose into your new feature. When a piece of code simply composes, or declares constants, there is nothing to unit test. There's no such thing (at a unit level) as declaring the wrong constant or composing the wrong functionality. The declarations should trivially match your requirements, and (though we may have our opinions) there are no wrong or right requirements. If you write unit tests to re-assert declarative requirements, you will just have to change those tests as the requirements change without ever really protecting the "correctness" of anything. Also, these extensions are usually the most sensitive thing to API changes, and can double your clean-up effort if you have a framework API update.
Of course there are usually logical utilities and functional bits added with feature extensions, but those can usually be tested in isolation of the declarative bits. Their functional bits can always be made into a local mini-library, which is again just composed into the final feature, locally testable, and ideally not sensitive to changes to the API that the feature is extending.
High level integration tests are what you need to guarantee that you've composed these features properly to produce the desired effect.
My guess from the OP stating that there were hundreds of tests to change on an API change is that he was either testing declarative bits, or didn't have declarative bits properly isolated.
Did I do unit tests wrong?
I can't say for sure, but what I can say is that "trying to hit that famous 100% coverage" is nothing but a wild goose chase. To find out why, see this article: dev.to/conectionist/why-code-cover...
Is there an alternative?
Code changes happen all the time and unit tests have to change with them.
It's unpleasant but necessary.
However, if a large part of your architecture has to change (and this happens quickly/frequently) then the problem is not with your unit tests.
It's with the architects and the faulty/rushed decisions they make when deciding upon an unstable/unreliable architecture.
Are integration tests (black- or grey-box) enough when automated?
NO!
Unit tests and integration tests serve different purposes. They are complementary. They are not meant to be a substitute for one another.
Unit tests are meant to test code. They are like a defense mechanism against yourself (or, more specifically, against accidental mistakes you might make).
The idea is the following: it's possible that changes you make in some places have undesired effects in other places. That's where unit tests come in. They tell you: "No, no! If you continue with these changes, you will break something that was working well. Back to the drawing board!"
Integration tests on the other hand test functionality. They check if everything works ok when it's all put together.
Is TDD a placebo?
Certainly not. But like all things, it works only if used properly.
As a side note, don't be discouraged if your unit tests didn't catch any major bugs. That's very good! It means you're a good programmer who writes very good code.
If your unit tests failed every time you ran them, it would mean you're very careless (or in love and with your head somewhere else :)) )
Think of it this way:
If you hire a security guard and you have no break-ins, are you upset that you have no break-ins?
You probably feel that you're paying the security guard for nothing.
But trust me, if he wasn't there, you'd have more break-ins than you'd like.
Yes, I guess that's the problem.
After a few years of practice you write code that is pretty robust and the tests you write basically do nothing until the first changes to the software happen :)
From my experience unit tests are incredibly useful when developing code that is perfectly suited for unit tests, generally things that have predictable inputs and outputs, and don't have dependencies or global effects. On the other hand if you're testing boilerplate code with a lot of complex dependencies (i.e. an MVC controller) it's probably better to cover it with integration or acceptance tests.
You should move as much code as reasonably possible into unit-testable blocks, but going out of your way for 100% unit-test coverage leads to tests that aren't worth writing and updating.
Then there are tricks, like mocking outside services (so that you don't have to actually hit remote services when running acceptance tests) and comparison testing, i.e. not testing the contents of an XML document but just storing it and comparing output directly to it. When testing APIs I also automatically test inputs and outputs on endpoints against a specification, which is a pretty good way of testing both the endpoints and the specification.
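A rough sketch of that comparison approach (the file name and renderer are invented): instead of asserting on individual fields, store a reviewed copy of the document once and compare whole outputs against it.

```python
from pathlib import Path

# A previously generated and manually reviewed copy, committed to the repo.
EXPECTED_XML = Path(__file__).parent / "expected_invoice.xml"


def render_invoice_xml(order_id):
    # Hypothetical stand-in for whatever builds the real document.
    return f"<invoice><order>{order_id}</order><total>42.00</total></invoice>"


def test_invoice_matches_stored_output():
    assert render_invoice_xml(7) == EXPECTED_XML.read_text()
```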
I also think that unit tests are a great way to force yourself to write easily testable code, which is usually better structured than non-testable code :)
But in general code needs to be tested if you care about it working. Any endpoint you don't test will eventually be broken.
My take on unit tests is to avoid them. Write your software in a way that it could be tested easily, as this will keep your code decoupled, force you to explicitly inject external stuff, and more.
But if you have a decent type system and a bunch of integration/end-to-end tests, unit tests are not worth the hassle.
After all, you don't care about implementation details as long as your module/component/insert-similar-here does the correct thing.
Unit tests have the greatest ROI when either
On the other hand, unit tests have very low ROI when a feature is not business-critical and has requirements that change very frequently.
Note that the value of unit tests is like everything else: it depends.
As to alternatives, I've had cases where API tests (on a running test instance of the application) provided an immensely high ROI. Integration tests, in the sense of testing the collaboration of a chunk of your codebase, have for me always had a low ROI, because of the effort in setting them up while still only resembling actual production behaviour (due to the mocked parts).
I'm going to shamelessly plug my own Intro to Property-Based Testing ;-)
But seriously, PBT is a good secondary layer to proper unit tests. I'll be glad to answer any questions!
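For example, in Python with the Hypothesis library, a property test generates the inputs instead of you hand-picking them (a minimal sketch):

```python
from hypothesis import given, strategies as st


def my_sum_function(a, b):
    return a + b


@given(st.integers(), st.integers())
def test_sum_is_commutative(a, b):
    # The property must hold for every generated pair, not just the
    # handful of cases a developer happens to think of.
    assert my_sum_function(a, b) == my_sum_function(b, a)
```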
Yes, I read about this.
Even automatically generating inputs for JavaScript tests with the help of Flow type annotations.
I also liked the idea of mutation testing.
I'm surprised that no one has yet mentioned monitoring.
The alternative to building classic object-oriented software guided by tests is to develop microservices with extensive real-time monitoring and alerting. If something's broken, the service will go down, or a metric will spike up/down, and the developer who owns the microservice needs to fix it. This approach is sometimes called programmer anarchy and requires a high level of maturity across the whole team.
That's basically what I'm doing.
Didn't consider this as an alternative to unit tests until now.
I think it totally depends on the expectations of the system, how much experience you have, and the risk you can afford.
One of the systems I've been working on for about 14 years is a CRM. It's probably about a million and a half lines of code with a few hundred movable UI components. At one point we had around 10,000 unit + integration tests but have removed many of them. The thing is, it tolerates a fair amount of mistakes in edge cases, as they typically don't impact many employees at a time, because someone with knowledge of the business tested the feature before it went into production. My goal is to provide the business with a high ROI for development time, and over the years I've seen what works and what doesn't. These days I typically use very few unit tests. In fact, given the application, I try to limit the situations where I feel they are necessary at all, which causes fewer errors, faster turnaround, etc.
Most of my development is a UI, maybe some business logic, a model, and a DB. If I have tests, I make a class with dependency injection just to test the business logic, and in many situations there is no business logic to test. If there are very few critical paths or few people are using it, I may also not add unit tests.
Financial portions of an application typically receive many more tests.
If you are a new developer unit tests may be useful to help you understand the potential issues with the patterns you use.
Unit tests don't replace integration tests, or writing tests to reproduce bug reports (cases you missed the first time).
I think most new developers make things complicated enough to require tests because they are bored or because they think it is clever or they just don't think long term. It may unintentionally function as a training exercise. In my experience systems with a few repeated patterns over and over seem to stand the test of time much better. Much of the "complicated" code just comes from tried and true libraries you shouldn't be editing. The best business code is something someone else who isn't even a great programmer can sit down at and understand quickly so they can add additional features that are valuable to the end user.
You may, however, be in a very different situation. If you are writing a library to release, your company screams at you for every little error, your software will be installed in hardware and sold, it is customer-facing, it is life or death, etc., then I would change the way I write and test it accordingly.
As a note, one way I reduced complexity (and unit tests) was to not be afraid to move complicated things into the administrative user space where possible. This also allows the business to build groups of people who may not be "programmers" but who can set up complicated business logic on a test server, test it, and move it to production without a developer even being involved. Your software will be more resilient to change. Along those lines, I recommend moving anything that looks like a report to its own department, or at least its own thought process/repo.
/rant (since they canceled our fireworks due to rain) lol.
To me, you should treat tests like features and features like wizards. As Gandalf has said, "A wizard is never late, nor is he early, he arrives precisely when he means to." To me, 100% test is always too early. By writing tests to 100% completion, you are saying that your features are 100% done, and that your product, in turn, is 100% complete. So, when that requirement came in, there was no room for it, thus forcing a rewrite of the system to accommodate it. Tests aren't just an assertion that everything is complete, but a measure of how much work the product needs.
Another way to think about it is using the same quote, but focusing on the last bit: "...he arrives precisely when he means to." Rather than testing what you say (code), you test what you mean (intent/behavior/requirement). Sometimes, we developers only know the code; we don't know the requirements. If that happens, then any tests that we create may be worthless, as they do not express what was intended. Due to lack of communication, you did not anticipate a new feature, thus creating new work. Some may argue that dependency injection would have solved this, but unless you apply that to the littlest model, there will be some way that this will get you. This is why agile was about building smaller and communicating faster.
I like to think about tests in a different context than TDD. Rather than testing to mean asserting, I like testing to mean trying it on, sort of like shoes. If I like the result, I will lock it in. This idea harkens back to when we started programming: code a bit, complain about a missing semicolon, compile it, play with it to see if it works, repeat. With this same idea, we just gain two things: it is automated, and no manual CLI input is required. This brings a different mindset to testing. Rather than building the test first or last, it is built with the feature. Code a bit of the implementation, code a bit of the test. Code a bit more of the implementation, code a bit more of the test. Rather than the test being something to assert against, it becomes an explanation of your intent. This is what it means for a test to become your documentation. Of course, that should not be your only form of documentation. Just because it passed the unit test does not mean it is correct behavior.
So to take the point around behaviour further, I would recommend having a look at BDD (behaviour-driven development). There are two things in particular that will help. 1. The tests are written so that when reading a test you can quickly understand what it's doing and then drill into the functions to find out how it's doing it. 2. Separate the tests into separate functions and split each one into Arrange, Act, Assert. It feels like more code, but in the long term it will help with documenting behaviour for new developers and will help you to find/fix issues with the tests or with the application.
I agree with what other commenters already said - I think the essence is that unit tests should guide the design of your software. They will force you to use good practices like single responsibility etc. I've also worked with systems that had a large number of unit tests that didn't seem to add any value, but coincidentally the whole system (codebase) sucked ... so this was not a proof that TDD was useless, on the contrary, TDD didn't work because the design of the system wasn't good.
Using TDD does change the way I write my code but I feel like it improves the code in terms of readability and maintainability (probably performance too but I've not tested this myself) by ensuring I use pure functions and ensuring there are as few side effects as possible.
For projects that are going to be maintained long term (more than a couple of months) I find unit tests to be super useful. If it's a very simple or short lifespan project I agree that they add needless complexity to a project.
Interesting point with regards to only being able to test things you plan for. I guess this comes back to "devs shouldn't test their own code", and I'm not sure how we could improve this situation other than letting QAs write some test cases too, which is obviously not suitable for every business.
Not sure on alternatives really, I guess it depends on your situation and the project at hand but I don't think every project should use unit tests for the sake of using unit tests.
I haven't done any unit tests, nor TDD, nor any test-related development, although I was always interested in getting to know and use such a skill. But I was also always afraid of what "K" mentioned: that it requires double the time to finish a project, which I know for a fact is not such a good idea.
I read many comments here and many articles over the years, and I still do not understand TDD!!! Either developers don't really understand (or disagree about) what TDD is, or I am reading the wrong articles!
Can someone, as they say in job interviews, explain to me like I am a 5 year old, what TDD is and when to use it??
When I was at uni, unit testing didn't exist. We had "black box" and "white box" testing (which kind of map to integration and unit testing). But if anything, the idea that developers needed to write any testing code at all was seen as a general failure of Computer Science. There was an emphasis on things like formal verification (so, using mathematics to verify that code is correct) and the hope that you could just specify what you wanted and the program would be automatically created.
So I'm not surprised that your lecturer wasn't a fan of unit testing. In some respects unit testing is an industry-wide wrong turn, but then unit testing is a lot easier than some of the alternatives (have a look at Z Notation - when I was at uni, the course on it brought people to tears).
Hey, K!
I have to say that I do not think there are good alternatives for unit tests.
Your gripe with them so far is that they did not catch mistakes and slowed down development, right?
I do not think any testing approach will prevent programmers from making mistakes. Instead, you should focus on writing simple, SOLID methods that are easily unit-tested and then supplement with contract tests as explained in Integrated Tests Are A Scam.
As for slowing down development. How come? Normally when you have an API and tests for it, it should only grow in capabilities. If you have changes that require rewriting half the suite, the tests must be bad (sorry!), probably coupled to implementation details, not behavior.
As counter intuitive as it may seem, the answer to your plight is more, better unit tests, not less, and not something else.
In my opinion, unit tests are great as a scaffold while building something. I'm not sure how much they help after that. If the behavior that they're testing can be pulled up into the integration test, then that might be better, which would leave more room for refactoring the implementation (although pulling them up too early might be a waste if you don't feel the need to refactor the implementation).
So my stance at this point is: write unit tests, but don't get too attached to them.
I've been a developer for over 20 years and have never once written tests in a real project
Tell me your secret!
No secret. I just don't use automated tests
What language are you programming in?
In a professional capacity, I've worked with Visual Basic (back in the day for some Desktop apps), PHP, JavaScript, and Ruby
Nice.
How do you assure code quality?
If by code quality, you mean functioning code... manual testing. It's always worked for me
From my experience, you should have at least 10 times more code in unit tests than actual code if you want to do true TDD. Also, if you achieve 100% coverage as some code metric tools tell you, you've only covered every line with at least one unit test. This says nothing about whether you've actually tested every possible input, so there will still be bugs (if you find one, write a unit test before you fix it).
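(A tiny, made-up illustration of that last point: reproduce the reported bug in a test first, then fix the code until it passes.)

```python
def word_count(text):
    # The reported bug: len(text.split(" ")) miscounted runs of spaces.
    # Fixed by splitting on any whitespace.
    return len(text.split())


def test_word_count_ignores_extra_spaces():
    # Written to reproduce the bug report before the fix; it failed
    # against the old implementation and now guards the fix.
    assert word_count("hello   world") == 2
```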
As for the issues you mention above, I only have a few comments.
Code bases that use unit tests at least have the benefit of being written in a testable manner. And therefore can be more maintainable.
I once thought of unit tests as useless. Because it took me more time to write them than just bust out some code. Now having successfully written a template engine using TDD I'm a believer.
If you think that all you do as a dev is put your hands on the keyboard and start typing, then having to write unit tests seems like a waste of time. But what about all those hours we spend staring at a screen trying to actually write the code, or worse yet, trying to figure out why we wrote it that way and why the XYZ it doesn't work like I think it should?
Now come full circle with me. What if, while you were thinking about what to write or how to fix it, you just wrote some unit tests while you thought about your issue? Some great things start to happen. First, you think about the problem more. Next, you are forced to come up with possible inputs; go ahead, write that wacky test you don't think matters. Maybe, just maybe, it will help later on. Lastly, but not finally, you go back to having your hands on the keyboard more, but you're using the unit tests to help your thought process. And what you are left with is tested code that can be more easily refactored.
Now as for those 100 unit tests you had to refactor... Why not approach the rewrite the same way as new code? You had to think about the change, why not have some unit tests to show for your thoughts?
Lastly, if the actual typing part is taking too long, you're either doing something wrong or you don't have Visual Studio and ReSharper. ;-)
Alternative: Scenario testing
For API testing I prefer scenario tests.
A good way is docs.cucumber.io/, which is really strong.
I find this paper by Jim Coplien readable, and it makes a lot of sense: rbcs-us.com/documents/Why-Most-Uni...
TL;DR? Skip to the end of the document, he summarises well :)
Thanks for this solid practical advice :D