It's a decades-old debate whether developers should write tests or not. While some think it's important, others believe it may be too much to ask of developers.
How much time do you spend writing tests?
There is only one valid answer: not nearly enough.
Even as a QA, I feel that
Sadly the only correct answer.
Same, it often gets left till last.
Approved
I spend all of my time writing tests, mostly because the more time I spend writing tests the less time I spend writing code, and the less time I spend writing code, the less time I spend creating production bugs.
By only ever writing tests (for code that doesn't exist, mind you), I never create any bugs in production code.
The next level-up in my career will probably require me to stop writing code altogether, but that's too hard.
Ever-changing requirements / designs in a super rapid-dev team will break good habits like TDD, to the point that devs get too lazy to write tests for potentially throw-away code. Such is the curse of the "ship fast" mantra of today's software dev reality.
That's an interesting question. And while the correct answer is (as usual) "it depends", I think it would be cool to have some real numbers here.
Of course, this is just an estimate, but I think I spend around 25-50% of my coding time in my test suite. It's a big range, I know. But as I said, it depends. If I'm fixing a bug, I will write a test that reproduces it (sketched below); that normally doesn't take too long, but the actual change to the production code to fix the bug might take a lot longer.
In contrast, while working on feature development, I spend way more time testing. I think this is when I reach something around 50% from time to time.
I would love to hear some more people with actual numbers/estimations.
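For instance, the bug-reproduction step might look like this (a minimal sketch in pytest; `parse_price` and the failing input are made up for illustration):

```python
def parse_price(raw: str) -> float:
    """Hypothetical function under repair; it used to choke on thousands separators."""
    return float(raw.replace(",", ""))


def test_parse_price_handles_thousands_separator():
    # Written first to reproduce the reported bug; the production fix comes after.
    assert parse_price("1,299.00") == 1299.0
```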
So there are two main parts to the question. The first is how you test deterministic code, where a given input always produces a given output. The other is how you do probabilistic tests for ML predictions.
For deterministic unit testing, I write the test first. If I write the test right, it usually helps guide the overall code. Under tight deadlines though, you need to focus on broad inputs and broad outputs: if you don't have time to test one function, can you test a group of functions? The general rule I use is that if the user sees it, I would like to test it. I'm still figuring out best practices for probabilistic testing.
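A minimal sketch of that test-first flow for a deterministic function, assuming pytest (the function and cases are hypothetical):

```python
import pytest


def slugify(title: str) -> str:
    """Hypothetical deterministic function: the same input always gives the same output."""
    return "-".join(title.lower().split())


# Written before the implementation, these cases pin down the input/output contract.
@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),
    ("  Spaces   Everywhere ", "spaces-everywhere"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```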
I think I spend sufficient time writing tests. Testing is the same as writing good code. It's a balance. You don't want your code too dirty, or overengineered. You want it to be a relatively simple solution for what you need today, that is easy to work with tomorrow. That's a sweet spot somewhere between too dirty and overengineered (perfectly decoupled).
In terms of tests: Tests provide confidence, but they cost time (both to write the actual test and to change it later if you make certain code changes that require the tests to also change). There is a sweet spot where they provide maximum value. Building a toy front end app? A little confidence is sufficient. Building a life-critical application? You want maximum confidence, doesn't matter if writing the code takes 4x as long as it would with no tests (or just a few tests).
For something like an ecommerce app or your average commercial project, you want pretty good confidence that everything works. That means that sometimes you only need a few tests cases and other times you need many. In general, I'd say 25% to 50% of my time is spent writing tests. This includes unit, integration and end to end tests.
That time is reclaimed because I don't have to spend as long debugging while writing code or fixing past bugs. Better yet, there are fewer critical bugs that cost the company money.
I think it depends on the software team and the development life cycle.
If the specifications are complicated and the team is small, I sometimes don't take much time to write tests.
If the package is released for or used by other projects, or is reusable code, I will take more time to write tests :).
The software I'm working on isn't web based, so I'm not sure if this is relevant.
I'm working to a specification, and each point in the specification must have a test. The tests have to have a specification, too, which I have to write (just a setup, input, expected output type thing). There are around 140 points in the specification.
It doesn't help that I'm new to the particular testing software we're using, so I've had to figure out how to even start writing tests.
Suffice to say, I've been writing tests for weeks and I'm not done 😅
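For what it's worth, that setup/input/expected-output shape maps naturally onto parameterized tests; a minimal sketch, assuming pytest (the function and spec points are hypothetical):

```python
import pytest


def calculate(op: str, a: int, b: int) -> int:
    """Hypothetical unit under test, standing in for one point of the spec."""
    if op == "add":
        return a + b
    if op == "sub":
        return a - b
    raise ValueError(f"unknown op: {op}")


# Each tuple mirrors one specification point: (setup/input, expected output).
SPEC_CASES = [
    (("add", 2, 3), 5),
    (("sub", 5, 2), 3),
]


@pytest.mark.parametrize("args, expected", SPEC_CASES)
def test_spec_point(args, expected):
    assert calculate(*args) == expected
```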
It depends a lot on how much business logic is involved for me. If it's mostly business logic, I can spend just 20% of my time on tests. However, there are plenty of situations where writing more "greenfield" features can take more time. As an example, I had to write an integration with a third party. We didn't have a good pattern for testing these, so I had to do a bit of research (I decided to use VCR.py).
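For reference, VCR.py records real HTTP traffic into a "cassette" file on the first run and replays it on later runs; a minimal sketch (the endpoint and cassette path are placeholders):

```python
import requests
import vcr


# First run hits the real network and records the exchange into the cassette file;
# later runs replay the recorded responses, so the test works offline and stays fast.
@vcr.use_cassette("fixtures/third_party_api.yaml")
def test_third_party_integration():
    resp = requests.get("https://api.example.com/v1/status")  # placeholder endpoint
    assert resp.status_code == 200
```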
I never use tests.
If you understand your code, you understand the implications of your change, and therefore know what you need to manually test.
A rough average across 25 years as a developer: approximately 0 minutes per project.
I think it's about 50% of the time spent developing a feature.