To a certain extent, I love unhappy paths, but too often I see someone who tests them manually apply the same approach incorrectly to code.
I know some QAs that would very happily write that the sky isn't green, and the sky isn't purple, and the sky isn't red, when they could just write that the sky is blue and be done with it.
While that may seem silly with that example, consider an API integration test that does something like:

```javascript
expect(Object.keys(getResponse)).not.toContain('links');
```

Lovely, we no longer use HATEOAS links. Good job, team. But that getResponse doesn't contain 'puppies' or 'kittens' properties either. It might have a 'link' property because someone got cheeky and changed the name on us.
`expect(Object.keys(getResponse)).toEqual(['thing', 'stuff', 'bunnies']);` gives us better info. The negative version would be testing that being unauthorized gives us `['error', 'message']`, but too many people seem to think the answer is looking for puppies and kittens and verifying they're not there.
Somewhat related, but using De Morgan's Laws in tests is fun, too, especially during code review. `.not.toBeLessThan` is the same as `.toBeGreaterThanOrEqual`, but that second matcher is only in more recent versions of Jasmine.
Front-end Engineer and Tester. I have a course at https://learn.webdriver.io and I do free coding tutorials at https://www.youtube.com/user/medigerati/
Some very good points! I think this would make a great follow-up post: going through different scenarios and how to ensure you've covered the major ones without going too far into testing unneeded ones.
I think it stems from the assumption that more tests always mean better. And having lots of tests obviously means we'll never have bugs, ever.
Bugs will always happen, but good tests mean they get caught earlier and hopefully before things get merged to the main codebase, so yay! But that doesn't mean there are no bugs. And that doesn't mean that there are no missing requirements. It just means tests passed :P
And there will always be someone writing a test to verify the sky isn't red, and someone looking at the metrics and being happy that test count and coverage increased.