If this image wasn't what came to most people's minds when they heard "automated testing," many hours and meetings would be saved. As much as automation helps in testing, it isn't the answer to everything, and - in fact - it can add more work to your test load. Like any other software, it needs to be maintained, and as the product evolves over time (even with only small adjustments), the automated tests need the same level of care as any other part of the product, if not more.
Okay, enough of the lecture.
I have a project, written for both Android and iOS, that I have been given permission to add tests to - which made my day. I am trying to approach this professionally: test plan, user stories, decent documentation, and all. I spent several hours getting both the functional items and the stories written up (albeit in a brief form), so I knew what tests and checks needed to be done.
I can do white-box testing on this - he let me into the code, so I get to explore everything!
And I found a single test. I wasn't entirely sure what it was for, so I added it to my notes.
So off to explore! I knew I wanted to stay someplace in the Selenium ecosystem (after all, it's the one I have the most experience with), so I looked at the various options that were out there. I found two items that were recommended, and at first I thought of using them both - until I looked at the requirements.
Now, I fully understand why having a testing setup is so valuable, and why resources for testing can be an issue; the list of things to install is a quarter the line count of the program itself! So it was time to work out what I had, what I needed to install or update, and get ready to go.
This is a simple application - you get a question, then push the button to get the answer. For a cat-loving person, this is grand. It isn't, however, going to make money, being very simple and built as a learning project.
So, does this even need automated testing?
Yes, it would be nice to have: to make sure that the chances of it grabbing the identical question twice (or more) in a row are low, and to ensure that it doesn't error out when the number of jokes shown passes the number of possible questions.
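Those two behaviors are small enough to sketch. This is a hypothetical question picker, not the app's actual code: the class name, the re-roll strategy, and the sample questions are all my own assumptions, just to show what the checks would be asserting.

```kotlin
import kotlin.random.Random

// A hypothetical question picker illustrating the two behaviors worth checking:
// it should not serve the same question twice in a row, and it should not
// crash when the request count exceeds the size of the question pool.
class QuestionPicker(private val questions: List<String>) {
    private var last: Int = -1

    fun next(): String {
        if (questions.size == 1) return questions[0]
        var i: Int
        do {
            i = Random.nextInt(questions.size)
        } while (i == last) // re-roll instead of repeating the previous question
        last = i
        return questions[i]
    }
}

fun main() {
    val picker = QuestionPicker(listOf("Q1", "Q2", "Q3"))
    var previous = picker.next()
    // Ask for far more questions than the pool holds: no crash, no back-to-back repeats.
    repeat(20) {
        val current = picker.next()
        check(current != previous) { "served the same question twice in a row" }
        previous = current
    }
    println("ok")
}
```

An automated test for this would be a handful of lines, but only after the install list from earlier is in place - which is exactly the trade-off in question.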
But is it a need? After thinking about it for a while, I decided that, in this case, nearly doubling the size of the program just to test this one item wasn't in the best interest of having an enjoyable app. The tests, done manually alongside making sure of the hundreds of other things that might be an issue, didn't add much additional time (overall, less than 10 minutes - excluding writing up the results) and the results were what I was hoping for.
And those results? The app looks good, the cats are cute, and some of the jokes are great. The only automation I did with this (and suggested changes to the owner) were accessibility items that, to me, didn't show - mostly adjusting text size.
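The text-size idea can be sketched too. This is a minimal illustration of the kind of check an accessibility scanner performs, not the tool I used: the 12sp floor, the element names, and the data shape are all assumptions for illustration.

```kotlin
// A hypothetical on-screen text element: an id and its size in scalable pixels (sp).
data class TextElement(val id: String, val sizeSp: Float)

// Return the ids of any elements whose text falls below a minimum readable size.
// The 12sp default is an assumed threshold, not a value from any specific tool.
fun undersizedText(elements: List<TextElement>, minSp: Float = 12f): List<String> =
    elements.filter { it.sizeSp < minSp }.map { it.id }

fun main() {
    val screen = listOf(
        TextElement("question_label", 18f),
        TextElement("answer_button", 10f), // too small to read comfortably
    )
    println(undersizedText(screen))
}
```

A check like this runs in the background while you tap through the app manually, which is why it added essentially no time to the session.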
So, the tests weren't totally manual - but a tool that could catch these details without fuss, and that ran in the same time as the manual tests, was a nice addition, and it caught things I would not have noticed.
Manual? Automated? Neither - a day's thought gave me the proper balance of tools and time to do the best possible job, for the highest quality.