
I'm a Tester, Ask Me Anything!


I have worked as a Software Tester in varying capacities for over 20 years. I've managed QA Departments, started new QA Teams at various Start-Ups, and combined QA and Release duties at a few companies.

Have to work with Testers on your Team? Need to write tests? Unclear on what tests even are? Feel free to ask me anything!

 

I currently run all the testing in my team, whilst also being a developer. I find this an incredibly difficult task because I think about, use, and test the applications like a developer.

How would you suggest that I learn how to disassociate myself from being a developer to test an application correctly?

 

I find that hard too, because the thought processes for developing and testing are slightly different. It's not insurmountable, though. Part of disassociating yourself is to role-play a little: come at the test task with a persona. Customer, User, Hacker - whatever. Consider what this persona wants to accomplish with what you are testing and try to simulate those actions.

My easiest piece of advice is to look at the software and think "if I was going to break this, how would I do it?"

If you have coded to requirements, check against the Business Use Cases. Work with the PM, if you have one, or with someone on the Customer Support Team - they are an INVALUABLE resource. They talk to your customers all the time, so they have a good idea of how customers are using the product.

Saying "Think outside the box" is easy; doing it is hard. Figure out the Happy Path through what you are coding, then look at each step and ask "what if I did z instead of x here?" and see what happens. Is the code resilient? Change inputs to something unsupported, try to paste in values longer than the fields allow. Basically, take all the inputs you would normally use and replace them with something completely different.
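To make that concrete, here is a minimal sketch of "off the happy path" probing. `validate_username` and its rules are hypothetical stand-ins for whatever your own code exposes; the point is the pattern of one happy-path check followed by deliberately hostile inputs:

```python
# Hypothetical example: a field validator and a set of "break it" probes.

def validate_username(name: str) -> bool:
    """Accept 3-20 alphanumeric characters; reject everything else."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Happy path first...
assert validate_username("alice42")

# ...then deliberately try to break it.
assert not validate_username("")               # empty input
assert not validate_username("x" * 1000)       # far longer than the field allows
assert not validate_username("alice; DROP--")  # unsupported characters
assert not validate_username("  alice  ")      # surrounding whitespace
```

Each failing probe that the code handles gracefully is evidence of resilience; each one that crashes or misbehaves is a bug found before a customer finds it.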

 

Some fantastic advice here, thanks for taking the time to do that :)

My pleasure, I hope it helps. Feel free to ask again if something else comes up.

 
 

How would you test your program if most of the problems and bugs come from external data, and you don't have to handle them in any way other than logging them?
I have an application that's a gateway between a JS frontend in-browser application and a backend data server. Most problems are either things not compiling or data failures, which someone else takes care of. Right now, my unit tests focus on single components and the rules that process the data, but unit testing anything else is futile. I have integration tests to check that the app handles the data that's typically returned, but that's all. Is there a better way to automatically test my application?

 

Testing a Black Box is just that: Black. When everything is an external dependency that you have no control over, it's really hard, especially if the structure of the data you receive changes and you don't know about it until failures start showing up and you have to start checking. Or you check the feeds first and log an issue when things are not in the right format.

About the best way of handling external things is mocking them. Simulating all your external dependencies and running Unit and/or Functional tests against those mocks is a good way to expand your coverage. One of the bigger types of testing I hear more and more about now is Contract Testing, which I am sadly not as versed in as I would like to be, since I haven't had a need for it, but in a situation like yours it may be a good fit. At least with mocks you should be able to automate your tests by bringing them up in a framework and running whatever tests you need, and if you have many external feeds you should be able to cheaply create multiple scenarios to test against.

 

Right now I have two or three API endpoints to check. They are queried multiple times, though.
I feel it is useless to check all of this with unit tests :/
When I was doing a huge update, I wrote multiple functional tests that worked on real data (all of the data is read-only for me, anyway), and checked what I had to check manually, just in the debugger :/
However, in my functional tests I covered the API endpoints and was able to tell which failures were due to my bugs and which were caused by external data.
Sadly, the corporation's coverage reporting tool doesn't acknowledge my tests as 'proper tests' ;(

My view is that if you cover the functional part once, that's enough; while different data inputs may be useful, you shouldn't need more than one test to cover each case. Tests should always be idempotent, though.

From what you're saying, your testing is complete for me, and pretty much what I would do. I occasionally add destructive tests when I can, but only if they are necessary. They should also clean up after themselves and not leave the system in a bad state. Tests should be invisible to each other, so it should never matter in what order they are run.
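A small sketch of what "invisible to each other" looks like in practice (the test and what it writes are hypothetical): the test creates its own fixture, asserts against it, and removes it even on failure, so running it twice, or among other tests in any order, leaves the system unchanged:

```python
# Hypothetical self-cleaning, order-independent test.
import os
import tempfile

def test_writes_record():
    fd, path = tempfile.mkstemp()      # private fixture, no shared state
    try:
        with os.fdopen(fd, "w") as f:
            f.write("record")
        with open(path) as f:
            assert f.read() == "record"
    finally:
        os.remove(path)                # clean up even if the assert fails
    return path

# Idempotent: a second run behaves identically, and nothing is left behind.
leftover = test_writes_record()
test_writes_record()
assert not os.path.exists(leftover)
```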

Coverage Tools can vary, but in some of them I know you can only capture certain types of actions. That's a deeper subject though.

 

Do you have any suggestions for career progression for new QAs?

I've been in my current role for 4 years and rock Protractor and Jasmine, but I don't know testing fundamentals like the ISTQB exam. I love to code but hate to write and execute manual scripts. And likely, my ability to come up with test cases is not as great as it could be, but it's not like QA gets the same kind of classes as software engineers do. I'm far more comfortable building out new test frameworks and optimizing test runtime.

It seems like if I'm to stick with QA, I'd either have to

  1. lead, so writing test plans and scripts and stuff
  2. manager, so HR
  3. move onto other automation languages, which my team doesn't need

It seems like I'm stuck unless I pivot to development.

 

I've never done any of the test exams; I actually started my career before most of them came into being. I may be jaded, but I look on most of those exams with suspicion. I know that in some countries they are necessary, as companies in certain places like them as a way to show what you know. For myself, I can show that in an interview and in the work I have done. Blogging and an online presence can also demonstrate the same things as most of those exams, which just expect you to regurgitate facts rather than speak about them in depth.

If you want to progress, decide what you want to do. I did management, and if you like that and can work with and inspire people, it's a good place to be when it fits your goal. I am more technical; I still coded and tested as a manager, and that's been my place of choice, as I like being able to write and create frameworks that others can use and that make my work easier. Focus on what you want to do, and learn the languages you like, as well as those that are in demand in the market you want to work in. That's key: while you may like one language, if not many companies are using it, learn what excites you but also learn what is in demand. That way you cover both your goals and a work path.

You don't need to be a developer if you don't want to. I can code, but it's not a natural fit for me. I'm probably slower at it than most people, but I can learn a language fairly quickly if that's what the job requires. I fit into a QA Ops role overall: I can keep the lights on and keep things going while also keeping an eye on issues that may be coming and highlighting them before they become a problem. Everyone has their niche. Find yours and highlight it; you'll find a company that needs it, and when you really enjoy what you do, people notice.

 

How do you decide what NOT to test? 100% test coverage is a nice thing, but realistically, given time constraints or other considerations, it's not achievable most of the time, and perhaps not even desirable.

 

I've had the "completely tested" argument with people before, and it depends on what you are testing. With small applications you can get close, but I don't realistically feel you can get to 100%; there is always something unexpected. I love this question, though, since usually you get the opposite, and with the time crunch that often happens you have to choose what to cover.

This is where I use Risk Mitigation. Which areas carry the lowest risk of issues? Which parts see less use, so that if there is a problem it may be small, or affect the fewest users? Learn to assess what's not going to be a problem; it takes time and familiarity, but it's a great skill to work on.

Always know your "Happy Path" and cover that. If it's a web application, I have tried using web logs to see which pages are hit most often. If you have Google Analytics, you can use its tools to see which areas of a web application Customers use more than others...and always touch those.
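The web-log idea can be sketched in a few lines. The log lines and page names below are made up; the pattern is simply counting which paths customers actually hit, so the most visited ones get test attention first:

```python
# Hypothetical access-log mining: rank pages by hit count.
from collections import Counter

log_lines = [
    "GET /checkout 200",
    "GET /checkout 200",
    "GET /profile 200",
    "GET /checkout 500",
]

hits = Counter(line.split()[1] for line in log_lines)
priority = [page for page, _ in hits.most_common()]
assert priority[0] == "/checkout"   # the real happy path: cover it first
```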

I always try to automate something every sprint, or whenever I get time. That way I can quickly pass through many areas, let the automation handle the details, and cover new stuff myself. So older stuff - things whose code is untouched - I may leave alone if there is no time, or quickly pass through if I have a moment.

 

I like the data driven approach of using GA to prioritize things to test.

That works well too; there is no one right way to do it. You have to match the environment you have, along with the business needs and pressures, to provide the best quality product possible given your constraints. I've found that a mix of strategies gives me better confidence, which I can then back up when asked to provide detail to Upper Level Management, who don't want or need the details but need context to understand a Go or No Go.

 

What is your experience with culture of testing in start-ups? I have seen many start-ups test changes on their customers as it is the quickest way to move especially at an early stage. At which stage of growth do start-ups focus on QA?

 

Typically they come in after something is written, and either Customers are getting on board or there has been a deploy and people are using the code somewhere. Issues are reported, and then someone is hired to start checking code before the Customers see it. I've come into startups as anywhere from the 10th to the 30th person, depending on the size, where junior Developers are given the task of writing more Unit Tests, or even doing all the Testing, as a way to familiarize themselves with the code.

I came in to formalize the process and check the code, and because I am lazy I start automating things, so I can have something check the simple stuff while I focus on more complex tasks or pieces of code. With start-ups it's easier, as people want their code to work and the company to be successful, so it's easier to have conversations about something being broken, and fixes can happen really fast. As companies grow and more process is in place, with more defects, you tend to build up tech debt, which can get out of hand if you don't manage it well. In a start-up your code base is often small enough that you can respond quickly and deal with issues right away.

The most successful Development organization I ever worked with had a "Release Day", where pizza was brought in and each Developer wrote a script (basically a set of tasks to check, or things to do with the UI). Those scripts were handed out to people who had no familiarity with that part of the product, and they went through them. Some stayed on script, some strayed or just played around - sort of like how some Context Driven Testers work. This worked well because you can't always test your own things with an open mind, but something unfamiliar will often allow people to open their minds and think more outside the box.

My problem with continually testing things out on Customers is that once they start paying for a product, or relying on it more, issues and risks become larger. Then you start getting a reputation problem and can often lose Customers who expect the finished product to be "defect free". Putting a Beta Release banner on things helps, as at least Customers know what they are getting into, but after a period of time you have to start testing before release; you can't build a Customer base on a Beta forever.

 

Would you consider integration tests (without unit tests or other kinds of tests) a valid way to check that the system is doing what it needs to do?

We did this right here: github.com/coretabs-academy/websit...

We run Postman tests against a Django API with every commit (in GitLab)... is that enough, or do you suggest doing more than that?

 

Well, it checks part of it. Integration Tests are good when you put all the pieces of a larger application together in one environment. You still need Unit Tests, and perhaps Functional Tests if possible, to make sure the parts are doing what you expect in isolation.
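To illustrate the "in isolation" part (the function here is a hypothetical example, not from the linked project): a unit test exercises one small piece with no server, database, or network involved, which is the kind of failure an API-level Postman run can detect but not pinpoint:

```python
# Hypothetical unit under test: runs entirely in isolation.

def normalize_email(raw: str) -> str:
    """One small piece of the system, testable with nothing else running."""
    return raw.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob@example.com") == "bob@example.com"

test_normalize_email()
```

When a test like this fails, you know exactly which piece broke; when only the integration run fails, you still have to go hunting.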

 
 

They should be testing whatever it is they are coding. Like TDD says, "Write your Test before your Code". After all, if you don't have a test, how do you know your code is doing what you think it is?
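TDD in miniature looks like this (a hypothetical example; `slugify` and its rules are made up for illustration). The test is written first and pins down what the code must do; the code is written second, and only enough of it to make the test pass:

```python
# Step 1: write the test first. It fails, because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough code to satisfy the test above.
def slugify(title: str) -> str:
    return "-".join(title.split()).lower()

test_slugify()   # now it passes
```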

 

Hey man,

Where can I learn testing? Any books, guides, references, tutorials, etc.?

 

It depends on what you want to learn, but there are plenty of books out there you can read and learn from. Janet Gregory, Michael Bolton (not the singer), the Bach brothers, and Lisa Crispin are good authors to start with. They cover different types of testing, so they will give you a breadth of knowledge you can then specialize in when you find something you like. They have blogs you can follow, and there are QA forums out there as well; SQAForums and Ministry of Testing are ones that come to mind.

 

How would you teach or train a newbie tester or someone who's interested in testing/qa role?

 

Find out what their strengths are and what they are interested in, and give them something to work their skills on. Once they have some confidence, assign a harder project that will require more work and research, but check in with them to make sure they are on the right path. We recently had an intern we did this with. He knew our space - he was with us last summer - so we knew what he was capable of, and we gave him a good task that was easy for some of us but challenging for him. He did it, taking longer than it would have taken us, but we helped him along, regularly checked in, and gave advice on what to search for and look at for examples. It worked great, and at the end we had some pretty nice additions to our test framework.

Testing is wide; there are a lot of specialties and niches in it if you look around. There are a bunch of different testing "schools"; I'd get someone new to search for what resonates with them. Get them looking at books or sites to increase their knowledge. Lunch and Learns are great: not only do you need to learn something, you need to speak to it. I find that more valuable, since I learn better by teaching someone else - you get questions you didn't think about, and if you can't describe something you know well to another person, how well do you really know it?

 

What are some of the challenges you run into in the day to day? Any common themes you have seen in your career that made testing harder than it had to be?

 

Most have been in regards to the framework choices and being able to handle new functionality when the product has changed. It's made me focus on open frameworks that can support multiple languages, especially when trying to deal with testing front and back ends. UI testing and backend (API or Database) are different and sometimes require different types of tests.

As for common themes, what I do over and over is try to teach people how best to handle testing outside of their own functional area. It's easy to write tests for a method when it's used in one area; common libraries are harder and require additional thought and planning. I tend to lean towards "teach you how to test" rather than writing specific tests, having philosophical discussions on test strategy with developers - sometimes shallow ones to get them on board, and with others in depth. It depends on the person, but I try to leave the environment better than I found it.

 

Can you ensure an application is bug free? If you cannot, how do you make your customers or clients feel safe using your application?

 

I don't think it can be done realistically; there is always some combination of things that will affect someone. There are too many environments, browsers, add-ons, and tools to support to cover everything. See what your Customers use and focus on the most widely used browsers/tools, or have the company decide on a "Works best on..." banner to show you have confidence your web app will work great on, say, Firefox; IE may have issues, but if it's not a focus, note that so people know. Communication is key.

Be as thorough as you can with what you will support, and try to be responsive to Customer issues; show them you are listening. That goes a long way toward building Customer confidence and showing you are working on resolving their issues. Having a dialog with Customers, even if it's just with an "Engineer", often makes them feel heard and can be really calming.

Michael
QA and Release Manager/Engineer and tool smith. I build Test Frameworks using whatever language suits the environment. I'm a lifelong learner, and that's what gets me up in the morning.
