After several years in the quality assurance (QA) space, we've had a front-row seat to how companies big and small do QA. The area where we see the most struggle is setting up automated end-to-end tests. End-to-end tests go through your application the way a user would, to check that critical workflows like signing up aren't broken. These tests can be difficult to create and integrate into your process.
If you don't have automated tests yet, getting started can feel daunting. In this guide we explore three common pitfalls companies face and explain how to avoid them.
Pitfall #1: Trying to run before you walk
We've had this conversation more than a few times now:
Company: "We need to run hundreds of tests on every browser and device size on every commit."
Us: "Cool! How many tests do you have now?"
Company: "...zero."
While it's great to aspire to running hundreds of tests on every commit, these lofty goals can paralyze companies from getting started in the first place.
It's easy to get hung up on a long list of "must haves" like testing different versions of Chrome or creating preview environments for every commit. When starting from scratch, the only "must have" should be running a single test however you can. Otherwise, you risk constantly delaying your testing efforts while searching for the perfect solution.
Avoid putting undue pressure on your team to build the perfect QA process upfront. Instead, start with the smallest attainable goal and go from there.
If you don't have any tests yet, create one basic test and run it only on Chrome. If you don't have a way to test every commit, run tests once a day on staging. You can always improve your process later, without sacrificing coverage today.
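To make "one basic test" concrete, here's a minimal sketch of what it could look like, assuming you use a framework like Playwright; the URL, selectors, and test account below are hypothetical placeholders to swap for your own.

```typescript
// login.spec.ts - a minimal first end-to-end test, assuming Playwright.
// The URL, selectors, and test account are hypothetical; swap in your own.
import { test, expect } from '@playwright/test';

test('user can log in', async ({ page }) => {
  // Visit the login page on staging
  await page.goto('https://staging.example.com/login');

  // Fill in credentials for a dedicated test account
  await page.fill('input[name="email"]', 'test-user@example.com');
  await page.fill('input[name="password"]', process.env.TEST_USER_PASSWORD ?? '');

  // Submit and confirm we land on the dashboard
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);
});
```

Running it only on Chrome is one command if your config defines a chromium project (`npx playwright test --project=chromium`), and a once-a-day run against staging can be as simple as a scheduled CI job that executes that command.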
In short, don't beat yourself up for not having your dream QA process in place. Every company has to start somewhere. The most important thing is to start at all.
Pitfall #2: Using the wrong (or no) success metrics
There are many ways to do QA, such as having developers write tests or using an outsourced team. Regardless of how it's implemented, you should measure the success of your QA efforts.
Ultimately, QA is about providing your customers with a quality product. Automated tests check that user flows (like adding an item to the cart or logging in) aren't riddled with bugs. Therefore, a "north star" QA metric should tie into your user experience.
In practice, many companies don't know whether they're doing a good job with QA. The result is treating QA like a box to be checked, rather than a way to earn customer love and gain a competitive advantage.
We think the single best QA metric is user workflow coverage: the percentage of common user actions (like creating a comment or changing a password) that are covered by automated tests.
While it's impossible to get to 100% coverage (there are infinite ways to use a site), prioritizing test cases based on what your users are actually doing keeps QA aligned with your larger business goals.
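As a rough illustration, here's a back-of-the-envelope sketch of how you might compute workflow coverage once you have a list of your most common workflows; the workflow names are illustrative, not real data.

```typescript
// A back-of-the-envelope sketch of "user workflow coverage".
// The lists are illustrative; pull the real ones from your analytics
// and your test suite.
const commonWorkflows = [
  'sign up',
  'log in',
  'add item to cart',
  'check out',
  'create a comment',
  'change password',
];

const testedWorkflows = new Set(['log in', 'add item to cart']);

const covered = commonWorkflows.filter((workflow) => testedWorkflows.has(workflow));
const coverage = covered.length / commonWorkflows.length;

console.log(`Workflow coverage: ${(coverage * 100).toFixed(0)}%`); // 33%
```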
Today, few companies have the tooling in place to understand the most common and most buggy user journeys. One place to start is by reviewing user data from services like Hotjar and FullStory. You can also use QA Wolf for this, which converts your analytics into test cases.
(Image: example of user interactions recorded with QA Wolf)
Pitfall #3: Not treating automated testing as an engineering problem
Automated QA can be as simple as running one test on staging. Still, it's critical to keep your team's long-term success in mind from day one.
It can be tempting to do something - anything - to get QA off your plate. A common trap is delegating test automation to non-technical team members like interns or the customer success team. While these team members can contribute to an existing process, they typically don't have the skills to create a good process themselves.
There are two basic parts to standing up automated QA: creating stable tests and integrating them into your process. Both require specific engineering knowledge. Just as you wouldn't rely on a product manager to shape best practices for your React components, you should give QA the engineering attention it deserves.
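As one illustration of the "integrating them into your process" half, here's a sketch of a Playwright config aimed at a recurring staging run; the base URL and settings are assumptions to adapt, not a recommendation.

```typescript
// playwright.config.ts - a sketch of wiring tests into a recurring staging run.
// Assumes Playwright; the baseURL and the choices below are illustrative.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  use: {
    // Point the tests at staging (or override via an environment variable)
    baseURL: process.env.BASE_URL ?? 'https://staging.example.com',
    // Keep a trace for any failed run so engineers can debug it
    trace: 'retain-on-failure',
  },
  // Retry once before reporting a failure, to absorb transient flakiness
  retries: 1,
  reporter: [['list'], ['html']],
  // Start with a single browser; add more projects later
  projects: [{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }],
});
```

From there, a scheduled job in your CI system can run `npx playwright test` once a day and publish the HTML report.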
One manifestation of this is the rise of "no code" testing tools. Though the message is appealing (I don't have to think about QA - my intern can do it!), we have yet to see these tools work well in practice. Every site has its own quirks, and only the flexibility of code can handle them. No code has its place elsewhere, but when it comes to testing: just say no to no code.
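For a sense of what those quirks look like, here's a sketch of a test that handles a cookie banner that only sometimes appears, something that's awkward to express in most no-code tools; the selectors are hypothetical and the config is assumed from the earlier sketch.

```typescript
// checkout.spec.ts - a sketch of handling a site-specific quirk in code.
// Assumes Playwright and the baseURL from the config above; selectors are hypothetical.
import { test, expect } from '@playwright/test';

test('checkout loads even when the cookie banner appears', async ({ page }) => {
  await page.goto('/checkout');

  // The banner only shows up for some sessions: dismiss it if present, otherwise move on
  const acceptButton = page.locator('#cookie-banner button:has-text("Accept")');
  if (await acceptButton.isVisible()) {
    await acceptButton.click();
  }

  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();
});
```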
If you don't have the time or skills to set up a process yourself, find someone (internal or external) with the engineering chops to build that solid foundation. Then you can invite more of your team to take QA to the next level.
Conclusion
We hope that this post has made getting started with testing seem less scary. All companies, including yours, are capable of taking that first step. And when you do, your customers will thank you. :)
At QA Wolf, we manage QA for you by converting user analytics into automated tests. If you'd like to learn more, or just want to talk testing, you can reach me at laura@qawolf.com. 🐺