Dasha Tsion

Leading a QA Team: Building QA from Scratch and Balancing Manual & Automated Testing 🚀

When I joined Trainual, there was no QA team at all. I was the very first QA engineer in the company.

My initial challenge was to establish a testing process from scratch: set up standards, introduce metrics, and prove the value of QA in a fast-growing product team.

As the product and company grew, I helped expand the QA function — building a team of four (now seven) manual testers and one automation engineer — while continuing to serve as QA Lead.

This article shares how I built QA processes from the ground up, and how we learned to balance manual and automated testing to deliver quality at scale.


👥 Team growth

  • Started as the first QA – responsible for defining processes, metrics, and regression coverage.
  • Expanded into a QA team – four manual testers focusing on exploratory, regression, and complex scenarios.
  • Added automation – one automation QA engineer to accelerate regression testing and improve release confidence.

The mission: build a QA process that supports both speed and quality as the product scales.


🎯 Initial goals of QA Autotests

From the very beginning, we defined clear objectives for introducing automation:

  • Reduce regression cycle time.
  • Catch repetitive issues earlier.
  • Provide confidence for fast releases.
  • Free up manual testers to focus on higher-value exploratory testing.

The automation suite lived in its own repository, independent of the main application repo, ran against the staging environment, and became part of our release cycle.
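
To make that setup concrete, here's a minimal sketch of how a standalone regression suite can be exposed as a Rake task. The task name, file layout, and the BASE_URL variable are illustrative assumptions, not our actual repository:

```ruby
# Rakefile — minimal sketch; paths and names are illustrative.
require 'rspec/core/rake_task'

# Runs the regression specs and writes an HTML report.
# The spec helper reads BASE_URL to point Capybara at staging.
RSpec::Core::RakeTask.new(:regression) do |t|
  t.pattern    = 'spec/regression/**/*_spec.rb'
  t.rspec_opts = '--format documentation --format html --out reports/regression.html'
end

task default: :regression
```

A nightly CI job can then run something like `BASE_URL=https://staging.example.com bundle exec rake regression` after each staging deploy.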


βš–οΈ The value of automation vs manual

We quickly found where the balance between automated and manual testing lay:

  • Without automation → regression cycles took our five-person QA team days to complete, slowing down delivery.
  • With automation → the regression suite ran overnight (the main team works in the USA, the QA team in Ukraine), so feedback was ready by the start of each US working day.

Manual QA didn't disappear — instead, testers shifted to areas where automation couldn't reach: exploratory testing, usability, edge cases.


🛠 Our technical stack

To build automation, we relied on:

  • Capybara for frontend coverage.
  • Selenium WebDriver for complex scenarios.

Regression suites ran nightly, producing reports on stability, failures, and coverage percentage.
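
For readers unfamiliar with this stack, here is a hedged sketch of what the wiring and a single regression spec can look like with Capybara driving Selenium. The headless Chrome driver, environment variables, and login selectors are illustrative, not our production code:

```ruby
# spec/spec_helper.rb — illustrative Capybara + Selenium wiring
require 'capybara/rspec'
require 'selenium-webdriver'

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless=new')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.default_driver = :headless_chrome
Capybara.run_server     = false  # we test a deployed staging app, not a local server
Capybara.app_host       = ENV.fetch('BASE_URL', 'https://staging.example.com')

# spec/regression/sign_in_spec.rb — one illustrative regression check
RSpec.describe 'Sign in', type: :feature do
  it 'lets an existing user reach the dashboard' do
    visit '/users/sign_in'
    fill_in 'Email',    with: ENV.fetch('QA_USER_EMAIL')
    fill_in 'Password', with: ENV.fetch('QA_USER_PASSWORD')
    click_button 'Sign in'
    expect(page).to have_content('Dashboard')
  end
end
```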


🐞 How we detect and handle issues

  • Autotests were triggered for every release.
  • Failures were logged directly in Jira, so developers received instant feedback.
  • Flaky tests were monitored, flagged, and reclassified when needed.

This reduced the number of bugs slipping into production and built trust in QA.
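
To show what that feedback loop can look like in practice, here is an illustrative sketch: specs tagged :flaky get retried via the rspec-retry gem, and any remaining failure is filed in Jira through its standard REST API. The host, project key, and hooks below are assumptions for the example, not our exact integration:

```ruby
# spec/support/jira_reporter.rb — illustrative sketch, not our real integration
require 'net/http'
require 'json'
require 'uri'
require 'rspec/retry'

# Files a bug in Jira via the REST API (the host and project key are placeholders).
def file_jira_bug(summary, description)
  uri = URI('https://yourcompany.atlassian.net/rest/api/2/issue')
  req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
  req.basic_auth(ENV.fetch('JIRA_USER'), ENV.fetch('JIRA_TOKEN'))
  req.body = { fields: { project:     { key: 'QA' },
                         issuetype:   { name: 'Bug' },
                         summary:     summary,
                         description: description } }.to_json
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end

RSpec.configure do |config|
  # Specs tagged :flaky get a few retries before they count as failures.
  config.around(:each, :flaky) { |example| example.run_with_retry retry: 3 }

  # Any remaining failure is pushed straight to Jira.
  config.after(:each) do |example|
    if example.exception
      file_jira_bug("Autotest failure: #{example.full_description}",
                    example.exception.message)
    end
  end
end
```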


📊 Tracking quality and coverage

We introduced transparent metrics to show stakeholders the value of QA:

  • Bug statistics by type, severity, and source (manual vs automation).
  • Coverage dashboards showing % of features covered by tests.
  • Monthly reports demonstrating how automation coverage increased over time.

By sharing these numbers with PMs and leadership, QA became a measurable and visible part of product quality.
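
As one example of how a coverage number can be produced, here is a toy sketch that derives percentages from a feature inventory file; the YAML shape and file path are invented for illustration:

```ruby
# qa/coverage_report.rb — toy sketch; the inventory format is invented
require 'yaml'

# Each entry: { 'name' => 'Onboarding flow', 'automated' => true, 'manual_regression' => false }
features = YAML.load_file('qa/feature_inventory.yml')

automated = features.count { |f| f['automated'] }
covered   = features.count { |f| f['automated'] || f['manual_regression'] }

puts format('Automated: %.1f%%, total regression coverage: %.1f%% (%d of %d features)',
            100.0 * automated / features.size,
            100.0 * covered   / features.size,
            covered, features.size)
```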


🚧 What automation can't cover

Despite the benefits, we made it clear that automation had limits:

  • Visual regressions (wasn't prior for us).
  • 3rd-party integrations.
  • Email/notification workflows.
  • Usability testing.

This ensured stakeholders understood why manual QA remained essential.


✅ Results

  • Regression time reduced from days to hours.
  • Release cycles became faster and more predictable.
  • Manual testers focused on creative exploratory work instead of repetitive checks.
  • Leadership gained clear visibility into quality through metrics and dashboards.

💡 Key takeaways

  • Being the first QA means building not just tests, but a whole process and culture of quality.
  • Automation is not a replacement β€” it’s a way to empower manual QA to focus on complex, creative testing.
  • Metrics are everything – they prove ROI, build trust, and help scale QA processes with the company.

🖼 QA Autotests Overview

As part of building QA processes, I created a visual overview of our QA strategy in Miro.

This artifact helped to:

  • Clearly explain the role of manual vs automation QA in the team.
  • Show stakeholders the goals of automation and what benefits it brings.
  • Highlight the technical stack we use.
  • Demonstrate coverage metrics, blockers, and limits of automation.

It became a reference point not only for the QA team, but also for product managers and engineers — making QA processes transparent and easy to understand.

(Image: QA Autotests Overview board in Miro)


👉 This case study is based on my experience as the first QA at Trainual, where I built QA from scratch, grew a team, and established a balance between manual and automated testing.

Stay tuned 🚀
