A few years ago we hired a junior tester who was obsessed with Playwright. He'd done a bootcamp, finished a Udemy course, and showed up on day one asking which framework we used so he could start writing specs. He was smart, motivated, and by week two he had a respectable suite of tests running against one of our clients' checkout flows.
The tests were green. The client was shipping. And then a support ticket came in saying the checkout page looked "weird" on certain Android devices when users had two saved addresses and tried to edit the second one.
Our automation guy opened the ticket, shrugged, and said he couldn't reproduce it. His tests were passing. The selectors were stable. The CI pipeline was happy. He genuinely believed the bug didn't exist, because his framework told him it didn't.
A senior tester on our team opened the app on her phone, followed the steps from the ticket, and found the bug in about ninety seconds. It was a z-index issue on a modal that only appeared when the second saved address had a longer street name than the first. No automated suite on earth was going to catch that, because nobody would ever think to write a test for it.
That was the moment I stopped pretending manual testing was some kind of vestigial skill.
I used to think manual testing was a phase people grew out of
I'll be honest. When I started at BetterQA I thought manual testing was the stuff you did before you got good enough to automate. The narrative in most job postings, most conference talks, most LinkedIn posts felt like a moving walkway: you start clicking buttons, you learn Selenium, you graduate to Playwright, you end up writing CI/CD pipelines and eventually you stop touching the actual product entirely. Manual testing was the ground floor. The point was to leave it.
I was wrong about this in a way that took me embarrassingly long to admit.
What I've seen across the fifty-plus engineers we have spread across twenty-four countries is that the testers who never properly learned to sit with a product and break it by hand are the ones who write the most useless automation. Not bad automation in a technical sense. Their code is often cleaner than mine. But the tests they write cover the happy path, they cover what the spec says should work, and they cover the scenarios that are easy to describe in a Jira ticket. They do not cover the weird stuff. They do not cover the stuff that matters.
Automation is a lens, and lenses have blind spots
Here's the thing nobody tells you when you're learning Playwright: an automated test can only find a bug you already suspected might exist. You have to write the assertion. You have to know what "correct" looks like. The test framework is a flashlight pointed exactly where you told it to point, and everything outside that beam is invisible.
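To make that flashlight concrete, here's a minimal sketch of what such a test might look like in Playwright. The URL, selectors, and flow are invented for illustration, not from our client's actual suite:

```typescript
import { test, expect } from '@playwright/test';

// A happy-path spec: every line is a question I thought to ask in advance.
test('editing a saved address opens the edit modal', async ({ page }) => {
  await page.goto('https://shop.example.com/checkout'); // hypothetical URL
  await page.getByRole('radio', { name: 'Address 2' }).check();
  await page.getByRole('button', { name: 'Edit' }).click();

  // This is the entire beam of the flashlight. toBeVisible() passes even
  // if the modal renders *behind* another element, because Playwright's
  // visibility check looks at attachment and size, not stacking order.
  // The z-index bug from the support ticket lives outside this assertion.
  await expect(page.getByRole('dialog')).toBeVisible();
});
```

The test is correct, clean, and green. It just has no opinion about anything it wasn't explicitly asked.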
Manual testing is what tells you where to point the flashlight.
When I watch a good manual tester work, they're not executing a test plan like a robot. They're forming hypotheses. They open the app and they notice the hover state takes half a second longer than it should. They click the back button twice in a row and see a flash of an unauthenticated view. They resize the browser to a weird width and watch the layout crack in a spot nobody thought to check. These aren't bugs that appear in any requirements document. They're the bugs end users actually hit.
Our founder has a line he repeats to anyone who'll listen: the chef shouldn't certify his own dish. The same thing applies to automation engineers certifying their own coverage. If you never sat with the product manually, you don't know what you don't know, and your automation suite is going to reflect that gap with unsettling accuracy.
The junior hire who wrote beautiful useless tests
Back to our Playwright obsessive. After the Android checkout incident, we put him on a different kind of onboarding. For six weeks he did nothing but manual exploratory sessions on client products. No scripts. No automation. Just him, a notebook, and a list of prompts like "try to confuse the login flow" and "pretend you're an impatient user and click everything twice."
He hated it at first. He told me it felt like going backwards. He'd spent a year learning automation frameworks and now we were making him take screenshots and write in English sentences. I sympathised but I didn't budge, because I'd seen what happens when testers skip this step.
By week four something clicked. He came into a standup and said "I found this thing where if you refresh during the payment redirect, the session token doesn't clear, so if the next person on that machine opens the same URL they land in the previous user's cart." Nobody had asked him to check that. No test plan contained that scenario. He'd developed the instinct for where bugs like to hide, and once he had the instinct, his automation got dramatically better. The tests he wrote a few months later were the ones that caught real regressions, not the ones that rubber-stamped the happy path.
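The nice part is that a finding like that converts almost directly into a regression test. Here's a rough sketch of how it might look; the routes, button names, and empty-cart copy are my assumptions, not his actual spec:

```typescript
import { test, expect } from '@playwright/test';

// Regression: refreshing mid-payment-redirect must not leave a live
// session token behind for the next visitor on the same machine.
test('refresh during payment redirect clears the session', async ({ page, context }) => {
  await page.goto('https://shop.example.com/checkout/pay'); // hypothetical URL
  await page.getByRole('button', { name: 'Pay now' }).click();
  await page.waitForURL(/payment-redirect/);
  await page.reload(); // the impatient-user move that exposed the bug

  // "The next person on that machine": same browser profile, new tab,
  // same URL. They should see an empty cart, not the previous user's.
  const nextVisitor = await context.newPage();
  await nextVisitor.goto('https://shop.example.com/cart');
  await expect(nextVisitor.getByText('Your cart is empty')).toBeVisible();
});
```

The automation is the easy part. Knowing that "refresh during the redirect" was worth trying is the part the six weeks bought us.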
That transformation is the actual case for making people master manual testing first. It's not about saving money on tooling licences, and it's not about the software development lifecycle or any other textbook framing. It's about developing the intuition that tells you what's worth automating in the first place.
What gets missed when you skip this step
The easy argument for manual testing is cost. You don't need a fancy framework, you don't need runners, you don't need infrastructure. True, but boring. The real argument is that manual testing is the only testing that operates on the actual product as a user would experience it, with all the irrational clicking and impatient scrolling and tab-switching that real humans do.
Automation will tell you whether the button submits the form. Manual testing will tell you that the button looks like a link, so nobody clicks it. Automation will tell you that the error message renders. Manual testing will tell you that the error message renders in a modal that traps keyboard focus and can't be dismissed on mobile. Automation will tell you the checkout completes in under three seconds. Manual testing will tell you that users keep giving up on step four because the progress bar goes backwards when you hit continue.
These are not edge cases. These are the bugs that lose you customers. And they are invisible to anyone who hasn't spent real hours touching the product the way a confused, distracted, slightly annoyed human would touch it.
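For contrast, here's roughly what the automatable half of each pair looks like in test code. All selectors, routes, and thresholds are illustrative:

```typescript
import { test, expect } from '@playwright/test';

// The automatable halves of the contrasts above. Each of these can go
// green while its manual counterpart is quietly failing real users.

test('the button submits the form', async ({ page }) => {
  await page.goto('https://shop.example.com/signup'); // hypothetical URL
  await page.getByRole('button', { name: 'Create account' }).click();
  await expect(page).toHaveURL(/welcome/);
  // Green even if the button is styled like a link and nobody clicks it.
});

test('checkout completes in under three seconds', async ({ page }) => {
  await page.goto('https://shop.example.com/checkout');
  const start = Date.now();
  await page.getByRole('button', { name: 'Place order' }).click();
  await expect(page.getByText('Order confirmed')).toBeVisible();
  expect(Date.now() - start).toBeLessThan(3000);
  // Green even if the progress bar runs backwards on step four.
});
```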
What we actually do with new hires now
Every person who joins our team, regardless of their background, spends their first weeks doing manual exploration on a real client product. Automation engineers, security testers, people with ten years of experience, people with two. They all start the same way. They get a product, they get a vague prompt, and they go hunt.
We tell them explicitly: we are not evaluating how many bugs you find. We are evaluating whether you develop the instinct for where bugs tend to hide. Some people hate this. Some people thrive on it. Almost everyone, regardless of which camp they start in, writes better automation six months later because of it.
If you're early in your QA career and someone tells you manual testing is what you do until you learn Playwright, I'd push back. Manual testing is what teaches you what's worth testing at all. The framework is just the tool you pick up once you've earned the right to use it.
The automation engineer who can't reproduce a bug isn't bad at automation. He's bad at the thing that comes before automation. And the thing that comes before automation is sitting with the product, paying attention, and letting your discomfort guide you to where the real problems are hiding.