I run a QA company with 50-plus engineers spread across 24 countries. Roughly half of them do manual testing. Not because we're behind the times. Because that's what our clients need.
Every conference talk, every LinkedIn influencer, every bootcamp curriculum pushes the same story: automate everything, manual testing is a relic, if you're clicking through a UI in 2024 you're wasting money. I've heard this for years. And every year, the demand for skilled manual testers at BetterQA grows.
So let me say what I actually think. Manual testing isn't dying. But the version of manual testing that people imagine when they hear the phrase? That version probably should die.
The boring manual testing is already dead
Let me be clear about what I'm not defending.
If your manual testing process involves a tester opening a spreadsheet of 200 test cases, clicking through each one in sequence, writing "pass" or "fail" in a column, and repeating this before every release, then yes. Automate that. Automate it yesterday. That kind of work destroys morale, produces inconsistent results, and costs more per bug found than any reasonable automation framework.
We automated repetitive regression testing years ago. We built Flows, a Chrome extension that records browser interactions and replays them as tests with self-healing selectors. The entire point was to free our manual testers from the mechanical parts of the job so they could spend time on the work that actually requires a human brain.
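Flows itself is proprietary, but the self-healing idea is simple to sketch: record several independent ways to identify an element, then fall back down the list when the primary selector stops matching after a redesign. A minimal Python sketch of the general technique, where the locator format and helper functions are illustrative assumptions, not Flows' actual implementation:

```python
# Sketch of self-healing element lookup: each recorded element carries
# several candidate locators, tried in order of expected stability.
# Illustration of the general technique only, not Flows' real code.

def find_element(dom, locator):
    """Return the first element matching a (strategy, value) locator."""
    strategy, value = locator
    for el in dom:
        if strategy == "id" and el.get("id") == value:
            return el
        if strategy == "data-testid" and el.get("data-testid") == value:
            return el
        if strategy == "text" and el.get("text") == value:
            return el
    return None

def resolve(dom, locators):
    """Try each recorded locator until one still matches ("self-healing")."""
    for locator in locators:
        el = find_element(dom, locator)
        if el is not None:
            return el, locator
    return None, None

# The page after a redesign: the generated id changed, but the
# test-id and visible text survived, so the test heals itself.
page = [
    {"id": "btn-7f3a", "data-testid": "checkout", "text": "Check out"},
]
recorded = [("id", "btn-42"), ("data-testid", "checkout"), ("text", "Check out")]
el, used = resolve(page, recorded)
print(used)  # → ('data-testid', 'checkout')
```

The ordering matters: stable, human-assigned attributes like test ids heal more reliably than generated ids or visible text, which is exactly why selector strategy is a design decision and not an afterthought.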
When people say "manual testing is dying," they usually mean this repetitive, scripted, follow-the-checklist kind. And they're right. It should die. The problem is that they then leap to the conclusion that all manual testing should die, and that's where they're wrong.
What manual testers actually do now
The testers on my team who do manual work aren't clicking through login forms all day. Here's what their week actually looks like.
They spend time in exploratory testing sessions, deliberately trying to break things in ways nobody anticipated. They navigate the product the way a confused user would, not the way a specification document describes. They find bugs that no automation script would ever catch because no one thought to write a test for that scenario.
They review designs and requirements before a single line of code gets written. This is the cheapest place to find defects. A bug caught in a requirements review costs almost nothing to fix. The same bug found in production is often cited as costing 100 times more, and while the exact multiplier is debated, the cost curve itself has held up across decades of software engineering research.
They do usability assessments. They sit with the product and ask questions like: would a real person understand this flow? Does this error message actually tell you what went wrong? Is the button where you'd expect it to be? Automation can tell you whether a button exists on the page. It cannot tell you whether the button makes sense.
They run accessibility checks. Not just automated scans (those miss roughly 60-70% of real accessibility barriers), but actual screen reader walkthroughs, keyboard-only navigation, cognitive load evaluation. A WCAG compliance tool will tell you that a form label exists. A manual tester will tell you that the label says "Field 3" and means nothing to anyone.
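The "Field 3" problem is a good illustration of the gap: the label exists, so an automated scan passes, but a human instantly sees it communicates nothing. You can write a toy heuristic to flag obviously generic labels, but judging whether a label actually carries meaning still takes a person. A sketch, where the patterns are my own illustrative guesses and not rules from WCAG or any real checker:

```python
import re

# Labels that exist (so a label-presence scan passes) but tell the
# user nothing. Patterns are illustrative guesses, not WCAG rules.
GENERIC_LABEL = re.compile(
    r"^\s*(field|input|label|text|untitled)\s*\d*\s*$", re.IGNORECASE
)

def suspicious_labels(labels):
    """Return labels that are present but probably meaningless to a user."""
    return [label for label in labels if GENERIC_LABEL.match(label)]

form = ["Email address", "Field 3", "input2", "Date of birth"]
print(suspicious_labels(form))  # → ['Field 3', 'input2']
```

Even this crude filter only catches the lazy cases; "Reference number" is a perfectly specific label that can still confuse every user who doesn't know which reference number is meant. That judgment call is the manual tester's job.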
They probe for security issues. Not full penetration testing necessarily, but the kind of poking around that finds exposed data in API responses, broken authorization checks, session handling problems. With AI-generated code flooding into production, this kind of investigative work matters more than it did five years ago.
The "automate everything" pressure is real, and it's partially nonsense
I get why engineering leaders push for full automation. The pitch is seductive. Write the tests once, run them forever, get fast feedback, reduce headcount. What's not to like?
Here's what I've seen happen in practice.
A client moves to 100% automation. Their Selenium or Playwright suite covers all the happy paths beautifully. CI runs green. Everyone feels confident. Then they ship a feature where the shopping cart total displays correctly but the font is 4px and grey on grey. A human would catch that in seconds. The automation suite doesn't check font sizes because nobody thought to add that assertion. A customer screenshots it, posts it on Twitter, and suddenly "fully automated QA" looks a lot less impressive.
Another client automates their entire regression suite. Takes three months and costs a fortune. Then the product team redesigns the navigation. Forty percent of the automated tests break, not because of bugs but because the selectors changed. Now you have an automation maintenance backlog that's bigger than the original testing backlog. The team spends more time fixing tests than writing new ones.
Automation is powerful, genuinely powerful, for specific categories of testing. Cross-browser compatibility. Regression on stable features. Performance benchmarks. Data-driven tests where you need to run the same flow with 500 different input combinations. For those things, automation is not just better than manual testing, it's the only sane option.
But automation is terrible at answering "does this feel right?" It can't do creative exploration. It can't notice that the loading spinner is technically working but feels sluggish in a way that will irritate users. It can't look at a form and realize that the field order doesn't match the mental model a healthcare administrator has when processing patient intake.
What's actually changing for manual testers
Here's the honest part that the "manual testing is dead" crowd gets right, even if they get the conclusion wrong. The job description is changing fast.
Five years ago, a junior manual tester could get by with basic test case execution skills. Open the app, follow steps, report results. That's not enough anymore.
The manual testers who are thriving on our team have skills that overlap with product management, security analysis, and UX research. They understand API calls well enough to check what's happening under the hood when the UI looks fine. They use browser DevTools to inspect network requests, check response payloads, verify that sensitive data isn't leaking in places it shouldn't be. They understand enough about accessibility standards to do meaningful evaluations, not just run an axe scan and forward the results.
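The payload check is the kind of skill that's easy to make systematic once you have it. Here's a sketch of the sort of sweep a tester can run over a captured JSON response, where the list of sensitive key names is an illustrative starting point, not an exhaustive policy:

```python
# Recursively walk a JSON response looking for key names that should
# never appear in a client-facing payload. The key list is an
# illustrative starting point, not an exhaustive policy.
SENSITIVE_KEYS = {"password", "password_hash", "ssn", "api_key", "secret"}

def leaked_fields(payload, path=""):
    """Return JSON paths whose key names look like sensitive data."""
    leaks = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            here = f"{path}.{key}" if path else key
            if key.lower() in SENSITIVE_KEYS:
                leaks.append(here)
            leaks.extend(leaked_fields(value, here))
    elif isinstance(payload, list):
        for i, item in enumerate(payload):
            leaks.extend(leaked_fields(item, f"{path}[{i}]"))
    return leaks

# A response captured from DevTools' Network tab (hypothetical data).
response = {
    "user": {"name": "Ana", "password_hash": "hash-value", "roles": ["admin"]},
    "sessions": [{"token": "abc", "api_key": "xyz"}],
}
print(leaked_fields(response))  # → ['user.password_hash', 'sessions[0].api_key']
```

A name-based sweep like this is a prompt for investigation, not a verdict: the interesting findings are usually the fields it can't flag, like an innocently named field that happens to contain another user's data.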
They're also comfortable working alongside automation. On most of our client projects, the same team handles both. A manual tester explores a new feature, finds the edge cases, documents them, and then works with the automation engineer to decide which paths are worth scripting for regression and which are one-time exploratory findings. That collaboration is where the real quality comes from. Not from one discipline replacing the other.
Our founder Tudor Brad has a line he uses a lot: "AI will replace development before it replaces QA." It sounds provocative, and he means it to be. But the core point is serious. AI tools can generate code. They can even generate test scripts. What they cannot do is understand whether a product feels right to use, whether a workflow makes sense for the specific humans who will use it, or whether a security boundary that technically exists is actually robust enough. That requires judgment, creativity, and domain knowledge that nobody has automated yet.
The vibe coding problem
This part is new, and it matters.
We're seeing more client projects where significant chunks of the codebase were generated by AI tools. GitHub Copilot, Claude, ChatGPT, whatever the flavour of the month is. The code works, mostly. It passes the unit tests that the AI also generated. And it ships with subtle bugs that only surface when a real person uses the product in ways the AI didn't anticipate.
I've seen AI-generated form validation that checked email format but not length, allowing a 10,000-character email to crash the backend. I've seen AI-generated pagination that worked perfectly for pages 1 through 10 but returned duplicate results on page 11. These aren't exotic edge cases. They're the kind of thing a manual tester finds in their first hour with the feature because they naturally try inputs that a generated test suite doesn't consider.
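The fix for the email case is a single guard the generated validator skipped. A sketch, where the 254-character ceiling comes from RFC 5321's path length limits and the format regex is deliberately simple:

```python
import re

# Deliberately simple format check; the point is the length guard
# that the AI-generated validator in the anecdote was missing.
EMAIL_FORMAT = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
MAX_EMAIL_LEN = 254  # practical ceiling derived from RFC 5321 path limits

def valid_email(addr: str) -> bool:
    return len(addr) <= MAX_EMAIL_LEN and bool(EMAIL_FORMAT.match(addr))

print(valid_email("ana@example.com"))        # → True
print(valid_email("a" * 10_000 + "@x.com"))  # → False: length, not format
```

Note that the length check runs first, which also protects the regex itself from being handed a 10,000-character input. That's the pattern across most of these bugs: the generated code handles the case it was asked about and nothing adjacent to it.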
As AI-assisted development accelerates feature delivery, the demand for people who can thoughtfully evaluate those features goes up, not down. Features that took three months to build now take three hours. That same speed produces more surface area for defects. You need testing that can match that pace, and skilled exploratory testers are faster at covering new ground than any test automation framework.
What I tell testers who are worried about their careers
If you're a manual tester and you're nervous about automation replacing you, I understand the anxiety. But I think the threat is misidentified.
The thing that will make you irrelevant is not automation. It's refusing to evolve what "manual testing" means for you personally.
Learn how to use browser DevTools. Understand enough about APIs to read a response payload. Get comfortable with accessibility testing beyond just running a scanner. Develop a specialty: security probing, or usability evaluation, or data integrity analysis. Understand CI/CD pipelines well enough to know when and where your testing fits in the release process.
You don't need to become a programmer. But you need to be more than someone who follows a test script. The testers on my team who are most in demand with clients are the ones who can sit in a sprint planning meeting, hear a feature described, and immediately start asking questions that expose gaps in the requirements. That's not a skill automation replaces. It's a skill that makes automation more effective because it ensures the right things get automated in the first place.
The honest answer
Manual testing isn't dying. What's dying is the job description that says "execute pre-written test cases and record results." That work is being absorbed by automation, and it should be.
What's growing is the need for people who can think critically about software quality, who can explore products with creativity and suspicion, who can translate technical findings into business risk, and who can evaluate whether something that technically works actually works well for the people who'll use it.
The boring repetitive stuff? Automate it and don't look back.
The creative investigative work? That's more valuable now than it's ever been. And I don't see that changing anytime soon.
If you're interested in how we approach testing at BetterQA, or you want to see more of our thinking on QA in the AI era, check out betterqa.co/blog.