Let me tell you what agile scrum actually looks like when you're the only QA on a team of six developers.
The sprint planning deck will say "shift left" and "quality is everyone's responsibility". The retro board will have sticky notes about collaboration. And then somewhere around 4pm on a Friday, someone will drop a Slack message that says "hey, can you just run through it real quick before we deploy?"
That's the job. The rest is paperwork.
The blocker problem
In every scrum team I've joined as the single tester, there's a moment around sprint three where I stop being "the QA" and start being "the blocker". You can feel it shift. A dev finishes a ticket on Wednesday. It sits in your column until Friday because you're still testing the three tickets from Monday. Burndown chart flattens. Somebody mentions it in standup. Suddenly the conversation is about your throughput, not about the fact that five people are producing work faster than one person can verify it.
I used to try to explain the math. Six devs times two stories per sprint equals twelve things to test, plus regression, plus the bug fixes from last sprint that came back from staging. One person. Two weeks. You do it.
Nobody wants to hear the math. They want the green checkmark.
The PM who sneaks stories in
There's always one. Mine was called Radu. Lovely person, terrible for my sanity.
Radu had a habit of adding tickets to the sprint backlog on day four without acceptance criteria, without a design, and without telling anyone. You'd open Jira Monday morning and there would just be a new story sitting there called "small tweak to the checkout flow" assigned to the sprint. Small tweak. Sure.
When I asked what "done" looked like, the answer was usually "you'll know when you see it". When I asked what the edge cases were, the answer was "the client just wants it to work". When I filed a bug, the answer was "that's not what we agreed" even though nothing had been agreed because nothing was written down.
I got burned enough times that I started refusing to test anything without written acceptance criteria in the ticket. That was my little rebellion. It worked for about two sprints before Radu started writing acceptance criteria like "should work as expected" and I had to start another fight.
The sprint where everything fell over
Here's the one I still think about.
We were building a multi-tenant dashboard for a logistics client. Sprint 14. The team had committed to a big piece of work: role-based permissions across four user types. Admin, manager, driver, viewer. Each role had different screens, different buttons, different data.
I spent the planning session trying to talk the team into splitting it. Four roles is four times the test matrix. Four times the edge cases. Four times the "what happens when a manager tries to access an admin page" questions. The lead dev said it was fine, it was all using the same permission middleware, testing one role would basically cover all of them.
You can guess where this is going.
Day eight of the sprint, a dev finishes the first role and hands it over. I start testing. It's fine. Admin role works. I move to the manager role on day nine. It's also fine, mostly. I find two bugs, file them. Driver role lands day eleven. Broken in three places. Viewer role lands day twelve, which is the Friday before sprint end.
Viewer role was the one where I found that a viewer could hit the admin API directly if they knew the endpoint. No auth check on the backend route. Just the frontend hiding the button. A viewer could delete shipments. A viewer could change other users' passwords. A viewer could do pretty much anything an admin could do, as long as they opened devtools and typed the URL.
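Boiled down, that bug looks like this. A minimal Python sketch, with the route shape and field names invented for illustration: the handler trusts that the UI hid the button, so it never checks who is calling.

```python
# The broken pattern, sketched. Handler and field names are
# illustrative, not the client's actual code.

def delete_shipment_handler(request):
    # No permission check here -- any role that can reach the URL
    # can delete, because "security" lives in a hidden button.
    shipment_id = request["shipment_id"]
    return {"status": "deleted", "id": shipment_id}

# A "viewer" request succeeds exactly like an admin one would:
viewer_request = {"shipment_id": 42, "role": "viewer"}
```

The role field is carried along and ignored, which is the whole problem.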
I filed it as a critical bug at 3:40pm on Friday. The lead dev told me it was out of scope for this sprint, that the frontend was hiding the buttons so it "wasn't really an issue", and that we should ship and fix it in sprint 15. The PM agreed. Deploy was scheduled for 5pm.
I made a scene. I don't make scenes often. I made this one.
I went to the CTO. I showed him the curl command. I made him run it on his own laptop against staging. He watched a viewer account delete a shipment. Deploy got cancelled. We spent sprint 15 properly implementing backend permission checks across all four roles, and the lead dev and I didn't speak for about a week.
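For what it's worth, the sprint 15 fix amounts to checks like this on the server. A minimal Python sketch with made-up role and permission names, not the client's actual middleware:

```python
# Hypothetical role-to-permission map; the real one came out of the
# four-role spec, these names are just for illustration.
ROLE_PERMISSIONS = {
    "admin":   {"view_shipments", "delete_shipments", "manage_users"},
    "manager": {"view_shipments", "delete_shipments"},
    "driver":  {"view_shipments"},
    "viewer":  {"view_shipments"},
}

class PermissionDenied(Exception):
    pass

def require_permission(permission):
    """Decorator: reject the call unless the caller's role grants it."""
    def decorator(handler):
        def wrapper(user_role, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionDenied(f"{user_role} may not {permission}")
            return handler(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("delete_shipments")
def delete_shipment(user_role, shipment_id):
    return f"shipment {shipment_id} deleted"
```

A viewer calling `delete_shipment` now gets a `PermissionDenied` instead of a deleted shipment, no matter what the frontend hides.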
Here's what fixed it long term: we started writing test cases during planning, not after. If a story went into the sprint, it went in with a checklist of what needed to be verified, written by me, reviewed by the dev, agreed by the PM. If you couldn't describe how to test it, you couldn't commit to it. Took me three sprints to enforce, and I had to threaten to quit once, but it stuck.
That was the real shift-left. Not a blog post. A fight.
What the ceremonies actually look like
Sprint planning is where you earn your paycheck. Not by estimating. By asking the dumb questions. "What does done look like?" "What happens when a user is offline?" "What's the behaviour when the API returns 500?" "Is there a timeout on this?" Half the value of a tester in planning is forcing the team to think about things they'd rather not think about. The devs hate it. The PM hates it. The product gets better.
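Those planning answers are worth pinning down as executable checks on the spot. A sketch, assuming a hypothetical `SearchService` whose agreed behaviour on a 500 is to degrade to an empty result rather than crash:

```python
# SearchService and its fallback behaviour are assumptions for
# illustration; the real names and rules come out of planning.

class SearchService:
    def __init__(self, fetch):
        self.fetch = fetch  # injected so a test can fake a failing API

    def search(self, query):
        status, body = self.fetch(query)
        if status == 500:
            return []  # agreed in planning: degrade to empty, don't crash
        return body

# The planning answer, written down as a test instead of a sticky note:
def test_search_degrades_on_500():
    service = SearchService(lambda q: (500, None))
    assert service.search("anything") == []
```

Once the answer to "what happens when the API returns 500?" exists as a test, nobody can quietly change their mind about it mid-sprint.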
Standups are mostly theatre, and that's fine. I use them to telegraph what I'm blocked on. Never in a passive-aggressive way. Just plainly. "I'm testing the search filter today. If the backend contract changes, I'll need to restart. Please don't change the contract without telling me." The word "please" is doing a lot of work in that sentence.
Sprint review is where I quietly make sure the thing we're demoing is the thing I actually tested. You would not believe how often the demo branch is not the branch that was on staging when QA signed off. I learned to ask "which commit are we showing?" every single time. It makes the devs roll their eyes. I don't care.
Retros are where nothing changes, until suddenly something does. I never go in expecting wins. I go in with one observation, delivered calmly, backed by a specific example. "In sprint 14, we didn't split the permissions story and I found a critical bug 90 minutes before deploy. I think we should split stories that touch more than two user roles." That's how the test-case-during-planning rule got in. One retro. One example. No drama.
What I actually do all day
The job title says tester. The actual work is:
Writing test cases nobody asked for because nobody else will. Chasing acceptance criteria that don't exist yet. Running regression on the stuff that was "already tested last sprint" because something in the shared component library changed. Arguing about whether a bug is a bug or "by design". Automating the things that keep coming back. Reviewing PRs for testability even when nobody asked me to. Documenting what the product actually does, because the spec hasn't been updated in six months. Explaining to stakeholders why "it works on my machine" is not the same as "it works".
And on good days, exploratory testing. Which is where I find the weird stuff. The viewer-can-delete-shipments stuff. The "what happens if you paste 10,000 characters into this field" stuff. The "what if the user speaks Arabic and reads right to left" stuff. That's the bit I actually like. That's the bit that justifies the job.
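Even the exploratory finds can be pinned down once you've hit them. A sketch with a hypothetical `validate_name` helper, covering the 10,000-character paste and the right-to-left input:

```python
# validate_name is an invented helper for illustration, not a
# real product function.

def validate_name(value, max_len=255):
    """Reject empty, oversized, or non-printable input."""
    if not value or len(value) > max_len:
        return False
    return all(ch.isprintable() for ch in value)

assert validate_name("Ana") is True
assert validate_name("x" * 10_000) is False   # the 10,000-character paste
assert validate_name("سائق") is True          # RTL text is still valid input
```

The point is that yesterday's weird exploratory find becomes tomorrow's regression check.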
The thing about being the only one
Being the single QA on a scrum team is lonely in a way people don't talk about. You're the one person whose job is to find the problems, on a team whose job is to not have problems. Every bug you file is a tiny friction. Every "can you verify this before merge" is you slowing someone down. You get used to being the person people are slightly annoyed with.
The trick, and it took me years to figure this out, is to remember that the friction isn't personal. The friction is the job. If nobody feels any friction, you're not doing the work. If everybody hates you, you're doing it wrong. The sweet spot is when the devs are mildly annoyed but they also ask you to look at their PRs before merge because they'd rather get your eyes on it than have a customer find it.
You get there by being fair. By never filing a bug in anger. By writing up what you found and what you tried and what you expected, every single time, even when you're tired. By closing bugs that turn out to be wrong, publicly, without ego. By learning the product so well that when you say "this feels wrong", people listen.
The outsourcing pitch I'm not going to make subtly
Here's the honest version. If you're the only QA on a team of six, you are going to burn out. The math doesn't work. You can optimise your process, you can push for automation, you can fight for test cases in planning, and it will still be too much work for one person. At some point the team either hires another tester, or they accept lower quality, or they bring in an independent QA partner who can scale up and down with the sprint load.
I work at BetterQA. We started in Cluj-Napoca in 2018 and now we're 50+ engineers across 24 countries. We exist because development teams shouldn't validate their own code. The chef doesn't certify his own dish. When you bring in an independent QA team, the PM can't lean on them to close bugs that "make the dev team look bad". The testers don't report to the dev manager. They report to quality. That's the whole pitch.
But even if you don't hire us, hire someone. Hire a second tester. Don't leave one person to do the verification work for a whole team. It's not fair, and it's not how you ship good software.
What I wish someone had told me on day one
Write everything down. If it's not in the ticket, it didn't happen.
Never agree to test something that doesn't have acceptance criteria. "Works as expected" is not acceptance criteria.
Find an ally on the dev team. One person who gets it. One person who will back you up in planning when you say the story is too big. You will need them.
Your job is not to prevent all bugs. Your job is to prevent the bugs that matter from reaching users.
"Can you just run through it real quick?" is never real quick. Schedule it. Put it in the ticket. Test it properly or don't test it at all.
And when you find the critical bug on a Friday afternoon, file it. Escalate it. Make the scene. The deploy can wait. The users can't.
This article was originally published on the BetterQA blog.