When a startup dies, the post-mortems usually feature nice, tidy explanations: the market shifted, the money ran out, a key engineer left. These reasons are convenient for investors and neutral enough for the team. The only problem is - they're almost never the actual cause.
I'm a co-founder of GenSoft, a QA agency that usually gets called into startups once "something has already gone wrong." Over years of working with dozens of product teams, I've seen one pattern: startups rarely break because of a single dramatic event. They quietly suffocate on things everyone feels but no one wants to call by its real name.
Here are the seven most common ones - and what can be done about each, if you treat quality as a process rather than a stage at the end of a sprint.
The cult of speed that kills quality
"The main thing is to ship faster. We'll fix quality later." I hear this phrase in some form on almost every new project we join. Sometimes it's the CTO. Sometimes it's the founder. Sometimes no one says it out loud, but it's written into everything the team does.
The problem is that "later" never comes. Instead, the team gets stuck in a cycle that looks like progress from the outside and feels like a slow fall from within. They ship a feature because of the deadline. The feature breaks something in the core. The core gets patched in firefighting mode. Instead of figuring out why it happened, they grab the next feature, because there's another deadline. Old bugs sit in the backlog for months tagged as low-priority, even though half the users are hitting them during onboarding.
A few months in, the speed the team was so proud of turns into an illusion. The product actually moves slower and slower, because every new feature drags a regression into three other places. Engineers spend more time putting out fires than building anything new. And leadership still measures success by the number of releases, not realizing that half of those releases are rollbacks of previous releases.
How this gets fixed in practice. Speed doesn't drop when you add quality to the process. What disappears is something else - the chaos that was masquerading as speed for a long time. In practice, it starts with three things: the team gets a clear Definition of Done, without which a feature simply isn't considered finished; critical scenarios get covered by regression tests so every new release doesn't break old ones; and QA gets plugged in at the requirements stage, not on the last day before launch. The result is a team that lives less in firefighting mode and more in predictable delivery mode.
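To make that concrete, here is a minimal sketch of what a regression test for one critical scenario might look like. The staging URL, endpoints, and payloads are all assumptions for illustration, not a real project's API:

```python
# A minimal regression-test sketch for one critical flow.
# Hypothetical endpoints and payloads; adapt to your own product's API.
import uuid

import pytest
import requests

BASE_URL = "https://staging.example.com"  # assumed staging environment


@pytest.fixture
def new_user():
    # Register a throwaway user so the test is self-contained and repeatable.
    payload = {
        "email": f"qa+{uuid.uuid4().hex}@example.com",
        "password": "S3cure!pass",
    }
    resp = requests.post(f"{BASE_URL}/api/signup", json=payload, timeout=10)
    assert resp.status_code == 201, "signup must keep working release to release"
    return payload


def test_signup_then_login(new_user):
    # The core promise: a freshly registered user can log in.
    resp = requests.post(f"{BASE_URL}/api/login", json=new_user, timeout=10)
    assert resp.status_code == 200
    assert "token" in resp.json(), "login must return an auth token"
```

A handful of tests like this over the flows that pay the bills is usually enough to break the "every release breaks something old" cycle.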
"We need one more developer"
Startups have a persistent reflex: any problem should be solved by hiring one more developer.
- Falling behind? Let's hire.
- Too many bugs? Another developer will probably fix it.
- Slow releases? Definitely one more.
A year later, the team has twenty developers - and the chaos has only grown. Because the real problem was almost never the number of hands.
I regularly see teams where the QA-to-developer ratio is 1:15 or worse, on a product that tens of thousands of users are trusting. I see teams without any security expertise in-house. I see teams without a tech lead capable of making architectural decisions, where every architectural decision ends up looking like a compromise between three mid-level engineers in a forty-message Slack thread.
More people in an unhealthy system isn't scaling. It's just more people suffering from the same unhealthy system.
How this gets fixed in practice. When we join teams like this, the first conversation with the founder usually starts with "we need more testers, we have too many bugs." And it ends with the admission that testing isn't actually built into the process at all - it lives somewhere separately, at the end of the sprint, when it's already too late to change anything. At GenSoft, instead of "giving you another pair of hands," we start with a process audit: where exactly are the bugs being born, at what stage could they still have been caught cheaply, and what processes need to be rebuilt so the same team starts producing half the defects. Usually, after this, the question of "hiring one more" just goes away.
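As an illustration of what such an audit surfaces, here is a tiny sketch that tags each bug with the stage where it was introduced and the stage where it was caught, then computes how many escaped all the way to production. The bug list is made up for the example:

```python
# Sketch of a defect-origin audit: for each bug, record where it was
# introduced and where it was caught, then see which stage leaks most.
# The data below is illustrative, not from a real project.
from collections import Counter

bugs = [
    {"id": 1, "introduced": "requirements", "caught": "production"},
    {"id": 2, "introduced": "development", "caught": "code review"},
    {"id": 3, "introduced": "requirements", "caught": "production"},
    {"id": 4, "introduced": "development", "caught": "production"},
]

introduced = Counter(b["introduced"] for b in bugs)
escaped = Counter(b["introduced"] for b in bugs if b["caught"] == "production")

for stage, total in introduced.items():
    rate = escaped[stage] / total
    print(f"{stage}: {total} bugs born here, {rate:.0%} escaped to production")
```

If most escapes trace back to requirements, hiring another developer changes nothing - which is exactly the point.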
Security as "we'll think about it later"
This is probably the most dangerous habit in startups. And unfortunately, the most common one.
Security tasks get postponed because they don't look like features you can show to an investor. They don't make it onto the roadmap because "we still have too few users for anyone to hack us." They get pulled out of the backlog because "we're not at that stage yet."
I've seen cases where attackers effectively swapped out the frontend of a live platform. The page users were seeing had nothing in common with the product's actual identity anymore. And even if from the outside it looks like a temporary issue you can just roll back - the damage is already done. Trust is broken. The brand is compromised. The team moves into crisis mode for weeks. And the founder suddenly discovers that the question "why didn't you have basic security?" sounds very bad in front of investors.
The outside world doesn't care what stage you're at. If your product is easy to break, someone will try to break it. If it's easy to abuse, someone will abuse it.
How this gets fixed in practice. Security stops being a "future task" the moment the team includes it in the basic development process. In practice, this means simple but mandatory things: permission checks, negative scenarios, basic validation of critical user flows, controlling what goes to production, and a minimal security check before each release. At GenSoft we usually start right here, because security doesn't appear on its own if you only remember it at the end. It works only when it's built into the process the same way regular quality checks are.
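For illustration, one of those negative scenarios might look like the sketch below - the endpoint, token handling, and resource IDs are assumptions, not a prescription:

```python
# A negative-scenario sketch: a regular user must not be able to reach
# an admin endpoint. URL, token, and IDs are placeholders for illustration.
import requests

BASE_URL = "https://staging.example.com"  # assumed environment
USER_TOKEN = "token-from-a-regular-user-login-fixture"  # hypothetical


def test_regular_user_cannot_delete_accounts():
    resp = requests.delete(
        f"{BASE_URL}/api/admin/users/42",
        headers={"Authorization": f"Bearer {USER_TOKEN}"},
        timeout=10,
    )
    # 401 (unauthenticated) or 403 (forbidden) are both acceptable;
    # anything else means the permission check is missing.
    assert resp.status_code in (401, 403)
```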
The problem isn't the people - it's the processes
When something breaks in a startup, the leadership's first reflex is to find someone to blame.
- Maybe the developer missed something.
- Maybe QA was sloppy.
- Maybe the PM explained it badly.
- Maybe the design was unclear.
In nine cases out of ten, the real reason isn't the people. It's the processes that don't exist.
There's no proper prioritization - priorities change three times a day. Retrospectives either don't happen, or they turn into a formality where no one makes real decisions. The team runs into the same problems for months because no one documents the lessons. Communication is about tasks, not about a shared understanding of goals. There's no definition of "done" - a feature is considered finished when a developer pushes the code, not when it's tested, documented, and stable.
It especially annoys me when the chaos of constantly shifting priorities gets wrapped in the pretty label of "team flexibility." Flexibility is the ability to adapt while keeping your direction. Not throwing the team from one side to the other every three hours because the founder just talked to a client and now everything's urgent.
How this gets fixed in practice. Recurring failures almost always point to a weak process, not weak people. So the fix starts with the team stopping the pattern of reacting to incidents one by one and starting to look at the system as a whole. In practice that means: stabilize priorities, define "done," remove blurry boundaries between roles, introduce a real feedback loop after releases, and turn retrospectives into a tool for change rather than a ritual. At GenSoft we often work at exactly this level, because quality breaks down not where the bug was found, but where the process let it slip through.
Everyone does everything, so no one is responsible for anything
Startups love to talk about their flat structure and how "everyone does everything." At the early stages this genuinely looks like a strength, but over time it becomes a weakness.
Developers test "on the side" because QA doesn't have time, or there's no QA on this sprint at all. Designers get pulled into validation tasks they shouldn't be closing. Engineers change agreed-upon design decisions on their own because "it was easier this way." Product expectations get passed verbally on calls, without documentation and without clear ownership.
In an environment like this, nobody really understands anymore who's responsible for what. And when something falls over in production, everyone is sincerely surprised and honestly insists that it isn't their area.
How this gets fixed in practice. The good news is that you don't need a complex hierarchy here. You just need clarity. The team needs to understand who is responsible for what, in what state a task transitions between roles, who makes the call on disputed points, and who is accountable for whether a feature is actually ready to ship. At GenSoft we often help teams with exactly this: removing verbal agreements as the foundation of the process and replacing them with clear ownership, where quality doesn't get lost between "everyone a little bit."
Nobody counts the real cost of bad quality
This is my favorite blind spot in startup management.
Bugs are treated as a technical problem. But they haven't been technical for a long time - they're a business problem. And their cost is far higher than anyone on the team is counting.
Lost users that no marketing budget will bring back. Extra load on support, which becomes the buffer between a broken product and disappointed customers. Delayed releases because there's always a fire to put out. Rework for engineers, which costs several times more than getting it right the first time. Team frustration that, six months later, turns into your best people leaving. Weaker conversion. Harder sales, where every demo call comes with a silent prayer that production doesn't go down today.
When teams actually sit down and run the numbers, it turns out the "savings" on quality cost several times more than investing in it would. But almost no one counts, because these losses are scattered across different departments, and each one only sees its slice. Marketing complains about conversion. Sales - about failed demos. Support - about load. Engineering - about rework. And nobody sees that it's all the same problem.
How this gets fixed in practice. At GenSoft, one of the first things we do during an audit is convert quality into money. How much does a single bug that reaches production cost you: engineer-hours to fix + support tickets + lost conversion + delayed releases. The moment the team has that number, the debate about "can we afford proper QA" closes itself. Because the answer is almost always - no, you can't afford not to have it.
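To show the shape of that arithmetic, here is a back-of-the-envelope sketch. Every number below is a placeholder - the point is the formula, not the figures:

```python
# Back-of-the-envelope cost of one production bug, per the formula above:
# engineer-hours to fix + support tickets + lost conversion + delayed release.
# All numbers are illustrative placeholders; plug in your own.

ENGINEER_RATE = 60        # $/hour, assumed blended rate
fix_hours = 12            # diagnosing, fixing, re-testing
support_tickets = 40      # tickets the bug generated
cost_per_ticket = 8       # $ of support time per ticket
lost_signups = 25         # users who bounced because of the bug
value_per_signup = 30     # $ expected value per signup, assumed
release_delay_days = 3    # days the next release slipped
cost_per_delay_day = 500  # $ opportunity cost per day, assumed

cost = (
    fix_hours * ENGINEER_RATE
    + support_tickets * cost_per_ticket
    + lost_signups * value_per_signup
    + release_delay_days * cost_per_delay_day
)
print(f"One production bug ~ ${cost:,}")  # $3,290 with these numbers
```

Multiply that by the number of bugs reaching production per quarter, and the comparison with the cost of proper QA usually settles itself.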
A lot of communication, little clarity
The last point - but it runs through all the previous ones.
Modern startups communicate a lot. Calls, chats, threads, updates, channels, syncs, dailies, weeklies, retros. People literally don't leave meetings. Slack is on fire. Notion keeps growing.
And at the same time - there are no clear requirements. "We discussed it on the call" - but the Jira ticket has one sentence in it. There are no documented decisions - a verbal agreement at retro today, and tomorrow nobody remembers what was actually agreed, so they argue it again. There's no shared understanding of goals - every team has its own picture of priorities, and they don't match. The team looks busy but actually moves slower, because it's constantly context-switching.
This produces one of the most expensive patterns in product development: the team confidently builds the wrong thing. And only finds out when the feature is already in production and the metrics aren't moving.
Communication without artifacts isn't communication. It's noise that imitates work.
How this gets fixed in practice. Communication only starts working when it turns into artifacts instead of dissolving into chats and meetings. So in practice, the team needs to capture not the fact that something was discussed, but the result: what exactly are we building, why, by what criteria is it considered done, and which decisions no longer need to be re-litigated tomorrow. At GenSoft we often see that this simple discipline alone sharply reduces rework, because the team stops moving in different directions simultaneously while all assuming everyone understood the same thing.
What to do about it
This isn't the kind of problem that fixes itself. It has to be taken apart systematically. There are a few steps that have the biggest effect - and they're usually where we start at GenSoft.
Audit where quality is actually breaking down. If the team doesn't understand where in the system defects are being produced, it will almost inevitably treat symptoms instead of causes. At GenSoft we usually begin with a short process audit to see what's actually producing the recurring chaos.
Build a shift-left process. QA should be in planning, not at the end of the sprint - reading requirements, asking uncomfortable questions, writing acceptance criteria together with product, before development starts.
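One lightweight way to do this, sketched below: capture the agreed acceptance criteria as skipped test stubs before development starts, so "done" lives in an executable place instead of a call. The scenarios are hypothetical:

```python
# Sketch: acceptance criteria agreed at planning, captured as test stubs
# before a line of feature code exists. Scenario names are hypothetical.
import pytest


@pytest.mark.skip(reason="agreed at planning; feature not implemented yet")
def test_password_reset_email_arrives_within_one_minute():
    ...


@pytest.mark.skip(reason="agreed at planning; feature not implemented yet")
def test_reset_link_expires_after_24_hours():
    ...
```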
Add automation where it actually pays off. If the same scenarios break release after release, they're a good candidate for automation. If a flow is critical for money, onboarding, or user trust - the same. The rest can wait.
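A rough way to rank candidates, with made-up flows and weights: multiply how often a flow breaks by how critical it is, and automate from the top of the list down.

```python
# Sketch of automation triage: score = breakage frequency x criticality.
# Flows and numbers are illustrative.
flows = [
    {"name": "checkout",   "breaks_per_quarter": 4, "criticality": 5},
    {"name": "onboarding", "breaks_per_quarter": 3, "criticality": 5},
    {"name": "settings",   "breaks_per_quarter": 1, "criticality": 2},
]

ranked = sorted(
    flows,
    key=lambda f: f["breaks_per_quarter"] * f["criticality"],
    reverse=True,
)
for flow in ranked:
    score = flow["breaks_per_quarter"] * flow["criticality"]
    print(f"{flow['name']}: automation priority {score}")
```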
Add security to your Definition of Done. Basic security checks should live alongside the rest of the "done" criteria. Not as a big separate project, but as a normal part of the release process.
Count quality in money. Count it in hours of rework, release delays, support load, conversion drops, lost demos, and users who didn't come back. That's the point at which quality stops being an "internal engineering topic."
Invest in processes, not in extra hires. Most problems people try to solve by hiring more developers are solvable with transparent processes, clear roles, and documented decisions. It costs several times less.
Conclusion
Quality isn't a stage and it isn't a separate function at the end of the process. Quality is the way a team thinks about the product in the first place. Startups that get this hold on even in a tough market. The ones that keep pushing out new features on top of a system that's already falling apart eventually grind to a halt - even if, from the outside, it looks for a while like they're moving very fast.
If this is about your team
At GenSoft we build QA processes that actually work: auditing where your bugs are born, building a process from scratch, or providing a QA team as a service.
A 30-minute call - tell us what's going on, and we'll suggest where to start.
I'm Mykhailo Malashevskyi, co-founder of GenSoft - a QA agency that helps startups and product teams build quality processes that actually work, rather than existing in Confluence for decoration.