After 6+ years in QA, I’ve realized that high coverage is often just a vanity metric. In fact, some of the best engineering teams I've worked with have lower UI coverage because they prioritize pipeline speed over script count.
Here is the truth: automation isn’t free. You pay for it in maintenance and frustration every time a developer has to wait for a build to finish, only for it to fail because of a flaky selector.
1. The "Pipeline Poison" Rule
I view every new test as a potential liability to the build. If a test is flaky, it’s worse than having no test at all. It creates "noise" that the team eventually starts to ignore. Once the devs stop trusting the red lights in CI, your QA process is officially dead.
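One damage-control pattern that helps: quarantine. A known-flaky test keeps running and reporting, but stops gating the build until it’s fixed. Here is a minimal sketch using Playwright’s project filters; the @quarantine tag is my own convention, not a framework feature:

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // The build gates on this project only.
    { name: 'stable', grepInvert: /@quarantine/ },
    // Known-flaky tests still run and report, but shouldn't block merges.
    { name: 'quarantine', grep: /@quarantine/, retries: 2 },
  ],
});
```

Tag a flaky test by adding @quarantine to its title, then fix it or delete it. Quarantine is a holding pen, not a retirement home.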
I skip automation if the feature is "vibrating": If the UI or requirements are changing every few days, you're just writing throwaway code. I wait for the feature to reach a "Solid" state—meaning it has survived at least two sprints without a major logic change—before I touch it.
I skip it if the setup is a nightmare: If I need to seed 10 different databases, bypass two-factor authentication, and mock 5 external APIs just to test one "Submit" button, the ROI isn't there. I’ll take the 30 seconds to check it manually instead of spending two days debugging a brittle setup script.
2. The Senior Decision Framework
I’ve moved away from the "Automate Everything" mindset to a "Selective Strike" strategy. Here is exactly how I categorize tickets during sprint planning:
The "Must-Automate" List
The Boring Stuff: Repetitive data entry with 50+ fields. If a human has to type it more than twice, a machine should do it.
The Math: Humans are prone to errors when verifying complex calculations, tax logic, or currency conversions. Machines don't get tired of math (see the sketch after this list).
The Smoke Test: The "Happy Path" that proves the app actually starts and the user can log in. This is the heartbeat of your pipeline.
Data Integrity: Validating that what you entered in Step 1 actually shows up in the database in Step 10.
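To make "The Math" concrete, here is a minimal data-driven sketch. The /api/convert endpoint and the expected values are hypothetical, and this assumes a baseURL is set in your Playwright config; the point is that the machine grinds through the table so a human doesn't have to:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical endpoint and expected values; swap in your real pricing/tax API.
const cases = [
  { amount: 100.0, from: 'USD', to: 'EUR', expected: 92.3 },
  { amount: 19.99, from: 'USD', to: 'GBP', expected: 15.79 },
  { amount: 0.01, from: 'USD', to: 'JPY', expected: 1.49 }, // edge case: smallest unit
];

for (const c of cases) {
  test(`converts ${c.amount} ${c.from} to ${c.to}`, async ({ request }) => {
    const res = await request.get(
      `/api/convert?amount=${c.amount}&from=${c.from}&to=${c.to}`
    );
    expect(res.ok()).toBeTruthy();
    const body = await res.json();
    expect(body.converted).toBeCloseTo(c.expected, 2); // tolerate float rounding
  });
}
```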
The "Do Not Touch" List
Third-Party Handoffs: Don't try to automate a redirect to an external bank portal or a "Login with Google" flow. These external sites change their DOMs without telling you. Use a mock (sketched after this list) or verify it manually.
Visual "Feel": A script can tell you a button passes an is_visible() check, but it won't tell you it’s overlapping the text or that the font is unreadable on a 13-inch laptop.
One-and-Done Features: If we’re doing a seasonal promotion that only lasts two weeks, don't spend three days scripting it.
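For the "Login with Google" case, here is a minimal sketch of mocking the handoff instead of driving the real provider. The provider URL pattern, callback path, and app origin are all hypothetical; match them to your own OAuth flow:

```ts
import { test, expect } from '@playwright/test';

test('login via Google handoff (mocked)', async ({ page }) => {
  // Short-circuit the hop to the external provider with a fake callback redirect.
  await page.route('https://accounts.google.com/**', (route) =>
    route.fulfill({
      status: 302,
      // Absolute URL back to the app under test (hypothetical origin and path).
      headers: { location: 'http://localhost:3000/auth/callback?code=fake-code' },
    })
  );

  await page.goto('/login');
  await page.click('button:has-text("Login with Google")');

  // We only verify OUR side of the contract: the app handles the callback.
  await expect(page).toHaveURL(/dashboard/);
});
```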
3. The "Hybrid" Strategy: Stop Testing through the UI
One of the biggest mistakes I see is trying to drive every test through the browser. This is slow, expensive, and fragile. If you want to verify that a user's profile updated, you don't need to click through the whole UI every time.
UI-heavy approach (fragile)
This approach relies on the DOM being perfect every time.

```ts
import { test, expect } from '@playwright/test';

test('user updates profile - the slow way', async ({ page }) => {
  await page.goto('/settings');
  await page.fill('#bio-input', 'New Bio');
  await page.click('.save-button-variant-2'); // This selector will break eventually
  await expect(page.locator('.success-toast')).toBeVisible();
});
```
Balanced approach (faster and stable)
We use the API to verify the logic, and a separate, tiny test for the UI.

```ts
import { test, expect } from '@playwright/test';

// 1. Check the logic via API (milliseconds)
test('profile update data integrity', async ({ request }) => {
  const response = await request.patch('/api/user/profile', {
    data: { bio: 'New Bio' },
  });
  expect(response.ok()).toBeTruthy();
});

// 2. Check the UI once (does the button work?)
test('save button triggers action', async ({ page }) => {
  await page.goto('/settings');
  await page.click('button:has-text("Save")');
  // We don't need to re-check the data here; the API test already did that.
  await expect(page.locator('.success-toast')).toBeVisible();
});
```
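The split also isolates failures: if the API test goes red, the logic broke; if only the UI test goes red, a selector or rendering change broke. One refinement, if the UI check still feels coupled to the backend: stub the app's own endpoint so the button test is fully hermetic. A minimal sketch, assuming the same /api/user/profile route as above:

```ts
// Inside the UI test, before clicking Save: the PATCH never leaves the browser.
await page.route('**/api/user/profile', (route) =>
  route.fulfill({ status: 200, contentType: 'application/json', body: '{}' })
);
```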
4. The Maintenance Burden: A Case Study
Think about a standard E-commerce checkout flow. It involves:
- Adding an item to a cart.
- Entering a shipping address.
- Entering a credit card.
- Verifying the order confirmation.
If you automate this purely through the UI, you have roughly 40-50 locators that could break. If any one of them fails due to a network hiccup or a minor CSS change, your entire build stops.
My approach: I automate the "Add to Cart" and "Checkout" API calls to ensure the backend works. Then, I do a quick manual "Sanity Check" on the UI for different browser resolutions. This keeps the pipeline "Green" and keeps the developers happy.
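Here is roughly what that backend check looks like as a single API-level test. The endpoints and payload shapes are hypothetical stand-ins for your own cart and checkout APIs:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical endpoints; substitute your real cart/checkout API.
test('checkout backend contract', async ({ request }) => {
  // Add an item to the cart.
  const cart = await request.post('/api/cart/items', {
    data: { sku: 'SKU-123', qty: 1 },
  });
  expect(cart.ok()).toBeTruthy();

  // Place the order with sandbox payment details (never a real card in tests).
  const order = await request.post('/api/checkout', {
    data: {
      shipping: { line1: '1 Test St', city: 'Testville', zip: '00000' },
      payment: { token: 'tok_sandbox' },
    },
  });
  expect(order.ok()).toBeTruthy();

  // Verify the order confirmation.
  const body = await order.json();
  expect(body.status).toBe('confirmed');
  expect(body.items).toHaveLength(1);
});
```

Zero locators in the critical path, so a CSS refactor or network hiccup in the browser can't stop the build.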
5. The Human Element: Why Exploratory Testing Wins
Automation is a "checker"—it confirms what we already know. It doesn't discover anything new. It’s a safety net, not a bug hunter.
I save my energy for Exploratory Testing. This is where the real bugs live. A script won't notice that the app feels "laggy," that a scrollbar is hidden, or that the user flow is confusing. Use the machine to handle the repetitive, robotic stuff so you have the brainpower to actually try and break things like a real user would.
Business Impact
In practice, this approach results in:
- Faster and more reliable CI pipelines
- Fewer false failures and re-runs
- Higher developer trust in test results
- Reduced maintenance cost
- Faster release cycles with lower risk
Final Thought
The goal isn’t maximum coverage—it’s maximum confidence.
If a test slows down delivery without meaningfully reducing risk, it doesn’t belong in the pipeline.