I want to talk about something that changed how I build software. For a long time, I thought of testing as a separate phase, a final hurdle before shipping code. It was often tedious, sometimes skipped, and felt like a barrier to getting things done. I was wrong. Today, testing isn't a gate. It's the foundation that lets us build with confidence and move fast without breaking everything. It’s the safety net that allows for creativity. Let me show you what that looks like in practice.
Think of building a web application like constructing a complex machine with many moving parts. You wouldn't just build it, close the lid, and hope it works. You'd test each gear, each lever, and then the whole system under different conditions. Modern testing gives us the tools to do exactly that, automatically and continuously, as we build.
We start with the smallest parts. In a React application, that's a component. A button, a form input, a card. Component testing lets me verify these pieces in isolation. I can ask questions like, "Does this button call its function when clicked?" or "Does this input show an error when given bad data?" I use tools like React Testing Library for this. It encourages tests that focus on how a user interacts with the component, not its internal details.
Here’s a simple example. Imagine a Button component.
// Button.jsx
export const Button = ({ onClick, children, disabled = false }) => {
  return (
    <button onClick={onClick} disabled={disabled} aria-disabled={disabled}>
      {children}
    </button>
  );
};
To test it, I write something that resembles how a person would use it.
// Button.test.jsx
import { render, screen, fireEvent } from '@testing-library/react';
import { Button } from './Button';

describe('Button Component', () => {
  it('calls the onClick handler when clicked', () => {
    // Create a mock function to track calls
    const mockClickHandler = jest.fn();
    // Render the component with our mock function
    render(<Button onClick={mockClickHandler}>Save</Button>);
    // Find the button by its text and "click" it
    const buttonElement = screen.getByText('Save');
    fireEvent.click(buttonElement);
    // Assert that our mock function was called exactly once
    expect(mockClickHandler).toHaveBeenCalledTimes(1);
  });

  it('is disabled and has proper aria attribute when disabled prop is true', () => {
    render(<Button onClick={() => {}} disabled={true}>Submit</Button>);
    const buttonElement = screen.getByText('Submit');
    // Check the HTML attribute
    expect(buttonElement).toBeDisabled();
    // Check the ARIA attribute for accessibility
    expect(buttonElement).toHaveAttribute('aria-disabled', 'true');
  });
});
This kind of test is fast and precise. If I change the button's internal logic later—maybe I refactor how the click handler is bound—this test will still pass as long as the button behaves the same way from a user's perspective. That's the key.
But components don't live in a vacuum. They have different looks based on the data they receive—different "states." A modal can be open or closed. A data table can be loading, show data, or display an error. This is where tools like Storybook become invaluable. I can build a visual catalog of my components in every possible state. It's a living style guide and a test environment in one. Developers, designers, and product managers can all see the building blocks and verify their appearance and interaction manually. More importantly, I can pair it with automated "visual snapshot" tests to catch unexpected changes in layout or style.
Consider a ProfileCard that shows a user's avatar, name, and a loading state.
// ProfileCard.stories.jsx
import ProfileCard from './ProfileCard';

export default {
  title: 'Components/ProfileCard',
  component: ProfileCard,
};

// Story for the default state
export const Default = {
  args: {
    userName: 'Jane Doe',
    userAvatar: 'https://example.com/avatar.jpg',
    isLoading: false,
  },
};

// Story for the loading state
export const Loading = {
  args: {
    isLoading: true,
  },
};

// Story with a long name
export const LongName = {
  args: {
    userName: 'Dr. Alexander Theophilius Montgomery III',
    userAvatar: 'https://example.com/avatar2.jpg',
    isLoading: false,
  },
};
I can then use a test runner to automatically take screenshots of each of these "stories" and compare them to a previously approved baseline. If a CSS change accidentally makes the long name overflow and break the layout, the test fails and shows me the pixel difference. This catches bugs that unit tests would miss because the component's logic might still be correct, but its presentation is broken.
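Here's what that can look like with Playwright's screenshot assertion pointed at Storybook. This is a sketch, not the only approach: it assumes Storybook is running locally on its default port 6006, and that the story IDs below follow Storybook's standard convention of deriving them from the title and export names.
// tests/profileCard.visual.spec.js
const { test, expect } = require('@playwright/test');

// Story IDs derived from the 'Components/ProfileCard' title above
const storyIds = ['default', 'loading', 'long-name'];

for (const storyId of storyIds) {
  test(`ProfileCard ${storyId} matches the visual baseline`, async ({ page }) => {
    // Storybook serves each story in isolation at this iframe URL
    await page.goto(
      `http://localhost:6006/iframe.html?id=components-profilecard--${storyId}`
    );
    // Compare against a stored baseline image; the first run creates it,
    // and any later pixel difference fails the test with a diff image
    await expect(page).toHaveScreenshot(`profile-card-${storyId}.png`);
  });
}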
Of course, putting components together into pages and flows is where the real challenge lies. This is the realm of end-to-end testing. These tests simulate a real user on a real browser, clicking through your application. They are slower and more fragile than unit tests, so we use them sparingly for the most critical user journeys. A typical flow might be "user logs in, adds an item to the cart, and completes checkout."
I often use Cypress or Playwright for these tests. They feel different from other testing tools—you write instructions as if you're guiding a robot through your site. Here's what a checkout flow might look like with Playwright.
// tests/checkout.spec.js
const { test, expect } = require('@playwright/test');

test('Complete user checkout flow', async ({ page }) => {
  // 1. Go to the product page
  await page.goto('https://myshop.example.com/products');

  // 2. Click on the first product
  await page.locator('[data-testid="product-card"]').first().click();

  // 3. Add it to the cart from the product detail page
  await page.locator('[data-testid="add-to-cart-button"]').click();

  // 4. Verify the cart counter updates
  const cartCount = page.locator('[data-testid="cart-count"]');
  await expect(cartCount).toHaveText('1');

  // 5. Go to the cart page
  await page.locator('[data-testid="cart-icon"]').click();

  // 6. Click the checkout button
  await page.locator('button:has-text("Proceed to Checkout")').click();

  // 7. Fill out the shipping form
  await page.fill('[data-testid="shipping-name"]', 'Alex Johnson');
  await page.fill('[data-testid="shipping-address"]', '123 Main St');
  // ... fill other fields

  // 8. Submit the form and confirm we reach the order summary
  await page.locator('[data-testid="submit-shipping"]').click();
  await expect(page.locator('[data-testid="order-summary"]')).toBeVisible();
  await expect(page).toHaveURL(/order-confirmation/);
});
This test covers a lot of ground: navigation, interaction with multiple UI elements, form filling, and state persistence. When this test passes, I have high confidence that the core purchasing mechanic of the store is functional. I run these tests in a continuous integration pipeline before any code is merged.
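To keep those end-to-end runs stable in CI, I tune the Playwright configuration. Here's a minimal sketch; the retry and worker counts are my own conventions, not requirements.
// playwright.config.js
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  testDir: './tests',
  // Retry flaky tests in CI, but fail fast locally
  retries: process.env.CI ? 2 : 0,
  // Limit parallelism on shared CI runners
  workers: process.env.CI ? 2 : undefined,
  use: {
    // Record a trace only when a test has to retry, to keep runs fast
    trace: 'on-first-retry',
  },
});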
Speed and performance are non-negotiable for modern web apps. A beautifully functional application is useless if it takes ten seconds to load. Performance testing is often an afterthought, but it shouldn't be. I integrate performance checks early. One way is using Lighthouse programmatically to audit key pages. I can set a budget: "The homepage must have a Lighthouse performance score above 90 on a simulated 4G connection."
// scripts/performance-audit.js
// Note: recent versions of lighthouse and chrome-launcher are ESM-only;
// if require() fails, switch to `await import('lighthouse')` instead.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function auditPage(url) {
  console.log(`\n🔍 Running audit for: ${url}`);
  // Launch a new Chrome instance for the test
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = {
    logLevel: 'info',
    output: 'json',
    onlyCategories: ['performance'],
    port: chrome.port,
    // Simulate a slower network and CPU
    throttling: {
      rttMs: 150,
      throughputKbps: 1638.4, // Typical 4G speed
      cpuSlowdownMultiplier: 4,
    },
  };
  const runnerResult = await lighthouse(url, options);
  await chrome.kill(); // Don't forget to close Chrome

  const report = runnerResult.lhr;
  const performanceScore = report.categories.performance.score * 100;
  console.log(`📊 Performance Score: ${performanceScore.toFixed(0)}/100`);

  // Log specific metrics
  const metrics = report.audits;
  console.log(`⏱️ First Contentful Paint: ${metrics['first-contentful-paint'].displayValue}`);
  console.log(`🔨 Largest Contentful Paint: ${metrics['largest-contentful-paint'].displayValue}`);
  console.log(`⏳ Total Blocking Time: ${metrics['total-blocking-time'].displayValue}`);

  // Fail the script if below our threshold (e.g., for CI)
  if (performanceScore < 90) {
    console.error('❌ Performance budget failed.');
    process.exit(1); // This will fail a CI/CD pipeline
  } else {
    console.log('✅ Performance budget met.');
  }
}

// Audit our key pages
(async () => {
  await auditPage('http://localhost:3000/');
  await auditPage('http://localhost:3000/products');
})();
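The script assumes the application is already being served at localhost:3000. In CI I build and serve the app first, then run the audit. Here's one way to sequence it, using the wait-on package to hold until the server responds; the build and start script names are assumptions about your project.
# Serve a production build locally, then audit it
npm run build
npm run start &
npx wait-on http://localhost:3000
node scripts/performance-audit.js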
Modern applications are rarely islands. They talk to backend services, payment processors, and data providers. This introduces a new problem: when the team maintaining the backend API makes a change, how do I know my frontend will still work? Conversely, if I update my frontend code, how do I know I'm not accidentally breaking the contract the backend expects? This is where contract testing shines.
Think of a contract as a formal agreement: "Given this request, I promise to return that response." Tools like Pact let me define this agreement as code and verify both sides independently. The frontend team creates a "pact" during their tests, specifying what they expect the backend to do. This pact file is then shared, and the backend team runs their own tests to verify they satisfy all the contracts.
Here’s a simplified look from the frontend perspective.
// tests/userService.pact.test.js
const { Pact } = require('@pact-foundation/pact');
const path = require('path');
const { getUser } = require('../services/userService');

// Define the mock provider (our fake backend)
const provider = new Pact({
  consumer: 'WebAppFrontend',
  provider: 'UserServiceBackend',
  log: path.resolve(process.cwd(), 'logs', 'pact.log'),
  logLevel: 'warn',
  dir: path.resolve(process.cwd(), 'pacts'), // Where to save the contract
  port: 1234, // Port for the mock server
});

describe('User Service Contract', () => {
  beforeAll(() => provider.setup()); // Start the mock server
  afterEach(() => provider.verify()); // Verify all interactions happened
  afterAll(() => provider.finalize()); // Write the pact file

  describe('when a request is made for a user', () => {
    beforeAll(() => {
      // Define the interaction: the shape of request and response
      return provider.addInteraction({
        state: 'a user with id 123 exists',
        uponReceiving: 'a request for user details',
        withRequest: {
          method: 'GET',
          path: '/users/123',
          headers: { 'Accept': 'application/json' },
        },
        willRespondWith: {
          status: 200,
          headers: { 'Content-Type': 'application/json' },
          body: {
            id: 123,
            name: 'Maria Garcia',
            email: 'maria@example.com',
          },
        },
      });
    });

    it('will receive the correct user data', async () => {
      // Our actual service function makes a request to the mock server
      const user = await getUser(123);
      // These assertions verify our frontend code works with the defined contract
      expect(user).toBeDefined();
      expect(user.name).toBe('Maria Garcia');
      expect(user.email).toBe('maria@example.com');
    });
  });
});
When this test runs, it starts a mock server on port 1234 that returns the exact JSON we defined. The getUser(123) function calls this mock server. If our function tries to call the wrong path or can't handle the JSON structure, the test fails. When it passes, a pacts/WebAppFrontend-UserServiceBackend.json file is generated. This file is the contract. The backend team can now run their own verification against this file to ensure their actual API matches it. It decouples our teams and prevents integration surprises.
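For completeness, here's roughly what the backend team's side can look like, sketched with Pact's Verifier. It assumes their API is running locally on port 8080 during the test; the port and file layout are illustrative.
// tests/userService.provider.test.js
const { Verifier } = require('@pact-foundation/pact');
const path = require('path');

describe('UserServiceBackend provider verification', () => {
  it('satisfies the WebAppFrontend contract', () => {
    return new Verifier({
      provider: 'UserServiceBackend',
      // The running instance of the real API under test
      providerBaseUrl: 'http://localhost:8080',
      // The contract file generated by the frontend tests
      pactUrls: [
        path.resolve(process.cwd(), 'pacts', 'WebAppFrontend-UserServiceBackend.json'),
      ],
      // In a real suite you'd also wire up handlers so the API can
      // prepare data for states like 'a user with id 123 exists'
    }).verifyProvider();
  });
});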
Building for everyone is a core responsibility. Accessibility testing ensures people using assistive technologies like screen readers can use our applications. It's both a moral imperative and a legal requirement in many cases. Automated tools can catch common issues, but they are just a starting point.
I integrate a simple automated check into my component tests using jest-axe.
// components/Modal.test.jsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { Modal } from './Modal';

expect.extend(toHaveNoViolations);

describe('Modal Accessibility', () => {
  it('has no detectable accessibility violations', async () => {
    // Render the modal in its open state
    const { container } = render(
      <Modal isOpen={true} title="Confirm Action" onClose={() => {}}>
        <p>Are you sure you want to proceed?</p>
        <button>Confirm</button>
      </Modal>
    );
    // Run the accessibility audit on the container
    const results = await axe(container);
    // This assertion will fail if violations are found
    expect(results).toHaveNoViolations();
  });

  // A more specific test for a known requirement
  it('traps keyboard focus when open', () => {
    render(
      <Modal isOpen={true} title="Test" onClose={() => {}}>
        <button>First</button>
        <button>Second</button>
      </Modal>
    );
    // In reality, you'd use a more sophisticated check here.
    // This test would ensure focus starts on the first focusable element
    // and cycles within the modal, not escaping to the background page.
    // Implementation depends on your modal library or custom logic.
  });
});
Automated checks catch issues like missing image alt text, poor color contrast, or invalid ARIA attributes. However, they can't tell if the logical tab order makes sense or if the screen reader announcements are helpful. For that, manual testing with a keyboard and a screen reader is irreplaceable. I make it a habit to navigate every new feature using only the Tab key.
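Some of that keyboard work can still be smoke-tested automatically. The sketch below uses @testing-library/user-event to assert that Tab moves focus in the order I intended—it can't judge whether that order is sensible, only that it matches my expectation. The LoginForm component and its fields are hypothetical.
// components/LoginForm.test.jsx
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { LoginForm } from './LoginForm'; // hypothetical component

it('moves focus through the form in the intended order', async () => {
  const user = userEvent.setup();
  render(<LoginForm />);

  // Tab through the form and assert the focus order
  await user.tab();
  expect(screen.getByLabelText('Email')).toHaveFocus();
  await user.tab();
  expect(screen.getByLabelText('Password')).toHaveFocus();
  await user.tab();
  expect(screen.getByRole('button', { name: 'Log in' })).toHaveFocus();
});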
Finally, I want to know if my tests are any good. Writing lots of tests is easy; writing effective tests is hard. This is where mutation testing comes in. It's a fascinating process. A tool like Stryker takes my code, creates small, faulty versions of it (mutants), and then runs my test suite. If my tests fail and "kill" the mutant, that's good. If the mutant survives, it means my tests didn't detect that fault. A high mutation score gives me confidence that my tests are thorough.
Running Stryker is usually straightforward from the command line. The report is what matters.
# Running Stryker for a JavaScript project
npx stryker run
# Sample of what you might see in the console output
[2023-10-27] INFO MutationTestExecutor Done in 1 minute, 42 seconds.
[2023-10-27] INFO Your mutation score has improved!
[2023-10-27] INFO Mutations: 104
[2023-10-27] INFO Killed: 98 (94.23%)
[2023-10-27] INFO Survived: 6 (5.77%)
[2023-10-27] INFO Timeouts: 0 (0.00%)
# It also points you to the surviving mutants, like:
# Survived in `/src/services/calculator.js`:
# - Replaced 'a + b' with 'a - b' on line 5.
# This tells me I need a test for the addition logic in my calculator.
Seeing that I killed 94% of mutants is reassuring. The 6% that survived are a to-do list for improving my tests. Maybe I missed a specific edge case in a calculation, or perhaps a conditional branch is never exercised. Mutation testing guides me to write better tests, not just more tests.
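Killing that example mutant is a matter of testing the addition path directly. A sketch, assuming calculator.js exports an add function (the name is my inference from the report):
// src/services/calculator.test.js
const { add } = require('./calculator');

it('adds two numbers', () => {
  // The surviving mutant replaced 'a + b' with 'a - b'.
  // 2 + 3 must be 5; the mutant would return -1, so this test kills it.
  expect(add(2, 3)).toBe(5);
});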
All these patterns form a layered verification system. Unit and component tests are my first, fast line of defense. Visual regression tests guard the look and feel. End-to-end tests verify the critical user paths. Performance, contract, and accessibility tests ensure quality attributes that are easy to degrade. Mutation testing ensures the whole system is effective.
Integrating this into a development workflow transforms the process. Every pull request can trigger this entire suite. A failing test blocks a merge. This provides immediate feedback. It means a bug introduced at 10 AM can be found and fixed by 10:05, not reported by a user at 10 PM. This safety net is what allows for confident refactoring, continuous deployment, and rapid innovation. Testing is no longer about finding bugs at the end. It's about designing software that is robust from the very first line.
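None of this needs exotic tooling to wire up. Here's a sketch of the npm scripts (in package.json) I might expose so a CI pipeline can run each layer; the script names and ordering are my own convention, and I tend to run the slower mutation tests on a schedule rather than on every pull request.
{
  "scripts": {
    "test": "jest",
    "test:e2e": "playwright test",
    "test:perf": "node scripts/performance-audit.js",
    "test:mutation": "stryker run",
    "ci": "npm run test && npm run test:e2e && npm run test:perf"
  }
}
With that in place, the whole safety net runs on every change, and the feedback loop stays measured in minutes.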