Let's talk about testing. I write code, and sometimes, it breaks. We've all been there. A small change in one corner of the application causes a ripple effect of errors somewhere else. It's frustrating and time-consuming. That's why I've come to rely on unit tests not as a chore, but as a foundational part of how I build software.
Think of unit testing as checking the individual bricks before you build a wall. You want to know each brick is solid, can bear weight, and fits correctly. In code, a "unit" is often a single function, a method, or a small, focused class. The goal is to verify that this one piece works exactly as it should, completely on its own. Modern JavaScript tools have turned this process from a basic checkbox into a powerful engineering practice. I want to show you some of the methods I use every day to write tests that are not just good, but genuinely helpful.
The first thing I focus on is isolation. Real code doesn't live in a vacuum. A function might call a database, send an email via an API, or write to a file. If I test it with the real database, my test becomes slow, fragile, and dependent on external systems being online. This is where mocking becomes essential.
Mocking means I create a stand-in for those external dependencies. I can tell this stand-in exactly how to behave for my test. Did the function call the API with the right data? I can check that. Did it handle an error from the database gracefully? I can make the mock throw an error and see what happens. This lets me test my logic in perfect isolation.
Let me show you a practical example. Imagine a class that processes payments. It needs a payment gateway and a logger.
class PaymentProcessor {
  constructor(paymentGateway, logger) {
    this.gateway = paymentGateway;
    this.logger = logger;
  }

  async processPayment(amount, currency, customerId) {
    try {
      const transactionId = await this.gateway.charge(amount, currency, customerId);
      this.logger.info('Payment processed', { transactionId, amount });
      return { success: true, transactionId };
    } catch (error) {
      this.logger.error('Payment failed', { error: error.message });
      return { success: false, error: error.message };
    }
  }
}
In a test, I don't want to charge real money. So I create mock objects.
describe('PaymentProcessor', () => {
  let mockGateway;
  let mockLogger;
  let processor;

  beforeEach(() => {
    // Create fresh mocks for every test
    mockGateway = {
      charge: jest.fn() // This is a Jest mock function
    };
    mockLogger = {
      info: jest.fn(),
      error: jest.fn()
    };
    processor = new PaymentProcessor(mockGateway, mockLogger);
  });

  it('should log info on successful payment', async () => {
    // Arrange: Set up the mock's behavior
    mockGateway.charge.mockResolvedValue('fake_transaction_123');

    // Act: Call the real method
    const result = await processor.processPayment(100, 'USD', 'cust_abc');

    // Assert: Check the results AND the interactions
    expect(result.success).toBe(true);
    expect(mockLogger.info).toHaveBeenCalledWith(
      'Payment processed',
      { transactionId: 'fake_transaction_123', amount: 100 }
    );
  });

  it('should log error on failed payment', async () => {
    // Arrange: Make the mock throw an error
    mockGateway.charge.mockRejectedValue(new Error('Card declined'));

    // Act
    const result = await processor.processPayment(100, 'USD', 'cust_abc');

    // Assert
    expect(result.success).toBe(false);
    expect(mockLogger.error).toHaveBeenCalledWith(
      'Payment failed',
      { error: 'Card declined' }
    );
  });
});
This pattern is powerful. I can simulate any scenario: slow networks, invalid responses, timeouts. My tests run in milliseconds because they're not waiting on real services. They also won't fail just because the payment service is down for maintenance.
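Each of those scenarios is just a few lines of mock configuration. A sketch, reusing the mockGateway from the tests above:

// A slow, flaky network: reject only after a simulated delay
mockGateway.charge.mockImplementation(
  () => new Promise((_, reject) =>
    setTimeout(() => reject(new Error('Gateway timeout')), 50)
  )
);

// An invalid response: the gateway resolves with nothing useful
mockGateway.charge.mockResolvedValue(null);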
Now, let's add more complexity. What if our processor should retry failed payments up to three times? The logic inside the class gets more intricate, but our testing approach stays clean.
class RobustPaymentProcessor extends PaymentProcessor {
  constructor(paymentGateway, logger) {
    super(paymentGateway, logger);
    this.maxRetries = 3;
  }

  // The attempt counter travels as a parameter rather than living on the
  // instance, so a second payment on the same processor starts fresh.
  async processPayment(amount, currency, customerId, attempt = 0) {
    try {
      const transactionId = await this.gateway.charge(amount, currency, customerId);
      this.logger.info('Payment processed', { transactionId, amount });
      return { success: true, transactionId };
    } catch (error) {
      this.logger.error('Payment failed', { error: error.message });
      if (error.isRetryable && attempt < this.maxRetries) {
        const nextAttempt = attempt + 1;
        this.logger.info(`Retrying payment (attempt ${nextAttempt})`);
        // Wait a bit, then try again
        await new Promise(resolve => setTimeout(resolve, 1000));
        return this.processPayment(amount, currency, customerId, nextAttempt);
      }
      return { success: false, error: error.message };
    }
  }
}
Testing this requires controlling not just the mock's response, but also its behavior across multiple calls. I can sequence mock responses.
describe('RobustPaymentProcessor retry logic', () => {
  it('should retry on a retryable error and eventually succeed', async () => {
    const mockGateway = { charge: jest.fn() };
    const mockLogger = { info: jest.fn(), error: jest.fn() };
    const processor = new RobustPaymentProcessor(mockGateway, mockLogger);

    // Simulate two failures, then a success
    const retryableError = new Error('Temporary gateway issue');
    retryableError.isRetryable = true;
    mockGateway.charge
      .mockRejectedValueOnce(retryableError) // First call fails
      .mockRejectedValueOnce(retryableError) // Second call fails
      .mockResolvedValueOnce('txn_789');     // Third call succeeds

    // Note: this test sits through two real 1-second retry delays;
    // with fake timers (shown later) it would run instantly.
    const result = await processor.processPayment(50, 'EUR', 'cust_xyz');

    expect(result.success).toBe(true);
    expect(result.transactionId).toBe('txn_789');
    expect(mockGateway.charge).toHaveBeenCalledTimes(3); // Called three times
    expect(mockLogger.info).toHaveBeenCalledWith(
      'Retrying payment (attempt 1)'
    );
  });
});
This approach to mocking lets me test complex workflows and stateful logic with precision. I'm not just checking the final output; I'm verifying the journey the code takes—the sequence of calls, the arguments passed, and the internal state changes.
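Jest records every call on the mock itself, so inside the retry test above I can even assert the full call history (mock.calls is an array of argument lists):

expect(mockGateway.charge.mock.calls).toEqual([
  [50, 'EUR', 'cust_xyz'], // Original attempt
  [50, 'EUR', 'cust_xyz'], // Retry 1
  [50, 'EUR', 'cust_xyz'], // Retry 2 (succeeds)
]);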
Asynchronous code is a core part of JavaScript, and testing it used to be a headache. Callbacks, promises, and async/await all need special handling. Modern frameworks handle this seamlessly. The key is to always await the result of your async function in the test.
it('should handle async data fetching', async () => {
  const mockFetch = jest.fn();
  const dataService = {
    getUserData: async (userId) => {
      const response = await mockFetch(`/api/users/${userId}`);
      return response.json();
    }
  };

  // Mock the resolved value of the promise
  mockFetch.mockResolvedValue({
    json: async () => ({ id: 'user1', name: 'Alice' })
  });

  const user = await dataService.getUserData('user1');

  expect(user.name).toBe('Alice');
  expect(mockFetch).toHaveBeenCalledWith('/api/users/user1');
});
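The failure path reads just as naturally. Jest's rejects helper awaits the promise and asserts on the rejection; a sketch, assuming mockFetch and dataService are lifted into a surrounding describe block so both tests can share them:

it('should surface fetch errors', async () => {
  mockFetch.mockRejectedValue(new Error('Network unreachable'));

  await expect(dataService.getUserData('user1'))
    .rejects.toThrow('Network unreachable');
});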
What about code that uses setTimeout or setInterval? Testing timers can make tests slow and flaky. The solution is to use "fake timers." This replaces the real timer functions with ones you can control.
describe('Timer-based functions', () => {
  beforeEach(() => {
    jest.useFakeTimers(); // Replace global timers with Jest's fakes
  });

  afterEach(() => {
    jest.useRealTimers(); // Restore real timers after each test
  });

  it('should execute callback after delay', () => {
    const callback = jest.fn();
    const delayedExecutor = {
      runAfterDelay: (cb, delayMs) => {
        setTimeout(() => cb('done'), delayMs);
      }
    };

    delayedExecutor.runAfterDelay(callback, 5000);

    // At this point, the callback has NOT been called yet
    expect(callback).not.toHaveBeenCalled();

    // Fast-forward time by 5 seconds
    jest.advanceTimersByTime(5000);

    // Now it should have been called
    expect(callback).toHaveBeenCalledWith('done');
  });
});
This is incredibly useful for testing debouncing, polling, or any kind of delayed logic. The test runs instantly because you're not actually waiting five seconds.
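For instance, here is how I would test a hand-rolled debounce helper. A sketch; the debounce implementation itself is illustrative:

function debounce(fn, waitMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

it('only fires the trailing call', () => {
  jest.useFakeTimers();
  const spy = jest.fn();
  const debounced = debounce(spy, 300);

  debounced('a');
  debounced('b');
  debounced('c'); // Rapid calls; only the last should survive

  jest.advanceTimersByTime(300);

  expect(spy).toHaveBeenCalledTimes(1);
  expect(spy).toHaveBeenCalledWith('c');
  jest.useRealTimers();
});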
Organization matters. When a project grows, you might have hundreds of tests. Keeping them maintainable is crucial. I group related tests in describe blocks, often mirroring my project's file structure. For a file src/services/payment.js, I'll have a test file at tests/services/payment.test.js. Inside, I structure tests by method or by behavior.
// tests/services/payment.test.js
describe('Payment Service', () => {
  describe('processSinglePayment()', () => {
    it('should succeed with valid card', () => {});
    it('should fail with insufficient funds', () => {});
  });

  describe('processBatchPayments()', () => {
    it('should process all items in array', () => {});
    it('should stop processing on critical error', () => {});
  });
});
A technique I find particularly elegant is parameterized testing. Instead of writing five separate tests for five different inputs, I write one test template and feed it an array of data.
describe('Input validation', () => {
  // A minimal inline validator so the example is self-contained;
  // a real implementation would live in its own module.
  const validator = {
    isValidEmail: (email) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)
  };

  test.each([
    ['valid@email.com', true],
    ['invalid-email', false],
    ['', false],
    ['user@domain.co.uk', true],
    ['user@domain', false],
  ])('validates email %s as %s', (email, expectedResult) => {
    expect(validator.isValidEmail(email)).toBe(expectedResult);
  });
});
This makes it very easy to add new test cases. The test output will clearly show which specific input failed. It’s perfect for testing pure functions with lots of boundary conditions.
Moving beyond just "does it work," I also ask "how much of my code is exercised by tests?" This is code coverage. It's a metric, not a goal in itself—100% coverage doesn't mean bug-free code—but it's a fantastic guide. It shows me the dark, untested corners of my codebase.
Most test runners can generate coverage reports. I often configure a minimum threshold so the build fails if coverage drops below, say, 80%. This keeps the team accountable. The reports highlight the lines, branches, and functions that no test touches. A branch is one fork of an if/else statement; branch coverage means exercising both the true and false paths.
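In Jest, that threshold is a few lines of configuration. A minimal sketch; the 80% figures are illustrative:

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  }
};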
Let's look at a simple function and see what full coverage requires.
function getDiscount(userType, orderAmount) {
  let discount = 0;
  if (userType === 'premium') {
    discount += 10;
  } else if (userType === 'vip') {
    discount += 20;
  }
  // Regular users get 0

  if (orderAmount > 100) {
    discount += 10;
  }

  return Math.min(discount, 25); // Cap at 25%
}
To cover this, I need tests for:
- A 'premium' user with an order <= 100 (10%).
- A 'premium' user with an order > 100 (10 + 10 = 20%).
- A 'vip' user with an order <= 100 (20%).
- A 'vip' user with an order > 100 (20 + 10 = 30%, which the cap clamps to 25%).
- A 'regular' user with an order <= 100 (0%).
- A 'regular' user with an order > 100 (10%).
That's at least six tests to cover all the logical paths, and the vip large-order case is the only one that proves the 25% cap actually binds. A coverage report would immediately show me if I missed the 'vip' branch or never exercised the cap.
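Parameterized testing pairs nicely with this: one test.each table hits every path. The values follow the list above:

describe('getDiscount', () => {
  test.each([
    ['premium', 50, 10],   // premium, small order
    ['premium', 150, 20],  // premium, large order
    ['vip', 50, 20],       // vip, small order
    ['vip', 150, 25],      // vip, large order: 30 clamped to the cap
    ['regular', 50, 0],    // neither userType branch
    ['regular', 150, 10],  // bulk bonus only
  ])('%s user ordering %i gets %i%% off', (userType, amount, expected) => {
    expect(getDiscount(userType, amount)).toBe(expected);
  });
});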
Sometimes, the output of a function or component is a complex object or a chunk of HTML. Writing assertions for every property is tedious. This is where snapshot testing shines. The first time you run the test, it saves the output to a file. On subsequent runs, it compares the new output to the saved "snapshot." If they differ, the test fails.
This is great for React components, configuration objects, or serialized data.
// In a Jest test for a React component
import renderer from 'react-test-renderer';
import MyComponent from './MyComponent';

it('renders correctly', () => {
  const tree = renderer
    .create(<MyComponent label="Submit" disabled={false} />)
    .toJSON();
  expect(tree).toMatchSnapshot();
});
The first run creates a __snapshots__ folder with a file containing the serialized component tree. If I later change the component's markup or a CSS class, the snapshot will differ and the test will fail. I then need to decide: is this change intentional? If yes, I update the snapshot. If no, I've caught a regression. It's a powerful safety net for UI changes.
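Updating is a one-flag operation, assuming Jest as the runner; in CI I prefer the stricter mode that refuses to write new snapshots:

# Re-record snapshots after an intentional change
npx jest -u

# In CI: fail on missing snapshots instead of silently writing them
npx jest --ci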
Finally, a word on performance. A test suite that takes 30 minutes to run won't be run often. I keep unit tests fast by adhering to the principles we've discussed: heavy mocking, no real I/O, and fake timers. I also use test filtering during development. I can tell the test runner to only run tests related to the files I've changed, or tests whose names match a keyword.
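In practice, that filtering is just a couple of runner flags (again assuming Jest):

# Run only tests related to files changed since the last commit
npx jest --onlyChanged

# Run only tests whose names match a keyword
npx jest -t "retry"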
These techniques form a toolkit. Mocking isolates. Async patterns keep things smooth. Structure maintains clarity. Parameterization adds efficiency. Coverage guides effort. Snapshots guard against regression. Performance ensures speed.
When I started, I saw tests as a tax on writing "real" code. Now, I see them as the first user of my code. Writing a test forces me to think about the interface: What inputs does this function need? What should it return? What errors could happen? This perspective often leads me to a simpler, more robust design before I've even written the implementation. The test isn't just verifying my work; it's helping shape it. That, to me, is the real benefit of advanced unit testing. It stops being about finding bugs and starts being about writing better code from the very first line.