I was lucky enough to start my career as a software developer in the era when automated testing was already a thing. If not always the reality, then at least a thing people talked about. And that's why, since my very first year as a professional developer, inspired by Kent Beck's and Uncle Bob's books, I've been trying to automate the testing of the systems I worked on, with varying degrees of success.
After a couple of years of working as a back-end developer, I often knew which testing strategies would pay off and which ones would bite me. I didn't feel overwhelmed or lost on a daily basis anymore. Actually, I felt pretty proficient at my job. I knew how to leverage automated tests to fill me with confidence that the software I was building actually did what I expected it to do. But then I grew interested in front-end web development, with all its quirks and wonders. The period when I handed back my ThinkPad, got a Mac, and tasted the famous Starbucks coffee marked the beginning of my life as a professional front-end developer.
But something wasn't right. I didn't feel as safe as I used to. Initially, I blamed it on various non-technical aspects of my new job, such as joining a new company or moving to a new country. Or maybe it was the coffee? However, soon I realized that the real issue was an industry-wide lack of a quality safety net. And by the industry, I mean the front-end industry. We are the people tasked with getting some data, presenting it to the user, responding to their input and possibly sending some data back to the back end. Yet, the most popular testing approaches closely resemble what we developed to test back-end systems, with just a hint of the front end.
To help me tell genuine front-end testing tools apart from repurposed back-end testing tools or half-baked front-end ones, I developed a little benchmark, and I think you may find it helpful as well. Before I reveal it, I'd like to highlight that it relies on one assumption: the purpose of automated tests is to tell us when the behavior of the system changes and, at the same time, to let us change the implementation, as long as the behavior stays intact, without any need to touch the tests.
Here comes the `font-weight` benchmark for front-end testing tooling.
Imagine that as part of your job you wrote the following CSS code:
```css
font-weight: bold;
```
What would happen if you refactored it to use a number instead of a word?
```css
font-weight: 700;
```
It's good if your tests still pass. It proves that they are not tightly coupled to the implementation details. In the end, both `bold` and `700` lead to the same behavior of your app.
But if your tests fail, then it means they are tightly coupled to the implementation and will, in fact, make any refactoring harder.
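To make that concrete, here is a minimal sketch of the kind of test that fails this benchmark. I'm assuming a Jest setup, and the file path is hypothetical; the important part is that the assertion targets the CSS source text rather than anything the browser actually renders.

```ts
// A test coupled to the implementation: it asserts on the CSS source text,
// so the bold -> 700 refactoring breaks it even though the rendered page
// looks exactly the same. The file path is hypothetical.
import { readFileSync } from "node:fs";
import { test, expect } from "@jest/globals";

test("the heading is bold", () => {
  const css = readFileSync("src/heading.css", "utf8");
  expect(css).toContain("font-weight: bold;");
});
```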
Finally, what would happen if you made a mistake when refactoring the code above and instead wrote this?
```css
font-weight: 800;
```
It's good if your tests fail now. It means they are actually testing what front-end development is all about, which is rendering the right pixels in the right order and at the right time.
But if they keep passing, then it is a sign that the front end of your front end may have 0% test coverage.
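For contrast, here is a sketch of a test that passes the benchmark. I'm using Playwright here purely as an example of a tool that can assert against the computed style the browser actually applies; the URL and selector are placeholders for your own app.

```ts
// A test coupled to the behavior: the browser computes `font-weight: bold`
// to 700, so this assertion passes for both `bold` and `700` and fails
// for the accidental `800`. The URL and selector are placeholders.
import { test, expect } from "@playwright/test";

test("the heading renders bold", async ({ page }) => {
  await page.goto("http://localhost:3000");
  await expect(page.locator("h1")).toHaveCSS("font-weight", "700");
});
```

A visual regression tool that compares screenshots would pass the benchmark for the same reason: it looks at what actually gets rendered, not at how you wrote it.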