Istvan Forgacs

Originally published at lambdatest.com

Shift-Left vs Test-First, Which One To Choose?

Twenty years ago, the test-first approach became popular thanks to Kent Beck’s famous book Test-Driven Development: By Example. Test-driven development (TDD) is a software development method where the test code is written before the application code. It consists of the following steps (a minimal sketch follows the list):

  • Write the test code for the unit.
  • Run all tests and check that the new test fails.
  • Write the code of the unit.
  • Run the tests and refactor the code.
  • Repeat until all tests pass.
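
Here is a minimal sketch of that cycle in TypeScript with a Jest-style test runner. The totalPrice function and its discount rule are invented for illustration; they are not taken from the article.

  // price.test.ts – written first, while price.ts does not exist yet
  import { totalPrice } from './price';

  test('sums item prices and applies a 10% discount above 100', () => {
    expect(totalPrice([30, 40])).toBe(70);   // below the threshold: no discount
    expect(totalPrice([60, 60])).toBe(108);  // 120 * 0.9
  });

  // price.ts – written afterwards, just enough to make the test pass
  export function totalPrice(prices: number[]): number {
    const sum = prices.reduce((a, b) => a + b, 0);
    return sum > 100 ? sum * 0.9 : sum;
  }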

Case studies, such as the ones conducted at Microsoft and IBM, showed that applying TDD reduces the number of bugs in the code. While the development time of three Microsoft projects and one IBM project increased by 15-35%, the defect density decreased by 40-90%. The conclusion was that TDD is worth doing. There are other test-first methods for agile teams, such as BDD or specification by example. In these methods, the test cases are understandable to all stakeholders.

Nowadays, shift-left has replaced test-first. It may surprise you that the concept of shift-left testing is also more than 20 years old, published by Larry Smith in 2001. However, shift-left only means starting testing as early as possible. The question is, what does ‘as possible’ mean? Unfortunately, there is no definition for this, yet the question is exceptionally important in the era of DevOps and Agile. Developers can obviously use TDD or BDD, which are test-first methods, so for them shift-left equals test-first. However, TDD and BDD concentrate on units or modules, and unit testing can be executed before system or e2e testing anyway. Hence, the important question is: can e2e testing be test-first, or only shift-left?

Executing e2e testing in a test-first way is difficult. A manual test-first solution is impossible, as the application has to be ready before it can be tested manually. Unfortunately, creating the test automation code entirely before the application exists is not an easy task either, as we would need to know the selectors in advance. Selectors identify the UI objects, so the only way to do this is to plan the selector names before the implementation. In addition, the test automation engineer would have to know all the tricks before the implementation. For example, some UI elements may be covered by others, so the test automation engineer has to modify the test code, such as adding force: true when using Cypress. We can say that test execution automation is not test-first in practice.
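
As an illustration of that last point, the Cypress step below clicks a button through a covering element. The data-testid selector is a hypothetical naming convention agreed on before implementation, not something that can be known for certain in advance.

  // Planned selector: the team agreed on data-testid="submit-order" up front.
  // force: true skips Cypress's visibility/covering checks, which is exactly the
  // kind of implementation detail that is hard to anticipate before the UI exists.
  cy.get('[data-testid="submit-order"]').click({ force: true });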

We know that test automation does not just mean test execution automation; it should involve test design as well. Without test design, you have no chance of finding the potential bugs in a complex system. Let’s try testing our Pizza application. Here I created 30 realistic mutants, i.e., 30 alternatives that are slightly different from the original. Let’s use exploratory testing first. You will probably detect 30-45% of these artificial but realistic bugs – I found only 37%. However, by applying a good test design technique such as action-state testing, you can detect 100% of them, as I did.
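
The article’s mutants are whole variants of the Pizza application, detected through the UI. As a much smaller, purely hypothetical illustration, a single code-level mutant of the earlier totalPrice sketch could look like this: one realistic boundary change that a good test suite should catch.

  // price.ts (original)
  export function totalPrice(prices: number[]): number {
    const sum = prices.reduce((a, b) => a + b, 0);
    return sum > 100 ? sum * 0.9 : sum;   // discount strictly above 100
  }

  // price.mutant.ts – one hypothetical "realistic mutant": a single boundary change,
  // so an order of exactly 100 is now wrongly discounted
  export function totalPriceMutant(prices: number[]): number {
    const sum = prices.reduce((a, b) => a + b, 0);
    return sum >= 100 ? sum * 0.9 : sum;
  }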

Model-based testing (MBT) is a method of test design automation. The test-first approach can be satisfied if the model is made before coding. However, to make such a model, we would need to know everything about the application to be implemented. The reason is that the model in traditional MBT is computer-readable; thus, every action made by the user has to be incorporated into the model, and every necessary output validation has to be included as well. As a consequence, we can make the model in a test-first way, but then the model has to be remade when the application is ready. The modeler does double work, which is inefficient.

Does that mean that the test-first method is not possible for e2e testing? Fortunately not. The requirement for a model to be computer-readable is false. The new generation of MBT is two-phase model-based testing. Here there are two models: the first is a high-level model that is not computer-readable, only human-readable. The computer cannot generate executable test code from this model, but it can generate abstract tests for the tester. This is a huge difference: a human has significant domain knowledge and will know how to execute the tests manually, independently of the implementation.

For example, if the high-level model contains a user action Register, a tester can execute it independently of the implementation. In other words, whatever the implementation looks like, a tester can register. This means that the high-level model can be as abstract as possible, as long as a tester with reasonable domain knowledge can execute the generated test. A high-level model can be textual or graphical, and may or may not contain states, but it should be implementation-independent.
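
To make this tangible, an abstract test generated from such a high-level model might read something like the two steps below. These steps are invented for illustration; the point is only that nothing in them depends on selectors, screens, or any other implementation detail.

  register with a valid email address and a valid password
  check that the newly registered user is logged in and greeted by name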

On the other hand, a computer-readable model has to contain every detail. The problem is that if the model and the implementation differ, the generated test will fail. That’s why a good low-level model can only be made once the implementation is ready.

It’s obvious that the high-level model and the generated abstract tests can be made before the implementation starts; thus, this part is test-first. This is a big advantage, as both the model and the tests can be validated by the whole team. This validation is also a strong form of defect prevention. My personal experience is that when making the model based on the requirements, most of the mistakes, inconsistencies, cases of incompleteness, etc. are revealed. The cause is simple: you cannot make a good model from a bad specification.

When you make a model, you may need to cover several requirements with a single test. This is the point where inconsistent requirements are detected. Note that a perfect specification is only a dream, but high-level modeling makes the specification good enough to implement an application that users love.

Okay, the high-level model is ready – what’s next? From here, the method is no longer test-first, as the application has to be ready. When it is, the application is manually tested with the abstract test cases validated so far.

During the manual test execution, every action – inserting text, clicking, checking, selecting, or anything else – is immediately logged as a test step. For example, when clicking on the Submit button, our tool generates a test step:

When Submit IS clicked

where Submit is the selector of the button and clicked is the action/operation.

The only additional step to be made by the tester is that the necessary output is not just visually validated but also marked for the test automation tool. This means that if the result is the number 100, then this number should be marked, and the test tool logs this validation event. For example, the tester hovers the mouse over the result, which is 100, clicks on that value, and the tool generates a test step such as:
Then Total price IS 100

These steps starting with When or Then are the elements of the low-level model of the second phase, and they are machine-readable; for example, Total price is a selector. From this low-level model, any necessary test code can be generated. The difficult part of this generation is ensuring that the whole test can be executed without any human interaction and passes without any need to debug or fix the generated test. If this is achieved, this non-test-first phase becomes very short: in our experience, it takes only about one-third to one-quarter of the time of the modeling/test-first part.
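
To show what such generated code could look like, here is a hypothetical mapping of the two steps above to Cypress commands. The data-testid selectors and the mapping rules are assumptions made for illustration, not the actual output of any specific tool.

  // When Submit IS clicked
  cy.get('[data-testid="submit"]').click();

  // Then Total price IS 100
  cy.get('[data-testid="total-price"]').should('have.text', '100');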

Conclusion

Shift-left and test-first approaches are close to each other, but test-first is better, though more difficult to apply. If possible, use the test-first approach rather than plain shift-left. Fortunately, the test automation process can be almost entirely test-first. As test automation is a must nowadays, doing it in a test-first way not only reduces the number of bugs in the system but is also cost-effective.

Top comments (1)

AKSHIT PATEL

Hello ✋
I have one query regarding automated testing:

"What are the most common challenges teams face when using automated testing in a CI/CD pipeline?"