I believe that software testing is essential for ensuring the quality and reliability of software. A comprehensive testing strategy, combining different types of tests, is necessary to identify and fix issues before the software is released to users. Testing should be an ongoing process, with tests run regularly throughout the development cycle to catch problems early and keep them from growing into bigger issues later. It should also be an integral part of the development process itself, with developers and testers working together to ensure that the software meets the specified requirements and works as intended.
In this article about testing software, I want to explain what software testing is and elaborate on my philosophy.
What is software testing and why is it important?
Software testing is the process of evaluating a software application or system to determine whether it meets the specified requirements and works as intended. It is an important part of the software development process because it helps to identify bugs, defects, and other issues with the software, and ensures that the final product is of high quality and fit for its intended purpose.
Software can be tested manually and automatically, and the two are not mutually exclusive. Automated testing is not a replacement for manual testing but a useful complement to it. Automated tests can quickly and efficiently identify problems with the software, while manual testing can be used to explore the software and test it in ways that automated tests may not cover.
What is manual testing?
In manual testing, the tester manually performs a series of actions on the software, such as clicking buttons and entering data, to test its functionality and identify bugs or defects.
This is in contrast to automated testing, in which the testing is performed using specialized software tools that automatically execute the test cases. Manual testing is useful for testing the user interface and other aspects of the software that may be difficult to test using automated tools. It is also useful for exploratory testing, in which the tester has the freedom to try out different scenarios and test the software in ways that may not be covered by the pre-defined test cases.
A lot of companies employ dedicated QA (Quality Assurance or Quality Assistance) experts who help with manual testing as well as with automated testing.
What is automated testing?
Automated testing is a way of testing software using specialized tools that can run tests automatically, without human intervention. These tools can run tests quickly and accurately, and they can be used to test parts of the software that are difficult to test manually. Automated testing is useful because it can help to identify problems with the software and ensure that it works as intended. Some of its main advantages:
- Automated tests can be run quickly and repeatedly, which is useful for performing regression testing to ensure that changes to the software haven't introduced new bugs.
- Automated tests can be run without human intervention, which is useful for running tests overnight or in continuous integration environments.
- Automated tests are (or should be) consistent and accurate, since they are performed using the same set of instructions every time.
- Automated tests can cover a larger number of test cases, since they don't rely on a human tester to manually perform each test.
- Automated tests can be used to test parts of the software that are difficult or impossible to test manually, such as performance or security.
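As a minimal sketch of what such an automated test can look like, here is a small pytest example in Python; the `slugify` function is a hypothetical stand-in for real production code. Once written, a test like this can be re-run on every commit, for example by a CI server, without any human involvement:

```python
# test_slugify.py -- run with: pytest test_slugify.py
# slugify() is a toy stand-in for real production code.

def slugify(title: str) -> str:
    """Turn a title into a lowercase, dash-separated URL slug."""
    return "-".join(title.lower().split())

def test_slugify_replaces_spaces_with_dashes():
    assert slugify("Hello World") == "hello-world"

def test_slugify_lowercases_the_title():
    assert slugify("Testing Software") == "testing-software"
```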
There are many different types of testing, but some common ones include:
- Unit testing: This involves testing individual components or units of the software to ensure they function as intended.
- Integration testing: This involves testing the integration of different components or units of the software to ensure they work together as expected.
- System / End-to-end testing: This involves testing the entire system as a whole to ensure it meets the specified requirements and works as intended.
- Regression testing: This involves testing the software after making changes to ensure that the changes have not introduced new bugs or defects.
- Acceptance testing: This involves testing the software from the end user's perspective to ensure that it is usable and meets their needs.
- Performance testing: This involves testing the software to ensure it performs well under expected workloads and doesn't crash or slow down.
- Security testing: This involves testing the software to ensure it is secure and protects sensitive data from unauthorized access.
- Smoke testing: This involves a quick check that the most important functions of the software are working properly, typically in a live environment after a deployment.
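To make the last item a bit more concrete, here is a hedged sketch of a smoke test that simply checks whether a deployed service answers on a health endpoint. The base URL and the `/health` path are assumptions for illustration, not part of any real system:

```python
# smoke_test.py -- a quick post-deployment check, run with: pytest smoke_test.py
# The base URL and the /health endpoint are hypothetical placeholders.
import urllib.request

BASE_URL = "https://staging.example.com"  # assumption: replace with your own environment

def test_health_endpoint_responds_with_200():
    # If this fails, there is little point in running the rest of the suite.
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as response:
        assert response.status == 200
```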
The test pyramid
The test pyramid is a concept in software testing that suggests that the majority of tests should be unit tests, which are the lowest-level tests, followed by a smaller number of integration tests, and even fewer high-level end-to-end tests. This concept is called the "test pyramid" because it is visualized as a pyramid, with the unit tests forming the base and the end-to-end tests forming the tip of the pyramid.
The idea behind the test pyramid is that unit tests are fast and cheap to write and maintain, and they provide good coverage of the code. Integration tests are a bit more expensive to write and maintain, but they test the integration of different components or units of the software, which is important for ensuring that the software works as intended. End-to-end tests, on the other hand, are the most expensive to write and maintain, but they provide the highest level of confidence that the software works as intended from the end user's perspective. The test pyramid is a general guideline, and the specific mix of tests will depend on the specific software being developed and the goals of the testing.
There are other visual models representing the distribution of tests in a system, such as the test cone, the test iceberg, the test ladder and the test mosaic. If you are interested, you can research them further.
A strict test pyramid distribution is not always the right way to go. The specific approach to testing will depend on the software being developed and the goals of the testing. The test pyramid is a useful starting point, but it may not be the best approach for every situation.
The danger of tests
Unit tests are testing individual components (units) of the software to ensure that they function as intended.
A unit is the smallest testable part of the software, such as a function, and a unit test is a test that exercises that unit and verifies its behavior. The idea behind unit testing is to test each unit of the software in isolation, without the need to set up a complex test environment or coordinate the execution of multiple tests. This makes unit tests fast and easy to write and maintain, and they can provide good coverage of the code.
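For example, a unit test for a small, pure function might look like this; the `apply_discount` function is a hypothetical example:

```python
# A hypothetical pure function and two pytest-style unit tests for it.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the price in cents after subtracting a percentage discount."""
    return price_cents * (100 - percent) // 100

def test_ten_percent_discount():
    assert apply_discount(10_000, 10) == 9_000

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(10_000, 0) == 10_000
```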
Integration tests are testing the integration of different components or units of the software to ensure that they work together as expected.
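A sketch of an integration-style test, under the assumption of a hypothetical `UserRepository` class: instead of mocking the database away, the repository is exercised against a real (in-memory) SQLite database, so the test verifies that the SQL and the surrounding code actually work together:

```python
# An integration-style test: the repository talks to a real in-memory SQLite
# database instead of a mock. UserRepository is a hypothetical example class.
import sqlite3

class UserRepository:
    def __init__(self, connection: sqlite3.Connection):
        self.connection = connection
        self.connection.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name: str) -> int:
        cursor = self.connection.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cursor.lastrowid

    def find_name(self, user_id: int):
        row = self.connection.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

def test_added_users_can_be_read_back():
    repository = UserRepository(sqlite3.connect(":memory:"))
    user_id = repository.add("Ada")
    assert repository.find_name(user_id) == "Ada"
```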
The problem with the test pyramid is that a lot of unit tests in the real world actually test neither behavior nor integration. I am sure you have encountered mock-heavy test suites that basically check whether a method calls three other functions, but nothing else. They guarantee neither the actual business logic nor the actual integration with other parts of the application.
While these tests can still be helpful, they also have some disadvantages, including:
- They do not actually test any behavior. There is no real business value in testing that function A calls function B.
- They can be brittle, meaning that they break easily when the software is changed. Like integration tests, mock-heavy unit tests often test the interaction between multiple components, and any change to the interface of one of those components or to the structure of the program can affect the behavior of the test.
- They may not provide complete coverage of the software being tested. They may not exercise the individual components or units of the software in the way that real unit and integration tests do.
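Here is a hedged sketch of what such a mock-heavy test often looks like; `OrderService` and its collaborators are hypothetical. The test pins down which methods get called, but it says nothing about whether the computed result is correct, and it breaks as soon as the internal call structure changes:

```python
# A mock-heavy test that only verifies call wiring, not behavior.
# OrderService and its collaborators are hypothetical examples.
from unittest.mock import Mock

class OrderService:
    def __init__(self, calculator, email_sender):
        self.calculator = calculator
        self.email_sender = email_sender

    def place_order(self, order):
        total = self.calculator.total(order)
        self.email_sender.send_confirmation(order, total)
        return total

def test_place_order_calls_its_collaborators():
    calculator = Mock()
    email_sender = Mock()
    service = OrderService(calculator, email_sender)

    service.place_order({"items": []})

    # These assertions only prove that A called B -- the computed total
    # could be completely wrong and this test would still pass.
    calculator.total.assert_called_once()
    email_sender.send_confirmation.assert_called_once()
```

Renaming `total` or inlining the email call would break this test even though the observable behavior of the service is unchanged.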
Overall, unit and integration tests are an important part of the software testing process, but they should be used in combination with other types of testing, such as end-to-end tests, to provide the most comprehensive coverage of the software being tested. In addition, mocks should be the last resort, not the first tool in the box. Often it is a good idea to rethink where the unit boundary lies and opt for more integration tests rather than testing every single method with a lot of mocks.
100% Code Coverage
Code coverage is a measure of how much of your source code is executed when you run your tests. It helps you determine how thorough your tests are, and can help you identify areas of your code that are not being exercised by your tests. This is important because untested code is potentially more likely to contain bugs. Often teams try to aim for 100% code coverage.
Achieving 100% code coverage does not necessarily mean that your code is free of bugs or that it is adequately tested. It just means that your tests are exercising all of the lines of code in your source code. However, it is possible to have tests that cover all lines of code but still do not adequately test the functionality of your code. For example, you might have tests that only call a function with trivial input values, which means that the function is not being tested under more realistic or challenging conditions. In this case, even though you have 100% code coverage, your tests are not providing a sufficient level of assurance that your code is working correctly.
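A small, hypothetical example of this effect: the single test below executes every line of `average`, so a coverage tool reports 100% for it, yet nothing about edge cases such as an empty list is ever verified:

```python
# Hypothetical example: one trivial test gives 100% line coverage of average(),
# but says almost nothing about whether the function is adequately tested.

def average(numbers):
    # Edge case never exercised: an empty list raises ZeroDivisionError.
    return sum(numbers) / len(numbers)

def test_average():
    # Executes every line of average(), so coverage reports 100%,
    # yet empty lists, negative numbers and floats are never checked.
    assert average([2, 2]) == 2
```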
Top-Down and Bottom-Up testing
Top-down testing and bottom-up testing are both software testing approaches that are used to test the functionality of a system.
In top-down testing, tests are written at the highest level of the system and then gradually move down to lower levels. This approach starts by testing the overall functionality of the system and then drills down into the individual components and their interactions. The goal of top-down testing is to test the system from the top level and ensure that all of the components work together as intended.
In bottom-up testing, tests are written at the lowest level of the system first, and then gradually move up to higher levels. This approach starts by testing the individual components of the system and then moves up to test the interactions between these components. The goal of bottom-up testing is to build confidence from the ground up: each component is verified on its own before the larger parts of the system that depend on it are tested.
Both top-down and bottom-up testing have their own strengths and weaknesses. Top-down testing can be useful for identifying defects in the overall behavior of the system early on, but it may not exercise the individual components and their edge cases thoroughly. Bottom-up testing, on the other hand, can be effective for identifying defects in the individual components and their immediate interactions, but it may not adequately test the overall functionality of the system. It is often best to use a combination of both approaches in order to get the most thorough and effective testing.
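As a rough, hypothetical sketch of where each approach starts: with top-down testing you would write the test against the highest-level entry point first, while with bottom-up testing you would start with the smallest building block. Both `checkout` and `item_total` below are made up purely for illustration:

```python
# Hypothetical illustration of where each approach starts writing tests.

def item_total(price_cents: int, quantity: int) -> int:
    return price_cents * quantity

def checkout(cart: list) -> int:
    # Highest-level entry point: composes the lower-level pieces.
    return sum(item_total(item["price_cents"], item["quantity"]) for item in cart)

def test_checkout_totals_the_whole_cart():
    # Top-down: start with the overall behavior of the system.
    cart = [{"price_cents": 500, "quantity": 2}, {"price_cents": 300, "quantity": 1}]
    assert checkout(cart) == 1300

def test_item_total_multiplies_price_by_quantity():
    # Bottom-up: start with the smallest building block.
    assert item_total(500, 2) == 1000
```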
TDD - Test-Driven Development
Test-driven development (TDD) is an approach in which tests are written for a piece of code before the code itself is written. The tests initially fail, and the code is then written to make them pass. This approach helps to ensure that the code is written to satisfy the requirements of the tests, and that it is properly tested and working as expected.
In TDD, the development process typically follows these steps:
- Write a test that defines a piece of functionality that you want to add to your code.
- Run the test and confirm that it fails, because the functionality does not yet exist in the code.
- Write the code to make the test pass.
- Run the test again and confirm that it passes.
- Refactor the code to improve its design and maintainability, without changing its functionality.
- Repeat these steps for each piece of functionality that you want to add to your code.
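A hedged sketch of one iteration of this cycle, using a hypothetical `fizzbuzz` function as the piece of functionality being added:

```python
# Step 1 (red): write the test first -- it fails because fizzbuzz() does not exist yet.
def test_multiples_of_three_return_fizz():
    assert fizzbuzz(3) == "Fizz"

def test_other_numbers_are_returned_as_strings():
    assert fizzbuzz(4) == "4"

# Step 2 (green): write just enough code to make both tests pass.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Step 3 (refactor): clean up the implementation, re-running the tests after
# every change to confirm the behavior stays the same.
```

In a real project the tests and the implementation would live in separate files; they are shown together here only to make the order of the steps visible.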
TDD can help to ensure that your code is well-designed, properly tested, and free of defects. It can also help to make your code more modular and easier to maintain.
One common criticism of TDD is that it can be time-consuming and may not always be the most efficient way to develop software. Writing tests for every piece of code that you write can take extra time and effort, and it may not always be clear what tests should be written or how to write effective tests. This can lead to tests that are too broad or too specific, which can make the development process more difficult and less efficient.
Another criticism of TDD is that it can lead to over-testing, where too many tests are written for a given piece of code. This can make the test suite unnecessarily large and complex, and can make it more difficult to maintain and update the tests as the codebase changes over time.
Additionally, some critics argue that TDD can lead to a focus on testing at the expense of other important aspects of software development, such as design and architecture. This can result in code that is well-tested but not well-designed, which can make the code more difficult to maintain and evolve over time.
Overall, while TDD can be a useful tool for ensuring that code is properly tested and working as expected, it is important to use it in the right circumstances and to balance it with other important considerations in software development.