What software testing is and is NOT
- Testing is executing a program with the intent of finding as many defects as possible and/or gaining sufficient confidence in the software system under test.
- Testing is not cut-and-fit improvement of a user interface or requirements elicitation by prototyping.
- Testing is not the verification of an analysis or design model by syntax checkers or simulation.
- Testing is not the scrutiny of documentation or code by humans in inspections, reviews, or walkthroughs.
- Testing is not static analysis of code using a language translator or code checker.
- Testing is not the use of dynamic analyzers to identify memory leaks or similar problems.
- Testing is not debugging, although successful testing should lead to debugging.
Some Definitions
Error
The root cause. Errors are mistakes committed by people.
Fault
A fault is the result of a human error in the software documentation, code, etc.
Failure
A failure occurs when a fault executes.
Incident
Consequences of failures; failure occurrence may or may not be apparent to the user.
Defect
Any of the above, usually a fault, aka bug.
The fundamental chain of SW dependability threats:
Error----causes---->Fault----causes (when executed)---->Failure----results in---->Incident
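A minimal sketch of the chain in Python; the function and its specification are hypothetical, for illustration only:

```python
# Specification (assumed for illustration): a person is an adult at 18 or older.

def is_adult(age):
    # ERROR: the programmer thinks "older than 18" instead of "18 or older".
    # FAULT: the error materializes in the code as '>' instead of '>='.
    return age > 18

# FAILURE: the fault executes on the boundary input and gives a wrong result.
print(is_adult(18))  # False, but the specification expects True
# INCIDENT: an 18-year-old is wrongly rejected; the user may or may not notice.
```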
What is a Test Case and Test Suite (Test Set)?
- A test case is a set of inputs and the expected outputs for a unit/module/system under test.
- A test suite (test set) is a set of test cases (at least one).
- Without the expected outputs, a test case is not complete.
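A minimal sketch in Python, reusing the hypothetical is_adult example (here in corrected form): a test case pairs inputs with expected outputs, and a test suite collects such cases.

```python
def is_adult(age):
    # Hypothetical function under test (corrected version).
    return age >= 18

# A test case = (input, expected output); a test suite = a set of test cases.
test_suite = [(17, False), (18, True), (40, True)]

for age, expected in test_suite:
    actual = is_adult(age)
    print(f"is_adult({age}) -> {actual}, expected {expected}:",
          "PASS" if actual == expected else "FAIL")
```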
Types of Test Activities
Exploratory (Human Based)
Design test values based on domain knowledge of the program and human knowledge of testing.
Criteria-based
Design test values to satisfy coverage criteria, e.g., covering all lines of code.
Test Automation
Embed test values into executable scripts.
Test Execution
Run the tests on the software and record the results.
Test Evaluation
Evaluate the results of testing and report them to developers (see the sketch below).
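A minimal sketch of automation, execution, and evaluation using Python's standard unittest module; is_adult is the hypothetical example from above:

```python
import unittest

def is_adult(age):
    # Hypothetical function under test.
    return age >= 18

class IsAdultTests(unittest.TestCase):
    # Test automation: test values embedded in an executable script.
    def test_below_boundary(self):
        self.assertFalse(is_adult(17))

    def test_on_boundary(self):
        self.assertTrue(is_adult(18))

if __name__ == "__main__":
    unittest.main()  # test execution: run the tests and report pass/fail
```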
Types of Testing
- Functional testing is testing the functional requirements.
- Non-functional testing is testing the non-functional requirements, such as:
- Performance, Load, Endurance, Localization, Recovery, Security, Stress, etc.
Exhaustive Testing
Testing a software system using all possible inputs; this is usually infeasible.
For example, exhaustive testing of a compiler would mean compiling all possible source programs, which is not possible.
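A back-of-the-envelope sketch of why, assuming a hypothetical function that takes just two 32-bit integers:

```python
# Input space of a function taking just two 32-bit integers.
inputs_per_argument = 2 ** 32
total_combinations = inputs_per_argument ** 2   # 2 ** 64
print(total_combinations)                       # 18446744073709551616 (~1.8e19)

seconds = total_combinations / 10 ** 9          # at a billion tests per second
print(seconds / (3600 * 24 * 365))              # ~585 years
```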
When can we say testing is enough?
A test-adequacy criterion leads to test obligations. Test coverage is the percentage of obligations that are met.
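A tiny sketch with illustrative numbers (the obligation names are made up):

```python
# Illustrative: 3 of 4 test obligations met under some adequacy criterion.
obligations_met = {"branch_1": True, "branch_2": True,
                   "branch_3": False, "branch_4": True}
coverage = 100 * sum(obligations_met.values()) / len(obligations_met)
print(f"coverage = {coverage:.0f}%")  # coverage = 75%
```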
Functional tests are black-box; structural tests are white-box.
- Specification -> Black-Box Testing
- Implementation -> White (glass)-Box Testing
Missing functionality cannot be revealed by white-box techniques -> visible only in the specification.
Unexpected functionality cannot be revealed by black-box techniques -> visible only in the implementation.
Testing throughout the SDLC - in Waterfall
- SDLC: Software Development Life Cycle
- Many of the life-cycle development artifacts provide a rich source of test data.
- Identifying test requirements and test cases early helps shorten the development time.
- They may help reveal faults.
- It may also help identify specifications or designs with low testability early.
| Analysis | Design | Implementation | Testing |
| --- | --- | --- | --- |
| Preparation for test | Preparation for test | Unit testing | Acceptance and system testing |
Black-Box Testing
- Based on a program's specification, as opposed to its structure (code, interfaces).
- The notion of complete coverage can also be applied to functional (black-box) testing.
- Rigorous specifications have another benefit: they help black-box testing, e.g., categorizing inputs and deriving expected outputs.
- In other words, they help with test case generation and test oracles.
Different Techniques
Equivalence Class Testing
We would like to have a sense of complete testing and we would hope to avoid test redundancy.
- Equivalence classes are partitions of the input set in which all input data have the same effect on the program (e.g., they result in the same output).
- The entire input set is covered in testing: completeness.
- Classes are disjoint, to avoid redundancy.
- Test cases take one element from each equivalence class (see the sketch after this list).
- Guessing the likely system behaviour might be needed.
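A minimal sketch, assuming a hypothetical grade function that passes scores of 50 or above on a 0..100 domain:

```python
def grade(score):
    # Hypothetical function under test: pass at 50 or above, score in 0..100.
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Disjoint classes that together cover the whole input domain,
# tested through one representative element each.
equivalence_classes = [
    (-10, ValueError),  # invalid: below 0
    (25, "fail"),       # valid: 0..49
    (75, "pass"),       # valid: 50..100
    (150, ValueError),  # invalid: above 100
]

for representative, expected in equivalence_classes:
    try:
        actual = grade(representative)
    except ValueError:
        actual = ValueError
    print(representative, "PASS" if actual == expected else "FAIL")
```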
Boundary-Value Analysis
In equivalence class testing, we partitioned input domains into equivalence classes, on the assumption that the behavior of the program is similar for all input values of an equivalence class.
This assumption may not be true in all cases: some typical programming errors happen to sit exactly at the boundaries between different equivalence classes.
This is what boundary value testing focuses on; it is simpler than, but complementary to, equivalence class testing.
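Continuing the hypothetical grade function from the previous sketch, boundary-value analysis picks the value at each boundary plus its immediate neighbours, where off-by-one faults (e.g., > vs. >=) tend to hide:

```python
# Boundary-value cases for the hypothetical grade function: the value at
# each boundary plus its immediate neighbours on either side.
boundary_cases = [
    (-1, ValueError), (0, "fail"), (1, "fail"),      # lower domain boundary
    (49, "fail"), (50, "pass"), (51, "pass"),        # pass/fail boundary
    (99, "pass"), (100, "pass"), (101, ValueError),  # upper domain boundary
]
```

These cases can be run through the same checking loop as in the equivalence class sketch.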
Category-Partition Testing
- The system is divided into individual functions that can be independently tested.
- The method identifies the parameters of each function and, for each parameter, identifies distinct categories.
- Besides parameters, environment characteristics, under which the function operates (characteristics of the system state), can also be considered.
- Categories are major properties or characteristics of each parameter.
- The categories are further subdivided into choices in the same way as equivalence partitioning is applied (possible “values”).
- A broader idea compared to Equivalence-Class Partitioning.
- In essence: Category-partition testing integrates equivalence class testing, boundary analysis, and adds a few refinements.
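A sketch of the method's first artifacts for a hypothetical find(pattern, file) command; the categories and choices are made up for illustration:

```python
from itertools import product

# Hypothetical find(pattern, file) command: categories per parameter and
# per environment characteristic, each subdivided into choices.
categories = {
    "pattern length":      ["empty", "single character", "many characters"],
    "pattern quoting":     ["unquoted", "quoted"],          # parameter
    "occurrences in file": ["none", "exactly one", "many"], # environment
}

# Each combination of one choice per category is a candidate test frame
# (in practice, constraints prune the invalid combinations).
test_frames = list(product(*categories.values()))
print(len(test_frames), "candidate test frames")  # 3 * 2 * 3 = 18
```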
Decision Tables
- To help express test requirements in a directly usable form.
- Easy to understand and support the systematic derivation of test cases
- To support automated or manual generation of test cases.
- A particular response or response subset is to be selected for testing by evaluating many related conditions.
- Ideal for describing situations in which a number of combinations of actions are taken under varying sets of conditions, e.g., control systems
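A sketch of deriving test cases from a decision table, using a hypothetical loan-approval rule; each column (combination of condition values) becomes one test case:

```python
# Hypothetical decision table for a loan-approval rule.
#   good_credit:        T     T     F     F
#   sufficient_income:  T     F     T     F
#   approve loan:       T     F     F     F
decision_table = [
    (True, True, True),
    (True, False, False),
    (False, True, False),
    (False, False, False),
]

def approve_loan(good_credit, sufficient_income):
    # Hypothetical implementation under test.
    return good_credit and sufficient_income

for good_credit, sufficient_income, expected in decision_table:
    assert approve_loan(good_credit, sufficient_income) == expected
print("all decision-table test cases pass")
```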
Cause-Effect Graphs
- Graphical technique that helps derive decision tables
- Aims at supporting interaction with domain experts and the reverse engineering of specifications, for the purpose of testing.
- Identify causes (conditions on inputs, stimuli) and effects (outputs, changes in system state).
- Causes have to be stated in such a way to be either true or false.
- Can be used to explicitly specify constraints on causes and effects.
- They can help select a more “significant” subset of input/output combinations and build smaller decision tables.
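As a sketch, the causes and effects for the hypothetical loan example above can be stated as Boolean variables, with the graph's relation as pure logic:

```python
# Causes (conditions on inputs) and effects (outputs) for the hypothetical
# loan example, each stated so that it is either true or false.
causes = {"c1": "applicant has good credit",
          "c2": "applicant has sufficient income"}
effects = {"e1": "loan is approved"}

def e1(c1, c2):
    # The graph's logical relation: e1 = c1 AND c2.
    return c1 and c2

# A masking observation that shrinks the decision table: once c1 is False,
# e1 is False regardless of c2, so c1=False needs only one column.
print(e1(False, True), e1(False, False))  # False False
```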