Automated testing is a critical component of modern software development. However, its maintenance cost is often high, making it difficult to achieve high automated test coverage in projects with limited resources. Naturally, some teams consider using visual low-code platforms to simplify the writing and maintenance of test cases. But the fundamental challenge in automated test maintenance is not visualization; it is the fragility of the test cases themselves.
Generally, the test cases we write adopt an external perspective: provide input, call a function, and then check the return result. However, business functions are rarely pure functions; their execution inevitably involves numerous side effects, such as reading and writing the database, concurrent access, generating random numbers, and so on. As a result, calling the same function with the same input parameters may yield non-deterministic results. For example, after a transfer operation executes, the account balance decreases, so executing the same transfer again might fail. To overcome this non-determinism, we are forced to write extensive data initialization code by hand and to express result checks as a form of fuzzy matching. Because this process is verbose, it is rarely carried out rigorously. When dirty data exists or data structures change frequently, test cases written from this external perspective turn out to be particularly fragile.

This article introduces NopAutoTest, the automated testing framework used in the Nop platform. It is a backend application testing framework fully integrated and co-designed with the Nop platform. It leverages the model information available throughout the platform and effectively mitigates test fragility through a combination of recording and playback, data-driven testing, and model transformation. For the implementation, see the nop-auto-test and nop-match modules.
I. What Is Special About Low-Code Platforms
What is fundamentally special about a low-code platform is not the visual operation interface it provides, but its internal, model-based logical structure. If a low-code platform is sufficiently model-based, then all side effects become observable.
Data = Input + Output + Side Effects
If the observed side-effect data is added to the input and output dataset, we obtain the complete information set of the system. This eliminates all unknown side effects, restores the test case to a pure function with complete determinism, and overcomes the fragility caused by incomplete information.
In application development, common side effects include the following:
- Database Read/Write: Application system behavior can vary greatly based on different data states in the database. Furthermore, beyond the result data returned by the interface, the impact of business operations on the entire system may be more reflected in modifications to core business data in the database. To verify correct business implementation, testers generally need to execute database verification scripts to confirm that the data state in the database meets integrity requirements, in addition to checking interface result data.
- Random Numbers and Time: Program code inevitably uses the system clock to record the current operation time and needs to randomly generate variable data like card numbers, order numbers, primary key IDs, etc., that are non-repeating for each execution.
- Asynchronous Processing: After a business interface returns a result, some asynchronous tasks might still be executing. The final system state may only stabilize after a period of time.
- Cache Read/Write: Cache read/write is similar to database read/write. However, caches are often optional components from a business perspective. When verifying core business correctness, consider disabling the cache or actively clearing it.
- External Infrastructure and Environment: Normal system operation might depend on external infrastructure, such as service registries, message queues, third-party services, etc., and may also have certain requirements for external network environment configuration.
In the Nop platform, all business objects are managed by the dependency injection container, and all operations with side effects are provided through modeled engines or service interfaces. On this basis, the platform provides the following technical support for automated testing:
- Recording and Playback of Database Read/Write: The NopOrm data access engine is a full ORM engine similar to Hibernate. It can record all database records read and modified by the application. When a test case is executed for the first time, it records the database read/write data. After enabling snapshot execution mode, the testing framework uses the recorded data to create database tables and insert initial data records. Simultaneously, after the test case runs, it automatically verifies that the database modifications are consistent with the recorded changes.
  > Even batch updates executed via EQL statements such as `update xx set yy=zz` can be parsed by the ORM engine to obtain the data records before and after modification.

- Variable Annotation for Random Numbers and Time: Data that changes with each execution, such as random numbers and timestamps, can be marked as `AutoTestVariable`. The testing framework tracks how these variables propagate. Specifically, based on the primary/foreign key associations defined in the ORM model, it can automatically identify all foreign key fields that reference variable data. In the final data verification step, the check becomes a variable match: `checkMatch("@var:NopAuthSession@sessionId", visitLog.sessionId)`.
- Waiting for Asynchronous Processing to Complete: The testing framework can monitor the execution status of asynchronous processing queues and wait until all in-flight processing has completed, thereby obtaining deterministic results that satisfy eventual consistency: `waitUntil(() -> taskService.isAllProcessed(), 1000);`
- Integrated Docker Environment: Via the testcontainers library, the testing framework can run required infrastructure inside a Docker environment (a minimal sketch follows this list).
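As an illustration of the Docker integration point, here is a minimal, self-contained sketch of starting infrastructure with the testcontainers library. This is generic testcontainers usage, not NopAutoTest's own integration code, and the Redis image is just a placeholder:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class RedisContainerSketch {
    public static void main(String[] args) {
        // Start a throwaway Redis instance for the duration of the test run.
        try (GenericContainer<?> redis =
                     new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                             .withExposedPorts(6379)) {
            redis.start();
            // The test would point its cache configuration at this dynamically mapped host/port.
            System.out.println("redis available at " + redis.getHost() + ":" + redis.getMappedPort(6379));
        }
    }
}
```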
II. Data-Driven Testing
The NopAutoTest framework is a data-driven testing framework. This means that generally, we don't need to write any code to prepare input data or verify output results. We only need to write a skeleton function and provide a set of test data files. Let's look at a specific example:
```
nop-auth/nop-auth-service/src/test/io/nop/auth/service/TestLoginApi.java
nop-auth/nop-auth-service/cases/io/nop/auth/service/TestLoginApi
```
```java
class TestLoginApi extends JunitAutoTestCase {
    // @EnableSnapshot
    @Test
    public void testLogin() {
        LoginApi loginApi = buildLoginApi();

        //ApiRequest<LoginRequest> request = request("request.json5", LoginRequest.class);
        ApiRequest<LoginRequest> request = input("request.json5",
                new TypeReference<ApiRequest<LoginRequest>>() {}.getType());
        ApiResponse<LoginResult> result = loginApi.login(request);
        output("response.json5", result);
    }
}
```
The test case inherits from the JunitAutoTestCase class and uses input(fileName, javaType) to read external data files, casting the data to the type specified by javaType. The specific data format is determined by the file extension, which can be json/json5/yaml, etc.
After calling the function under test, the result data is saved to an external data file using output(fileName, result), instead of writing result verification code.
2.1 Recording Mode
When testLogin executes in recording mode, it generates the following data files:
```
TestLoginApi
  /input
    /tables
      nop_auth_user.csv
      nop_auth_user_role.csv
    request.json5
  /output
    /tables
      nop_auth_session.csv
    response.json5
```
The /input/tables directory records all database records that were read, with one CSV file per table.
Even if no data was read for a table, a corresponding empty file is generated, because in verification mode the table names recorded here determine which tables need to be created in the test database.
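The request.json5 file simply holds the request payload read by the input call. Its concrete content is not shown in this article; purely for illustration, and assuming the ApiRequest wrapper carries its payload under data (as the later multi-step example does), it might look roughly like this, with the field names invented for the sketch:

```json5
// Hypothetical shape of input/request.json5; userName/password are assumed fields,
// not taken from the actual LoginRequest definition.
{
  data: {
    userName: "auto_test1",
    password: "test-password"
  }
}
```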
If we open the response.json5 file, we can see content like this:
```json5
{
  "data": {
    "accessToken": "@var:accessToken",
    "attrs": null,
    "expiresIn": 600,
    "refreshExpiresIn": 0,
    "refreshToken": "@var:refreshToken",
    "scope": null,
    "tokenType": "bearer",
    "userInfo": {
      "attrs": null,
      "locale": "zh-CN",
      "roles": [],
      "tenantId": null,
      "timeZone": null,
      "userName": "auto_test1",
      "userNick": "autoTestNick"
    }
  },
  "httpStatus": 0,
  "status": 0
}
```
Note that accessToken and refreshToken have been automatically replaced with variable matching expressions. This process requires no manual intervention from the programmer.
As for the recorded nop_auth_session.csv, its content is as follows:
```csv
_chgType,SID,USER_ID,LOGIN_ADDR,LOGIN_DEVICE,LOGIN_APP,LOGIN_OS,LOGIN_TIME,LOGIN_TYPE,LOGOUT_TIME,LOGOUT_TYPE,LOGIN_STATUS,LAST_ACCESS_TIME,VERSION,CREATED_BY,CREATE_TIME,UPDATED_BY,UPDATE_TIME,REMARK
A,@var:NopAuthSession@sid,067e0f1a03cf4ae28f71b606de700716,,,,,@var:NopAuthSession@loginTime,1,,,,,0,autotest-ref,*,autotest-ref,*,
```
The first column _chgType indicates the data change type: A-Added, U-Updated, D-Deleted. The randomly generated primary key has been replaced with the variable matching expression @var:NopAuthSession@sid. Also, based on the information provided by the ORM model, the createTime and updateTime fields are bookkeeping fields and do not participate in data matching verification. Therefore, they are replaced with *, indicating a match with any value.
2.2 Verification Mode
After the testLogin function executes successfully, we can enable the @EnableSnapshot annotation to switch the test case from recording mode to verification mode.
In verification mode, the test case performs the following operations during the setUp phase:
- Adjust configurations such as `jdbcUrl` to force the use of a local in-memory database (H2).
- Load the `input/init_vars.json5` file to initialize the variable environment (optional; a hypothetical sketch follows this list).
- Collect the table names corresponding to the files in the `input/tables` and `output/tables` directories, generate the corresponding DDL statements based on the ORM model, and execute them.
- Execute all `xxx.sql` script files in the `input` directory to perform custom initialization on the newly created database (optional).
- Insert the data from the `input/tables` directory into the database.
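Purely for illustration, an init_vars.json5 file could be a simple map from variable names to values, along these lines; the variable name v_existingUserId is invented here, and the actual file schema is defined by the framework:

```json5
// Hypothetical input/init_vars.json5: pre-seeds the variable environment so that
// data files can reference @var:v_existingUserId before anything has been recorded.
{
  v_existingUserId: "067e0f1a03cf4ae28f71b606de700716"
}
```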
During test case execution, if the output function is called, it compares the output JSON object with the recorded data pattern file based on the MatchPattern mechanism. The specific comparison rules are introduced in the next section.
If expecting the test function to throw an exception, use the error(fileName, runnable) function:
```java
@Test
public void testXXXThrowException() {
    error("response-error.json5", () -> xxx());
}
```
During the teardown phase, the test case automatically performs the following operations:
- Compare the data changes recorded in `output/tables` with the current state of the database to determine whether they match.
- Execute the verification SQL defined in the `sql_check.yaml` file and compare the result with the expected value (optional; a hypothetical sketch follows this list).
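The exact schema of sql_check.yaml is defined by the framework; purely as a hypothetical sketch of the idea (a verification SQL paired with its expected result), an entry might look like this:

```yaml
# Hypothetical sketch of a sql_check.yaml entry; the field names are illustrative only.
- name: sessionCreatedForUser
  sql: select count(*) from nop_auth_session where user_id = '067e0f1a03cf4ae28f71b606de700716'
  expected: 1
```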
2.3 Test Updates
If the code is modified later and the test case's return results change, we can temporarily set the saveOutput property to true to update the recorded results in the output directory.
```java
@EnableSnapshot(saveOutput = true)
@Test
public void testLogin() {
    ....
}
```
III. Object Pattern Matching Based on Prefix-Guided Syntax
In the previous section, the match template files contained only fixed values and variable expressions such as @var:xx. The variable expressions use the so-called prefix-guided syntax (for a detailed introduction, see my article DSL Layered Syntax Design and Prefix-Guided Syntax), an extensible Domain-Specific Language (DSL) design. First, the @var: prefix can be extended to more cases, e.g., @ge:3 means greater than or equal to 3. Second, it is an open design: more syntax support can be added at any time without creating conflicts between prefixes. Third, it is a localized, embedded syntax design: the String-to-DSL transformation can upgrade any string into an executable expression, for example to express field matching conditions in CSV files.

Let's look at a more complex matching configuration:
```json5
{
  "a": "@ge:3",
  "b": {
    "@prefix": "and",
    "patterns": [
      "@startsWith:a",
      "@endsWith:d"
    ]
  },
  "c": {
    "@prefix": "or",
    "patterns": [
      {
        "a": 1
      },
      [
        "@var:x",
        "s"
      ]
    ]
  },
  "d": "@between:1,5"
}
```
This example introduces complex and/or matching conditions via @prefix. Similarly, we can introduce conditional branches like if, switch, etc.
```json5
{
  "@prefix": "if",
  "testExpr": "matchState.value.type == 'a'",
  "true": {...},
  "false": {...}
}
```

```json5
{
  "@prefix": "switch",
  "chooseExpr": "matchState.value.type",
  "cases": {
    "a": {...},
    "b": {...}
  },
  "default": {...}
}
```
Here, testExpr is an XLang expression, where matchState corresponds to the current matching context object, and value can be used to get the data node currently being matched. Depending on the return value, it will choose to match the true or false branch.
Here, "@prefix" corresponds to the explode mode of the prefix-guided syntax, which expands the DSL into a JSON-formatted abstract syntax tree. If direct JSON embedding is not allowed due to data structure constraints, such as when used in CSV files, we can still use the standard form of the prefix-guided syntax.
```
@if:{testExpr:'xx',true:{...},false:{...}}
```
Simply encode the parameters corresponding to if into a JSON string and prefix it with @if:.
Prefix-guided syntax is very flexible and does not require the formats of different prefixes to be completely uniform. For example, @between:1,5 means greater than or equal to 1 and less than or equal to 5. The data format following a prefix is recognized only by the parser for that prefix, so we can design an appropriately simplified syntax for each case.
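As a purely conceptual sketch in Java (not the framework's actual parser), the core idea, that each prefix alone decides how to interpret the text after the colon, can be illustrated like this:

```java
// Conceptual sketch only: each prefix interprets its own payload format.
// The real nop-match implementation is far more general; this just shows the dispatch idea.
public class PrefixMatchSketch {
    static boolean matches(String pattern, Object value) {
        if (pattern.startsWith("@ge:")) {
            // @ge: treats its payload as a single number.
            return toDouble(value) >= Double.parseDouble(pattern.substring("@ge:".length()));
        }
        if (pattern.startsWith("@between:")) {
            // The @between: parser alone decides that its payload means "min,max".
            String[] range = pattern.substring("@between:".length()).split(",");
            double v = toDouble(value);
            return v >= Double.parseDouble(range[0]) && v <= Double.parseDouble(range[1]);
        }
        // No recognized prefix: fall back to literal comparison.
        return pattern.equals(String.valueOf(value));
    }

    static double toDouble(Object value) {
        return value instanceof Number ? ((Number) value).doubleValue()
                                       : Double.parseDouble(value.toString());
    }

    public static void main(String[] args) {
        System.out.println(matches("@between:1,5", 3)); // true
        System.out.println(matches("@ge:3", 2));        // false
        System.out.println(matches("abc", "abc"));      // true
    }
}
```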
If only partial fields in an object need to satisfy matching conditions, use the symbol * to ignore other fields.
```json5
{
  "a": 1,
  "*": "*"
}
```
IV. Multi-Step Related Testing
To test multiple related business functions, we need to pass correlation information between them. For example, after logging into the system, obtain an accessToken, then use that accessToken to get detailed user information, perform other business operations, and finally pass the accessToken as a parameter to call logout.
Because a shared AutoTestVars context environment exists, business functions can automatically pass correlation information via AutoTestVariable. For example:
```java
@EnableSnapshot
@Test
public void testLoginLogout() {
    LoginApi loginApi = buildLoginApi();

    ApiRequest<LoginRequest> request = request("1_request.json5", LoginRequest.class);
    ApiResponse<LoginResult> result = loginApi.login(request);
    output("1_response.json5", result);

    ApiRequest<AccessTokenRequest> userRequest = request("2_userRequest.json5", AccessTokenRequest.class);
    ApiResponse<LoginUserInfo> userResponse = loginApi.getLoginUserInfo(userRequest);
    output("2_userResponse.json5", userResponse);

    ApiRequest<RefreshTokenRequest> refreshTokenRequest = request("3_refreshTokenRequest.json5", RefreshTokenRequest.class);
    ApiResponse<LoginResult> refreshTokenResponse = loginApi.refreshToken(refreshTokenRequest);
    output("3_refreshTokenResponse.json5", refreshTokenResponse);

    ApiRequest<LogoutRequest> logoutRequest = request("4_logoutRequest.json5", LogoutRequest.class);
    ApiResponse<Void> logoutResponse = loginApi.logout(logoutRequest);
    output("4_logoutResponse.json5", logoutResponse);
}
```
The content of 2_userRequest.json5 is:
```json5
{
  data: {
    accessToken: "@var:accessToken"
  }
}
```
We can use @var:accessToken to reference the accessToken variable returned from the previous step.
Integration Test Support
In integration testing scenarios where we cannot automatically identify and register AutoTestVariable through the underlying engine, we can register them manually in the test case:
```java
public void testXXX() {
    ....
    response = myMethod(request);
    setVar("v_myValue", response.myValue);
    // Subsequent input files can then reference the variable defined here via @var:v_myValue
    request2 = input("request2.json", Request2.class);
    ...
}
```
In integration testing scenarios, we need to access an externally deployed test database and can no longer use the local in-memory database. In this case, we can configure localDb=false to disable the local database.
```java
@Test
@EnableSnapshot(localDb = false)
public void integrationTest() {
    ...
}
```
EnableSnapshot has various switch controls, allowing flexible selection of which automated testing supports to enable:
```java
public @interface EnableSnapshot {
    /**
     * If the snapshot mechanism is enabled, it will by default force the use of a local database
     * and use recorded data to initialize it.
     */
    boolean localDb() default true;

    /**
     * Whether to automatically execute SQL files in the input directory.
     */
    boolean sqlInit() default true;

    /**
     * Whether to automatically insert data from the input/tables directory into the database.
     */
    boolean tableInit() default true;

    /**
     * Whether to save collected output data to the result directory.
     * When saveOutput=true, the checkOutput setting is ignored.
     */
    boolean saveOutput() default false;

    /**
     * Whether to verify that the recorded output data matches the current data in the database.
     */
    boolean checkOutput() default true;
}
```
V. Data Variants
A significant advantage of data-driven testing is its ease in implementing refined testing for edge scenarios.
Suppose we need to test system behavior after a user account becomes delinquent. We know that depending on the amount of the delinquency and its duration, system behavior might change significantly near certain thresholds. Constructing a complete history of user consumption and settlement is very complex, making it difficult to create a large amount of user data with subtle differences in the database for edge scenario testing. If using a data-driven automated testing framework, we can simply copy existing test data and make fine-tuned adjustments directly on it.
The NopAutoTest framework supports this refined testing through the concept of Data Variants. For example:
```java
@ParameterizedTest
@EnableVariants
@EnableSnapshot
public void testVariants(String variant) {
    input("request.json", ...);
    output("displayName.json5", testInfo.getDisplayName());
}
```
After adding the @EnableVariants and @ParameterizedTest annotations, when we call the input function, it reads data that is the merged result of the data in the /variants/{variant}/input directory and the /input directory.
```
/input
  /tables
    my_table.csv
  request.json
/output
  response.json
/variants
  /x
    /input
      /tables
        my_table.csv
      request.json
    /output
      response.json
  /y
    /input
      ....
```
First, the test case executes ignoring the variants configuration, recording data to the input/tables directory. Then, after enabling the variant mechanism, the test case executes again for each variant.
Taking the testVariants configuration as an example, it will actually be executed three times: the first time variant=_default, meaning it uses the original input/output directory data. The second time executes the data in the variants/x directory, and the third time executes the data in the variants/y directory.
Because the data between different variants is often highly similar, we don't need to fully copy the original data. The NopAutoTest framework adopts the unified design of the Reversible Computation theory here, utilizing the built-in delta merging mechanism of the Nop platform to simplify configuration. For example, in the /variants/x/input/request.json file:
```json5
{
  "x:extends": "../../input/request.json",
  "amount": 300
}
```
x:extends is the standard delta extension syntax introduced by the Reversible Computation theory. It means inheriting from the original request.json, only modifying its amount property to 300.
Similarly, for the data in /input/tables/my_table.csv, we need only include the primary key column and the columns that require customization; its content will then be automatically merged with the corresponding file in the original directory. For example:
```csv
SID, AMOUNT
1001, 300
```
The entire Nop platform is designed and implemented from the ground up based on the principles of Reversible Computation. For its specific content, refer to the reference documents at the end.
Data-driven testing also reflects, to some extent, the reversibility requirement of Reversible Computation: information that has been expressed through DSLs (JSON data and matching templates) can be extracted back out and transformed into other forms. For example, when data structures or interfaces change, we can write a unified data migration script to migrate test case data to the new structure, without having to re-record the test cases.
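To make the migration idea concrete, here is a minimal sketch of such a script, assuming the recorded files are close enough to JSON for Jackson's lenient parser and that a field userNick is being renamed to nickName. The field names, directory path, and use of Jackson are illustrative assumptions, not part of the Nop platform's own tooling (which would more likely operate at the model level):

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MigrateTestData {
    // Hypothetical migration: rename "userNick" to "nickName" in every recorded
    // *.json5 data file under a test case directory.
    public static void main(String[] args) throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        // Lenient parsing so the json5-style files shown in the examples can be read.
        mapper.configure(JsonParser.Feature.ALLOW_COMMENTS, true);
        mapper.configure(JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
        mapper.configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);

        try (Stream<Path> files = Files.walk(Paths.get("cases/io/nop/auth/service/TestLoginApi"))) {
            files.filter(p -> p.toString().endsWith(".json5")).forEach(p -> rename(mapper, p));
        }
    }

    static void rename(ObjectMapper mapper, Path file) {
        try {
            JsonNode root = mapper.readTree(file.toFile());
            renameField(root);
            mapper.writerWithDefaultPrettyPrinter().writeValue(file.toFile(), root);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Recursively rename userNick -> nickName in all nested objects.
    static void renameField(JsonNode node) {
        if (node instanceof ObjectNode obj && obj.has("userNick")) {
            obj.set("nickName", obj.remove("userNick"));
        }
        node.forEach(MigrateTestData::renameField);
    }
}
```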
VI. Markdown as a DSL Carrier
The Reversible Computation theory emphasizes replacing general imperative programming with descriptive DSLs, thereby reducing the amount of code needed to express business logic across different domains and levels, and achieving low-code in a systematic way.
For expressing and verifying test data, besides formats such as JSON/YAML, we can also consider the Markdown format, which is closer to a document.
In testing for the XLang language, we have defined a standardized markdown structure for expressing test cases:
```markdown
# Test Case Title

Specific explanatory text, using general markdown syntax. These explanations are automatically ignored during test case parsing.

'''Language of the test code block
Test Code
'''

* ConfigName: ConfigValue
* ConfigName: ConfigValue
```
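To make the structure concrete, a case following this template might look like the hypothetical sketch below; the language tag, code body, and config name are placeholders rather than the actual options used by the XLang tests, so refer to the real TestXpl cases mentioned next for authoritative examples:

```markdown
# Example: evaluate a simple template

Explanatory text describing the intent of the case; it is ignored by the parser
(hypothetical case; the config name below is illustrative only).

'''xpl
<!-- test code for the case goes here -->
'''

* expectedOutput: ...
```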
For concrete examples, refer to the test cases of TestXpl.
VII. Summary
The Nop platform is a new-generation low-code platform built from scratch on the principles of Reversible Computation. It adopts a forward-design approach that is DSL-first, model-first, and automated-testing-first, rather than being derived by bolting partial low-code transformations onto existing frameworks. In many respects it overcomes difficulties found in currently known low-code solutions.
NopAutoTest is an organic component of the Nop platform. It makes full use of the model information already present in the platform for automatic inference and combines it with the delta-based structure definition syntax unique to Reversible Computation, effectively reducing the maintenance cost of automated test cases.
Within the technical system of the Nop platform, low-code is positioned at the development stage, meaning low-code generates code based on models, tightly integrating with manually written code. No-code is positioned at the runtime stage, interacting with users through visual interfaces to achieve customization and adjustment of partial logic. NopAutoTest is a testing framework supporting low-code development. Using it does not require deploying additional services separately; it can be integrated into general development DevOps processes, using the existing maven test command to execute tests.
On the other hand, although NopAutoTest provides integration with the JUnit testing framework, its core code is actually independent of any unit testing framework. Therefore, it is possible to integrate its functionality into the runtime engine. For example, in the UI, we could provide a debug switch. When enabled, it indicates recording a test case, automatically tracking all subsequent backend calls, recording all accessed database data and changes made to the database, and then packaging them into an offline test case.
For a detailed introduction to the theory of Reversible Computation, please refer to my previous articles: