This article provides an in-depth look at the exception handling mechanism of HarmonyOS NEXT.
1. Background Information
With the official release of native HarmonyOS, HarmonyOS NEXT has moved to a fully self-developed, full-stack architecture, marking a complete decoupling from the Android Open Source Project (AOSP) and establishing a unified kernel and runtime environment. HarmonyOS NEXT uses ArkTS as its primary development language and relies on ArkCompiler and a distributed architecture to build an intelligent ecosystem. This technology evolution improves system performance and security, but it also introduces new challenges for monitoring application stability.
Combining real-world engineering practices, this article systematically explains the end-to-end crash handling process, from crash capture and context collection to symbolication and stack restoration, to help developers build highly available, maintainable native HarmonyOS applications. For HarmonyOS application development, system-level crash events can be categorized and handled at two technical levels:
Native-layer crashes (NativeCrash): When an application calls low-level native code written in C/C++ and fails to properly handle system signals such as SIGSEGV (invalid memory access), SIGABRT (intentional abnormal termination), SIGFPE (floating-point operation error), SIGILL (illegal instruction), and SIGBUS (bus error), the system automatically generates a NativeCrash event. These crashes typically involve memory management issues (such as null pointer dereference and out-of-bound array access), resource race conditions, or hardware faults. Root cause analysis of such crashes requires core dump inspection and debug symbol mapping.
JavaScript/ArkTS-layer crashes (JsError): During development at the application logic layer using JavaScript or ArkTS, if runtime exceptions are not properly caught, such as undefined variable access (ReferenceError), type conversion error (TypeError), syntax error (SyntaxError), and assertion failure (AssertionError), the system records a JsError event. Such exceptions can be debugged by using the DevTools console logs, Promise chain error tracing, or try/catch blocks for error catching.
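As a minimal, platform-neutral illustration (plain TypeScript, no HarmonyOS APIs; `parseConfig` and `loadConfig` are hypothetical names), the try/catch and Promise-chain techniques mentioned above look like this:

```typescript
// Synchronous errors: a try/catch block stops the exception from
// propagating to the runtime as an uncaught JsError.
function parseConfig(raw: string): object | null {
  try {
    return JSON.parse(raw) as object;
  } catch (e) {
    console.error(`Config parse failed: ${(e as Error).message}`);
    return null;
  }
}

// Asynchronous errors: a .catch() on the promise chain plays the same
// role for rejected promises.
function loadConfig(fetcher: () => Promise<string>): Promise<object | null> {
  return fetcher()
    .then((raw: string) => JSON.parse(raw) as object)
    .catch((e: Error) => {
      console.error(`Config load failed: ${e.message}`);
      return null;
    });
}
```

Anything that escapes both of these nets ends up recorded by the system as a JsError event.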
By establishing a closed loop of crash monitoring-analysis-fix, developers can significantly improve application stability metrics and reduce user churn. This article focuses on practical methods and underlying principles for the entire crash analysis workflow from instrumentation and monitoring to exception analysis.
2. Exception Simulation
Scenario 1: Simulate a C++ crash. On the UI, create a button that triggers a C++ crash when it is tapped.
```typescript
ItemButton({ btnName: "CPP_CRASH", fonSize: 12 })
  .width('30%')
  .onClick(() => {
    libentry.createCppCrash()
  })
```
The C++ crash is constructed in the following way:
```cpp
// Construct a C++ crash.
static napi_value CreateCppCrash(napi_env env, napi_callback_info info) {
    throw std::runtime_error("This is a CPP CRASH");
}
```
When the button is tapped to trigger the C++ crash, the application crashes. The exception stack trace can be found under FaultLog in DevEco Studio.
Scenario 2: Simulate a JsError crash. On the UI, create a button that triggers a JS crash when it is tapped.
```typescript
// Create an exception.
ItemButton({ btnName: "JS_CRASH", fonSize: 12 })
  .width('30%')
  .onClick(() => {
    throw new Error('An error occurred');
  })
```
Similarly, the exception stack trace can be found under FaultLog in DevEco Studio.
For developers, it is critical that when an application crashes, the crash details and stack trace are reported and the stack trace can be effectively symbolicated. The following sections introduce a practical, effective approach to crash analysis.
3. Instrumentation and Monitoring Based on HiAppEvent
The HiAppEvent system capability can be used to identify and collect crash events.
- Register a watcher as early as possible: Add a watcher in the startup logic of the application (such as the onCreate method of EntryAbility.ets) to ensure all types of crashes can be captured at the earliest opportunity.
- Focus on the onReceive callback: This is the entry point for handling and reporting crash data. When the application crashes (or on the next startup), the system calls this function and passes in all crash information as parameters.
```typescript
import { hiAppEvent, hilog } from '@kit.PerformanceAnalysisKit';
// ...
let watcher: hiAppEvent.Watcher = {
  name: "MyCrashWatcher",
  appEventFilters: [ /* ... */ ],
  onReceive: (domain: string, appEventGroups: Array<hiAppEvent.AppEventGroup>) => {
    hilog.info(0x0000, 'MyCrashReporter', 'Crash event received!');
    for (const eventGroup of appEventGroups) {
      for (const eventInfo of eventGroup.appEventInfos) {
        const crashReport = {
          time: eventInfo.params['time'],
          crash_type: eventInfo.params['crash_type'],
          // ... Other metadata.
        };
        const logFilePath = eventInfo.params['external_log'] ? eventInfo.params['external_log'][0] : null;
        uploadCrashReport(crashReport, logFilePath);
      }
    }
  }
};
hiAppEvent.addWatcher(watcher);
```
- Implement the uploadCrashReport function: This requires custom implementation, which involves two tasks.
The first is to report the crashReport JSON object to your server.
The second is, if logFilePath exists (typically in the case of native crashes), to read the log file at that path and upload it together with crashReport, either as a file stream or as text content.
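The two tasks above can be sketched as a small pure helper that assembles the final payload. This is only an illustration: `buildCrashPayload` and the injected `readText` reader are hypothetical names, not SDK APIs, and the actual HTTP upload is platform-specific and left as a comment.

```typescript
// Sketch only: assembles the JSON body that uploadCrashReport would send.
// readText is an injected file reader; on a device it would wrap the
// SDK's file-system API. Field names mirror the watcher example above.
interface CrashReport {
  time: number;
  crash_type: string;
  [key: string]: unknown;
}

function buildCrashPayload(
  report: CrashReport,
  logFilePath: string | null,
  readText: (path: string) => string
): string {
  const payload: Record<string, unknown> = { ...report };
  // Native crashes usually carry an external fault log; inline its text
  // so the server receives it together with the structured metadata.
  if (logFilePath !== null) {
    payload['fault_log'] = readText(logFilePath);
  }
  // The returned string is what you would send to your server,
  // for example as the body of an HTTP POST.
  return JSON.stringify(payload);
}
```

Injecting the reader and transport keeps the reporting logic testable off-device; only the thin wiring to the platform APIs needs to run on HarmonyOS.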
4. Obtaining Build Artifacts for Crash Stack Symbolication
Crash stack symbolication in HarmonyOS requires specific build artifacts: the ArkTS debugging artifact (source map file), the C++ debugging artifact (.so file), and the code obfuscation artifact (nameCache file). Similar in concept to symbol tables on iOS and Android, these build artifacts store symbols and related information from the source code, such as variables, functions, and classes. They serve as a mapping between memory addresses and function names, file names, and line numbers.
These build artifacts containing symbol mapping information are essential inputs for reconstructing crash stack traces. In HarmonyOS, the three types of build artifacts are obtained in the following way:
1. ArkTS debugging artifact (source map file): generated in release mode at the following path: {ProjectPath}/{ModuleName}/build/{product}/cache/default/default@CompileArkTS/esmodule/release/sourceMaps.map.
2. C++ debugging artifact (.so file):
To retain symbol tables and debugging information during release builds, specify "arguments": "-DCMAKE_BUILD_TYPE=RelWithDebInfo" under buildOption/externalNativeOptions in build-profile.json5:
```json5
{
  "apiType": "stageMode",
  "buildOption": {
    "externalNativeOptions": {
      "path": "./src/main/cpp/CMakeLists.txt",
      "arguments": "-DCMAKE_BUILD_TYPE=RelWithDebInfo",
      "cppFlags": "",
    }
  },
  ...
}
```
The .so file with debugging information can be found in the {ProjectPath}/{ModuleName}/build/{product}/intermediates/libs directory.
3. Code obfuscation artifact (nameCache file): the de-obfuscation mapping table generated in release mode, located in the following directory: {ProjectPath}/{ModuleName}/build/{product}/cache/default/default@CompileArkTS/esmodule/release/obfuscation.
Each C++ debugging artifact (.so file) is matched one-to-one with its corresponding crash stack trace by its build ID, a hash generated during compilation that uniquely identifies a specific build of an .so file. During symbolication, it is essential that the debugging .so file matches exactly the version required by the current stack trace; otherwise, the symbolicated line numbers and function names will be incorrect. The following example uses the compiled artifacts and stack trace from the simulated crash scenario. You can run the file command in the directory where the .so files are stored to obtain the build ID of the current .so file, as shown in the figure.
The following figure shows the crash stack trace in the simulated crash scenario.
We can see that the build ID in the stack trace matches the build ID of the .so file obtained by running the shell command, confirming that symbolication can proceed as expected.
5. Principles of Crash Stack Symbolication
To symbolicate the raw crash stacks that are reported, we need specialized tools. The official HarmonyOS platform recommends hstack, a powerful tool for symbolicating crash stacks of obfuscated release builds and converting them into stack traces that correspond to the source code. It supports Windows, macOS, and Linux. During symbolication, hstack acts as a scheduler, relying at its core on llvm-addr2line, source map analysis, and symbol deobfuscation.
5.1 Principles of C++ Stack Symbolication
5.1.1 Interpreting the Stack
Let us first take a look at a reported C++ stack log. Each line contains the following key information:
```
#00 pc 0000000000006f98 /data/storage/el1/bundle/libs/arm64/libentry.so(996f532bb3d4b6a1a911675ec4a018291d3038c5)
```
● #00: the stack frame index. 00 represents the top of the stack, which is the direct location of the crash.
● pc 0000000000006f98: the program counter (PC) address. This is a crucial piece of information for symbolication, indicating the relative address of the CPU instruction executed in the libentry.so library.
● /data/storage/el1/bundle/libs/arm64/libentry.so: the path to the shared library. It indicates the .so file where the crash occurred. This is the runtime path on the device.
● (996f532bb3d4b6a1a911675ec4a018291d3038c5): the build ID. As mentioned before, this build ID uniquely matches a specific .so file.
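The four fields above can be pulled apart mechanically. The following sketch (hypothetical helper, assuming the line format matches the sample shown) parses one frame; comparing the extracted build ID against the one printed by the file command on your debugging .so catches version mismatches before symbolication starts:

```typescript
// Parses one frame of a reported C++ stack into its four components.
interface StackFrame {
  index: number;   // #00 -> 0, top of the stack
  pc: string;      // relative instruction address inside the .so
  soPath: string;  // runtime path of the shared library on the device
  buildId: string; // uniquely identifies a specific .so build
}

const FRAME_RE = /^#?(\d+)\s+pc\s+([0-9a-fA-F]+)\s+(\S+?\.so)\(([0-9a-fA-F]+)\)/;

function parseFrame(line: string): StackFrame | null {
  const m = FRAME_RE.exec(line.trim());
  if (m === null) return null;
  return { index: parseInt(m[1], 10), pc: m[2], soPath: m[3], buildId: m[4] };
}
```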
5.1.2 How llvm-addr2line Works
llvm-addr2line is one of the most widely used C++ stack symbolication tools in the industry. Its mechanism applies directly to HarmonyOS scenarios and relies primarily on two components contained in the debugging .so file.
- DWARF debugging information (the map from addresses to the source code)
In release builds, debugging information is often stripped to reduce the binary size, which removes the DWARF sections. Without DWARF, llvm-addr2line cannot function, and hstack symbolication will fail. This is why, as mentioned earlier, debugging information must be preserved before symbolication (for example, by building with RelWithDebInfo): the compiler then generates a detailed map, the DWARF information, and packages it into the debugging .so file.
The DWARF information includes the following data:
● The mapping between each machine instruction address (such as 0x6f98) and the corresponding line of source code in a .cpp file.
● The function that the address belongs to, along with its parameters.
● Complex inline function call relationships.
- C++ name demangling
To support C++ features such as function overloading, the compiler transforms human-readable function names (such as OH_NativeXComponent_RegisterCallback) into unique but human-unreadable names (such as _Z35OH_NativeXComponent_RegisterCallbackP18OH_NativeXComponentP29OH_NativeXComponent_Callback). This process is known as name mangling. In the DWARF information, llvm-addr2line locates these mangled names. To make them readable, it performs the reverse process, which is demangling, to restore the original C++ function signatures.
5.2 Principles of Source Map Analysis
For crashes that occur in ArkTS, the source map file maps unreadable machine code or bytecode back to the original, human-readable source code locations. The sourceMaps.map file is an aggregated JSON file. Let us break down how it works step by step.
5.2.1 File Key
When the compiler processes ArkTS files, it embeds a unique string identifier, called a key, into the final .abc bytecode file for each source file. In the sourceMaps.map file, this key serves as the primary key under which all mapping information associated with the corresponding source file is stored. When a runtime exception occurs, the stack trace directly carries this key. Once the analysis tool obtains the key, it can locate the corresponding mapping block in the large sourceMaps.map file with O(1) complexity, like a dictionary lookup, without any traversal.
Here is a key example and its explanation: entry|har1|1.0.0|src/main/ets/pages/w.ts.
entry: the name of the current module (from the name field in oh-package.json5).
har1|1.0.0: the name and version of the dependency package. This is critical for resolving dependency conflicts and managing versions. Even if two versions of the same HAR dependency package exist, the key precisely identifies which version the crash originated from. For in-module code, this part corresponds to the name and version of the module.
src/main/ets/pages/w.ts: the relative path of the source code file within the project.
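The O(1) lookup described above can be sketched as an ordinary dictionary access. The structure below is simplified for illustration (real sourceMaps.map blocks carry additional fields such as the package-info entries discussed later):

```typescript
// A simplified view of the aggregated sourceMaps.map: a dictionary
// keyed by "<module>|<package>|<version>|<path>" strings.
interface SourceMapBlock {
  sources: string[];
  names: string[];
  mappings: string;
}

function findMappingBlock(
  aggregated: Record<string, SourceMapBlock>,
  key: string
): SourceMapBlock | undefined {
  // No traversal: the key taken from the stack trace is used directly.
  return aggregated[key];
}
```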
5.2.2 mappings
The mappings field is the core of the source map V3 specification. It looks like a long string of gibberish, such as "AAAA,IAAM,CAAC...". However, it is actually encoded by using variable-length quantity (VLQ), an efficient compression scheme for position data. The parser decodes this string based on specific rules to reconstruct a complete mapping table that links line and column numbers in the transformed code (bytecode) to those in the original source code.
The encoding works incrementally: The first mapping point records an absolute position, and subsequent points record only the delta from the previous one. For example, if two mapping points differ by just a few columns in the source code, VLQ may encode that delta by using only one or two characters, which significantly reduces the file size.
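A minimal decoder for one mappings segment shows how little machinery the format needs. Each base64 digit carries 5 payload bits plus a continuation bit, and the lowest payload bit is the sign (this follows the source map V3 specification; the function name is illustrative):

```typescript
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

// Decodes one comma-separated segment of a V3 "mappings" string into
// its list of signed integers (deltas relative to the previous segment).
function decodeVlqSegment(segment: string): number[] {
  const out: number[] = [];
  let value = 0;
  let shift = 0;
  for (const ch of segment) {
    const digit = B64.indexOf(ch);
    if (digit < 0) throw new Error(`invalid base64 digit: ${ch}`);
    value += (digit & 31) << shift;
    if (digit & 32) {
      shift += 5;               // continuation bit set: more digits follow
    } else {
      // Last digit of this number: lowest bit is the sign.
      out.push(value & 1 ? -(value >> 1) : value >> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

decodeVlqSegment('AAAA'); // → [0, 0, 0, 0]
decodeVlqSegment('IAAM'); // → [4, 0, 0, 6]
```

So the "gibberish" segment IAAM from the example above decodes to four small deltas, which the parser then accumulates into absolute line and column positions.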
5.2.3 sources and names
sources: an array that lists all source file names involved in the module. Each mapping decoded from mappings includes an index that points to this array to identify the source file it corresponds to.
names: an array that contains all variable and property names used in the code. If code obfuscation was applied (for example, myLongVariableName is obfuscated as a), the mappings record includes an index pointing to this array. During analysis, the parser retrieves the original variable name myLongVariableName by using this index.
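Resolving a decoded segment against these two arrays is a pair of index lookups. The sketch below assumes the standard V3 segment layout [generatedColumn, sourceIndex, originalLine, originalColumn, nameIndex], with all deltas already summed into absolute values; the helper name is illustrative:

```typescript
// Resolves one absolute mapping segment against the sources and names
// arrays of its source map block. The optional fifth field indexes
// into names; 4-field segments carry no name information.
function resolveSegment(
  seg: number[],
  sources: string[],
  names: string[]
): { source: string; line: number; column: number; name?: string } {
  return {
    source: sources[seg[1]],
    line: seg[2],
    column: seg[3],
    name: seg.length === 5 ? names[seg[4]] : undefined,
  };
}
```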
5.2.4 Linking to nameCache for Deobfuscation
In large projects, code obfuscation is common. The obfuscation tool generates a nameCache file that records the mappings between the original and obfuscated variable, function, and property names. When source map analysis requires the restoration of obfuscated names, it must first identify the current module and dependent version to locate the matching nameCache file.
The entry-package-info and package-info fields serve this purpose: linking the correct nameCache resource for deobfuscation. During the restoration, the parser uses these fields to ensure that it loads the right version of the nameCache file, enabling precise deobfuscation.
5.3 Principles of nameCache Deobfuscation
Deobfuscation can be seen as the second refinement stage after source map analysis. While the source map shows where the issue occurred (file name and line numbers), deobfuscation indicates what the issue is (function and variable names).
The nameCache.json file is a JSON object that uses the original file paths as keys. Once the parser restores the original file name based on source map analysis, it can use that name as a key to retrieve the corresponding obfuscation data. In the nameCache.json file, several dictionaries are maintained:
IdentifierCache: records the mappings between regular variables/some function names and their obfuscated forms.
MemberMethodCache: records mappings for class member methods.
PropertyCache: records mappings for global property names. When property obfuscation is enabled, for access like this.userName, userName may be globally replaced by a shorter name.
obfName: maps original file names to obfuscated file names. While source maps can also handle file name mappings, this offers a more direct method for lookups.
entryPackageInfo: handles version verification. It ensures that the current nameCache.json file matches the exact version of the code where the crash occurred. This is a critical safeguard for maintaining accuracy in automated and CI environments.
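The dictionaries above can be queried with a reverse lookup plus the range matching mentioned for member methods. The sketch below assumes a deliberately simplified nameCache layout (real nameCache.json files are richer, and the key format is an assumption for illustration):

```typescript
// Simplified per-file nameCache slice: original names map to obfuscated
// names; member-method keys carry a "startLine:endLine" range.
interface FileNameCache {
  IdentifierCache: Record<string, string>;
  MemberMethodCache: Record<string, string>; // key: "origName:startLine:endLine"
}

// Restores an obfuscated identifier seen at a given source line.
function deobfuscate(
  cache: FileNameCache,
  obfuscated: string,
  line: number
): string | null {
  // Member methods first: match the obfuscated name AND the line range.
  for (const [key, obf] of Object.entries(cache.MemberMethodCache)) {
    const [orig, start, end] = key.split(':');
    if (obf === obfuscated && line >= Number(start) && line <= Number(end)) {
      return orig;
    }
  }
  // Then plain identifiers: a simple reverse lookup.
  for (const [orig, obf] of Object.entries(cache.IdentifierCache)) {
    if (obf === obfuscated) return orig;
  }
  return null;
}
```

The line-range check matters because short obfuscated names like g2 are reused across many scopes; only the range disambiguates which original name a given stack frame refers to.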
The following figure shows the overall deobfuscation process.
5.4 Summary
So far, we have outlined the complete end-to-end crash analysis mechanism on the HarmonyOS platform, from the lowest-level native exceptions to the highest-level ArkTS exceptions:
● Native layer (C++): When a crash occurs in an .so file, hstack invokes llvm-addr2line, which uses the DWARF information embedded in the debugging .so file to convert machine instruction addresses (such as 0x6f98) into the corresponding C++ file names, line numbers, and function signatures.
● ArkTS layer (location mapping): When an exception occurs in ArkTS code, the analysis tool first uses the sourceMaps.map file. It locates the mapping block by using the unique key in the stack trace and decodes the mappings string to restore the line and column numbers of the .abc bytecode to the corresponding file name and line number of the ArkTS source code.
● ArkTS layer (name restoration): Finally, the analysis tool loads the nameCache.json deobfuscation file. Using the file name and line number obtained in the previous step, it searches through the mapping tables, such as IdentifierCache, to perform range matching. This restores obfuscated identifiers (such as g2) to their original, meaningful function, variable, or property names as written by the developers.
6. Conclusion
This article introduced the crash monitoring and analysis solution for HarmonyOS applications. System crashes fall into two main categories: native-layer crashes (C/C++ signal exceptions) and JS/ArkTS-layer crashes (runtime exceptions). Crash monitoring is implemented through HiAppEvent instrumentation. During development, you can simulate both types of crashes with button triggers and inspect the resulting stack traces in DevEco Studio.

The core of the monitoring solution is to register a watcher at application startup. Through the onReceive callback, it collects key crash data such as the crash type and timestamp, establishing a closed-loop system for crash reporting and analysis. This approach ultimately enhances application stability and reduces user churn.

Building and maintaining such a comprehensive monitoring and analysis system requires significant engineering investment. For teams seeking an out-of-the-box solution that lets them focus on business development, mature options are already available. For example, Alibaba Cloud Application Real-Time Monitoring Service (ARMS) provides the Real User Monitoring (RUM) SDK for HarmonyOS, which enables non-intrusive collection, reporting, and analysis of exception data. With session tracing and page correlation, it helps developers quickly reproduce issues and pinpoint the exact location of an error in the code. For integration details, see the official document. If you have any questions, you can join the RUM support DingTalk group (group ID: 67370002064) for consultation.








