
SameX
Application of Model Lightweighting in HarmonyOS Next Intelligent Driving Assistance System

This article explores the application of model lightweighting technology in intelligent driving assistance systems built on Huawei's HarmonyOS Next (currently up to API 12), summarized from actual development practice. It is intended mainly as a vehicle for technical sharing and communication; there may be mistakes and omissions, and colleagues are welcome to offer valuable opinions and questions so that we can make progress together. This article is original content, and any form of reprint must credit the source and the original author.

I. Intelligent Driving Scenarios and Model Lightweighting Strategy Planning

(1) Key Scenario Analysis

  1. Lane Detection Scenario: Lane detection is one of the basic functions of an intelligent driving assistance system and is crucial for keeping the vehicle within its lane. Across road conditions such as highways, urban roads, curves, and night driving, the system must accurately identify the position, shape, and type (solid line, dashed line, etc.) of the lane markings. The model therefore has to cope with the effects of different lighting, road, and weather conditions on lane-line recognition while maintaining high accuracy and reliability. On rainy or snowy days, for example, lane lines may be partially blocked or blurred, so the model needs strong robustness to detect them accurately.
  2. Obstacle Recognition Scenario: Timely, accurate identification of obstacles on the road is key to driving safety. Obstacles include vehicles, pedestrians, traffic signs, road construction facilities, and more. The model must quickly recognize obstacles at different distances, angles, and speeds, and judge their type, position, and motion state. When driving at high speed, for example, the system should detect distant obstacles early enough to give the driver sufficient reaction time; in complex urban traffic, it should accurately identify all types of obstacles to avoid misjudgments and missed detections.

(2) Model Lightweighting Strategies Based on HarmonyOS Next

  1. Strategies Considering Hardware Resources: The hardware resources of intelligent driving devices are limited, especially the computing power and storage capacity of the in-vehicle computing unit. Lightweight model architectures are therefore preferred, such as networks based on the MobileNet or EfficientNet-Lite families, which are designed to reduce parameter count and computational complexity while maintaining acceptable performance. MobileNet, for instance, replaces standard convolutions with depthwise separable convolutions, greatly reducing the computational load. During training, parameters are adjusted to the available hardware resources, for example using a smaller batch size to avoid memory overflow. Hardware acceleration is also used where available, such as the in-vehicle GPU or NPU (Neural Network Processing Unit), to speed up model inference and improve computational efficiency.
  2. Strategies Meeting Safety Requirements: Safety is the primary requirement of intelligent driving, so lightweighting must not significantly affect model accuracy or stability. Conservative pruning and quantization strategies are adopted to avoid over-optimization degrading model performance. During pruning, for example, a low pruning ratio is set, and a balance point that both reduces parameters and preserves safety performance is found through repeated trials. For quantization, appropriate ranges and precisions are chosen so that precision loss does not cause misjudgments in safety-critical decisions. In addition, a model backup and redundancy mechanism is established: if the main model fails or malfunctions, the system can quickly switch to a backup model to keep the intelligent driving system operating safely and continuously.
  3. Lightweighting Strategies with Collaborative Distributed Capabilities: Leveraging the distributed capabilities of HarmonyOS Next, different model tasks are assigned to different in-vehicle devices or computing units that work collaboratively. For example, the lane detection model and the obstacle recognition model can be deployed on separate edge computing nodes (in-vehicle computers, intelligent sensors, etc.), with distributed communication providing data sharing and collaborative processing between them. On the data side, distributed data management performs distributed pre-processing of the collected images: operations such as cropping and normalization run on the nodes close to the camera, and only the processed intermediate data is transmitted to the model computing node, reducing transmission volume and processing time. Task allocation is also adjusted dynamically according to the vehicle's real-time state; at high speed, for instance, more resources are preferentially given to the obstacle recognition model to speed up the system's response to safety threats.
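As a quick check on the depthwise separable convolution claim above, the parameter counts of a standard 3x3 convolution and its depthwise separable equivalent can be compared directly (the channel sizes are illustrative):

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of a standard k x k convolution (bias terms omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

c_in, c_out = 128, 256
standard = conv_params(c_in, c_out)                   # 128*256*9 = 294,912
separable = depthwise_separable_params(c_in, c_out)   # 128*9 + 128*256 = 33,920
print(f"parameter reduction: {standard / separable:.1f}x")  # about 8.7x
```

The same ratio applies to the multiply-accumulate count per output position, which is why MobileNet-style backbones suit in-vehicle compute budgets.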

II. Development of Key Functions and Coping with Technical Challenges

(1) Realization of Lightweighting for Lane Detection Model

  1. Code Example of Structure Optimization and Quantization (taking a MobileNet-based lane detection model as an example, using a framework such as MindSpore Lite)
# Note: the Pruner/Quantizer classes below are illustrative; check the exact
# MindSpore Lite API for the framework version you use.
import mindspore_lite as mslite

# Load the original MobileNet-based lane detection model
model = mslite.Model.from_file('mobilenet_lane_detection.ckpt')

# Structure optimization: pruning
pruner = mslite.Pruner()
pruner.set_pruning_method('structured')
pruner.set_pruning_ratio(0.2)  # prune 20% of the parameters
pruned_model = pruner.do_pruning(model)

# Quantization
quantizer = mslite.Quantizer()
quantizer.set_quantization_method('uniform')
quantizer.set_quantization_params(-0.5, 0.5, 8)  # value range [-0.5, 0.5], 8 bits
quantized_model = quantizer.do_quantization(pruned_model)

# Save the lightweighted model
quantized_model.save('mobilenet_lane_detection_light.ckpt')
  2. Optimization Effects and Performance Improvement: After structure optimization and quantization, the parameter count of the lane detection model is reduced by about 40%, and its storage size shrinks from 10MB to about 6MB. Inference speed on in-vehicle devices increases by about 30%, meeting the requirements of real-time lane detection. In actual tests, the model's detection accuracy stays above 90% across road conditions; even at night or in adverse weather it still detects lane lines accurately, providing reliable lane-keeping assistance for intelligent driving.
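Inference-speed claims like the 30% figure above are easiest to compare with a small on-device timing harness. A rough sketch, where `predict` is a stub standing in for the real model's inference call (the warmup and run counts are illustrative):

```python
import time
import statistics

def measure_latency(predict, frame, warmup=10, runs=100):
    """Return (mean, worst) single-frame inference latency in milliseconds."""
    for _ in range(warmup):      # warm up caches / lazy initialization
        predict(frame)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        predict(frame)
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(samples), max(samples)

# Stub predictor standing in for quantized_model's inference call
dummy_predict = lambda frame: [0.0] * 4
mean_ms, worst_ms = measure_latency(dummy_predict, frame=None)
```

Reporting the worst case alongside the mean matters here, since the safety argument depends on tail latency, not just the average.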

(2) Balancing Accuracy and Speed of the Obstacle Recognition Model

  1. Data Processing Optimization
    • Data Augmentation: For the obstacle recognition scenario, diverse data augmentation operations are carried out. This includes randomly scaling, translating, and rotating obstacle images, as well as simulating different lighting conditions and occlusion situations. For example, by randomly occluding part of the obstacle image, the model can learn to accurately recognize the features of the obstacle even under partial occlusion. At the same time, combined images of different types of obstacles in different scenarios are added to improve the model's recognition ability for complex scenarios.
    • Data Pre-processing: More refined normalization and standardization methods are adopted. For image data, normalization parameters are adjusted dynamically according to statistics such as image brightness and contrast, making the data better match the model's training requirements. In the night-driving scenario, for example, the normalization range for image brightness is narrowed to improve recognition of obstacles in low-light environments.
  2. Model Optimization Technologies
    • Model Structure Improvement: In the deep-learning-based obstacle recognition model, an attention mechanism is introduced so the model focuses on key areas of the image, improving recognition accuracy. For example, a spatial attention mechanism highlights the features of the obstacle region and suppresses background interference. The number of network layers and neurons is also tuned to reduce computational complexity while preserving accuracy.
    • Model Compression and Quantization: Mixed-precision training is adopted: during training, some layers compute in low-precision data types (such as 16-bit floating point) to reduce memory use and computational load. After training, a quantization step converts the model parameters to 8-bit integers, further compressing model size. With these measures, obstacle recognition accuracy stays above 95% while inference speed increases by about 40%, meeting the real-time requirements of the intelligent driving system.
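The random-occlusion and lighting-simulation augmentations described above can be sketched in a few lines with NumPy (image size, occlusion fraction, and jitter range are illustrative values, not the ones used in the project):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_occlusion(img, max_frac=0.3):
    """Zero out a random rectangle covering up to max_frac of each side."""
    h, w = img.shape[:2]
    oh = rng.integers(1, max(2, int(h * max_frac)))
    ow = rng.integers(1, max(2, int(w * max_frac)))
    y = rng.integers(0, h - oh + 1)
    x = rng.integers(0, w - ow + 1)
    out = img.copy()
    out[y:y + oh, x:x + ow] = 0
    return out

def brightness_jitter(img, low=0.6, high=1.4):
    """Scale pixel intensities to simulate different lighting conditions."""
    factor = rng.uniform(low, high)
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = brightness_jitter(random_occlusion(frame))
```

In practice these transforms would be composed randomly per sample inside the training data pipeline, so the model sees a different occlusion and lighting each epoch.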

(3) Integration of Lightweight Models and Compatibility Assurance

  1. Integrating Models into the Intelligent Driving System: The lightweighted lane detection and obstacle recognition models are integrated into the intelligent driving system. Through the HarmonyOS Next application development framework, communication interfaces are established between the models, the vehicle sensors (cameras, radars, etc.), and the control systems (steering, braking, etc.). For example, when the lane detection model detects that the vehicle is drifting out of its lane, a signal is sent through the interface to trigger a fine steering adjustment that keeps the vehicle in the lane. During integration, the models' input and output data formats are kept compatible with the other components of the system to achieve seamless interoperation.
  2. Compatibility Testing and Problem-Solving: Comprehensive compatibility testing covers in-vehicle hardware from different vehicle models, different operating system versions, and various sensor devices. Testing revealed that differences in camera resolution and viewing angle across vehicle models change the model's input data and degrade recognition. To address this, an adaptive image pre-processing module was developed that automatically adjusts image cropping and scaling according to the camera parameters, letting the model adapt to different inputs. The communication delay between the model and the vehicle control system was also addressed: by optimizing the communication protocol and data transmission method, the model's decisions reach the control system in time, guaranteeing safe and stable operation of the intelligent driving system.
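A minimal sketch of what such an adaptive pre-processing module might look like: center-crop each camera's frame to the model's aspect ratio, then resample to the model's input size. The 224x224 input size and the nearest-neighbour index-based resize are assumptions for illustration (a real system would use the camera pipeline's hardware scaler):

```python
import numpy as np

MODEL_INPUT = (224, 224)  # (height, width) expected by the model; assumed here

def adapt_frame(frame, model_hw=MODEL_INPUT):
    """Center-crop to the model's aspect ratio, then nearest-neighbour resize."""
    h, w = frame.shape[:2]
    th, tw = model_hw
    target_ar = tw / th
    if w / h > target_ar:                # frame too wide: crop width
        new_w = int(h * target_ar)
        x0 = (w - new_w) // 2
        frame = frame[:, x0:x0 + new_w]
    else:                                # frame too tall: crop height
        new_h = int(w / target_ar)
        y0 = (h - new_h) // 2
        frame = frame[y0:y0 + new_h, :]
    h, w = frame.shape[:2]
    rows = np.arange(th) * h // th       # nearest-neighbour sampling grid
    cols = np.arange(tw) * w // tw
    return frame[rows][:, cols]

wide = np.zeros((720, 1280, 3), dtype=np.uint8)   # e.g. a 16:9 camera
assert adapt_frame(wide).shape == (224, 224, 3)
```

Because the crop is derived from the camera's reported geometry rather than hard-coded, the same model binary can serve vehicle models with different camera fits.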

III. System Testing and Reliability Enhancement

(1) System Testing Environment and Methods

  1. Construction of the Simulated Driving Environment: A simulated driving environment is constructed, including a driving simulator, a virtual scene generator, and a sensor emulator. The driving simulator reproduces the driver's operating behaviors, such as acceleration, braking, and steering; the virtual scene generator creates real-world road scenes such as urban roads, highways, and mountain roads, along with different weather conditions and traffic situations; the sensor emulator simulates the signal output of in-vehicle cameras, radars, etc., providing test data for the models. In this way, the performance of the intelligent driving assistance system can be tested comprehensively in a laboratory environment.
  2. Performance Testing Indicators and Testing Process
    • Performance Testing Indicators: These mainly include the model's detection precision, recall, F1-score, inference speed, and system response time. In the obstacle recognition test, for example, the proportion of recognitions that are correct (precision), the proportion of actually present obstacles that are detected (recall), and the combined F1-score are calculated. The time from when the model receives sensor data to when it outputs a result (inference speed), and the time from when the system detects an abnormal situation to when it takes a countermeasure such as braking or evasion (system response time), are also measured.
    • Testing Process: First, set different test scenarios in the simulated driving environment, and each scenario contains multiple test cases. For example, in the urban road scenario, set test cases with different types of obstacles, different traffic flows, and different lighting conditions. Then, run the intelligent driving assistance system and record the performance indicators of the model in each test case. Finally, conduct statistical analysis of the test results to evaluate the overall performance of the system.
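The indicators above can be computed directly from per-scenario detection counts. A minimal sketch (the example counts are hypothetical):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 95 obstacles correctly detected, 5 false alarms, 5 missed obstacles
p, r, f1 = detection_metrics(tp=95, fp=5, fn=5)
```

Aggregating these per scenario (rather than over the whole run) is what exposes the extreme-case weaknesses discussed in the next subsection.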

(2) Analysis of Test Results and Reliability Optimization

  1. Analysis of Test Results: Analysis of the system test results in the simulated driving environment shows that although the model achieves high detection accuracy and fast inference under normal conditions, problems remain in some extreme cases. For example, when strong light shines directly into the camera, or when heavy rain causes severe reflection from standing water on the road, the accuracy of the lane detection model drops below 80%; when multiple partially occluded obstacles appear in quick succession, the recall of the obstacle recognition model falls to about 85%. In addition, the system response time sometimes exceeds the safety threshold in complex scenarios, affecting driving safety.
  2. Reliability Optimization Measures
    • Enhancement of Model Robustness: For extreme lighting and weather conditions, the model is trained and optimized specifically. Training data in special scenarios such as strong light, weak light, heavy rain, and heavy snow are added, enabling the model to better adapt to different environments. At the same time, an adversarial training method is adopted to let the model learn features that are robust to lighting and weather changes. For example, image data simulating different lighting and weather conditions is generated through a generative adversarial network (GAN) to expand the training set. After optimization, the accuracy of the lane detection model in extreme lighting and weather conditions is increased to above 90%.
    • Establishment of a System Fault-Tolerance Mechanism: A fault-tolerance mechanism is established in the intelligent driving system so that when a model produces an abnormal or wrong decision, the system can correct it in time or take safety measures. For example, when the obstacle recognition model misidentifies obstacles several times in a row, the system automatically reduces vehicle speed, prompts the driver to watch the road, and tries to re-initialize the model or switch to a backup model for detection. Hardware redundancy is also added, such as using multiple cameras or sensors for data collection; when one sensor fails, the system continues working on the remaining sensors, improving overall reliability.
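The consecutive-failure fallback described above can be sketched as a small wrapper around the two models. The `primary`/`backup` callables and the failure threshold are illustrative, not the project's actual interfaces:

```python
class FallbackDetector:
    """Route detection to a backup model after too many consecutive failures.

    primary and backup are callables taking a frame and returning detections,
    or None / raising an exception on failure.
    """
    def __init__(self, primary, backup, max_failures=3):
        self.primary = primary
        self.backup = backup
        self.max_failures = max_failures
        self.failures = 0

    def detect(self, frame):
        if self.failures < self.max_failures:
            try:
                result = self.primary(frame)
                if result is not None:
                    self.failures = 0        # healthy again: reset the counter
                    return result
            except Exception:
                pass
            self.failures += 1
            if self.failures == self.max_failures:
                # In a real system: reduce speed and alert the driver here,
                # then attempt to re-initialize the primary model.
                pass
        return self.backup(frame)

det = FallbackDetector(primary=lambda f: None, backup=lambda f: ["obstacle"])
result = det.detect(frame=None)   # primary fails, backup answers
```

A single successful primary detection resets the counter, so transient glitches do not permanently demote the main model.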

(3) Future Development Prospects of Model Lightweighting in the Field of Intelligent Driving

  1. Trend of Hardware-Software Collaborative Optimization: Future intelligent driving assistance systems will pay more attention to hardware-software co-optimization. As in-vehicle chip technology develops, more high-performance chips designed specifically for intelligent driving will emerge, such as GPUs and NPUs with stronger AI computing capability. Model lightweighting will be combined closely with this hardware, producing model architectures and optimization algorithms better suited to hardware characteristics; for example, the hardware's sparse computing capability can be exploited to further optimize pruning and quantization strategies for higher computational efficiency. On the software side, training and inference frameworks will keep improving model development efficiency and performance.
  2. Multi-Modal Data Fusion and Model Lightweighting: Intelligent driving systems will increasingly rely on multi-modal data such as camera images, radar point clouds, and lidar data. Lightweighting techniques must adapt to multi-modal fusion and produce lightweight models that process multiple data types effectively. For example, an architecture combining convolutional neural networks (CNNs) with recurrent neural networks (RNNs) or graph neural networks (GNNs) can process image and sequence data respectively, while model compression keeps the overall complexity down. During fusion, efficient pre-processing and feature extraction reduce data redundancy and improve the model's learning efficiency and performance.
  3. Combination of Reinforcement Learning and Model Lightweighting: Reinforcement learning has great potential in intelligent driving decision-making, and its combination with model lightweighting will be explored. Lightweight models can quickly process environmental information, providing an efficient state representation for reinforcement learning algorithms, while the decision-making ability of reinforcement learning optimizes the system's behavior. In path planning and obstacle-avoidance decisions, for example, the lightweight model recognizes the road environment and the reinforcement learning algorithm selects the optimal driving strategy from the model's output. This combination will further improve the intelligence and real-time decision-making of intelligent driving assistance systems, laying a foundation for fully autonomous driving.

It is hoped that this article offers developers in the intelligent driving field some references and inspiration on applying model lightweighting in HarmonyOS Next, and helps jointly advance intelligent driving technology. If you encounter other problems in practice, you are welcome to communicate and discuss!
