In the wave of digital transformation, enterprise applications face a growing demand for secure identity verification. This case builds an intelligent verification system for such business scenarios on top of HarmonyOS 5’s AI vision services: the user selects an image from the device album, and the app completes the end-to-end pipeline of media resource access, image decoding, and AI inference through system-level API calls, ultimately outputting structured data that includes facial positions and feature points.
Detailed Development Steps
1. Module Dependency Configuration
// Import core functional modules
import { photoAccessHelper } from '@kit.MediaLibraryKit'; // Media library access
import { fileIo } from '@kit.CoreFileKit'; // File IO operations
import { image } from '@kit.ImageKit'; // Image processing
import { faceDetector } from '@kit.CoreVisionKit'; // AI vision capabilities
import { promptAction } from '@kit.ArkUI'; // UI interaction components
import { JSON } from '@kit.ArkTS'; // Data serialization
import { BusinessError } from '@kit.BasicServicesKit'; // Typed error info for system API failures
2. Media Resource Access
/**
 * Secure access to album resources
 * Note: declare ohos.permission.READ_IMAGEVIDEO in module.json5 (see Key Technical Analysis)
 */
private async selectPhoto() {
  try {
    const photoPicker = new photoAccessHelper.PhotoViewPicker();
    const result = await photoPicker.select({
      MIMEType: photoAccessHelper.PhotoViewMIMETypes.IMAGE_TYPE,
      maxSelectNumber: 1
    });
    if (!result?.photoUris?.length) {
      promptAction.showToast({ message: 'No image selected' });
      return null;
    }
    return result.photoUris[0];
  } catch (error) {
    const err = error as BusinessError;
    console.error(`Album access failed: ${err.code}, ${err.message}`);
    return null;
  }
}
3. Image Decoding Processing
/**
 * Image preprocessing pipeline
 * @param uri Image resource identifier
 * @returns Pixel map object for AI inference
 */
private async processImage(uri: string) {
  try {
    const file = await fileIo.open(uri, fileIo.OpenMode.READ_ONLY);
    const imageSource = image.createImageSource(file.fd);
    const pixelMap = await imageSource.createPixelMap({
      desiredSize: { width: 1024, height: 1024 } // Limit decode resolution; both dimensions are required
    });
    await imageSource.release(); // Free decoder resources once the pixel map exists
    await fileIo.close(file.fd);
    return pixelMap;
  } catch (error) {
    const err = error as BusinessError;
    console.error(`Image processing failed: ${err.code}`);
    return null;
  }
}
4. Face Detection Engine
/**
 * Perform facial feature analysis
 * @param pixelMap Input image data
 * @returns Array of facial features
 */
private async detectFaces(pixelMap: image.PixelMap) {
  try {
    faceDetector.init(); // Initialize the detection engine
    const visionConfig: faceDetector.VisionInfo = {
      pixelMap,
      analyzerMode: faceDetector.AnalyzerMode.ANALYZER_MODE_SIMPLE,
      faceDetectMode: faceDetector.FaceDetectMode.SPEED_MODE
    };
    const results = await faceDetector.detect(visionConfig);
    return results.filter(face => face.faceProbability >= 0.9); // Keep faces with confidence >= 0.9
  } catch (error) {
    const err = error as BusinessError;
    console.error(`Face detection failed: ${err.code}`);
    return [];
  } finally {
    faceDetector.release(); // Release detection resources
  }
}
5. Result Visualization
// Structured data display optimization
private showDetectionResult(results: faceDetector.FaceInfo[]) {
  if (!results.length) {
    this.detectionResult = 'No face detected';
    promptAction.showToast({ message: 'No face detected' });
    return;
  }
  const formattedData = results.map((face, index) => ({
    [`Face ${index + 1}`]: {
      Confidence: face.faceProbability.toFixed(2),
      Position: `[${face.faceRect.left}, ${face.faceRect.top}]`,
      Size: `${face.faceRect.right - face.faceRect.left}×${face.faceRect.bottom - face.faceRect.top}`,
      FeaturePoints: face.faceLandmarks.length
    }
  }));
  this.detectionResult = `Detected ${results.length} face(s)`;
  promptAction.showDialog({
    title: `Detected ${results.length} face(s)`,
    message: JSON.stringify(formattedData, null, 2),
    buttons: [{ text: 'OK', color: '#007DFF' }]
  });
}
Complete Implementation Code
@Entry
@ComponentV2
struct FaceDetector {
  @Local detectionResult: string = 'Awaiting detection...';

  // The selectPhoto / processImage / detectFaces / showDetectionResult
  // methods from Steps 2–5 above are members of this struct.

  async checkFace() {
    const imageUri = await this.selectPhoto();
    if (!imageUri) return;
    const pixelMap = await this.processImage(imageUri);
    if (!pixelMap) return;
    const faces = await this.detectFaces(pixelMap);
    await pixelMap.release(); // Free the pixel buffer once inference is done
    this.showDetectionResult(faces);
  }

  build() {
    Column() {
      Button('Face Recognition Detection')
        .width('80%')
        .height(48)
        .fontColor(Color.White)
        .backgroundColor('#007DFF')
        .margin({ bottom: 20 })
        .onClick(() => this.checkFace())
      Text(this.detectionResult)
        .fontSize(16)
        .textAlign(TextAlign.Center)
    }
    .padding(20)
    .width('100%')
    .height('100%')
  }
}
Key Technical Analysis
1. Permission Management
Configuration in module.json5:
"requestPermissions": [
  {
    "name": "ohos.permission.READ_IMAGEVIDEO",
    "reason": "Access album images"
  }
]
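In a real project the reason field typically must reference a string resource (for example $string:...), and recent versions of the PhotoViewPicker grant temporary read access to the URIs it returns, so check whether the static permission is still required for your API level. For completeness, below is a minimal sketch of requesting the permission at runtime; the function name and the passed-in context are illustrative assumptions.
// Sketch: runtime permission request with abilityAccessCtrl (assumes a UIAbility context)
import { abilityAccessCtrl, common, Permissions } from '@kit.AbilityKit';

async function requestAlbumPermission(context: common.UIAbilityContext): Promise<boolean> {
  const permissions: Array<Permissions> = ['ohos.permission.READ_IMAGEVIDEO'];
  const atManager = abilityAccessCtrl.createAtManager();
  const result = await atManager.requestPermissionsFromUser(context, permissions);
  // authResults: 0 = granted, -1 = denied by the user
  return result.authResults.every(status => status === 0);
}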
2. Performance Optimization Suggestions
- Image Scaling: control decode resolution via the desiredSize parameter (see the sizing sketch below)
- Mode Selection: SPEED_MODE for speed, QUALITY_MODE for accuracy
- Resource Release: close file descriptors and release detectors promptly
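As referenced above, here is a minimal sketch of aspect-aware downscaling before decoding; the helper name decodeScaled and the 1024 px cap are illustrative assumptions, not part of any system API.
// Sketch: decode at a capped resolution while preserving the source aspect ratio
import { image } from '@kit.ImageKit';

async function decodeScaled(fd: number, maxEdge: number = 1024): Promise<image.PixelMap> {
  const imageSource = image.createImageSource(fd);
  const info = await imageSource.getImageInfo(); // Read the original dimensions
  const scale = Math.min(1, maxEdge / Math.max(info.size.width, info.size.height));
  const pixelMap = await imageSource.createPixelMap({
    desiredSize: {
      width: Math.round(info.size.width * scale),
      height: Math.round(info.size.height * scale)
    }
  });
  await imageSource.release(); // Free decoder resources
  return pixelMap;
}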
3. Data Structure Description
Facial information includes:
interface FaceInfo {
  faceRect: Rect;           // Facial rectangle area
  faceLandmarks: Point[];   // Feature point coordinates
  faceProbability: number;  // Confidence [0–1]
  facePoseAngle: PoseAngle; // Head pose angle
}
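As an illustration of consuming this structure, here is a small sketch (assuming the rectangle fields used in Step 5) that picks the largest detected face as the primary candidate for verification; pickPrimaryFace is a hypothetical helper, not a system API.
// Sketch: choose the face with the largest bounding box as the primary subject
import { faceDetector } from '@kit.CoreVisionKit';

function pickPrimaryFace(faces: faceDetector.FaceInfo[]): faceDetector.FaceInfo | null {
  if (faces.length === 0) {
    return null;
  }
  const area = (f: faceDetector.FaceInfo): number =>
    (f.faceRect.right - f.faceRect.left) * (f.faceRect.bottom - f.faceRect.top);
  return faces.reduce((best, current) => area(current) > area(best) ? current : best);
}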
Best Practices
- Exception Handling: add checks for network/storage availability
- User Experience: show loading feedback with LoadingProgress while detection runs (see the sketch after this list)
- Performance Monitoring: use hiTrace to record the latency of each stage
- Extended Capabilities: integrate @kit.AIEngineKit for liveness detection
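A minimal sketch of the loading-feedback suggestion, shown as a standalone component so it stays self-contained; the LoadingDemo name, the simulated delay, and the isLoading flag are illustrative assumptions rather than part of the original example.
// Sketch: show LoadingProgress while an async detection call is running
@Entry
@ComponentV2
struct LoadingDemo {
  @Local isLoading: boolean = false;
  @Local statusText: string = 'Idle';

  private async runDetection() {
    this.isLoading = true;
    try {
      // Replace with the checkFace() flow from the complete example
      await new Promise<void>(resolve => setTimeout(resolve, 1500));
      this.statusText = 'Done';
    } finally {
      this.isLoading = false; // Always clear the flag, even on failure
    }
  }

  build() {
    Column({ space: 16 }) {
      Button('Detect').onClick(() => this.runDetection())
      if (this.isLoading) {
        LoadingProgress().width(48).height(48).color('#007DFF')
      } else {
        Text(this.statusText).fontSize(16)
      }
    }
    .padding(20)
    .width('100%')
  }
}
Wrapping the flag update in try/finally keeps the spinner from getting stuck if detection throws.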
Through this case, developers can quickly master the core development model of HarmonyOS AI vision services and extend it to advanced functions such as face comparison and expression recognition. During actual development, tune the parameters against the official documentation.