A production-ready MRZ scanner app can be a powerful tool for businesses and organizations. It automates scanning machine-readable zones (MRZs) on passports and ID cards, detects faces, captures document images, and exports the results to PDF.
In this tutorial, we'll walk through how to implement these features in an Android app using the Dynamsoft MRZ Scanner SDK, CameraX, and ML Kit.
Demo Video: Android MRZ Scanner
Prerequisites
- Android Studio
- A 30-day FREE trial license for Dynamsoft MRZ Scanner SDK
Project Overview
The application consists of two main activities:
- MainActivity: Handles the user interface, displays scan results, and provides PDF export functionality
- CameraXActivity: Manages camera preview, MRZ scanning, face detection, and document detection
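Before diving into the implementation, here is a minimal sketch of how MainActivity could launch CameraXActivity and request a result. The request code, the setLicense() setter, and the startScanner() helper are assumptions for illustration; the "scanner_config" extra key matches what CameraXActivity reads later in this tutorial:

private static final int REQUEST_CODE_SCAN = 100; // hypothetical request code

private void startScanner() {
    // ScannerConfig is a Serializable app-defined class holding the license key and detection type
    ScannerConfig config = new ScannerConfig();
    config.setLicense("LICENSE-KEY"); // assumed setter; put your 30-day trial license here
    android.content.Intent intent = new android.content.Intent(this, CameraXActivity.class);
    intent.putExtra("scanner_config", config); // read back via getSerializableExtra("scanner_config")
    startActivityForResult(intent, REQUEST_CODE_SCAN);
}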
Step 1: Add Dependencies
In the build.gradle file, add the following dependencies:
dependencies {
    // Dynamsoft MRZ Scanner SDK
    implementation 'com.dynamsoft:mrzscannerbundle:3.0.3100'

    // CameraX dependencies
    implementation 'androidx.camera:camera-core:1.3.0'
    implementation 'androidx.camera:camera-camera2:1.3.0'
    implementation 'androidx.camera:camera-lifecycle:1.3.0'
    implementation 'androidx.camera:camera-view:1.3.0'

    // ML Kit Face Detection
    implementation 'com.google.mlkit:face-detection:16.1.5'

    // iText for PDF generation
    implementation 'com.itextpdf:itext7-core:7.1.15'
}
Step 2: Set Up CameraXActivity
Begin by setting up CameraXActivity, which handles the camera preview and scanning logic:
public class CameraXActivity extends AppCompatActivity {
    private static final String TAG = "CameraXActivity";
    private static final int REQUEST_CODE_PERMISSIONS = 10;
    private static final String[] REQUIRED_PERMISSIONS = {Manifest.permission.CAMERA};

    private PreviewView previewView;
    private ExecutorService cameraExecutor;
    private CaptureVisionRouter mRouter;
    private ScannerConfig configuration;
    private String mCurrentTemplate = "ReadPassportAndId";
    private boolean isProcessing = false;
    private Bitmap mLastProcessedBitmap; // Store the last bitmap for face detection
    private Bitmap mLastDocumentBitmap;  // Store the last cropped document

    // Result validation fields
    private static final int VALIDATION_FRAME_COUNT = 5;
    private static final int REQUIRED_MATCHES = 2;
    private java.util.List<String> recentResults = new java.util.ArrayList<>();
    private java.util.Map<String, Integer> resultCounts = new java.util.HashMap<>();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camerax);

        previewView = findViewById(R.id.preview_view);

        // Get configuration from intent
        configuration = (ScannerConfig) getIntent().getSerializableExtra("scanner_config");

        // Initialize license
        if (configuration != null && configuration.getLicense() != null) {
            LicenseManager.initLicense(configuration.getLicense(), this, (isSuccess, error) -> {
                if (!isSuccess) {
                    error.printStackTrace();
                }
            });
        }

        // Initialize CaptureVisionRouter
        initializeCVR();

        // Setup close button
        ImageView closeButton = findViewById(R.id.iv_close);
        closeButton.setOnClickListener(v -> {
            setResult(RESULT_CANCELED);
            finish();
        });

        cameraExecutor = Executors.newSingleThreadExecutor();

        // Request camera permissions
        if (allPermissionsGranted()) {
            startCamera();
        } else {
            ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
        }
    }
}
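The initializeCVR() method called above is not shown in this article. Below is a minimal sketch, assuming the CaptureVisionRouter(Context) constructor from the Dynamsoft Capture Vision Router API:

private void initializeCVR() {
    try {
        // The router runs the capture template ("ReadPassportAndId" for MRZ) on each frame
        mRouter = new CaptureVisionRouter(this);
    } catch (Exception e) {
        Log.e(TAG, "Failed to initialize CaptureVisionRouter", e);
    }
}

If the ScannerConfig requests VIN scanning instead of MRZ, this is also where you would switch mCurrentTemplate to the corresponding VIN template name from the SDK documentation.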
Step 3: Implement Camera Preview with CameraX
Use CameraX to set up the camera preview and configure resolution for better image quality:
private void startCamera() {
    ListenableFuture<ProcessCameraProvider> cameraProviderFuture = ProcessCameraProvider.getInstance(this);
    cameraProviderFuture.addListener(() -> {
        try {
            ProcessCameraProvider cameraProvider = cameraProviderFuture.get();

            Preview preview = new Preview.Builder().build();
            preview.setSurfaceProvider(previewView.getSurfaceProvider());

            // Create a ResolutionSelector with target resolution 1920x1080
            ResolutionSelector resolutionSelector = new ResolutionSelector.Builder()
                    .setResolutionStrategy(new ResolutionStrategy(
                            new Size(1920, 1080),
                            ResolutionStrategy.FALLBACK_RULE_CLOSEST_HIGHER_THEN_LOWER))
                    .build();

            ImageAnalysis imageAnalyzer = new ImageAnalysis.Builder()
                    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                    // Set target resolution to 1920x1080 for better image quality
                    .setResolutionSelector(resolutionSelector)
                    .setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_YUV_420_888)
                    .build();

            imageAnalyzer.setAnalyzer(cameraExecutor, new MRZAnalyzer());

            CameraSelector cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA;

            try {
                cameraProvider.unbindAll();
                cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageAnalyzer);
            } catch (Exception exc) {
                Log.e(TAG, "Use case binding failed", exc);
            }
        } catch (ExecutionException | InterruptedException e) {
            Log.e(TAG, "Error starting camera", e);
        }
    }, ContextCompat.getMainExecutor(this));
}
Setting the target resolution to 1920x1080 improves image quality for MRZ recognition, document cropping, and face extraction.
Step 4: Implement MRZ Scanning
Create the MRZAnalyzer class to process camera frames with the Dynamsoft MRZ Scanner SDK:
private class MRZAnalyzer implements ImageAnalysis.Analyzer {
    @Override
    public void analyze(@NonNull ImageProxy imageProxy) {
        if (isProcessing) {
            imageProxy.close();
            return;
        }
        isProcessing = true;

        try {
            // Convert ImageProxy to Bitmap
            Bitmap bitmap = imageProxyToBitmap(imageProxy);
            if (bitmap != null) {
                // Store the bitmap for potential face detection
                mLastProcessedBitmap = bitmap;
                mLastDocumentBitmap = detectAndCropDocument(bitmap);

                // Process with Dynamsoft SDK
                CapturedResult capturedResult = mRouter.capture(bitmap, mCurrentTemplate);
                CapturedResultItem[] items = capturedResult.getItems();

                for (CapturedResultItem item : items) {
                    if (item instanceof ParsedResultItem) {
                        ParsedResultItem parsedItem = (ParsedResultItem) item;
                        if (isValidMRZResult(parsedItem)) {
                            // Use validation strategy instead of immediate return
                            if (shouldReturnResult(parsedItem)) {
                                runOnUiThread(() -> {
                                    returnResult(parsedItem);
                                });
                                return;
                            }
                        }
                    }
                }
            }
        } catch (Exception e) {
            Log.e(TAG, "Error processing frame", e);
        } finally {
            isProcessing = false;
            imageProxy.close();
        }
    }
}
The imageProxyToBitmap method converts the camera frame to a bitmap for processing:
private Bitmap imageProxyToBitmap(ImageProxy imageProxy) {
    try {
        ImageProxy.PlaneProxy[] planes = imageProxy.getPlanes();
        ByteBuffer yBuffer = planes[0].getBuffer();
        ByteBuffer uBuffer = planes[1].getBuffer();
        ByteBuffer vBuffer = planes[2].getBuffer();

        int ySize = yBuffer.remaining();
        int uSize = uBuffer.remaining();
        int vSize = vBuffer.remaining();

        byte[] nv21 = new byte[ySize + uSize + vSize];
        yBuffer.get(nv21, 0, ySize);
        vBuffer.get(nv21, ySize, vSize);
        uBuffer.get(nv21, ySize + vSize, uSize);

        android.graphics.YuvImage yuvImage = new android.graphics.YuvImage(nv21, android.graphics.ImageFormat.NV21,
                imageProxy.getWidth(), imageProxy.getHeight(), null);
        java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
        yuvImage.compressToJpeg(new android.graphics.Rect(0, 0, imageProxy.getWidth(), imageProxy.getHeight()), 100, out);
        byte[] imageBytes = out.toByteArray();
        Bitmap bitmap = android.graphics.BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);

        // Rotate bitmap 90 degrees clockwise for portrait mode
        if (bitmap != null) {
            android.graphics.Matrix matrix = new android.graphics.Matrix();
            matrix.postRotate(90);
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
        }

        return bitmap;
    } catch (Exception e) {
        Log.e(TAG, "Error converting ImageProxy to Bitmap", e);
        return null;
    }
}
Note: In portrait mode, the camera delivers landscape-oriented frames, so we rotate the bitmap 90 degrees clockwise to get the correct orientation for MRZ scanning.
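If you need to support multiple device orientations, an optional refinement (not part of the original sample) is to read the rotation CameraX reports for each frame instead of hard-coding 90 degrees. Inside imageProxyToBitmap, the rotation block could become:

// Rotate by the amount CameraX reports for this frame instead of a fixed 90 degrees
int rotationDegrees = imageProxy.getImageInfo().getRotationDegrees();
if (bitmap != null && rotationDegrees != 0) {
    android.graphics.Matrix matrix = new android.graphics.Matrix();
    matrix.postRotate(rotationDegrees);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}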
Step 5: Validate MRZ Results Over Multiple Frames
To ensure accurate results, we implement a validation strategy that requires multiple consistent readings before accepting a result:
private boolean isValidMRZResult(ParsedResultItem item) {
    if (configuration != null && configuration.getDetectionType() == EnumDetectionType.VIN) {
        return item.getCodeType().equals("VIN");
    } else {
        // MRZ validation
        java.util.HashMap<String, String> entry = item.getParsedFields();
        return entry.get("sex") != null &&
                entry.get("issuingState") != null &&
                entry.get("nationality") != null &&
                entry.get("dateOfBirth") != null &&
                entry.get("dateOfExpiry") != null;
    }
}

private boolean shouldReturnResult(ParsedResultItem item) {
    // Create a unique identifier for this MRZ result based on key fields
    String resultKey = createResultKey(item);

    // Add this result to our recent results list
    recentResults.add(resultKey);

    // Update the count for this result
    Integer currentCount = resultCounts.get(resultKey);
    resultCounts.put(resultKey, (currentCount != null ? currentCount : 0) + 1);

    // Keep only the last VALIDATION_FRAME_COUNT results
    if (recentResults.size() > VALIDATION_FRAME_COUNT) {
        String removedResult = recentResults.remove(0);
        int count = resultCounts.get(removedResult);
        if (count <= 1) {
            resultCounts.remove(removedResult);
        } else {
            resultCounts.put(removedResult, count - 1);
        }
    }

    // Check if we have at least REQUIRED_MATCHES of the same result (counting this frame)
    Integer updatedCount = resultCounts.get(resultKey);
    int finalCount = (updatedCount != null ? updatedCount : 0);
    boolean shouldReturn = finalCount >= REQUIRED_MATCHES;

    if (shouldReturn) {
        Log.d(TAG, "MRZ validation passed: " + finalCount + " matches out of " + recentResults.size() + " frames");
    } else {
        Log.d(TAG, "MRZ validation pending: " + finalCount + " matches, need " + REQUIRED_MATCHES);
    }
    return shouldReturn;
}

private String createResultKey(ParsedResultItem item) {
    // Create a unique key based on critical MRZ fields that should remain consistent
    java.util.HashMap<String, String> entry = item.getParsedFields();
    StringBuilder keyBuilder = new StringBuilder();

    // Use document number as primary identifier
    String documentNumber = entry.get("passportNumber") != null ? entry.get("passportNumber") :
            entry.get("documentNumber") != null ? entry.get("documentNumber") :
            entry.get("longDocumentNumber");
    if (documentNumber != null) {
        keyBuilder.append(documentNumber).append("|");
    }

    // Add other critical fields
    keyBuilder.append(entry.get("dateOfBirth")).append("|");
    keyBuilder.append(entry.get("dateOfExpiry")).append("|");
    keyBuilder.append(entry.get("nationality")).append("|");
    keyBuilder.append(entry.get("issuingState"));

    return keyBuilder.toString();
}
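If the activity can be reused for several scan sessions, it also helps to clear the validation state between sessions. A small hypothetical helper, not part of the original sample:

private void resetValidationState() {
    // Clear the sliding window so stale results from a previous document don't count
    recentResults.clear();
    resultCounts.clear();
}

You could call it from onResume() or right before starting a new scan.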
Step 6: Detect and Crop the Face from the Passport Image
After successfully scanning the MRZ, use ML Kit’s Face Detection API to detect and crop the face image from the passport photo:
private void detectAndCropFace(android.content.Intent intent) {
    try {
        // Get the current frame bitmap for face detection
        Bitmap currentBitmap = getCurrentFrameBitmap();
        if (currentBitmap == null) {
            // No face image available, proceed with normal result
            setResult(RESULT_OK, intent);
            finish();
            return;
        }

        // Configure face detector for faster detection
        FaceDetectorOptions options = new FaceDetectorOptions.Builder()
                .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
                .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
                .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                .setMinFaceSize(0.05f)
                .enableTracking()
                .build();

        FaceDetector detector = FaceDetection.getClient(options);
        InputImage image = InputImage.fromBitmap(currentBitmap, 0);

        detector.process(image)
                .addOnSuccessListener(new OnSuccessListener<List<Face>>() {
                    @Override
                    public void onSuccess(List<Face> faces) {
                        if (!faces.isEmpty()) {
                            // Get the largest face (most likely the main subject)
                            Face largestFace = getLargestFace(faces);
                            Bitmap croppedFace = cropFaceFromBitmap(currentBitmap, largestFace);
                            if (croppedFace != null) {
                                // Save cropped face to cache file and pass the file path
                                String faceImagePath = saveBitmapToCache(croppedFace, "face");
                                intent.putExtra("face_image_path", faceImagePath);
                                Log.d(TAG, "Face detected and saved to: " + faceImagePath);
                            }
                        }
                        setResult(RESULT_OK, intent);
                        finish();
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        Log.e(TAG, "Face detection failed: " + e.getMessage());
                        // Proceed without face image
                        setResult(RESULT_OK, intent);
                        finish();
                    }
                });
    } catch (Exception e) {
        Log.e(TAG, "Error in face detection: " + e.getMessage());
        setResult(RESULT_OK, intent);
        finish();
    }
}

private Bitmap cropFaceFromBitmap(Bitmap originalBitmap, Face face) {
    try {
        Rect boundingBox = face.getBoundingBox();

        // Add some padding around the face
        int padding = Math.min(boundingBox.width(), boundingBox.height()) / 4;
        int left = Math.max(0, boundingBox.left - padding);
        int top = Math.max(0, boundingBox.top - padding);
        int right = Math.min(originalBitmap.getWidth(), boundingBox.right + padding);
        int bottom = Math.min(originalBitmap.getHeight(), boundingBox.bottom + padding);

        int width = right - left;
        int height = bottom - top;
        if (width > 0 && height > 0) {
            return Bitmap.createBitmap(originalBitmap, left, top, width, height);
        }
    } catch (Exception e) {
        Log.e(TAG, "Error cropping face: " + e.getMessage());
    }
    return null;
}

private String saveBitmapToCache(Bitmap bitmap, String prefix) {
    try {
        // Create a file in the cache directory
        File cacheDir = getApplicationContext().getCacheDir();
        File imageFile = new File(cacheDir, prefix + "_" + System.currentTimeMillis() + ".jpg");

        // Save the bitmap to the file
        FileOutputStream fos = new FileOutputStream(imageFile);
        // Compress with high quality to avoid artifacts
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, fos);
        fos.flush();
        fos.close();

        // Return the file path
        return imageFile.getAbsolutePath();
    } catch (Exception e) {
        Log.e(TAG, "Error saving bitmap to cache: " + e.getMessage());
        return null;
    }
}
To avoid memory issues, we store the cropped face as a file in the cache directory and pass the file path between activities, instead of passing the bitmap object directly.
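On the receiving side, MainActivity can decode the image from the path. A minimal sketch, assuming data is the result Intent delivered to MainActivity's result handler:

String faceImagePath = data.getStringExtra("face_image_path");
if (faceImagePath != null) {
    // Load the cropped face back from the cache file written by CameraXActivity
    Bitmap faceBitmap = android.graphics.BitmapFactory.decodeFile(faceImagePath);
    // e.g. show it in an ImageView or keep it for the PDF export
}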
Step 7: Detect and Rectify MRZ Documents
To detect and crop the document from a camera frame, use the foundational API from the Dynamsoft MRZ SDK with a preset template:
private Bitmap detectAndCropDocument(Bitmap originalBitmap) {
    try {
        CapturedResult capturedResult = mRouter.capture(originalBitmap, EnumPresetTemplate.PT_DETECT_AND_NORMALIZE_DOCUMENT);
        CapturedResultItem[] items = capturedResult.getItems();
        for (CapturedResultItem item : items) {
            if (item instanceof DeskewedImageResultItem) {
                DeskewedImageResultItem deskewedImageResultItem = (DeskewedImageResultItem) item;
                return deskewedImageResultItem.getImageData().toBitmap();
            }
        }
        // No document boundary found in this frame; fall back to the full frame
        Log.d(TAG, "No document detected; returning the original frame");
        return originalBitmap;
    } catch (Exception e) {
        Log.e(TAG, "Error in document detection: " + e.getMessage());
        return originalBitmap;
    }
}
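Because document rectification runs an extra capture pass, calling detectAndCropDocument() on every frame (as MRZAnalyzer does above) adds noticeable per-frame cost. An optional optimization, shown here as an assumption rather than part of the original sample, is to rectify the document only once the MRZ result has been validated:

// Inside MRZAnalyzer.analyze(), replacing the per-frame detectAndCropDocument() call:
if (shouldReturnResult(parsedItem)) {
    // Rectify the document only for the frame that produced the accepted result
    mLastDocumentBitmap = detectAndCropDocument(mLastProcessedBitmap);
    runOnUiThread(() -> returnResult(parsedItem));
    return;
}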
Step 8: Return Results to MainActivity
Use an intent to return results to the main activity, including parsed MRZ data, the face image path, and the document image path:
private void returnResult(ParsedResultItem item) {
    android.content.Intent intent = new android.content.Intent();
    intent.putExtra("status_code", 1); // Success
    intent.putExtra("doc_type", item.getCodeType());

    java.util.HashMap<String, String> entry = item.getParsedFields();

    if (item.getCodeType().equals("VIN")) {
        // Handle VIN results
        java.util.HashMap<String, String> vinData = new java.util.HashMap<>();
        vinData.put("vinString", entry.get("vinString"));
        vinData.put("wmi", entry.get("WMI"));
        vinData.put("region", entry.get("region"));
        vinData.put("vds", entry.get("VDS"));
        vinData.put("checkDigit", entry.get("checkDigit"));
        vinData.put("modelYear", entry.get("modelYear"));
        vinData.put("plantCode", entry.get("plantCode"));
        vinData.put("serialNumber", entry.get("serialNumber"));
        intent.putExtra("result", vinData);
        setResult(RESULT_OK, intent);
        finish();
    } else {
        // Handle MRZ results
        intent.putExtra("nationality", item.getFieldRawValue("nationality"));
        intent.putExtra("issuing_state", item.getFieldRawValue("issuingState"));

        String number = entry.get("passportNumber") != null ? entry.get("passportNumber") :
                entry.get("documentNumber") != null ? entry.get("documentNumber") :
                entry.get("longDocumentNumber");
        intent.putExtra("number", number);

        // Create properly formatted MRZ result data
        java.util.HashMap<String, String> resultData = new java.util.HashMap<>(entry);

        // Extract and set firstName and lastName from the parsed fields
        String primaryIdentifier = entry.get("primaryIdentifier");
        String secondaryIdentifier = entry.get("secondaryIdentifier");
        if (primaryIdentifier != null) {
            resultData.put("lastName", primaryIdentifier);
        }
        if (secondaryIdentifier != null) {
            // Secondary identifier contains all given names
            resultData.put("firstName", secondaryIdentifier.trim());
        }

        // Calculate age from date of birth if not directly available
        String dateOfBirth = entry.get("dateOfBirth");
        if (dateOfBirth != null && !dateOfBirth.isEmpty() && entry.get("age") == null) {
            try {
                // Date format is typically YYMMDD
                if (dateOfBirth.length() >= 6) {
                    int birthYear = Integer.parseInt(dateOfBirth.substring(0, 2));
                    int birthMonth = Integer.parseInt(dateOfBirth.substring(2, 4));
                    int birthDay = Integer.parseInt(dateOfBirth.substring(4, 6));

                    // Convert 2-digit year to 4-digit year
                    if (birthYear <= 30) {
                        birthYear += 2000;
                    } else {
                        birthYear += 1900;
                    }

                    // Calculate age properly considering current date
                    java.util.Calendar birthDate = java.util.Calendar.getInstance();
                    birthDate.set(birthYear, birthMonth - 1, birthDay);
                    java.util.Calendar currentDate = java.util.Calendar.getInstance();
                    int age = currentDate.get(java.util.Calendar.YEAR) - birthDate.get(java.util.Calendar.YEAR);

                    // Adjust age if birthday hasn't occurred this year yet
                    if (currentDate.get(java.util.Calendar.DAY_OF_YEAR) < birthDate.get(java.util.Calendar.DAY_OF_YEAR)) {
                        age--;
                    }
                    resultData.put("age", String.valueOf(age));
                }
            } catch (Exception e) {
                Log.e(TAG, "Error calculating age from date: " + dateOfBirth, e);
            }
        }

        // Ensure documentType is available
        if (entry.get("documentType") == null) {
            resultData.put("documentType", item.getCodeType());
        }

        intent.putExtra("result", resultData);

        // Add document image if available
        if (mLastDocumentBitmap != null) {
            String documentImagePath = saveBitmapToCache(mLastDocumentBitmap, "document");
            intent.putExtra("document_image_path", documentImagePath);
            Log.d(TAG, "Document image saved to: " + documentImagePath);
        }

        // For MRZ results, try to detect and crop face from the current frame
        detectAndCropFace(intent);
    }
}
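For reference, here is a hedged sketch of how MainActivity might unpack these extras in onActivityResult(). The article does not show this handler; the key names match those set above, and REQUEST_CODE_SCAN is the hypothetical request code from the Project Overview sketch:

@Override
protected void onActivityResult(int requestCode, int resultCode, android.content.Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE_SCAN && resultCode == RESULT_OK && data != null) {
        String docType = data.getStringExtra("doc_type");
        @SuppressWarnings("unchecked")
        java.util.HashMap<String, String> result =
                (java.util.HashMap<String, String>) data.getSerializableExtra("result");
        String documentImagePath = data.getStringExtra("document_image_path");
        String faceImagePath = data.getStringExtra("face_image_path");
        // Populate the UI and the scannedInfo list; decode the image paths as shown in Step 6
    }
}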
Step 9: Save Results to PDF and Share
Use iText to generate a PDF with the scan results, including face and document images:
private void sharePdf() {
    if (scannedInfo.isEmpty()) return;

    try {
        createPdf();
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }

    btnOpenPdf.setVisibility(View.VISIBLE); // Show Open PDF button after creation

    Uri pdfUri = FileProvider.getUriForFile(this, getPackageName() + ".provider", lastCreatedPdfFile);
    Intent shareIntent = new Intent(Intent.ACTION_SEND);
    shareIntent.setType("application/pdf");
    shareIntent.putExtra(Intent.EXTRA_STREAM, pdfUri);
    shareIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
    startActivity(Intent.createChooser(shareIntent, "Share PDF"));
}

private void openPdf() {
    try {
        createPdf();
    } catch (Exception e) {
        e.printStackTrace();
        return;
    }

    try {
        Uri pdfUri = FileProvider.getUriForFile(this, getPackageName() + ".provider", lastCreatedPdfFile);
        Intent openIntent = new Intent(Intent.ACTION_VIEW);
        openIntent.setDataAndType(pdfUri, "application/pdf");
        openIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
        openIntent.addFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
        startActivity(openIntent);
    } catch (Exception e) {
        // If no PDF app is available or there's an error, show a message
        e.printStackTrace();
        // Optionally show a toast or dialog to inform the user
    }
}

private File createPdf() throws IOException {
    String fileName = currentDocType + "_Scan_" + System.currentTimeMillis() + ".pdf";
    File pdfFile = new File(getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS), fileName);
    lastCreatedPdfFile = pdfFile; // Save the last created PDF file

    PdfWriter writer = new PdfWriter(pdfFile);
    PdfDocument pdf = new PdfDocument(writer);
    Document document = new Document(pdf);

    // Add face image if available
    if (faceBitmap != null) {
        File tempImage = new File(getCacheDir(), "face.jpg");
        FileOutputStream fos = new FileOutputStream(tempImage);
        faceBitmap.compress(Bitmap.CompressFormat.JPEG, 100, fos);
        fos.close();

        Image faceImage = new Image(ImageDataFactory.create(tempImage.getAbsolutePath()));

        // Calculate appropriate size while maintaining aspect ratio
        float pageWidth = pdf.getDefaultPageSize().getWidth() - 50; // Margin
        float imageWidth = Math.min(pageWidth, 300); // Max width 300pt
        float aspectRatio = (float) faceBitmap.getWidth() / faceBitmap.getHeight();
        float imageHeight = imageWidth / aspectRatio;

        faceImage.setWidth(imageWidth);
        faceImage.setHeight(imageHeight);

        document.add(new Paragraph("Face Image:").setBold());
        document.add(faceImage);
        document.add(new Paragraph("\n"));
    }

    // Add document image if available
    if (documentBitmap != null) {
        File tempDocImage = new File(getCacheDir(), "document.jpg");
        FileOutputStream fos = new FileOutputStream(tempDocImage);
        documentBitmap.compress(Bitmap.CompressFormat.JPEG, 100, fos);
        fos.close();

        Image docImage = new Image(ImageDataFactory.create(tempDocImage.getAbsolutePath()));

        // Calculate appropriate size while maintaining aspect ratio
        float pageWidth = pdf.getDefaultPageSize().getWidth() - 50; // Margin
        float imageWidth = Math.min(pageWidth, 450); // Max width 450pt for document
        float aspectRatio = (float) documentBitmap.getWidth() / documentBitmap.getHeight();
        float imageHeight = imageWidth / aspectRatio;

        docImage.setWidth(imageWidth);
        docImage.setHeight(imageHeight);

        document.add(new Paragraph("Document Image:").setBold());
        document.add(docImage);
        document.add(new Paragraph("\n"));
    }

    for (String info : scannedInfo) {
        document.add(new Paragraph(info).setTextAlignment(TextAlignment.LEFT));
    }

    document.close();
    return pdfFile;
}
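The scannedInfo list iterated at the end of createPdf() is populated elsewhere in MainActivity. A hypothetical sketch (populateScannedInfo is not part of the original sample) of how it could be filled from the returned MRZ fields:

private void populateScannedInfo(java.util.HashMap<String, String> result) {
    scannedInfo.clear();
    for (java.util.Map.Entry<String, String> field : result.entrySet()) {
        // One line per parsed MRZ field, e.g. "nationality: ..."
        scannedInfo.add(field.getKey() + ": " + field.getValue());
    }
}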
Step 10: Run the Android MRZ Scanner App
The MRZ scanner app is now complete, with full support for:
- Real-time MRZ recognition
- Document detection and correction
- Face cropping
- Result exporting to PDF
Source Code
https://github.com/yushulx/android-camera-barcode-mrz-document-scanner/tree/main/examples/MrzVin