A document scanner app is a software application that leverages your device's camera to capture images of physical documents and transform them into digital formats. Typically, these apps can scan documents, photos, receipts, business cards, and more.
Document scanner apps are useful in a variety of industries and scenarios, including but not limited to education, business, finance and healthcare. You may be familiar with some well-known document scanner apps like Adobe Scan, CamScanner, and Microsoft Office Lens. The purpose of this article is to guide you in developing your own cross-platform document scanner app using Flutter and Dynamsoft Document Normalizer. With a single codebase, you can build a document scanner app that runs on Windows, Android, iOS and Web platforms.
Try Online Demo with Your Mobile Devices
https://yushulx.me/flutter-document-scanner/
Exploring the Differences Between Flutter MRZ Scanners, Barcode Scanners, and Document Scanners
In previous projects, we've developed both a Flutter MRZ scanner app and a Flutter barcode scanner app. The Flutter plugins used in these three apps are largely identical. The main distinction is the SDK employed for each vision task: flutter_ocr_sdk for MRZ detection, flutter_barcode_sdk for barcode scanning, and flutter_document_scan_sdk for document edge detection and perspective correction.
Most of the UI code is shared among the three apps. For instance, they all utilize a tab bar for navigation between the home page, history page, and about page. The camera control logic remains consistent across all apps. The primary distinction is that the document scanner app includes an editing page to adjust the perspective of the scanned document. Once the quadrilateral adjustment is completed, the document is cropped and rectified on a separate saving page. Various filters such as grayscale, black and white, and color are available to enhance image quality.
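All three apps navigate between the home, history, and about pages with a tab bar. The full TabPage is not listed in this article; the following is only a minimal sketch of such a navigation shell, where the widget structure, icons, and placeholder pages are assumptions rather than the app's actual code:

import 'package:flutter/material.dart';

/// A minimal tab-bar shell: three placeholder pages selected by a bottom TabBar.
class TabPage extends StatelessWidget {
  const TabPage({super.key});

  @override
  Widget build(BuildContext context) {
    return DefaultTabController(
      length: 3,
      child: Scaffold(
        body: const TabBarView(
          children: <Widget>[
            Center(child: Text('Home')),
            Center(child: Text('History')),
            Center(child: Text('About')),
          ],
        ),
        bottomNavigationBar: const TabBar(
          tabs: <Widget>[
            Tab(icon: Icon(Icons.home), text: 'Home'),
            Tab(icon: Icon(Icons.history), text: 'History'),
            Tab(icon: Icon(Icons.info), text: 'About'),
          ],
        ),
      ),
    );
  }
}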
Required Flutter Plugins
- flutter_document_scan_sdk: Wraps Dynamsoft Document Normalizer SDK for Flutter, supporting Windows, Linux, Android, iOS and Web. A valid license key is required to use the plugin.
- image_picker: Provides an easy way to pick an image/video from the image library, or to take a picture/video with the camera.
- shared_preferences: Wraps platform-specific persistent storage for simple data (NSUserDefaults on iOS and macOS, SharedPreferences on Android, etc.).
- camera: Provides APIs for identifying cameras, displaying a preview, and capturing images or video from a camera.
- share_plus: Shares content via the platform share UI.
- url_launcher: Launches URLs, making it easy to open a web page.
- flutter_exif_rotation: Rotates images automatically based on the EXIF orientation on Android and iOS.
Getting Started with the App
- Create a new Flutter project with the command:

flutter create documentscanner
- Add the dependencies to pubspec.yaml:

dependencies:
  flutter_document_scan_sdk: ^1.0.2
  image_picker: ^1.0.0
  shared_preferences: ^2.1.1
  camera:
    git:
      url: https://github.com/yushulx/flutter_camera.git
  camera_windows:
    git:
      url: https://github.com/yushulx/flutter_camera_windows.git
  share_plus: ^7.0.2
  url_launcher: ^6.1.11
  flutter_exif_rotation: ^0.5.1
- Create a global.dart file to store global variables:

import 'package:flutter_document_scan_sdk/flutter_document_scan_sdk.dart';

FlutterDocumentScanSdk docScanner = FlutterDocumentScanSdk();
bool isLicenseValid = false;

Future<int> initDocumentSDK() async {
  int? ret = await docScanner.init(
      'DLS2eyJoYW5kc2hha2VDb2RlIjoiMjAwMDAxLTE2NDk4Mjk3OTI2MzUiLCJvcmdhbml6YXRpb25JRCI6IjIwMDAwMSIsInNlc3Npb25QYXNzd29yZCI6IndTcGR6Vm05WDJrcEQ5YUoifQ==');
  if (ret == 0) isLicenseValid = true;
  await docScanner.setParameters(Template.color);
  return ret ?? -1;
}
- Replace the contents of lib/main.dart with the following code:

import 'package:flutter/material.dart';
import 'tab_page.dart';
import 'dart:async';
import 'global.dart';

Future<void> main() async {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  Future<int> loadData() async {
    return await initDocumentSDK();
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Dynamsoft Document Scanner',
      theme: ThemeData(
        scaffoldBackgroundColor: colorMainTheme,
      ),
      home: FutureBuilder<int>(
        future: loadData(),
        builder: (BuildContext context, AsyncSnapshot<int> snapshot) {
          if (!snapshot.hasData) {
            return const CircularProgressIndicator();
          }

          Future.microtask(() {
            Navigator.pushReplacement(context,
                MaterialPageRoute(builder: (context) => const TabPage()));
          });
          return Container();
        },
      ),
    );
  }
}
Implementing the Features of the Document Scanner App
Check out the UI design guidelines:
In the following paragraphs, we will delve into the features of the document scanner app. Our focus will be on utilizing the edge detection and perspective correction API, implementing the edge editing page, and creating the document saving page.
Document Edge Detection and Perspective Correction
The document scanner app offers two ways to scan a document: real-time scanning via the camera stream or detection from a selected image file. When the camera is used, the video streaming buffer is passed to the detectBuffer() method. For an image file, you can either call the detectFile() method directly, or first decode the image into a buffer with decodeImageFromList() and then invoke detectBuffer(). Both methods return a Future<List<DocumentResult>?> object.
// Image file
final ImagePicker picker = ImagePicker();
XFile? photo = await picker.pickImage(source: ImageSource.gallery);
if (photo == null) return;

// detectFile()
var results = await docScanner.detectFile(photo.path);

// detectBuffer()
Uint8List fileBytes = await photo.readAsBytes();
ui.Image image = await decodeImageFromList(fileBytes);
ByteData? byteData =
    await image.toByteData(format: ui.ImageByteFormat.rawRgba);
if (byteData != null) {
  List<DocumentResult>? results = await docScanner.detectBuffer(
      byteData.buffer.asUint8List(),
      image.width,
      image.height,
      byteData.lengthInBytes ~/ image.height,
      ImagePixelFormat.IPF_ARGB_8888.index);
}

// Video streaming buffer
Future<void> processDocument(List<Uint8List> bytes, int width, int height,
    List<int> strides, int format, List<int> pixelStrides) async {
  var results = await docScanner.detectBuffer(
      bytes[0], width, height, strides[0], format);
  ...
}
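For real-time scanning, the frames come from the camera plugin's startImageStream() callback. Below is a minimal sketch of feeding CameraImage frames into the processDocument() function shown above; the startDocumentStream helper name, the controller parameter, and the IPF_ARGB_8888 placeholder are assumptions, and the pixel format must be adjusted to the actual camera frame format:

import 'dart:typed_data';

import 'package:camera/camera.dart';
import 'package:flutter_document_scan_sdk/flutter_document_scan_sdk.dart';

/// Streams live camera frames into processDocument().
Future<void> startDocumentStream(CameraController controller) async {
  await controller.startImageStream((CameraImage availableImage) async {
    // Collect the bytes and strides of each plane (one plane for BGRA8888
    // frames, three planes for YUV420 frames).
    List<Uint8List> planeBytes =
        availableImage.planes.map((Plane p) => p.bytes).toList();
    List<int> strides =
        availableImage.planes.map((Plane p) => p.bytesPerRow).toList();
    List<int> pixelStrides =
        availableImage.planes.map((Plane p) => p.bytesPerPixel ?? 1).toList();

    // IPF_ARGB_8888 is only a placeholder here; the correct value depends on
    // availableImage.format.group (e.g. BGRA8888 on iOS, YUV420 on Android).
    await processDocument(planeBytes, availableImage.width,
        availableImage.height, strides, ImagePixelFormat.IPF_ARGB_8888.index,
        pixelStrides);
  });
}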
The DocumentResult object contains the following properties:
- List<Offset> points: The coordinates of the quadrilateral.
- int confidence: The confidence of the result.
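If multiple quadrilaterals are detected, you may want to keep only the candidate with the highest confidence before editing or normalization. A minimal sketch, assuming results holds a non-empty List<DocumentResult> returned by detectFile() or detectBuffer():

// Pick the detection result with the highest confidence.
DocumentResult best = results.reduce(
    (DocumentResult a, DocumentResult b) => a.confidence >= b.confidence ? a : b);
print('Best candidate: ${best.points}, confidence: ${best.confidence}');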
Based on the points property, the document can be cropped and rectified by calling the normalizeFile() or normalizeBuffer() method.
// File
var normalizedImage = await docScanner.normalizeFile(file, points);

// Buffer
Future<void> handleDocument(Uint8List bytes, int width, int height, int stride,
    int format, dynamic points) async {
  var normalizedImage = await docScanner.normalizeBuffer(
      bytes, width, height, stride, format, points);
}
Both methods return a NormalizedImage object that represents the rectified image:
class NormalizedImage {
  /// Image data.
  final Uint8List data;

  /// Image width.
  final int width;

  /// Image height.
  final int height;

  NormalizedImage(this.data, this.width, this.height);
}
To display the image, the Uint8List data needs to be decoded into a ui.Image object via the decodeImageFromPixels() method:
decodeImageFromPixels(normalizedImage.data, normalizedImage.width,
    normalizedImage.height, pixelFormat, (ui.Image img) {
});
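Because decodeImageFromPixels() is callback-based, it is convenient to wrap it with a Completer so the decoded ui.Image can be awaited. A minimal sketch, assuming the raw data is RGBA and dart:ui is imported as ui:

import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;

/// Wraps the callback-based decodeImageFromPixels() in a Future.
Future<ui.Image> rawRgbaToImage(Uint8List pixels, int width, int height) {
  final Completer<ui.Image> completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(pixels, width, height, ui.PixelFormat.rgba8888,
      (ui.Image img) => completer.complete(img));
  return completer.future;
}

// Usage:
// ui.Image displayImage = await rawRgbaToImage(
//     normalizedImage.data, normalizedImage.width, normalizedImage.height);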
Edge Editing Page
Auto-detection is not always accurate, which is why the app provides an edge editing page for adjusting the quadrilateral manually. The page contains an OverlayPainter widget to draw the image and the quadrilateral, a GestureDetector widget to handle touch events and redraw the quadrilateral, and a Stack widget to lay out the image and the overlay.
class OverlayPainter extends CustomPainter {
ui.Image? image;
List<DocumentResult>? results;
OverlayPainter(this.image, this.results);
@override
void paint(Canvas canvas, Size size) {
final paint = Paint()
..color = colorOrange
..strokeWidth = 10
..style = PaintingStyle.stroke;
if (image != null) {
canvas.drawImage(image!, Offset.zero, paint);
}
Paint circlePaint = Paint()
..color = colorOrange
..strokeWidth = 20
..style = PaintingStyle.fill;
if (results == null) return;
for (var result in results!) {
canvas.drawLine(result.points[0], result.points[1], paint);
canvas.drawLine(result.points[1], result.points[2], paint);
canvas.drawLine(result.points[2], result.points[3], paint);
canvas.drawLine(result.points[3], result.points[0], paint);
if (image != null) {
double radius = 20;
canvas.drawCircle(result.points[0], radius, circlePaint);
canvas.drawCircle(result.points[1], radius, circlePaint);
canvas.drawCircle(result.points[2], radius, circlePaint);
canvas.drawCircle(result.points[3], radius, circlePaint);
}
}
}
@override
bool shouldRepaint(OverlayPainter oldDelegate) => true;
}
Widget createCustomImage() {
var image = widget.documentData.image;
    var detectionResults = widget.documentData.documentResults!;
return FittedBox(
fit: BoxFit.contain,
child: SizedBox(
width: image!.width.toDouble(),
height: image.height.toDouble(),
child: GestureDetector(
onPanUpdate: (details) {
if (details.localPosition.dx < 0 ||
details.localPosition.dy < 0 ||
details.localPosition.dx > image.width ||
details.localPosition.dy > image.height) {
return;
}
for (int i = 0; i < detectionResults.length; i++) {
for (int j = 0; j < detectionResults[i].points.length; j++) {
if ((detectionResults[i].points[j] - details.localPosition)
.distance <
100) {
bool isCollided = false;
for (int index = 1; index < 4; index++) {
                        int otherIndex = (j + index) % 4;
if ((detectionResults[i].points[otherIndex] -
details.localPosition)
.distance <
20) {
isCollided = true;
return;
}
}
setState(() {
if (!isCollided) {
detectionResults[i].points[j] = details.localPosition;
}
});
}
}
}
},
child: CustomPaint(
              painter: OverlayPainter(image, detectionResults),
),
)));
}
body: Stack(
children: <Widget>[
Positioned.fill(
child: createCustomImage(),
),
const Positioned(
left: 122,
right: 122,
bottom: 28,
child: Text('Powered by Dynamsoft',
textAlign: TextAlign.center,
style: TextStyle(
fontSize: 12,
color: Colors.white,
)),
),
],
),
Document Rectification and Saving Page
The document rectification and saving page contains an OverlayPainter widget to draw the rectified document image, a Radio group to switch between the binary, grayscale, and color filters, and an ElevatedButton widget to save the image.
Widget createCustomImage(BuildContext context, ui.Image image,
List<DocumentResult> detectionResults) {
return FittedBox(
fit: BoxFit.contain,
child: SizedBox(
width: image.width.toDouble(),
height: image.height.toDouble(),
child: CustomPaint(
painter: OverlayPainter(image, detectionResults),
)));
}
<Widget>[
Theme(
data: Theme.of(context).copyWith(
unselectedWidgetColor:
Colors.white, // Color when unselected
),
child: Radio(
activeColor: colorOrange,
value: 'binary',
groupValue: _pixelFormat,
onChanged: (String? value) async {
setState(() {
_pixelFormat = value!;
});
await docScanner.setParameters(Template.binary);
if (widget.documentData.documentResults!.isNotEmpty) {
await normalizeBuffer(widget.documentData.image!,
widget.documentData.documentResults![0].points);
}
},
),
),
const Text('Binary', style: TextStyle(color: Colors.white)),
Theme(
data: Theme.of(context).copyWith(
unselectedWidgetColor:
Colors.white, // Color when unselected
),
child: Radio(
activeColor: colorOrange,
value: 'grayscale',
groupValue: _pixelFormat,
onChanged: (String? value) async {
setState(() {
_pixelFormat = value!;
});
await docScanner.setParameters(Template.grayscale);
if (widget.documentData.documentResults!.isNotEmpty) {
await normalizeBuffer(widget.documentData.image!,
widget.documentData.documentResults![0].points);
}
},
)),
const Text('Gray', style: TextStyle(color: Colors.white)),
Theme(
data: Theme.of(context).copyWith(
unselectedWidgetColor:
Colors.white, // Color when unselected
),
child: Radio(
activeColor: colorOrange,
value: 'color',
groupValue: _pixelFormat,
onChanged: (String? value) async {
setState(() {
_pixelFormat = value!;
});
await docScanner.setParameters(Template.color);
if (widget.documentData.documentResults!.isNotEmpty) {
await normalizeBuffer(widget.documentData.image!,
widget.documentData.documentResults![0].points);
}
},
)),
const Text('Color', style: TextStyle(color: Colors.white)),
]
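The normalizeBuffer() method invoked in the radio callbacks is a page-level helper that is not listed in the excerpt. A minimal sketch, under the assumption that it converts the ui.Image to a raw RGBA buffer, calls the SDK, and stores the decoded result in a normalizedUiImage state field (both the helper body and the field name are assumptions):

Future<void> normalizeBuffer(ui.Image image, dynamic points) async {
  // Convert the ui.Image into a raw RGBA buffer for the SDK.
  ByteData? byteData =
      await image.toByteData(format: ui.ImageByteFormat.rawRgba);
  if (byteData == null) return;

  NormalizedImage? normalizedImage = await docScanner.normalizeBuffer(
      byteData.buffer.asUint8List(),
      image.width,
      image.height,
      byteData.lengthInBytes ~/ image.height,
      ImagePixelFormat.IPF_ARGB_8888.index,
      points);
  if (normalizedImage == null) return;

  // Decode the rectified pixels back into a ui.Image for display.
  ui.decodeImageFromPixels(normalizedImage.data, normalizedImage.width,
      normalizedImage.height, ui.PixelFormat.rgba8888, (ui.Image img) {
    setState(() {
      normalizedUiImage = img;
    });
  });
}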
ElevatedButton(
onPressed: () async {
String imageString =
await convertImagetoPngBase64(normalizedUiImage!);
final SharedPreferences prefs =
await SharedPreferences.getInstance();
var results = prefs.getStringList('document_data');
List<String> imageList = <String>[];
imageList.add(imageString);
if (results == null) {
prefs.setStringList('document_data', imageList);
} else {
results.addAll(imageList);
prefs.setStringList('document_data', results);
}
close();
},
style: ButtonStyle(
backgroundColor: MaterialStateProperty.all(colorMainTheme)),
child: Text('Save',
style: TextStyle(color: colorOrange, fontSize: 22)),
)
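The convertImagetoPngBase64() helper called by the save button is not shown above. A minimal sketch of one way to implement it, encoding the ui.Image as PNG bytes and then as a base64 string suitable for shared_preferences:

import 'dart:convert';
import 'dart:typed_data';
import 'dart:ui' as ui;

/// Encodes a ui.Image as PNG bytes and returns them as a base64 string.
Future<String> convertImagetoPngBase64(ui.Image image) async {
  final ByteData? byteData =
      await image.toByteData(format: ui.ImageByteFormat.png);
  if (byteData == null) return '';
  return base64Encode(byteData.buffer.asUint8List());
}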
Known Issues on Flutter Web Platform
- Size Limitation of Web Local Storage: The rectified images are converted to base64 strings and saved with shared_preferences. When the total size of the images you're trying to save exceeds the size limitation of web local storage (typically around 5MB), it can lead to issues such as the app crashing or unexpected behavior.
- Image Codec Error: The decodeImageFromPixels() method may not work as expected on the web compared to other platforms.
Screenshot comparison: web platform vs. Windows platform.