How to Implement AI-Powered Image Recognition in Flutter 4.0 Apps with TensorFlow Lite 2.18 and AWS Rekognition 5.0
AI-powered image recognition enables apps to identify objects, scenes, and text in images, unlocking use cases like product scanning, content moderation, and accessibility tools. Flutter 4.0’s stable support for the latest Dart features, combined with TensorFlow Lite 2.18 for on-device inference and AWS Rekognition 5.0 for cloud-based analysis, lets you build fast, scalable recognition features. This guide walks through the full implementation.
Prerequisites
- Flutter 4.0 SDK and Dart 3.2+ installed
- Android Studio or VS Code with Flutter plugins
- AWS account with IAM user permissions for Rekognition
- Pre-trained TensorFlow Lite 2.18 model (e.g., MobileNet V2 for image classification)
- AWS CLI configured with valid credentials
Step 1: Set Up Flutter Project and Dependencies
Create a new Flutter 4.0 project, then add the following dependencies to pubspec.yaml:
dependencies:
  flutter:
    sdk: flutter
  tflite_flutter: ^0.10.4      # Compatible with TF Lite 2.18
  aws_rekognition: ^5.0.0      # AWS Rekognition 5.0 Dart SDK
  image_picker: ^1.0.4         # For capturing/selecting images
  permission_handler: ^11.1.0  # For runtime permissions
  aws_common: ^0.4.0           # AWS shared utilities

flutter:
  assets:
    - assets/models/           # Add your TF Lite model here
Run flutter pub get to install dependencies, then place your TensorFlow Lite 2.18 model (e.g., mobilenet_v2_1.0_224.tflite) and labels file in assets/models/.
Step 2: Configure AWS Rekognition 5.0
Create an IAM user in your AWS console with the AmazonRekognitionFullAccess policy, then generate access and secret keys. Add a configuration file aws_config.dart to your project:
class AwsConfig {
  static const String accessKey = "YOUR_AWS_ACCESS_KEY";
  static const String secretKey = "YOUR_AWS_SECRET_KEY";
  static const String region = "us-east-1"; // Your preferred AWS region
}
Note: Never hardcode credentials in production. Use environment variables or AWS Cognito for secure credential management.
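The safer pattern is to read credentials from the environment at startup and fail fast if they are absent. A minimal sketch of that pattern (shown in Python for brevity; `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` are the standard AWS environment variable names):

```python
import os

def load_aws_credentials():
    """Read AWS credentials from the environment instead of source code.

    Raises if a required variable is missing, so a misconfigured build
    fails fast rather than shipping with empty credentials.
    """
    creds = {
        "access_key": os.environ.get("AWS_ACCESS_KEY_ID"),
        "secret_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "region": os.environ.get("AWS_REGION", "us-east-1"),  # default region
    }
    missing = [k for k, v in creds.items() if v is None]
    if missing:
        raise RuntimeError(f"Missing AWS credentials: {', '.join(missing)}")
    return creds
```

In Flutter the equivalent is compile-time injection via `--dart-define` and `String.fromEnvironment`, or fetching temporary credentials from AWS Cognito at runtime.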
Step 3: Implement On-Device Recognition with TensorFlow Lite 2.18
Load the TF Lite model and labels, then process images for inference:
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

class TFLiteService {
  late Interpreter _interpreter;
  late List<String> _labels;

  Future<void> loadModel() async {
    _interpreter = await Interpreter.fromAsset('assets/models/mobilenet_v2_1.0_224.tflite');
    _labels = (await rootBundle.loadString('assets/models/labels.txt')).split('\n');
  }

  Future<List<Map<String, dynamic>>> classifyImage(File image) async {
    // Resize image to 224x224 (MobileNet input size)
    img.Image? inputImage = img.decodeImage(await image.readAsBytes());
    img.Image resizedImage = img.copyResize(inputImage!, width: 224, height: 224);

    // Convert to input tensor (1x224x224x3, float32), normalizing channels to [0, 1]
    var input = List.filled(1 * 224 * 224 * 3, 0.0).reshape([1, 224, 224, 3]);
    for (int y = 0; y < 224; y++) {
      for (int x = 0; x < 224; x++) {
        img.Pixel pixel = resizedImage.getPixel(x, y);
        input[0][y][x][0] = pixel.r / 255.0;
        input[0][y][x][1] = pixel.g / 255.0;
        input[0][y][x][2] = pixel.b / 255.0;
      }
    }

    // Run inference
    var output = List.filled(1 * _labels.length, 0.0).reshape([1, _labels.length]);
    _interpreter.run(input, output);

    // Parse results (top 3 labels above the confidence threshold)
    List<Map<String, dynamic>> results = [];
    for (int i = 0; i < _labels.length; i++) {
      if (output[0][i] > 0.1) { // Confidence threshold
        results.add({'label': _labels[i], 'confidence': output[0][i]});
      }
    }
    results.sort((a, b) => b['confidence'].compareTo(a['confidence']));
    return results.take(3).toList();
  }
}
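To make the normalization and result-parsing steps concrete, here is the same logic distilled into plain Python (the labels and scores below are toy values, not real model output):

```python
def preprocess_pixel(r, g, b):
    """MobileNet-style normalization: scale 0-255 channel values into [0, 1]."""
    return (r / 255.0, g / 255.0, b / 255.0)

def top_k_labels(scores, labels, k=3, threshold=0.1):
    """Mirror of classifyImage's parsing: keep scores above the
    confidence threshold, sort descending, return the top k."""
    results = [
        {"label": labels[i], "confidence": s}
        for i, s in enumerate(scores)
        if s > threshold
    ]
    results.sort(key=lambda r: r["confidence"], reverse=True)
    return results[:k]
```

Filtering before sorting keeps the sort cheap when the model emits hundreds of near-zero scores, which is the common case for a 1000-class MobileNet head.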
Step 4: Implement Cloud Recognition with AWS Rekognition 5.0
Use the AWS Rekognition 5.0 SDK to call the detectLabels API for cloud-based analysis:
import 'dart:typed_data';

import 'package:aws_common/aws_common.dart';
import 'package:aws_rekognition/aws_rekognition.dart';

import 'aws_config.dart'; // The AwsConfig class from Step 2

class RekognitionService {
  late RekognitionClient _client;

  RekognitionService() {
    _client = RekognitionClient(
      credentials: AwsClientCredentials(
        accessKey: AwsConfig.accessKey,
        secretKey: AwsConfig.secretKey,
      ),
      region: AwsConfig.region,
    );
  }

  Future<List<Label>> detectLabels(Uint8List imageBytes) async {
    final response = await _client.detectLabels(
      DetectLabelsRequest(
        image: Image(bytes: imageBytes),
        maxLabels: 10,
        minConfidence: 70,
      ),
    );
    return response.labels ?? [];
  }
}
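Conceptually, the two request parameters act as a server-side filter: labels below `minConfidence` are dropped, and at most `maxLabels` of the remainder are returned. A toy sketch of that behavior (the label dicts are illustrative stand-ins for Rekognition's response objects):

```python
def filter_labels(labels, max_labels=10, min_confidence=70.0):
    """Approximate what maxLabels/minConfidence do to a DetectLabels
    response: drop labels below the confidence floor, then cap the
    count, keeping the highest-confidence labels."""
    kept = [l for l in labels if l["confidence"] >= min_confidence]
    kept.sort(key=lambda l: l["confidence"], reverse=True)
    return kept[:max_labels]
```

Tuning these on the request saves bandwidth and parsing work compared with filtering a full response client-side.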
Step 5: Build the App UI
Create a simple UI to pick images, toggle between on-device and cloud recognition, and display results:
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:aws_rekognition/aws_rekognition.dart' show Label;
// Also import the TFLiteService and RekognitionService files from Steps 3-4.

class ImageRecognitionScreen extends StatefulWidget {
  @override
  _ImageRecognitionScreenState createState() => _ImageRecognitionScreenState();
}

class _ImageRecognitionScreenState extends State<ImageRecognitionScreen> {
  File? _selectedImage;
  List<Map<String, dynamic>> _tfLiteResults = [];
  List<Label> _rekognitionResults = [];
  bool _isLoading = false;
  bool _useOnDevice = true; // Toggle between TF Lite and Rekognition
  final ImagePicker _picker = ImagePicker();
  final TFLiteService _tfLiteService = TFLiteService();
  final RekognitionService _rekognitionService = RekognitionService();

  @override
  void initState() {
    super.initState();
    _tfLiteService.loadModel();
  }

  Future<void> _pickImage() async {
    final XFile? image = await _picker.pickImage(source: ImageSource.gallery);
    if (image != null) {
      setState(() => _selectedImage = File(image.path));
      _runRecognition();
    }
  }

  Future<void> _runRecognition() async {
    if (_selectedImage == null) return;
    setState(() => _isLoading = true);
    try {
      if (_useOnDevice) {
        _tfLiteResults = await _tfLiteService.classifyImage(_selectedImage!);
        _rekognitionResults = [];
      } else {
        Uint8List bytes = await _selectedImage!.readAsBytes();
        _rekognitionResults = await _rekognitionService.detectLabels(bytes);
        _tfLiteResults = [];
      }
    } catch (e) {
      debugPrint("Error: $e");
    } finally {
      setState(() => _isLoading = false);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("AI Image Recognition")),
      body: Padding(
        padding: EdgeInsets.all(16),
        child: Column(
          children: [
            // Image picker button
            ElevatedButton(
              onPressed: _pickImage,
              child: Text("Pick Image from Gallery"),
            ),
            SizedBox(height: 16),
            // Toggle between on-device and cloud recognition
            SwitchListTile(
              title: Text(_useOnDevice ? "On-Device (TF Lite)" : "Cloud (AWS Rekognition)"),
              value: _useOnDevice,
              onChanged: (val) => setState(() => _useOnDevice = val),
            ),
            SizedBox(height: 16),
            // Selected image preview
            if (_selectedImage != null) Image.file(_selectedImage!, height: 200),
            SizedBox(height: 16),
            // Loading indicator
            if (_isLoading) CircularProgressIndicator(),
            // Results
            if (_tfLiteResults.isNotEmpty) ...[
              Text("TF Lite Results:", style: TextStyle(fontWeight: FontWeight.bold)),
              ..._tfLiteResults.map((r) => Text("${r['label']}: ${(r['confidence'] * 100).toStringAsFixed(2)}%")),
            ],
            if (_rekognitionResults.isNotEmpty) ...[
              Text("AWS Rekognition Results:", style: TextStyle(fontWeight: FontWeight.bold)),
              ..._rekognitionResults.map((l) => Text("${l.name}: ${l.confidence?.toStringAsFixed(2)}%")),
            ],
          ],
        ),
      ),
    );
  }
}
Step 6: Handle Runtime Permissions
Add permission requests for Android and iOS to access the camera and gallery. Update android/app/src/main/AndroidManifest.xml with:

  <uses-permission android:name="android.permission.CAMERA" />
  <uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
  <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" android:maxSdkVersion="32" />
For iOS, add to ios/Runner/Info.plist:

  <key>NSCameraUsageDescription</key>
  <string>Need camera access to capture images</string>
  <key>NSPhotoLibraryUsageDescription</key>
  <string>Need photo library access to select images</string>
Use permission_handler to request permissions at runtime before picking images.
Step 7: Test the App
Run the app on an emulator or physical device with flutter run. Test both on-device and cloud recognition modes with sample images. Ensure TF Lite returns results without internet, and Rekognition returns detailed labels with internet connectivity.
Best Practices
- Use TensorFlow Lite 2.18 for offline, low-latency use cases; switch to AWS Rekognition 5.0 for higher accuracy and advanced features like face detection or text extraction.
- Optimize TF Lite models with post-training quantization to reduce size and improve inference speed.
- Never hardcode AWS credentials: use AWS Cognito or environment variables for production apps.
- Implement error handling for network failures (Rekognition) and model loading issues (TF Lite).
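On the quantization bullet: TF Lite's post-training int8 quantization maps each float value to an 8-bit integer with an affine scheme, q = round(x / scale) + zero_point, which is why quantized models are roughly 4x smaller than their float32 originals. A toy sketch of the arithmetic (the scale and zero-point values below are illustrative, not taken from a real model):

```python
def quantize(values, scale, zero_point):
    """Affine int8 quantization: q = round(x / scale) + zero_point,
    clamped to the int8 range [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Recover approximate float values: x ~ (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in qvalues]
```

The round-trip loses at most half a scale step per value, which is the accuracy cost you trade for the size and speed win.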
Conclusion
By combining Flutter 4.0’s cross-platform capabilities with TensorFlow Lite 2.18 and AWS Rekognition 5.0, you can build powerful AI image recognition features that work online and offline. Extend this implementation with custom models, real-time camera inference, or additional Rekognition APIs like face comparison or content moderation.