Privacy in healthcare isn't just a feature; it's a fundamental right. When dealing with sensitive medical data like skin images, users are often hesitant to upload photos to the cloud. Today, we're building Derm-Scan, a mobile application that performs skin lesion classification and segmentation entirely on-device.
By leveraging privacy-preserving AI, Flutter, and the specialized Med-SAM (Medical Segment Anything Model), we can achieve high-accuracy screening without a single byte of biometric data leaving the user's smartphone. In this tutorial, we will explore how to bridge high-performance computer vision models with cross-platform mobile development using PyTorch Mobile and MediaPipe.
## The Architecture: Edge AI Workflow
To ensure a smooth user experience, we can't just throw a heavy model into a Flutter app. We need an optimized pipeline that handles image preprocessing, segmentation, and classification locally.
```mermaid
graph TD
    A[User Takes Photo] --> B[Flutter Image Picker]
    B --> C{MediaPipe Preprocessing}
    C -->|Detect Region of Interest| D[Med-SAM Segmentation]
    D --> E[Feature Extraction]
    E --> F[Classifier - ONNX/PyTorch Mobile]
    F --> G[Local Storage & History Comparison]
    G --> H[UI Display: Risk Score & Mask]
```
## Prerequisites

To follow along, you'll need:
- Flutter SDK (3.x recommended)
- Python (for model conversion and quantization)
- Tech Stack: Flutter, MediaPipe, PyTorch Mobile, and ONNX.
## Step 1: Preparing the Med-SAM Model
Med-SAM is a specialized version of Meta's Segment Anything Model, fine-tuned for medical imaging. To run this on a mobile device, we must convert the heavy PyTorch weights into an optimized format like TorchScript or ONNX.
### Model Quantization (Python Script)
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from segment_anything import sam_model_registry

# Load the specialized Med-SAM weights
model_type = "vit_b"
checkpoint = "medsam_vit_b.pth"
medsam_model = sam_model_registry[model_type](checkpoint=checkpoint)

# Switch to evaluation mode
medsam_model.eval()

# Apply dynamic 8-bit quantization to the linear layers (~4x size reduction)
quantized_model = torch.quantization.quantize_dynamic(
    medsam_model, {torch.nn.Linear}, dtype=torch.qint8
)

# Trace the quantized model for mobile optimization
example_input = torch.rand(1, 3, 1024, 1024)
traced_script_module = torch.jit.trace(quantized_model, example_input)

optimized_model = optimize_for_mobile(traced_script_module)
optimized_model._save_for_lite_interpreter("medsam_mobile.ptl")
print("Model optimized for Flutter integration!")
```
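To sanity-check the "~4x reduction" claim, here is a quick back-of-envelope estimate in plain Python. The 91M parameter count for the SAM ViT-B encoder is an approximate figure used for illustration, not something the conversion script reports:

```python
def estimated_size_mb(num_params: int, bytes_per_param: int) -> float:
    """Rough checkpoint size: parameter count times storage width per weight."""
    return num_params * bytes_per_param / (1024 ** 2)

# SAM ViT-B has roughly 91M parameters (approximate figure).
PARAMS_VIT_B = 91_000_000

fp32_mb = estimated_size_mb(PARAMS_VIT_B, 4)  # float32: 4 bytes per weight
int8_mb = estimated_size_mb(PARAMS_VIT_B, 1)  # int8: 1 byte per weight

print(f"fp32: ~{fp32_mb:.0f} MB, int8: ~{int8_mb:.0f} MB "
      f"({fp32_mb / int8_mb:.0f}x smaller)")
```

This lines up with the 300MB+ checkpoint shrinking to well under 100MB after int8 quantization; the exact on-disk size also depends on serialization overhead and which layers are quantized.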
## Step 2: Preprocessing with MediaPipe in Flutter

Raw photos often contain background noise, so we identify a "Region of Interest" (ROI) before passing pixels to Med-SAM, which reduces the computational load on the device. The snippet below uses the `google_mlkit_object_detection` package, Google's on-device detection stack for Flutter, as a practical stand-in for a raw MediaPipe pipeline:
```dart
import 'dart:ui' show Rect;

import 'package:google_mlkit_object_detection/google_mlkit_object_detection.dart';

Future<Rect?> detectSkinPatch(InputImage inputImage) async {
  final options = ObjectDetectorOptions(
    mode: DetectionMode.single,
    classifyObjects: true,
    multipleObjects: false,
  );
  final objectDetector = ObjectDetector(options: options);

  final List<DetectedObject> objects =
      await objectDetector.processImage(inputImage);
  await objectDetector.close(); // release native resources

  // Basic logic: return the first detected patch.
  // A production filter would also inspect the classification labels.
  for (final DetectedObject detectedObject in objects) {
    return detectedObject.boundingBox;
  }
  return null;
}
```
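Before the cropped ROI is handed to Med-SAM, it usually helps to pad the detected bounding box so the model sees some surrounding skin for context. A minimal sketch of that step in plain Python; the `expand_roi` helper and the 15% margin are illustrative assumptions, not part of the ML Kit API:

```python
def expand_roi(box, img_w, img_h, margin=0.15):
    """Pad the detected bounding box by `margin` of its size on each side,
    then clamp to the image bounds."""
    left, top, right, bottom = box
    pad_w = (right - left) * margin
    pad_h = (bottom - top) * margin
    return (
        max(0, left - pad_w),
        max(0, top - pad_h),
        min(img_w, right + pad_w),
        min(img_h, bottom + pad_h),
    )

print(expand_roi((100, 100, 300, 300), 640, 480))  # → (70.0, 70.0, 330.0, 330.0)
```

The padded box is then cropped and resized to the 1024x1024 input Med-SAM expects.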
## Step 3: Running Inference with PyTorch Mobile

Once we have the ROI, we pass the data to our `medsam_mobile.ptl` model. In Flutter, we use the `pytorch_lite` package to interact with the native side.
```dart
import 'dart:io';

import 'package:pytorch_lite/pytorch_lite.dart';

class DermClassifier {
  late ModelObjectDetection _model;

  Future<void> loadModel() async {
    _model = await PytorchLite.loadObjectDetectionModel(
      "assets/models/medsam_mobile.ptl",
      8,    // number of classes
      1024, // input width
      1024, // input height
      labelPath: "assets/labels.txt",
    );
  }

  Future<List<ResultObjectDetection>> runInference(File image) async {
    return _model.getImagePrediction(
      await image.readAsBytes(),
      minimumScore: 0.5,
      iOUThreshold: 0.4,
    );
  }
}
```
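The wrapper above returns bounding-box-style results, but Med-SAM's native output is a per-pixel logit map that must be thresholded into a binary mask before display. A sketch of that post-processing step in plain Python; the `logits_to_mask` helper and the 2D-list representation are illustrative assumptions:

```python
import math

def logits_to_mask(logits, threshold=0.5):
    """Convert raw per-pixel logits to a binary mask: sigmoid, then threshold."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [[1 if sigmoid(v) > threshold else 0 for v in row] for row in logits]

mask = logits_to_mask([[-2.0, 0.3], [1.5, -0.1]])
print(mask)  # → [[0, 1], [1, 0]]
```

On-device, this loop would run over the model's output tensor rather than nested lists, but the math is the same.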
## The "Official" Way: Learning Advanced Patterns
While this tutorial covers the basics of on-device inference, building production-grade medical apps requires rigorous attention to data pipelines and model versioning.
For more advanced patterns on Edge AI optimization, model distillation techniques, and HIPAA-compliant mobile architectures, I highly recommend checking out the deep-dive articles at WellAlly Tech Blog. They provide excellent resources on scaling these "Local-First" AI solutions for real-world healthcare environments.
## Step 4: Visualizing History & Evolution
One of the best features of Derm-Scan is the Evolution Tracker. By saving the segmentation masks locally (using the `path_provider` and `sqflite` packages), users can compare images over time to see whether a lesion is growing or changing shape, a key indicator for clinical diagnosis.
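One simple way to quantify "growing or changing shape" is to compare the areas of the stored binary masks from two visits. A sketch in plain Python; the helper names and the 2D-list mask representation are illustrative assumptions:

```python
def mask_area(mask):
    """Count the segmented (1-valued) pixels in a binary mask."""
    return sum(sum(row) for row in mask)

def growth_ratio(old_mask, new_mask):
    """Relative change in lesion area between two scans (+0.5 means 50% larger)."""
    old_area, new_area = mask_area(old_mask), mask_area(new_mask)
    return (new_area - old_area) / old_area if old_area else 0.0

oct_mask = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]  # 4 lesion pixels
nov_mask = [[1, 1, 1], [1, 1, 1], [0, 0, 0]]  # 6 lesion pixels
print(f"Area change: {growth_ratio(oct_mask, nov_mask):+.0%}")  # → Area change: +50%
```

A real tracker would first align the two images (lesions are rarely photographed from the identical angle twice), but area deltas on registered masks are a reasonable starting point.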
### Local Database Structure
| ID | Timestamp | Image Path | Mask Path | Confidence Score |
|---|---|---|---|---|
| 1 | 2023-10-01 | /data/01.jpg | /data/m1.png | 0.89 |
| 2 | 2023-11-01 | /data/02.jpg | /data/m2.png | 0.92 |
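In the Flutter app this table would be created through `sqflite`, but the schema is easiest to prototype with Python's built-in `sqlite3` module. The column names below are illustrative, chosen to mirror the table above:

```python
import sqlite3

# In-memory DB for prototyping; the app would use an on-device sqflite file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE scans (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp TEXT NOT NULL,
        image_path TEXT NOT NULL,
        mask_path TEXT NOT NULL,
        confidence REAL
    )
""")
conn.executemany(
    "INSERT INTO scans (timestamp, image_path, mask_path, confidence) "
    "VALUES (?, ?, ?, ?)",
    [("2023-10-01", "/data/01.jpg", "/data/m1.png", 0.89),
     ("2023-11-01", "/data/02.jpg", "/data/m2.png", 0.92)],
)

# Pull the two most recent scans for side-by-side comparison in the UI.
rows = conn.execute(
    "SELECT timestamp, confidence FROM scans ORDER BY timestamp DESC LIMIT 2"
).fetchall()
print(rows)  # → [('2023-11-01', 0.92), ('2023-10-01', 0.89)]
```

Storing ISO-8601 timestamps as TEXT keeps the `ORDER BY` chronological without any date parsing.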
## Conclusion: The Future is Private

By combining Flutter's UI flexibility with the power of Med-SAM and PyTorch Mobile, we've built a tool that provides immediate value without compromising user privacy. The "Learning in Public" journey doesn't end here; there's always room to improve the model's latency or add more granular classification categories.
Key Takeaways:
- Quantization is key: Moving from 300MB+ models to <80MB is essential for mobile.
- Preprocessing matters: Use MediaPipe to feed the model only what it needs.
- Local First: User trust is built when data never leaves the device.
What are your thoughts on on-device AI for healthcare? Drop a comment below or share your latest Flutter AI project!