Gaurav Pandey

Building RecallMe: An On-Device AI Companion for Dementia Care Using Flutter & Kiro

How I built a privacy-first mobile AI app that helps people with dementia recognize faces and recall memories entirely on-device


The Problem

Dementia affects over 55 million people worldwide. One of the most heartbreaking challenges is when loved ones can no longer recognize family members or recall cherished memories. Traditional solutions often rely on cloud-based AI, raising privacy concerns and requiring constant internet connectivity.

What if we could build an AI assistant that runs entirely on-device, respects privacy, and helps dementia patients maintain their connections with loved ones?

Introducing RecallMe

RecallMe is a Flutter-based mobile application that combines:

  • On-device face recognition using Google ML Kit and custom embedding algorithms
  • AI-powered memory conversations with context-aware chat
  • Voice interaction for natural, accessible communication
  • Smart routine management with notifications and progress tracking

All processing happens locally on the device: no cloud, no privacy concerns, fully offline-capable.

The Tech Stack

Frontend & Framework

  • Flutter 3.7.0+ with Dart
  • Provider for state management
  • Hive for local NoSQL database
  • Material Design with custom dementia-friendly theme

AI & ML

  • Google ML Kit for face detection (Arm-optimized TensorFlow Lite)
  • Custom 256-dimensional embedding algorithm combining:
    • Color histograms (64D)
    • Spatial grid features (64D)
    • Gradient features (6D)
    • LBP texture analysis (16D)
    • Quadrant averages (4D)
  • Azure OpenAI (GPT-4) for memory conversations (can be replaced with on-device model)
  • Cosine similarity for face matching
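To make the matching step concrete, here is a language-agnostic sketch of it in Python (the real implementation runs in Kotlin): the per-feature blocks are concatenated into one embedding, and a new face is compared against stored embeddings with cosine similarity. The 0.45 threshold mirrors the one used in the app; the helper names are illustrative, not from the codebase.

```python
import math

MATCH_THRESHOLD = 0.45  # same cutoff RecallMe uses for a positive match

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def best_match(new_embedding, stored):
    """Return (name, similarity) for the closest stored face,
    or None if nothing clears the threshold."""
    best = max(
        ((name, cosine_similarity(new_embedding, emb))
         for name, emb in stored.items()),
        key=lambda pair: pair[1],
        default=None,
    )
    if best is None or best[1] <= MATCH_THRESHOLD:
        return None
    return best
```

Because embeddings are compared by angle rather than magnitude, the similarity score doubles as the confidence figure shown to the user.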

Native Integration

  • Kotlin for Android native code (face recognition, TTS)
  • Method Channels for Flutter-Kotlin communication
  • Native TextToSpeech engine

Voice & Media

  • speech_to_text package for voice input
  • camera package for face recognition
  • image package for image processing

How It Works: The Face Recognition Pipeline

The face recognition system is the heart of RecallMe. Here's how it works:

// 1. Face Detection (using ML Kit via a platform channel)
Future<List<FaceDetection>> detectFaces(Uint8List imageBytes) async {
  final List<dynamic> result = await _channel.invokeMethod('detectFaces', {
    'imageBytes': imageBytes,
  });
  // Platform channels return untyped maps, so cast before deserializing.
  return result
      .map((face) => FaceDetection.fromMap(Map<String, dynamic>.from(face)))
      .toList();
}

// 2. Generate Embedding (custom algorithm in Kotlin)
// Combines multiple feature extraction techniques
// and returns a 256-dimensional vector.

// 3. Match Against Stored Embeddings
final double similarity = calculateCosineSimilarity(
  newEmbedding,
  storedEmbedding,
);
if (similarity > 0.45) {
  // Match found!
}

The custom embedding algorithm runs entirely in Kotlin, optimized for Arm architecture using NEON SIMD instructions for vectorized operations.

Building with Kiro: How AI-Powered Development Accelerated the Process

One of the most exciting aspects of this project was using Kiro, an AI-powered IDE, to accelerate development. Here's how Kiro transformed my workflow:

Steering Documents: Setting the Foundation

I created three steering documents in .kiro/steering/ that gave Kiro deep context about the project:

  1. product.md: Defined the dementia-friendly design principles, target users, and core features
  2. structure.md: Established the layered architecture (Presentation → State → Business Logic → Data) and code organization patterns
  3. tech.md: Specified the technology stack, dependencies, and build commands

These documents ensured Kiro understood:

  • The warm color palette (soft yellows, creams, oranges) for dementia-friendly design
  • The file naming conventions (snake_case, _screen.dart suffix)
  • The Provider pattern for state management
  • The Hive database structure

Vibe Coding: Rapid Feature Development

With steering documents in place, I used vibe coding to build features through natural conversation:

Example Conversation:

Me: "Build a routine management screen with add/edit/delete functionality, notification scheduling, and completion tracking"

Kiro: Generated a complete implementation following our architecture patterns, using Provider for state, Hive for persistence, and our warm color theme

This approach let me:

  • Build 15+ screens with consistent patterns
  • Implement complex features like timezone-aware notifications
  • Maintain architectural consistency across the entire codebase
  • Iterate quickly while keeping code quality high

The Impact

What would have taken weeks of development was completed in days. Kiro's understanding of our architecture meant every generated component fit seamlessly into the existing codebase.

Key Features in Action

1. Face Recognition

Users can point the camera at someone and instantly see:

  • "This is Sarah (85% confidence)"
  • All processing happens on-device
  • Works offline, respects privacy

2. Memory Recall

Tap a photo, and the AI assistant:

  • Remembers previous conversations (last 5 messages)
  • Includes photo metadata (name, year, person, memory word)
  • Responds in short, simple sentences suited to dementia patients
  • Speaks the response using text-to-speech
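The context assembly described above can be sketched as follows. The field names and prompt wording here are assumptions for illustration, not the app's actual prompt:

```python
def build_prompt(photo, history, max_turns=5):
    """Assemble a short, context-aware prompt from photo metadata
    and the most recent conversation turns."""
    meta = (
        f"Photo: {photo['name']} ({photo['year']}), "
        f"person: {photo['person']}, memory word: {photo['memory_word']}"
    )
    recent = history[-max_turns:]  # keep only the last 5 messages
    lines = [meta, "Answer in short, simple sentences."]
    lines += [f"{turn['role']}: {turn['text']}" for turn in recent]
    return "\n".join(lines)
```

Capping history at five turns keeps the prompt small enough for fast responses while still giving the model continuity across a conversation.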

3. Routine Management

  • Schedule daily routines with precise times
  • Get notifications with timezone awareness
  • Track completion across Home, Schedule, and Daily Tasks screens
  • View weekly progress reports
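Timezone-aware scheduling ultimately reduces to computing the next local occurrence of a routine's time. A minimal sketch of that calculation follows; the real app schedules through Flutter notification plugins, and this function is purely illustrative:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_occurrence(hour, minute, tz_name, now=None):
    """Next datetime, in the user's timezone, that a daily routine fires."""
    tz = ZoneInfo(tz_name)
    now = now or datetime.now(tz)
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:  # today's slot already passed: schedule tomorrow
        candidate += timedelta(days=1)
    return candidate
```

Anchoring the computation in the user's zone (rather than UTC) keeps an "8:30 AM medication" reminder at 8:30 AM local time even across DST changes or travel.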

4. Voice Interaction

  • Speak questions naturally
  • Get voice responses
  • No typing required—perfect for accessibility

Privacy-First Architecture

All data stays on the device:

  • Face embeddings stored locally in Hive database
  • Photos stored in app's private directory
  • No cloud synchronization
  • Encrypted storage for sensitive data (API keys, PIN)

This is crucial for healthcare applications where privacy is paramount.
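For sensitive values like the caregiver PIN, local storage can hold a salted key derivation instead of the secret itself. The sketch below uses PBKDF2 as one reasonable choice; the article only says the app uses encrypted storage, so this specific scheme is an assumption:

```python
import hashlib
import hmac
import os

def hash_pin(pin, salt=None):
    """Derive a storable digest of the PIN (never persist the raw PIN)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin, salt, stored_digest):
    """Constant-time check of an entered PIN against the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```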

Arm Architecture Optimization

RecallMe is optimized for Arm-based devices:

  • NEON SIMD for vectorized histogram calculations
  • Big.LITTLE aware task scheduling
  • Efficient memory patterns for image processing
  • GPU acceleration when available (via ML Kit)

The app runs smoothly on mid-range Arm devices, making it accessible to a wide range of users.

The Development Journey

Building RecallMe was a learning experience in:

  • On-device ML: Balancing accuracy with performance
  • Accessibility: Designing for users with cognitive impairments
  • Privacy: Building AI that respects user data
  • Integration: Combining multiple AI technologies seamlessly

Open Source & Community Impact

RecallMe is open source, with the goal of:

  • Helping other developers learn on-device ML techniques
  • Providing dementia-friendly design patterns
  • Demonstrating Arm optimization best practices
  • Inspiring more privacy-first AI applications

Try It Yourself

The project is available on GitHub with:

  • Complete source code
  • Detailed setup instructions
  • Architecture documentation
  • Arm device build guide

git clone https://github.com/Gaurav-derry/Recall
cd Recall
flutter pub get
flutter run

What's Next?

Future enhancements could include:

  • On-device LLM replacement for Azure OpenAI
  • Multi-language support
  • Caregiver dashboard with analytics
  • Integration with health monitoring devices

Conclusion

RecallMe demonstrates that sophisticated AI can run entirely on mobile devices while respecting privacy and accessibility. By combining Flutter's cross-platform capabilities, on-device ML, and Kiro's AI-powered development workflow, we can build applications that make a real difference in people's lives.

The technology is here. The tools are accessible. The impact is real.


Technologies: Flutter, Dart, Kotlin, Google ML Kit, Azure OpenAI, Hive, Provider

Development Tools: Kiro (AI-powered IDE), Android Studio, VS Code
