<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: noor yadallee</title>
    <description>The latest articles on DEV Community by noor yadallee (@noor_y).</description>
    <link>https://dev.to/noor_y</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3924614%2Fd7b18631-225c-481c-a237-250d7626a131.jpeg</url>
      <title>DEV Community: noor yadallee</title>
      <link>https://dev.to/noor_y</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/noor_y"/>
    <language>en</language>
    <item>
      <title>Real-Time Sign Language Translation with MediaPipe, Flutter, and Gemini Nano</title>
      <dc:creator>noor yadallee</dc:creator>
      <pubDate>Sat, 16 May 2026 17:15:14 +0000</pubDate>
      <link>https://dev.to/noor_y/real-time-sign-language-translation-with-mediapipe-flutter-and-gemini-nano-21e9</link>
      <guid>https://dev.to/noor_y/real-time-sign-language-translation-with-mediapipe-flutter-and-gemini-nano-21e9</guid>
      <description>&lt;h1&gt;
  
  
  SignSpeak
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Real-Time Sign Language Translation with MediaPipe, Flutter, and Gemini Nano
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Build with AI - 2026 | Noor&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jpyri793mseq1284agc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jpyri793mseq1284agc.png" alt=" " width="800" height="1778"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;SignSpeak&lt;/strong&gt; is a mobile application designed to bridge the communication gap for the Deaf and Hard-of-Hearing community. By combining on-device computer vision with an on-device large language model, it translates American Sign Language (ASL) gestures and custom hand signs into natural, complete English sentences in real time - entirely offline, with no data sent to any server. Its defining feature is a fully customizable vocabulary: new signs, names, or even full phrase shortcuts can be added by any user without retraining from scratch.&lt;/p&gt;


&lt;h2&gt;
  
  
  1. MediaPipe: The Digital Skeleton of Gesture Recognition
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What is MediaPipe?
&lt;/h3&gt;

&lt;p&gt;MediaPipe is an open-source framework developed by Google for building multimodal, on-device machine learning pipelines. It provides production-ready solutions for tasks such as face detection, pose estimation, and - most relevant here - hand landmark detection. MediaPipe is designed to run efficiently on mobile hardware without requiring a network connection, making it a natural fit for a privacy-first application.&lt;/p&gt;

&lt;p&gt;In SignSpeak, the &lt;strong&gt;Hand Landmarker&lt;/strong&gt; solution is used. It detects and tracks the 3D coordinates of 21 individual points (landmarks) on a human hand from a single camera frame. These points cover every joint of every finger plus the wrist, providing a rich geometric representation of any hand shape or pose.&lt;/p&gt;
&lt;h3&gt;
  
  
  The 21 Hand Landmarks
&lt;/h3&gt;

&lt;p&gt;Each landmark is expressed as a normalised (x, y, z) coordinate. The x and y values are normalised to the range [0, 1] by the image width and height, and z represents depth relative to the wrist. The 21 points are distributed as follows:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Index&lt;/th&gt;
&lt;th&gt;Landmark&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;WRIST&lt;/td&gt;
&lt;td&gt;Base anchor point for all normalisation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1–4&lt;/td&gt;
&lt;td&gt;THUMB&lt;/td&gt;
&lt;td&gt;CMC → MCP → IP → TIP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5–8&lt;/td&gt;
&lt;td&gt;INDEX FINGER&lt;/td&gt;
&lt;td&gt;MCP → PIP → DIP → TIP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9–12&lt;/td&gt;
&lt;td&gt;MIDDLE FINGER&lt;/td&gt;
&lt;td&gt;MCP → PIP → DIP → TIP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13–16&lt;/td&gt;
&lt;td&gt;RING FINGER&lt;/td&gt;
&lt;td&gt;MCP → PIP → DIP → TIP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;17–20&lt;/td&gt;
&lt;td&gt;PINKY FINGER&lt;/td&gt;
&lt;td&gt;MCP → PIP → DIP → TIP&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Landmark 0 (WRIST) is the anchor for all normalisation. Landmark 9 (MIDDLE MCP) defines the span.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  How MediaPipe is Used in SignSpeak
&lt;/h3&gt;

&lt;p&gt;Since sign language frequently involves both hands, SignSpeak configures MediaPipe to track up to two hands simultaneously. For each detected hand, the (x, y, z) values of all 21 landmarks are extracted, producing a raw vector of 63 values per hand. With two hands, this gives &lt;strong&gt;126 raw values per frame&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To make the model robust to differences in hand size, camera distance, and position within the frame, these coordinates are normalised before being used as model input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wrist-relative:&lt;/strong&gt; Each coordinate is shifted by subtracting the wrist position (landmark 0), so the wrist is always at the origin (0, 0, 0).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scale-normalised:&lt;/strong&gt; Each shifted coordinate is divided by the Euclidean distance between the wrist and the middle finger MCP joint (landmark 9). This "span" acts as a scale factor, so the same sign made close to or far from the camera produces an identical feature vector.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The formula in Python is &lt;code&gt;span = np.linalg.norm(mid_mcp - wrist)&lt;/code&gt;. In Dart (Flutter), the equivalent must use &lt;code&gt;sqrt(dx*dx + dy*dy + dz*dz)&lt;/code&gt; - &lt;strong&gt;not&lt;/strong&gt; the squared distance - to match the training data exactly.&lt;/p&gt;
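&lt;p&gt;The two normalisation steps above can be sketched in a few lines of NumPy (a minimal illustration - the function name and the (21, 3) array shape are assumptions, not code from the repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def extract_hand_features(landmarks):
    # landmarks: a (21, 3) array of raw MediaPipe (x, y, z) values.
    # Function name and shape are illustrative, not from the repo.
    landmarks = np.asarray(landmarks, dtype=np.float32)
    wrist = landmarks[0]                                 # landmark 0
    shifted = landmarks - wrist                          # wrist-relative
    span = np.linalg.norm(landmarks[9] - wrist) + 1e-6   # middle MCP span
    return (shifted / span).flatten()                    # 63 values per hand
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running the same sign through this function at different camera distances yields the same 63-value vector, which is exactly the invariance the model relies on.&lt;/p&gt;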

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Critical: Feature Extraction Must Match Exactly&lt;/strong&gt;&lt;br&gt;
The single most common cause of poor accuracy when moving from Python to Flutter is a mismatch in normalisation. The Python training pipeline uses &lt;code&gt;np.linalg.norm&lt;/code&gt; (Euclidean distance, i.e. square root). The Flutter inference code must use the identical formula. Using squared distance instead will produce a completely different scale and render the model unreliable.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  2. Gemini Nano: Bringing Intelligence to the Edge
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What is Gemini Nano?
&lt;/h3&gt;

&lt;p&gt;Gemini Nano is the smallest and most efficient model in Google's Gemini family, purpose-built for on-device inference. Unlike cloud-based LLMs, Gemini Nano runs entirely on the device's hardware using Android's AICore runtime, requiring no internet connection and sending no user data off-device. It is available on supported devices including the Pixel 8 series and Samsung Galaxy S24 series.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why an LLM for Sign Language?
&lt;/h3&gt;

&lt;p&gt;ASL has a grammatical structure distinct from spoken English. Signers routinely omit articles (a, the), auxiliary verbs (is, are, was), and tense markers, so a direct transcription of detected signs would produce telegraphic output such as "WATER NEED" or "NAME MY". Gemini Nano's role in SignSpeak is to act as an interpreter: it receives the raw sign tokens and a context-aware prompt, and returns a fluent English sentence (LLMs are, after all, very good next-word predictors). The prompt is designed with few-shot examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a sign language interpreter. Convert ASL sign tokens into
natural fluent English sentences.

WATER NEED       → I need some water please.
NAME MY NOOR     → My name is Noor.
HELP ME PLEASE   → Could you please help me?

Output ONLY the final sentence. No explanation. Under 15 words.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to Install and Enable Gemini Nano on Android
&lt;/h3&gt;

&lt;p&gt;Gemini Nano is accessed via the AICore system service. Enabling it requires the following steps on a supported device:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update AICore:&lt;/strong&gt; Open the Google Play Store and search for "AICore". Ensure it is updated to the latest version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Developer Options:&lt;/strong&gt; Go to Settings → About Phone and tap Build Number seven times until Developer Mode is enabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Gemini Nano:&lt;/strong&gt; In Settings → System → Developer Options, scroll to find "Gemini Nano" or search for "AICore Settings". Toggle on "Enable Gemini Nano" and "Enable On-Device Model".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wait for model download:&lt;/strong&gt; The on-device model downloads silently in the background. This can take 10–15 minutes on Wi-Fi. The device must be charging and connected to Wi-Fi. The model is not available until this download completes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify availability:&lt;/strong&gt; Connect the device via USB and run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;adb shell cmd aicore status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should confirm the model is downloaded and AICore is running. In SignSpeak, Gemini Nano is accessed via a Flutter &lt;code&gt;MethodChannel&lt;/code&gt; that bridges to the Android-native AICore SDK. If the on-device model is unavailable, the app transparently falls back to the Gemini Flash API (cloud) or a rule-based sentence assembler.&lt;/p&gt;
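&lt;p&gt;As a concrete picture of that last rule-based fallback, here is a minimal sketch in Python (the phrase table and function name are invented for illustration; the app's actual rules are not shown in this article):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal sketch of a rule-based sentence assembler fallback.
# PHRASES and assemble_sentence are illustrative names.
PHRASES = {
    frozenset({"water", "need"}): "I need some water.",
    frozenset({"help", "me", "please"}): "Could you please help me?",
}

def assemble_sentence(tokens):
    # Look up a known phrase; otherwise fall back to a
    # capitalised telegraphic sentence.
    key = frozenset(t.lower() for t in tokens)
    if key in PHRASES:
        return PHRASES[key]
    return " ".join(tokens).capitalize() + "."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Such a lookup obviously cannot generalise the way an LLM does, which is why it sits at the bottom of the fallback chain.&lt;/p&gt;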




&lt;h2&gt;
  
  
  3. The Python Pipeline: From Camera to Trained Model
&lt;/h2&gt;

&lt;p&gt;The gesture recognition model is built using a three-stage Python pipeline located in the &lt;code&gt;custom_model/&lt;/code&gt; directory. All three scripts share the same MediaPipe feature extraction logic to ensure consistency between training data and inference.&lt;/p&gt;




&lt;h3&gt;
  
  
  collect.py - The Data Gatherer
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Purpose
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;collect.py&lt;/code&gt; uses your PC webcam and MediaPipe to capture hand landmark data for each sign you want to recognise. Instead of recording raw video or images, it records only the 126 normalised landmark coordinates per frame - this is both compact and already in the exact format the model needs.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Use
&lt;/h4&gt;

&lt;p&gt;Install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mediapipe opencv-python numpy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the script from the &lt;code&gt;custom_model/&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python collect.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The webcam window opens showing a live hand skeleton overlay. For each sign in the &lt;code&gt;SIGNS&lt;/code&gt; list:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get into position with your hand(s) in the correct gesture.&lt;/li&gt;
&lt;li&gt;Press &lt;strong&gt;SPACE&lt;/strong&gt;. A 5-second countdown begins - use this time to settle into the pose.&lt;/li&gt;
&lt;li&gt;The script automatically captures 10 frames at 0.5-second intervals. A white border flashes on each capture. The terminal prints each shot as &lt;code&gt;OK&lt;/code&gt; or &lt;code&gt;MISSED&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;When you have enough samples for a sign, press &lt;strong&gt;N&lt;/strong&gt; to advance to the next one.&lt;/li&gt;
&lt;li&gt;Press &lt;strong&gt;Q&lt;/strong&gt; at any time to quit and save all collected data so far.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;Two-handed signs:&lt;/strong&gt; Signs marked as &lt;code&gt;TWO_HANDED&lt;/code&gt; (e.g. WELCOME, HELP, MORE) display a purple &lt;code&gt;[TWO-HANDED]&lt;/code&gt; label. The "Hands:" counter in the top-right turns green only when the required number of hands are detected. For two-handed signs, ensure both hands are fully visible in the frame before pressing SPACE.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Customising the Sign Vocabulary
&lt;/h4&gt;

&lt;p&gt;The list of signs to collect is defined at the top of the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;SIGNS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;none&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# always required as the negative class
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;noor&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;# a custom sign - any gesture you define
&lt;/span&gt;    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;welcome&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# add more here...
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;'none'&lt;/code&gt; must always be first. It is the negative class - the gesture the model predicts when no recognisable sign is being made. Every other sign can be any word, name, or phrase you choose. The gesture itself is entirely up to you: pick any distinct, static hand shape for each token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbk7qoycbijotxopijo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbk7qoycbijotxopijo2.png" alt=" " width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Adding New Signs Without Recollecting Everything
&lt;/h4&gt;

&lt;p&gt;Link to github for trying out the gesture recognition: &lt;a href="https://github.com/Y-Noor/bwai-2026-custom-model-mediapipe" rel="noopener noreferrer"&gt;https://github.com/Y-Noor/bwai-2026-custom-model-mediapipe&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most practical features of this pipeline is the ability to add new signs incrementally. Because the training data is stored as a plain CSV file, you can append rows for new signs without discarding the data you have already collected.&lt;/p&gt;

&lt;p&gt;To add a new sign to an existing dataset:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Add the new sign to the SIGNS list&lt;/strong&gt; in &lt;code&gt;collect.py&lt;/code&gt;. Place it at the end, after all existing signs. Do not reorder or remove any existing entries - the label index (position in the list) must stay the same.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set the starting index:&lt;/strong&gt; At the bottom of the script, set &lt;code&gt;sign_idx&lt;/code&gt; to the index of your new sign so collection jumps straight to it without re-running existing signs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run collect.py:&lt;/strong&gt; New rows will be appended to the existing &lt;code&gt;data/landmarks.csv&lt;/code&gt;. All previous sign data is preserved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update labels.txt:&lt;/strong&gt; The script automatically rewrites &lt;code&gt;data/labels.txt&lt;/code&gt; with the full updated list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrain:&lt;/strong&gt; Run &lt;code&gt;python train.py&lt;/code&gt;. It reads the updated CSV, detects all present classes, and trains a new model with the expanded vocabulary.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;Why you must retrain even for one new sign:&lt;/strong&gt; The output layer of the MLP is sized to the number of classes. Adding a new sign changes that count, so a new model must be exported. However, all the previously collected data remains valid and is automatically included - you only spend time collecting samples for the new sign.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Files Generated by collect.py
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;data/landmarks.csv&lt;/code&gt; - one row per captured frame, containing a label index followed by 126 normalised landmark coordinates.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;data/labels.txt&lt;/code&gt; - the ordered list of sign names, one per line. The line number (zero-indexed) is the label ID used in the CSV.&lt;/li&gt;
&lt;/ul&gt;
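
&lt;p&gt;Assuming the CSV layout just described, a quick sanity check of how many frames were collected per sign might look like this (an illustrative helper, not part of the pipeline):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import csv, io
from collections import Counter

def count_samples(csv_text, labels_text):
    # Each CSV row: label index followed by 126 floats (layout above).
    # In practice csv_text would be read from data/landmarks.csv and
    # labels_text from data/labels.txt.
    labels = labels_text.strip().splitlines()
    counts = Counter()
    for row in csv.reader(io.StringIO(csv_text)):
        if row:
            counts[labels[int(row[0])]] += 1
    return dict(counts)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Uneven counts here are an early warning: under-sampled classes tend to dominate the confusion matrix later.&lt;/p&gt;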




&lt;h3&gt;
  
  
  train.py - The Brain Builder
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Purpose
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;train.py&lt;/code&gt; reads the collected CSV, trains a Multi-Layer Perceptron (MLP) neural network using TensorFlow, and exports the result as a quantised TFLite model ready for mobile deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Use
&lt;/h4&gt;

&lt;p&gt;Install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;tensorflow scikit-learn pandas matplotlib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python train.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Training typically completes in 1–3 minutes. EarlyStopping monitors validation accuracy and halts training once the model stops improving, so you do not need to manually tune the number of epochs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Model Architecture
&lt;/h4&gt;

&lt;p&gt;The classifier is a four-layer MLP. The input is the 126-value normalised feature vector; the output is a softmax probability distribution over the number of sign classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input:   126 features
         ↓
Dense 128  + BatchNorm + Dropout(0.3)
         ↓
Dense 64   + BatchNorm + Dropout(0.2)
         ↓
Dense 32
         ↓
Dense N    + Softmax         (N = number of sign classes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BatchNorm stabilises training between layers. Dropout prevents the network from over-relying on any single feature, which matters because many signs share similar finger configurations. The final Softmax layer outputs a confidence percentage for each class.&lt;/p&gt;
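
&lt;p&gt;That confidence percentage comes directly from the softmax. For reference, a numerically stable softmax can be computed as follows (a generic NumPy illustration, not code from &lt;code&gt;train.py&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability;
    # the result is a probability distribution that sums to 1.
    z = np.asarray(logits, dtype=np.float64)
    e = np.exp(z - z.max())
    return e / e.sum()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A prediction is then the argmax of this distribution, with the corresponding probability reported as the confidence.&lt;/p&gt;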

&lt;h4&gt;
  
  
  Files Generated by train.py
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;exported_model/gesture_classifier.tflite&lt;/code&gt; - the trained model, quantised to INT8 (~45 KB). This is the file loaded by Flutter.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exported_model/gesture_labels.txt&lt;/code&gt; - the label list corresponding to the model's output indices. Must be copied to Flutter alongside the &lt;code&gt;.tflite&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exported_model/training_curves.png&lt;/code&gt; - accuracy and loss curves across epochs for both training and validation sets.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exported_model/confusion_matrix.png&lt;/code&gt; - a grid showing true vs. predicted labels across the test set. Signs on the diagonal are correctly classified; off-diagonal entries reveal confusions between visually similar signs.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  test_model.py - The Real-World Check
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe8czl3a01tv9nnafwub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe8czl3a01tv9nnafwub.png" alt=" " width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Purpose
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;test_model.py&lt;/code&gt; lets you verify the trained model against your own live webcam before deploying to a phone. It is the fastest way to catch issues with specific signs before going through the Flutter build cycle.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to Use
&lt;/h4&gt;

&lt;p&gt;Install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;tflite-runtime opencv-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python test_model.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script opens your webcam, runs hand landmark detection, and performs inference on every frame. The display shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The predicted sign label in large text at the bottom of the frame, greyed out when confidence is below the 75% threshold.&lt;/li&gt;
&lt;li&gt;A confidence percentage next to the label.&lt;/li&gt;
&lt;li&gt;A ranked bar chart of the top 3 predictions on the right side - useful for diagnosing which signs are being confused with each other.&lt;/li&gt;
&lt;li&gt;A live FPS counter and hand count in the top bar.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Target:&lt;/strong&gt; Aim for consistent confident predictions (≥85%) on all signs before moving the model to Flutter. If two signs are frequently confused, check the confusion matrix from &lt;code&gt;train.py&lt;/code&gt; and collect additional samples for those specific classes.&lt;/p&gt;
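
&lt;p&gt;One way to quantify which pair of signs is most confused is to tally the off-diagonal (true, predicted) pairs from a held-out evaluation (a generic sketch; &lt;code&gt;train.py&lt;/code&gt;'s own confusion-matrix code is not reproduced here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from collections import Counter

def most_confused_pair(true_labels, pred_labels):
    # Count every misclassification and return the most frequent
    # (true, predicted) pair, or None if there were no errors.
    errors = Counter(
        (t, p) for t, p in zip(true_labels, pred_labels) if t != p
    )
    return errors.most_common(1)[0] if errors else None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Whichever pair tops this list is the first candidate for collecting additional samples.&lt;/p&gt;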




&lt;h2&gt;
  
  
  4. Importing the Trained Model into Flutter
&lt;/h2&gt;

&lt;p&gt;Once &lt;code&gt;test_model.py&lt;/code&gt; confirms the model performs well, it can be integrated into the Flutter app in five steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Copy Model Files into the Project
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;copy exported_model&lt;span class="se"&gt;\g&lt;/span&gt;esture_classifier.tflite  signspeak&lt;span class="se"&gt;\a&lt;/span&gt;ssets&lt;span class="se"&gt;\m&lt;/span&gt;odels&lt;span class="se"&gt;\&lt;/span&gt;
copy exported_model&lt;span class="se"&gt;\g&lt;/span&gt;esture_labels.txt         signspeak&lt;span class="se"&gt;\a&lt;/span&gt;ssets&lt;span class="se"&gt;\m&lt;/span&gt;odels&lt;span class="se"&gt;\&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Register Assets in pubspec.yaml
&lt;/h3&gt;

&lt;p&gt;Both files must be declared in &lt;code&gt;pubspec.yaml&lt;/code&gt; so the Flutter build system bundles them into the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;flutter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;uses-material-design&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;assets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;assets/models/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Add the tflite_flutter Dependency
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tflite_flutter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;^0.12.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run &lt;code&gt;flutter pub get&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Load and Run the Model
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;TfliteGestureClassifier&lt;/code&gt; service handles loading, feature extraction, and inference:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Initialise once at app startup&lt;/span&gt;
&lt;span class="n"&gt;_interpreter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;Interpreter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fromAsset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;'assets/models/gesture_classifier.tflite'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nl"&gt;options:&lt;/span&gt; &lt;span class="n"&gt;InterpreterOptions&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;labelData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;rootBundle&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;loadString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;'assets/models/gesture_labels.txt'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;labelData&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At inference time, the raw MediaPipe landmarks are normalised and passed to the interpreter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Build the 126-feature input vector&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;extractFeatures&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;allHands&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="p"&gt;()];&lt;/span&gt;  &lt;span class="c1"&gt;// [[f0..f125]]&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;List&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="kt"&gt;List&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;filled&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_labels&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="n"&gt;_interpreter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Map index to label&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;maxIdx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;indexWhere&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_labels&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;maxIdx&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;conf&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="n"&gt;maxIdx&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Normalise Identically to collect.py
&lt;/h3&gt;

&lt;p&gt;The feature extraction in Dart must be numerically identical to the Python normalisation. The critical rule is to use the square root (Euclidean norm), not the squared distance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Dart - must match collect.py exactly&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;dx&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;x&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristX&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;dy&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;y&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;dz&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;z&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristZ&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;dx&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;dy&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;dy&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;dz&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;dz&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// dart:math sqrt&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;x&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristX&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;y&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lm&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;z&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;wristZ&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
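&lt;p&gt;For comparison, here is a minimal Python sketch of the matching normalisation on the collect.py side. The function name &lt;code&gt;normalise_hand&lt;/code&gt; and the landmark tuple layout are illustrative assumptions for this post, not the literal collect.py source:&lt;/p&gt;

```python
import math

def normalise_hand(lm, offset=0, n_points=21):
    """Wrist-relative, span-scaled landmark features, mirroring the Dart loop above.

    lm is a list of (x, y, z) tuples; lm[0] is the wrist, lm[9] the
    middle-finger MCP (the MediaPipe Hands landmark indices).
    """
    wx, wy, wz = lm[0]
    dx = lm[9][0] - wx
    dy = lm[9][1] - wy
    dz = lm[9][2] - wz
    # Euclidean norm (NOT the squared distance), plus epsilon to avoid /0
    span = math.sqrt(dx * dx + dy * dy + dz * dz) + 1e-6

    out = [0.0] * (offset + n_points * 3)
    for i in range(n_points):
        out[offset + i * 3 + 0] = (lm[i][0] - wx) / span
        out[offset + i * 3 + 1] = (lm[i][1] - wy) / span
        out[offset + i * 3 + 2] = (lm[i][2] - wz) / span
    return out
```

&lt;p&gt;Running both implementations on the same recorded landmarks and diffing the outputs is a quick way to catch a normalisation mismatch before it shows up as silent accuracy loss.&lt;/p&gt;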



&lt;blockquote&gt;
&lt;p&gt;✅ &lt;strong&gt;The app automatically detects the model.&lt;/strong&gt; If &lt;code&gt;gesture_classifier.tflite&lt;/code&gt; is present in &lt;code&gt;assets/models/&lt;/code&gt;, &lt;code&gt;TfliteGestureClassifier&lt;/code&gt; loads it on startup and the app switches from the built-in rule-based classifier to the trained model. The debug console prints &lt;code&gt;TFLite classifier loaded: N classes&lt;/code&gt; to confirm. If the file is absent, the app falls back to geometry rules silently.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;SignSpeak - Build with AI 2026&lt;/em&gt;&lt;br&gt;
&lt;em&gt;MediaPipe · Gemini Nano · TensorFlow Lite · Flutter&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>gemini</category>
      <category>mediapipe</category>
      <category>flutter</category>
    </item>
  </channel>
</rss>
