Sherin Joseph Roy

I Turned My Old Android Phone Into an L2 Autonomous Driving System (Flutter + C++)

Modern cars ship with expensive L2 driver assistance systems. Most of the heavy lifting for these systems is just computer vision running on a small chip behind the dashboard. Guess what else has a camera, a GPU, and a decent SoC? Your phone.

I decided to build a shadow mode ADAS that runs entirely in my pocket. I call it Zyra ADAS. It requires absolutely no cloud connectivity. There is no network latency. It just watches the road and predicts what a real autonomous system would do in real time.

Here is how I built a highly optimized, lock-free perception engine using Flutter, C++, and Vulkan.

The Problem with Cloud AI for Safety

Sending video frames to a cloud server for processing is fine for basic image recognition. It is completely useless when you are moving at 80 km/h and need to know if a car is braking ahead of you. You need on-device processing.

The Architecture: Bypassing the Framework

The app is built with Flutter, but you cannot afford Flutter's MethodChannel serialization costs when processing video at high frame rates: every message is encoded, copied across the platform-thread boundary, and decoded again, which adds far too much overhead per frame.

I bypassed it completely using dart:ffi.

The Flutter UI only handles the camera stream and drawing the overlays. The actual hot path lives in a single C++ shared object (libzyra_perception.so). Raw YUV camera frames go straight from the hardware into the native engine through zero-copy memory pointers.
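To make the handoff concrete, here is a minimal sketch of what such an FFI entry point can look like on the C++ side. The function name, parameters, and return convention are my assumptions for illustration, not the actual Zyra API; the point is that Dart passes raw plane pointers and nothing is serialized or copied.

```cpp
#include <cstdint>

// Hypothetical FFI entry point: Dart hands over raw pointers to the
// camera's YUV420 planes. No serialization, no copies -- the engine
// reads the hardware buffers directly.
extern "C" int zyra_process_frame(const uint8_t* y_plane,
                                  const uint8_t* u_plane,
                                  const uint8_t* v_plane,
                                  int width, int height,
                                  int y_stride, int uv_stride) {
    if (!y_plane || !u_plane || !v_plane || width <= 0 || height <= 0) {
        return -1;  // reject invalid input instead of crashing the UI isolate
    }
    // ... hand the planes to the perception pipeline
    //     (YUV->RGB, letterboxing, inference, NMS) ...
    return 0;  // success
}
```

On the Dart side this is looked up once with `DynamicLibrary.open('libzyra_perception.so')` and called with `Pointer<Uint8>` arguments, so the per-frame cost is a single native call.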

C++ owns the entire heavy lifting process:

  • Converting YUV to RGB and letterboxing
  • Running YOLOv8n inference for object detection
  • Per-class Non-Maximum Suppression (NMS)
  • Canny and HoughLinesP for classical lane tracking

Dart just reads the final struct of bounding boxes and lane coordinates.

Real-Time Performance on Mobile Silicon

To make YOLOv8n run smoothly on a phone, I used NCNN with Vulkan compute. NCNN is incredible for mobile deployment. It uses FP16 packed storage and Winograd convolutions to squeeze every drop of performance out of the mobile GPU.
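The NCNN setup described above boils down to a few option flags. This is a sketch of typical NCNN configuration; the model filenames are assumptions based on a standard YOLOv8n export, not taken from the Zyra source:

```cpp
#include <net.h>  // ncnn

// Minimal NCNN configuration sketch: Vulkan compute plus FP16 storage,
// as described above. File names are illustrative.
ncnn::Net load_yolo() {
    ncnn::Net yolo;
    yolo.opt.use_vulkan_compute = true;  // run inference on the mobile GPU
    yolo.opt.use_fp16_packed = true;     // FP16 packed storage
    yolo.opt.use_fp16_storage = true;
    yolo.load_param("yolov8n.param");
    yolo.load_model("yolov8n.bin");
    return yolo;
}
```

Winograd convolution is enabled by default in NCNN where the layer shape allows it, so there is no extra flag to flip for that.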

The results speak for themselves. On my daily-driver Realme smartphone with a Snapdragon 662 (a mid-range chip from 2020), I am hitting about 105 ms end-to-end inference latency. That is roughly 10 FPS on older hardware. On modern flagship chips like the Snapdragon 8 Gen 2, it easily hits a sustained 30 FPS.

I designed the engine with a bounded queue. If inference falls behind, the engine explicitly drops the older frame. There is no silent buffering and no latency creep: the overlay always reflects the road as it is right now.
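The degenerate case of that policy is a depth-one "latest frame wins" mailbox. This is a mutex-based sketch of the idea under a one-producer, one-consumer assumption (the actual engine is described as lock-free, so treat this as the concept, not the implementation):

```cpp
#include <mutex>
#include <optional>
#include <utility>

// "Latest frame wins" mailbox: the camera thread always pushes; if the
// inference thread has not consumed the previous frame yet, that frame
// is overwritten. Latency can never accumulate past one frame.
template <typename Frame>
class LatestFrameQueue {
public:
    void push(Frame f) {
        std::lock_guard<std::mutex> lock(mu_);
        slot_ = std::move(f);  // drop the stale frame, keep the newest
    }

    std::optional<Frame> pop() {
        std::lock_guard<std::mutex> lock(mu_);
        std::optional<Frame> out = std::move(slot_);
        slot_.reset();  // empty until the producer pushes again
        return out;
    }

private:
    std::mutex mu_;
    std::optional<Frame> slot_;
};
```

A lock-free variant would typically swap an atomic pointer between two or three pre-allocated frame buffers instead of taking a mutex, but the drop-oldest behavior is the same.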

What is Next?

I am currently prepping this software to mount inside a Tata Tigor EV as a data collection rig. The next step is fusing the phone's IMU and GPS data into the perception pipeline to build out proper vehicle dynamics and forward collision warnings.

You can dig into the C++ engine and the FFI bridge here:

Sherin-SEF-AI / Zyra-ADAS

Android L2 ADAS shadow-mode system. On-device YOLOv8n + classical lane tracking with Vulkan-accelerated NCNN inference. Flutter UI + C++ NDK engine.


Have you ever tried bridging heavy C++ computer vision pipelines directly to Flutter? Let me know your architecture choices in the comments.
