I live in a reserved forest area in Tripura, Northeast India. Here, the "Cloud" is a luxury. On 4G hotspots with 1000 ms+ latency, the AI tools the world takes for granted simply don't work. This forced me to look deeper into Efficient ML, not as a performance optimization but as a survival necessity.
The Replication Study:
Inspired by the foundational work of Bengio et al. and the MIT CSAIL Efficient AI Lab, I conducted a replication study of INT8 neural-network quantization on MobileNetV2. Laboratory benchmarks from MIT suggest that INT8 quantization maintains ~98% accuracy at 4x compression; I wanted to know: does this hold up in the wild?
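For anyone who wants to reproduce the setup, here is a minimal sketch of post-training static INT8 quantization using PyTorch's eager-mode API. The pretrained weights, backend string, and random calibration tensors are illustrative placeholders, not my exact pipeline; real calibration should use representative field images.

```python
import torch
from torch.ao.quantization import get_default_qconfig, prepare, convert
from torchvision.models.quantization import mobilenet_v2

# Quantization-ready MobileNetV2 with pretrained FP32 weights.
model = mobilenet_v2(weights="IMAGENET1K_V1").eval()
model.fuse_model()  # fold Conv+BN+ReLU so they quantize as one op

# Pick the INT8 backend: "fbgemm" targets x86; use "qnnpack" on ARM phones.
model.qconfig = get_default_qconfig("fbgemm")
prepare(model, inplace=True)  # insert observers to record activation ranges

# Calibration pass: random tensors as a stand-in for real images.
with torch.inference_mode():
    for _ in range(64):
        model(torch.randn(1, 3, 224, 224))

convert(model, inplace=True)  # swap FP32 modules for INT8 kernels
```

Since each FP32 weight drops from four bytes to one, the on-disk size lands close to the 4x figure quoted above.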
The "Forest" Data vs. The "Lab" Data:
My field validation revealed a reality that simulations often miss. I documented 57x higher network-latency variance during the monsoon season than on the stable institutional WiFi used in CSAIL's benchmarks. More importantly, my power-consumption analysis showed that quantized models enabled 2.5x longer battery operation: the difference between a tool working through a power outage and a device going dark.
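The latency side of this is easy to check for yourself with nothing beyond the standard library; a crude probe is sketched below. The target host, port, and sample count are arbitrary stand-ins for whatever endpoint you actually depend on.

```python
import socket
import statistics
import time

def rtt_ms(host: str = "8.8.8.8", port: int = 53, timeout: float = 5.0) -> float:
    """Time one TCP handshake as a rough round-trip-latency proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = []
for _ in range(100):
    try:
        samples.append(rtt_ms())
    except OSError:
        pass  # dropped probe: routine on a congested 4G hotspot
    time.sleep(1)

if len(samples) >= 2:
    print(f"mean={statistics.mean(samples):.0f} ms, "
          f"variance={statistics.variance(samples):.0f} ms^2")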
Connecting with MIT Faculty:
My work sits at the intersection of two MIT powerhouses:
Prof. Song Han (Efficient AI Lab): I was deeply inspired by his "design automation for efficient AI" approach. My project Veritas applied his quantization principles to compress DistilBERT from 255 MB to 64 MB for browser-based inference (a minimal version of that step is sketched after this list).
Prof. Dina Katabi (NETMIT): Her work on wireless sensing, using "invisible" RF signals to perceive the physical world, is revolutionary. Living in a low-signal environment, I saw a massive gap in how AI handles "adversarial" network conditions.
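The size arithmetic behind the Veritas numbers is simple: INT8 stores one byte per weight where FP32 stores four, so ~255 MB of weights compresses to roughly 64 MB. Here is a minimal sketch of that step using PyTorch's off-the-shelf dynamic quantization, a simplified stand-in rather than the full Veritas pipeline:

```python
import torch
from transformers import DistilBertModel

# Load FP32 DistilBERT (~255 MB of weights).
model = DistilBertModel.from_pretrained("distilbert-base-uncased").eval()

# Dynamic INT8 quantization: Linear weights stored as int8 and
# dequantized on the fly. Transformer size is dominated by Linear
# layers, so this captures most of the 4x saving.
int8_model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(int8_model.state_dict(), "distilbert_int8.pt")
```

For the browser specifically, the usual route is to export the model to ONNX and quantize it with ONNX Runtime's tooling so it runs under ONNX Runtime Web; the PyTorch version above just shows where the size reduction comes from.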
I want to bridge the gap between "Efficient Algorithms" and "Infrastructural Resilience." I want to develop AutoML techniques that are not just hardware-aware but environment-aware: models that dynamically adjust precision based on available battery and real-time signal-to-noise ratio, as in the sketch below. I want to ensure that the AI revolution doesn't stop at the "Last Mile" of internet connectivity.
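To make "environment-aware" concrete, here is the kind of controller I have in mind. This is a minimal sketch with illustrative thresholds and tier names, not measured operating points:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    battery_pct: float  # remaining battery, 0-100
    snr_db: float       # current link signal-to-noise ratio, dB

def choose_precision(env: Environment) -> str:
    """Map runtime conditions to a model precision tier (illustrative cut-offs)."""
    if env.battery_pct < 20 or env.snr_db < 5:
        return "int4"   # survival mode: minimum energy per inference
    if env.battery_pct < 50 or env.snr_db < 15:
        return "int8"   # degraded conditions: the field-tested sweet spot
    return "fp16"       # comfortable headroom: spend energy on accuracy

print(choose_precision(Environment(battery_pct=15.0, snr_db=22.0)))  # -> int4
```

In a real system each tier would map to a pre-quantized model variant selected by an AutoML search, rather than hard-coded thresholds.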