<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rami Kronbi</title>
    <description>The latest articles on DEV Community by Rami Kronbi (@ramikronbi).</description>
    <link>https://dev.to/ramikronbi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2070476%2F614e6411-eeb8-4d3b-9079-454a7dbb3e5c.png</url>
      <title>DEV Community: Rami Kronbi</title>
      <link>https://dev.to/ramikronbi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ramikronbi"/>
    <language>en</language>
    <item>
      <title>Seeing in the Dark: Real-Time Thermal Super-Resolution (That Actually Runs on Edge Devices)</title>
      <dc:creator>Rami Kronbi</dc:creator>
      <pubDate>Mon, 02 Feb 2026 00:13:53 +0000</pubDate>
      <link>https://dev.to/ramikronbi/seeing-in-the-dark-real-time-thermal-super-resolution-that-actually-runs-on-edge-devices-3nc7</link>
      <guid>https://dev.to/ramikronbi/seeing-in-the-dark-real-time-thermal-super-resolution-that-actually-runs-on-edge-devices-3nc7</guid>
      <description>&lt;p&gt;Thermal cameras are practically superpowers. But there's a catch: unless you have $20,000 to drop on military-grade hardware, thermal vision looks like a blurry, low-res mess.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1jgq81qoigwhisailjk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1jgq81qoigwhisailjk.jpg" alt="Example output of a high-end thermal camera" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few months ago, I was working on a drone project. The goal was simple: strap a thermal camera to a drone and detect objects in real-time. It sounds like something out of a sci-fi movie, flying at night, spotting heat signatures, perfect situational awareness.&lt;/p&gt;

&lt;p&gt;However, the "objects" in question were just glowing blobs. A person looked like a smudge; a car looked like a slightly larger smudge. The resolution on affordable thermal sensors is horribly low. For a computer vision model trying to do object detection, this is a nightmare. You can't classify what you can't see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I had two options:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Buy a high-resolution thermal camera (and live off noodles for the rest of my life).&lt;/li&gt;
&lt;li&gt;Fix the hardware limitations with software.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I decided to build a Deep Learning model to upscale these low-res thermal images into crisp, high-definition video in real-time.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Why Standard AI Failed Me
&lt;/h2&gt;

&lt;p&gt;If you've messed around with image upscaling, you've probably heard of ESRGAN or similar "Super-Resolution" (SR) models. They are fantastic at taking a tiny JPEG and turning it into a 4K wallpaper.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzzlezgqqk9otopxe2it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzzlezgqqk9otopxe2it.png" alt="Thermal image showing problems with detection on thermal imagery" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, why not just use that?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;They are too slow. Most state-of-the-art super-resolution models are heavy, with millions of parameters. Even on a massive GPU, they might run at 5 or 7 FPS. That's fine for photos, but for a drone flying at 15 km/h? That latency is fatal. By the time the frame is processed, the drone has already crashed into the tree it didn't see.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Thermal is not RGB. Thermal images don't have "colors" in the traditional sense; they have temperature gradients. Standard models trained on ImageNet (cats, dogs, and cars) hallucinate textures that don't exist in heat maps. They try to add "fur" to a heat blob.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I needed an architecture that was lightweight, incredibly fast, and understood the physics of heat.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Solution: Enter IMDN (and a lot of coffee)
&lt;/h2&gt;

&lt;p&gt;I settled on an architecture called IMDN (Information Multi-Distillation Network).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfwrnif4je5po0hkg48v.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfwrnif4je5po0hkg48v.webp" alt="Example IMDN performance, retrieved from original IMDN paper" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without getting too bogged down in the math (the code is on GitHub if you want the nerdy details), the brilliance of IMDN is that it doesn't try to reconstruct the entire image at every single layer.&lt;/p&gt;

&lt;p&gt;Instead, it uses a "distillation" process. It extracts features, keeps what's useful, and passes the rest down the line. This drastically reduces the computational cost.&lt;/p&gt;

&lt;p&gt;What is also interesting about this model is that you can train it to upscale to any scale you want (2x, 3x, 4x, 5x, etc.). You aren't limited to fixed increments, giving you the flexibility to balance resolution and speed exactly how your project needs it.&lt;/p&gt;

&lt;p&gt;Implementing the architecture was tricky; the hard part was adapting the Information Distillation Blocks (IDBs) to handle single-channel thermal data effectively without losing the high-frequency details (the sharp edges where hot meets cold).&lt;/p&gt;
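&lt;p&gt;To make the distillation idea concrete, here is a minimal PyTorch sketch of an IMDN-style block adapted to single-channel thermal input, with a PixelShuffle tail that works for any integer scale. The channel counts, depth, and activation settings are illustrative placeholders, not the exact configuration from the paper or my repo.&lt;/p&gt;

```python
# Minimal sketch of an IMDN-style distillation block for 1-channel
# thermal input. Hyperparameters here are illustrative only.
import torch
import torch.nn as nn

class IMDBlock(nn.Module):
    def __init__(self, channels=64, distill_ratio=0.25):
        super().__init__()
        self.dc = int(channels * distill_ratio)  # "distilled" channels kept at each step
        self.rc = channels - self.dc             # remaining channels passed on
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(self.rc, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(self.rc, channels, 3, padding=1)
        self.conv4 = nn.Conv2d(self.rc, self.dc, 3, padding=1)
        self.fuse = nn.Conv2d(self.dc * 4, channels, 1)  # 1x1 conv fuses all kept slices
        self.act = nn.LeakyReLU(0.05, inplace=True)

    def forward(self, x):
        # At each step, peel off a slice of features and refine only the rest.
        d1, r = torch.split(self.act(self.conv1(x)), [self.dc, self.rc], dim=1)
        d2, r = torch.split(self.act(self.conv2(r)), [self.dc, self.rc], dim=1)
        d3, r = torch.split(self.act(self.conv3(r)), [self.dc, self.rc], dim=1)
        d4 = self.act(self.conv4(r))
        # Fuse everything that was kept, plus a residual connection.
        return self.fuse(torch.cat([d1, d2, d3, d4], dim=1)) + x

class ThermalSR(nn.Module):
    def __init__(self, scale=3, channels=64, blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)  # 1-channel thermal in
        self.body = nn.Sequential(*[IMDBlock(channels) for _ in range(blocks)])
        # PixelShuffle tail: the same design works for any integer scale.
        self.tail = nn.Sequential(
            nn.Conv2d(channels, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        f = self.head(x)
        return self.tail(self.body(f) + f)
```

&lt;p&gt;The key trick is in the forward pass: each step distills a few channels and only keeps refining the remainder, which is what keeps the FLOP count low enough for real-time use.&lt;/p&gt;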

&lt;p&gt;But the architecture was only half the battle.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Secret Struggle: The Data Nightmare
&lt;/h2&gt;

&lt;p&gt;In Deep Learning, everyone talks about the model, but the real war is won in the dataset.&lt;/p&gt;

&lt;p&gt;There is no "ImageNet for Thermal Super-Resolution" that you can just download and hit train. I had to get creative. I spent weeks pulling data from wildly different sources and manually curating a massive mixed dataset.&lt;/p&gt;

&lt;p&gt;This was the hardest part of the project. Thermal data is noisy, and the resolutions vary wildly. I had to clean, normalize, and align thousands of images to create a "Ground Truth" the model could actually learn from.&lt;/p&gt;
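&lt;p&gt;Here's a dependency-free sketch of the kind of preprocessing involved: normalize a raw frame, then synthesize the low-res input from the high-res ground truth. The percentile clipping and box-filter downsample are illustrative choices, not the exact pipeline from my repo (bicubic is the more common downsampling kernel).&lt;/p&gt;

```python
# Sketch: normalize a raw thermal frame and build one (LR, HR) training pair.
import numpy as np

def normalize_thermal(frame, lo_pct=1.0, hi_pct=99.0):
    """Map raw sensor counts to [0, 1], clipping outlier pixels
    (dead pixels and specular hot spots are common in thermal data)."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    return np.clip((frame.astype(np.float32) - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def downsample(hr, scale=3):
    """Box-filter downsample to fabricate the low-res input from the
    high-res ground truth. Kept dependency-free for the example."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale  # crop to a multiple of the scale
    return hr[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

# One (input, target) pair from a fake 16-bit frame:
raw = np.random.randint(0, 65536, size=(120, 160)).astype(np.uint16)
hr = normalize_thermal(raw)
lr = downsample(hr, scale=3)
```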

&lt;p&gt;I also used a transfer learning trick: leveraging weights from RGB domains and "teaching" them to interpret thermal gradients, which gave the model a head start on understanding edges and shapes.&lt;/p&gt;
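&lt;p&gt;The usual way to pull this off (and roughly what I mean above) is to collapse the pretrained RGB first convolution down to one input channel by averaging its kernels. A hedged sketch, where the "pretrained" head is just a stand-in:&lt;/p&gt;

```python
# Sketch of the transfer-learning trick: reuse a 3-channel (RGB) first
# convolution for 1-channel thermal input by averaging its kernels.
import torch
import torch.nn as nn

def adapt_first_conv(conv_rgb):
    """Collapse a Conv2d trained on RGB (in_channels=3) to in_channels=1
    by averaging across the input-channel axis. Edge and gradient
    detectors survive this; color-specific filters get blended away."""
    conv_t = nn.Conv2d(1, conv_rgb.out_channels, conv_rgb.kernel_size,
                       stride=conv_rgb.stride, padding=conv_rgb.padding,
                       bias=conv_rgb.bias is not None)
    with torch.no_grad():
        conv_t.weight.copy_(conv_rgb.weight.mean(dim=1, keepdim=True))
        if conv_rgb.bias is not None:
            conv_t.bias.copy_(conv_rgb.bias)
    return conv_t

# Example: a stand-in "pretrained" RGB head becomes a thermal head.
rgb_head = nn.Conv2d(3, 64, 3, padding=1)
thermal_head = adapt_first_conv(rgb_head)
```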




&lt;h2&gt;
  
  
  The Results: Breaking the Real-Time Barrier
&lt;/h2&gt;

&lt;p&gt;After weeks of training and tweaking the loss functions to prioritize thermal contrast, the results were… honestly, better than I expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwaa850uokig2xd4eruq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwaa850uokig2xd4eruq1.png" alt="x3 Enhanced image using my custom model" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Metrics:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PSNR: 34.2 dB (peak signal-to-noise ratio; anything above 30 dB is generally considered excellent quality).&lt;/li&gt;
&lt;li&gt;SSIM: 0.840 (Structural Similarity; it means the upscaled image actually looks like the original scene, not a hallucination).&lt;/li&gt;
&lt;/ul&gt;
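&lt;p&gt;For reference, PSNR is computed straight from the mean squared error; this is the standard definition rather than code from my repo (SSIM is usually taken from &lt;code&gt;skimage.metrics&lt;/code&gt;):&lt;/p&gt;

```python
# PSNR for images in [0, 1]: 10 * log10(max_val^2 / MSE), in dB.
import numpy as np

def psnr(reference, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
ref = np.full((64, 64), 0.5)
rec = ref + 0.1
print(round(psnr(ref, rec), 2))  # 20.0
```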

&lt;p&gt;You can see the difference immediately. The "blob" on the left becomes a distinct object with edges and shape on the right.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Speed Test
&lt;/h3&gt;

&lt;p&gt;This is where the IMDN architecture shines. On my laptop (RTX 3070), the model achieves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;~130 FPS at 2x scale&lt;/li&gt;
&lt;li&gt;~60 FPS at 4x scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is absurdly fast. That's not just "real-time"; that's "faster than the camera can record."&lt;/p&gt;
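&lt;p&gt;The FPS numbers come from a timing loop along these lines; &lt;code&gt;measure_fps&lt;/code&gt; and the stand-in "model" are simplified placeholders, and on a GPU you would also call &lt;code&gt;torch.cuda.synchronize()&lt;/code&gt; before reading the clock:&lt;/p&gt;

```python
# Average frames-per-second of an inference function over many calls.
import time

def measure_fps(infer_fn, frame, warmup=10, iters=100):
    """Run a few warmup calls (caches, JITs), then time `iters` calls."""
    for _ in range(warmup):
        infer_fn(frame)
    start = time.perf_counter()
    for _ in range(iters):
        infer_fn(frame)
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Example with a trivial stand-in "model":
fps = measure_fps(lambda f: [v * 2 for v in f], list(range(1000)))
```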




&lt;h2&gt;
  
  
  The "Whoa" Moment: Edge Deployment
&lt;/h2&gt;

&lt;p&gt;Here's the thing: the drone can't carry my laptop :) It can, however, carry an NVIDIA Jetson Orin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c85363fs4uvm1w51u1t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c85363fs4uvm1w51u1t.jpeg" alt="Drone application with thermal camera" width="646" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before delving into how it ran on the Jetson, it is important to note that what counts as real-time for thermal imagery differs from RGB. A thermal camera has at best a 20 FPS acquisition rate, so running the model at 20–30 FPS is real-time: you're using all the bandwidth the camera can provide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To achieve 20–30 FPS on the Jetson, I made the following tweaks:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The pipeline was implemented in C++.&lt;/li&gt;
&lt;li&gt;The model was converted to TensorRT (retaining ~97% accuracy after conversion).&lt;/li&gt;
&lt;li&gt;Inference was multithreaded, with some further optimizations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;30 FPS on an edge device is the holy grail. It means you can run this super-resolution model inline with your object detection model. The drone sees the low-res thermal frame, upscales it to HD, and detects the object, all in less than 20 milliseconds.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This isn't just about making cooler-looking images. This is about accessibility.&lt;/p&gt;

&lt;p&gt;High-resolution thermal cameras cost a fortune. By using efficient AI, we can take a cheap, low-res sensor and simulate the performance of a sensor that costs 10x as much.&lt;/p&gt;

&lt;p&gt;For search and rescue drones, autonomous vehicles driving at night, or industrial monitoring, this is a game changer. We can finally have high-fidelity thermal vision without the high-fidelity price tag.&lt;/p&gt;




&lt;h2&gt;
  
  
  Attribution
&lt;/h2&gt;

&lt;p&gt;I have open-sourced the code for this; a bit of attribution would be nice, though :)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portfolio: &lt;a href="https://ramikronbi.com" rel="noopener noreferrer"&gt;Rami Kronbi&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://linkedin.com/in/rami-kronbi" rel="noopener noreferrer"&gt;Rami Kronbi&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/Kronbii" rel="noopener noreferrer"&gt;Kronbii&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Source code: &lt;a href="https://github.com/Kronbii/thermal-super-resolution" rel="noopener noreferrer"&gt;Thermal Super Resolution&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>deeplearning</category>
      <category>iot</category>
      <category>machinelearning</category>
      <category>performance</category>
    </item>
    <item>
      <title>AI Should Serve Society - Not Just Industry and Billionaires</title>
      <dc:creator>Rami Kronbi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 22:16:28 +0000</pubDate>
      <link>https://dev.to/ramikronbi/ai-should-serve-society-not-just-industry-and-billionaires-37c9</link>
      <guid>https://dev.to/ramikronbi/ai-should-serve-society-not-just-industry-and-billionaires-37c9</guid>
      <description>&lt;p&gt;&lt;em&gt;Build with purpose. Others will follow.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4px1q22zk4pwcj21cvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4px1q22zk4pwcj21cvj.png" alt="Ai and Power" width="735" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI is moving fast. It is moving faster than our laws, faster than our ethics, and often, faster than our collective sense of responsibility.&lt;/p&gt;

&lt;p&gt;The real question isn’t how powerful AI can become.&lt;/p&gt;

&lt;p&gt;The question that keeps me up at night is &lt;strong&gt;&lt;em&gt;who does it actually serve?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right now, the vast majority of AI innovation is optimized for three things: &lt;strong&gt;scale, profit, and market dominance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And look — that’s not inherently wrong. Businesses need to grow.&lt;br&gt;&lt;br&gt;
But it is incomplete.&lt;/p&gt;

&lt;p&gt;When AI serves only industry titans and billionaires, we miss out on its most profound potential: &lt;strong&gt;the ability to uplift society at large.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Technology Is Never Neutral
&lt;/h2&gt;

&lt;p&gt;We have to stop pretending that algorithms are objective.&lt;/p&gt;

&lt;p&gt;Every model we train, every dataset we curate, and every deployment strategy we choose reflects a human choice.&lt;/p&gt;

&lt;p&gt;We are constantly answering silent questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;What problem are we solving?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Who actually benefits from this solution?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;And who gets left behind?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we design purely for efficiency or revenue, technology naturally gravitates toward the people who already have money and power. It follows the path of least resistance.&lt;/p&gt;

&lt;p&gt;But society’s hardest problems — &lt;strong&gt;accessibility, safety, healthcare gaps, educational inequality, and the climate crisis&lt;/strong&gt; — rarely sit at the top of a revenue roadmap.&lt;/p&gt;

&lt;p&gt;That is exactly why leadership matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  Using AI for Society Is a Choice
&lt;/h2&gt;

&lt;p&gt;Building AI for social good doesn’t mean abandoning technical excellence or innovation. It means redirecting that brilliance with intention.&lt;/p&gt;

&lt;p&gt;True leadership in this space — whether you are an engineer, a researcher, or a founder — isn’t about who builds the biggest model.&lt;/p&gt;

&lt;p&gt;It’s about choosing problems that matter, even if they don’t scale immediately.&lt;/p&gt;

&lt;p&gt;It’s about designing systems people can understand and trust, rather than black boxes that alienate them.&lt;/p&gt;

&lt;p&gt;It’s about measuring success by the impact you leave on a community, not just the valuation you raise in a seed round.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yd4rqnib2qg0gc0eqil.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yd4rqnib2qg0gc0eqil.jpeg" alt="Human Centered AI Illustration" width="735" height="443"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How We Actually Do This
&lt;/h2&gt;

&lt;p&gt;So how do we move this from an abstract ideal to reality?&lt;/p&gt;

&lt;p&gt;It starts by getting out of the bubble — building from real human pain points, not tech-first ideas looking for a problem.&lt;/p&gt;

&lt;p&gt;It means collaborating outside the tech echo chamber: sitting down with educators, doctors, and community leaders who understand the nuance of the problems we’re trying to solve.&lt;/p&gt;

&lt;p&gt;It means designing for constraints and accessibility, not just for the ideal user with the fastest internet connection.&lt;/p&gt;

&lt;p&gt;Sometimes, the most revolutionary thing you can do is ship a smaller, focused solution that solves one real problem incredibly well.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building Where Nothing Existed: A Personal Example
&lt;/h2&gt;

&lt;p&gt;My team and I experienced this firsthand when we built &lt;strong&gt;OmniSign&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We saw a massive gap in accessibility for the Deaf community in Lebanon — there was no real-time tool to bridge the communication barrier.&lt;/p&gt;

&lt;p&gt;But when we started, we hit a wall. There wasn’t even a dataset for Lebanese Sign Language.&lt;/p&gt;

&lt;p&gt;The resources didn’t exist because the market hadn’t deemed it “profitable enough” to build them.&lt;/p&gt;

&lt;p&gt;We could have stopped there.&lt;/p&gt;

&lt;p&gt;Instead, we realized that if we wanted AI to serve this community, we had to do the heavy lifting ourselves.&lt;/p&gt;

&lt;p&gt;We built the dataset from scratch and developed the model to translate Lebanese Sign Language in real time.&lt;/p&gt;

&lt;p&gt;We didn’t wait for permission. We didn’t wait for big tech.&lt;br&gt;&lt;br&gt;
We built it because it was necessary.&lt;/p&gt;

&lt;p&gt;For more insight, visit the &lt;a href="https://laythayache.com/projects/omnisign" rel="noopener noreferrer"&gt;official project website&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bfwoq72mc15shsuvc0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bfwoq72mc15shsuvc0r.png" alt="Real-time Sign Language Translation" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Kind of AI We Should Be Proud Of
&lt;/h2&gt;

&lt;p&gt;The AI worth building isn’t just faster — it’s safer.&lt;/p&gt;

&lt;p&gt;It doesn’t blindly replace people. It empowers them.&lt;/p&gt;

&lt;p&gt;It reaches those usually written off as “not the target market.”&lt;/p&gt;

&lt;p&gt;This isn’t about rejecting industry. It’s about expanding our definition of responsibility.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;AI is going to shape society whether we intend it to or not.&lt;/p&gt;

&lt;p&gt;The difference between a future of exploitation and one of empowerment is &lt;strong&gt;who leads the conversation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you are building AI today, you are already shaping that future.&lt;/p&gt;

&lt;p&gt;The real power move isn’t optimizing for the top 1%.&lt;br&gt;&lt;br&gt;
It’s choosing to build for the rest of us.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Build with purpose. Others will follow.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>showdev</category>
      <category>software</category>
    </item>
  </channel>
</rss>
