<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ross Kim</title>
    <description>The latest articles on DEV Community by Ross Kim (@_630fdf100267a43420f70).</description>
    <link>https://dev.to/_630fdf100267a43420f70</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3866100%2F30cfce2f-3c79-4f8c-8de2-9261cb32ec39.jpg</url>
      <title>DEV Community: Ross Kim</title>
      <link>https://dev.to/_630fdf100267a43420f70</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_630fdf100267a43420f70"/>
    <language>en</language>
    <item>
      <title>The Present and Future of EIS — A Camera Module Developer's View on AI, Sensors, and Software-Defined Cameras</title>
      <dc:creator>Ross Kim</dc:creator>
      <pubDate>Wed, 22 Apr 2026 11:43:32 +0000</pubDate>
      <link>https://dev.to/_630fdf100267a43420f70/the-present-and-future-of-eis-a-camera-module-developers-view-on-ai-sensors-and-15k3</link>
      <guid>https://dev.to/_630fdf100267a43420f70/the-present-and-future-of-eis-a-camera-module-developers-view-on-ai-sensors-and-15k3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Over the previous three articles, we have looked at EIS (Electronic Image Stabilization) from multiple angles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first article laid out the differences between OIS, EIS, and HIS, positioning EIS within the broader landscape of image stabilization.&lt;/li&gt;
&lt;li&gt;The second article took us inside the EIS pipeline through Gyroflow, an open-source project.&lt;/li&gt;
&lt;li&gt;The third article examined the collision between Rolling Shutter and EIS, explaining why RSC (Rolling Shutter Correction) is necessary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This final article closes the EIS series. So far we have focused on "how current EIS works." This time we focus on &lt;strong&gt;"where EIS is heading."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From a camera module developer's perspective, three major trends are palpable on the ground today — &lt;strong&gt;AI-based EIS&lt;/strong&gt;, &lt;strong&gt;sensor technology evolution&lt;/strong&gt;, and &lt;strong&gt;software-defined cameras&lt;/strong&gt;. Predicting the future always carries the risk of being wrong, so instead of offering the rosy outlooks typical of marketing materials, I'll focus on changes I actually feel in module development work.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI-Based EIS — From Gyros to Learning
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Limits of Conventional EIS
&lt;/h3&gt;

&lt;p&gt;The EIS we have discussed so far was fundamentally a combination of &lt;strong&gt;gyro sensors + traditional computer vision + mathematical models&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read angular velocity from the gyro to estimate camera orientation&lt;/li&gt;
&lt;li&gt;Separate intended motion from shake using a low-pass filter&lt;/li&gt;
&lt;li&gt;Warp frames using rotation matrices&lt;/li&gt;
&lt;li&gt;Correct row-by-row timing differences with RSC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure has been validated over many years and is still used in most commercial products. However, it has a few structural limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, it is weak against translation.&lt;/strong&gt; Gyros only measure rotation. Vertical bounce while walking, or back-and-forth motion inside a vehicle, cannot be fully estimated from gyro data alone. Even with an accelerometer added, integration drift becomes a major problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second, it does not account for scene context.&lt;/strong&gt; Gyro-based EIS only knows "how much the camera moved," not "what is in the frame." It applies the same correction whether the subject is centered, near, or far.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third, crop margin is wasted.&lt;/strong&gt; A generous crop margin is reserved to handle worst-case shake, but such large shake rarely occurs in practice. As a result, FOV is sacrificed unnecessarily most of the time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where AI Steps In
&lt;/h3&gt;

&lt;p&gt;Deep learning-based approaches tackle these limitations in a different way. There are three main directions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Learned motion estimation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of or alongside gyro data, pixel-level motion between frames (optical flow) is estimated using learned models. This provides a motion field that takes into account subject depth and scene structure. Google Pixel's video stabilization is said to have evolved in this direction.&lt;/p&gt;
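
&lt;p&gt;To make the contrast with gyro-only estimation concrete, here is a minimal sketch of image-based motion estimation. It uses classical Farneback optical flow via OpenCV as a stand-in for a learned flow model; the function name and the median pooling are illustrative choices, not any vendor's implementation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A minimal sketch of image-based motion estimation, using classical
# Farneback optical flow (OpenCV) as a stand-in for a learned flow model.
import cv2
import numpy as np

def estimate_global_motion(prev_bgr, next_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel motion field between the two frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Median pooling gives a global estimate that is less distracted by
    # independently moving subjects than a plain mean would be
    return float(np.median(flow[..., 0])), float(np.median(flow[..., 1]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;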

&lt;p&gt;&lt;strong&gt;2. Learned frame synthesis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Areas lost to cropping are filled in using information from surrounding frames, or heavily shaken frames are reconstructed from multiple frames. This can &lt;strong&gt;reduce crop margins or even retain most of the original FOV&lt;/strong&gt;. Research tracks like "Motion-Aware Video Frame Interpolation" and "Neural Video Stabilization" fall in this direction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. End-to-end learning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A neural network learns directly from input video to stabilized output video, without a traditional pipeline. While many good results have been reported in research, commercial deployment remains limited due to challenges in real-time mobile processing and output consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Reality of AI EIS on the Ground
&lt;/h3&gt;

&lt;p&gt;Reading academic papers and tech media, AI-based EIS looks like it has already solved everything. The reality in module development is a bit different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Power and thermals.&lt;/strong&gt; Running neural inference in real time on mobile devices requires NPUs or GPUs. Keeping them under high load throughout video capture causes serious heat and battery drain. The typical compromise is "high-quality AI EIS only in specific modes."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It does not fully replace the gyro.&lt;/strong&gt; AI-based motion estimation works well when the scene has many feature points and lighting is sufficient, but performance degrades sharply in low light or against monochromatic backgrounds. Realistic products are moving toward a &lt;strong&gt;hybrid of gyro and AI&lt;/strong&gt;: the gyro guarantees a baseline of stability, while AI adds scene awareness to push quality higher.&lt;/p&gt;
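
&lt;p&gt;As a rough illustration of that hybrid idea (not any product's actual logic), the blend might look like the following sketch; the confidence weighting scheme is an assumption for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A hypothetical sketch of the gyro + AI hybrid: the gyro provides the
# baseline, and an image-based estimate is blended in only as far as the
# scene supports it. The confidence weighting is an illustrative assumption.
def blend_motion(gyro_angle, vision_angle, vision_confidence):
    # vision_confidence in [0, 1], e.g. from feature count and luminance;
    # low light or a flat background pushes it toward 0 (gyro-only).
    w = max(0.0, min(1.0, vision_confidence))
    return (1.0 - w) * gyro_angle + w * vision_angle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;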

&lt;p&gt;&lt;strong&gt;Training data bias.&lt;/strong&gt; Neural network performance depends heavily on training data. It performs well in certain scenes (bright outdoors, urban environments), but in environments not represented in training data (underwater, extremely low light, industrial settings), unpredictable artifacts can emerge. From a camera module supplier's perspective, this translates into &lt;strong&gt;increased validation burden&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In summary, AI-based EIS is a clear direction, but rather than "gyro-based EIS fully exiting the stage," the more accurate picture from the field is that "a gyro + AI collaborative structure is becoming the standard."&lt;/p&gt;




&lt;h2&gt;
  
  
  Sensor Technology Evolution — Hardware for EIS
&lt;/h2&gt;

&lt;p&gt;It is easy to focus solely on software algorithms, but much of EIS's progress comes from &lt;strong&gt;advances in sensor hardware&lt;/strong&gt;. Here are a few changes I see on the ground.&lt;/p&gt;

&lt;h3&gt;
  
  
  Higher-Precision, Smaller Gyro Sensors
&lt;/h3&gt;

&lt;p&gt;The performance of 6-axis IMUs built into smartphones has visibly improved in recent years. Sampling rates now reach 1–2 kHz and beyond, and noise density and drift characteristics continue to improve. This directly lifts the quality of the data that EIS depends on.&lt;/p&gt;

&lt;p&gt;Perhaps more importantly, &lt;strong&gt;hardware synchronization between image sensor and gyro&lt;/strong&gt; is becoming standard. The readout time synchronization issue discussed in the third article directly affects gyro timestamp accuracy, and recent platforms solve this at the hardware level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stacked CMOS and Fast Readout
&lt;/h3&gt;

&lt;p&gt;Stacked CMOS technology, pioneered by Sony, separates the pixel layer and the circuit layer and stacks them. This allows more circuit area without increasing the overall sensor size, enabling &lt;strong&gt;significantly faster readout&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Faster readout directly reduces Rolling Shutter distortion, which is fundamentally proportional to the time difference between the top and bottom rows of a frame. Shrinking that time difference shrinks the distortion, even within a Rolling Shutter architecture.&lt;/p&gt;
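
&lt;p&gt;A quick back-of-envelope calculation makes the proportionality concrete. The pan rate and pixels-per-degree figures below are illustrative assumptions, not measured values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Back-of-envelope: how skew scales with readout time during a pan.
# The pan rate and pixels-per-degree figures are illustrative assumptions.
pan_rate_deg_s = 60.0     # camera panning at 60 degrees per second
pixels_per_deg = 35.0     # assumed effective scale of the lens/sensor

for readout_ms in (30.0, 10.0, 3.0):   # slow, typical, stacked-CMOS fast
    sweep_deg = pan_rate_deg_s * readout_ms / 1000.0   # angle swept
    skew_px = sweep_deg * pixels_per_deg               # top-to-bottom skew
    print(f"readout {readout_ms:4.0f} ms: skew approx {skew_px:5.1f} px")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Cutting readout from 30 ms to 3 ms cuts the skew by the same factor of ten, which is exactly the leverage stacked sensors provide.&lt;/p&gt;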

&lt;p&gt;Going further, &lt;strong&gt;Stacked sensors with integrated DRAM&lt;/strong&gt; use high-speed buffering to approximate Global Shutter behavior. They are not true Global Shutter, but they offer a realistic way to significantly mitigate Rolling Shutter's drawbacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Shutter on Mobile?
&lt;/h3&gt;

&lt;p&gt;In theory, Global Shutter sensors would eliminate Rolling Shutter problems at the root. So will mobile eventually transition to Global Shutter?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From an image-capture camera perspective, not anytime soon.&lt;/strong&gt; As discussed in the third article, the triple wall of pixel size, low-light performance, and cost remains high. This applies not only to the main camera but &lt;strong&gt;equally to the front camera&lt;/strong&gt;. The primary use cases for front cameras (selfies, video calls) are more often in low-light conditions, and there is little high-speed distortion to solve, so there is weak motivation to switch to Global Shutter. As a result, &lt;strong&gt;cameras that users "shoot" with — whether main or front — are likely to continue using Rolling Shutter + RSC.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On the other hand, sensing-oriented cameras follow a different path.&lt;/strong&gt; Receivers in ToF and structured light systems, tracking cameras in AR/VR headsets, barcode scanners in industrial devices — in the domain of sensors that the system uses to perceive the world, rather than image-makers for the user, Global Shutter is already close to standard. In this domain, &lt;strong&gt;low latency and distortion-free instantaneous capture&lt;/strong&gt; matter more than low-light performance.&lt;/p&gt;

&lt;p&gt;In other words, "Global Shutter is entering mobile" is accurate, but the path is &lt;strong&gt;sensing, not imaging&lt;/strong&gt;. In the territory that EIS addresses — user-facing video — Global Shutter is unlikely to become mainstream in the near future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gyro-in-Sensor
&lt;/h3&gt;

&lt;p&gt;There is movement toward integrating a gyro sensor within the image sensor package itself. This minimizes the physical distance between image sensor and gyro, allowing vibration phases to match more precisely and hardware synchronization to be cleaner.&lt;/p&gt;

&lt;p&gt;From a module developer's perspective, this is welcome — &lt;strong&gt;the BOM simplifies, and the sensor vendor takes responsibility for gyro characteristics&lt;/strong&gt; as well. The trade-off is reduced flexibility in sensor selection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary: Hardware Sets the Ceiling for Algorithms
&lt;/h3&gt;

&lt;p&gt;No matter how refined an EIS algorithm is, there is a clear ceiling if input data quality is poor. Conversely, when sensor and gyro quality improve, the same algorithm yields visibly better results.&lt;/p&gt;

&lt;p&gt;From a software developer's standpoint, it's easy to say "the algorithm determines quality." But working in module development, you feel every day that &lt;strong&gt;hardware sets the ceiling for algorithms&lt;/strong&gt;. Even in the age of AI-based EIS, this relationship will not change.&lt;/p&gt;




&lt;h2&gt;
  
  
  Software-Defined Camera — Boundaries Disappear
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is a Software-Defined Camera?
&lt;/h3&gt;

&lt;p&gt;Software-Defined Camera (SDC) is not a term with a rigid definition, but the concept circulating in the industry can be summarized as:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"A structure in which a camera's final output quality is determined primarily by software processing (especially AI/ML), rather than by hardware such as sensors and lenses."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This concept is not new. Features like Google Pixel's HDR+, Night Sight, and Magic Eraser were early forms of SDC. Pixel, whose hardware specs appeared unremarkable on paper, maintained top-tier photo quality entirely through software.&lt;/p&gt;

&lt;p&gt;In the context of EIS, what SDC means is that &lt;strong&gt;"stabilization is no longer something software does after the sensor captures the frame; the entire capture pipeline is designed with stabilization as a built-in premise."&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  A Pipeline Without Boundaries
&lt;/h3&gt;

&lt;p&gt;The traditional camera pipeline was divided into these stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sensor readout → RAW → ISP (demosaic, noise reduction, tone mapping) → EIS/RSC → Encoder&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each stage received the output of the previous one and processed it independently, in a block structure. Recent pipelines, however, are rapidly blurring these boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-frame Fusion.&lt;/strong&gt; Instead of a single frame, multiple frames are taken in simultaneously, and noise reduction, HDR, super-resolution, and stabilization are all performed &lt;strong&gt;together&lt;/strong&gt;. Here, EIS is no longer an independent stage — it becomes part of the multi-frame alignment process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Neural ISP.&lt;/strong&gt; There is a movement toward replacing or augmenting traditional ISP blocks with neural networks. Demosaicing, noise reduction, and detail recovery are being unified into a single network. If this trend expands to include EIS as well, &lt;strong&gt;an end-to-end network of "RAW input → stabilized and corrected output"&lt;/strong&gt; becomes a real possibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Computational Viewfinder.&lt;/strong&gt; The viewfinder image the user sees is already a computed result. It used to be "preview is light, high-end processing applies only to the final capture." Now, the preview is processed close to the final result. EIS has become a feature that must operate from the preview stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Module Developer's Dilemma
&lt;/h3&gt;

&lt;p&gt;The shift toward SDC places camera module developers in a peculiar position.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On one hand&lt;/strong&gt;, because software compensates for hardware weaknesses, there is more headroom in module design. Optical imperfections that would once have been classified as defects can now be rescued by software correction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On the other hand&lt;/strong&gt;, optimizing only for the module's "optical performance" is no longer enough. What matters is &lt;strong&gt;how well the module meshes with the entire software pipeline&lt;/strong&gt;. Even with the same image sensor, the final result changes completely depending on ISP tuning, gyro synchronization accuracy, and calibration data quality.&lt;/p&gt;

&lt;p&gt;As a result, the value chain of the camera module industry is shifting from "pure hardware supply" toward &lt;strong&gt;the integrated provision of "hardware + calibration data + drivers + ISP tuning."&lt;/strong&gt; This is precisely the change I have been feeling firsthand since starting this series.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where EIS Is Heading — Closing the Series
&lt;/h2&gt;

&lt;p&gt;Closing this four-part EIS series, I want to organize a few thoughts that run through the whole of it. What started as a simple story about shake correction ended up expanding to cover the entire camera system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First, EIS is not just a "shake-correction feature."&lt;/strong&gt; It is an integrated problem that spans the full camera system — gyro, sensor, ISP, lens correction, encoder. That is why this series covered OIS/HIS, the pipeline, Rolling Shutter, and AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second, the evolution of EIS is the interaction of hardware and software.&lt;/strong&gt; A good algorithm reaches its true potential only on a good sensor, and a good sensor only becomes meaningful when paired with well-tuned algorithms. Discussing EIS while looking at only one side is half an understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third, the role of the camera module developer is changing.&lt;/strong&gt; In the past, the core was "supplying optical modules that meet specs." Going forward, competitiveness lies in "providing systems that integrate into a software pipeline." I write this series from the middle of that shift.&lt;/p&gt;

&lt;p&gt;There are many topics on EIS that this series did not cover — actual tuning workflows, EIS in automotive cameras, the very different requirements of VR/AR, and more. Future articles may pick these up as the opportunity arises.&lt;/p&gt;

&lt;p&gt;The next series will address another pillar of camera module development: &lt;strong&gt;ISP tuning&lt;/strong&gt;. If EIS was "the domain of motion," ISP is "the domain of light and color."&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article is based on practical experience in camera module development. Statements about the future reflect a reading of publicly known technology trends and changes observed on the ground, and are not predictions or descriptions of any specific product roadmap. Comments, corrections, and opinions are always welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>eis</category>
      <category>camera</category>
    </item>
    <item>
      <title>Rolling Shutter and EIS — Why Image Stabilization Can Make Your Video Worse</title>
      <dc:creator>Ross Kim</dc:creator>
      <pubDate>Fri, 17 Apr 2026 11:47:23 +0000</pubDate>
      <link>https://dev.to/_630fdf100267a43420f70/rolling-shuttertoeis-shou-burebu-zheng-gaying-xiang-wowai-maseruli-you-454m</link>
      <guid>https://dev.to/_630fdf100267a43420f70/rolling-shuttertoeis-shou-burebu-zheng-gaying-xiang-wowai-maseruli-you-454m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous article, we looked at the internal structure of the EIS pipeline through Gyroflow: gyro data acquisition → integration → smoothing → frame warping. But when you actually implement this pipeline, you hit a problem that's easy to overlook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I applied EIS correction, but the video looks even more wobbly — like jello."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The culprit is &lt;strong&gt;Rolling Shutter&lt;/strong&gt;. Rolling Shutter is a physical characteristic of the sensor, unrelated to EIS itself. But when EIS transforms frames without accounting for it, the distortion gets amplified. This article covers what Rolling Shutter is, how it conflicts with EIS, and why a separate correction step is necessary.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Rolling Shutter?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qwog7entflkp1xxjt3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qwog7entflkp1xxjt3j.png" alt=" " width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Shutter vs Rolling Shutter
&lt;/h3&gt;

&lt;p&gt;Image sensors capture a frame in one of two fundamental ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Shutter&lt;/strong&gt; exposes all pixels simultaneously — every pixel starts and stops exposure at the same instant. Every pixel in a single frame captures the same moment in time. This method is used in CCD sensors and some industrial CMOS sensors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rolling Shutter&lt;/strong&gt; exposes each row sequentially. The first row begins exposure, then the second row starts slightly later, then the third, and so on. By the time the full frame is read out, there's a time difference of several to tens of milliseconds between the top row and the bottom row.&lt;/p&gt;

&lt;p&gt;Nearly all CMOS sensors in smartphones and consumer cameras today use Rolling Shutter. The reason is straightforward — lower manufacturing cost, simpler circuit design, and easier high-resolution implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distortions Caused by Rolling Shutter
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lnisor6cuumvqj50tyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lnisor6cuumvqj50tyc.png" alt=" " width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the camera moves during capture with a Rolling Shutter sensor, each row captures the scene at a slightly different point in time. This produces three characteristic distortions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Skew&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the camera pans horizontally at speed, vertical lines tilt diagonally. The top rows capture the scene at one horizontal position, while the bottom rows capture it after the camera has shifted. Buildings and poles appearing to lean is a classic example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Jello/Wobble&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the camera vibrates during video recording, each row captures a slightly different position. The result is a jello-like wobbling distortion across the entire frame. This is commonly seen in dashcam footage or video shot while running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Partial Exposure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Very brief light sources like a flash or lightning illuminate only certain rows, leaving the rest dark. This shows up as a bright band across part of the frame in still photos.&lt;/p&gt;

&lt;p&gt;For video recording and EIS, the most problematic distortions are skew and jello — especially the &lt;strong&gt;jello effect&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happens When EIS Meets Rolling Shutter
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Core Assumption of EIS: "All Pixels in a Frame Were Captured at the Same Instant"
&lt;/h3&gt;

&lt;p&gt;Recall the EIS pipeline from the previous article. EIS estimates camera rotation from gyro data and warps the entire frame in the opposite direction to cancel out that rotation. There's a critical assumption embedded in this process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"One frame represents one moment in time."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In other words, EIS applies the same correction value (rotation matrix) to every pixel in the frame. With a Global Shutter sensor, this assumption holds perfectly. With a Rolling Shutter sensor, it doesn't.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Goes Wrong
&lt;/h3&gt;

&lt;p&gt;Consider a concrete scenario.&lt;/p&gt;

&lt;p&gt;Assume the camera is vibrating clockwise. EIS reads the gyro data and determines: "This frame was captured with a 2-degree clockwise rotation, so I'll apply a 2-degree counter-clockwise correction."&lt;/p&gt;

&lt;p&gt;But due to Rolling Shutter, the actual situation is different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Top rows: captured at 1 degree of clockwise rotation&lt;/li&gt;
&lt;li&gt;Middle rows: captured at 2 degrees of clockwise rotation&lt;/li&gt;
&lt;li&gt;Bottom rows: captured at 3 degrees of clockwise rotation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When EIS uniformly applies "2 degrees counter-clockwise" to the entire frame:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Top: 1° rotation + (-2° correction) = &lt;strong&gt;-1° overcorrected&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Middle: 2° rotation + (-2° correction) = &lt;strong&gt;0° correct&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Bottom: 3° rotation + (-2° correction) = &lt;strong&gt;+1° undercorrected&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is that the top and bottom of the frame shift in opposite directions, &lt;strong&gt;creating a new jello effect that wasn't there before, or amplifying an existing one.&lt;/strong&gt;&lt;/p&gt;
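
&lt;p&gt;The same arithmetic, generalized into a few lines of Python (the row angles are the ones from the scenario above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# The same arithmetic, generalized: residual error per row when one
# frame-level correction is applied to a rolling-shutter frame.
def residual_after_uniform_eis(row_angles_deg, correction_deg):
    # row_angles_deg: actual rotation at each row's capture time
    return [a + correction_deg for a in row_angles_deg]

# Top / middle / bottom rows from the scenario above:
print(residual_after_uniform_eis([1.0, 2.0, 3.0], -2.0))  # [-1.0, 0.0, 1.0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;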

&lt;p&gt;This is the real explanation behind "I turned on EIS and the video looks even shakier."&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Rolling Shutter Correction (RSC)?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Row-Level Correction Is Needed
&lt;/h3&gt;

&lt;p&gt;To solve this problem, the EIS warping stage must not apply a single correction value to the entire frame. Instead, &lt;strong&gt;each row (or group of rows) needs its own correction value based on the exact moment that row was captured.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;Rolling Shutter Correction (RSC)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How RSC Works
&lt;/h3&gt;

&lt;p&gt;The key to RSC is leveraging the temporal resolution of gyro data. As mentioned in the EIS pipeline article, gyro data is typically sampled at 1kHz or higher. At 30fps, one frame spans about 33ms. With a 1kHz gyro, that's 33 gyro samples within a single frame.&lt;/p&gt;

&lt;p&gt;If the Rolling Shutter readout time (the total time to sequentially read all rows) is, say, 20ms, then the exact capture time of each row can be calculated, and the camera orientation at that time can be estimated from the corresponding gyro data.&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calculate the capture time of each row within the frame (from readout time and row number)&lt;/li&gt;
&lt;li&gt;Interpolate gyro data at that time to determine camera orientation&lt;/li&gt;
&lt;li&gt;Apply a different correction transform to each row&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Effectively, warping is performed on a "per-row" basis rather than a "per-frame" basis.&lt;/p&gt;
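
&lt;p&gt;As a minimal sketch of those three steps, reduced to a single rotation axis and a uniform row clock (real pipelines use full quaternions and warp row groups inside the ISP hardware):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A minimal sketch of the three steps above, reduced to one rotation axis.
# Assumes a uniform row clock; real pipelines use full quaternions and
# warp groups of rows inside the ISP hardware.
import numpy as np

def per_row_angles(frame_start_t, readout_time, num_rows,
                   gyro_t, gyro_angle_deg):
    # 1. Capture time of each row: row 0 first, the last row
    #    readout_time seconds later
    row_t = frame_start_t + np.arange(num_rows) * (readout_time / num_rows)
    # 2. Interpolate the integrated gyro orientation at each row's time
    return np.interp(row_t, gyro_t, gyro_angle_deg)

# 3. Warp each row r by the negative of per_row_angles(...)[r] instead of
#    applying one frame-level angle to every row.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;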

&lt;h3&gt;
  
  
  How Do You Know the Readout Time?
&lt;/h3&gt;

&lt;p&gt;One of the most critical parameters for RSC is the sensor's &lt;strong&gt;readout time&lt;/strong&gt;. This value varies depending on the sensor's row count, clock speed, and resolution mode.&lt;/p&gt;

&lt;p&gt;In commercial products, readout time is either specified in the sensor datasheet or passed as part of the frame metadata through the ISP driver. Internally at manufacturers, this value is often hardcoded into the ISP pipeline.&lt;/p&gt;

&lt;p&gt;In the open-source Gyroflow, the value can be entered manually by the user, automatically extracted from video metadata, or estimated through a calibration video. Unlike a commercial product tied to one sensor, Gyroflow must handle an enormous variety of cameras and sensors, so this process is essential.&lt;/p&gt;




&lt;h2&gt;
  
  
  RSC Implementation in Commercial Products
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Smartphones
&lt;/h3&gt;

&lt;p&gt;In smartphone ISPs, EIS and RSC typically operate as a single integrated pipeline. The ISP receives readout time information from the sensor, synchronizes it with gyro data timestamps, and performs row-level correction. All of this runs in real time within the ISP hardware accelerator.&lt;/p&gt;

&lt;p&gt;Major mobile ISPs including Qualcomm Spectra, Samsung SLSI, and MediaTek Imagiq all have built-in RSC capabilities. However, RSC quality varies between ISP vendors and depends on the tuning state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Action Cameras / Drones
&lt;/h3&gt;

&lt;p&gt;Products like GoPro and DJI handle EIS+RSC through proprietary ISPs or dedicated processors. Action cameras face particularly harsh vibration environments, making RSC critically important. GoPro's HyperSmooth and DJI's RockSteady are both pipelines that include RSC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Post-Processing
&lt;/h3&gt;

&lt;p&gt;Post-processing software like Gyroflow and Adobe Premiere's Warp Stabilizer also support RSC. The advantage of post-processing is unlimited look-ahead since the entire video can be analyzed, but the challenge lies in knowing the readout time accurately.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practical Issues with Rolling Shutter and EIS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Readout Time Synchronization
&lt;/h3&gt;

&lt;p&gt;RSC precision is directly determined by the synchronization accuracy between gyro data timestamps and sensor readout timing. Even a few milliseconds of sync error can cause corrections to misalign, leaving residual jello effects or creating new distortions.&lt;/p&gt;

&lt;p&gt;In commercial products, the gyro sensor and image sensor share a hardware timer, or the ISP aligns gyro data using frame sync signals as a reference. In cases like Gyroflow where external gyro logs are used, this synchronization must be estimated in software, which can reduce precision.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resolution and Readout Time
&lt;/h3&gt;

&lt;p&gt;Even with the same sensor, readout time changes depending on the resolution mode (4K vs 1080p), binning settings, and frame rate. In 4K mode, more rows need to be read, so readout time increases and Rolling Shutter distortion grows. Conversely, binning reduces the effective row count and shortens readout time.&lt;/p&gt;

&lt;p&gt;In practice, these values are managed per sensor mode and must be accurately passed to the ISP. If the value is wrong, RSC applies corrections at the wrong timing, resulting in distortion that's subtle but clearly visible.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Rolling Shutter is a physical characteristic arising from the sequential row-by-row exposure of the sensor. On its own, it causes skew and jello effects, but the problem becomes more severe when combined with EIS. When EIS assumes "the entire frame was captured at the same instant" and applies uniform correction, the per-row timing differences from Rolling Shutter get amplified into visible distortion.&lt;/p&gt;

&lt;p&gt;RSC (Rolling Shutter Correction) solves this by leveraging the high temporal resolution of gyro data to apply individual corrections matched to each row's capture time. Today's major mobile ISPs process EIS and RSC as an integrated pipeline, and post-processing software like Gyroflow also supports RSC.&lt;/p&gt;

&lt;p&gt;In the next article — the final installment of the EIS series — I plan to cover EIS from a camera module developer's perspective: its present and future, including AI-based EIS, sensor technology evolution, and the direction of software-defined cameras.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was written based on hands-on experience in camera module development. It describes publicly available technical principles and general industry practices, not the internal implementation of any specific product. If you find any errors or have feedback, please leave a comment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>eis</category>
      <category>camera</category>
    </item>
    <item>
      <title>Understanding the EIS Pipeline Through Gyroflow — A Look Inside Image Stabilization via Open Source</title>
      <dc:creator>Ross Kim</dc:creator>
      <pubDate>Mon, 13 Apr 2026 12:16:51 +0000</pubDate>
      <link>https://dev.to/_630fdf100267a43420f70/understanding-the-eis-pipeline-through-gyroflow-a-look-inside-image-stabilization-via-open-source-b12</link>
      <guid>https://dev.to/_630fdf100267a43420f70/understanding-the-eis-pipeline-through-gyroflow-a-look-inside-image-stabilization-via-open-source-b12</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous article, I covered the differences between OIS, EIS, and HIS. For EIS, I explained it roughly as "correcting frames based on gyro data," but what actually happens inside is more complex than that suggests.&lt;/p&gt;

&lt;p&gt;EIS is not simply "cropping a shaky video to stabilize it." It is a pipeline that reads data from gyro sensors, estimates camera motion, decides which movements to keep and which to remove, and finally transforms each frame. &lt;/p&gt;

&lt;p&gt;In this article, I will reference an open-source project called &lt;strong&gt;Gyroflow&lt;/strong&gt; to examine what each stage of the EIS pipeline actually does. While commercial EIS implementations are proprietary, Gyroflow implements the same principles as open source, making it an excellent learning resource.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Gyroflow
&lt;/h2&gt;

&lt;p&gt;Gyroflow is an open-source video stabilization software that uses &lt;strong&gt;gyroscope data&lt;/strong&gt;. It was originally built for post-processing stabilization of action camera and drone footage.&lt;/p&gt;

&lt;p&gt;Here are its key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gyro data-based&lt;/strong&gt;: Estimates motion vectors from gyro sensor data rather than image analysis (optical flow)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wide camera support&lt;/strong&gt;: Can directly read gyro data from major action cameras including GoPro, DJI, Sony, and Insta360&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lens profiles&lt;/strong&gt;: Includes built-in distortion models for various lenses, including fisheye&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU acceleration&lt;/strong&gt;: Supports near-real-time processing speeds via OpenCL/wgpu&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rust-based&lt;/strong&gt;: Core logic is written in Rust, making the code relatively readable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The source code is available on GitHub, and the community is active. Beyond practical use, it holds significant value as a reference for understanding how EIS works internally.&lt;/p&gt;




&lt;h2&gt;
  
  
  EIS Pipeline Overview
&lt;/h2&gt;

&lt;p&gt;The EIS pipeline can be broken down into five stages. The diagram below shows the overall flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnbpfug09jfh56weeirl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnbpfug09jfh56weeirl.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's walk through each stage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stage 1: Gyro Data Acquisition
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;gyroscope sensor&lt;/strong&gt; measures the camera's &lt;strong&gt;angular velocity&lt;/strong&gt;. In simple terms, it records &lt;strong&gt;in which direction and how fast&lt;/strong&gt; the camera is rotating across three axes (&lt;strong&gt;pitch, yaw, roll&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;This data is typically recorded alongside video frames at the camera firmware level. Action cameras like GoPro embed gyro data in the video file's metadata. Gyroflow parses this metadata to extract the gyro data.&lt;/p&gt;

&lt;p&gt;There are some critical points here:&lt;/p&gt;

&lt;h3&gt;
  
  
  Timestamp Synchronization
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;timestamps of gyro data and video frames must match precisely&lt;/strong&gt;. Since the gyro sensor's sampling rate (typically 200Hz–8kHz) and the video frame rate (30fps, 60fps, etc.) differ, time synchronization is essential to correctly match gyro data to specific frames.&lt;/p&gt;

&lt;p&gt;To illustrate how sensitive this synchronization is: &lt;strong&gt;an offset of just 10ms can cause the correction direction to reverse relative to the actual shake, effectively doubling the shake amplitude&lt;/strong&gt;. At 30fps, one frame is approximately 33ms, so 10ms is only one-third of a frame — yet it is fatal to the result. Gyroflow allows adjusting this offset either automatically or manually, while commercial products typically handle synchronization at the hardware level.&lt;/p&gt;
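
&lt;p&gt;The idea behind automatic offset estimation can be sketched as a small search: slide the gyro signal against an image-derived motion signal and keep the offset that agrees best. This is a conceptual illustration, not Gyroflow's actual algorithm or API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A conceptual sketch of automatic gyro/video sync (not Gyroflow's code):
# try candidate offsets and keep the one where gyro angular velocity best
# matches an angular-velocity signal derived from the frames themselves.
import numpy as np

def estimate_offset_s(frame_times, image_rate, gyro_times, gyro_rate,
                      search_ms=50, step_ms=1):
    # frame_times: per-frame timestamps; image_rate: angular velocity
    # estimated from inter-frame image motion (same axis/units as gyro_rate)
    best = (np.inf, 0.0)
    for k in range(-search_ms, search_ms + 1, step_ms):
        offset = k / 1000.0
        g = np.interp(frame_times + offset, gyro_times, gyro_rate)
        err = float(np.mean((g - image_rate) ** 2))
        best = min(best, (err, offset))
    return best[1]   # offset in seconds with the lowest mismatch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;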

&lt;h3&gt;
  
  
  Gyro Data Noise
&lt;/h3&gt;

&lt;p&gt;Gyro sensor data always contains noise. There is &lt;strong&gt;drift&lt;/strong&gt; — a slow accumulation of error — as well as high-frequency noise. Using this noisy data directly results in jittery or unstable corrections. Therefore, preprocessing of the gyro data itself (such as a &lt;strong&gt;low-pass filter&lt;/strong&gt;) is necessary.&lt;/p&gt;
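
&lt;p&gt;A first-order low-pass filter of the kind used for this preprocessing can be sketched in a few lines; the cutoff frequency here is an illustrative value, not a recommendation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A first-order (exponential) low-pass filter of the kind used to tame
# high-frequency gyro noise before integration. Cutoff is illustrative.
import math

def lowpass(samples, dt, cutoff_hz=50.0):
    alpha = dt / (dt + 1.0 / (2.0 * math.pi * cutoff_hz))
    state = samples[0]
    out = []
    for x in samples:
        state = state + alpha * (x - state)   # blend each new sample in
        out.append(state)
    return out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;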




&lt;h2&gt;
  
  
  Stage 2: Motion Estimation — Camera Orientation Calculation
&lt;/h2&gt;

&lt;p&gt;Integrating gyro data (angular velocity) over time yields the camera's &lt;strong&gt;orientation&lt;/strong&gt;. The direction the camera was pointing at each frame is expressed as a &lt;strong&gt;quaternion&lt;/strong&gt; or rotation matrix.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Angular velocity data → Time integration → Per-frame camera orientation (quaternion)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
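
&lt;p&gt;Sketched in code, the integration step might look like this (scalar-first quaternions, simple axis-angle increments; a simplified illustration rather than Gyroflow's implementation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A sketch of the integration step: angular-velocity samples accumulated
# into an orientation quaternion, stored scalar-first as [w, x, y, z].
import numpy as np

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(gyro_rad_s, dt):
    q = np.array([1.0, 0.0, 0.0, 0.0])      # identity orientation
    orientations = []
    for w in gyro_rad_s:                     # w = [wx, wy, wz] per sample
        angle = float(np.linalg.norm(w)) * dt
        if angle == 0.0:
            dq = np.array([1.0, 0.0, 0.0, 0.0])
        else:
            axis = np.asarray(w) / np.linalg.norm(w)
            dq = np.concatenate(([np.cos(angle / 2.0)],
                                 np.sin(angle / 2.0) * axis))
        q = quat_mul(q, dq)
        q = q / np.linalg.norm(q)            # re-normalize against drift
        orientations.append(q)
    return orientations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;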



&lt;p&gt;Several practical issues arise in this process:&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration Drift
&lt;/h3&gt;

&lt;p&gt;Integrating angular velocity accumulates error. This is &lt;strong&gt;gyro drift&lt;/strong&gt;. It is negligible over short intervals, but over tens of seconds to several minutes of footage, it builds up to significant levels. Gyroflow offers an option to use &lt;strong&gt;optical flow&lt;/strong&gt;-based auxiliary estimation alongside gyro data to correct this drift.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coordinate System Transformation
&lt;/h3&gt;

&lt;p&gt;The gyro sensor's coordinate system does not always match the camera (image) coordinate system. The &lt;strong&gt;axis mapping&lt;/strong&gt; varies depending on how the sensor is mounted on the circuit board. This is why Gyroflow has an &lt;strong&gt;IMU orientation&lt;/strong&gt; setting. In commercial products, this mapping is fixed in firmware.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stage 3: Smoothing — Which Movements to Keep, Which to Remove
&lt;/h2&gt;

&lt;p&gt;This stage is the core of EIS and has the greatest impact on output quality.&lt;/p&gt;

&lt;p&gt;Camera movements fall into two categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intended movements&lt;/strong&gt;: Smooth camera motion the operator deliberately performs, such as panning and tilting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unintended movements&lt;/strong&gt;: Unwanted shake from hand tremor, walking vibration, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal of EIS is to remove the latter while preserving the former. The challenge is that perfectly distinguishing between the two automatically is impossible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations of a Simple Low-Pass Filter
&lt;/h3&gt;

&lt;p&gt;The most intuitive approach is to apply a &lt;strong&gt;low-pass filter (LPF)&lt;/strong&gt; to the camera orientation data — remove high-frequency shake while passing through low-frequency motion.&lt;/p&gt;

&lt;p&gt;However, this method has a fundamental problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set the cutoff frequency low&lt;/strong&gt; → Hand shake is well suppressed, but fast panning introduces lag (the video feels like it is trailing behind)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set the cutoff frequency high&lt;/strong&gt; → Panning response is good, but hand shake remains&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ultimately, a fixed LPF cannot simultaneously achieve "stable yet responsive" results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gyroflow's Smoothing Algorithms
&lt;/h3&gt;

&lt;p&gt;Gyroflow provides several smoothing algorithms, notably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plain 3D&lt;/strong&gt;: Gaussian smoothing in quaternion space. Simple but produces basic results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Horizon lock&lt;/strong&gt;: A special mode that locks the horizon. Frequently used for drone footage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed camera&lt;/strong&gt;: Removes all movement as if the camera were fixed. Tripod effect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Velocity dampened&lt;/strong&gt;: Dynamically adjusts smoothing intensity based on camera movement speed. Strong smoothing when stationary, weak smoothing during panning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Among these, &lt;strong&gt;velocity dampened&lt;/strong&gt; is the most practical. It takes an &lt;strong&gt;adaptive approach&lt;/strong&gt; to overcome the limitations of a simple LPF: "reduce smoothing when the camera is moving fast, strengthen smoothing when stationary."&lt;/p&gt;
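
&lt;p&gt;The adaptive idea can be reduced to a scalar sketch: smoothing strength falls as angular speed rises. The mapping and constants below are illustrative, not Gyroflow's actual parameters.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# The adaptive idea behind "velocity dampened", reduced to a scalar
# sketch: smoothing strength falls as angular speed rises. Constants are
# illustrative, not Gyroflow's actual parameters.
def adaptive_smooth(angles, speeds, slow_alpha=0.02, fast_alpha=0.5,
                    full_speed=30.0):
    state = angles[0]
    out = []
    for a, v in zip(angles, speeds):
        t = min(1.0, abs(v) / full_speed)     # 0 still, 1 at a fast pan
        alpha = slow_alpha + t * (fast_alpha - slow_alpha)
        state = state + alpha * (a - state)   # track faster while panning
        out.append(state)
    return out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;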

&lt;p&gt;Commercial EIS products also use similar adaptive smoothing in principle, but the specific implementations vary by manufacturer and are not publicly disclosed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stage 4: Frame Warping
&lt;/h2&gt;

&lt;p&gt;Once a "corrected camera orientation" has been obtained through smoothing, the actual video frames need to be transformed.&lt;/p&gt;

&lt;p&gt;The frame is rotated/transformed by the difference between the original camera orientation and the corrected camera orientation. This process is called &lt;strong&gt;frame warping&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Warping Works
&lt;/h3&gt;

&lt;p&gt;Simplified, the flow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Original frame's camera orientation: &lt;strong&gt;Q_original&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Smoothed target orientation: &lt;strong&gt;Q_smoothed&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Correction rotation: &lt;strong&gt;Q_correction = Q_smoothed × Q_original⁻¹&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Apply this correction rotation to the image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Multiplying by Q_original⁻¹ (the inverse of the quaternion) means "reversing the original frame's rotation." It is an operation that cancels out the original shaky orientation and moves the frame to the desired smooth orientation.&lt;/p&gt;
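
&lt;p&gt;In code, the correction step is only a few lines once quaternion multiplication is available (scalar-first convention; for a unit quaternion the inverse is simply the conjugate):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# The correction rotation from the steps above, for unit quaternions in
# [w, x, y, z] order (same quat_mul helper as in the Stage 2 sketch).
import numpy as np

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def correction_quat(q_original, q_smoothed):
    w, x, y, z = q_original
    q_inv = np.array([w, -x, -y, -z])   # conjugate == inverse for unit q
    return quat_mul(q_smoothed, q_inv)  # Q_smoothed x Q_original^-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;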

&lt;p&gt;In practice, &lt;strong&gt;lens distortion correction&lt;/strong&gt; is also performed alongside the rotation. If the image is rotated while wide-angle fisheye distortion remains, the edges exhibit a warping artifact. This is why Gyroflow requires a &lt;strong&gt;lens profile&lt;/strong&gt; — the distortion model must be known for accurate warping.&lt;/p&gt;

&lt;h3&gt;
  
  
  FOV Loss
&lt;/h3&gt;

&lt;p&gt;Rotating/transforming a frame creates empty areas at the edges. These empty areas must be hidden by cropping the video, which reduces the &lt;strong&gt;field of view (FOV)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is the most fundamental trade-off in EIS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stronger correction&lt;/strong&gt; → more cropping → &lt;strong&gt;greater FOV loss&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preserving FOV&lt;/strong&gt; → correction intensity must be reduced → shake remains&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gyroflow allows users to directly adjust this crop ratio and offers a &lt;strong&gt;Dynamic Zoom&lt;/strong&gt; feature that automatically calculates the optimal crop per frame. Commercial products typically use a fixed crop (around 10–15%) or resolution oversampling (4K capture → 1080p stabilized output) to make FOV loss imperceptible.&lt;/p&gt;
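
&lt;p&gt;The cost of a fixed crop can be estimated with simple geometry. The numbers below (4K width, pixels per degree, correction budget) are illustrative assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Rough FOV cost of a fixed stabilization margin. All figures here
# (4K width, pixels per degree, correction budget) are assumptions.
width = 3840                  # 4K readout width in pixels
pixels_per_deg = 50.0         # assumed scale for a main wide camera
max_correction_deg = 3.0      # worst-case correction budget

max_shift = pixels_per_deg * max_correction_deg   # margin per side, px
crop_w = width - 2.0 * max_shift
zoom = width / crop_w
loss_pct = 100.0 * (1.0 - crop_w / width)
print(f"margin {max_shift:.0f} px/side, zoom x{zoom:.2f}, FOV width lost {loss_pct:.0f}%")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is also why oversampled capture (4K in, 1080p out) is attractive: the margin is paid out of resolution headroom rather than the delivered FOV.&lt;/p&gt;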




&lt;h2&gt;
  
  
  Stage 5: Output
&lt;/h2&gt;

&lt;p&gt;Encoding the warped frames produces the final stabilized video. Since Gyroflow is a post-processing tool, this occurs as a separate rendering step, whereas commercial real-time EIS executes this entire pipeline simultaneously with each frame capture.&lt;/p&gt;

&lt;p&gt;Real-time processing introduces additional constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Look-ahead limitation&lt;/strong&gt;: Future frames available for smoothing are limited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Computational resources&lt;/strong&gt;: Processing must complete within GPU/DSP resource budgets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thermal/power&lt;/strong&gt;: Must operate within the mobile device's thermal and power budget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of these constraints, even when using the same principles, there is a quality gap between post-processing (Gyroflow) and real-time (commercial EIS) output.&lt;/p&gt;




&lt;h2&gt;
  
  
  Differences Between Gyroflow and Commercial EIS
&lt;/h2&gt;

&lt;p&gt;While Gyroflow helps us understand EIS principles, there are several important differences from commercial EIS:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Gyroflow (Post-processing)&lt;/th&gt;
&lt;th&gt;Commercial EIS (Real-time)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Processing timing&lt;/td&gt;
&lt;td&gt;After recording&lt;/td&gt;
&lt;td&gt;Simultaneous with recording&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Smoothing look-ahead&lt;/td&gt;
&lt;td&gt;Entire video available&lt;/td&gt;
&lt;td&gt;A few to tens of frames&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lens distortion correction&lt;/td&gt;
&lt;td&gt;Separate lens profile&lt;/td&gt;
&lt;td&gt;Built into ISP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gyro synchronization&lt;/td&gt;
&lt;td&gt;Manual/auto estimation&lt;/td&gt;
&lt;td&gt;Hardware synchronization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Computational resources&lt;/td&gt;
&lt;td&gt;PC GPU (unlimited)&lt;/td&gt;
&lt;td&gt;Mobile DSP (limited)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FOV handling&lt;/td&gt;
&lt;td&gt;Dynamic crop available&lt;/td&gt;
&lt;td&gt;Typically fixed crop&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In commercial products, this entire pipeline runs with hardware acceleration inside the &lt;strong&gt;ISP (Image Signal Processor)&lt;/strong&gt;, often operating in hybrid mode coordinated with OIS.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;On the surface, EIS can be summarized as "cropping to suppress shake," but internally, a complex pipeline exists:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Gyro sensor data acquisition&lt;/strong&gt; and time synchronization&lt;/li&gt;
&lt;li&gt;Camera orientation estimation through &lt;strong&gt;angular velocity integration&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smoothing&lt;/strong&gt; to separate intended movements from shake&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frame warping&lt;/strong&gt; that accounts for lens distortion&lt;/li&gt;
&lt;li&gt;Managing the trade-off between &lt;strong&gt;FOV loss&lt;/strong&gt; and correction intensity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Gyroflow implements this pipeline transparently as open source, making it an excellent reference for understanding how EIS works. For those interested in reading the actual code, I recommend checking out the GitHub repository.&lt;/p&gt;

&lt;p&gt;In the next article, I plan to cover the relationship between Rolling Shutter and EIS — why rolling shutter correction is additionally required in EIS.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was written based on hands-on experience in camera module development. The descriptions of Gyroflow reference its publicly available source code and documentation, and do not describe the internal implementation of any specific product. If you find any errors or have feedback, please leave a comment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>eis</category>
      <category>software</category>
      <category>camera</category>
    </item>
    <item>
      <title>OIS vs EIS vs HIS — A Camera Module Developer's Guide to Image Stabilization</title>
      <dc:creator>Ross Kim</dc:creator>
      <pubDate>Fri, 10 Apr 2026 06:48:03 +0000</pubDate>
      <link>https://dev.to/_630fdf100267a43420f70/ois-vs-eis-vs-his-a-camera-module-developers-guide-to-image-stabilization-45c0</link>
      <guid>https://dev.to/_630fdf100267a43420f70/ois-vs-eis-vs-his-a-camera-module-developers-guide-to-image-stabilization-45c0</guid>
      <description>&lt;h2&gt;
  
  
  Why Image Stabilization Matters
&lt;/h2&gt;

&lt;p&gt;Smartphone camera megapixel counts keep climbing year after year, but what truly determines video quality lies elsewhere: &lt;strong&gt;Image Stabilization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In video recording, frame-to-frame shake accumulates, making footage uncomfortable to watch no matter how high the resolution. For photos, faster shutter speeds can mitigate the effects of hand shake in bright conditions, but in low light, slower shutter speeds make motion blur an unavoidable problem.&lt;/p&gt;

&lt;p&gt;There are three main approaches to solving this: &lt;strong&gt;OIS (Optical)&lt;/strong&gt;, &lt;strong&gt;EIS (Electronic)&lt;/strong&gt;, and &lt;strong&gt;HIS (Hybrid)&lt;/strong&gt;, which combines the two. Let's break down the principles, pros and cons, and real-world differences in how each actually works.&lt;/p&gt;




&lt;h2&gt;
  
  
  OIS — Optical Image Stabilization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;OIS physically moves the lens or image sensor to counteract hand shake. When the gyro sensor inside the camera module detects movement, an actuator shifts the lens assembly or sensor in the opposite direction to correct the optical axis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation Methods
&lt;/h3&gt;

&lt;p&gt;OIS is broadly divided into two approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lens-shift&lt;/strong&gt;: Moves a portion of the lens group along the X and Y axes. This is the traditional approach with a long history in the camera industry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensor-shift&lt;/strong&gt;: Moves the image sensor itself. Apple introduced this with the iPhone 12 Pro Max and has since expanded it across the entire iPhone lineup. Samsung also adopted sensor-shift OIS with the Galaxy S26 Ultra. Chinese manufacturers like Huawei (Pura 70 Ultra, Pura 80 Ultra) and OPPO have also implemented it in their flagships. Since the sensor is lighter than the lens, it offers faster response times and a wider correction range, and adoption is expanding across the flagship market.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Actuator types include VCM (Voice Coil Motor), SMA (Shape Memory Alloy), and ball-guide mechanisms, selected based on tradeoffs between module size, power consumption, and correction range.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Optical correction means &lt;strong&gt;no image quality loss&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Stabilizes the light reaching the sensor, making it &lt;strong&gt;strong in low-light conditions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Effective for both still photos and video&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Additional components (actuators, gyros, driver ICs) &lt;strong&gt;increase module cost and size&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Correction range has &lt;strong&gt;physical limits&lt;/strong&gt; (typically around ±1 degree)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durability concerns&lt;/strong&gt; from drops and impacts (OIS lock mechanisms required)&lt;/li&gt;
&lt;li&gt;Additional power consumption&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  EIS — Electronic Image Stabilization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;EIS corrects hand shake through software rather than hardware. It calculates inter-frame shake based on gyro sensor data and applies &lt;strong&gt;cropping and warping (geometrically transforming and aligning the image)&lt;/strong&gt; to each frame to produce stabilized video output.&lt;/p&gt;

&lt;p&gt;In simple terms, the actual output uses a narrower area than the full sensor capture, and the position of this output region is adjusted frame by frame according to the detected shake.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pipeline
&lt;/h3&gt;

&lt;p&gt;A typical EIS processing pipeline works as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Gyro data acquisition&lt;/strong&gt; — Angular velocity data is read from the IMU (Inertial Measurement Unit)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Motion estimation&lt;/strong&gt; — Gyro data is combined with (in some cases) image-based motion vectors to estimate camera movement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target path generation&lt;/strong&gt; — Intentional panning is separated from unintentional shake. A smoothing filter creates a stabilized target path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frame transformation&lt;/strong&gt; — Affine (transformation preserving parallelism, including rotation, scaling, and translation) or homography (transformation that also accounts for perspective) transforms are applied to each frame for correction; see the sketch after this list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crop and output&lt;/strong&gt; — The final frame is cropped to remove black borders caused by the transformation&lt;/li&gt;
&lt;/ol&gt;
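
&lt;p&gt;A minimal sketch of the last two stages, using OpenCV's affine warp; the correction angle would come from the earlier stages, and the fixed crop ratio here is an assumed value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A minimal sketch of stages 4 and 5: warp one frame by the correction
# angle and hide the resulting borders with a fixed crop. The correction
# angle would come from the earlier stages; the crop ratio is assumed.
import cv2

def stabilize_frame(frame, correction_deg, crop_ratio=0.1):
    h, w = frame.shape[:2]
    # Affine transform: rotate about the frame center by the correction
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), correction_deg, 1.0)
    warped = cv2.warpAffine(frame, m, (w, h))
    # Fixed crop removes the black borders introduced by the warp
    mx = int(w * crop_ratio / 2.0)
    my = int(h * crop_ratio / 2.0)
    return warped[my:h - my, mx:w - mx]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;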

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No additional hardware required, &lt;strong&gt;favorable for cost reduction&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;No impact on module size&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Algorithm improvements&lt;/strong&gt; possible through software updates&lt;/li&gt;
&lt;li&gt;No physical correction range limits, enabling &lt;strong&gt;compensation for large movements&lt;/strong&gt; (practically effective up to around ±3 degrees)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FOV (field of view) loss&lt;/strong&gt; due to cropping (typically 10–20%)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolution degradation&lt;/strong&gt; from the crop + warp process&lt;/li&gt;
&lt;li&gt;Difficult to apply to still photos (requires inter-frame comparison)&lt;/li&gt;
&lt;li&gt;Weak at correcting high-frequency vibrations&lt;/li&gt;
&lt;li&gt;Cannot reduce motion blur itself in low light (since it's not optical correction)&lt;/li&gt;
&lt;li&gt;Combined with &lt;strong&gt;rolling shutter distortion&lt;/strong&gt;, correction results can appear unnatural&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  HIS — Hybrid Image Stabilization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Concept
&lt;/h3&gt;

&lt;p&gt;HIS combines OIS and EIS. The term "HIS" itself is more of a marketing and technical convenience label used by manufacturers rather than an industry standard, but the concept of using OIS and EIS together is adopted by most flagship smartphones today.&lt;/p&gt;

&lt;h3&gt;
  
  
  Actual Operating Structure: OIS First-Pass Correction → EIS Post-Processing
&lt;/h3&gt;

&lt;p&gt;It's easy to imagine HIS as OIS and EIS equally sharing the workload, but the actual implementation is closer to a &lt;strong&gt;serial pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;OIS corrects first, physically&lt;/strong&gt; — When the gyro sensor detects shake, the actuator moves the lens or sensor to optically counteract the movement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EIS reads the OIS correction results and post-processes&lt;/strong&gt; — Residual micro-shake, rolling shutter distortion, and motion blur remaining after OIS correction are refined through software to produce smooth final video.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key point is that &lt;strong&gt;EIS knows what OIS corrected and by how much&lt;/strong&gt;. By passing OIS actuator motion data to the EIS pipeline, over-compensation is avoided and the strengths of each system can be leveraged.&lt;/p&gt;

&lt;p&gt;The best example of this structure is Google Pixel 2's (2017) &lt;strong&gt;Fused Video Stabilization&lt;/strong&gt;. Google extracts gyro signals and OIS motion data together to precisely estimate camera movement, then synthesizes frames through a 3-stage pipeline with machine learning-based motion filtering. Sony Xperia's Optical SteadyShot also officially employs a structure that drives OIS and EIS simultaneously.&lt;/p&gt;

&lt;h3&gt;
  
  
  Angle-Based Role Division
&lt;/h3&gt;

&lt;p&gt;As an approach to OIS-EIS collaboration, &lt;strong&gt;dividing roles based on correction angle&lt;/strong&gt; is also discussed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OIS's physical correction limit is approximately &lt;strong&gt;±1 degree&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;EIS can provide practical correction up to approximately &lt;strong&gt;±3 degrees&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using this difference, shake within ±1 degree is handled optically by OIS without quality loss, while larger movements exceeding ±1 degree are handled by EIS through software. This minimizes EIS crop amount while securing a wide correction range.&lt;/p&gt;
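
&lt;p&gt;A toy sketch of this split, using the ballpark limits above (illustrative figures, not vendor specifications):&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;OIS_LIMIT_DEG = 1.0   # approximate physical actuator stroke
EIS_LIMIT_DEG = 3.0   # approximate practical software range

def clamp(x, limit):
    """Clamp x to the symmetric range [-limit, limit]."""
    return max(-limit, min(limit, x))

def split_correction(shake_deg):
    """Divide a shake angle between OIS and EIS by correction range."""
    # OIS takes everything within its physical stroke...
    ois_part = clamp(shake_deg, OIS_LIMIT_DEG)
    # ...and EIS handles the overflow, up to its crop-limited range.
    eis_part = clamp(shake_deg - ois_part, EIS_LIMIT_DEG)
    return ois_part, eis_part

print(split_correction(0.7))  # (0.7, 0.0): OIS alone, zero crop cost
print(split_correction(2.5))  # (1.0, 1.5): OIS saturates, EIS takes the rest
&lt;/code&gt;&lt;/pre&gt;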

&lt;p&gt;However, the extent to which this approach is implemented in actual commercial products varies by manufacturer, and most do not disclose their detailed internal structures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Differences by Camera and Mode
&lt;/h3&gt;

&lt;p&gt;One important point to note is that even on the same smartphone, the stabilization method changes depending on the camera and shooting mode.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Still photos&lt;/strong&gt; → OIS only (EIS cannot be applied as it requires inter-frame comparison)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Video recording (main camera)&lt;/strong&gt; → OIS + EIS combination often operates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ultra-wide camera&lt;/strong&gt; → EIS only (no OIS hardware)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Telephoto camera&lt;/strong&gt; → Primarily OIS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even modes marketed as providing powerful video stabilization, such as Samsung's "Super Steady" and Apple's "Action Mode," actually work by switching to the ultra-wide camera and maximizing EIS crop. Rather than sophisticated OIS+EIS collaboration, they are closer to &lt;strong&gt;modes that aggressively apply EIS&lt;/strong&gt;.&lt;/p&gt;
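
&lt;p&gt;Put differently, the per-camera behavior can be pictured as a simple policy lookup. The table below is a hypothetical sketch restating the behavior described above, not any vendor's actual configuration.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Hypothetical stabilization policy table; vendors do not publish theirs.
STABILIZATION_POLICY = {
    ("main", "photo"):       ["OIS"],
    ("main", "video"):       ["OIS", "EIS"],
    ("ultrawide", "video"):  ["EIS"],            # no OIS hardware
    ("ultrawide", "action"): ["EIS_MAX_CROP"],   # "Super Steady" style mode
    ("tele", "photo"):       ["OIS"],
}

def pick_stabilizers(camera, mode):
    """Return the stabilizer chain for a camera/mode pair."""
    return STABILIZATION_POLICY.get((camera, mode), ["EIS"])

print(pick_stabilizers("main", "video"))       # ['OIS', 'EIS']
print(pick_stabilizers("ultrawide", "action")) # ['EIS_MAX_CROP']
&lt;/code&gt;&lt;/pre&gt;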

&lt;h3&gt;
  
  
  Pros
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Adding EIS software post-processing on top of OIS optical correction produces &lt;strong&gt;smoother video&lt;/strong&gt; than either alone&lt;/li&gt;
&lt;li&gt;Since OIS handles first-pass correction, &lt;strong&gt;EIS crop amount can be reduced&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Most flagship smartphones adopt this combination, making it &lt;strong&gt;effectively the standard configuration for video stabilization&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Synchronization between OIS motion data and the EIS pipeline is challenging&lt;/strong&gt; to implement&lt;/li&gt;
&lt;li&gt;OIS hardware costs are still included&lt;/li&gt;
&lt;li&gt;Behavior varies by camera and mode, making it &lt;strong&gt;hard for users to get a consistent experience&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Implementation levels vary significantly between manufacturers, and most do not disclose their detailed structures&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Comparison Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attribute&lt;/th&gt;
&lt;th&gt;OIS&lt;/th&gt;
&lt;th&gt;EIS&lt;/th&gt;
&lt;th&gt;HIS&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Correction Method&lt;/td&gt;
&lt;td&gt;Hardware (Physical)&lt;/td&gt;
&lt;td&gt;Software (Digital)&lt;/td&gt;
&lt;td&gt;Hardware + Software&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image Quality Impact&lt;/td&gt;
&lt;td&gt;No loss&lt;/td&gt;
&lt;td&gt;FOV loss, resolution degradation&lt;/td&gt;
&lt;td&gt;OIS-level maintained + minimized EIS crop&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Low-Light Performance&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Still Photos&lt;/td&gt;
&lt;td&gt;Effective&lt;/td&gt;
&lt;td&gt;Ineffective&lt;/td&gt;
&lt;td&gt;Same as OIS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Video Stabilization&lt;/td&gt;
&lt;td&gt;Effective (range limited)&lt;/td&gt;
&lt;td&gt;Effective (quality tradeoff)&lt;/td&gt;
&lt;td&gt;Most effective&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Module Cost&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;No additional cost&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Module Size&lt;/td&gt;
&lt;td&gt;Increases&lt;/td&gt;
&lt;td&gt;No impact&lt;/td&gt;
&lt;td&gt;Increases&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implementation Difficulty&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Correction Range&lt;/td&gt;
&lt;td&gt;±1° (physical limit)&lt;/td&gt;
&lt;td&gt;~±3°&lt;/td&gt;
&lt;td&gt;OIS ±1° + extended by EIS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Future Trends
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Rise of AI-Based EIS
&lt;/h3&gt;

&lt;p&gt;While traditional EIS has been based on gyro data and conventional computer vision, &lt;strong&gt;deep learning-based motion estimation and frame synthesis&lt;/strong&gt; technologies have been advancing rapidly in recent years. Google's Pixel series leads this field, evolving toward reducing crop loss by combining inter-frame interpolation with super-resolution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Expansion of Sensor-Shift OIS
&lt;/h3&gt;

&lt;p&gt;In smartphones, sensor-shift OIS was pioneered by Apple, followed by Chinese manufacturers such as Huawei and OPPO, and Samsung finally adopted it with the Galaxy S26 Ultra. With major manufacturers adopting it one after another, sensor-shift is becoming the new standard in the flagship market. Because the stabilization mechanism moves the sensor rather than the lens, sensor-shift also increases lens design flexibility, creating strong synergy with lens module miniaturization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software-Defined Camera
&lt;/h3&gt;

&lt;p&gt;In the long term, the weight of hardware correction will likely decrease, with software correction combining sensor data and AI processing becoming more important. However, the limits of physics — particularly insufficient light in low-light conditions — cannot be fully overcome by software alone, so OIS is unlikely to disappear entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Image stabilization is not a simple feature but a system design challenge in which hardware, software, physics, and cost are intricately intertwined. There is no single "best" answer among OIS, EIS, and HIS. The optimal combination varies depending on product positioning, target price, and primary shooting scenarios.&lt;/p&gt;

&lt;p&gt;In the next article, I plan to dive deeper into the internal structure of the EIS pipeline, covering gyro data processing, frame transformation, and smoothing algorithms.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was written based on hands-on experience in camera module development. What type of image stabilization does your smartphone use? If you find any errors in this article, please point them out in the comments. Feedback and opinions are always welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>camera</category>
      <category>eis</category>
      <category>ois</category>
    </item>
    <item>
      <title>What It's Like Being a Software Engineer at a Hardware Company — The Reality of SW Development Dragged Along by HW Schedules</title>
      <dc:creator>Ross Kim</dc:creator>
      <pubDate>Tue, 07 Apr 2026 14:58:33 +0000</pubDate>
      <link>https://dev.to/_630fdf100267a43420f70/what-its-like-being-a-software-engineer-at-a-hardware-company-the-reality-of-sw-development-2ho1</link>
      <guid>https://dev.to/_630fdf100267a43420f70/what-its-like-being-a-software-engineer-at-a-hardware-company-the-reality-of-sw-development-2ho1</guid>
      <description>&lt;p&gt;I used to work in software development in the mobile industry, and now I develop software at a camera module company. We make camera modules for smartphones and industrial devices. The core of the product is hardware — lenses, sensors, mechanical design — and software's role is to make that hardware work or to develop test programs.&lt;br&gt;
Working as a software engineer in this kind of environment, you run into some unique situations. Things that would be hard to imagine at a pure software company (though, to be fair, I've never actually worked at one, so I can't say for sure). Here are a few experiences I've put together.&lt;br&gt;
&lt;strong&gt;"We Can't Change the Hardware, So Fix It in Software"&lt;/strong&gt;&lt;br&gt;
This is the phrase you hear the most as a software engineer at a hardware company.&lt;br&gt;
Once, the hardware team physically mounted a display in the wrong orientation. With assembly already complete, tearing it off and reattaching it didn't make sense in terms of cost or time. So the request that came to software was simple: "Just flip the display output direction in software."&lt;br&gt;
This wasn't a one-time incident. When problems come up in hardware design, the cost of fixing them is high, so it's an everyday occurrence for the fix to land on software's plate. From software's perspective, work that was never part of the plan suddenly appears. And yet, the deadline doesn't change.&lt;br&gt;
&lt;strong&gt;An Organization That Treats Software Like Hardware&lt;/strong&gt;&lt;br&gt;
The opposite pattern exists too.&lt;br&gt;
Sometimes a fix that would clearly take just a few lines of code gets met with: "That's already been finalized, so changes aren't possible." Hardware-centric organizations tend to be very conservative about changes after design sign-off. Recutting molds or redesigning boards costs real money, so the mindset itself makes sense.&lt;br&gt;
The problem is when that same standard gets applied to software. Software's essential strength is its flexibility — but when you lock it into the same process as hardware, inefficiency follows. Instead of a straightforward fix, you end up adding workaround logic or tacking on extra features. The code gets more complex with every iteration.&lt;br&gt;
&lt;strong&gt;When Hardware Is Delayed, It's the Software Schedule That Shrinks&lt;/strong&gt;&lt;br&gt;
Product development has deadlines. Delivery dates promised to clients don't move easily.&lt;br&gt;
The development sequence is usually hardware first, software second. You need the sensor and mechanical parts ready before you can load firmware, and tuning and validation have to happen on actual hardware — so this order makes sense.&lt;br&gt;
But when the hardware phase gets delayed due to design changes or component issues, the overall deadline stays the same while the time given to software shrinks. Once the hardware team finishes their part, their stance becomes: "We've handed it off, so figure out the rest within the remaining time." (Or, as mentioned above, it becomes: "Just find a way to fix it in software.")&lt;br&gt;
A software development period that was supposed to be a month can turn into two weeks. But the amount of work doesn't shrink. If this happened once or twice, you could push through with overtime. But when it repeats on every project, it just starts to feel normal.&lt;br&gt;
&lt;strong&gt;So Is This Environment All Bad?&lt;/strong&gt;&lt;br&gt;
I'm not trying to just list complaints.&lt;br&gt;
Working in this kind of environment, you naturally become a software engineer who understands hardware. You learn to read circuit diagrams and develop an instinct for designing software on top of physical constraints. I think that's an experience that's hard to gain in a pure software environment.&lt;br&gt;
And when you keep responding to requests like "solve this hardware problem with software," the range of your problem-solving ability grows. Implementing defined requirements and creatively working around physical constraints are two different kinds of skills.&lt;br&gt;
Being a software engineer at a hardware company is uncomfortable, but it's also a uniquely rewarding environment for growth.&lt;/p&gt;




&lt;p&gt;I develop software at a camera module company. I write about real experiences from the field.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
