DEV Community

Marco Rinaldi


Event cameras and edge inference: why the frame-based mindset is still holding us back

Most edge inference pipelines for computer vision are built around a mental model that goes: capture frame → preprocess → run model → get result → repeat. Everything is designed around this loop. The latency budget, the model architecture, the preprocessing pipeline, the hardware selection: all of it assumes that "input" means "a dense grid of pixel values captured at a regular interval."

This works. For many applications it works well. But there's a class of problems where this mental model is the actual constraint — not the model, not the hardware, not the optimization. The sensing paradigm.

I've been working with event cameras at Prophesee for a while now, and I want to give you an honest, detailed picture of where they change the game and where they don't. Not hype. The technology is genuinely interesting and the engineering challenges are real.

What an event camera outputs

A conventional camera takes a photo. All pixels fire at the same time, you get a matrix of intensity values, you process it.

An event camera works completely differently. Each pixel operates independently and fires an event when the change in log luminance at that pixel crosses a threshold. The output is not a frame — it's a continuous, asynchronous stream of events, each containing:

event = {
    x: int,          # pixel column
    y: int,          # pixel row
    t: int,          # timestamp in microseconds
    p: bool,         # polarity: True = brightness increase, False = decrease
}

A static scene generates almost no events. A fast-moving object generates a dense burst. The data rate is determined by scene activity, not a fixed clock.

This has concrete engineering consequences:

| Property | Frame-based camera | Event camera |
| --- | --- | --- |
| Temporal resolution | 1/fps (e.g. 33ms at 30fps) | ~1 microsecond |
| Latency floor | Frame period | Sub-millisecond |
| Motion blur | Present for fast objects | Eliminated |
| Dynamic range | ~60 dB | 120+ dB |
| Data at rest | Constant (full frame every period) | Near-zero |
| Data during motion | Same | Proportional to activity |
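
To make the data-rate rows concrete, here's a back-of-envelope bandwidth comparison. The 8-byte event size and the activity levels are illustrative assumptions on my part, not datasheet numbers:

```python
# Back-of-envelope sensor bandwidth: frame camera vs event camera.
# Event size and event rates are assumed for illustration only.
FRAME_BPS = 640 * 480 * 1 * 30   # 8-bit mono VGA at 30 fps: ~9.2 MB/s, constant
EVENT_SIZE = 8                   # assumed bytes per packed event (x, y, t, p)

def event_bandwidth_mbps(events_per_second: float) -> float:
    """Event stream bandwidth in MB/s for a given level of scene activity."""
    return events_per_second * EVENT_SIZE / 1e6

for label, rate in [("static scene", 1e3), ("moderate motion", 1e6), ("peak burst", 50e6)]:
    print(f"{label:>15}: {event_bandwidth_mbps(rate):8.3f} MB/s "
          f"(frame camera: {FRAME_BPS / 1e6:.1f} MB/s regardless)")
```

The asymmetry is the point: at rest the event stream costs almost nothing, while the frame camera pays full price whether anything moves or not.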

Processing event data — the engineering reality

The mental model shift required here is significant. You don't have frames. You have a sparse, asynchronous, continuous stream. Standard CNNs that expect (batch, channels, height, width) tensors don't directly apply.

There are three main approaches in practice:

1. Event accumulation into pseudo-frames

The pragmatic approach. Accumulate events over a fixed time window (or fixed event count), render them into an image-like representation, then run a standard CNN.

import numpy as np
import torch

def events_to_voxel_grid(
    events: np.ndarray,  # shape (N, 4): x, y, t, p
    num_bins: int = 5,
    height: int = 480,
    width: int = 640,
) -> torch.Tensor:
    """
    Convert events to a voxel grid representation.
    Each bin accumulates events in a time slice.
    Output shape: (num_bins, height, width)
    """
    t_start = events[:, 2].min()
    t_end = events[:, 2].max()

    if t_end == t_start:
        return torch.zeros(num_bins, height, width)

    # Normalize timestamps to [0, num_bins)
    t_normalized = (events[:, 2] - t_start) / (t_end - t_start) * (num_bins - 1)

    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    ts = t_normalized
    ps = events[:, 3] * 2 - 1  # convert 0/1 polarity to -1/+1

    # Bilinear interpolation across time bins: each event contributes
    # to its two neighboring bins, weighted by temporal distance
    t_floor = np.floor(ts).astype(int)
    t_ceil = t_floor + 1
    weight_floor = 1.0 - (ts - t_floor)
    weight_ceil = ts - t_floor

    in_bounds = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)

    # Vectorized scatter-add: np.add.at handles repeated indices correctly
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    mask = in_bounds & (t_floor < num_bins)
    np.add.at(voxel, (t_floor[mask], ys[mask], xs[mask]), ps[mask] * weight_floor[mask])
    mask = in_bounds & (t_ceil < num_bins)
    np.add.at(voxel, (t_ceil[mask], ys[mask], xs[mask]), ps[mask] * weight_ceil[mask])

    return torch.from_numpy(voxel)

Then feed this voxel grid to any standard backbone. ResNet, EfficientNet, whatever your latency/accuracy tradeoff requires. You can export to ONNX and run TensorRT on it exactly like any other vision model.

The tradeoff: you've reintroduced a temporal discretization. If your window is 10ms, you've effectively given yourself 100fps temporal resolution. Better than 30fps, worse than raw event resolution.
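
The fixed-event-count variant mentioned above sidesteps part of this tradeoff: the window duration adapts to scene activity, so fast motion gets finer temporal slices automatically. A minimal sketch (the helper name is mine):

```python
import numpy as np

def slice_by_count(events: np.ndarray, events_per_window: int):
    """Yield consecutive windows holding a fixed number of events.
    Window *duration* varies with scene activity: dense bursts give
    short windows, quiet periods give long ones. A trailing partial
    window is dropped."""
    for start in range(0, len(events) - events_per_window + 1, events_per_window):
        yield events[start:start + events_per_window]

# Synthetic stream: 10 events in a fast burst, then 10 spread over a lull
ts = np.concatenate([np.arange(10), 1000 + np.arange(10) * 500])
events = np.stack([np.zeros(20), np.zeros(20), ts, np.ones(20)], axis=1)

for window in slice_by_count(events, 10):
    duration_us = window[:, 2].max() - window[:, 2].min()
    print(f"{len(window)} events spanning {duration_us:.0f} us")
```

Same event count per window, wildly different durations: that's the adaptive temporal resolution in action.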

2. Graph neural networks over spatiotemporal point clouds

Treat events as a 3D point cloud in (x, y, t) space. Use GNNs to process them natively.

import numpy as np
import torch
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph

def events_to_graph(
    events: np.ndarray,  # (N, 4): x, y, t, p
    k: int = 16,          # k-nearest neighbors
    time_weight: float = 0.1,  # weight of time dimension relative to spatial
) -> Data:
    """
    Convert events to a k-NN graph for GNN processing.
    Nodes are events, edges connect spatiotemporally close events.
    """
    # Normalize each dimension to [0, 1]; subtract the minimum first so
    # absolute microsecond timestamps don't collapse the time axis
    pos = torch.tensor(events[:, :3], dtype=torch.float32)
    for dim in range(3):
        lo, hi = pos[:, dim].min(), pos[:, dim].max()
        if hi > lo:
            pos[:, dim] = (pos[:, dim] - lo) / (hi - lo)
    pos[:, 2] *= time_weight  # down-weight time relative to space

    # Node features: polarity and position
    x = torch.zeros(len(events), 4, dtype=torch.float32)
    x[:, 0] = torch.tensor(events[:, 3] * 2 - 1)  # polarity: -1 or +1
    x[:, 1:] = pos  # position as additional features

    # Build k-NN graph
    edge_index = knn_graph(pos, k=k)

    return Data(x=x, edge_index=edge_index, pos=pos)

GNN approaches preserve temporal information better and handle the asynchronous nature more naturally. The cost: significantly higher computational overhead and harder to deploy on constrained edge hardware.

3. Spiking neural networks

The philosophically correct approach. Spiking neural networks (SNNs) are event-driven by nature — neurons fire when their membrane potential crosses a threshold, which maps naturally onto event data.

# Using SpikingJelly (pip install spikingjelly)
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, functional, layer

class SpikingEventEncoder(nn.Module):
    def __init__(self, in_channels: int = 2, feature_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            layer.Conv2d(in_channels, 32, 3, padding=1, bias=False),
            layer.BatchNorm2d(32),
            neuron.LIFNode(tau=2.0),  # Leaky integrate-and-fire
            layer.Conv2d(32, 64, 3, stride=2, padding=1, bias=False),
            layer.BatchNorm2d(64),
            neuron.LIFNode(tau=2.0),
            layer.Conv2d(64, feature_dim, 3, stride=2, padding=1, bias=False),
            layer.BatchNorm2d(feature_dim),
            neuron.LIFNode(tau=2.0),
        )
        functional.set_step_mode(self, step_mode='m')  # multi-step mode

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x shape: (T, B, C, H, W) — T timesteps of binary spike frames
        return self.encoder(x)

SNNs are energy-efficient on neuromorphic hardware (Intel Loihi, BrainScaleS) — we're talking orders of magnitude lower power than GPU inference. Harder to train (backpropagation through spikes requires surrogate gradients), harder to deploy on standard hardware, and the training frameworks are still maturing.
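
To make "surrogate gradients" concrete: the spike is a step function whose true gradient is zero almost everywhere, so training substitutes a smooth derivative on the backward pass. A minimal hand-rolled version in plain PyTorch (SpikingJelly ships its own surrogate functions; this just shows the idea, and the sharpness value is an assumption):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Forward: hard threshold (the spike).
    Backward: derivative of a scaled sigmoid, so gradients can flow
    through the otherwise non-differentiable step."""
    ALPHA = 4.0  # surrogate sharpness; an assumed value

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.ALPHA * v)
        return grad_output * SurrogateSpike.ALPHA * sig * (1 - sig)

v = torch.tensor([-0.5, 0.2, 1.5], requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(spikes)   # tensor([0., 1., 1.])
print(v.grad)   # nonzero everywhere, unlike the true step derivative
```

The forward pass stays binary and event-driven; only the training-time gradient is smoothed.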

Where the architecture choice is decisive

I'll be direct about this. Event cameras are not better than frame cameras in general. They're decisively better for specific problems.

High-speed tracking: A ball at 200km/h. A manufacturing defect on a conveyor at 10m/s. At 240fps, you have ~4ms between frames and ~23cm of position uncertainty for that ball. Event cameras track position continuously with microsecond resolution. No model architecture on a frame-based camera solves this; it's a sensing problem.
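
The arithmetic behind that uncertainty figure, for a few frame rates:

```python
# Inter-frame position uncertainty for an object moving at 200 km/h
speed_mps = 200 / 3.6  # ~55.6 m/s

for fps in (30, 240, 1000):
    dt_ms = 1000 / fps
    blur_cm = speed_mps * (dt_ms / 1000) * 100
    print(f"{fps:>5} fps: {dt_ms:5.2f} ms between frames -> {blur_cm:5.1f} cm uncertainty")
```

Even at 1000fps the gap to microsecond-resolution sensing is two to three orders of magnitude.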

Low-latency reactive control: If you need your control loop to react in under 2ms, the 33ms floor of 30fps sensing is disqualifying. Even 240fps (4ms) is marginal for some robotics tasks. Event cameras give you sub-millisecond sensing latency.

High dynamic range scenes: Outdoor robotics, automotive, industrial inspection in variable lighting. Frame cameras require HDR fusion tricks (multi-exposure, tone mapping) that add latency and complexity. Event cameras have 120+ dB dynamic range natively.

Power-constrained edge deployment: A static scene generates near-zero events. Data-proportional computation means dramatic power savings at rest. Pair this with an SNN and neuromorphic hardware and you have inference systems that run on milliwatts.

Where frame-based is still correct

Static scene understanding: events encode change, not state. If you need to understand a scene that isn't moving, you have no data.

Texture and color: events detect luminance changes, not absolute color or fine texture. For tasks where texture is discriminative, frame cameras are better.

Existing ecosystem: PyTorch, TensorRT, ONNX, OpenCV, every pre-trained backbone — all of this is built for frame-based input. The event camera tooling ecosystem is much smaller and less mature.

The practical advice

If you're running real-time inference at standard frame rates, with a latency budget of ~30ms or more, on scenes with standard dynamic range: stick with frame-based. The maturity gap in tooling, pre-trained models, and deployment infrastructure is not worth it.

If your application involves any of these — fast motion, low latency requirements, HDR environments, power-constrained edge hardware — start evaluating event cameras seriously. The Prophesee Metavision SDK is the most mature industrial option. IniVation makes the DAVIS cameras that are popular in research.

The frame-based mindset isn't wrong. It's just not universal. The sensing architecture is a design choice like any other, and for some problems, it's the most important design choice you'll make.


I'm working with event cameras daily and happy to answer specific questions about hardware, processing pipelines, or deployment. Drop them in the comments.
