
byeval

Posted on • Originally published at happyimg.com

Auto-Detect Should Not Auto-Apply: Building Reviewable Redaction Overlays

The easiest way to make automatic redaction feel unsafe is to skip the review step.

OCR, barcode detection, license-plate heuristics, and signature detection all make mistakes. If the product silently bakes those guesses into the exported image, users cannot tell whether the result is cautious, incomplete, or just wrong.

The better architecture is to turn detections into normal editor objects first.

The full companion guide is here:

https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export

Normalize detector output before it reaches the editor

Different detectors can start from very different internals:

  • OCR text blocks
  • barcode APIs
  • plate-specific filters
  • connected-component image analysis

The editor should not need to know about any of that once a candidate region exists.

The useful boundary is one normalized shape:

  • left
  • top
  • width
  • height

Once every detector emits that region format, the editor can stay consistent even while the detection engines stay totally different.
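As a sketch, each detector gets a small adapter that maps its native output into that shared shape. The `RedactRegion` name and the corner-coordinate `OcrBlock` fields below are illustrative assumptions, not a real API:

```typescript
// Shared region format every detector emits (names are illustrative).
interface RedactRegion {
  left: number;
  top: number;
  width: number;
  height: number;
}

// Hypothetical OCR result that reports corner coordinates
// instead of width/height.
interface OcrBlock {
  x0: number;
  y0: number;
  x1: number;
  y1: number;
}

// Adapter: converts the detector-specific shape into the normalized one.
function fromOcrBlock(block: OcrBlock): RedactRegion {
  return {
    left: Math.min(block.x0, block.x1),
    top: Math.min(block.y0, block.y1),
    width: Math.abs(block.x1 - block.x0),
    height: Math.abs(block.y1 - block.y0),
  };
}
```

A barcode or plate detector would get its own adapter, but downstream code only ever sees `RedactRegion`.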

Detection results should become editor objects

In the markup editor, automatic suggestions are inserted as ordinary objects.

Text detections can create redact rectangles. Blur and pixelation detections can create effect patches. The important part is that the result is editable:

  • moveable
  • resizable
  • deletable
  • visible before export

That is a much better interaction model than "the detector already changed your image."
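To make that concrete, a suggestion can live in the editor as an ordinary store entry whose geometry the user is free to move, resize, or delete before export. The `ObjectStore` shape here is a hypothetical sketch, not the editor's real API:

```typescript
// Illustrative editor object: a detection suggestion that stays editable.
interface EditorObject {
  id: string;
  kind: "redact" | "blur" | "pixelate";
  region: { left: number; top: number; width: number; height: number };
}

// Hypothetical store: suggestions are plain objects until export.
class ObjectStore {
  private objects = new Map<string, EditorObject>();

  add(obj: EditorObject) {
    this.objects.set(obj.id, obj);
  }

  // "Moveable" just means mutating the region before export.
  move(id: string, dx: number, dy: number) {
    const o = this.objects.get(id);
    if (o) {
      o.region.left += dx;
      o.region.top += dy;
    }
  }

  // "Deletable" means a wrong suggestion never reaches the saved file.
  remove(id: string) {
    this.objects.delete(id);
  }

  list(): EditorObject[] {
    return [...this.objects.values()];
  }
}
```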

Tag what the detector owns

The next implementation detail is what makes re-running detection usable.

Auto-generated objects are tagged with a source identifier:

redact.data = {
  objectType: "shape",
  filled: true,
  autoGenerated: options?.autoGenerated, // source tag, e.g. "ocr" or "qr"
};

That means the editor can distinguish OCR-generated regions from QR-generated regions and both from manual user edits.

Without that tag, every new scan risks wiping out the user's manual cleanup.

Replace only the detector's previous suggestions

Once objects carry a source tag, rerunning detection becomes much safer:

replaceAutoRedacts(regions: RedactRegion[], sourceTag: string) {
  // Drop only this detector's previous suggestions...
  this.clearAutoGenerated(sourceTag);
  // ...then insert the fresh ones, without stealing the user's selection.
  regions.forEach((region) =>
    this.addRedact(region, {
      autoGenerated: sourceTag,
      select: false,
    })
  );
}
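A minimal `clearAutoGenerated` can simply filter the object list by the tag from the earlier snippet. The store shape below is an assumption; only the `autoGenerated` field mirrors the article's code:

```typescript
// Minimal object carrying the source tag (shape is illustrative).
interface TaggedObject {
  id: string;
  autoGenerated?: string; // source tag, e.g. "ocr" or "qr"; absent = manual
}

class RedactStore {
  objects: TaggedObject[] = [];

  // Remove only the objects one specific detector created.
  // Manual objects (no tag) and other detectors' objects survive.
  clearAutoGenerated(sourceTag: string) {
    this.objects = this.objects.filter(
      (o) => o.autoGenerated !== sourceTag
    );
  }
}
```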

That gives you a clean behavior model:

  • detector reruns replace only their own old suggestions
  • manual edits stay intact
  • the user keeps control of the final reviewed state

This is one of those small code decisions that has huge product consequences.

Export still happens after review

The overlay architecture also keeps the export path honest.

Instead of baking detection results in immediately, the editor rebuilds the final image from the source plus the current object set. So the saved file always reflects the reviewed state, not the detector's first guess.
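A toy version of that export step, assuming a grid-of-pixels image and simple black-fill redaction (both simplifications, not the editor's real rendering pipeline), looks like this:

```typescript
// Illustrative export: always re-render from the untouched source,
// applying only the objects the user has reviewed and kept.
type Image = number[][];

interface Patch {
  left: number;
  top: number;
  width: number;
  height: number;
}

function exportImage(source: Image, patches: Patch[]): Image {
  // Copy the source so the original pixels are never mutated.
  const out = source.map((row) => [...row]);
  for (const p of patches) {
    for (let y = p.top; y < p.top + p.height; y++) {
      for (let x = p.left; x < p.left + p.width; x++) {
        out[y][x] = 0; // fill the redacted region with black
      }
    }
  }
  return out;
}
```

Because the source is never mutated, deleting a bad suggestion and re-exporting always recovers the original pixels.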

That is exactly where privacy tooling should land. Detectors propose. Users decide. Export serializes the decision.

The practical lesson

If an automatic detector can be wrong, it should create editable overlays rather than an irreversible export.

That rule works for OCR, QR codes, signatures, license plates, and basically every other privacy-sensitive suggestion pipeline I have seen.

It is also the point where an auto-detection feature stops feeling like a demo and starts feeling like a tool people can trust.

More implementation details:

https://happyimg.com/guides/how-reviewable-redaction-overlays-work-before-export
