Blurring a face is easy if you only care about a static demo.
It gets more interesting when the user can redetect faces, expand padding, move patches, resize them, disable individual faces, change blur strength, and then export the final image without everything drifting out of alignment.
The architecture that held up best for us was patch-based.
The full companion guide is here:
https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export
First, build one blurred source image
Instead of blurring each patch independently, the editor creates a blurred version of the entire source image first:
```typescript
ctx.filter = `blur(${blurStrength}px)`;
ctx.drawImage(image, 0, 0, canvas.width, canvas.height);
ctx.filter = "none";
```
That gives the editor one reusable blur source for every detected face.
The advantages are practical:
- blur strength changes rebuild one source image
- existing patches can keep their geometry
- face interactions stay fast
Every face is a cropped image patch
Each detected face becomes a FabricImage patch that points into the blurred source image:
```typescript
const patch = new FabricImage(this.blurredSourceElement!, {
  left: region.left + region.width / 2,  // left/top here are the region's center point
  top: region.top + region.height / 2,
  width: region.width,
  height: region.height,
  cropX: region.left,
  cropY: region.top,
});
```
That is the key design choice.
The editor is not blurring arbitrary rectangles on demand. It is showing cropped windows into a precomputed blurred image.
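One way to state the invariant behind that choice as a check (a sketch with a hypothetical `isAligned` helper; it assumes center-origin patches, matching the `left: region.left + region.width / 2` math above):

```typescript
// A patch is "honest" when the crop window into the blurred source
// matches the patch's own rectangle on the canvas.
function isAligned(p: {
  left: number;   // center x
  top: number;    // center y
  width: number;
  height: number;
  cropX: number;
  cropY: number;
}): boolean {
  const eps = 0.5; // sub-pixel tolerance
  return (
    Math.abs(p.cropX - (p.left - p.width / 2)) < eps &&
    Math.abs(p.cropY - (p.top - p.height / 2)) < eps
  );
}
```

Every interaction in the rest of this post is ultimately about preserving that equality.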
Geometry and crop source have to move together
If the user drags or resizes a blur patch, the patch cannot change only its visible rectangle. The crop window inside the blurred source has to stay aligned too.
That is why the implementation normalizes the patch back into real geometry and updates cropX and cropY alongside position and size:
```typescript
patch.set({
  left: geometry.left + geometry.width / 2,
  top: geometry.top + geometry.height / 2,
  width: geometry.width,
  height: geometry.height,
  cropX: geometry.left,
  cropY: geometry.top,
  scaleX: 1,
  scaleY: 1,
});
```
That reset is important. It turns temporary drag/scale transforms back into stable source-image coordinates.
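The normalization itself is pure math and can be sketched independently of Fabric. This version assumes center-origin patches and per-axis scale factors; `PatchState` and `normalizeGeometry` are hypothetical names, not the editor's actual API:

```typescript
interface PatchState {
  left: number;   // center x (assumes originX: "center")
  top: number;    // center y (assumes originY: "center")
  width: number;
  height: number;
  scaleX: number; // temporary transform from the user's resize gesture
  scaleY: number;
}

// Fold the temporary scale transform into real width/height, then derive
// the top-left corner so cropX/cropY line up with the blurred source again.
function normalizeGeometry(p: PatchState) {
  const width = p.width * p.scaleX;
  const height = p.height * p.scaleY;
  return {
    left: p.left,            // the center stays where the user put it
    top: p.top,
    width,
    height,
    cropX: p.left - width / 2,
    cropY: p.top - height / 2,
    scaleX: 1,               // scale is now baked into width/height
    scaleY: 1,
  };
}
```

After this runs, the patch's stored geometry and its crop window agree again, which is exactly what the `patch.set` call above commits back to the object.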
Padding matters before editing starts
Face detections are usually too tight on their own, so the code expands each detection by a configurable percentage before converting it into a patch.
That gives users a better starting point and reduces the number of patches that need manual resizing just to cover the edges of a face properly.
It is a small step, but it makes the blur tool feel much less fragile.
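As a sketch, the padding step can be a small pure function. `expandRegion` and its parameters are hypothetical names; the clamping against the image bounds is an assumption about how an implementation would keep padded regions valid:

```typescript
interface Region { left: number; top: number; width: number; height: number; }

// Expand a detection rectangle by `padPct` (e.g. 0.2 = 20% of each dimension)
// on every side, clamped so the padded region never leaves the source image.
function expandRegion(r: Region, padPct: number, imgW: number, imgH: number): Region {
  const padX = r.width * padPct;
  const padY = r.height * padPct;
  const left = Math.max(0, r.left - padX);
  const top = Math.max(0, r.top - padY);
  const right = Math.min(imgW, r.left + r.width + padX);
  const bottom = Math.min(imgH, r.top + r.height + padY);
  return { left, top, width: right - left, height: bottom - top };
}
```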
Export should replay the current patch set
When the user exports, the editor builds a fresh StaticCanvas at original size, adds the untouched base image, then re-adds each visible blur patch with its current geometry and crop source.
That means the saved file reflects:
- current blur strength
- current patch positions
- current patch sizes
- current enabled or disabled state
Nothing depends on the on-screen viewport.
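If the on-screen canvas is a scaled-down view of the original, replaying a patch at export time is mostly a coordinate change. A minimal sketch, assuming display-space geometry and a single uniform `displayScale` factor (both hypothetical names, not the editor's actual API):

```typescript
interface Geometry {
  left: number; top: number;
  width: number; height: number;
  cropX: number; cropY: number;
}

// Map a patch's display-space geometry back to original-image pixels
// before re-adding it to the full-size export canvas.
// displayScale = displayWidth / originalWidth (assumed uniform on both axes).
function toExportGeometry(g: Geometry, displayScale: number): Geometry {
  const s = 1 / displayScale;
  return {
    left: g.left * s,
    top: g.top * s,
    width: g.width * s,
    height: g.height * s,
    cropX: g.cropX * s,
    cropY: g.cropY * s,
  };
}
```

Because every field scales by the same factor, the crop-window invariant survives the trip from screen space to export space.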
Why patch-based blur works
This model stays understandable under real editing pressure:
- one blur source
- many editable crop windows
- normalized geometry after interaction
- export rebuilt from source pixels
That is what keeps blur regions aligned even after several rounds of detection, adjustment, and export.
More implementation details:
https://happyimg.com/guides/how-face-blur-patches-stay-aligned-during-export