DEV Community

TildAlice

Posted on • Originally published at tildalice.io

OpenCV to Albumentations: 3x Faster Augmentation Pipeline

Why Your OpenCV Augmentation Loop Is Probably Too Slow

I've seen production pipelines where per-epoch augmentation takes longer than the training step itself. The culprit? Hand-rolled OpenCV transforms applied one at a time in a Python for-loop.

OpenCV is great for reading images and basic preprocessing. But when you're stacking 8+ augmentations per image across 50,000 training samples, those sequential cv2.rotate(), cv2.GaussianBlur(), and manual brightness adjustments compound into a bottleneck. Albumentations solves this by batching transforms into a single optimized pipeline with minimal memory copies.

Here's what I mean. A typical OpenCV augmentation setup looks like this:


```python
import cv2
import numpy as np
import random

def augment_opencv(image):
    # Horizontal flip
    if random.random() > 0.5:
        image = cv2.flip(image, 1)

    # Rotation
    angle = random.uniform(-15, 15)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))

    # Brightness adjustment
    brightness_factor = random.uniform(0.8, 1.2)
    image = np.clip(image * brightness_factor, 0, 255).astype(np.uint8)

    # Gaussian blur
    if random.random() > 0.7:
        image = cv2.GaussianBlur(image, (5, 5), 0)

    # Hue shift (requires HSV conversion; widen the dtype so a
    # negative offset can't overflow uint8 before the modulo)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv[:, :, 0] = (hsv[:, :, 0] + random.randint(-10, 10)) % 180
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    return image
```
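For comparison, here is a minimal sketch of the same pipeline expressed declaratively in Albumentations. The probabilities and limits below are my approximation of the loop above, not a guaranteed drop-in mapping; check the exact parameter semantics against your Albumentations version.

```python
import albumentations as A
import numpy as np

# Each transform mirrors one step of the OpenCV loop:
# flip at p=0.5, rotation in [-15, 15], brightness in [0.8, 1.2],
# blur at p=0.3 (the loop fires on random() > 0.7), hue shift of +/-10.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=15, p=1.0),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.0, p=1.0),
    A.GaussianBlur(blur_limit=(5, 5), p=0.3),
    A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=0,
                         val_shift_limit=0, p=1.0),
])

# Albumentations takes and returns a dict keyed by target type,
# so masks and bounding boxes can ride along in the same call.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
augmented = transform(image=image)["image"]
```

One Compose call replaces the whole function body, and the same `transform` object can later accept `mask=` or `bboxes=` targets without extra bookkeeping.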
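Whether you see a 3x speedup depends on your image sizes and transform mix, so it's worth measuring on your own data. A minimal timing harness could look like this; the flip lambda is just a stand-in, and `augment_fn` is whatever callable you want to profile:

```python
import time
import numpy as np

def time_augment(augment_fn, image, n_iters=200):
    """Return mean seconds per call for an augmentation callable."""
    # Warm-up call so one-time setup cost doesn't skew the mean
    augment_fn(image)
    start = time.perf_counter()
    for _ in range(n_iters):
        augment_fn(image)
    return (time.perf_counter() - start) / n_iters

# Stand-in augmentation: swap in augment_opencv, or
# lambda img: transform(image=img)["image"], for a real comparison.
image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
mean_s = time_augment(lambda img: img[:, ::-1].copy(), image)
print(f"{mean_s * 1e3:.3f} ms per image")
```

Run it once with the OpenCV function and once with the Albumentations pipeline on the same image to get a like-for-like number.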

---

*Continue reading the full article on [TildAlice](https://tildalice.io/opencv-to-albumentations-augmentation-pipeline-migration/)*