Some platforms need to blur NSFW content instead of removing it — dating apps, art communities, and news sites blur flagged images and let users opt in. Here's how to detect and blur in Python.
## What You'll Build
A Python script that:
- Sends an image to the NSFW detection API
- Checks if any label exceeds your confidence threshold
- Applies a Gaussian blur if flagged
- Saves the blurred version
## Prerequisites

```bash
pip install requests Pillow
```
## Complete Script
```python
import requests
from PIL import Image, ImageFilter
from io import BytesIO
from pathlib import Path

NSFW_API_URL = "https://nsfw-detect3.p.rapidapi.com/nsfw-detect"
HEADERS = {
    "x-rapidapi-host": "nsfw-detect3.p.rapidapi.com",
    "x-rapidapi-key": "YOUR_API_KEY",
    "Content-Type": "application/x-www-form-urlencoded",
}

BLUR_CATEGORIES = {"Explicit Nudity", "Suggestive", "Violence", "Visually Disturbing"}
CONFIDENCE_THRESHOLD = 75
BLUR_RADIUS = 40


def moderate_and_blur(image_url: str, output_dir: str = ".") -> dict:
    # Step 1: Detect
    response = requests.post(
        NSFW_API_URL, headers=HEADERS, data={"url": image_url}, timeout=30
    )
    response.raise_for_status()
    labels = response.json()["body"]["ModerationLabels"]
    flagged = [
        l for l in labels
        if l["Name"] in BLUR_CATEGORIES and l["Confidence"] > CONFIDENCE_THRESHOLD
    ]
    if not flagged:
        return {"action": "safe", "path": None}

    # Step 2: Download and blur
    image_response = requests.get(image_url, timeout=30)
    image_response.raise_for_status()
    # Convert to RGB so images with an alpha channel (e.g. PNG) can be saved as JPEG
    img = Image.open(BytesIO(image_response.content)).convert("RGB")
    blurred = img.filter(ImageFilter.GaussianBlur(radius=BLUR_RADIUS))

    output_path = Path(output_dir) / "blurred_output.jpg"
    blurred.save(output_path, "JPEG", quality=85)

    return {
        "action": "blurred",
        "labels": [f"{l['Name']} ({l['Confidence']:.0f}%)" for l in flagged],
        "path": str(output_path),
    }


result = moderate_and_blur("https://example.com/user-upload.jpg")
if result["action"] == "blurred":
    print(f"Blurred. Flagged: {', '.join(result['labels'])}")
else:
    print("Safe — no blur needed")
```
## Use Cases
- **Dating apps**: blur explicit photos by default and let users reveal them
- **Content feeds**: Reddit-style NSFW tags with a blur overlay
- **News platforms**: auto-blur graphic content behind content warnings
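The "blur by default, let users reveal" pattern reduces to choosing between two stored paths at serve time. Here's a minimal sketch, assuming you keep both the original and (for flagged images) the blurred copy from upload time; `select_image` and the shape of the `paths` dict are illustrative, not part of the API:

```python
def select_image(paths: dict, user_opted_in: bool) -> str:
    """Serve the blurred version by default; the original only on opt-in.

    `paths` maps "original" to the stored upload and, when the image was
    flagged at upload time, "blurred" to the pre-generated blurred copy.
    """
    if paths.get("blurred") and not user_opted_in:
        return paths["blurred"]
    return paths["original"]
```

A safe image simply has no `"blurred"` entry, so every viewer gets the original; flagged images stay blurred until the user opts in.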
## Tips
- **Threshold**: 75% balances false positives against false negatives. Lower it for children's platforms; raise it for art communities.
- **Blur radius**: try 50 for Explicit Nudity and 20 for Suggestive, matching blur intensity to severity.
- **Performance**: generate the blurred version once at upload time and cache both versions.
- **Pipeline**: run moderation asynchronously so it never blocks the upload response.
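The per-severity radii above can be wrapped in a small helper that picks the strongest blur among the flagged categories. A sketch, assuming the label names from the API response; `RADIUS_BY_LABEL` and the specific values are tuning assumptions, not API output:

```python
# Hypothetical severity map — tune these per platform.
RADIUS_BY_LABEL = {
    "Explicit Nudity": 50,
    "Visually Disturbing": 50,
    "Violence": 35,
    "Suggestive": 20,
}


def pick_blur_radius(flagged_labels: list[str], default: int = 40) -> int:
    # Use the heaviest blur among the flagged categories; fall back to the
    # default for an empty list or for labels not in the map.
    return max(
        (RADIUS_BY_LABEL.get(name, default) for name in flagged_labels),
        default=default,
    )
```

Pass the result as `ImageFilter.GaussianBlur(radius=...)` instead of the fixed `BLUR_RADIUS` constant.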