Wanda

Posted on • Originally published at apidog.com

How to upscale and enhance video quality with FFmpeg: scaling, denoising, stabilization

TL;DR

FFmpeg upscaling: use -vf "scale=1920:1080:flags=lanczos" for best quality (Lanczos algorithm). For denoising, apply hqdn3d to reduce grain and preserve edges. Video stabilization is handled via vidstab with a two-pass workflow. Combine these filters into one command for a robust video enhancement pipeline.


Introduction

Enhancing video quality in FFmpeg is more than just resizing. For best results, combine upscaling, denoising, and stabilization. Each step targets a specific issue: blurry or pixelated frames, unwanted noise/grain, and camera shake. This guide gives you step-by-step commands for each technique and shows how to chain them for optimal output.

Scaling Algorithms

The scaling algorithm you select in FFmpeg directly impacts the quality of your upscaled video.

Algorithm   Speed     Quality   Best for
neighbor    Fastest   Lowest    Pixel art
bilinear    Fast      Low       Speed-critical jobs
bicubic     Medium    Good      Downscaling
lanczos     Slower    Best      Upscaling
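To see the difference empirically rather than by eye, you can round-trip a clip through a lower resolution and measure each upscaler against the original with FFmpeg's ssim filter. A sketch using the built-in testsrc2 source as stand-in footage (substitute your own files):

```shell
# Generate a synthetic 720p reference, downscale it to 360p,
# then upscale back with two different algorithms.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=1280x720:rate=30:duration=1 ref.mp4
ffmpeg -y -loglevel error -i ref.mp4 -vf "scale=640:360:flags=bicubic" small.mp4
ffmpeg -y -loglevel error -i small.mp4 -vf "scale=1280:720:flags=lanczos" up_lanczos.mp4
ffmpeg -y -loglevel error -i small.mp4 -vf "scale=1280:720:flags=bilinear" up_bilinear.mp4

# SSIM vs. the reference: a higher "All:" score means a closer match
ffmpeg -i up_lanczos.mp4 -i ref.mp4 -lavfi ssim -f null - 2>&1 | grep "SSIM"
ffmpeg -i up_bilinear.mp4 -i ref.mp4 -lavfi ssim -f null - 2>&1 | grep "SSIM"
```

On most content the Lanczos upscale scores measurably higher than bilinear.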

Upscale to 1080p with Lanczos:

ffmpeg -i input_720p.mp4 -vf "scale=1920:1080:flags=lanczos" -c:v libx264 -crf 20 output_1080p.mp4

Maintain aspect ratio:

ffmpeg -i input.mp4 -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 20 output.mp4

-2 auto-calculates height to preserve aspect ratio and ensures the value is divisible by 2.
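To confirm what -2 resolved to, you can inspect the result with ffprobe. A sketch, using a generated 854x480 clip in place of real input:

```shell
# Generate an 854x480 stand-in, upscale to width 1920 with auto height,
# then print the resulting dimensions
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=854x480:rate=30:duration=1 input.mp4
ffmpeg -y -loglevel error -i input.mp4 \
  -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 20 output.mp4
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height -of csv=p=0 output.mp4
```

The ffprobe line prints the final width and height, so you can verify the aspect ratio survived and the height is even.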

Scale to 4K:

ffmpeg -i input.mp4 -vf "scale=3840:-2:flags=lanczos" -c:v libx264 -crf 18 -preset slow output_4k.mp4

Use -preset slow for better compression at high resolutions.

Denoising with hqdn3d

Use the hqdn3d filter to remove grain/noise while retaining edge sharpness.

Standard denoising:

ffmpeg -i noisy_video.mp4 -vf "hqdn3d=4:3:6:4.5" -c:v libx264 -crf 20 denoised.mp4

Parameters: luma_spatial:chroma_spatial:luma_temporal:chroma_temporal

  • luma_spatial (0-16): spatial denoise for brightness (default 4)
  • chroma_spatial (0-16): color channel spatial denoise (default 3)
  • luma_temporal (0-16): temporal denoise for brightness (default 6)
  • chroma_temporal (0-16): temporal denoise for color (default 4.5)
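To judge a setting before committing to a full encode, you can render a split-screen comparison (original on the left, denoised on the right) in a single pass. A sketch using the split, hqdn3d, and hstack filters on a synthetic grainy clip (replace it with your own footage):

```shell
# Synthetic grainy input for illustration: test pattern plus temporal noise
ffmpeg -y -loglevel error -f lavfi \
  -i "testsrc2=size=640x360:rate=30:duration=1,noise=alls=20:allf=t" noisy.mp4

# Left half: original. Right half: denoised with the standard strengths.
ffmpeg -y -loglevel error -i noisy.mp4 \
  -vf "split[a][b];[b]hqdn3d=4:3:6:4.5[dn];[a][dn]hstack" \
  -c:v libx264 -crf 20 compare.mp4
```

Scrub through compare.mp4 and check edges and fine textures on the denoised side before running the full encode.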

Stronger denoising:

ffmpeg -i grainy.mp4 -vf "hqdn3d=10:8:15:10" -c:v libx264 -crf 20 clean.mp4

Higher values = more noise removed, but potential blurring. Test settings on a short clip first.
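One way to iterate quickly is to cut a short sample with -ss/-t and tune the hqdn3d strengths on that. A sketch (a generated clip stands in for your footage):

```shell
# Stand-in footage; point this at your real file instead
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=640x360:rate=30:duration=6 full_video.mp4

# Cut 2 seconds starting at 0:03 and try the aggressive settings on it
ffmpeg -y -loglevel error -ss 3 -t 2 -i full_video.mp4 \
  -vf "hqdn3d=10:8:15:10" -c:v libx264 -crf 20 sample.mp4
```

Once the sample looks right, rerun the same filter string on the full-length input.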

Light denoising (preserve more detail):

ffmpeg -i video.mp4 -vf "hqdn3d=2:1.5:3:2.5" -c:v libx264 -crf 20 output.mp4

Stabilization with vidstab

Stabilization in FFmpeg uses the vidstab filter in two passes.

Check for vidstab support:

ffmpeg -filters | grep vidstab

On macOS, brew install ffmpeg includes vidstab.

Pass 1: Motion Analysis

ffmpeg -i shaky_video.mp4 -vf "vidstabdetect=stepsize=6:shakiness=8:accuracy=9:result=transform.trf" -f null -
  • shakiness=8 (1-10): use higher values for shakier footage
  • accuracy=9 (1-15): higher = better detection
  • Outputs transform.trf (discard video output with -f null -)

Pass 2: Apply Stabilization

ffmpeg -i shaky_video.mp4 -vf "vidstabtransform=input=transform.trf:zoom=1:smoothing=10" -c:v libx264 -crf 20 stabilized.mp4
  • zoom=1 (1% zoom to hide borders)
  • smoothing=10 (higher = smoother camera path)

More aggressive stabilization:

ffmpeg -i video.mp4 -vf "vidstabtransform=input=transform.trf:zoom=3:smoothing=30:optzoom=1" -c:v libx264 -crf 20 stable.mp4
  • optzoom=1 (the default) lets FFmpeg auto-calculate the static zoom needed to hide borders

Combined Quality Enhancement Pipeline

Chain stabilization, denoising, and scaling for the best results. Run the vidstabdetect pass first to produce transform.trf, then apply everything in one encode:

ffmpeg -i source.mp4 \
  -vf "vidstabtransform=input=transform.trf:zoom=1:smoothing=10,hqdn3d=4:3:6:4.5,scale=1920:-2:flags=lanczos" \
  -c:v libx264 -crf 18 -preset slow \
  -c:a copy \
  enhanced.mp4

Order: stabilize → denoise → scale. vidstabtransform must run at the resolution the transforms were detected at, so it comes before scale; denoising before upscaling prevents noise from being enlarged along with the image.
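Wrapped as a script, the full workflow (motion detection, then one combined encode) might look like this. This is a sketch with placeholder filenames: a synthetic clip stands in for real footage, and the vidstab steps are guarded so the script degrades gracefully on FFmpeg builds without libvidstab:

```shell
#!/bin/sh
# Full enhancement workflow sketch: detect camera motion, then
# stabilize + denoise + upscale in a single encode.
# testsrc2 generates stand-in footage; substitute your own source.mp4.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=640x360:rate=30:duration=2 source.mp4

if ffmpeg -hide_banner -filters 2>/dev/null | grep -q vidstabdetect; then
  # Pass 1: analyze motion, write transform.trf, discard the video output
  ffmpeg -y -loglevel error -i source.mp4 \
    -vf "vidstabdetect=shakiness=8:accuracy=9:result=transform.trf" -f null -
  # Pass 2: stabilize at source resolution, then denoise and upscale
  ffmpeg -y -loglevel error -i source.mp4 \
    -vf "vidstabtransform=input=transform.trf:zoom=1:smoothing=10,hqdn3d=4:3:6:4.5,scale=1280:-2:flags=lanczos" \
    -c:v libx264 -crf 18 -preset slow -c:a copy enhanced.mp4
else
  echo "this FFmpeg build lacks libvidstab; skipping stabilization"
fi
echo "pipeline finished"
```

The guard mirrors the support check shown earlier (ffmpeg -filters | grep vidstab), so the same script runs on any build.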

Sharpening Filter

If the video is soft (not noisy), use unsharp to enhance detail.

Moderate sharpening:

ffmpeg -i video.mp4 -vf "unsharp=5:5:1.5:5:5:0.5" -c:v libx264 -crf 20 sharpened.mp4

Parameters: lx:ly:la:cx:cy:ca

  • lx:ly: luma matrix size
  • la: luma amount (positive = sharpen, negative = blur)
  • cx:cy:ca: chroma equivalents

Light sharpening: unsharp=3:3:0.5:3:3:0.0

Strong sharpening: unsharp=5:5:2.5:5:5:0.0
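Sharpening combines naturally with upscaling, since interpolation softens the image; a mild luma-only unsharp pass after scale can restore perceived detail. A sketch on a generated clip (replace it with your own input):

```shell
# Stand-in footage
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=640x360:rate=30:duration=1 soft.mp4

# Upscale, then mild luma-only sharpening to offset interpolation softness
ffmpeg -y -loglevel error -i soft.mp4 \
  -vf "scale=1280:-2:flags=lanczos,unsharp=5:5:0.8:5:5:0.0" \
  -c:v libx264 -crf 20 sharp_upscaled.mp4
```

Keeping the chroma amount at 0.0 avoids amplifying color noise, which sharpening tends to exaggerate.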

Performance Considerations

Enhancement filters are CPU-intensive. Approximate times for a 10-minute 1080p video:

  • Scale only: 2-5 minutes
  • Scale + hqdn3d: 5-10 minutes
  • Scale + hqdn3d + vidstab: 15-25 minutes

Use -preset in x264 to balance speed/size:

  • ultrafast: fastest, largest files
  • fast: moderate
  • slow: smallest files
  • veryslow: rarely worth it over slow
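To see the trade-off concretely, encode the same clip at the same CRF with several presets and compare file sizes. A quick sketch on a generated clip:

```shell
# Same clip, same CRF, different presets: only encode time and size change
ffmpeg -y -loglevel error -f lavfi -i testsrc2=size=640x360:rate=30:duration=2 clip.mp4
for p in ultrafast fast slow; do
  ffmpeg -y -loglevel error -i clip.mp4 -c:v libx264 -crf 20 -preset "$p" "out_$p.mp4"
  echo "$p: $(wc -c < "out_$p.mp4") bytes"
done
```

At a fixed CRF, slower presets trade encode time for smaller files at roughly the same visual quality.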

Batch processing with GNU parallel:

ls *.mp4 | parallel ffmpeg -i {} -vf "scale=1920:-2:flags=lanczos" -c:v libx264 -crf 20 enhanced_{/}

Connecting to AI Video Upscaling APIs

For best results on low-quality or damaged footage, use AI upscaling APIs in addition to FFmpeg.

Example: WaveSpeedAI API

POST https://api.wavespeed.ai/api/v2/wavespeed-ai/video-enhance
Authorization: Bearer {{WAVESPEED_API_KEY}}
Content-Type: application/json

{
  "video_url": "https://storage.example.com/source-video.mp4",
  "scale": 2,
  "enhance": true
}

Test with Apidog before integrating:

  • Assert response status is 200
  • Assert response body has field id

Example assertions:

Status code is 200
Response body has field id

Poll the status endpoint and compare the AI-upscaled video to FFmpeg's Lanczos result. AI often produces more natural textures and detail, especially on severely degraded footage. Use FFmpeg for standard tasks, and AI APIs when you need maximum visual improvement.

FAQ

Is Lanczos always better than bicubic?

Lanczos is best for upscaling. For downscaling, bicubic is faster with similar quality. Lanczos is more resource-intensive.

Does vidstab work for phone footage?

Yes—set shakiness=8-10 for handheld video.

How much zoom to hide stabilization borders?

Typically 3–8%. Use optzoom=1 for auto-calculation.

Can FFmpeg enhance low-res historical footage?

FFmpeg improves quality to a degree. For severely degraded footage, AI-based upscaling (ESRGAN, APIs) gives better results.

Does denoising slow playback?

No. Denoising is only applied during conversion; the output video plays normally.
