Willem Janssen
Efficient Video Processing Pipelines with FFmpeg and C#

Video conversion at scale can quickly turn into a performance bottleneck — especially when dealing with multiple input formats, varying resolutions, and long transcoding times.
In this post, we’ll explore how to build an efficient, developer-friendly video processing pipeline in .NET using FFmpeg, focusing on asynchronous execution, parallelism, and resource control.

Why You Need a Pipeline, Not Just a Script

Running FFmpeg manually or invoking it with a single command is fine for one-off conversions.
But when you need to process dozens or hundreds of files — or integrate conversions into a web service — you’ll want a pipeline that can:

  • Queue jobs dynamically
  • Run conversions in parallel (but safely)
  • Log progress and handle errors gracefully
  • Optimize CPU and GPU usage

Let’s see how to design such a system in .NET 8.

The Core: Executing FFmpeg from C#

You can run FFmpeg commands directly using System.Diagnostics.Process, which gives you full control over input/output streams.

using System.Diagnostics;

public async Task<int> RunFFmpegAsync(string input, string output)
{
    var startInfo = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = $"-i \"{input}\" -c:v libx264 -preset fast -crf 22 \"{output}\"",
        RedirectStandardError = true,
        RedirectStandardOutput = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using var process = new Process { StartInfo = startInfo };
    process.Start();

    // Drain stdout/stderr so the pipe buffers don't fill up and deadlock FFmpeg
    var stderrTask = process.StandardError.ReadToEndAsync();
    var stdoutTask = process.StandardOutput.ReadToEndAsync();

    await process.WaitForExitAsync();
    await Task.WhenAll(stderrTask, stdoutTask);
    return process.ExitCode;
}

That’s the simplest foundation.
But we can do much better.

Step 1: Create a Job Queue

A simple job queue helps manage concurrent video conversions without overwhelming system resources.

using System.Threading.Channels;

public class VideoJob
{
    public string InputPath { get; set; } = "";
    public string OutputPath { get; set; } = "";
}

public class VideoPipeline
{
    private readonly Channel<VideoJob> _channel = Channel.CreateUnbounded<VideoJob>();

    public async Task EnqueueAsync(VideoJob job)
        => await _channel.Writer.WriteAsync(job);

    // Signal that no more jobs will arrive, letting the workers drain and exit
    public void Complete() => _channel.Writer.Complete();

    public async Task StartProcessingAsync(int maxParallel = 2)
    {
        var tasks = Enumerable.Range(0, maxParallel).Select(_ => ProcessJobsAsync());
        await Task.WhenAll(tasks);
    }

    private async Task ProcessJobsAsync()
    {
        await foreach (var job in _channel.Reader.ReadAllAsync())
        {
            Console.WriteLine($"Processing {job.InputPath}");
            // RunFFmpegAsync is the method from the previous section
            await RunFFmpegAsync(job.InputPath, job.OutputPath);
        }
    }
}

Now you can queue jobs safely and let them process concurrently with controlled parallelism.
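As a quick usage sketch (the file names here are placeholders): enqueue a few jobs, then start the workers. Note that `StartProcessingAsync` only completes once the channel's writer is completed, so in a long-running service you would typically run it as a background task.

```csharp
var pipeline = new VideoPipeline();

await pipeline.EnqueueAsync(new VideoJob { InputPath = "a.mov", OutputPath = "a.mp4" });
await pipeline.EnqueueAsync(new VideoJob { InputPath = "b.avi", OutputPath = "b.mp4" });

// Two workers drain the channel; they keep reading until the writer is completed.
await pipeline.StartProcessingAsync(maxParallel: 2);
```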

Step 2: Add Progress and Logging

FFmpeg reports progress via stderr. You can parse this output to display or store progress updates.

// Subscribe before process.Start(), then call BeginErrorReadLine()
process.ErrorDataReceived += (s, e) =>
{
    if (!string.IsNullOrEmpty(e.Data) && e.Data.Contains("time="))
    {
        // Extract progress info
        Console.WriteLine(e.Data);
    }
};
process.BeginErrorReadLine();

You can expand this by adding structured logging (e.g., Serilog, Microsoft.Extensions.Logging, or NLog) and even send metrics to Application Insights if you’re running in Azure.
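To turn those raw stderr lines into a percentage, you can parse the `time=` field with a regex and compare it against the clip's total duration (which you would normally read up front with ffprobe). A minimal sketch; the sample line format is what current FFmpeg builds emit, but verify against your version:

```csharp
using System.Text.RegularExpressions;

static double? ParseProgressPercent(string line, TimeSpan totalDuration)
{
    // FFmpeg stderr lines look like: "frame= 240 fps=60 ... time=00:00:08.00 bitrate=..."
    var match = Regex.Match(line, @"time=(\d{2}):(\d{2}):(\d{2})\.\d{2}");
    if (!match.Success) return null;

    // Centiseconds are ignored; hours:minutes:seconds is enough for a progress bar
    var elapsed = new TimeSpan(
        int.Parse(match.Groups[1].Value),
        int.Parse(match.Groups[2].Value),
        int.Parse(match.Groups[3].Value));

    return Math.Min(100.0, elapsed.TotalSeconds / totalDuration.TotalSeconds * 100.0);
}
```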

Step 3: Handle Hardware Acceleration (Optional)

If you’re processing large batches, using GPU acceleration can drastically reduce encoding time.

For example, with NVIDIA NVENC:

ffmpeg -hwaccel cuda -i input.mp4 -c:v h264_nvenc -preset fast output.mp4

Or Intel QuickSync:

ffmpeg -hwaccel qsv -i input.mp4 -c:v h264_qsv -preset medium output.mp4

You can expose these as configurable parameters in your C# app, enabling your pipeline to dynamically select CPU or GPU encoding depending on hardware availability.
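One way to expose that choice in C# is to build the argument string from a small enum. A sketch only: the enum name and the exact argument strings are assumptions you would tune for your hardware, not a fixed API.

```csharp
public enum VideoEncoder { Cpu, Nvenc, QuickSync }

static string BuildArguments(string input, string output, VideoEncoder encoder) =>
    encoder switch
    {
        // NVIDIA NVENC: decode and encode on the GPU
        VideoEncoder.Nvenc     => $"-hwaccel cuda -i \"{input}\" -c:v h264_nvenc -preset fast \"{output}\"",
        // Intel QuickSync
        VideoEncoder.QuickSync => $"-hwaccel qsv -i \"{input}\" -c:v h264_qsv -preset medium \"{output}\"",
        // CPU fallback (libx264)
        _                      => $"-i \"{input}\" -c:v libx264 -preset fast -crf 22 \"{output}\""
    };
```

The returned string drops straight into the `Arguments` property of the `ProcessStartInfo` shown earlier.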

Step 4: Scaling the Pipeline

For large-scale workloads, you can:

  • Deploy your service on Azure Container Apps or Kubernetes
  • Use Azure Queue Storage or RabbitMQ to handle distributed job scheduling
  • Persist logs and statuses to SQL Server or Cosmos DB
  • Integrate Azure Functions to trigger conversions automatically on file upload

This approach transforms your local script into a scalable, cloud-ready video processing service.
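As one concrete example of the last bullet, a blob-triggered Azure Function (isolated worker model) could enqueue a job whenever a file lands in an uploads container. This is a sketch: the container name, connection setting, mount paths, and the DI-registered `VideoPipeline` are all assumptions.

```csharp
public class ConvertOnUpload
{
    private readonly VideoPipeline _pipeline;

    public ConvertOnUpload(VideoPipeline pipeline) => _pipeline = pipeline;

    [Function("ConvertOnUpload")]
    public async Task Run(
        [BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")] byte[] _,
        string name)
    {
        // Hand the uploaded file to the pipeline; paths assume a mounted file share
        await _pipeline.EnqueueAsync(new VideoJob
        {
            InputPath  = $"/mnt/uploads/{name}",
            OutputPath = $"/mnt/converted/{Path.ChangeExtension(name, ".mp4")}"
        });
    }
}
```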

Bonus: Use FFmpeg Wrappers for .NET

While you can call FFmpeg directly, libraries like FFMpegCore simplify integration:

await FFMpegArguments
    .FromFileInput("input.mp4")
    .OutputToFile("output.mp4", true, options => options
        .WithVideoCodec("libx264")
        .WithConstantRateFactor(23)
        .WithSpeedPreset(Speed.UltraFast))
    .ProcessAsynchronously();

It’s great for prototyping or when you don’t need ultra-fine control over the CLI.
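FFMpegCore also ships an ffprobe wrapper, which is handy for reading a clip's duration or codec before you transcode it. A short sketch using its `FFProbe.AnalyseAsync` API ("input.mp4" is a placeholder):

```csharp
var mediaInfo = await FFProbe.AnalyseAsync("input.mp4");

Console.WriteLine($"Duration: {mediaInfo.Duration}");
Console.WriteLine($"Codec:    {mediaInfo.PrimaryVideoStream?.CodecName}");
```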

Conclusion

By building a structured pipeline instead of running single FFmpeg commands, you’ll achieve:

  • Better performance and stability
  • Easier scaling and monitoring
  • Safer parallel execution
  • Cleaner code that integrates with modern .NET apps

Whether you’re developing a desktop video converter or a backend media service, these techniques make FFmpeg automation both robust and developer-friendly.

Author: Willem Janssen
Technical specialist in video conversion and creator of DVDConverter.app, a lightweight tool for digitizing your DVD collection.
