Javid Jamae

Posted on • Originally published at ffmpeg-micro.com

How to Use FFmpeg in Python Without Installing It


You've written a Python script that processes video. It works on your laptop. You deploy it to Lambda (or Render, or Vercel) and get the dreaded error: FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'.

The problem is simple: FFmpeg is a system binary, and most cloud platforms don't ship it. The typical fix involves Docker layers, custom build packs, or static binaries crammed into a Lambda layer. It works, but it's fragile and annoying to maintain.

There's a better approach. Instead of installing FFmpeg on every platform, call it as an API. Python's requests library is all you need.

The Subprocess Approach (And Why It Breaks)

Most Python FFmpeg tutorials show something like this:

import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264", "-crf", "23",
    "output.mp4"
])

Locally, this is fine. But it assumes FFmpeg is installed at the OS level. On AWS Lambda, Google Cloud Functions, Vercel serverless functions, or any container without FFmpeg pre-installed, this fails instantly.

The workarounds are all painful:

  • Lambda layers with a static FFmpeg binary (70MB+ zip, architecture-specific)
  • Docker images with apt-get install ffmpeg in the Dockerfile
  • Python wrappers like ffmpeg-python or imageio-ffmpeg that still need the binary underneath

Every one of these adds deployment complexity. And when FFmpeg updates or you switch platforms, you're rebuilding the whole thing.
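
One symptom of this fragility is that the only way to know whether the binary is actually present is to probe for it at runtime. A minimal check with the standard library's shutil.which shows what your code is really depending on:

```python
import shutil

# Check at runtime whether the FFmpeg binary is on PATH. On a platform
# like Lambda this check comes back empty, which is exactly the failure
# mode described above.
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    print("ffmpeg not found -- subprocess.run would raise FileNotFoundError")
else:
    print(f"ffmpeg found at {ffmpeg_path}")
```

Guards like this only tell you the deploy is broken; they don't fix it.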

Using FFmpeg as an API from Python

FFmpeg Micro is a cloud API that runs FFmpeg for you. One HTTP call, and your Python code can transcode, resize, or watermark video. No binary, no Docker, no Lambda layer.

The entire integration is just Python's requests library:

import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.ffmpeg-micro.com"

response = requests.post(
    f"{BASE_URL}/v1/transcodes",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": [{"url": "https://example.com/video.mp4"}],
        "outputFormat": "mp4",
        "preset": {"quality": "medium", "resolution": "720p"}
    }
)

job = response.json()
print(f"Job ID: {job['jobId']}")

That's it. No subprocess, no binary, no system dependency. This runs the same way on Lambda, Render, Vercel, your laptop, or anywhere Python runs.
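
As a sketch of what this looks like in a serverless context, here's a hypothetical AWS Lambda handler wrapping the same request. The videoUrl event field and the FFMPEG_MICRO_API_KEY environment variable are my own assumptions, not part of the API:

```python
import json
import os

import requests

BASE_URL = "https://api.ffmpeg-micro.com"

def lambda_handler(event, context):
    """Hypothetical Lambda handler: kicks off a transcode for a URL
    passed in the event payload. No FFmpeg binary or layer required."""
    response = requests.post(
        f"{BASE_URL}/v1/transcodes",
        headers={"Authorization": f"Bearer {os.environ['FFMPEG_MICRO_API_KEY']}"},
        json={
            "inputs": [{"url": event["videoUrl"]}],
            "outputFormat": "mp4",
            "preset": {"quality": "medium", "resolution": "720p"},
        },
        timeout=30,
    )
    response.raise_for_status()
    # Return the job info; a caller (or a second function) can poll it later.
    return {"statusCode": 202, "body": json.dumps(response.json())}
```

Because the heavy lifting happens on the API side, the function itself stays well under Lambda's deployment size limits.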

Checking Job Status and Downloading Results

Transcoding takes a few seconds to a few minutes depending on file size. Poll the job status:

import time

job_id = job["jobId"]

while True:
    status_response = requests.get(
        f"{BASE_URL}/v1/transcodes/{job_id}",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )
    status = status_response.json()

    if status["status"] == "completed":
        print("Done!")
        break
    elif status["status"] == "failed":
        print(f"Failed: {status.get('error')}")
        break

    time.sleep(2)
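
If you'd rather not loop inline, the polling above can be wrapped in a small helper with an overall timeout, so a stuck job can't hang the caller forever. This is a sketch of my own; the status values mirror the ones used above:

```python
import time

import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.ffmpeg-micro.com"

def wait_for_job(job_id, timeout=600, interval=2):
    """Poll the job until it completes, fails, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(
            f"{BASE_URL}/v1/transcodes/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        if status["status"] == "completed":
            return status
        if status["status"] == "failed":
            raise RuntimeError(f"Transcode failed: {status.get('error')}")
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

A longer interval (or exponential backoff) is kinder to rate limits if you're running many jobs at once.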

Once complete, grab the download URL:

download_response = requests.get(
    f"{BASE_URL}/v1/transcodes/{job_id}/download",
    headers={"Authorization": f"Bearer {API_KEY}"}
)

download_url = download_response.json()["url"]
print(f"Download: {download_url}")
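
The URL points at the finished file. One way to fetch it is to stream it to disk with requests, so large videos never sit fully in memory (download_file is a helper name of my choosing):

```python
import requests

def download_file(url, dest_path, chunk_size=1024 * 1024):
    """Stream a file from `url` to `dest_path` in 1MB chunks."""
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)

# Usage: download_file(download_url, "output.mp4")
```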

Advanced: Custom FFmpeg Options

The preset mode covers most cases, but you can pass raw FFmpeg options for full control:

response = requests.post(
    f"{BASE_URL}/v1/transcodes",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": [{"url": "https://example.com/video.mp4"}],
        "outputFormat": "webm",
        "options": [
            {"option": "-c:v", "argument": "libvpx-vp9"},
            {"option": "-crf", "argument": "30"},
            {"option": "-b:v", "argument": "0"},
            {"option": "-c:a", "argument": "libopus"}
        ]
    }
)

This converts MP4 to WebM using VP9 and Opus codecs. The same FFmpeg flags you'd use on the command line, just passed as structured data.
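
If you already have the flags as a flat CLI-style list, a small helper (my own, not part of the API) can reshape them into that structured format, assuming each flag takes exactly one argument:

```python
def cli_to_options(args):
    """Pair up a flat flag list like ["-c:v", "libvpx-vp9", ...] into
    the {"option": ..., "argument": ...} dicts shown above."""
    return [
        {"option": flag, "argument": value}
        for flag, value in zip(args[::2], args[1::2])
    ]

opts = cli_to_options(["-c:v", "libvpx-vp9", "-crf", "30", "-b:v", "0", "-c:a", "libopus"])
# opts[0] == {"option": "-c:v", "argument": "libvpx-vp9"}
```

Flags that take no argument (like -an) would need special handling; this sketch doesn't cover them.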

Uploading Files First

If your video isn't already hosted at a public URL, upload it through the API first. It's a three-step process:

import os

# Compute the real file size once; the presign and confirm steps both need it
file_size = os.path.getsize("my-video.mp4")

# Step 1: Get a presigned upload URL
presign = requests.post(
    f"{BASE_URL}/v1/upload/presigned-url",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "filename": "my-video.mp4",
        "contentType": "video/mp4",
        "fileSize": file_size
    }
).json()

# Step 2: Upload directly to cloud storage
with open("my-video.mp4", "rb") as f:
    requests.put(
        presign["uploadUrl"],
        headers={"Content-Type": "video/mp4"},
        data=f
    )

# Step 3: Confirm the upload
confirm = requests.post(
    f"{BASE_URL}/v1/upload/confirm",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "filename": presign["filename"],
        "fileSize": file_size
    }
).json()

# Now use the file URL for transcoding
file_url = confirm["fileUrl"]

The fileUrl from the confirm step is what you pass in the inputs array when creating a transcode job.
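
Putting the two halves together, here's a sketch of a helper (the name transcode_uploaded is hypothetical) that feeds the uploaded file into a transcode job:

```python
import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.ffmpeg-micro.com"

def transcode_uploaded(file_url):
    """Start a transcode job for a file uploaded via the presigned-URL
    flow -- the fileUrl goes into the inputs array like any public URL."""
    response = requests.post(
        f"{BASE_URL}/v1/transcodes",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "inputs": [{"url": file_url}],
            "outputFormat": "mp4",
            "preset": {"quality": "medium", "resolution": "720p"},
        },
    )
    response.raise_for_status()
    return response.json()
```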

When to Use an API vs. Local FFmpeg

Local FFmpeg still makes sense for some cases. If you're running batch processing on a dedicated server that you control, a local install is simpler and cheaper at high volumes.

But if you're building a web app, running serverless functions, or deploying to platforms where you don't control the OS, an API removes an entire category of deployment problems. You also don't have to think about FFmpeg version mismatches, codec availability, or scaling compute for video-heavy workloads.

FFmpeg Micro has a free tier with 10 minutes of processing per month, which is enough to build and test your integration before you commit to anything. Sign up and get your API key to try the examples above.
