RenderIO

Posted on • Originally published at renderio.dev

Batch Process AI Videos for Social Media Platforms

Why you can't upload the same video everywhere

You generated a great AI video. Now you need it on TikTok, Instagram Reels, YouTube Shorts, LinkedIn, and Twitter. Each platform wants different dimensions, aspect ratios, file sizes, and audio levels. If you're also looking to compress video with FFmpeg before uploading, that guide covers CRF values tuned per platform.

Manually exporting five versions in Premiere takes 15-20 minutes per video. If you're producing 10 videos per day, that's 3 hours of repetitive export work. Nobody should spend their day doing that.

FFmpeg handles all five conversions. RenderIO runs them in parallel so all five finish in the time it takes to process one.

Platform specifications for batch processing AI video

Here are the exact specs each platform expects (verified March 2026):

| Platform | Resolution | Aspect Ratio | Max Duration | Max File Size | Audio |
|---|---|---|---|---|---|
| TikTok | 1080x1920 | 9:16 | 10 min | 287 MB | -14 LUFS |
| Instagram Reels | 1080x1920 | 9:16 | 3 min | 250 MB | -14 LUFS |
| YouTube Shorts | 1080x1920 | 9:16 | 3 min | 256 MB | -14 LUFS |
| LinkedIn | 1920x1080 | 16:9 | 10 min | 5 GB | -16 LUFS |
| Twitter/X | 1280x720 | 16:9 | 2:20 min | 512 MB | -16 LUFS |

The differences look subtle but they matter. Wrong aspect ratio means black bars or cropped faces. Wrong audio levels mean your video sounds too loud or too quiet next to native content. And TikTok actually rejects files over 287 MB, not 300 MB like some guides claim.
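Because the limits are so close together, it's easy to ship a file that passes four platforms and bounces off the fifth. A quick pre-upload check against the table above catches that before the upload API does — this is a sketch with the spec values hard-coded from the table, not an official validator:

```javascript
// Platform limits from the table above (durations in seconds, sizes in MB).
const SPECS = {
  tiktok:   { width: 1080, height: 1920, maxSeconds: 600, maxMB: 287 },
  reels:    { width: 1080, height: 1920, maxSeconds: 180, maxMB: 250 },
  shorts:   { width: 1080, height: 1920, maxSeconds: 180, maxMB: 256 },
  linkedin: { width: 1920, height: 1080, maxSeconds: 600, maxMB: 5120 },
  twitter:  { width: 1280, height: 720,  maxSeconds: 140, maxMB: 512 },
};

// Returns a list of spec violations; an empty list means the file should upload cleanly.
function checkAgainstSpec(platform, { width, height, seconds, sizeMB }) {
  const spec = SPECS[platform];
  const problems = [];
  if (width !== spec.width || height !== spec.height) {
    problems.push(`expected ${spec.width}x${spec.height}, got ${width}x${height}`);
  }
  if (seconds > spec.maxSeconds) problems.push(`duration ${seconds}s exceeds ${spec.maxSeconds}s`);
  if (sizeMB > spec.maxMB) problems.push(`file size ${sizeMB} MB exceeds ${spec.maxMB} MB`);
  return problems;
}
```

Feed it the numbers from `ffprobe` (or your storage layer's metadata) right before upload, and fail fast on anything that would be rejected.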

Note on LinkedIn: it also accepts 9:16 vertical video now, but 16:9 still performs better in the feed because LinkedIn's desktop audience is larger than mobile. If your content is primarily mobile-targeted, you could skip the landscape conversion.

FFmpeg commands for each platform

All commands below assume a 1920x1080 (16:9) AI-generated source video.

TikTok (9:16, 1080x1920)

ffmpeg -i input.mp4 \
  -map_metadata -1 \
  -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" \
  -map "[v]" -map 0:a? \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v libx264 -crf 22 -c:a aac -b:a 128k \
  -movflags +faststart \
  tiktok.mp4

The boxblur background avoids ugly black bars. The blurred version of your video fills the frame behind the properly scaled foreground. The noise filter adds grain so it doesn't look sterile.

Instagram Reels (9:16, 1080x1920, 3 min max)

Reels technically allows up to 3 minutes, but the algorithm pushes shorter content harder. Keeping it under 90 seconds gives you better reach in Explore.

ffmpeg -i input.mp4 \
  -map_metadata -1 -t 90 \
  -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" \
  -map "[v]" -map 0:a? \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v libx264 -crf 22 -c:a aac -b:a 128k \
  -movflags +faststart \
  reels.mp4

The -t 90 flag truncates at 90 seconds, which keeps you in the algorithm's sweet spot. If your video is under 90 seconds, it has no effect.

YouTube Shorts (9:16, 1080x1920, 3 min max)

YouTube extended Shorts to 3 minutes in October 2024. The command below trims to 60 seconds since shorter Shorts get more impressions, but you can bump -t 60 to -t 180 if your content needs the extra time.

ffmpeg -i input.mp4 \
  -map_metadata -1 -t 60 \
  -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" \
  -map "[v]" -map 0:a? \
  -af "loudnorm=I=-14:TP=-2:LRA=7" \
  -c:v libx264 -crf 20 -c:a aac -b:a 128k \
  -movflags +faststart \
  shorts.mp4

Lower CRF (20 vs 22) because YouTube re-encodes aggressively. Starting with higher quality gives a better result after YouTube's compression pass.

LinkedIn (16:9, 1920x1080)

ffmpeg -i input.mp4 \
  -map_metadata -1 \
  -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black" \
  -af "loudnorm=I=-16:TP=-2:LRA=11" \
  -c:v libx264 -crf 20 -c:a aac -b:a 192k \
  -movflags +faststart \
  linkedin.mp4

LinkedIn targets -16 LUFS (quieter than TikTok/Reels). Higher audio bitrate because LinkedIn's player doesn't re-encode as aggressively.

Twitter/X (16:9, 1280x720, 2:20 max)

ffmpeg -i input.mp4 \
  -map_metadata -1 -t 140 \
  -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black" \
  -af "loudnorm=I=-16:TP=-2:LRA=11" \
  -c:v libx264 -crf 23 -c:a aac -b:a 128k \
  -movflags +faststart \
  twitter.mp4

Twitter limits videos to 2 minutes 20 seconds (140 seconds). CRF 23 keeps file size down since Twitter's 512 MB limit is generous but their player favors smaller files for faster loading.

Batch all five with RenderIO API

Send five API calls in parallel. Each produces one platform version. For more on the API, see the complete FFmpeg API guide.

const INPUT_URL = "https://storage.example.com/ai-video.mp4";

const platforms = [
  {
    name: "tiktok",
    command: `-i {{in_video}} -map_metadata -1 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,noise=alls=16:allf=t[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" -map "[v]" -map 0:a? -af "loudnorm=I=-14:TP=-2:LRA=7" -c:v libx264 -crf 22 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`,
  },
  {
    name: "reels",
    command: `-i {{in_video}} -map_metadata -1 -t 90 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" -map "[v]" -map 0:a? -af "loudnorm=I=-14:TP=-2:LRA=7" -c:v libx264 -crf 22 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`,
  },
  {
    name: "shorts",
    command: `-i {{in_video}} -map_metadata -1 -t 60 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920,boxblur=25[bg];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease[fg];[bg][fg]overlay=(W-w)/2:(H-h)/2[v]" -map "[v]" -map 0:a? -af "loudnorm=I=-14:TP=-2:LRA=7" -c:v libx264 -crf 20 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`,
  },
  {
    name: "linkedin",
    command: `-i {{in_video}} -map_metadata -1 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black" -af "loudnorm=I=-16:TP=-2:LRA=11" -c:v libx264 -crf 20 -c:a aac -b:a 192k -movflags +faststart {{out_video}}`,
  },
  {
    name: "twitter",
    command: `-i {{in_video}} -map_metadata -1 -t 140 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2:black" -af "loudnorm=I=-16:TP=-2:LRA=11" -c:v libx264 -crf 23 -c:a aac -b:a 128k -movflags +faststart {{out_video}}`,
  },
];

async function processForAllPlatforms(inputUrl) {
  const jobs = platforms.map(platform =>
    fetch("https://renderio.dev/api/v1/run-ffmpeg-command", {
      method: "POST",
      headers: {
        "X-API-KEY": process.env.RENDERIO_API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        ffmpeg_command: platform.command,
        input_files: { in_video: inputUrl },
        output_files: { out_video: `${platform.name}.mp4` },
      }),
    }).then(r => r.json())
  );

  return Promise.all(jobs);
}

const results = await processForAllPlatforms(INPUT_URL);
// 5 jobs running in parallel on Cloudflare's edge

Five API calls. Five platform-optimized videos. All processing happens in parallel, so you wait for one, not five.
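If the API returns a job ID rather than blocking until the render finishes, you'll want a small polling helper before downloading outputs. This is a hypothetical sketch: the status endpoint path and the `status`/`output_url` field names are assumptions, not the documented RenderIO response shape, so adjust them to what the API actually returns. The fetcher is injectable so the loop is easy to test:

```javascript
// Polls an assumed job-status endpoint until the job succeeds, fails, or times out.
async function waitForJob(jobId, { fetchFn = fetch, intervalMs = 2000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetchFn(`https://renderio.dev/api/v1/commands/${jobId}`, {
      headers: { "X-API-KEY": process.env.RENDERIO_API_KEY },
    });
    const job = await res.json();
    if (job.status === "SUCCESS") return job;           // job.output_url is assumed to be here
    if (job.status === "FAILED") throw new Error(`Job ${jobId} failed`);
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} timed out`);
}
```

Wrap each entry of `Promise.all(jobs)` in this and you get the same "wait for the slowest one" behavior even with an async job queue.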

If you also need to generate speed variations — a 1.5x version for Stories previews or a 2x cut for YouTube Shorts teasers — you can add those as additional jobs in the same batch. The FFmpeg speed up video guide covers setpts and atempo with ready-to-use API examples.

Automate batch processing with n8n or Zapier

n8n workflow

  1. Trigger: New file in Google Drive, S3, or webhook
  2. Split: Fan out to 5 parallel branches
  3. HTTP Request: Each branch calls RenderIO with platform-specific command
  4. Wait: Poll for completion (or use webhook callback)
  5. Upload: Send each output to the respective platform's upload API

Zapier workflow

  1. Trigger: New file in Dropbox/Drive
  2. Webhooks by Zapier: POST to RenderIO API (TikTok version)
  3. Delay: Wait 30 seconds
  4. Webhooks by Zapier: GET command status
  5. Filter: Continue when status is "SUCCESS"
  6. Repeat for each platform using Zapier paths

For detailed workflow setup, see the n8n video processing guide or the AI UGC video processing pipeline guide. Brands processing creator content should also check the UGC video processing guide for normalization and branding steps.

Choosing aspect ratio strategy

When converting 16:9 to 9:16, you have three options. The right choice depends on your content.

Crop from center works when the subject is centered (most AI talking-head videos). You lose the left and right edges but keep the subject sharp.

Blurred background preserves the full frame at a smaller size with a blurred version filling the background. Looks clean but the video appears smaller on screen.

Letterbox/pillarbox (black bars) is technically correct but looks terrible on mobile-first platforms. Avoid it for TikTok and Reels.
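If you go with the center crop, the arithmetic is worth getting right: a 9:16 window cut from a 16:9 source keeps the full height but only about a third of the width, and the crop width must land on an even number for yuv420p. A small helper to compute the crop filter (the even-rounding choice here is mine, not from the article):

```javascript
// Computes a centered 9:16 crop window from a landscape source,
// rounding the crop width down to an even value for yuv420p encoding.
function centerCrop916(srcWidth, srcHeight) {
  let cropWidth = Math.floor(srcHeight * 9 / 16);
  if (cropWidth % 2 !== 0) cropWidth -= 1;
  const x = Math.floor((srcWidth - cropWidth) / 2);
  return {
    cropWidth,
    cropHeight: srcHeight,
    x,
    y: 0,
    filter: `crop=${cropWidth}:${srcHeight}:${x}:0,scale=1080:1920`,
  };
}
```

For a 1920x1080 source this yields `crop=606:1080:657:0,scale=1080:1920` — everything more than ~300 px off-center gets cut, which is why this option only works for centered subjects.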

For landscape-to-portrait conversion specifically, the HeyGen to Reels guide covers cropping strategies for talking-head content.

Quality optimization tips

Don't over-compress for TikTok. TikTok re-encodes everything you upload. If you upload at CRF 28 (low quality), TikTok's re-encoding makes it look worse. Upload at CRF 20-22 and let TikTok's encoder work with better source material.

Use -movflags +faststart on every output. This moves the moov atom (the index the player needs) to the beginning of the file, so playback can start before the whole file has downloaded. Platforms and users both prefer it.

Match the frame rate. If your source is 24fps, don't upconvert to 30fps. It creates stuttery motion interpolation artifacts. Keep the original frame rate unless a platform specifically requires 30fps.

Test your audio. Play your output next to a popular native video on each platform. If yours sounds noticeably quieter or louder, your LUFS target is off. The -14 LUFS target for TikTok/Reels is a guideline — some creators target -12 for punchier audio.

Scaling to 10+ videos per day

At 10 videos per day, you're making 50 API calls daily (5 platforms x 10 videos). That's 1,500 per month.

| Daily videos | Monthly API calls | Plan | Monthly cost |
|---|---|---|---|
| 1-3 | 150-450 | Starter | $9/mo |
| 4-6 | 600-900 | Growth | $29/mo |
| 10-50 | 1,500-7,500 | Business | $99/mo |
| 100+ | 15,000+ | Business + overage | $99/mo + usage |

Each video processed on RenderIO costs about $0.005-$0.018 depending on your plan. Compare that to building and maintaining your own FFmpeg server, which runs $50-200/month in compute alone before engineering time. For a full cost comparison, see the FFmpeg API pricing comparison. You can also transcode video with FFmpeg locally for testing before scaling to the API.
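The volume math above reduces to one call per platform per video, which makes plan sizing a one-liner (the defaults of 5 platforms and 30 days are the assumptions used in this section):

```javascript
// Monthly API call volume: one render call per platform per video.
function monthlyApiCalls(videosPerDay, platformCount = 5, daysPerMonth = 30) {
  return videosPerDay * platformCount * daysPerMonth;
}
```

`monthlyApiCalls(10)` gives 1,500, which is the figure the plan table uses for the Business tier floor.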

Troubleshooting common issues

Black screen on TikTok upload. Usually means the video codec isn't H.264 or the pixel format isn't yuv420p. Add -pix_fmt yuv420p to your command.

Audio out of sync after conversion. Common with variable frame rate sources. Add -fps_mode cfr (or -vsync cfr on FFmpeg releases older than 5.1, where the option was renamed) to force a constant frame rate output.

Instagram rejects the upload. Check file size (250 MB max for Reels) and duration (3 minutes max). Also confirm the video is at least 3 seconds long, since Reels rejects shorter clips.

FAQ

Can I batch process videos that aren't AI-generated?

Yes. The commands work with any MP4 input. Screen recordings, phone clips, webcam footage, anything. The pipeline doesn't care about the video source.

How long does batch processing take for one video?

About 15-45 seconds for all five platform versions to finish, depending on the video length. They process in parallel, so you're waiting for the slowest one, not all five sequentially.

What if a platform changes its specs?

Update the relevant command in your platforms array. The rest of the pipeline stays the same. We update this guide when platforms change their requirements.

Can I add a watermark or logo during batch processing?

Yes. Add an overlay filter to each command. For example: -i {{in_video}} -i {{in_logo}} -filter_complex "[1:v]scale=80:-1[logo];[0:v][logo]overlay=W-w-20:20[v]". The logo file is a second input in the API call.

What's the difference between this and using CapCut's batch export?

CapCut requires manual setup per video and runs on your machine. RenderIO processes via API, so it integrates into automated workflows (n8n, Zapier, custom scripts) and runs on Cloudflare's edge infrastructure. No local compute needed.
