<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RenderIO</title>
    <description>The latest articles on DEV Community by RenderIO (@renderio).</description>
    <link>https://dev.to/renderio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3863667%2Ff75376b2-0d0e-41ee-acfe-348dd81e39b0.png</url>
      <title>DEV Community: RenderIO</title>
      <link>https://dev.to/renderio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/renderio"/>
    <language>en</language>
    <item>
      <title>E-Commerce Video Processing API: Product Video Pipeline</title>
      <dc:creator>RenderIO</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:19:36 +0000</pubDate>
      <link>https://dev.to/renderio/e-commerce-video-processing-api-product-video-pipeline-36ec</link>
      <guid>https://dev.to/renderio/e-commerce-video-processing-api-product-video-pipeline-36ec</guid>
      <description>&lt;h2&gt;
  
  
  E-commerce needs video at scale
&lt;/h2&gt;

&lt;p&gt;Product pages with video convert better than static images. A 2023 Wyzowl survey found that 82% of consumers were convinced to buy a product after watching a video. Shopify merchants who added product video to their listings reported higher conversion rates, particularly in electronics and apparel. TikTok Shop requires video for every listing. Amazon encourages product video. Instagram Shopping is video-first.&lt;/p&gt;

&lt;p&gt;The problem isn't creating one video. It's processing thousands. Every product needs videos resized for each platform, watermarks for different sales channels, compressed versions for fast loading, and multiple variations for A/B testing.&lt;/p&gt;

&lt;p&gt;A video editor can handle 10-20 products per day. An API handles 10,000.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use a video processing API
&lt;/h2&gt;

&lt;p&gt;Building video processing in-house means installing FFmpeg on your servers, managing CPU-intensive workloads that spike and idle, handling file storage and delivery, building queuing systems for batch operations, and scaling infrastructure as your catalog grows.&lt;/p&gt;

&lt;p&gt;Or you can send an HTTP request and get a processed video back.&lt;/p&gt;

&lt;p&gt;RenderIO runs FFmpeg on Cloudflare's edge network. You send a command, it processes the video, you get a download URL. No servers, no scaling, no infrastructure. For a full walkthrough of the API, see the &lt;a href="https://renderio.dev/blogs/ffmpeg-api-complete-guide" rel="noopener noreferrer"&gt;FFmpeg API complete guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core e-commerce video processing API operations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resize for platforms
&lt;/h3&gt;

&lt;p&gt;Each sales channel has different requirements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# TikTok Shop: 9:16, 1080x1920&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -vf \"scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2:white\" -c:v libx264 -crf 20 -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://cdn.example.com/product-original.mp4" },
    "output_files": { "out_video": "product-tiktok.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Amazon: 16:9, 1920x1080, max 5GB&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -vf \"scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:white\" -c:v libx264 -crf 18 -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://cdn.example.com/product-original.mp4" },
    "output_files": { "out_video": "product-amazon.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Instagram Shopping: 1:1, 1080x1080&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -vf \"crop=min(iw\\,ih):min(iw\\,ih),scale=1080:1080\" -c:v libx264 -crf 20 -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://cdn.example.com/product-original.mp4" },
    "output_files": { "out_video": "product-instagram.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add watermarks
&lt;/h3&gt;

&lt;p&gt;Protect product videos on different channels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -i {{in_logo}} -filter_complex \"[1:v]scale=120:-1,format=rgba,colorchannelmixer=aa=0.4[logo];[0:v][logo]overlay=W-w-20:H-h-20[v]\" -map \"[v]\" -map 0:a -c:v libx264 -crf 20 -c:a copy {{out_video}}",
    "input_files": {
      "in_video": "https://cdn.example.com/product.mp4",
      "in_logo": "https://cdn.example.com/brand-logo.png"
    },
    "output_files": { "out_video": "product-watermarked.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;colorchannelmixer=aa=0.4&lt;/code&gt; sets the watermark's alpha to 40% opacity (60% transparent). Professional without being intrusive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compress for fast loading
&lt;/h3&gt;

&lt;p&gt;Product pages need fast-loading video. Here's how to compress without visible quality loss:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -c:v libx264 -crf 28 -preset slow -vf \"scale=720:-2\" -c:a aac -b:a 96k -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://cdn.example.com/product-hd.mp4" },
    "output_files": { "out_video": "product-web.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produces a 720p video with aggressive compression. File size typically drops 70-80% while maintaining acceptable quality for product pages. For more compression strategies, check the &lt;a href="https://renderio.dev/blogs/ffmpeg-compress-video" rel="noopener noreferrer"&gt;video compression guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create thumbnails
&lt;/h3&gt;

&lt;p&gt;Extract the best frame for product listing thumbnails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -vf \"select=eq(pict_type\\,I),scale=800:-1\" -frames:v 1 {{out_thumb}}",
    "input_files": { "in_video": "https://cdn.example.com/product.mp4" },
    "output_files": { "out_thumb": "thumbnail.jpg" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This extracts the first I-frame (keyframe). I-frames are encoded without reference to neighboring frames, so they're typically the sharpest frames in the video. For more thumbnail strategies, including scene detection and quality optimization, see the &lt;a href="https://renderio.dev/blogs/ffmpeg-extract-frames" rel="noopener noreferrer"&gt;FFmpeg frame extraction guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the complete pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  One product, all platforms
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processProductVideo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;videoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;logoUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;platforms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tiktok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -i {{in_logo}} -filter_complex "[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.3[logo];[0:v]scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2:white[bg];[bg][logo]overlay=W-w-20:20[v]" -map "[v]" -map 0:a? -c:v libx264 -crf 22 -c:a aac -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;amazon&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:white" -c:v libx264 -crf 18 -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;instagram&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -i {{in_logo}} -filter_complex "[1:v]scale=80:-1,format=rgba,colorchannelmixer=aa=0.3[logo];[0:v]crop=min(iw&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;,ih):min(iw&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;,ih),scale=1080:1080[bg];[bg][logo]overlay=W-w-15:H-h-15[v]" -map "[v]" -map 0:a? -c:v libx264 -crf 20 -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;web-compressed&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -c:v libx264 -crf 28 -preset slow -vf "scale=720:-2" -c:a aac -b:a 96k -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;platforms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://renderio.dev/api/v1/run-ffmpeg-command&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;X-API-KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RENDERIO_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;ffmpeg_command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;input_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;in_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;videoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;in_logo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;logoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;output_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;out_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.mp4`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four API calls per product. Four platform-ready videos. All processed in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Batch process catalog
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processCatalog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;BATCH_SIZE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;BATCH_SIZE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;batch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;BATCH_SIZE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;batch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
        &lt;span class="nf"&gt;processProductVideo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;logoUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Processed &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;BATCH_SIZE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;products&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; products`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;1,000 products × 4 platforms = 4,000 API calls. That fits within the Business plan (20,000 commands/month).&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling processing failures
&lt;/h2&gt;

&lt;p&gt;Video processing can fail for a few reasons, and your pipeline needs to handle each one:&lt;/p&gt;

&lt;h3&gt;
  
  
  Invalid input URL
&lt;/h3&gt;

&lt;p&gt;The source video doesn't exist or requires authentication. Use signed URLs with at least 1 hour of expiry. If you're pulling from Shopify's CDN, those URLs are public. But if you're hosting on S3, make sure the bucket policy or presigned URL allows access.&lt;/p&gt;
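&lt;p&gt;One way to catch this class of failure before submitting a job is to check how much lifetime a presigned URL has left. A minimal sketch (the helper name is ours; it reads the standard SigV4 query parameters that presigned S3 URLs carry):&lt;/p&gt;

```javascript
// Returns the seconds of validity left on a SigV4-presigned URL,
// or null if the URL doesn't carry presigned-URL query parameters.
function presignedUrlSecondsLeft(url, now = new Date()) {
  const params = new URL(url).searchParams;
  const signedDate = params.get("X-Amz-Date");          // e.g. "20260406T111900Z"
  const lifetime = Number(params.get("X-Amz-Expires")); // lifetime in seconds
  if (!signedDate || !lifetime) return null;
  // Convert the compact SigV4 timestamp into ISO 8601 so Date.parse accepts it.
  const signedAt = Date.parse(
    signedDate.replace(
      /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
      "$1-$2-$3T$4:$5:$6Z"
    )
  );
  return Math.round((signedAt + lifetime * 1000 - now.getTime()) / 1000);
}
```

&lt;p&gt;Skip or re-sign any URL with less than an hour left before queuing the job.&lt;/p&gt;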

&lt;h3&gt;
  
  
  Timeout on large files
&lt;/h3&gt;

&lt;p&gt;Videos over 500MB take longer. Poll with reasonable intervals (5-10 seconds) and set a maximum retry count. If a command hasn't completed after 5 minutes, check the error status rather than polling forever.&lt;/p&gt;

&lt;h3&gt;
  
  
  FFmpeg command errors
&lt;/h3&gt;

&lt;p&gt;A typo in your filter chain fails the whole command. Test commands locally with a sample file before putting them in your pipeline. The error response includes FFmpeg's stderr output, so read it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;pollWithRetry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;maxAttempts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;maxAttempts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`https://renderio.dev/api/v1/commands/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;commandId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;X-API-KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;apiKey&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SUCCESS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FAILED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`FFmpeg failed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Timeout after &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;maxAttempts&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Webhook-based completion
&lt;/h2&gt;

&lt;p&gt;For production pipelines, polling loops waste resources. Use webhooks instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ffmpeg_command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-i {{in_video}} -vf &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;scale=1080:1920:...&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -c:v libx264 -crf 20 {{out_video}}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"input_files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"in_video"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://cdn.example.com/product.mp4"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"output_files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"out_video"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"product-tiktok.mp4"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"webhook_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://your-server.com/api/video-complete"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the command finishes, RenderIO sends a POST to your webhook URL with the command status and output file URLs. Your server processes the callback, updates the product listing, and moves on. No polling. No wasted API calls.&lt;/p&gt;

&lt;p&gt;This matters at scale. If you're processing 500 products per day across 4 platforms, that's 2,000 commands. With polling at 5-second intervals and an average processing time of 15 seconds, you'd make 6,000 status checks. With webhooks, you make zero.&lt;/p&gt;
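&lt;p&gt;The receiving side can stay small. Here's a sketch of the handler logic, assuming the callback body mirrors the polling response (&lt;code&gt;status&lt;/code&gt;, &lt;code&gt;error&lt;/code&gt;, &lt;code&gt;output_files&lt;/code&gt; — the exact field names are an assumption) and output names follow the &lt;code&gt;{productId}-{platform}.mp4&lt;/code&gt; convention used in the pipeline above:&lt;/p&gt;

```javascript
// Decide what to do with a completion callback. Pure function, so the
// HTTP layer (Express, a Worker, etc.) stays a thin wrapper around it.
function handleVideoComplete(payload) {
  if (payload.status === "FAILED") {
    return { action: "retry", reason: payload.error };
  }
  if (payload.status !== "SUCCESS") {
    return { action: "ignore" }; // any intermediate state
  }
  // Map "sku123-tiktok.mp4" back to the product and platform it belongs to.
  // Splits on the first dash, so this assumes product IDs contain no dashes.
  const updates = Object.entries(payload.output_files).map(([name, url]) => {
    const base = name.replace(/\.mp4$/, "");
    const dash = base.indexOf("-");
    return {
      productId: base.slice(0, dash),
      platform: base.slice(dash + 1),
      url,
    };
  });
  return { action: "update", updates };
}
```

&lt;p&gt;Keeping the decision logic separate from the HTTP endpoint makes it trivial to unit test without spinning up a server.&lt;/p&gt;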

&lt;h2&gt;
  
  
  Integration with e-commerce platforms
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Shopify
&lt;/h3&gt;

&lt;p&gt;Use Shopify's Admin API to upload processed videos back to product listings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// After RenderIO processing completes&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;processedVideoUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;completedCommand&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_files&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;product-web.mp4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;shopify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;media&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Product video&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;mediaContentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;VIDEO&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;originalSource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;processedVideoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For automating this with Zapier (new product triggers video creation automatically), see the &lt;a href="https://renderio.dev/blogs/zapier-automate-product-videos" rel="noopener noreferrer"&gt;Zapier product video automation guide&lt;/a&gt;. The &lt;a href="https://renderio.dev/blogs/n8n-video-processing-guide" rel="noopener noreferrer"&gt;n8n video processing guide&lt;/a&gt; covers the same flow for n8n users.&lt;/p&gt;

&lt;h3&gt;
  
  
  WooCommerce
&lt;/h3&gt;

&lt;p&gt;WooCommerce doesn't have native video fields on products. You'll need either a plugin like "Product Video for WooCommerce" or a custom meta field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;woocommerce&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;products/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;productId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;meta_data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;product_video_url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;processedVideoUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This stores the URL in &lt;code&gt;meta_data&lt;/code&gt;, but your theme won't display it automatically. You'll need a custom template snippet or a video gallery plugin that reads from meta fields.&lt;/p&gt;

&lt;h3&gt;
  
  
  TikTok Shop
&lt;/h3&gt;

&lt;p&gt;TikTok Shop video requirements: 9:16 aspect ratio, minimum 720x1280 resolution (1080x1920 recommended), MP4 format, under 500MB, 15-60 seconds. The resize command above handles the format. Upload via TikTok Shop's Content API or manually through Seller Center.&lt;/p&gt;
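
&lt;p&gt;A pre-upload check keeps rejected uploads out of the queue. This is a minimal sketch: the metadata field names are hypothetical (pull the real values from ffprobe or your asset database), and the limits mirror the specs listed above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Returns a list of spec violations; an empty array means safe to upload.
function validateForTikTokShop(meta) {
  const issues = [];
  if (meta.format !== "mp4") issues.push("format must be MP4");
  if (meta.width * 16 !== meta.height * 9) issues.push("aspect ratio must be 9:16");
  if (meta.width &amp;lt; 720 || meta.height &amp;lt; 1280) issues.push("minimum resolution is 720x1280");
  if (meta.sizeBytes &amp;gt; 500 * 1024 * 1024) issues.push("file must be under 500MB");
  if (meta.durationSec &amp;lt; 15 || meta.durationSec &amp;gt; 60) issues.push("duration must be 15-60 seconds");
  return issues;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;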

&lt;h2&gt;
  
  
  Automating the full workflow
&lt;/h2&gt;

&lt;p&gt;The real power is connecting video processing to your product catalog so it runs automatically. There are two good approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zapier&lt;/strong&gt;: New product trigger → RenderIO API → upload to storage. The &lt;a href="https://renderio.dev/blogs/zapier-automate-product-videos" rel="noopener noreferrer"&gt;Zapier product video guide&lt;/a&gt; walks through the complete Zap with template overlays and text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;n8n&lt;/strong&gt;: Webhook or schedule trigger → batch process → upload. More flexibility for complex logic. See the &lt;a href="https://renderio.dev/blogs/n8n-video-processing-guide" rel="noopener noreferrer"&gt;n8n video processing guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For batch processing large catalogs at once, the &lt;a href="https://renderio.dev/blogs/batch-process-ai-videos-social-media" rel="noopener noreferrer"&gt;batch process AI videos guide&lt;/a&gt; covers parallelization strategies.&lt;/p&gt;
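
&lt;p&gt;One caution before fanning out: firing one request per product at once means a 5,000-product catalog opens 20,000 simultaneous connections. A small concurrency limiter keeps the fan-out sane. This sketch is generic; the limit you pass is your own choice, not a RenderIO rate limit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Run an array of promise-returning task functions, at most `limit`
// in flight at a time. Results keep the original order.
async function runWithConcurrency(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next &amp;lt; tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const n = Math.min(limit, tasks.length);
  const workers = [];
  for (let w = 0; w &amp;lt; n; w++) workers.push(worker());
  await Promise.all(workers);
  return results;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Build one task per (product, platform) pair, where each task is a fetch to the run-ffmpeg-command endpoint, and pass the array in.&lt;/p&gt;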

&lt;h2&gt;
  
  
  Pricing for e-commerce
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Catalog size&lt;/th&gt;
&lt;th&gt;Platforms&lt;/th&gt;
&lt;th&gt;API calls/month&lt;/th&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Cost/month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;30 products&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;120&lt;/td&gt;
&lt;td&gt;Starter&lt;/td&gt;
&lt;td&gt;$9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;250 products&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;Growth&lt;/td&gt;
&lt;td&gt;$29&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5,000 products&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;20,000&lt;/td&gt;
&lt;td&gt;Business&lt;/td&gt;
&lt;td&gt;$99&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Zero egress fees because RenderIO stores output on Cloudflare R2. No hidden storage costs. No bandwidth charges.&lt;/p&gt;

&lt;p&gt;The math is straightforward: count your products, multiply by the number of platform versions you need per product, and pick the plan that fits. Most small-to-medium stores land on Growth. Large catalogs or stores that reprocess on price/image changes need Business.&lt;/p&gt;
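
&lt;p&gt;That sizing rule as code. The thresholds are read off the table rows above; treat them as illustrative, not official plan limits:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Monthly API calls = products x platform versions per product.
function pickPlan(products, platformsPerProduct) {
  const calls = products * platformsPerProduct;
  if (calls &amp;lt;= 120) return { plan: "Starter", calls };
  if (calls &amp;lt;= 1000) return { plan: "Growth", calls };
  return { plan: "Business", calls };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;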

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How long does video processing take?
&lt;/h3&gt;

&lt;p&gt;Typical product videos (30-60 seconds, 1080p source) process in 5-15 seconds per operation. Resizing is faster than complex filter chains. A four-platform pipeline for one product finishes in about 15-20 seconds total since all four commands run in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I process videos when a new product is added?
&lt;/h3&gt;

&lt;p&gt;Yes. Set up a Shopify webhook (or use Zapier/n8n) to trigger video processing whenever a product is created or updated. The &lt;a href="https://renderio.dev/blogs/zapier-automate-product-videos" rel="noopener noreferrer"&gt;Zapier product video guide&lt;/a&gt; has the complete setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  What video formats does TikTok Shop accept?
&lt;/h3&gt;

&lt;p&gt;MP4 is the safe choice. TikTok Shop requires 9:16 aspect ratio, minimum 720x1280 resolution (1080x1920 recommended), H.264 codec, under 500MB, and between 15-60 seconds duration.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I handle processing failures?
&lt;/h3&gt;

&lt;p&gt;Check the command status endpoint. Failed commands return a &lt;code&gt;status: "failed"&lt;/code&gt; with an &lt;code&gt;error&lt;/code&gt; field containing FFmpeg's stderr output. Common fixes: verify your input URL is accessible, check your FFmpeg command syntax, ensure the output format matches the filename extension.&lt;/p&gt;
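
&lt;p&gt;Retries are worth automating, but only for transient failures. A sketch of the decision logic; the error strings matched here are typical FFmpeg messages, not an exhaustive list, and the three-attempt cap is an arbitrary choice:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Retry transient failures up to 3 attempts; skip errors that mean
// the command itself is wrong (retrying those just burns API calls).
function shouldRetry(command, attempt) {
  if (command.status !== "failed" || attempt &amp;gt;= 3) return false;
  const permanent = ["Invalid argument", "No such file or directory"];
  const err = command.error || "";
  return !permanent.some(function (msg) { return err.includes(msg); });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;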

&lt;h3&gt;
  
  
  Is there a file size limit?
&lt;/h3&gt;

&lt;p&gt;No hard limit on the API side. Input files are fetched via URL, so the bottleneck is download speed. Videos over 1GB work fine but take longer to fetch and process. For very large files (2GB+), expect processing times of 1-3 minutes.&lt;/p&gt;

</description>
      <category>ffmpeg</category>
      <category>video</category>
      <category>api</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Build an AI UGC Video Processing Pipeline</title>
      <dc:creator>RenderIO</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:18:50 +0000</pubDate>
      <link>https://dev.to/renderio/build-an-ai-ugc-video-processing-pipeline-17kl</link>
      <guid>https://dev.to/renderio/build-an-ai-ugc-video-processing-pipeline-17kl</guid>
      <description>&lt;h2&gt;
  
  
  The real bottleneck in AI UGC video production
&lt;/h2&gt;

&lt;p&gt;AI-generated UGC for ads and social media has moved past the "can we do this" phase. Tools like HeyGen, Synthesia, and D-ID produce convincing avatar videos. The generation part works. Everything after generation is where teams get stuck.&lt;/p&gt;

&lt;p&gt;You generate a video. Then you need to post-process it so it doesn't scream "AI." Then you need variations for A/B testing across ad sets. Then each variation needs reformatting for different platforms. One base video can turn into 50-100 output files. Without a pipeline, each one is manual work in Premiere or CapCut.&lt;/p&gt;

&lt;p&gt;This guide walks through building that pipeline with FFmpeg and the &lt;a href="https://renderio.dev/blogs/ffmpeg-api-complete-guide" rel="noopener noreferrer"&gt;RenderIO API&lt;/a&gt;, from raw AI output to platform-ready content.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the AI UGC video processing pipeline works
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI Generation → Post-Processing → Variation → Platform Formatting → Distribution
  (HeyGen)       (RenderIO)     (RenderIO)     (RenderIO)         (n8n/Zapier)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each stage is a separate API call. Each call runs independently. The entire pipeline from generation to distribution takes under 10 minutes for 50+ output files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 1: choose your AI generation tool
&lt;/h2&gt;

&lt;p&gt;Pick the tool that matches what you're building:&lt;/p&gt;

&lt;p&gt;HeyGen works best for talking-head UGC with custom avatars. If you're creating product demos or testimonial-style content, this is probably where you start. Their avatar quality has gotten noticeably better since late 2025. See our guide on &lt;a href="https://renderio.dev/blogs/heygen-video-to-instagram-reels" rel="noopener noreferrer"&gt;converting HeyGen output to Instagram Reels&lt;/a&gt; for the full post-processing workflow.&lt;/p&gt;

&lt;p&gt;Synthesia is more corporate. Training videos, internal comms, that sort of thing. The avatars feel professional but not "social media native."&lt;/p&gt;

&lt;p&gt;D-ID turns a single photo into a talking video. Useful when you don't have studio footage. Less realistic than HeyGen but faster to set up.&lt;/p&gt;

&lt;p&gt;Runway combined with a voice-over tool works for creative or lifestyle UGC where you want more visual flexibility than a talking head.&lt;/p&gt;

&lt;p&gt;Output from any of these: one raw MP4 file, typically 16:9, 30-60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 2: post-processing raw AI video
&lt;/h2&gt;

&lt;p&gt;Raw AI video has tells. Metadata flags it as AI-generated. Audio levels are inconsistent. The video looks "too clean" compared to native social content. Post-processing fixes all of that in one API call per base video.&lt;/p&gt;

&lt;p&gt;Here's why each step matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;map_metadata -1&lt;/code&gt; strips generation metadata that platforms can detect&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nlmeans&lt;/code&gt; smooths the overly sharp AI output, then &lt;code&gt;noise&lt;/code&gt; layers in natural film grain (AI video is unnaturally clean)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eq&lt;/code&gt; shifts color just enough to break perceptual fingerprints&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;loudnorm&lt;/code&gt; normalizes audio to -14 LUFS (what TikTok and Reels expect)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -map_metadata -1 -vf \"nlmeans=s=6:p=3:r=9,noise=alls=12:allf=t,eq=brightness=0.01:contrast=1.03:saturation=0.97\" -af \"loudnorm=I=-14:TP=-2:LRA=7\" -c:v libx264 -crf 18 -c:a aac -b:a 128k {{out_video}}",
    "input_files": { "in_video": "https://example.com/heygen-raw.mp4" },
    "output_files": { "out_video": "base-processed.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is a clean base for variation creation. For more on &lt;a href="https://renderio.dev/blogs/make-ai-video-look-natural" rel="noopener noreferrer"&gt;making AI video look natural&lt;/a&gt;, we have a dedicated guide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting post-processing
&lt;/h3&gt;

&lt;p&gt;A few things that trip people up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grain looks blocky on short videos (under 15 seconds).&lt;/strong&gt; Lower the noise value from &lt;code&gt;alls=12&lt;/code&gt; to &lt;code&gt;alls=6&lt;/code&gt;. Short clips get compressed harder by platforms, and heavy grain turns into blocky artifacts after re-encoding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audio sounds distorted after loudnorm.&lt;/strong&gt; This usually happens when the source audio is already very loud (above -8 LUFS). Add a limiter before loudnorm: &lt;code&gt;-af "alimiter=limit=0.9,loudnorm=I=-14:TP=-2:LRA=7"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HeyGen output has variable frame rate.&lt;/strong&gt; Force constant frame rate early in the chain by adding &lt;code&gt;-r 30&lt;/code&gt; before the output filename. Variable frame rate causes sync issues in some platform players.&lt;/p&gt;
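
&lt;p&gt;For short clips with hot source audio and VFR input, all three fixes combine into one command string (lighter &lt;code&gt;alls=6&lt;/code&gt; grain, the limiter ahead of loudnorm, and a forced constant frame rate, as described above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Base post-processing command with the troubleshooting fixes applied.
const fixedCommand =
  '-i {{in_video}} -map_metadata -1 ' +
  '-vf "nlmeans=s=6:p=3:r=9,noise=alls=6:allf=t,' +
  'eq=brightness=0.01:contrast=1.03:saturation=0.97" ' +
  '-af "alimiter=limit=0.9,loudnorm=I=-14:TP=-2:LRA=7" ' +
  '-r 30 -c:v libx264 -crf 18 -c:a aac -b:a 128k {{out_video}}';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;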

&lt;h2&gt;
  
  
  Stage 3: creating AI UGC video variations
&lt;/h2&gt;

&lt;p&gt;One base video becomes 10-20 unique variations. Each variation uses different FFmpeg parameters so every output has a distinct fingerprint. This matters for ad testing (different creatives per ad set) and for posting across accounts without duplicate detection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Color grade variations
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;colorVariations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;warm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;colortemperature=temperature=6500&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cool&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;colortemperature=temperature=4500&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vivid&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eq=saturation=1.3:contrast=1.1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;muted&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eq=saturation=0.7:contrast=0.95&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vintage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eq=saturation=0.8:contrast=1.1,colorbalance=rs=0.05:gs=-0.02:bs=-0.05&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Speed variations
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;speedVariations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;normal&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;setpts=1.0*PTS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;afilter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;atempo=1.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fast&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;setpts=0.9*PTS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;afilter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;atempo=1.11&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;slow&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;setpts=1.1*PTS&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;afilter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;atempo=0.91&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
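
&lt;p&gt;The &lt;code&gt;atempo&lt;/code&gt; value must be the reciprocal of the &lt;code&gt;setpts&lt;/code&gt; factor, or the audio drifts out of sync with the retimed video. A one-line helper makes that relationship explicit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// setpts scales video timestamps; atempo must undo that scaling on
// the audio side: atempo = 1 / setptsFactor (rounded for the filter).
function atempoFor(setptsFactor) {
  return Number((1 / setptsFactor).toFixed(2));
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;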



&lt;h3&gt;
  
  
  Crop variations
&lt;/h3&gt;

&lt;p&gt;Different crop positions change the video's perceptual hash, which helps if you're posting variations across multiple accounts. See our guide on &lt;a href="https://renderio.dev/blogs/batch-process-ai-videos-social-media" rel="noopener noreferrer"&gt;batch processing AI videos for social media&lt;/a&gt; for platform-specific crop strategies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cropVariations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;center&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crop=ih*9/16:ih:(iw-ih*9/16)/2:0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;left&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crop=ih*9/16:ih:iw*0.1:0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;right&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crop=ih*9/16:ih:iw*0.5:0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;tight&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crop=iw*0.6:ih*0.6:iw*0.2:ih*0.1,scale=1080:1920&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Combined variation generator
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;createVariations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseVideoUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;color&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;colorVariations&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;speed&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;speedVariations&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;color&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;speed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;speed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;color&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;af&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;speed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;afilter&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

      &lt;span class="nx"&gt;variations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -vf "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;vf&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;" -af "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;af&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;" -c:v libx264 -crf 22 -c:a aac -b:a 128k {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// 5 colors x 3 speeds = 15 variations&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;variations&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://renderio.dev/api/v1/run-ffmpeg-command&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;X-API-KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RENDERIO_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;ffmpeg_command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;input_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;in_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;baseVideoUrl&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;output_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;out_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.mp4`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;15 variations, all processing in parallel. Total time: same as processing one video.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 4: platform formatting for AI UGC videos
&lt;/h2&gt;

&lt;p&gt;Each variation needs platform-specific formatting. This multiplies your output count. For the full breakdown of specs per platform, see &lt;a href="https://renderio.dev/blogs/batch-process-ai-videos-social-media" rel="noopener noreferrer"&gt;batch processing AI videos for social media&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;platformConfigs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;tiktok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[v]" -map "[v]" -map 0:a -c:v libx264 -crf 22 -c:a aac -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;reels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -t 90 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[v]" -map "[v]" -map 0:a -c:v libx264 -crf 22 -c:a aac -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;shorts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -t 60 -filter_complex "[0:v]scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920[v]" -map "[v]" -map 0:a -c:v libx264 -crf 20 -c:a aac -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;linkedin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`-i {{in_video}} -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" -af "loudnorm=I=-16" -c:v libx264 -crf 20 -c:a aac -movflags +faststart {{out_video}}`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;formatForPlatforms&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;variationUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;variationName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;platformConfigs&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://renderio.dev/api/v1/run-ffmpeg-command&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;X-API-KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RENDERIO_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;ffmpeg_command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;input_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;in_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;variationUrl&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;output_files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;out_video&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;variationName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.mp4`&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;15 variations x 4 platforms = 60 platform-ready videos. All from one AI generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 5: distribution with n8n
&lt;/h2&gt;

&lt;p&gt;Wire it all together with n8n (or Zapier). Check the &lt;a href="https://renderio.dev/blogs/n8n-video-processing-guide" rel="noopener noreferrer"&gt;n8n video processing guide&lt;/a&gt; for setup details.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Webhook trigger receives HeyGen export URL&lt;/li&gt;
&lt;li&gt;HTTP Request sends POST to RenderIO for post-processing&lt;/li&gt;
&lt;li&gt;Wait/Poll checks command status until complete&lt;/li&gt;
&lt;li&gt;Loop iterates each variation config, sends POST to RenderIO&lt;/li&gt;
&lt;li&gt;Wait/Poll checks all variation commands&lt;/li&gt;
&lt;li&gt;Loop iterates each platform, sends POST to RenderIO&lt;/li&gt;
&lt;li&gt;Wait/Poll checks all platform commands&lt;/li&gt;
&lt;li&gt;Upload sends results to respective platform APIs or scheduling tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The entire pipeline runs automatically. You input one HeyGen URL and get 60 platform-ready videos.&lt;/p&gt;
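&lt;p&gt;The wait/poll steps in that flow reduce to a small loop. A sketch in Python, where &lt;code&gt;fetch_status&lt;/code&gt; and the &lt;code&gt;status&lt;/code&gt; field names are stand-ins for whatever your status endpoint returns, not a documented RenderIO schema:&lt;/p&gt;

```python
import time

def poll_until_done(fetch_status, command_id, interval=5, timeout=600):
    """Poll a job-status callable until it reports a terminal state.

    fetch_status(command_id) should return a dict with a "status" field
    ("queued", "processing", "done", or "failed") -- illustrative names,
    not RenderIO's documented schema.
    """
    waited = 0
    while waited <= timeout:
        job = fetch_status(command_id)
        if job["status"] == "done":
            return job
        if job["status"] == "failed":
            raise RuntimeError(f"command {command_id} failed: {job.get('error')}")
        time.sleep(interval)
        waited += interval
    raise TimeoutError(f"command {command_id} did not finish within {timeout}s")
```

&lt;p&gt;In n8n, a Wait node plus an IF node on the status field gives you the same behavior without code.&lt;/p&gt;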

&lt;h2&gt;
  
  
  Cost analysis
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;API calls per base video&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Post-processing&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;One-time cleanup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variations&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;5 colors x 3 speeds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Platform formatting&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;td&gt;15 variations x 4 platforms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;76&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Per base video&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On RenderIO's Growth plan at $29/month (1,000 commands), you can process about 13 base videos per month through the full pipeline. For higher volumes, the Business plan at $99/month (20,000 commands) handles 263 base videos per month.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost per output video (Business): ~$0.005&lt;/li&gt;
&lt;li&gt;Cost per base video on Business (76 outputs): ~$0.38&lt;/li&gt;
&lt;/ul&gt;
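&lt;p&gt;The plan math is easy to sanity-check. Prices and command quotas below come straight from the figures above:&lt;/p&gt;

```python
CALLS_PER_BASE_VIDEO = 1 + 15 + 60  # post-processing + variations + platform formats

# Plan quotas from the pricing discussed above
plans = {
    "Growth": {"price": 29, "commands": 1_000},
    "Business": {"price": 99, "commands": 20_000},
}

for name, plan in plans.items():
    base_videos = plan["commands"] // CALLS_PER_BASE_VIDEO
    per_output = plan["price"] / plan["commands"]
    per_base = per_output * CALLS_PER_BASE_VIDEO
    print(f"{name}: {base_videos} base videos/month, "
          f"${per_output:.4f}/output, ${per_base:.2f}/base video")
```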

&lt;p&gt;Here's how it compares:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Cost per base video&lt;/th&gt;
&lt;th&gt;Your time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Manual processing in Premiere&lt;/td&gt;
&lt;td&gt;$100-150 (at $50/hr)&lt;/td&gt;
&lt;td&gt;2-3 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Adobe Premiere batch export&lt;/td&gt;
&lt;td&gt;~$25 of time&lt;/td&gt;
&lt;td&gt;30 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RenderIO pipeline (Business)&lt;/td&gt;
&lt;td&gt;$0.38&lt;/td&gt;
&lt;td&gt;0 min&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;Start with a simpler pipeline and expand:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Week 1&lt;/strong&gt;: Post-processing only (1 API call per video)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 2&lt;/strong&gt;: Add 3 color variations (4 API calls per video)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 3&lt;/strong&gt;: Add platform formatting (16 API calls per video)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Week 4&lt;/strong&gt;: Add speed variations and full automation (76 API calls per video)&lt;/li&gt;
&lt;/ol&gt;
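&lt;p&gt;Each week's call count follows one formula: one post-processing call, one call per variation, and one formatting call per variation-platform pair. A quick sketch:&lt;/p&gt;

```python
def calls_per_video(variations: int, platforms: int) -> int:
    """Total API calls per base video:
    1 post-process + N variations + N x P platform formats."""
    return 1 + variations + variations * platforms

# The week-by-week rollout from the list above
rollout = [("Week 1", 0, 0), ("Week 2", 3, 0), ("Week 3", 3, 4), ("Week 4", 15, 4)]
for label, v, p in rollout:
    print(f"{label}: {calls_per_video(v, p)} API calls per video")
```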

&lt;p&gt;The Starter plan ($9/month, 500 commands) covers weeks 1-2 for most teams. Scale to Growth or Business as your volume increases. You can also &lt;a href="https://renderio.dev/blogs/ffmpeg-compress-video" rel="noopener noreferrer"&gt;compress video with FFmpeg&lt;/a&gt; to reduce storage costs before uploading.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How long does the full pipeline take to process one base video?
&lt;/h3&gt;

&lt;p&gt;Under 10 minutes for all 76 API calls. RenderIO processes commands in parallel on Cloudflare's edge network, so 15 variation calls finish in roughly the same time as one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use this pipeline with AI video tools other than HeyGen?
&lt;/h3&gt;

&lt;p&gt;Yes. The pipeline is tool-agnostic after stage 1. Any MP4 output works, whether it comes from HeyGen, Synthesia, D-ID, Runway, or even screen recordings. The post-processing and variation stages don't care how the video was generated.&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens if an API call fails mid-pipeline?
&lt;/h3&gt;

&lt;p&gt;Each command returns a status you can poll. Failed commands return an error with details. In an n8n workflow, add an error branch that retries failed calls up to 3 times with a 10-second delay between attempts.&lt;/p&gt;
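&lt;p&gt;Outside n8n, that retry policy is a few lines of Python. The &lt;code&gt;submit&lt;/code&gt; callable is a stand-in for your actual HTTP request:&lt;/p&gt;

```python
import time

def submit_with_retry(submit, payload, attempts=3, delay=10):
    """Call submit(payload); on failure, wait `delay` seconds and retry,
    up to `attempts` total tries. Re-raises the last error."""
    last_error = None
    for attempt in range(attempts):
        try:
            return submit(payload)
        except Exception as err:  # in practice, catch your HTTP client's error type
            last_error = err
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_error
```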

&lt;h3&gt;
  
  
  Do I need all 15 variations, or can I start with fewer?
&lt;/h3&gt;

&lt;p&gt;Start with 3 color variations and skip speed variations. That gives you 12 platform-ready files (3 variations x 4 platforms) from 16 API calls: 1 post-processing, 3 variation, and 12 formatting calls. Add speed variations once you're comfortable with the workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Which RenderIO plan fits a UGC pipeline?
&lt;/h3&gt;

&lt;p&gt;Depends on your volume. For 1-5 base videos per month, the Starter plan ($9/month, 500 commands) is enough. For 10-13 base videos, Growth ($29/month, 1,000 commands). For 50+ base videos, Business ($99/month, 20,000 commands).&lt;/p&gt;

</description>
      <category>ffmpeg</category>
      <category>video</category>
      <category>api</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Make AI-Generated Video Undetectable on TikTok</title>
      <dc:creator>RenderIO</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:16:17 +0000</pubDate>
      <link>https://dev.to/renderio/make-ai-generated-video-undetectable-on-tiktok-2cg3</link>
      <guid>https://dev.to/renderio/make-ai-generated-video-undetectable-on-tiktok-2cg3</guid>
      <description>&lt;h2&gt;
  
  
  How to make AI video undetectable on TikTok
&lt;/h2&gt;

&lt;p&gt;You generated a video with Runway, Kling, Pika, or Sora. It looks great. You upload it to TikTok. It gets suppressed or flagged.&lt;/p&gt;

&lt;p&gt;The problem is twofold: metadata fingerprints from the generation tool, and visual patterns that detection algorithms catch. Both are fixable with FFmpeg. This guide walks through each fix step by step, so you can make AI-generated video undetectable before uploading.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes AI video detectable
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Metadata fingerprints
&lt;/h3&gt;

&lt;p&gt;Every AI video tool embeds metadata in the output file. The encoder field contains the tool's name or rendering engine. Creation timestamps are typically UTC and batch-generated (a giveaway when you upload seconds after "recording"). Some tools add proprietary tags and custom metadata fields. C2PA Content Credentials — increasingly common in 2025-2026 — explicitly declare AI origin. And the EXIF data (resolution, color space, technical settings) often matches the tool's default output exactly, which is another signal.&lt;/p&gt;

&lt;p&gt;For a full walkthrough, see &lt;a href="https://renderio.dev/blogs/strip-video-metadata-ffmpeg" rel="noopener noreferrer"&gt;stripping video metadata with FFmpeg&lt;/a&gt;; that guide covers every metadata field and how to remove it.&lt;/p&gt;

&lt;p&gt;You can see this with ffprobe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffprobe &lt;span class="nt"&gt;-v&lt;/span&gt; quiet &lt;span class="nt"&gt;-print_format&lt;/span&gt; json &lt;span class="nt"&gt;-show_format&lt;/span&gt; input.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for fields like &lt;code&gt;encoder&lt;/code&gt;, &lt;code&gt;comment&lt;/code&gt;, &lt;code&gt;creation_time&lt;/code&gt;, and any tool-specific tags.&lt;/p&gt;
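&lt;p&gt;You can automate that check. A sketch that flags the common giveaway tags; the sample values are illustrative, and in practice you would pass in the &lt;code&gt;format.tags&lt;/code&gt; object from ffprobe's JSON output:&lt;/p&gt;

```python
# Tag names that commonly identify a generation tool or batch pipeline
SUSPECT_KEYS = {"encoder", "comment", "creation_time", "handler_name"}

def flag_metadata(format_tags: dict) -> list:
    """Return the tag names worth stripping before upload."""
    return sorted(k for k in format_tags if k.lower() in SUSPECT_KEYS)

# Illustrative ffprobe-style tags, not real tool output
sample = {"encoder": "Lavf60.3.100",
          "creation_time": "2026-01-01T00:00:00Z",
          "language": "und"}
print(flag_metadata(sample))  # → ['creation_time', 'encoder']
```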

&lt;h3&gt;
  
  
  Visual patterns
&lt;/h3&gt;

&lt;p&gt;AI video has characteristic patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consistent frame timing&lt;/strong&gt;: AI renders at exact intervals. Natural video has micro-variations in frame timing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uniform noise patterns&lt;/strong&gt;: AI-generated frames lack the random sensor noise present in camera footage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal consistency&lt;/strong&gt;: AI maintains unnaturally smooth motion in areas that real cameras would show compression artifacts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Color space&lt;/strong&gt;: Many AI tools output in a specific color space (often BT.709 with particular gamma curves).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Strip all metadata
&lt;/h2&gt;

&lt;p&gt;Remove every metadata field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-map_metadata&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; &lt;span class="nt"&gt;-fflags&lt;/span&gt; +bitexact &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 &lt;span class="nt"&gt;-c&lt;/span&gt;:a aac output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;-map_metadata -1&lt;/code&gt; removes all metadata containers. &lt;code&gt;-fflags +bitexact&lt;/code&gt; prevents FFmpeg from writing its own metadata.&lt;/p&gt;

&lt;p&gt;API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -map_metadata -1 -fflags +bitexact -c:v libx264 -crf 22 -c:a aac {{out_video}}",
    "input_files": { "in_video": "https://example.com/ai-video.mp4" },
    "output_files": { "out_video": "clean.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Add natural sensor noise
&lt;/h2&gt;

&lt;p&gt;Real cameras produce random noise from the image sensor. AI video is too clean. Add subtle noise:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"noise=alls=8:allf=t"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;alls=8&lt;/code&gt; adds noise at strength 8 across all color planes. &lt;code&gt;allf=t&lt;/code&gt; makes it temporal (varies per frame), mimicking real sensor behavior.&lt;/p&gt;

&lt;p&gt;The filter's noise is gaussian by default; add the &lt;code&gt;u&lt;/code&gt; flag for uniform noise, which reads closer to film grain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"noise=alls=6:allf=t+u"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Introduce frame timing variation
&lt;/h2&gt;

&lt;p&gt;AI video has perfectly consistent frame timing. Real video from phones has slight jitter. Add micro-variations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"setpts=PTS+random(0)*0.001"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds up to 1ms of random timing variation per frame. Invisible during playback but breaks the perfect timing pattern.&lt;/p&gt;
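&lt;p&gt;One subtlety: &lt;code&gt;setpts&lt;/code&gt; expressions are evaluated in stream timebase (&lt;code&gt;TB&lt;/code&gt;) units rather than seconds, so a wall-clock offset needs a &lt;code&gt;/TB&lt;/code&gt; divisor inside the filter expression. A quick conversion check, assuming the common 90 kHz MP4 timebase:&lt;/p&gt;

```python
def ms_to_pts_ticks(ms: float, timebase: float = 1 / 90_000) -> float:
    """Convert a wall-clock offset to setpts ticks; this division is
    what the /TB term does inside the filter expression."""
    return (ms / 1000) / timebase

print(ms_to_pts_ticks(1))  # 1 ms is ~90 ticks at a 1/90000 timebase
```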

&lt;h2&gt;
  
  
  Step 4: Re-encode to match phone camera output
&lt;/h2&gt;

&lt;p&gt;TikTok expects video from phones. Match the encoding characteristics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-profile&lt;/span&gt;:v high &lt;span class="nt"&gt;-level&lt;/span&gt;:v 4.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-crf&lt;/span&gt; 23 &lt;span class="nt"&gt;-preset&lt;/span&gt; medium &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-pix_fmt&lt;/span&gt; yuv420p &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt;:a aac &lt;span class="nt"&gt;-b&lt;/span&gt;:a 128k &lt;span class="nt"&gt;-ar&lt;/span&gt; 44100 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-movflags&lt;/span&gt; +faststart &lt;span class="se"&gt;\&lt;/span&gt;
  output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matches the H.264 High Profile Level 4.0 output that modern phones produce. &lt;code&gt;yuv420p&lt;/code&gt; is the standard pixel format. &lt;code&gt;-movflags +faststart&lt;/code&gt; moves the moov index to the front of the file so playback can start before the full download finishes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Crop to remove AI artifacts
&lt;/h2&gt;

&lt;p&gt;AI videos often have subtle artifacts at frame edges (blurring, warping, or inconsistent generation). Crop a few pixels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"crop=iw-8:ih-8:4:4"&lt;/span&gt; output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This removes 4 pixels from each edge. Eliminates edge artifacts and changes the perceptual hash.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Adjust color space
&lt;/h2&gt;

&lt;p&gt;Runway and Sora output in BT.709 with a specific gamma curve (usually 2.2 or sRGB transfer). Kling defaults to BT.709 but with flatter gamma that gives a slightly washed-out look. Pika's output varies by model version. The point is: each tool has a default color profile that detection systems can fingerprint. Shift it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"eq=brightness=0.02:contrast=1.02:saturation=1.03:gamma=1.01"&lt;/span&gt; output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Slight brightness, contrast, saturation, and gamma adjustments. These shift the color profile away from the AI tool's default output.&lt;/p&gt;

&lt;h2&gt;
  
  
  The complete naturalizer command
&lt;/h2&gt;

&lt;p&gt;Combine all steps into one FFmpeg command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; ai_video.mp4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"crop=iw-6:ih-6:3:3,noise=alls=6:allf=t,eq=brightness=0.015:saturation=1.02,hue=h=1"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-af&lt;/span&gt; &lt;span class="s2"&gt;"asetrate=44100*1.003,aresample=44100"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-profile&lt;/span&gt;:v high &lt;span class="nt"&gt;-level&lt;/span&gt;:v 4.0 &lt;span class="nt"&gt;-crf&lt;/span&gt; 23 &lt;span class="nt"&gt;-preset&lt;/span&gt; medium &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-pix_fmt&lt;/span&gt; yuv420p &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt;:a aac &lt;span class="nt"&gt;-b&lt;/span&gt;:a 128k &lt;span class="nt"&gt;-ar&lt;/span&gt; 44100 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-map_metadata&lt;/span&gt; &lt;span class="nt"&gt;-1&lt;/span&gt; &lt;span class="nt"&gt;-fflags&lt;/span&gt; +bitexact &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-movflags&lt;/span&gt; +faststart &lt;span class="se"&gt;\&lt;/span&gt;
  output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Crops edges (removes AI artifacts, changes pHash)&lt;/li&gt;
&lt;li&gt;Adds sensor noise (naturalizes the image)&lt;/li&gt;
&lt;li&gt;Shifts brightness and color (moves away from AI defaults)&lt;/li&gt;
&lt;li&gt;Shifts audio pitch slightly (alters audio fingerprint)&lt;/li&gt;
&lt;li&gt;Encodes to phone-camera-like specs&lt;/li&gt;
&lt;li&gt;Strips all metadata&lt;/li&gt;
&lt;li&gt;Optimizes for mobile playback&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://renderio.dev/api/v1/run-ffmpeg-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"X-API-KEY: your_api_key"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "ffmpeg_command": "-i {{in_video}} -vf \"crop=iw-6:ih-6:3:3,noise=alls=6:allf=t,eq=brightness=0.015:saturation=1.02,hue=h=1\" -af \"asetrate=44100*1.003,aresample=44100\" -c:v libx264 -profile:v high -level:v 4.0 -crf 23 -preset medium -pix_fmt yuv420p -c:a aac -b:a 128k -ar 44100 -map_metadata -1 -fflags +bitexact -movflags +faststart {{out_video}}",
    "input_files": { "in_video": "https://example.com/ai-video.mp4" },
    "output_files": { "out_video": "naturalized.mp4" }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Tool-specific considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Runway Gen-3/Gen-4
&lt;/h3&gt;

&lt;p&gt;Runway writes multiple custom metadata fields: &lt;code&gt;encoder&lt;/code&gt;, &lt;code&gt;handler_name&lt;/code&gt;, and sometimes a &lt;code&gt;comment&lt;/code&gt; field with generation parameters. The &lt;code&gt;-map_metadata -1 -fflags +bitexact&lt;/code&gt; flags strip all of these.&lt;/p&gt;

&lt;p&gt;Runway's color profile tends toward high saturation with punchy contrast. The naturalizer command's brightness and saturation shifts already handle this, but if your video still looks "too clean," add a slight gamma adjustment: &lt;code&gt;gamma=0.98&lt;/code&gt; in the eq filter.&lt;/p&gt;

&lt;p&gt;Runway Gen-4 outputs at exactly 24fps with zero frame timing variation. Real phone cameras shoot at 29.97 or 30fps with slight jitter. Re-encode at 30fps with the timing variation from Step 3.&lt;/p&gt;
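&lt;p&gt;As a sketch, that 24-to-30fps re-encode assembled as an argument list (filenames are placeholders; the jitter term divides by &lt;code&gt;TB&lt;/code&gt; because &lt;code&gt;setpts&lt;/code&gt; works in timebase units):&lt;/p&gt;

```python
def runway_refps_args(src: str, dst: str, fps: int = 30) -> list:
    """Build an ffmpeg argument list: resample Runway's constant 24fps
    to `fps`, then add the micro-jitter from Step 3."""
    vf = f"fps={fps},setpts=PTS+random(0)*0.001/TB"
    return ["ffmpeg", "-i", src, "-vf", vf,
            "-c:v", "libx264", "-crf", "22", "-c:a", "aac", dst]

print(" ".join(runway_refps_args("runway.mp4", "out.mp4")))
```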

&lt;h3&gt;
  
  
  Kling AI
&lt;/h3&gt;

&lt;p&gt;Kling has a known issue with temporal inconsistencies at scene transitions — frames sometimes stutter or repeat. The noise filter masks these, but also check for it visually before uploading. A single repeated frame is a dead giveaway to human reviewers.&lt;/p&gt;

&lt;p&gt;Kling may embed watermarks depending on your subscription tier. Check the bottom-right corner of the frame. If present, crop by 20-30 pixels from the bottom edge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; kling_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"crop=iw:ih-30:0:0"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kling's audio tracks are often silent or contain synthesized ambient noise at suspiciously consistent levels. If your video has audio, verify it sounds natural or replace it entirely.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sora
&lt;/h3&gt;

&lt;p&gt;Sora produces some of the smoothest AI video on the market, which is actually a problem. Real video has micro-jitter, slight focus shifts, and compression artifacts from the camera sensor. Sora has none of that.&lt;/p&gt;

&lt;p&gt;Beyond the noise and timing variation from the naturalizer command, consider adding a slight speed fluctuation. Slowing the video by 2% introduces natural-feeling drag (if the clip has audio, add &lt;code&gt;-af "atempo=0.98"&lt;/code&gt; so the audio track slows to match):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; sora_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"setpts=PTS*1.02,noise=alls=7:allf=t"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sora also outputs with specific C2PA Content Credentials that explicitly declare AI generation. The metadata strip handles this, but double-check with ffprobe after processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pika Labs
&lt;/h3&gt;

&lt;p&gt;Pika's free tier adds a visible watermark in the lower-right corner. Crop it or cover it with your own overlay before running the naturalizer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ffmpeg &lt;span class="nt"&gt;-i&lt;/span&gt; pika_video.mp4 &lt;span class="nt"&gt;-vf&lt;/span&gt; &lt;span class="s2"&gt;"crop=iw-40:ih-40:0:0"&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt;:v libx264 &lt;span class="nt"&gt;-crf&lt;/span&gt; 22 output.mp4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pika's output resolution varies by model version (sometimes 576p, sometimes 720p, sometimes 1080p). If you're uploading to TikTok, resize to 1080x1920 after naturalizing. A non-standard resolution is a subtle signal.&lt;/p&gt;

&lt;p&gt;For more on cleaning up AI artifacts specifically, see &lt;a href="https://renderio.dev/blogs/remove-ai-artifacts-from-video" rel="noopener noreferrer"&gt;remove AI artifacts from video&lt;/a&gt; and &lt;a href="https://renderio.dev/blogs/make-ai-video-look-natural" rel="noopener noreferrer"&gt;make AI video look natural&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Batch processing AI videos
&lt;/h2&gt;

&lt;p&gt;For content operations generating many AI videos:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;API_KEY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ffsk_your_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;HEADERS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Content-Type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X-API-KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;API_KEY&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;videos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://example.com/ai-video-1.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://example.com/ai-video-2.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://example.com/ai-video-3.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;videos&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;noise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;brightness&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.01&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.005&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://renderio.dev/api/v1/run-ffmpeg-command&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;HEADERS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ffmpeg_command&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-i {{in_video}} -vf &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;crop=iw-6:ih-6:3:3,noise=alls=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;noise&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:allf=t,eq=brightness=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;brightness&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; -c:v libx264 -crf 23 -map_metadata -1 -fflags +bitexact {{out_video}}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_files&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;in_video&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_files&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;out_video&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;natural_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.mp4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Video &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;command_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;After processing, check four things:&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;ffprobe -v quiet -print_format json -show_format output.mp4&lt;/code&gt; and confirm no tool-specific metadata fields remain. Look for &lt;code&gt;encoder&lt;/code&gt;, &lt;code&gt;comment&lt;/code&gt;, &lt;code&gt;creation_time&lt;/code&gt;, and any custom tags. If any are present, your metadata strip didn't work.&lt;/p&gt;
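
&lt;p&gt;That check is easy to script. A minimal sketch, assuming &lt;code&gt;ffprobe&lt;/code&gt; is on your PATH; the tag list is illustrative, not exhaustive:&lt;/p&gt;

```python
import json
import subprocess

# Tags that suggest the metadata strip missed something. Illustrative list only.
SUSPECT_TAGS = {"encoder", "comment", "creation_time"}

def suspect_tags(ffprobe_json):
    """Return suspect tag names found in ffprobe -show_format JSON output."""
    tags = json.loads(ffprobe_json).get("format", {}).get("tags", {})
    return {name for name in tags if name.lower() in SUSPECT_TAGS}

def probe(path):
    """Run ffprobe (must be installed) and return its JSON output as a string."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

&lt;p&gt;Feed &lt;code&gt;probe("output.mp4")&lt;/code&gt; into &lt;code&gt;suspect_tags&lt;/code&gt;; an empty set means the strip held.&lt;/p&gt;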

&lt;p&gt;View the video at 200% zoom. You should see subtle grain from the noise filter. If the image is perfectly clean, the noise wasn't applied.&lt;/p&gt;

&lt;p&gt;Check file size. A 30-second 1080p video from a phone is typically 30-80MB. If your output is 5MB or 200MB, something's off with the encoding settings.&lt;/p&gt;
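
&lt;p&gt;The size check reduces to a bytes-per-second band. A rough sketch built from the 30-80MB-per-30-seconds figure above; the band is a heuristic for 1080p phone-style footage, not a spec:&lt;/p&gt;

```python
MB = 1024 * 1024

# Band derived from the rule of thumb above: 30-80 MB for a 30-second clip.
LOW_RATE = 30 * MB / 30.0    # bytes per second
HIGH_RATE = 80 * MB / 30.0

def size_in_band(size_bytes, duration_s):
    """True when the file's byte rate falls inside the expected 1080p band."""
    rate = size_bytes / duration_s
    # rate is inside the band exactly when clamping it to the band changes nothing
    return max(LOW_RATE, min(rate, HIGH_RATE)) == rate
```

&lt;p&gt;A 5MB or 200MB result for a 30-second clip fails this check, which is your cue to revisit the CRF and codec settings.&lt;/p&gt;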

&lt;p&gt;Play the audio back. A 0.3-0.5% pitch shift is inaudible. Above 1%, you'll hear it. If the audio sounds slightly chipmunked, dial back the pitch multiplier.&lt;/p&gt;

&lt;p&gt;If you're posting the same AI video to multiple accounts, you'll also need to &lt;a href="https://renderio.dev/blogs/avoid-tiktok-duplicate-detection-at-scale" rel="noopener noreferrer"&gt;avoid TikTok duplicate detection at scale&lt;/a&gt; by generating unique variations. For deeper metadata removal, the &lt;a href="https://renderio.dev/blogs/remove-ai-metadata-from-video" rel="noopener noreferrer"&gt;remove AI metadata from video&lt;/a&gt; guide covers edge cases that the basic strip misses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;The Starter plan at $9/mo includes 500 commands, enough to process 10-15 AI-generated clips per day. Explore the &lt;a href="https://dev.to/ffmpeg-api"&gt;FFmpeg video API&lt;/a&gt; or &lt;a href="https://dev.to/get-api-key"&gt;get your API key&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Does TikTok actually detect AI-generated video?
&lt;/h3&gt;

&lt;p&gt;Yes, and it's getting better at it. TikTok uses a combination of metadata analysis, perceptual fingerprinting, and (increasingly) visual pattern detection. C2PA Content Credentials are the most obvious signal. Tools like Runway and Sora now embed these by default. Metadata stripping handles C2PA. The visual patterns are harder to detect algorithmically, but TikTok is investing in it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Will these techniques work on Instagram and YouTube too?
&lt;/h3&gt;

&lt;p&gt;The same principles apply. Instagram uses similar fingerprinting for Reels. YouTube has its own content detection system (Content ID) but it's focused on copyright, not AI detection, at least for now. The metadata strip and noise addition work across all platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it legal to remove AI metadata from videos?
&lt;/h3&gt;

&lt;p&gt;Removing metadata is legal in most jurisdictions. However, some regions are implementing AI disclosure requirements (the EU AI Act, for example). Removing C2PA markers to avoid disclosure could have legal implications depending on how you use the video. This guide covers the technical steps; consult local regulations for compliance.&lt;/p&gt;

&lt;h3&gt;
  
  
  How much noise should I add without making the video look bad?
&lt;/h3&gt;

&lt;p&gt;A noise strength of 5-8 is the usable range. Below 5, the noise is too subtle to fool detection; above 10, it's visible on mobile screens. For high-quality AI video (Sora, Runway Gen-4), start at 6. Lower-quality sources (Pika free tier, older Kling models) already have imperfections of their own, so start at 4.&lt;/p&gt;
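
&lt;p&gt;Those starting points can live in a small lookup so a batch pipeline applies them consistently. A hypothetical helper; the tier names are my own labels, not values the API expects:&lt;/p&gt;

```python
# Starting noise=alls= values from the guidance above; tune per batch.
STARTING_NOISE = {
    "high": 6,  # e.g. Sora, Runway Gen-4
    "low": 4,   # e.g. Pika free tier, older Kling models
}

def starting_noise(source_tier):
    """Return a starting noise strength; default to mid-range for unknown tiers."""
    return STARTING_NOISE.get(source_tier, 5)
```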

&lt;h3&gt;
  
  
  Do I need to process audio separately?
&lt;/h3&gt;

&lt;p&gt;Not usually. The naturalizer command shifts audio pitch as part of the combined pipeline. If your AI video has no audio (many AI tools generate silent video), add a natural ambient track or keep it silent. TikTok doesn't flag silent videos specifically. If you're adding music, that replaces the audio fingerprint entirely.&lt;/p&gt;

</description>
      <category>ffmpeg</category>
      <category>video</category>
      <category>api</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
