DEV Community

Ronn Huang
How to Generate AI Videos with Seedance API (JavaScript & Python Examples)

ByteDance's Seedance 2.0 has been getting a lot of attention lately — 2K resolution, up to 15 seconds, native audio generation, and arguably the best motion quality in the current AI video landscape.

But most of the documentation is in Chinese, and figuring out how to actually use it programmatically can be frustrating. I spent a few weeks building tools around Seedance and want to share what I learned about the API integration.

Your Options

There are three ways to access Seedance via API:

| Platform | Models Available | Pricing | Auth |
| --- | --- | --- | --- |
| fal.ai | 1.0 (all variants), 1.5 Pro | ~$0.01-0.05/sec | API key |
| Replicate | 1.0 Pro, 1.5 Pro | ~$0.01-0.05/sec | API token |
| Volcano Engine | All versions incl. 2.0 | Credits-based | Chinese phone required |

For most developers outside China, fal.ai or Replicate are the practical choices. Let's start with fal.ai.

Option 1: fal.ai

Setup

npm install @fal-ai/client

Set your API key:

export FAL_KEY="your-api-key-here"

Text-to-Video

import { fal } from "@fal-ai/client";

const result = await fal.subscribe(
  "fal-ai/seedance-v1-pro-t2v",
  {
    input: {
      prompt: "A golden retriever running through a sunlit meadow, slow motion, cinematic lighting, shallow depth of field",
      negative_prompt: "blurry, distorted, low quality",
      num_frames: 120,
      guidance_scale: 7.5,
      seed: 42,
    },
    logs: true,
    onQueueUpdate: (update) => {
      if (update.status === "IN_PROGRESS") {
        console.log(update.logs?.map(l => l.message));
      }
    },
  }
);

console.log(result.data.video.url);

Image-to-Video

This is where Seedance really shines. You provide a reference image and it generates a video that maintains the visual style:

const result = await fal.subscribe(
  "fal-ai/seedance-v1-pro-i2v",
  {
    input: {
      prompt: "The woman turns her head slowly and smiles, wind blowing through her hair",
      image_url: "https://example.com/portrait.jpg",
      num_frames: 120,
      guidance_scale: 7.5,
    },
  }
);

console.log(result.data.video.url);
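The `image_url` above is a public URL. If your reference image is only on disk, one common workaround is to inline it as a base64 data URI (check whether the endpoint you're using accepts data URIs — fal.ai's clients also offer a dedicated file upload). A minimal sketch of the encoding step:

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    """Encode a local JPEG/PNG file as a data URI (data:image/png;base64,...)."""
    mime, _ = mimetypes.guess_type(path)
    if mime not in ("image/jpeg", "image/png"):
        raise ValueError(f"unsupported image type: {mime}")
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{payload}"
```

You'd then pass the returned string wherever the API expects `image_url`.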

Option 2: Replicate

Setup

pip install replicate
export REPLICATE_API_TOKEN="your-token-here"

Python Example

import replicate

output = replicate.run(
    "bytedance/seedance-v1-pro-t2v",
    input={
        "prompt": "Aerial drone shot of a coastal city at sunset, golden hour lighting, waves crashing against cliffs, 4K cinematic",
        "negative_prompt": "blurry, distorted",
        "num_frames": 120,
        "guidance_scale": 7.5,
        "seed": 42,
    }
)

print(output)

cURL (for any language)

curl -s -X POST "https://api.replicate.com/v1/predictions" \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "version": "bytedance/seedance-v1-pro-t2v",
    "input": {
      "prompt": "A cup of coffee with steam rising, macro shot, warm lighting",
      "num_frames": 120,
      "guidance_scale": 7.5
    }
  }'
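Note that the cURL call only *creates* the prediction — Replicate returns a prediction object whose `status` you then poll (or you supply a webhook URL). Here's a polling sketch; the `fetch` callable is injected so the loop is easy to test, and in real use it would GET `https://api.replicate.com/v1/predictions/{id}` with your token and return the parsed JSON:

```python
import time

def wait_for_prediction(fetch, prediction_id, poll_interval=2.0, timeout=600.0):
    """Poll until the prediction reaches a terminal status.

    `fetch(prediction_id)` should return a dict shaped like Replicate's
    prediction JSON, e.g. {"status": "processing"} or
    {"status": "succeeded", "output": "..."}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pred = fetch(prediction_id)
        if pred["status"] == "succeeded":
            return pred["output"]
        if pred["status"] in ("failed", "canceled"):
            raise RuntimeError(
                f"prediction {prediction_id} {pred['status']}: {pred.get('error')}"
            )
        time.sleep(poll_interval)
    raise TimeoutError(f"prediction {prediction_id} still running after {timeout}s")
```

For production workloads a webhook is cheaper than polling, but the loop above is fine for scripts.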

Prompt Engineering Tips

After generating hundreds of videos, here's what actually matters:

1. Be specific about motion

# Bad
"A cat sitting on a table"

# Good
"A tabby cat slowly stretches on a wooden table, then turns its head toward the camera, soft afternoon light from a window"

2. Specify camera movement

Seedance handles camera instructions well:

"Slow dolly-in on a vintage typewriter, shallow depth of field, the keys begin to press themselves"

3. Use the @reference system for image-to-video

When using i2v mode, reference tags give you precise control:

"@Image1 The person in the photo begins to walk forward, camera follows with a steady tracking shot"

4. Negative prompts matter

Always include: "blurry, distorted, low quality, watermark, text overlay"
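The four tips above can be folded into a small helper that assembles a prompt from its parts and always attaches the boilerplate negative prompt. The function and field names here are my own convention, not part of any SDK:

```python
DEFAULT_NEGATIVE = "blurry, distorted, low quality, watermark, text overlay"

def build_prompt(subject, motion, camera=None, lighting=None, negative=DEFAULT_NEGATIVE):
    """Compose a Seedance-style prompt: specific subject + explicit motion,
    plus optional camera-movement and lighting clauses."""
    parts = [subject, motion]
    if camera:
        parts.append(camera)
    if lighting:
        parts.append(lighting)
    return {"prompt": ", ".join(parts), "negative_prompt": negative}
```

Usage: `build_prompt("A tabby cat on a wooden table", "slowly stretches, then turns its head toward the camera", camera="slow dolly-in", lighting="soft afternoon light from a window")` gives you a dict you can splat straight into the `input` of either API.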

Model Comparison: Which Variant to Use?

| Model | Best For | Speed | Quality |
| --- | --- | --- | --- |
| 1.0 Lite | Quick prototypes | Fast | Good |
| 1.0 Pro | Production quality | Slow | Excellent |
| 1.5 Pro | Latest features | Medium | Excellent |
| 1.0 Fast | Batch processing | Fastest | Good |

For most use cases, 1.0 Pro gives the best quality-to-cost ratio. Use 1.0 Fast when you're iterating on prompts and need quick feedback.

Cost Estimation

A rough formula:

cost ≈ duration_seconds × $0.03 (fal.ai average)

So a 5-second video costs roughly $0.15, and a 10-second video about $0.30. That's significantly cheaper than Sora 2, which requires a $200/mo ChatGPT Pro subscription.
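The formula as a tiny helper — keep in mind the $0.03/sec figure is a rough fal.ai average, not a quoted price, and actual rates vary by model variant:

```python
def estimate_cost(duration_seconds, rate_per_second=0.03):
    """Rough cost estimate: duration × average per-second rate (fal.ai)."""
    return round(duration_seconds * rate_per_second, 2)
```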

Common Errors and Fixes

NSFW content detected — Seedance has content filters. Rephrase your prompt to avoid triggering them.

Timeout after 300s — Pro models can take 3-5 minutes for longer videos. Increase your timeout or use the async/webhook pattern.

Invalid image format — For i2v, use JPEG or PNG. WebP sometimes causes issues. Image should be at least 512x512.
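For the timeout case specifically, a generic retry wrapper with exponential backoff works with any of the clients above. This is pure Python with no SDK assumptions — `generate` stands in for whatever call you're making, and it assumes that call raises `TimeoutError` (wrap your client's own timeout exception if it differs):

```python
import time

def with_retries(generate, attempts=3, base_delay=5.0):
    """Call `generate()` up to `attempts` times, backing off exponentially
    on TimeoutError (Pro models can take 3-5 minutes for longer videos)."""
    for attempt in range(attempts):
        try:
            return generate()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```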

Resources

I built a free resource site that covers all of this in more detail; it's open source: github.com/hueshadow/seedance


What's your experience with Seedance? Have you found any interesting use cases? Drop a comment below.
