DEV Community

EvoLink

Posted on • Originally published at seedance2api.app

Seedance 2.0 API Tutorial: From Zero to Your First AI Video (Python)

Seedance 2.0 is ByteDance's most advanced AI video model — multimodal references, native audio, cinematic camera control, and 4–15 second generation at up to 1080p. This tutorial walks you through the entire API workflow in Python: from getting your API key to downloading your first generated video.

By the end, you'll have working code for text-to-video, image-to-video, async polling, webhook handling, and error recovery. Every code example here was tested against a live API.

Note — Seedance 2.0 vs 1.5: Seedance 2.0 is rolling out progressively. You can test the complete workflow right now using seedance-1.5-pro — when 2.0 is fully available, just change the model name. All endpoints, parameters, and response formats are identical. The key differences in 2.0: multimodal references (mix images, videos, and audio as inputs), native audio generation, improved physics simulation, and video editing capabilities. Everything in this tutorial works with both versions.

Get your free API key to follow along.


What You'll Build (and What You Need)

Here's what a Seedance-generated video looks like — created with a single API call:

A girl reaches for a leather-bound book in a grand library. Generated by Seedance with a single text prompt.

In this tutorial, you'll write Python code that:

  1. Sends a text prompt → gets back a generated video
  2. Sends an image → animates it into a video
  3. Polls for results asynchronously
  4. Handles errors and retries like production code
  5. Receives results via webhook (no polling needed)
  6. Cancels in-progress tasks when needed

Prerequisites

  • Python 3.8+ (check with python3 --version)
  • requests library (pip install requests)
  • An EvoLink API key (free to sign up — we'll get this in the next section)

No GPU, no Docker, no complex setup. Just Python and an API key.

Pro Tip: If you're building a production app, consider using a virtual environment to isolate dependencies:

python3 -m venv seedance-env
source seedance-env/bin/activate  # macOS/Linux
seedance-env\Scripts\activate     # Windows
pip install requests flask

Get Your API Key

Seedance 2.0 is available through EvoLink, an API gateway that provides unified access to multiple AI video models — including Seedance 2.0, Kling, and others — through a single API key.

Here's how to get started:

  1. Go to evolink.ai/early-access and create an account
  2. Navigate to Dashboard → API Keys
  3. Click Create New Key
  4. Copy your key — it starts with sk-

Store your key securely. Don't commit it to version control. We'll use an environment variable:

export EVOLINK_API_KEY="sk-your-api-key-here"

This line sets the EVOLINK_API_KEY environment variable in your terminal session. On macOS/Linux, add it to your ~/.bashrc or ~/.zshrc to persist across sessions. On Windows, use set EVOLINK_API_KEY=sk-your-api-key-here in Command Prompt, or set it in System Properties → Environment Variables for persistence.

Your account includes starter credits to experiment with. Check the Getting Started docs for current pricing details.

Common Mistake: Don't hardcode your API key in source files. If you push it to GitHub, automated scrapers will find it within minutes. Always use environment variables or a secrets manager like AWS Secrets Manager or HashiCorp Vault.
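A quick guard at startup catches both a missing variable and a mis-copied key. This is a sketch; the sk- prefix check follows the key format described above:

```python
import os

def require_api_key():
    """Fail fast if EVOLINK_API_KEY is missing or malformed."""
    key = os.getenv("EVOLINK_API_KEY", "").strip()  # strip guards against copied whitespace
    if not key:
        raise RuntimeError("EVOLINK_API_KEY is not set; export it before running.")
    if not key.startswith("sk-"):
        raise RuntimeError("EVOLINK_API_KEY should start with 'sk-'; check the copied value.")
    return key
```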


Set Up Your Python Environment

Install the one dependency you need:

pip install requests

Create a file called seedance_tutorial.py and add this setup code. Every example in this tutorial builds on this foundation:

import requests
import time
import os
import json

# ── Configuration ─────────────────────────────────────────────
API_KEY = os.getenv("EVOLINK_API_KEY", "sk-your-api-key-here")
BASE_URL = "https://api.evolink.ai/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

Let's break this down line by line:

  • os.getenv("EVOLINK_API_KEY", "sk-your-api-key-here") — Reads the API key from the environment variable. The second argument is a fallback default (replace this with your actual key only for local testing).
  • BASE_URL — The root URL for all EvoLink API endpoints. All requests go to https://api.evolink.ai/v1/....
  • HEADERS — Two headers sent with every request: Authorization carries your API key using the Bearer token scheme, and Content-Type tells the server we're sending JSON.

Now add the reusable helper functions:

# ── Reusable Polling Helper ───────────────────────────────────
def wait_for_video(task_id, poll_interval=10, timeout=600):
    """
    Poll a video generation task until it completes or fails.

    Args:
        task_id: The task ID returned by the generation endpoint.
        poll_interval: Seconds between polls (default 10).
        timeout: Maximum wait time in seconds (default 600).

    Returns:
        dict: The completed task response with video URLs.

    Raises:
        TimeoutError: If the task doesn't complete within the timeout.
        RuntimeError: If the task fails.
    """
    elapsed = 0
    while elapsed < timeout:
        # Send GET request to check the task's current status
        response = requests.get(
            f"{BASE_URL}/tasks/{task_id}",
            headers=HEADERS
        )
        # Raise an exception if the HTTP status code indicates an error
        response.raise_for_status()
        task = response.json()

        # Extract status and progress from the response
        status = task["status"]
        progress = task.get("progress", 0)
        print(f"  [{elapsed}s] Status: {status} | Progress: {progress}%")

        # Check terminal states
        if status == "completed":
            return task
        elif status == "failed":
            error_info = task.get("error", {})
            raise RuntimeError(
                f"Task {task_id} failed: {error_info.get('message', 'Unknown error')}"
            )

        # Wait before the next poll
        time.sleep(poll_interval)
        elapsed += poll_interval

    raise TimeoutError(f"Task {task_id} timed out after {timeout}s")

Key design decisions in this function:

  • poll_interval=10 — 10 seconds is the sweet spot. Polling faster risks rate limits without returning results any sooner; polling slower delays your workflow.
  • timeout=600 — 10 minutes is generous. Most videos complete in 30–120 seconds, but this covers edge cases like queue congestion.
  • response.raise_for_status() — Converts HTTP errors (4xx/5xx) into Python exceptions so they don't silently pass.
  • Progress printing — The [elapsed]s prefix helps you correlate timing. Useful for debugging slow generations.

Next, add a helper that downloads the finished video to disk:
# ── Helper: Download Video ────────────────────────────────────
def download_video(url, filename="output.mp4"):
    """Download a video file from a URL."""
    print(f"Downloading video to {filename}...")
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(filename, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
    print(f"Saved: {filename} ({os.path.getsize(filename) / 1024:.0f} KB)")

This function streams the download in 8 KB chunks instead of loading the entire video into memory. This matters — generated videos can be 10–50 MB. The stream=True parameter tells requests to download incrementally.

These three pieces — configuration, polling, and download — are the foundation. Every code example below uses them. We won't repeat them — just the new payload each time.

For the full API reference, see the Video Generation docs.


Generate Your First Video (Text-to-Video)

Time to generate a video. Add this to your script:

# ── Text-to-Video ─────────────────────────────────────────────
def text_to_video():
    payload = {
        "model": "seedance-2.0",          # The AI model to use
        "prompt": (
            "A golden retriever puppy chases a butterfly through "
            "a sunlit meadow. The camera follows the puppy with a "
            "smooth tracking shot as wildflowers sway in the breeze."
        ),
        "duration": 5,                     # Video length: 4-15 seconds
        "quality": "720p",                 # Resolution: 480p, 720p, 1080p
        "aspect_ratio": "16:9",            # Standard widescreen
        "generate_audio": True             # AI generates matching audio
    }

    print("Submitting text-to-video request...")
    response = requests.post(
        f"{BASE_URL}/videos/generations",  # The video generation endpoint
        headers=HEADERS,                   # Auth + content-type headers
        json=payload                       # Automatically serializes to JSON
    )
    response.raise_for_status()            # Throw if not 200 OK
    task = response.json()                 # Parse the JSON response

    # Log key info from the response
    print(f"Task created: {task['id']}")
    print(f"Estimated time: {task['task_info']['estimated_time']}s")
    print(f"Credits reserved: {task['usage']['credits_reserved']}")

    # Poll until the video is ready
    result = wait_for_video(task["id"])

    # The results array contains one or more video URLs
    video_url = result["results"][0]
    print(f"\nVideo URL: {video_url}")
    download_video(video_url, "my_first_video.mp4")

    return result


if __name__ == "__main__":
    text_to_video()

Let's walk through each parameter in the payload:

  • model — Which Seedance model to use. Set seedance-2.0 for the latest; use seedance-1.5-pro if 2.0 isn't available in your region yet.
  • prompt — Your video description. Be specific about subject, action, camera movement, and mood. The prompt above uses a three-part structure: subject ("golden retriever puppy"), action ("chases a butterfly"), and camera ("smooth tracking shot"). For advanced prompt techniques, see our Prompt Engineering Guide.
  • duration — Video length in seconds (4–15). Shorter videos generate faster and cost fewer credits. Start with 5 for testing.
  • quality — Resolution tier. 720p is the best balance of quality and speed for development. Use 480p for rapid iteration, 1080p for final renders.
  • aspect_ratio — Output dimensions. 16:9 for YouTube/landscape, 9:16 for TikTok/Reels/Shorts, 1:1 for Instagram feed.
  • generate_audio — When true, Seedance generates ambient sound and music that matches the visual content. Adds ~2 seconds to generation time.

Run it:

python seedance_tutorial.py

What the API Returns

When you submit a generation request, you get back a task object immediately — the video isn't ready yet. Here's the actual response:

{
  "created": 1772203771,
  "id": "task-unified-1772203771-yf1dxogh",
  "model": "seedance-2.0",
  "object": "video.generation.task",
  "progress": 0,
  "status": "pending",
  "task_info": {
    "can_cancel": true,
    "estimated_time": 132
  },
  "type": "video",
  "usage": {
    "billing_rule": "per_second",
    "credits_reserved": 17.784,
    "user_group": "default"
  }
}

Key fields explained:

| Field | Meaning |
| --- | --- |
| id | Your task ID — use this to check status and retrieve results |
| status | Starts as pending, moves to processing, then completed or failed |
| progress | 0–100 percentage. Updates in real-time during processing |
| estimated_time | Approximate seconds until completion (server-side estimate) |
| credits_reserved | Credits held for this job. Refunded automatically if the task fails |
| task_info.can_cancel | Whether you can cancel this task (always true before completion) |
| created | Unix timestamp of when the task was submitted |
| usage.billing_rule | How credits are calculated — per_second means cost scales with duration |
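Pulling those fields out in code, using the sample response shown above:

```python
# The task object returned by the generation endpoint (sample from above).
task = {
    "id": "task-unified-1772203771-yf1dxogh",
    "status": "pending",
    "progress": 0,
    "task_info": {"can_cancel": True, "estimated_time": 132},
    "usage": {"credits_reserved": 17.784},
}

task_id = task["id"]                          # needed for all later status checks
eta = task["task_info"]["estimated_time"]     # server-side estimate, in seconds
reserved = task["usage"]["credits_reserved"]  # refunded automatically on failure
print(f"{task_id}: ~{eta}s, {reserved} credits reserved")
```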

Pro Tip: Save the id to a file or database immediately after submission. If your script crashes during polling, you can resume by calling wait_for_video() with the saved task ID. Tasks persist on the server for 24 hours.
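A minimal version of that checkpoint pattern might look like this (the `pending_task.json` filename is just an example):

```python
import json
import os

TASK_FILE = "pending_task.json"  # example checkpoint path

def save_task_id(task_id, path=TASK_FILE):
    # Persist the task ID right after submission, before polling starts.
    with open(path, "w") as f:
        json.dump({"task_id": task_id}, f)

def load_task_id(path=TASK_FILE):
    # Return the saved task ID, or None if there is no checkpoint.
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)["task_id"]
```

After a crash, `load_task_id()` gives you the ID to pass straight back into `wait_for_video()`.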

The Polling Sequence

The wait_for_video() function polls every 10 seconds. Here's what the real output looks like:

Submitting text-to-video request...
Task created: task-unified-1772203771-yf1dxogh
Estimated time: 132s
Credits reserved: 17.784
  [0s] Status: pending | Progress: 0%
  [10s] Status: processing | Progress: 7%
  [20s] Status: processing | Progress: 13%
  [30s] Status: processing | Progress: 20%
  [40s] Status: processing | Progress: 27%
  [50s] Status: completed | Progress: 100%

Video URL: https://files.evolink.ai/.../cgt-20260227224931-8vl7s.mp4
Downloading video to my_first_video.mp4...
Saved: my_first_video.mp4 (2847 KB)

That's it — about 50 seconds from API call to video file on disk.

Important: Video URLs expire after 24 hours. Always download the file promptly or store it in your own storage (S3, GCS, Cloudflare R2, etc.).

Common Mistake: Don't rely on the video URL for long-term storage. Build your pipeline to download immediately after completion. If you're processing videos asynchronously, use webhooks (covered below) to trigger downloads the moment they're ready.

For tips on writing effective prompts, see the Seedance 2.0 Prompt Guide — it covers shot-script format, style keywords, and timing syntax.


Poll for Results: Understanding the Async Workflow

Video generation takes 30–120+ seconds depending on duration and quality. The API uses an asynchronous task pattern — the same pattern used by OpenAI, Stability AI, and most other generative AI APIs:

  1. Submit → POST to /v1/videos/generations → get a task ID instantly
  2. Poll → GET /v1/tasks/{task_id} → check status periodically
  3. Retrieve → When status: "completed", the results array contains video URLs

This pattern exists because video generation is computationally expensive. A synchronous HTTP request would time out long before the video is ready.

Task Status Lifecycle

pending → processing → completed
                    ↘ failed
| Status | What's Happening | Typical Duration |
| --- | --- | --- |
| pending | Task is queued, waiting for GPU resources | 0–30 seconds |
| processing | Video is being generated — progress updates in real-time | 30–120 seconds |
| completed | Done! results array has your video URL(s) | Terminal state |
| failed | Something went wrong — check the error details | Terminal state |

Polling Best Practices

Poll interval: 10 seconds is a good default. Polling too fast wastes requests and could trigger rate limits; too slow delays your pipeline. For time-critical applications, you can poll every 5 seconds, but there's no benefit to going faster than that.

Timeout: Set a reasonable upper limit based on your parameters:

| Configuration | Expected Time | Suggested Timeout |
| --- | --- | --- |
| 4s, 480p | 20–40 seconds | 120 seconds |
| 5s, 720p | 30–60 seconds | 180 seconds |
| 10s, 720p | 60–90 seconds | 300 seconds |
| 15s, 1080p | 90–180 seconds | 600 seconds |
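One way to turn that timeout guidance into code is a rough heuristic; the per-second weights below are assumptions fitted to the table above, not API guarantees:

```python
def suggested_timeout(duration, quality):
    """Suggest a polling timeout in seconds for a given configuration.

    Rough heuristic fitted to the timeout table; pad further if you
    expect queue congestion at peak hours.
    """
    seconds_per_video_second = {"480p": 30, "720p": 36, "1080p": 40}
    return max(120, duration * seconds_per_video_second[quality])
```

For example, `suggested_timeout(5, "720p")` returns 180, matching the table's suggestion.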

Progress tracking: The progress field (0–100) gives you granular feedback — useful for building progress bars in a UI. Progress updates roughly every 5–7 seconds during the processing phase.

Cancelling a Task

If you need to stop a generation in progress (wrong prompt, changed your mind), you can cancel it:

def cancel_task(task_id):
    """Cancel a pending or processing task. Credits are refunded."""
    response = requests.post(
        f"{BASE_URL}/tasks/{task_id}/cancel",
        headers=HEADERS
    )
    if response.status_code == 200:
        print(f"Task {task_id} cancelled. Credits refunded.")
    else:
        print(f"Cancel failed: {response.json()}")

Cancellation works when task_info.can_cancel is true. Once a task reaches completed or failed, it can't be cancelled. Reserved credits are refunded automatically on cancellation.

Pro Tip: Build a cancellation mechanism into your UI early. Users will inevitably submit wrong prompts, and waiting 2 minutes for a bad video wastes both time and credits.

The wait_for_video() function from our setup code handles the standard polling flow. If you want to skip polling entirely, jump to the Webhooks section below.


Animate an Image (Image-to-Video)

Got a product photo, character illustration, or landscape you want to bring to life? Pass it as an image_url and Seedance will animate it. This is one of the most powerful features for e-commerce product videos — take a static product shot and turn it into an engaging video ad.

Uses the same setup and polling function from the first example above.

# ── Image-to-Video ────────────────────────────────────────────
def image_to_video():
    payload = {
        "model": "seedance-2.0",
        "prompt": (
            "@Image1 as the first frame. The scene slowly comes "
            "to life — leaves rustle gently, soft light shifts "
            "across the frame, and the subject blinks naturally."
        ),
        "image_urls": [
            "https://example.com/your-image.jpg"
        ],
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9"
    }

    print("Submitting image-to-video request...")
    response = requests.post(
        f"{BASE_URL}/videos/generations",
        headers=HEADERS,
        json=payload
    )
    response.raise_for_status()
    task = response.json()

    print(f"Task created: {task['id']}")
    result = wait_for_video(task["id"])

    video_url = result["results"][0]
    download_video(video_url, "animated_image.mp4")

    return result

Let's unpack what's different from text-to-video:

  • image_urls — An array of publicly accessible image URLs. The API fetches these directly, so they must be reachable from the internet (not localhost or private network URLs).
  • @Image1 in the prompt — This tag tells Seedance which image to reference and how. It corresponds to the first URL in image_urls. If you pass three images, you'd use @Image1, @Image2, @Image3.
  • No generate_audio — Omitted here; it defaults to true. Set it explicitly to false if you want a silent animation.

How @Image Tags Work

The @Image1 tag in your prompt tells Seedance how to use the image. It references the first URL in the image_urls array. You can pass up to 9 images (@Image1 through @Image9). For a complete guide on multimodal tags including @Video and @Audio, see the Multimodal @Tags Guide.

Common patterns:

| Prompt Pattern | What It Does | Best For |
| --- | --- | --- |
| @Image1 as first frame | Uses the image as the opening frame | Product showcases, scene setting |
| @Image1 as last frame | Uses the image as the closing frame | Logo reveals, transitions |
| @Image1 as character reference | Maintains the character's appearance | Consistent characters across clips |
| @Image1 as style reference | Applies the image's visual style | Brand consistency, art direction |
| @Image1 as first frame, @Image2 as last frame | Creates a transition between two images | Before/after, transformations |
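As an example, the first/last-frame transition pattern as a payload sketch (the example.com URLs are placeholders for your own hosted images):

```python
payload = {
    "model": "seedance-2.0",
    "prompt": (
        "@Image1 as first frame, @Image2 as last frame. "
        "A smooth morph between the two scenes with soft cross-lighting."
    ),
    "image_urls": [
        "https://example.com/before.jpg",  # referenced as @Image1
        "https://example.com/after.jpg",   # referenced as @Image2
    ],
    "duration": 5,
    "quality": "720p",
    "aspect_ratio": "16:9",
}
```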

The actual response from our test:

{
  "created": 1772204037,
  "id": "task-unified-1772204036-lify8u5p",
  "model": "seedance-2.0",
  "object": "video.generation.task",
  "progress": 0,
  "status": "pending",
  "task_info": {
    "can_cancel": true,
    "estimated_time": 145
  },
  "type": "video",
  "usage": {
    "billing_rule": "per_second",
    "credits_reserved": 17.784,
    "user_group": "default"
  }
}

Image-to-video follows the exact same async pattern — submit, poll, download. The estimated_time is slightly longer because the model needs to analyze the input image.

Image Requirements

| Constraint | Value |
| --- | --- |
| Max images | 9 per request |
| Max file size | 30 MB per image |
| Supported formats | JPEG, PNG, WebP, BMP, TIFF, GIF |
| URL requirement | Must be publicly accessible |
| Recommended resolution | At least 720px on the shorter side |

Common Mistake: Passing a local file path instead of a URL. The image_urls field requires publicly accessible HTTP/HTTPS URLs. If your images are local, upload them to S3, Cloudflare R2, or even a temporary file hosting service first.

Restriction: Seedance does not support uploading realistic human face images. The system automatically rejects them. Use illustrated or stylized characters instead.
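The size and format limits from the table above are easy to pre-check locally before you upload anything. A sketch (it checks extension and file size only, not actual image content or faces):

```python
import os

ALLOWED_EXTS = {".jpg", ".jpeg", ".png", ".webp", ".bmp", ".tiff", ".gif"}
MAX_BYTES = 30 * 1024 * 1024  # 30 MB per image

def check_local_image(path):
    """Raise ValueError if a local file would violate the image constraints."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTS:
        raise ValueError(f"Unsupported format: {ext or 'no extension'}")
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError(f"{path} exceeds the 30 MB limit")
```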

Hosting Images for the API

If you don't have a CDN, here are quick options for getting a public URL:

# Option 1: Upload to S3 (if you have AWS)
import boto3
s3 = boto3.client('s3')
s3.upload_file('local_image.jpg', 'my-bucket', 'seedance/input.jpg')
image_url = f"https://my-bucket.s3.amazonaws.com/seedance/input.jpg"

# Option 2: Use a temporary file hosting API
# Many services offer free temporary hosting for testing

For advanced image-to-video techniques — first-last frame control, multi-image composition, and e-commerce product animation — see the Image-to-Video deep dive.


Customize Your Videos

Every parameter you can tune in a generation request:

| Parameter | Type | Default | Options | Description |
| --- | --- | --- | --- | --- |
| model | string | seedance-2.0 | | Required. The model to use. |
| prompt | string | | ≤2000 tokens | Required. Video description with optional @tags. |
| duration | integer | 5 | 4–15 | Video length in seconds. |
| quality | string | 720p | 480p, 720p, 1080p | Resolution tier. Higher = more credits. |
| aspect_ratio | string | 16:9 | 16:9, 9:16, 1:1, 4:3, 3:4, 21:9 | Output aspect ratio. |
| generate_audio | boolean | true | true, false | Enable AI-generated audio/music. |
| image_urls | array | | ≤9 images | Reference images. Use @Image1, @Image2... in prompt. |
| video_urls | array | | ≤3 videos | Reference videos. Use @Video1, @Video2... in prompt. |
| audio_urls | array | | ≤3 audio files | Reference audio. Use @Audio1, @Audio2... in prompt. |
| callback_url | string | | HTTPS URL | Webhook for completion notification. |

Seedance 2.0 vs 1.5 Note: All parameters above work with both seedance-2.0 and seedance-1.5-pro. The key difference: video_urls, audio_urls, and multi-image references (@Image2 through @Image9) are 2.0-only features. If you use them with 1.5, the API returns a 400 error with a clear message indicating the feature isn't supported.
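If your code may fall back to seedance-1.5-pro, a quick pre-flight check based on the compatibility note above can flag payloads that need 2.0 (a sketch, not an official SDK helper):

```python
def requires_seedance_2(payload):
    """Return True if the payload uses Seedance 2.0-only inputs."""
    # video_urls and audio_urls are 2.0-only parameters.
    if payload.keys() & {"video_urls", "audio_urls"}:
        return True
    # More than one reference image (@Image2 and up) is also 2.0-only.
    return len(payload.get("image_urls", [])) > 1
```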

Quick Examples

Vertical video for social media (TikTok/Reels):

Uses the same setup and polling function from the first example above.

payload = {
    "model": "seedance-2.0",
    "prompt": "A barista pours latte art in slow motion. Close-up overhead shot.",
    "duration": 8,
    "quality": "1080p",
    "aspect_ratio": "9:16",       # Vertical for mobile
    "generate_audio": True
}

The 9:16 aspect ratio generates a 1080×1920 video — native resolution for TikTok, Instagram Reels, and YouTube Shorts. The 1080p quality tier ensures crisp visuals on mobile screens.

Cinematic widescreen with camera movement:

payload = {
    "model": "seedance-2.0",
    "prompt": (
        "Aerial drone shot over a misty mountain range at sunrise. "
        "Camera slowly pushes forward, revealing a hidden valley. "
        "Cinematic color grading, volumetric lighting."
    ),
    "duration": 10,
    "quality": "1080p",
    "aspect_ratio": "21:9",       # Ultra-widescreen cinematic
    "generate_audio": True
}

For programmatic camera control — dolly zooms, orbital shots, and Hitchcock-style movements — see the Camera Movement API Guide.

Silent video for a website background:

payload = {
    "model": "seedance-2.0",
    "prompt": "Abstract flowing particles in deep blue and gold. Slow, meditative movement.",
    "duration": 15,               # Max duration for seamless loops
    "quality": "720p",
    "aspect_ratio": "21:9",       # Wide background
    "generate_audio": False       # No audio for autoplay backgrounds
}

Budget-friendly draft (fast iteration):

payload = {
    "model": "seedance-2.0",
    "prompt": "A cat wearing sunglasses sits at a DJ booth. Neon club lighting.",
    "duration": 4,                # Minimum duration = fastest generation
    "quality": "480p",            # Lowest quality = cheapest credits
    "aspect_ratio": "16:9"
}

Pro Tip: During development, always use duration: 4 and quality: "480p". This is the cheapest and fastest combination — ideal for iterating on prompts. Once you're happy with the content, render the final version at 1080p with your desired duration.

Credit Cost Estimation

Credits scale with duration and quality. Here's a rough guide:

| Quality | 4s | 5s | 10s | 15s |
| --- | --- | --- | --- | --- |
| 480p | ~8 | ~10 | ~20 | ~30 |
| 720p | ~14 | ~18 | ~36 | ~53 |
| 1080p | ~22 | ~28 | ~55 | ~83 |

Approximate credits. Actual costs shown in credits_reserved field. Check the EvoLink dashboard for current rates.
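Since billing is per_second, the table above reduces to a per-second rate. The rates below are back-fitted from the table and approximate; treat credits_reserved in the API response as authoritative:

```python
CREDITS_PER_SECOND = {"480p": 2.0, "720p": 3.6, "1080p": 5.5}  # approximate rates

def estimate_credits(duration, quality):
    """Rough pre-submission credit estimate for budgeting purposes."""
    return round(duration * CREDITS_PER_SECOND[quality], 1)
```

For example, `estimate_credits(10, "480p")` returns 20.0, matching the table.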

The multimodal reference system — @Image, @Video, @Audio tags — is where Seedance 2.0 truly shines. You can replicate camera movements from reference videos, maintain character consistency across shots, and sync to audio beats. For a complete guide, read The Ultimate Guide to @Tags.


Handle Errors Gracefully

API calls fail. Networks drop. Rate limits hit. Here's how to build resilient code that handles every real error scenario.

Common Error Responses

Every error follows the same format:

{
  "error": {
    "message": "description of what went wrong",
    "type": "error_category",
    "code": "specific_error_code"
  }
}

The error object always contains message and type. The code field is present for most errors but not all. Always check type first, then code for specifics.
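That makes parsing predictable. A small helper (a sketch, not an official SDK function) normalizes the optional code field:

```python
def parse_api_error(body):
    """Extract (type, code, message) from an error response body.

    code comes back as None when the API omits it.
    """
    err = body.get("error", {})
    return (
        err.get("type", "unknown_error"),
        err.get("code"),  # optional field
        err.get("message", "no message provided"),
    )
```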

Here are real error responses from the API:

401 — Invalid API Key:

{
  "error": {
    "message": "Invalid token (request id: 20260227225245660301729AApJNAhJ)",
    "type": "evo_api_error"
  }
}

This means your API key is wrong, expired, or was revoked. Double-check the EVOLINK_API_KEY environment variable. A common cause: copying the key with trailing whitespace.

400 — Missing Required Field:

{
  "error": {
    "code": "invalid_parameter",
    "message": "prompt cannot be empty",
    "type": "invalid_request_error"
  }
}

The prompt field is required for all generation requests. This also triggers if you pass an empty string or whitespace-only prompt.

400 — Invalid Parameter Value:

{
  "error": {
    "code": "invalid_parameter",
    "message": "duration must be between 4 and 15",
    "type": "invalid_request_error"
  }
}

Happens when you pass duration: 3 or duration: 20. The valid range is 4–15 seconds inclusive.

400 — Unsupported Quality Tier:

{
  "error": {
    "code": "invalid_parameter",
    "message": "quality must be one of: 480p, 720p, 1080p",
    "type": "invalid_request_error"
  }
}

Common when passing "quality": "4k" or "quality": "hd". Use the exact strings: 480p, 720p, or 1080p.

402 — Insufficient Credits:

{
  "error": {
    "message": "Insufficient credits. Required: 17.784, Available: 2.100",
    "type": "insufficient_quota_error"
  }
}

Your account doesn't have enough credits. The message tells you exactly how many you need vs. how many you have. Top up at the EvoLink dashboard.

404 — Task Not Found:

{
  "error": {
    "message": "Task not found",
    "type": "invalid_request_error",
    "code": "task_not_found"
  }
}

Usually means the task ID is wrong, or the task was created more than 24 hours ago (tasks expire). Double-check you're using the id field from the creation response, not some other field.

413 — Image Too Large:

{
  "error": {
    "message": "Image file size exceeds 30MB limit",
    "type": "request_too_large_error"
  }
}

Compress your image before uploading. Beyond 2–3 MB, additional file size rarely improves generation results.

429 — Rate Limited:

{
  "error": {
    "message": "Rate limit exceeded. Please retry after 60 seconds.",
    "type": "rate_limit_error"
  }
}

You're sending too many requests. The default limit is generous for development, but batch scripts can hit it. Implement exponential backoff (see below).

422 — Content Moderation Rejection:

{
  "error": {
    "message": "Content rejected by safety filter",
    "type": "content_policy_violation",
    "code": "content_filtered"
  }
}

Your prompt or input images triggered the content moderation system. Rephrase your prompt to avoid restricted content. Realistic human faces in image_urls are automatically rejected.

Error Reference Table

| HTTP Code | Type | Meaning | Retryable? | Action |
| --- | --- | --- | --- | --- |
| 400 | invalid_request_error | Bad parameters | No | Fix your payload |
| 401 | authentication_error | Invalid API key | No | Verify your key |
| 402 | insufficient_quota_error | Out of credits | No | Top up your account |
| 404 | not_found_error | Task or model not found | No | Check task_id / model name |
| 413 | request_too_large_error | Payload too big | No | Reduce file sizes |
| 422 | content_policy_violation | Content filtered | No | Rephrase prompt |
| 429 | rate_limit_error | Too many requests | Yes | Wait 60s, retry |
| 500 | internal_server_error | Server issue | Yes | Retry after a few seconds |
| 502 | bad_gateway | Upstream error | Yes | Retry after 5s |
| 503 | service_unavailable_error | Service down | Yes | Retry after 30s |

Production-Ready Error Handling

Wrap your API calls with retry logic for transient errors:

Uses the same setup and polling function from the first example above.

import random

def generate_video_with_retry(payload, max_retries=3):
    """
    Submit a video generation request with automatic retry
    for transient errors (429, 500, 502, 503).

    Uses exponential backoff with jitter to avoid thundering herd:
    - Attempt 1: wait ~1s
    - Attempt 2: wait ~2s  
    - Attempt 3: wait ~4s

    Non-retryable errors (400, 401, 402, 404, 413, 422) fail immediately
    because retrying won't fix the underlying problem.
    """
    for attempt in range(max_retries):
        try:
            response = requests.post(
                f"{BASE_URL}/videos/generations",
                headers=HEADERS,
                json=payload,
                timeout=30       # 30s connection timeout
            )

            # Success — return the task object
            if response.status_code == 200:
                return response.json()

            # Parse the error response
            error = response.json().get("error", {})
            error_type = error.get("type", "")
            error_msg = error.get("message", "Unknown error")

            # Non-retryable errors — fail immediately
            if response.status_code in (400, 401, 402, 404, 413, 422):
                raise ValueError(
                    f"API error {response.status_code}: {error_msg}"
                )

            # Retryable errors — exponential backoff with jitter
            if response.status_code in (429, 500, 502, 503):
                wait = (2 ** attempt) + random.uniform(0, 1)
                print(f"  Retry {attempt + 1}/{max_retries} "
                      f"after {wait:.1f}s ({error_type}: {error_msg})")
                time.sleep(wait)
                continue

        except requests.exceptions.Timeout:
            # Server didn't respond within 30 seconds
            wait = (2 ** attempt) + random.uniform(0, 1)
            print(f"  Timeout. Retry {attempt + 1}/{max_retries} "
                  f"after {wait:.1f}s")
            time.sleep(wait)
            continue

        except requests.exceptions.ConnectionError as e:
            # DNS failure, refused connection, etc.
            wait = (2 ** attempt) + random.uniform(0, 1)
            print(f"  Connection error: {e}. Retry {attempt + 1}/{max_retries} "
                  f"after {wait:.1f}s")
            time.sleep(wait)
            continue

    raise RuntimeError(f"Failed after {max_retries} retries")

This handles:

  • Rate limits (429) — exponential backoff with jitter avoids synchronized retries from multiple clients
  • Server errors (500/502/503) — automatic retry with increasing delay
  • Timeouts — 30-second timeout prevents hanging on unresponsive servers
  • Connection drops — DNS failures, refused connections, network blips
  • Client errors (400/401/402/404/413/422) — fail immediately because retrying won't fix bad input

Pro Tip: For production systems, consider logging failed requests with their full payload and error response. This makes debugging much easier when things go wrong at 3 AM.
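One lightweight way to do that is a dedicated logger that writes each failure as a JSON line. This is a sketch; the logger name and file name are placeholders to adapt to your own logging stack:

```python
import json
import logging

# Dedicated logger that appends failed requests to a local file so they
# can be inspected or replayed later. The filename is a placeholder.
failure_log = logging.getLogger("seedance.failed_requests")
failure_log.setLevel(logging.ERROR)
failure_log.addHandler(logging.FileHandler("failed_requests.log"))

def log_failed_request(payload, status_code, error_body):
    """Record the full request payload and error response as one JSON line."""
    failure_log.error(json.dumps({
        "status_code": status_code,
        "payload": payload,
        "error": error_body,
    }))
```

You could call this from the non-retryable branch of the retry helper just before raising, so every permanent failure leaves a replayable record.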

Validating Input Before API Calls

Save credits and time by catching obvious errors locally:

def validate_payload(payload):
    """
    Validate a generation payload before sending to the API.
    Catches common mistakes that would result in 400 errors.
    """
    errors = []

    # Required fields
    if not payload.get("model"):
        errors.append("'model' is required")
    if not payload.get("prompt") or not payload["prompt"].strip():
        errors.append("'prompt' is required and cannot be empty")

    # Duration range
    duration = payload.get("duration", 5)
    if duration < 4 or duration > 15:
        errors.append(f"'duration' must be 4-15, got {duration}")

    # Quality values
    valid_qualities = {"480p", "720p", "1080p"}
    quality = payload.get("quality", "720p")
    if quality not in valid_qualities:
        errors.append(f"'quality' must be one of {valid_qualities}, got '{quality}'")

    # Aspect ratio values
    valid_ratios = {"16:9", "9:16", "1:1", "4:3", "3:4", "21:9"}
    ratio = payload.get("aspect_ratio", "16:9")
    if ratio not in valid_ratios:
        errors.append(f"'aspect_ratio' must be one of {valid_ratios}, got '{ratio}'")

    # Image URL validation
    image_urls = payload.get("image_urls", [])
    if len(image_urls) > 9:
        errors.append(f"Maximum 9 images allowed, got {len(image_urls)}")
    for i, url in enumerate(image_urls):
        if not url.startswith(("http://", "https://")):
            errors.append(f"image_urls[{i}] must be an HTTP(S) URL")

    if errors:
        raise ValueError(f"Payload validation failed:\n" + "\n".join(f"  - {e}" for e in errors))

    return True

Common Mistake: Forgetting to URL-encode special characters in image URLs. If your image path contains spaces or non-ASCII characters, use urllib.parse.quote() to encode it.
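For example, urllib.parse.quote from the standard library can encode just the path portion while leaving the scheme and host intact:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def encode_image_url(url):
    """Percent-encode the path of a URL so spaces and non-ASCII
    characters don't trigger a 400 from the API."""
    parts = urlsplit(url)
    # quote() leaves '/' unescaped by default, so the directory
    # structure of the path survives encoding
    return urlunsplit(parts._replace(path=quote(parts.path)))

print(encode_image_url("https://example.com/my photos/café.jpg"))
# → https://example.com/my%20photos/caf%C3%A9.jpg
```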


Set Up Webhooks (Skip the Polling)

Polling works fine for scripts and prototyping. For production systems, webhooks are more efficient — the API pushes the result to your server when the video is ready. No wasted requests, no delay between completion and notification.

How It Works

Add callback_url to your generation request:

Uses the same setup from the first example above.

payload = {
    "model": "seedance-2.0",
    "prompt": "A spaceship launches from a desert landscape at sunset.",
    "duration": 8,
    "quality": "720p",
    "callback_url": "https://your-server.com/api/webhook/seedance"
}

response = requests.post(
    f"{BASE_URL}/videos/generations",
    headers=HEADERS,
    json=payload
)
task = response.json()
print(f"Task submitted: {task['id']}")
# No polling needed — your webhook will receive the result

When the video is ready, the API sends a POST request to your callback_url with the completed task object — the exact same payload you'd get from polling.
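For reference, a completed-task callback body looks roughly like this. The values below are made up for illustration; the polling responses you've already seen are the authoritative shape:

```python
# Illustrative webhook body — same structure as a polled task object.
# All field values here are invented for the example.
example_callback = {
    "id": "task-unified-abc123",   # EvoLink task ID
    "status": "completed",         # or "failed"
    "model": "seedance-2.0",
    "created": 1730000000,         # Unix timestamp
    "results": [                   # video URL(s) when completed
        "https://cdn.example.com/videos/task-unified-abc123.mp4"
    ],
}
```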

Webhook Requirements

| Requirement | Details |
| --- | --- |
| Protocol | HTTPS only (no HTTP) — required for security |
| Response | Return 2xx within 10 seconds |
| Retries | 3 attempts on failure (1s, 2s, 4s intervals) |
| URL length | ≤ 2048 characters |
| Network | No internal/private IPs (localhost, 10.x.x.x, 192.168.x.x) |
| Body | JSON POST with the full task object |

Production Flask Webhook Receiver

Here's a complete webhook server using Flask with proper validation, error handling, and async video downloading:

# webhook_server.py
"""
Seedance webhook receiver — handles video completion callbacks.
Run: pip install flask requests
      python webhook_server.py
"""
from flask import Flask, request, jsonify
import json
import os
import threading
import requests as req  # renamed to avoid conflict with flask.request

app = Flask(__name__)

# Directory to save completed videos
OUTPUT_DIR = os.getenv("VIDEO_OUTPUT_DIR", "./videos")
os.makedirs(OUTPUT_DIR, exist_ok=True)


def download_video_async(video_url, task_id):
    """Download video in a background thread to not block the webhook response."""
    try:
        filename = os.path.join(OUTPUT_DIR, f"{task_id}.mp4")
        print(f"  Downloading {task_id} to {filename}...")
        resp = req.get(video_url, stream=True, timeout=120)
        resp.raise_for_status()
        with open(filename, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
        size_mb = os.path.getsize(filename) / (1024 * 1024)
        print(f"  Saved: {filename} ({size_mb:.1f} MB)")
    except Exception as e:
        print(f"  Download failed for {task_id}: {e}")


@app.route("/api/webhook/seedance", methods=["POST"])
def handle_webhook():
    """
    Handle Seedance video completion webhook.

    The API sends a POST with the full task object when
    a video generation completes (success or failure).
    """
    # Parse the incoming task object
    task = request.json
    if not task:
        return jsonify({"error": "Empty body"}), 400

    task_id = task.get("id", "unknown")
    status = task.get("status", "unknown")
    model = task.get("model", "unknown")

    print(f"\n{'='*50}")
    print(f"Webhook received: task={task_id}")
    print(f"  Status: {status}")
    print(f"  Model: {model}")

    if status == "completed":
        # Extract video URL(s) from results
        results = task.get("results", [])
        if results:
            video_url = results[0]
            print(f"  Video URL: {video_url}")

            # Download in background thread so we respond quickly
            thread = threading.Thread(
                target=download_video_async,
                args=(video_url, task_id)
            )
            thread.start()
        else:
            print(f"  WARNING: Completed but no results array!")

    elif status == "failed":
        error_info = task.get("error", {})
        print(f"  FAILED: {json.dumps(error_info, indent=2)}")
        # TODO: Log to your error tracking system (Sentry, etc.)
        # TODO: Optionally retry the generation with modified parameters

    else:
        print(f"  Unexpected status: {status}")
        print(f"  Full payload: {json.dumps(task, indent=2)}")

    # Always return 200 quickly — the API expects a response within 10s
    return jsonify({"received": True, "task_id": task_id}), 200


@app.route("/health", methods=["GET"])
def health_check():
    """Health check endpoint for load balancers."""
    return jsonify({"status": "ok"}), 200


if __name__ == "__main__":
    print(f"Starting webhook server...")
    print(f"Videos will be saved to: {os.path.abspath(OUTPUT_DIR)}")
    print(f"Webhook URL: http://localhost:5000/api/webhook/seedance")
    # debug=True is for local testing only; run under a WSGI server
    # (gunicorn, uwsgi) in a real production deployment
    app.run(host="0.0.0.0", port=5000, debug=True)

Install dependencies and run:

pip install flask requests
python webhook_server.py

Key design decisions in this server:

  • Background downloads — We spawn a thread to download the video so the webhook handler returns 200 immediately. The API expects a response within 10 seconds; video downloads can take longer.
  • Health check endpoint — /health is useful when deploying behind a load balancer (ALB, nginx, etc.).
  • Error logging — Failed tasks are printed with the full error payload. In production, pipe this to Sentry, Datadog, or your logging stack.

Exposing Localhost with ngrok

For local development, use ngrok to create a public HTTPS URL that tunnels to your local server:

# Install ngrok (macOS)
brew install ngrok

# Or download from https://ngrok.com/download

# Start the tunnel
ngrok http 5000

ngrok outputs something like:

Forwarding  https://a1b2c3d4.ngrok-free.app → http://localhost:5000

Use that HTTPS URL as your callback_url:

payload = {
    "model": "seedance-2.0",
    "prompt": "Your prompt here",
    "callback_url": "https://a1b2c3d4.ngrok-free.app/api/webhook/seedance"
}

Common Mistake: Using the http:// ngrok URL instead of https://. The Seedance API requires HTTPS for webhooks — it will reject plain HTTP callback URLs with a 400 error.

Webhook Security

In production, validate that webhook requests actually come from the EvoLink API:

def verify_webhook(request):
    """Verify webhook authenticity using the task ID pattern."""
    task = request.json
    task_id = task.get("id", "")

    # EvoLink task IDs follow a specific format
    if not task_id.startswith("task-unified-"):
        return False

    # Additional validation: check required fields exist
    required_fields = ["id", "status", "model", "created"]
    if not all(field in task for field in required_fields):
        return False

    return True
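The task-ID check above only filters obviously malformed requests; anyone who knows the ID format can forge it. If EvoLink issues you a webhook signing secret, a stronger check is an HMAC over the raw request body. This is a sketch under the assumption of a shared secret and an X-Webhook-Signature header, both hypothetical here, so check the docs for the provider's actual signing scheme:

```python
import hmac
import hashlib

# Hypothetical shared secret; in practice this would come from your
# provider dashboard and live in an environment variable.
WEBHOOK_SECRET = b"your-webhook-secret"

def verify_signature(raw_body: bytes, signature_header: str) -> bool:
    """Compare an HMAC-SHA256 of the raw body against the signature header,
    using a constant-time comparison to avoid timing attacks."""
    expected = hmac.new(WEBHOOK_SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

In the Flask handler you'd call verify_signature(request.get_data(), request.headers.get("X-Webhook-Signature", "")) before trusting the payload.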

When to Use Webhooks vs Polling

| Scenario | Recommendation | Why |
| --- | --- | --- |
| Quick prototyping / scripts | Polling | Simpler, no server needed |
| Production web app | Webhooks | Scalable, no wasted requests |
| Batch processing (100+ videos) | Webhooks + queue | Submit all, process as they complete |
| CLI tools | Polling | No server infrastructure required |
| Mobile app backend | Webhooks | Push notifications to users on completion |
| Serverless (Lambda/Cloud Functions) | Webhooks | Perfect fit — function triggered per completion |

Pro Tip: For batch processing, combine webhooks with a message queue (Redis, RabbitMQ, SQS). Submit all generation requests, then process completions as they arrive on the queue. This decouples submission from processing and handles retries gracefully.
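Here's a minimal sketch of that decoupling using the standard-library queue module as a stand-in for Redis/RabbitMQ/SQS. The webhook handler only enqueues; a worker thread does the slow work:

```python
import queue
import threading

completed_tasks = queue.Queue()

def webhook_handler(task):
    """Called by your web framework: enqueue and return immediately."""
    completed_tasks.put(task)

def worker():
    """Background consumer: download the video, update your database, etc."""
    while True:
        task = completed_tasks.get()
        try:
            if task is None:          # sentinel value shuts the worker down
                break
            print(f"Processing {task['id']}...")
            # download_video(task["results"][0], ...) would go here
        finally:
            completed_tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
```

With a real broker, the queue survives restarts and multiple worker processes can drain it in parallel; the structure stays the same.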


Batch Processing: Generate Multiple Videos

Real-world use cases often involve generating many videos. Here's a pattern for batch processing with rate limiting:

Uses the same setup and helper functions from the first example above.

import concurrent.futures

def batch_generate(prompts, max_concurrent=3):
    """
    Generate multiple videos with controlled concurrency.

    Args:
        prompts: List of prompt strings.
        max_concurrent: Maximum simultaneous generations.

    Returns:
        List of (prompt, result_or_error) tuples.
    """
    results = []

    def generate_one(prompt, index):
        """Generate a single video and return the result."""
        payload = {
            "model": "seedance-2.0",
            "prompt": prompt,
            "duration": 5,
            "quality": "720p"
        }
        try:
            task = generate_video_with_retry(payload)
            print(f"[{index}] Submitted: {task['id']}")
            result = wait_for_video(task["id"])
            video_url = result["results"][0]
            download_video(video_url, f"batch_{index}.mp4")
            return (prompt, result)
        except Exception as e:
            print(f"[{index}] Failed: {e}")
            return (prompt, str(e))

    # Process in batches to respect rate limits
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_concurrent) as executor:
        futures = {
            executor.submit(generate_one, prompt, i): i
            for i, prompt in enumerate(prompts)
        }
        for future in concurrent.futures.as_completed(futures):
            results.append(future.result())

    # Summary
    succeeded = sum(1 for _, r in results if isinstance(r, dict))
    print(f"\nBatch complete: {succeeded}/{len(prompts)} succeeded")
    return results


# Example usage
prompts = [
    "A hummingbird hovering near a red flower. Macro lens, shallow depth of field.",
    "Ocean waves crashing on volcanic rocks at sunset. Slow motion.",
    "A street musician playing violin in the rain. Cinematic lighting.",
]
batch_generate(prompts, max_concurrent=2)

Key considerations for batch processing:

  • max_concurrent=3 — Don't submit too many requests simultaneously. Start with 2–3 and increase based on your rate limits.
  • ThreadPoolExecutor — Uses threads (not processes) because we're I/O-bound (waiting for API responses), not CPU-bound.
  • Error isolation — Each video generation is independent. One failure doesn't stop the batch.

What's Next

You've covered the fundamentals — text-to-video, image-to-video, async polling, webhooks, error handling, and batch processing. Here's where to go deeper:

Build Something

Combine what you've learned. Here are a few project ideas:

  • Automated product video pipeline — Upload product photos, generate marketing videos in bulk (see our E-commerce Video Guide)
  • Social media content engine — Generate short-form vertical videos from text briefs, post directly to TikTok/Reels
  • Storyboard-to-video tool — Turn sequential images into animated scenes with camera movement control
  • AI video editing pipeline — Use Seedance 2.0's video extension to create longer narratives from shorter clips

Ready to build? Get your free EvoLink API key and start generating videos today.


Frequently Asked Questions

How long does Seedance 2.0 video generation take?

Typically 30–120 seconds depending on duration and quality settings. A 5-second 720p video completes in about 50 seconds. A 15-second 1080p video can take 2–3 minutes. The API returns an estimated_time field with each task so you can set appropriate timeouts. During peak hours, queue wait times may add 10–30 seconds to the total.

What image formats does the Seedance 2.0 API accept?

JPEG, PNG, WebP, BMP, TIFF, and GIF. Each image must be under 30 MB. You can pass up to 9 images per request via the image_urls parameter. Images must be publicly accessible URLs — the API fetches them directly. For best results, use images that are at least 720px on the shorter side. Very low-resolution images (below 256px) may produce blurry animations.

Can I generate videos longer than 15 seconds?

The maximum single generation is 15 seconds. For longer content, generate multiple clips and concatenate them using FFmpeg or any video editor. Seedance 2.0 supports video extension — you can use a generated video's last frame as the first frame of the next generation to create seamless continuity. The basic approach: generate clip 1, extract its last frame, then pass that frame as @Image1 (the first frame) for clip 2.
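As a concrete sketch, here are the two ffmpeg invocations built as argument lists you could hand to subprocess.run. The file paths are placeholders, and clips.txt is a concat list with one line per clip in the form file 'clip1.mp4':

```python
def last_frame_cmd(video_path, frame_path):
    """ffmpeg command to grab a frame ~0.1s before the end of a clip."""
    return ["ffmpeg", "-sseof", "-0.1", "-i", video_path,
            "-frames:v", "1", "-q:v", "2", frame_path]

def concat_cmd(list_file, output_path):
    """ffmpeg concat-demuxer command; stream-copies clips without re-encoding."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", output_path]

# e.g. subprocess.run(last_frame_cmd("clip1.mp4", "last.png"), check=True)
```

Note that stream-copy concatenation only works cleanly when all clips share the same codec, resolution, and frame rate, which is the case when they come from the same generation settings.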

How much does Seedance 2.0 API cost through EvoLink?

Pricing is based on video duration and quality tier. A 5-second 720p video costs approximately 18 credits. EvoLink provides smart routing that can reduce costs compared to direct API access. Check your dashboard for current per-second rates. The credits_reserved field in the API response shows the exact cost before generation begins — you'll never be charged more than that amount.

What's the difference between seedance-1.5-pro and seedance-2.0?

Seedance 2.0 adds multimodal references (mix images, videos, and audio as inputs), native audio generation, improved physics and consistency, and video editing capabilities. The API interface is identical — same endpoint, same parameters, same response format. You can test with seedance-1.5-pro today and switch to seedance-2.0 by changing the model name. Key 1.5 limitations: single image input only (no @Image2–9), no video/audio references, no native audio generation. See the Seedance 2.0 vs Sora 2 comparison for benchmarks.

How do I handle the "content rejected by safety filter" error?

The content moderation system rejects prompts involving realistic violence, explicit content, and real public figures. It also rejects realistic human face images uploaded via image_urls. To work around face restrictions, use illustrated, stylized, or anime-style character images. For prompt rejections, rephrase to be less specific about restricted topics. The error response includes type: "content_policy_violation" — check for this in your error handling code to give users a clear message.
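In code, that check is a small branch on the error object. A sketch that reuses the error shape from the retry helper earlier; the message wording is up to you:

```python
def explain_api_error(error: dict) -> str:
    """Map an API error object to a message you can show end users."""
    if error.get("type") == "content_policy_violation":
        return ("Your prompt or image was rejected by the safety filter. "
                "Try rephrasing, or use a stylized/illustrated character image.")
    return error.get("message", "Unknown error")
```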

Can I use the Seedance API in a Node.js / JavaScript project?

Yes. The REST API is language-agnostic — any HTTP client works. The concepts in this tutorial (async polling, webhooks, error handling) apply directly to Node.js with fetch or axios. EvoLink also provides official Node.js and Python SDKs that handle polling and retries for you.

What happens if my webhook server is down when the video completes?

The API retries webhook delivery 3 times with increasing intervals (1s, 2s, 4s). If all 3 retries fail, the webhook is abandoned — but the video is still available. You can always fall back to polling with GET /v1/tasks/{task_id} to retrieve the result. For this reason, it's good practice to store the task ID on submission and have a background job that periodically checks for any tasks that completed but weren't received via webhook.
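A minimal sketch of that background check, with the HTTP call factored out so the loop itself is easy to test. How you persist pending task IDs (database, Redis, a file) is up to you:

```python
def reconcile_pending(pending_ids, fetch_task):
    """Return the subset of task IDs that are still in flight.

    fetch_task(task_id) should GET /v1/tasks/{task_id} and return the
    parsed task object. Finished tasks get handed to the same handling
    your webhook route uses.
    """
    still_pending = set()
    for task_id in pending_ids:
        task = fetch_task(task_id)
        if task["status"] in ("completed", "failed"):
            pass  # process here exactly as the webhook handler would
        else:
            still_pending.add(task_id)
    return still_pending
```

In production you'd pass fetch_task=lambda tid: requests.get(f"{BASE_URL}/tasks/{tid}", headers=HEADERS).json() and run this job every few minutes.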

Is there a rate limit on API requests?

Yes. The default rate limit is generous for development and moderate production use. If you hit a 429 error, implement exponential backoff as shown in the error handling section. For high-volume use cases (thousands of videos per day), contact EvoLink support to discuss custom rate limits and dedicated capacity.

Can I use Seedance 2.0 for commercial projects?

Yes. Videos generated through the EvoLink API are licensed for commercial use. You own the output and can use it in products, marketing materials, client deliverables, and published content. See the Seedance 2.0 copyright guide for detailed licensing terms and best practices for commercial use.


Complete Script

Here's the full tutorial code in a single file — copy, paste, add your API key, and run:

"""
Seedance 2.0 API Tutorial — Complete Script
Docs: https://seedance2api.app/docs/video-generation
API Key: https://evolink.ai/early-access
"""
import requests
import time
import os
import json
import random

# ── Configuration ─────────────────────────────────────────────
API_KEY = os.getenv("EVOLINK_API_KEY", "sk-your-api-key-here")
BASE_URL = "https://api.evolink.ai/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}


# ── Reusable Helpers ──────────────────────────────────────────
def wait_for_video(task_id, poll_interval=10, timeout=600):
    """Poll a video generation task until completion."""
    elapsed = 0
    while elapsed < timeout:
        response = requests.get(
            f"{BASE_URL}/tasks/{task_id}",
            headers=HEADERS
        )
        response.raise_for_status()
        task = response.json()
        status = task["status"]
        progress = task.get("progress", 0)
        print(f"  [{elapsed}s] Status: {status} | Progress: {progress}%")
        if status == "completed":
            return task
        elif status == "failed":
            raise RuntimeError(f"Task {task_id} failed: {task}")
        time.sleep(poll_interval)
        elapsed += poll_interval
    raise TimeoutError(f"Task {task_id} timed out after {timeout}s")


def download_video(url, filename="output.mp4"):
    """Download a video file from a URL."""
    print(f"Downloading to {filename}...")
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(filename, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)
    print(f"Saved: {filename} ({os.path.getsize(filename) / 1024:.0f} KB)")


def generate_video_with_retry(payload, max_retries=3):
    """Submit a generation request with retry for transient errors."""
    for attempt in range(max_retries):
        try:
            response = requests.post(
                f"{BASE_URL}/videos/generations",
                headers=HEADERS,
                json=payload,
                timeout=30
            )
            if response.status_code == 200:
                return response.json()
            error = response.json().get("error", {})
            if response.status_code in (400, 401, 402, 404, 413, 422):
                raise ValueError(
                    f"API error {response.status_code}: "
                    f"{error.get('message', 'Unknown')}"
                )
            if response.status_code in (429, 500, 502, 503):
                wait = (2 ** attempt) + random.uniform(0, 1)
                print(f"  Retry {attempt+1}/{max_retries} after {wait:.1f}s")
                time.sleep(wait)
                continue
        except requests.exceptions.RequestException:
            wait = (2 ** attempt) + random.uniform(0, 1)
            print(f"  Retry {attempt+1}/{max_retries} after {wait:.1f}s")
            time.sleep(wait)
            continue
    raise RuntimeError(f"Failed after {max_retries} retries")


def validate_payload(payload):
    """Validate generation payload before API call."""
    errors = []
    if not payload.get("model"):
        errors.append("'model' is required")
    if not payload.get("prompt") or not payload["prompt"].strip():
        errors.append("'prompt' is required")
    duration = payload.get("duration", 5)
    if duration < 4 or duration > 15:
        errors.append(f"'duration' must be 4-15, got {duration}")
    quality = payload.get("quality", "720p")
    if quality not in {"480p", "720p", "1080p"}:
        errors.append(f"Invalid quality: {quality}")
    if errors:
        raise ValueError("Validation failed:\n" + "\n".join(f"  - {e}" for e in errors))


def cancel_task(task_id):
    """Cancel a pending or processing task."""
    response = requests.post(
        f"{BASE_URL}/tasks/{task_id}/cancel",
        headers=HEADERS
    )
    if response.status_code == 200:
        print(f"Task {task_id} cancelled.")
    else:
        print(f"Cancel failed: {response.json()}")


# ── Example 1: Text-to-Video ─────────────────────────────────
def text_to_video():
    payload = {
        "model": "seedance-2.0",
        "prompt": (
            "A golden retriever puppy chases a butterfly through "
            "a sunlit meadow. The camera follows the puppy with a "
            "smooth tracking shot as wildflowers sway in the breeze."
        ),
        "duration": 5,
        "quality": "720p",
        "aspect_ratio": "16:9",
        "generate_audio": True
    }
    validate_payload(payload)
    task = generate_video_with_retry(payload)
    print(f"Task: {task['id']} (ETA: {task['task_info']['estimated_time']}s)")
    result = wait_for_video(task["id"])
    download_video(result["results"][0], "text_to_video.mp4")


# ── Example 2: Image-to-Video ────────────────────────────────
def image_to_video():
    payload = {
        "model": "seedance-2.0",
        "prompt": (
            "@Image1 as the first frame. The scene slowly comes "
            "to life — leaves rustle gently, soft light shifts "
            "across the frame."
        ),
        "image_urls": ["https://example.com/your-image.jpg"],
        "duration": 5,
        "quality": "720p"
    }
    validate_payload(payload)
    task = generate_video_with_retry(payload)
    print(f"Task: {task['id']}")
    result = wait_for_video(task["id"])
    download_video(result["results"][0], "image_to_video.mp4")


# ── Example 3: Vertical Social Media Video ───────────────────
def social_media_video():
    payload = {
        "model": "seedance-2.0",
        "prompt": (
            "A barista pours latte art in slow motion. "
            "Close-up overhead shot, warm cafe lighting."
        ),
        "duration": 8,
        "quality": "1080p",
        "aspect_ratio": "9:16",
        "generate_audio": True
    }
    validate_payload(payload)
    task = generate_video_with_retry(payload)
    print(f"Task: {task['id']}")
    result = wait_for_video(task["id"])
    download_video(result["results"][0], "social_video.mp4")


if __name__ == "__main__":
    print("=== Text-to-Video ===")
    text_to_video()
    # print("\n=== Image-to-Video ===")
    # image_to_video()  # Uncomment and set your image URL
    # print("\n=== Social Media Video ===")
    # social_media_video()

Tip: To test with the currently available model, change "seedance-2.0" to "seedance-1.5-pro". The API interface is identical — same endpoint, same parameters, same response format. When Seedance 2.0 is fully rolled out, just switch the model name back.

Start building → Get your free API key at EvoLink
