DEV Community

Wanda

Posted on • Originally published at apidog.com

How to Use Seedance 2.0 API 2026

TL;DR

The Seedance 2.0 API, launched April 2, 2026 via Volcengine Ark, enables programmatic video generation. Submit a video generation task with a POST request, then poll the GET endpoint until the status is "succeeded". It supports text-to-video, image-to-video, bookend frame control, multimodal reference (images, video, audio), and native audio. A 5s 1080p video costs ~$0.66. Download your video within 24 hours, before the URL expires.

Try Apidog today

Want to use the Seedance 2.0 API? Want to use the Nanobanana API at 40% off? Try Hypereal AI, an API platform for all AI media APIs!

Seedance 2.0 Demo

Try Hypereal AI: https://hypereal.cloud

Introduction

On April 2, 2026, ByteDance's Volcengine Ark released the Seedance 2.0 API, allowing direct programmatic access. Previous tutorials covered only the web console UI. This guide steps through using the API directly for automation and integration in your dev workflow.

💡 Tip: The API uses an async task model: POST to create, poll GET for status, then download when complete. Apidog's Test Scenarios can automate this full flow—chain POST, extract task ID, loop GET poll, and assert on video URL. See the Apidog section for full automation setup.

This article covers all input types, cost calculation, and common errors.

What is Seedance 2.0?

Seedance 2.0 is ByteDance's advanced video generation model, available on Volcengine Ark under two model IDs:

  • doubao-seedance-2-0-260128 (standard)
  • doubao-seedance-2-0-fast-260128 (faster, lower quality)

Features:

  • Text-to-video and image-to-video
  • Bookend (first/last frame) control
  • Multimodal references: images, video, audio in one request
  • Native audio generation (dialogue, SFX, music, >8 languages)
  • Camera motion prompts (dolly, crane, etc.)
  • Up to 15s, 2K output, 24fps, aspect ratios 1:1–21:9

What changed: guide vs official API

Older guides (like this February 2026 post) covered only the web console. As of April 2026, you can call the API from any backend, automate pipelines, and integrate Seedance into products. Use this API guide for all developer implementations.

Prerequisites

  1. Get a Volcengine account: volcengine.com
  2. Generate an API key at:

   https://console.volcengine.com/ark/region:ark+cn-beijing/apikey

   Then export it:

   export ARK_API_KEY="your-api-key-here"

  3. Authenticate every request with a Bearer token in the Authorization header:

   Authorization: Bearer YOUR_ARK_API_KEY

Trial credits cover about 8 full 15-second 1080p generations.

Text-to-video: your first request

Base URL:

https://ark.cn-beijing.volces.com/api/v3

Submit a text-to-video task:

POST to /v1/contents/generations/tasks.

cURL Example

curl -X POST "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ARK_API_KEY" \
  -d '{
    "model": "doubao-seedance-2-0-260128",
    "content": [
      {
        "type": "text",
        "text": "A golden retriever running through a sunlit wheat field, wide tracking shot, cinematic"
      }
    ],
    "resolution": "1080p",
    "ratio": "16:9",
    "duration": 5,
    "watermark": false
  }'

API response:

{"id": "cgt-2025xxxxxxxx-xxxx"}

Python Example (Official SDK)

Install the SDK:

pip install volcenginesdkarkruntime

Submit a task:

import os
from volcenginesdkarkruntime import Ark

client = Ark(api_key=os.environ.get("ARK_API_KEY"))

resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {
            "type": "text",
            "text": "A golden retriever running through a sunlit wheat field, wide tracking shot, cinematic"
        }
    ],
    resolution="1080p",
    ratio="16:9",
    duration=5,
    watermark=False,
)

print(resp.id)

Store the task ID for polling.

The async task pattern: submit, poll, download

Seedance jobs are async: a 5s 1080p video takes 60–120 seconds to render. Tasks move through this lifecycle:

queued -> running -> succeeded
                  -> failed
                  -> expired
                  -> cancelled

Poll GET until status is no longer queued or running.

Full Python polling loop

import os
import time
import requests
from volcenginesdkarkruntime import Ark

client = Ark(api_key=os.environ.get("ARK_API_KEY"))

# Step 1: submit
resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {"type": "text", "text": "Aerial shot of a mountain lake at sunrise, slow dolly forward"}
    ],
    resolution="1080p",
    ratio="16:9",
    duration=5,
    watermark=False,
)

task_id = resp.id
print(f"Task submitted: {task_id}")

# Step 2: poll with exponential backoff
wait = 10
while True:
    result = client.content_generation.tasks.get(task_id=task_id)
    status = result.status
    print(f"Status: {status}")

    if status == "succeeded":
        video_url = result.content.video_url
        print(f"Video URL: {video_url}")
        break
    elif status in ("failed", "expired", "cancelled"):
        print(f"Task ended with status: {status}")
        break

    time.sleep(wait)
    wait = min(wait * 2, 60)  # cap at 60s

# Step 3: download immediately (the URL expires after 24 hours)
if status == "succeeded":
    response = requests.get(video_url, stream=True, timeout=120)
    response.raise_for_status()
    with open("output.mp4", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
    print("Downloaded: output.mp4")

Use exponential backoff to avoid API throttling. Always download as soon as succeeded.
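
If you prefer raw HTTP to the SDK, the status check itself is a single GET against the same tasks endpoint. A minimal sketch (the `session` parameter is injectable so the function can be exercised without network access; `get_task` is an illustrative helper, not part of the official SDK):

```python
BASE_URL = "https://ark.cn-beijing.volces.com/api/v3"

def get_task(task_id, api_key, session=None):
    """Fetch the current state of a generation task over raw HTTP.

    `session` defaults to the third-party `requests` module; it is
    injectable so the call can be stubbed out in tests.
    """
    if session is None:
        import requests  # only needed for real calls
        session = requests
    resp = session.get(
        f"{BASE_URL}/contents/generations/tasks/{task_id}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Calling `get_task("cgt-...", os.environ["ARK_API_KEY"])` returns the same JSON the SDK wraps, including the `status` field the loop above branches on.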

Image-to-video (I2V): animating a still image

Add an image_url object to content alongside your text prompt. The image will be the video's first frame.

resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {
            "type": "text",
            "text": "The woman slowly turns her head and smiles at the camera"
        },
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/portrait.jpg"}
        }
    ],
    ratio="adaptive",
    duration=5,
    watermark=False,
)
  • Set ratio: "adaptive" to match image aspect ratio (avoids crop/letterbox).
  • Max 30MB per image, up to 9 images per request.

First and last frame: controlling start and end points

For bookend frame control, supply the first and last frame images plus a text prompt. Use for transitions, morphs, or controlled motion.

resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {
            "type": "text",
            "text": "The flower blooms from bud to full open, macro lens, soft light"
        },
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/flower-bud.jpg"}
        },
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/flower-open.jpg"}
        }
    ],
    ratio="adaptive",
    duration=8,
    watermark=False,
)
  • Two images + text prompt triggers bookend mode.
  • Order matters: first frame first, last frame second.
  • Use return_last_frame: true to chain clips—pass last frame as next first frame.
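
The chaining pattern in the last bullet can be sketched as a loop. Here `submit_and_wait` is a hypothetical stand-in for your own wrapper that submits a task with return_last_frame enabled, polls it to completion, and returns the video and last-frame URLs; it is injected as a parameter so the logic stays testable:

```python
def chain_clips(submit_and_wait, prompts, first_frame_url):
    """Generate consecutive clips, feeding each clip's last frame into the next.

    `submit_and_wait(prompt=..., first_frame_url=...)` must return a dict
    with "video_url" and "last_frame_url" keys. It is a placeholder for
    your own submit-poll-download wrapper, not an official API call.
    """
    video_urls = []
    frame = first_frame_url
    for prompt in prompts:
        result = submit_and_wait(prompt=prompt, first_frame_url=frame)
        video_urls.append(result["video_url"])
        frame = result["last_frame_url"]  # becomes the next clip's first frame
    return video_urls
```

Stitch the downloaded clips together afterwards with a video editor or ffmpeg.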

Multimodal reference: combining images, video, and audio

Seedance 2.0 accepts mixed references in content:

resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {
            "type": "text",
            "text": "Match the visual style of the reference clip and add the provided background audio"
        },
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/style-reference.jpg"}
        },
        {
            "type": "video_url",
            "video_url": {"url": "https://example.com/motion-reference.mp4"}
        },
        {
            "type": "audio_url",
            "audio_url": {"url": "https://example.com/background-music.mp3"}
        }
    ],
    duration=10,
    ratio="16:9",
    watermark=False,
)
  • Up to 9 images (≤30MB each)
  • Up to 3 video clips (2–15s, ≤50MB each)
  • Up to 3 audio files (MP3, ≤15MB each)
  • V2V pricing: If a video reference is present, token rate is lower ($3.90 per million tokens).
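
A client-side check against these limits can fail fast before you spend a request. A small sketch (the limits are hard-coded from the list above; adjust them if the official docs change):

```python
# Per-request reference limits from the list above.
REFERENCE_LIMITS = {"image_url": 9, "video_url": 3, "audio_url": 3}

def check_reference_limits(content):
    """Raise ValueError if a content array exceeds the documented limits."""
    counts = {}
    for part in content:
        counts[part["type"]] = counts.get(part["type"], 0) + 1
    for ref_type, limit in REFERENCE_LIMITS.items():
        if counts.get(ref_type, 0) > limit:
            raise ValueError(
                f"{counts[ref_type]} {ref_type} parts exceed the limit of {limit}"
            )
```

Run it on the `content` list right before submitting; exceeding a limit server-side returns a 400 validation error instead.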

Native audio generation

Set generate_audio: true to generate audio and video together. The model produces dialogue, sound effects, and music, with lip sync in 8+ languages.

resp = client.content_generation.tasks.create(
    model="doubao-seedance-2-0-260128",
    content=[
        {
            "type": "text",
            "text": "A street musician plays guitar outside a cafe in Paris, crowds passing by, city sounds"
        }
    ],
    resolution="1080p",
    ratio="16:9",
    duration=10,
    generate_audio=True,
    watermark=False,
)
  • Audio is joint-generated with video for sync.
  • Slightly more tokens/cost than silent video.

Controlling resolution, ratio, and duration

  • resolution: "480p", "720p", "1080p", "2K" (default: "1080p")
  • ratio: "16:9", "9:16", "4:3", "3:4", "21:9", "1:1", "adaptive"
  • duration: 4–15 seconds (default: 5)
  • Use the fast model (doubao-seedance-2-0-fast-260128) for drafts and the standard model for production.
  • Pick Seedance 2.0 if you need multimodal input, bookend frame control, or native audio.

Reading the cost from the response

On success, the response includes a usage field:

{
  "usage": {
    "completion_tokens": 246840,
    "total_tokens": 246840
  }
}
  • 15s 1080p ≈ 308,880 tokens
  • 5s 1080p ≈ 102,960 tokens

Pricing:

  • T2V/I2V 1080p: 46 yuan per million tokens (~$6.40)
  • V2V: 28 yuan per million tokens (~$3.90)

Estimate cost:

  • 5s 1080p: ~102,960 tokens ≈ 4.74 yuan (~$0.66)
  • 10s 1080p: ~205,920 tokens ≈ 9.48 yuan (~$1.32)

Multiply completion_tokens by your task type's rate to get the exact cost.
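
That multiplication fits in a small helper. A sketch using the rates above (the 7.2 CNY/USD exchange rate is an assumption chosen to match the ~$6.40 figure; update it with a current rate):

```python
# Rates from the pricing section, in yuan per one million tokens.
RATES_YUAN_PER_M_TOKENS = {"t2v": 46, "i2v": 46, "v2v": 28}

def estimate_cost(completion_tokens, task_type="t2v", yuan_per_usd=7.2):
    """Return (yuan, usd) for a finished task's completion_tokens."""
    yuan = completion_tokens / 1_000_000 * RATES_YUAN_PER_M_TOKENS[task_type]
    return yuan, yuan / yuan_per_usd
```

Feed it the `completion_tokens` value from each task's `usage` field to track per-video spend.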

Important: download the video within 24 hours

  • video_url expires 24 hours after success (returns 403 after).
  • Download immediately after status is succeeded.
  • execution_expires_after is for task record (48h), but video file URL expires in 24h.
  • Task history: only 7 days.

How to test the Seedance API with Apidog

Async workflows require chained API calls. Use Apidog's Test Scenarios to automate:

Apidog Seedance Test

Steps:

  1. Create Test Scenario:

    In Apidog, Tests → new scenario "Seedance 2.0 video generation". Set ARK_API_KEY as an environment variable.

  2. Add submit request:

    POST to https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks with Bearer {{ARK_API_KEY}}.

    Use your JSON body.

  3. Extract Task ID:

    Use Extract Variable processor: JSONPath $.id → environment variable TASK_ID.

  4. Add Wait processor:

    Wait 30s after submission.

  5. Add poll request in For loop:

    For loop (max 20 iterations):

    • GET https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks/{{TASK_ID}}
    • Wait processor (10s)
    • Break If: $.status == "succeeded" or "failed"
  6. Add assertions:

    After loop, assert:

    • $.status == "succeeded"
    • $.content.video_url is not empty

Apidog will show a full test report of each step. You can also import from cURL to speed up step creation.

Pricing breakdown: what a 10-second video costs

Seedance API uses pay-as-you-go token pricing.

| Task type | Rate (per 1M tokens) |
| --- | --- |
| T2V / I2V 1080p | 46 yuan (~$6.40) |
| V2V (video input) | 28 yuan (~$3.90) |

Typical 1080p video costs:

| Duration | Approx tokens | Cost (T2V/I2V 1080p) |
| --- | --- | --- |
| 5 seconds | ~103,000 | ~4.74 yuan / ~$0.66 |
| 10 seconds | ~206,000 | ~9.48 yuan / ~$1.32 |
| 15 seconds | ~309,000 | ~14.21 yuan / ~$1.97 |

Trial credits cover ~8 full 15s runs. For cost control, use lower resolutions (e.g., 720p for dev).

Common errors and fixes

429 Too Many Requests

  • Indicates concurrency limit. Use exponential backoff (start 10s, double to max 60s).
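
That backoff policy can be factored into a reusable wrapper. A sketch (`RateLimitError` is a placeholder for however your HTTP layer surfaces a 429; `sleep` is injectable so the schedule can be tested without waiting):

```python
import time

class RateLimitError(Exception):
    """Placeholder for a 429 Too Many Requests response."""

def with_backoff(fn, start=10, cap=60, max_tries=6, sleep=time.sleep):
    """Call fn(), retrying on RateLimitError with doubling waits (10s, capped at 60s)."""
    wait = start
    for attempt in range(max_tries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise  # out of retries, let the caller handle it
            sleep(wait)
            wait = min(wait * 2, cap)
```

Wrap your submit or poll call in a zero-argument lambda: `with_backoff(lambda: client.content_generation.tasks.create(...))`.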

status: "failed"

  • Prompt may violate safety, input file too large/corrupt, or bad parameter combo. Check and retry.

status: "expired"

  • Queued too long, often during peak load. Resubmit.

403 on video_url

  • URL expired (24h limit). Regenerate using the same parameters and saved seed.

Seed reproducibility

  • Save seed from responses; use in new requests with same params to reproduce output.

Conclusion

Seedance 2.0 API enables advanced, automated video generation for developers. Use the async POST-poll-download pattern. Leverage multimodal inputs, native audio, and bookend control for complex workflows. Automate your API tests with Apidog to catch polling or expiry issues early.

FAQ

Q: What's the difference between doubao-seedance-2-0-260128 and doubao-seedance-2-0-fast-260128?

A: Standard = higher quality; Fast = quicker, lower quality. Use fast for prototyping, standard for production.

Q: Can I use Seedance 2.0 outside China?

A: Yes, but endpoint is Beijing region (higher latency). Check Volcengine ToS for restrictions.

Q: How do I chain multiple clips into a longer video?

A: Set return_last_frame: true to get last frame, use as first frame of next request. Stitch clips with a video editing tool.

Q: Does native audio generation cost more?

A: Yes, slightly higher token use due to joint audio-video generation.

Q: Can I set a webhook instead of polling?

A: Yes, pass callback_url in submit request. API will POST result when ready.

Q: What happens if I exceed the 9-image limit?

A: API returns 400 validation error. Limit images to 9 per request.

Q: Is the seed parameter guaranteed to reproduce the exact same video?

A: Not guaranteed, but increases reproducibility if parameters and model version are unchanged.

Q: How do I track spending across multiple tasks?

A: Read completion_tokens from each task, multiply by your rate. Build your own tracking system—API does not provide a spending dashboard.
