Evan-dong

How to Use Seedance 2.0 API: A Complete Step-by-Step Guide

Seedance 2.0 API is a powerful video generation platform that transforms text prompts, images, and multimodal references into professional AI-generated videos. Whether you're building a creative app, automating video production, or experimenting with AI-powered content creation, this guide will walk you through everything you need to know.

What is Seedance 2.0 API?
Seedance 2.0 API enables developers to generate AI videos through a simple, unified workflow. The API supports three primary generation modes: text-to-video, image-to-video, and reference-to-video, each available in both standard and fast variants. All models follow the same asynchronous task-based pattern, making integration straightforward and consistent.

Understanding the Workflow
Before diving into the technical details, it's important to understand how Seedance 2.0 API works. Video generation is inherently time-intensive, so instead of returning results synchronously, the API uses an asynchronous, task-based flow. The workflow follows four simple steps:

Create a generation task by sending your request with model selection, prompt, and parameters. The API immediately returns a task ID without waiting for the video to complete.

Receive your task ID instantly, allowing your application to continue processing other requests or inform users that generation has started.

Poll the task status periodically to check progress, or configure a callback URL to receive automatic notifications when generation completes.

Download the generated video once the task status shows "completed," using the URLs provided in the response payload.
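The four steps above can be sketched end to end in Python. This is a minimal sketch, not an official client: the `generate_video` helper name, the hardcoded `YOUR_API_KEY` placeholder, and the 5-second poll interval are my own choices, built around the endpoints shown later in this guide.

```python
import time

import requests

API_BASE = "https://api.evolink.ai/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def generate_video(payload, poll_interval=5):
    """Run the four-step workflow: create a task, then poll until it finishes."""
    # Steps 1-2: create the task; the API returns a task object with an ID immediately
    task = requests.post(f"{API_BASE}/videos/generations",
                         headers=HEADERS, json=payload).json()
    # Step 3: poll the task endpoint until a terminal status is reached
    while task["status"] not in ("completed", "failed"):
        time.sleep(poll_interval)
        task = requests.get(f"{API_BASE}/tasks/{task['id']}",
                            headers=HEADERS).json()
    if task["status"] == "failed":
        raise RuntimeError(f"Generation failed: {task.get('error')}")
    # Step 4: the result payload carries the download URLs
    return task["result"]
```

A production version would add timeouts and retry logic; the callback_url option described below avoids polling altogether.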

Step 1: Get Your API Key
Before making any API calls, you need an API key from EvoLink.ai. Store this key securely—you'll need it for authentication in every request.
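One common way to keep the key out of your source code is to read it from an environment variable. The variable name `EVOLINK_API_KEY` below is my own convention, not something the API mandates:

```python
import os

def get_api_key():
    """Load the EvoLink API key from the environment instead of hardcoding it."""
    key = os.environ.get("EVOLINK_API_KEY")
    if not key:
        raise RuntimeError("Set the EVOLINK_API_KEY environment variable first")
    return key
```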

Step 2: Choose Your Model
Seedance 2.0 offers six models across three generation modes:

Text-to-Video Models generate videos purely from text descriptions. Use seedance-2.0-text-to-video for high-quality prompt-based generation, or seedance-2.0-fast-text-to-video when speed is more important. These models are ideal for concept visualization, trend-aware content, and scenarios where you don't have reference images or videos.

Image-to-Video Models animate still images into dynamic video clips. The seedance-2.0-image-to-video and seedance-2.0-fast-image-to-video models accept one or two images. With a single image, the model treats it as the first frame and animates from there. With two images, it creates a smooth transition from the first frame to the last. This mode excels at product demos, social media content, and bringing static visuals to life.

Reference-to-Video Models offer the most control and flexibility. Both seedance-2.0-reference-to-video and seedance-2.0-fast-reference-to-video accept images, videos, and audio as reference inputs. You can extend existing videos, edit content with multimodal guidance, or create entirely new compositions that inherit style, motion, or audio characteristics from your references.
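The six model names follow a regular pattern, so a small helper can map a mode and speed preference to the right identifier. The `pick_model` function is an illustrative utility of mine, derived from the naming scheme above:

```python
def pick_model(mode, fast=False):
    """Map a generation mode ('text', 'image', or 'reference') to a model name."""
    modes = {"text", "image", "reference"}
    if mode not in modes:
        raise ValueError(f"mode must be one of {sorted(modes)}")
    prefix = "seedance-2.0-fast-" if fast else "seedance-2.0-"
    return f"{prefix}{mode}-to-video"
```

For example, `pick_model("image", fast=True)` returns `"seedance-2.0-fast-image-to-video"`.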

Step 3: Prepare Your Request
Every generation request requires several key parameters. The model parameter specifies which Seedance 2.0 variant you're using. The prompt is your creative instruction—be specific and descriptive about camera movement, lighting, mood, and action. The duration controls output length, accepting values from 4 to 15 seconds, or -1 for smart duration that adapts to your prompt.

Quality and aspect ratio shape the visual output. Set quality to either 480p or 720p depending on your resolution needs and budget. The aspect_ratio parameter supports common formats like 16:9 for widescreen, 9:16 for vertical mobile content, 1:1 for square posts, and several others including adaptive which automatically selects the best ratio.

If you want synchronized audio, set generate_audio to true. For asynchronous workflows, include a callback_url pointing to your HTTPS endpoint—Seedance will POST the completed task data when generation finishes.
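Putting the parameters above together, a small builder function can validate the documented ranges before you send anything. `build_payload` is my own sketch; the field names and limits come from this section, but check the official docs for the full parameter list:

```python
def build_payload(model, prompt, duration=5, quality="720p",
                  aspect_ratio="16:9", generate_audio=False, callback_url=None):
    """Assemble a generation request body, validating the documented ranges."""
    if duration != -1 and not 4 <= duration <= 15:
        raise ValueError("duration must be 4-15 seconds, or -1 for smart duration")
    if quality not in ("480p", "720p"):
        raise ValueError("quality must be '480p' or '720p'")
    payload = {
        "model": model,
        "prompt": prompt,
        "duration": duration,
        "quality": quality,
        "aspect_ratio": aspect_ratio,
        "generate_audio": generate_audio,
    }
    # callback_url is optional; omit it entirely when polling instead
    if callback_url:
        payload["callback_url"] = callback_url
    return payload
```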

Step 4: Create Your First Video
Let's walk through a practical text-to-video example. This request generates a cinematic aerial shot of a futuristic city:

curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-text-to-video",
    "prompt": "A cinematic aerial shot of a futuristic city at sunrise, soft clouds, reflective skyscrapers, smooth camera motion",
    "duration": 5,
    "quality": "720p",
    "aspect_ratio": "16:9",
    "generate_audio": true
  }'

The API responds immediately with a task object:

{
  "id": "task-unified-1774857405-abc123",
  "model": "seedance-2.0-text-to-video",
  "object": "video.generation.task",
  "status": "pending",
  "progress": 0,
  "type": "video"
}

Save that task ID—you'll need it to check progress and retrieve your video.

Step 5: Check Task Status
Video generation takes time. Poll the task endpoint to monitor progress:

curl --request GET \
  --url https://api.evolink.ai/v1/tasks/task-unified-1774857405-abc123 \
  --header 'Authorization: Bearer YOUR_API_KEY'

The response includes a status field that progresses through states: pending, processing, and eventually completed or failed. The progress field shows percentage completion. When status reaches completed, the response includes your video URLs in the result payload.

Here's a Python example that automates the polling process:

import requests
import time

def wait_for_completion(task_id, api_key):
    url = f"https://api.evolink.ai/v1/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {api_key}"}

    while True:
        response = requests.get(url, headers=headers)
        task = response.json()

        if task["status"] == "completed":
            return task["result"]
        elif task["status"] == "failed":
            raise Exception(f"Task failed: {task.get('error')}")

        print(f"Progress: {task['progress']}%")
        time.sleep(5)

Step 6: Download Your Video
Once the task completes, the result payload contains your generated video URLs:

{
  "id": "task-unified-1774857405-abc123",
  "status": "completed",
  "progress": 100,
  "result": {
    "video_url": "https://cdn.evolink.ai/videos/your-video.mp4",
    "thumbnail_url": "https://cdn.evolink.ai/thumbnails/your-thumbnail.jpg"
  }
}

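Saving the file from that payload is a straightforward streamed download. The `download_video` helper below is a sketch of mine around the `video_url` field shown above; streaming in chunks keeps memory use flat for large files:

```python
import requests

def download_video(result, path="output.mp4"):
    """Stream the finished video from the result payload to a local file."""
    response = requests.get(result["video_url"], stream=True, timeout=60)
    response.raise_for_status()
    with open(path, "wb") as f:
        # Write the body in 8 KB chunks rather than loading it all into memory
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    return path
```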
Advanced: Image-to-Video Generation
Image-to-video models animate still images. You can provide one image (first-frame animation) or two images (first-to-last-frame transition):

curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-image-to-video",
    "prompt": "Smooth camera push-in, warm lighting",
    "image_url": "https://example.com/product-shot.jpg",
    "duration": 5,
    "quality": "720p",
    "aspect_ratio": "16:9"
  }'

For two-image transitions, add an end_image_url parameter. The model will create a smooth interpolation between the two frames.
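As a sketch, a two-image request body differs from the single-image one only by the extra field; the image URLs here are placeholders:

```python
# Sketch of a two-image transition request body; end_image_url names the last frame
payload = {
    "model": "seedance-2.0-image-to-video",
    "prompt": "Smooth dissolve with warm lighting",
    "image_url": "https://example.com/first-frame.jpg",
    "end_image_url": "https://example.com/last-frame.jpg",
    "duration": 5,
    "quality": "720p",
    "aspect_ratio": "16:9",
}
```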

Advanced: Reference-to-Video Generation
Reference-to-video models offer the most flexibility. You can combine images, videos, and audio as reference inputs:

curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "seedance-2.0-reference-to-video",
    "prompt": "Extend this scene with dramatic lighting changes",
    "reference_video_url": "https://example.com/base-video.mp4",
    "reference_image_url": "https://example.com/style-reference.jpg",
    "duration": 10,
    "quality": "720p"
  }'

This mode excels at video extension, style transfer, and complex multimodal compositions.

Troubleshooting Common Issues
If your task fails, check the error field in the task response. Common issues include invalid image URLs, unsupported formats, or insufficient credits. Ensure your images are publicly accessible and in supported formats like JPG or PNG. Verify your API key is correct and has sufficient credit balance.
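A small helper can turn a failed task into a log-friendly message. This is my own sketch: it assumes the error field is either a string or an object with a message key, so verify the shape against your actual responses:

```python
def describe_failure(task):
    """Return a one-line summary of a failed task, or None if it didn't fail.

    Assumes error is a string or a dict with a 'message' key; the exact
    error shape isn't documented here, so adjust to what you observe.
    """
    if task.get("status") != "failed":
        return None
    error = task.get("error") or {}
    if isinstance(error, dict):
        message = error.get("message", "unknown error")
    else:
        message = str(error)
    return f"Task {task.get('id', '?')} failed: {message}"
```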

For tasks that remain in "pending" status longer than expected, check the EvoLink.ai status page for any service disruptions. During peak usage, generation may take longer than usual.

Next Steps
You now have everything you need to integrate Seedance 2.0 API into your projects. Start with simple text-to-video requests to familiarize yourself with the workflow, then experiment with image-to-video and reference-to-video modes as your needs grow more sophisticated.

For deeper technical details, explore the official documentation, review the GitHub repository examples, and join the EvoLink.ai community to share your creations and learn from other developers.

The future of video creation is here, and it's accessible through a simple API call. Start building today.
