Most AI video generators take a text prompt and give you whatever they feel like. Seedance 2.0 works differently — you upload images, videos, and audio files, then use @tags to tell the model exactly what each file should do.
Think of it like a film director's shot list. Each uploaded file gets a role:
- @Image1 as the first frame: pins the opening visual
- @Video1 for camera movement reference: copies the cinematography
- @Audio1 as background music: sets the soundtrack and rhythm
You can combine up to 12 files (9 images + 3 videos + 3 audio) in a single generation.
The @Tag Syntax
The format is simple: @ + asset type + number.
| Tag Type | Range | Example |
|---|---|---|
| @Image | 1-9 | @Image1, @Image2 |
| @Video | 1-3 | @Video1 |
| @Audio | 1-3 | @Audio1 |
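Because the syntax is this regular, you can lint a prompt before sending it. A small sketch using Python's standard `re` module; the `extract_tags` helper is an illustration, not part of the API:

```python
import re

# @ + asset type + number, with the ranges from the table above
# (Image 1-9, Video 1-3, Audio 1-3). Out-of-range tags simply don't match.
TAG_RE = re.compile(r"@(Image[1-9]|Video[1-3]|Audio[1-3])\b")

def extract_tags(prompt):
    """Return every valid @tag in a prompt, in order of appearance."""
    return TAG_RE.findall(prompt)

extract_tags("@Image1 as first frame, @Audio1 as background music")
# → ['Image1', 'Audio1']
```

An invalid tag like `@Audio4` is silently ignored by this pattern, which makes it easy to flag typos by diffing against the assets you actually uploaded.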
Quick API Example
```python
import requests

response = requests.post(
    "https://api.evolink.ai/v1/videos/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "seedance-2-0-t2v",
        "prompt": (
            "A cinematic sunset over the ocean, @Image1 as first frame, "
            "@Audio1 as background music. Slow dolly forward with warm golden light."
        ),
        "image_urls": ["https://example.com/sunset.jpg"],   # referenced as @Image1
        "audio_urls": ["https://example.com/ambient.mp3"],  # referenced as @Audio1
        "duration": 10,
        "quality": "1080p",
    },
)
```
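The snippet above fires the request but never inspects the result. A hedged sketch of response handling: `raise_for_status()` and `.json()` are standard `requests` calls, but the `video_url` field name is an assumption, so check the actual EvoLink response schema:

```python
def get_video_url(response):
    """Extract a video URL from a generation response (field name assumed)."""
    response.raise_for_status()   # raise on HTTP errors such as 401 or 429
    body = response.json()        # requests parses the JSON body for us
    return body.get("video_url")  # "video_url" is a guess, not the documented schema
```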
What Makes @Tags Different
No other AI video API offers this level of multimodal control:
- Sora 2: Text + single image input only, no audio reference
- Kling 3.0: Image-to-video but no @tag assignment system
- Veo 3.1: Text-only prompts, generates its own audio
Seedance 2.0's @tag system lets you direct the generation rather than just describe it.
Read the Full Guide
This is a condensed overview. The complete guide covers all @tag types with real prompt examples, file allocation strategies, common mistakes, and multi-modal combination recipes.
👉 Read the full @Tags guide on seedance2api.app
EvoLink provides unified AI API access — one key for all major AI models including Seedance 2.0, Sora 2, Veo 3.1, and more.