<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Atlas Cloud</title>
    <description>The latest articles on DEV Community by Atlas Cloud (@atlas_cloud_ai).</description>
    <link>https://dev.to/atlas_cloud_ai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3815847%2Fc1a61742-36b7-41d5-881a-7879f5e3bf07.jpg</url>
      <title>DEV Community: Atlas Cloud</title>
      <link>https://dev.to/atlas_cloud_ai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/atlas_cloud_ai"/>
    <language>en</language>
    <item>
      <title>Long video generation blog: How We Shipped SVI in Production</title>
      <dc:creator>Atlas Cloud</dc:creator>
      <pubDate>Thu, 07 May 2026 09:38:34 +0000</pubDate>
      <link>https://dev.to/atlas_cloud_ai/long-video-generation-blog-how-we-shipped-svi-in-production-5bln</link>
      <guid>https://dev.to/atlas_cloud_ai/long-video-generation-blog-how-we-shipped-svi-in-production-5bln</guid>
      <description>&lt;p&gt;In &lt;a href="https://www.atlascloud.ai/blog/guides/long-video-generation-blog-1" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;, we surveyed six approaches to long video generation — TTT, LoL, Self Forcing, Self Forcing++, Infinite Talk, and Helios — and landed on SVI as the only path that ships today without retraining a 14B model. This post is about what building with it actually looked like: how the clip-stitching loop works, why Error-Recycling matters, and the production numbers from our first deployment on TurboWan.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The choice: SVI (Stable Video Infinity)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;SVI's core philosophy is to turn infinite-length generation into stitching together a finite number of short clips with carefully designed memory transfer. That sounds modest until you realize it cleans up most of the engineering pain points at once: no base-model retraining (just a small LoRA mounted on TurboWan), constant VRAM, composability with existing speed distillation, and publicly released official LoRA weights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34g6h65rbmwb1jm0nkif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F34g6h65rbmwb1jm0nkif.png" alt="image8.png" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SVI's mental model. (a) Standard video generative models have a Train-Test Hypothesis Gap — they train on clean inputs but face noisy, error-accumulated inputs at inference. (b) Image restoration models are robust to errors but cannot generate new content. (c) SVI's Error-Recycling Fine-Tuning bridges both — using self-generated errors as supervisory signals so the model actively learns to identify and correct its own generation errors.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How clip stitching works&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each clip is 81 frames (5s @ 16fps). Generation is just a loop: condition the next clip on a global identity anchor and a short-term motion bridge from the previous clip, then concatenate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clip 1. &lt;/strong&gt;Inputs: ref image + empty motion memory. Output: a 5s clip. Extract motion memory: the latent of the last 4 frames. &lt;strong&gt;Clip 2. &lt;/strong&gt;Inputs: ref image + motion memory from clip 1. Output: a 5s clip. Extract motion memory from its tail. &lt;strong&gt;... &lt;/strong&gt;Repeat for N clips, then concatenate clip 1 + clip 2 + … + clip N into the long video.&lt;/p&gt;

&lt;p&gt;The clean part is that no DiT attention modification is needed. Historical context is concatenated at the input level as latents, and a small LoRA teaches the model to actually use that prefix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anchor latent. &lt;/strong&gt;User-provided reference image, encoded by the VAE → keeps subject / character appearance globally consistent. &lt;strong&gt;Motion latent. &lt;/strong&gt;Latent of the last 4 / 8 / 12 frames of the previous clip → tells the model how the last segment ended. &lt;strong&gt;Padding. &lt;/strong&gt;Aligns the input shape so the DiT sees one tidy concatenated sequence: anchor + motion + padding.&lt;/p&gt;
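To make the conditioning layout concrete, here is a minimal sketch of how the anchor + motion + padding sequence could be assembled. Names like `build_condition` and the toy latent shapes are our illustration, not the official SVI code:

```python
import numpy as np

def build_condition(anchor_latent, motion_latent, clip_latent_len):
    """Concatenate anchor + motion + zero padding along the frame axis
    so the DiT always sees one tidy, fixed-length conditioning prefix."""
    used = anchor_latent.shape[0] + motion_latent.shape[0]
    pad = np.zeros((clip_latent_len - used,) + anchor_latent.shape[1:])
    return np.concatenate([anchor_latent, motion_latent, pad], axis=0)

# toy latent shapes: (frames, channels, height, width)
anchor = np.random.randn(1, 16, 8, 8)   # VAE-encoded reference image
motion = np.random.randn(4, 16, 8, 8)   # tail of the previous clip
cond = build_condition(anchor, motion, clip_latent_len=21)
print(cond.shape)  # (21, 16, 8, 8)
```

Because the history arrives purely as input-level latents, nothing about the DiT's attention has to change; the LoRA learns to read the prefix.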

&lt;h3&gt;
  
  
  &lt;strong&gt;Error-Recycling Fine-Tuning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The detail that makes SVI hold up over many clips is how its LoRA is trained. Standard inference always starts denoising from pure Gaussian noise — but in long-video stitching, errors from earlier clips contaminate the conditioning for later clips. If you only ever train on clean reference inputs, you have baked in the train-inference gap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standard training: &lt;/strong&gt;every clip's reference inputs are clean ground truth → the model never sees the kind of noisy historical context it actually faces at inference, and discontinuities accumulate. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error-Recycling: &lt;/strong&gt;during training, deliberately inject the model's own past errors into the reference inputs, so the LoRA explicitly learns to operate on noisy historical context. Visual discontinuities at clip boundaries drop sharply.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jdyi2d66vi3dqr1s74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5jdyi2d66vi3dqr1s74.png" alt="image9.png" width="666" height="797"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SVI identifies two core error types. (a) Error-Free Flow Matching is the training-time trajectory. (b) Single-Clip Predictive Error — the per-clip drift between the denoising path and the ideal trajectory. (c) Cross-Clip Conditional Error — error-contaminated reference images cause cascading drift across clips. Error-Recycling explicitly injects both.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tgsd2vcpti8fto9hpjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tgsd2vcpti8fto9hpjx.png" alt="image10.png" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SVI training framework. (a) Inject DiT's self-generated errors into the latent space to break the error-free assumption. (b) Efficiently compute bidirectional errors via one-step forward / backward integration. (c) Store errors in a Replay Memory and dynamically resample for reuse, forming a closed-loop error supervision cycle.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;SVI separates two error types. &lt;em&gt;Single-clip Predictive Error&lt;/em&gt; is the per-clip drift between the denoising path and the ideal trajectory. &lt;em&gt;Cross-clip Conditional Error&lt;/em&gt; is the cascading drift caused when error-contaminated reference images flow into the next clip. Error-Recycling injects both, so the LoRA learns explicit error tolerance.&lt;/p&gt;
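The replay-memory mechanic can be sketched in a few lines. This is a toy of the idea, assuming a simple FIFO buffer and additive error injection; the class and method names are ours, not SVI's:

```python
import random
import numpy as np

class ErrorReplayMemory:
    """Toy version of SVI's closed-loop error recycling: store the model's
    self-generated residuals and resample them to contaminate clean
    reference latents during fine-tuning."""
    def __init__(self, capacity=1000):
        self.buffer, self.capacity = [], capacity

    def push(self, error):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)          # drop the oldest recorded error
        self.buffer.append(error)

    def contaminate(self, clean_latent, scale=1.0):
        if not self.buffer:
            return clean_latent          # cold start: behaves like standard training
        return clean_latent + scale * random.choice(self.buffer)

mem = ErrorReplayMemory()
clean = np.zeros((4, 16))
mem.push(np.ones((4, 16)) * 0.1)        # a recorded predictive error
noisy_ref = mem.contaminate(clean)       # reference now carries a past error
print(float(noisy_ref.mean()))  # 0.1
```

The LoRA is then trained on `noisy_ref` rather than `clean`, which is exactly the distribution it will face at inference after a few clips of stitching.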

&lt;h3&gt;
  
  
  &lt;strong&gt;LoRA variants&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;SVI ships three variants — &lt;em&gt;SVI-Shot&lt;/em&gt; for static-image → short-clip, &lt;em&gt;SVI-Dance&lt;/em&gt; for human motion (it can also take a pose-sequence input), and &lt;em&gt;SVI-Film&lt;/em&gt; for multi-shot / scene-transition long video. Hyperparameters: 81 frames per clip, num_motion_frames ∈ {4, 8, 12}, LoRA rank typically 16–64.&lt;/p&gt;
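Those hyperparameters fit in a small config. The key names below are hypothetical (ours, not the official repo's); the values mirror the ones stated above:

```python
# Hypothetical inference config mirroring the hyperparameters above.
SVI_CONFIG = {
    "variant": "SVI-Film",        # or "SVI-Shot", "SVI-Dance"
    "frames_per_clip": 81,        # 5s @ 16fps
    "num_motion_frames": 8,       # one of {4, 8, 12}
    "lora_rank": 32,              # typically 16-64
}
assert SVI_CONFIG["num_motion_frames"] in {4, 8, 12}
print(SVI_CONFIG["frames_per_clip"])  # 81
```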

&lt;h3&gt;
  
  
  &lt;strong&gt;Stacking on TurboWan&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We mount SVI's LoRA on top of TurboWan (a speed-optimized version of Wan built by Atlas), and we keep our specialized LoRA in the stack for style control. At inference, the multiple LoRA weight deltas are simply superimposed in a single pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Base. &lt;/strong&gt;TurboWan. &lt;strong&gt;LoRA 1. &lt;/strong&gt;Specialized LoRA — content / style control. &lt;strong&gt;LoRA 2. &lt;/strong&gt;SVI LoRA — long-video consistency. &lt;strong&gt;Combined. &lt;/strong&gt;TurboWan speed + SVI long-video continuity + specialized style control, all in one inference pass.&lt;/p&gt;
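Superimposing LoRAs is just summing their low-rank deltas into the base weights, layer by layer. A minimal sketch of that merge on one weight matrix (our illustration; real pipelines apply this per layer across the whole DiT):

```python
import numpy as np

def merge_loras(base_weight, loras):
    """Superimpose several LoRA deltas onto one base weight matrix:
    W' = W + sum_i alpha_i * (B_i @ A_i)."""
    merged = base_weight.copy()
    for A, B, alpha in loras:
        merged += alpha * (B @ A)
    return merged

d, r = 64, 8
W = np.zeros((d, d))
style_lora = (np.ones((r, d)), np.ones((d, r)), 0.5)   # LoRA 1: style control
svi_lora   = (np.ones((r, d)), np.ones((d, r)), 1.0)   # LoRA 2: SVI consistency
W_merged = merge_loras(W, [style_lora, svi_lora])
print(W_merged[0, 0])  # 12.0 = 0.5*8 + 1.0*8
```

Because the deltas are additive, the merged weights run at the same cost as the base model: one forward pass, no extra branches.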

&lt;p&gt;The full inference flow is straightforward: encode the reference into an anchor latent, concatenate it with the previous clip's motion latent and padding, run TurboWan's denoise, decode, append, and update the motion latent from the tail of the freshly-generated clip. After N iterations, concatenate everything into one video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. &lt;/strong&gt;Encode ref image → anchor latent. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. &lt;/strong&gt;y = concat(anchor latent, motion latent, padding). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. &lt;/strong&gt;Run TurboWan's 5-step denoise conditioned on y and the text embedding. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. &lt;/strong&gt;VAE-decode the clip and append to the output list. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. &lt;/strong&gt;Set motion latent = tail (last num_motion_frames) of the just-generated clip. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. &lt;/strong&gt;Repeat for N clips, then concatenate all of them.&lt;/p&gt;
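The six steps above can be sketched as one loop. The VAE and denoiser calls here are toy stand-ins, not the real TurboWan API:

```python
def generate_long_video(ref_image, prompts, num_motion_frames=4, clip_len=81):
    """Sketch of the six-step loop: encode, concat, denoise, decode,
    update motion, repeat. All model calls are illustrative stubs."""
    vae_encode = lambda img: [img]                    # step 1: ref -> anchor latent
    vae_decode = lambda latents: latents              # identity, for illustration
    def denoise(cond, prompt):                        # stand-in for the 5-step denoise
        return [f"{prompt}:{i}" for i in range(clip_len)]

    anchor, motion, video = vae_encode(ref_image), [], []
    for prompt in prompts:                            # one clip per prompt
        y = anchor + motion                           # step 2: concat conditioning
        latents = denoise(y, prompt)                  # step 3
        video.extend(vae_decode(latents))             # step 4: append the clip
        motion = latents[-num_motion_frames:]         # step 5: tail -> next bridge
    return video                                      # step 6: concatenated clips

out = generate_long_video("ref.png", ["shot 1", "shot 2", "shot 3"])
print(len(out))  # 243 frames = 3 clips x 81
```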

&lt;h2&gt;
  
  
  &lt;strong&gt;Some production numbers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Standard test: a single reference image and 3 prompts, generating ~15s output (3 clips × 5s):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Metric&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Value&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Generated duration&lt;/td&gt;
&lt;td&gt;15s (3 clips)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-clip inference time&lt;/td&gt;
&lt;td&gt;~14s (TurboWan fp8, single GPU)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total inference time&lt;/td&gt;
&lt;td&gt;~42s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subject consistency&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A worked example: Cat Adventure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make the cross-clip behavior concrete, we ran a 15-second case with one reference and three shots. The style prompt fixed a Pixar look with warm lighting; the character was an orange tabby kitten with big curious eyes; the three shots took it from windowsill, to sidewalk, to meeting a golden retriever, each with its own camera direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqyiryc4d0nxuy84w4rg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqyiryc4d0nxuy84w4rg.png" alt="image11.png" width="400" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clip 1 (0–5s): the orange Pixar kitten on a windowsill, with the camera slowly pulling back from a close-up. Style and character stay stable across frames.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6laagm4ugy45w1y9ix4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6laagm4ugy45w1y9ix4.png" alt="image12.png" width="400" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clip 2 (5–10s) at the transition boundary: the kitten's appearance matches Clip 1, then turns and shifts posture as it jumps down. The motion latent has carried the motion state across the boundary.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8za6fd3czd5dilwbuie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8za6fd3czd5dilwbuie.png" alt="image13.png" width="400" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clip 3 (10–15s): a golden retriever is introduced and the scene transitions toward an indoor / outdoor boundary. The kitten's Pixar style remains stable across all three clips.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Aggregate metrics for the run:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Metric&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Value&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total duration&lt;/td&gt;
&lt;td&gt;15s (3 clips × 5s)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total frames&lt;/td&gt;
&lt;td&gt;240 frames (16fps)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total inference time&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;33s (TurboWan, single GPU)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time-to-video ratio&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.2 s/s&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Subject consistency&lt;/td&gt;
&lt;td&gt;Pixar orange kitten stable throughout&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Clip boundary discontinuity&lt;/td&gt;
&lt;td&gt;No obvious jump cuts&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That is a 15-second long video in 33 seconds on a single GPU, with cross-clip subject consistency — well within the ≤ 60s wait we set as our target. On a 14-case internal test set, 9 cases came back with no obvious issues (a 64% pass rate).&lt;/p&gt;

&lt;p&gt;The honest closing observation is that in video generation, speed, length, and quality are three corners of an iron triangle. No single approach today leads on all three at once. The interesting work is in choosing which corner you can give up the least, given today's hardware and your training budget. SVI gives up a little length and a little boundary quality — and in exchange we ship long video with Wan2.2-class fidelity, on one GPU, today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>svi</category>
      <category>videogeneration</category>
    </item>
    <item>
      <title>Long video generation blog: Six Approaches, One Decision</title>
      <dc:creator>Atlas Cloud</dc:creator>
      <pubDate>Thu, 07 May 2026 09:35:29 +0000</pubDate>
      <link>https://dev.to/atlas_cloud_ai/long-video-generation-blog-six-approaches-one-decision-8l4</link>
      <guid>https://dev.to/atlas_cloud_ai/long-video-generation-blog-six-approaches-one-decision-8l4</guid>
      <description>&lt;p&gt;A few months ago we set ourselves a deceptively simple goal: produce coherent, high-quality video longer than 15 seconds, on a single GPU, in well under a minute of wall-clock time. Today's video diffusion models like Wan2.2 are good at 3–5 second clips. Stretching that to 10s, 30s, or a minute is where things get interesting.&lt;/p&gt;

&lt;p&gt;This post documents the route we actually took. We surveyed six approaches that show up in recent papers and tech reports — TTT, LoL, Self Forcing, Self Forcing++, Infinite Talk, and Helios — measured the trade-offs, and ultimately landed on SVI (Stable Video Infinity), wired up next to TurboWan in our DiffSynth Engine. We will go over each of those routes, then how SVI works, then the production numbers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why long video is hard&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Three things break when you push past about five seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The VRAM wall&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Wan2.2 uses Full Attention with O(n²) cost in the number of latent tokens. The math is unforgiving:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5s (81 frames): &lt;/strong&gt;~32.7k tokens, attention matrix ~10 GB. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10s (165 frames): &lt;/strong&gt;~65.5k tokens, attention matrix ~40 GB — already spills off a single GPU. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;30s (~500 frames): &lt;/strong&gt;~200k tokens, infeasible.&lt;/p&gt;
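The quadratic scaling behind those numbers is easy to verify. A back-of-envelope check (element counts only; actual bytes depend on dtype, head count, and whether fused kernels materialize the matrix at all):

```python
def attention_elements(num_tokens):
    """Element count of the full-attention score matrix:
    O(n^2) in the number of latent tokens."""
    return num_tokens ** 2

cost_5s = attention_elements(32_700)    # ~32.7k tokens at 5s / 81 frames
cost_10s = attention_elements(65_500)   # ~65.5k tokens at 10s / 165 frames
print(round(cost_10s / cost_5s, 1))     # 4.0: doubling the length quadruples the cost
```

This is why every long-video approach in this survey either caps the context, compresses it, or restructures attention entirely.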

&lt;p&gt;In practice, Self Forcing alone pushes an H200 to ~129 GB of VRAM at 165 frames, much of it just the KV cache.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Temporal drift&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Even when memory is fine, three drift modes show up. The Helios paper named them: &lt;em&gt;position shift&lt;/em&gt; (subjects wandering across the frame), &lt;em&gt;color shift&lt;/em&gt; (gradual hue and brightness drift), and &lt;em&gt;restoration shift&lt;/em&gt; (the model overcorrecting and producing visible discontinuities).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Causal consistency&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Standard video diffusion uses bidirectional Full Attention — every frame attends to every other. That means no streaming output: you cannot show frame 1 until frame N is done.&lt;/p&gt;

&lt;p&gt;Our concrete target was modest: ≥15 second video, smooth visual continuity, stable subjects across the whole clip, total wait under 60 seconds, minimal training, and a strong preference for reusing weights we already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The survey&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We looked at six families. The names are mostly paper titles; the categories will matter later.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 1 · TTT (Test-Time Training)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Paper: One-Minute Video Generation with Test-Time Training (arXiv 2504.05298, Apr 2025).&lt;/p&gt;

&lt;p&gt;The idea is to fine-tune the model during inference so it remembers what it has already generated. A small TTT layer (a 2-layer MLP, plus a gate and a local attention) gets inserted after Attention in every Transformer Block, and the model is trained on a curriculum that pushes from short clips out to a full minute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-block insertion: &lt;/strong&gt;after the standard attention, splice in a Gate, a TTT Layer, and a Local Attention, then a LayerNorm. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Curriculum: &lt;/strong&gt;train on progressively longer windows — 3s → 9s → 18s → 30s → 60s. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost: &lt;/strong&gt;256 H100s for ~50 hours.&lt;/p&gt;
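The defining trick is that the TTT layer's weights keep updating at inference time, acting as a learnable memory. A toy 2-layer-MLP version with a self-supervised reconstruction update (shapes and update rule are our illustration, not the paper's exact code):

```python
import numpy as np

class TTTLayer:
    """Toy test-time-training layer: a 2-layer MLP whose weights take a
    gradient step during the forward pass, so it accumulates memory of
    what it has already processed."""
    def __init__(self, dim, lr=0.01):
        rng = np.random.default_rng(0)
        self.w1 = rng.normal(scale=0.02, size=(dim, dim))
        self.w2 = rng.normal(scale=0.02, size=(dim, dim))
        self.lr = lr

    def forward_and_adapt(self, x):
        h = np.tanh(x @ self.w1)
        out = h @ self.w2
        # self-supervised reconstruction loss drives the test-time update
        grad = h.T @ (out - x) / len(x)
        self.w2 -= self.lr * grad
        return x + out            # residual, as in the per-block insertion

layer = TTTLayer(dim=8)
x = np.ones((4, 8))
before = layer.w2.copy()
_ = layer.forward_and_adapt(x)
print(bool(np.any(layer.w2 != before)))  # True: weights changed at inference time
```

It is exactly this in-block statefulness that clashes with static kernel optimizations, which is part of why we passed on the route.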

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdlo0wipajt4uz1rf7ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdlo0wipajt4uz1rf7ef.png" alt="image1.png" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;TTT — left: insertion point (Gate + TTT Layer + Local Attention + LayerNorm, attached after standard Attention via residual). Right: video segmented into 3-second clips, each handled by Local Attention internally, with the TTT Layer carrying global memory across segments.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It works — the paper reaches 1-minute generation. But the training cost is enormous, the experiments only cover CogVideoX 5B (transfer to Wan2.2 14B is unproven), and the inserted TTT layers conflict with the kernel optimizations we already rely on. Verdict: not selected.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 2 · LoL (Longer than Longer)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Paper: LoL: Longer than Longer, Scaling Video Generation to Hour (arXiv 2601.16914, Jan 2026).&lt;/p&gt;

&lt;p&gt;LoL targets a specific failure mode in autoregressive long video — &lt;em&gt;sink-collapse&lt;/em&gt;, where multi-head attention all converges onto the anchor frame and the video periodically reverts to its initial state. The fix is &lt;em&gt;Multi-Head RoPE Jitter&lt;/em&gt;: per-head random phase perturbations that break inter-head homogeneity. Training-free, plug-in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure mode: &lt;/strong&gt;sink-collapse — under autoregressive RoPE, distant frames' positional phases periodically realign with the anchor, attention concentrates, content snaps back to the anchor frame. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix: &lt;/strong&gt;give each attention head its own small random phase shift. Heads can no longer collapse to the same column. No retraining required, drops into existing models.&lt;/p&gt;
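In one dimension, the jitter idea looks like this. This is a toy of the mechanism, assuming a single rotation frequency per head; it is not LoL's actual implementation:

```python
import numpy as np

def rope_phase(positions, head, base_freq=0.01, jitter=None):
    """Per-head RoPE phase with an optional small random offset.
    Without jitter, every head's phase realigns with the anchor
    (position 0) at the same period; per-head offsets break that
    homogeneity so heads cannot all collapse onto the anchor column."""
    offset = 0.0 if jitter is None else jitter[head]
    return (positions * base_freq + offset) % (2 * np.pi)

rng = np.random.default_rng(0)
jitter = rng.uniform(0, 0.5, size=8)          # one small phase shift per head
pos = np.arange(4)
phases = [rope_phase(pos, h, jitter=jitter) for h in range(8)]
# with jitter, the 8 heads follow 8 distinct phase trajectories
print(len({round(float(p[0]), 6) for p in phases}))  # 8
```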

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favud5wbhh8monvrd1zrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favud5wbhh8monvrd1zrx.png" alt="image2.png" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;L2 distance to anchor vs. frame index. Self-Forcing++ (red) and LongLive (blue), both with sink, repeatedly snap back at specific frame positions — those are sink-collapse events where the video reverts to the anchor. LoL's Phase Alignment (green) eliminates the snap-back.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzikkvkchlo1z6w8sbozk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzikkvkchlo1z6w8sbozk.png" alt="image3.png" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Per-head attention maps. Top row: normal frames — heads have visibly different patterns. Bottom rows: during sink-collapse — every head looks the same, all collapsed onto the anchor frame's column. RoPE Jitter restores per-head diversity.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;LoL hits 12-hour video on CogVideoX/HunyuanVideo with little quality loss. The catch is that all the demos are static-ish scenes; we don't know how it survives dance, sports, or anything with strong motion. Plus we'd need to modify Wan2.2's attention. Verdict: adaptation cost is too high for unproven gains on motion content. Not selected.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 3 · Self Forcing (Causal Wan2.2)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Paper: Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion (arXiv 2506.08009, NeurIPS 2025 Spotlight).&lt;/p&gt;

&lt;p&gt;Self Forcing replaces Wan2.2's bidirectional Full Attention with &lt;em&gt;causal&lt;/em&gt; attention: a frame only attends to frames before it. That single change unlocks streaming generation — once chunk 1 is done, decode and ship it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bidirectional: &lt;/strong&gt;every frame attends to every other → must finish all 40 denoise steps before any frame can be shown. &lt;strong&gt;Causal: &lt;/strong&gt;a frame only sees its past → the first chunk can stream the moment it is done.&lt;/p&gt;

&lt;p&gt;The training trick is what gives the paper its name. Instead of training on clean ground-truth context (Teacher Forcing) or with custom attention masks (Diffusion Forcing), Self Forcing rolls out the actual inference path with a rolling KV cache, so train and inference distributions match.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generation loop: &lt;/strong&gt;denoise the next small chunk of frames using DMD's compressed step schedule, conditioned on a rolling KV cache built from already-generated frames. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stream: &lt;/strong&gt;as soon as a chunk finishes, VAE-decode and emit it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Carry-over: &lt;/strong&gt;push the new chunk's latents into the KV cache for the next chunk to attend to.&lt;/p&gt;
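The loop structure, with a bounded rolling cache, can be sketched as follows. The denoiser is a stub and the cache sizes are illustrative, not the Self Forcing codebase:

```python
from collections import deque

def stream_generate(num_chunks, cache_frames=12, chunk_frames=3):
    """Sketch of causal chunked generation: each chunk conditions only on
    cached past frames, streams out as soon as it finishes, and pushes
    its latents into a bounded KV cache (bounded -> constant VRAM)."""
    kv_cache = deque(maxlen=cache_frames)
    emitted = []
    for c in range(num_chunks):
        context = list(kv_cache)              # a frame only sees its past
        chunk = [f"frame{c}_{i}" for i in range(chunk_frames)]  # denoise stub
        emitted.append(chunk)                 # stream: decode and emit now
        kv_cache.extend(chunk)                # carry-over for the next chunk
    return emitted, len(kv_cache)

chunks, cached = stream_generate(num_chunks=10)
print(len(chunks), cached)  # 10 12: ten chunks emitted, cache capped at 12 frames
```

The `deque(maxlen=...)` is the whole VRAM story in miniature: evicting old frames keeps memory flat, but as the table below shows, a cache large enough to preserve quality at 14B scale is itself enormous.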

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfqyarjefvo919iefuds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfqyarjefvo919iefuds.png" alt="image4.png" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Three training paradigms compared: (a) Teacher Forcing trains on clean frames — at inference, noisy frames cause out-of-distribution drift; (b) Diffusion Forcing uses custom attention masks but still has train-inference mismatch; (c) Self Forcing replays the true inference process using a rolling KV cache, fully aligning training and inference.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We measured it on the FastVideo framework, single H200:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Length&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Frames&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Time&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;VRAM&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;5s&lt;/td&gt;
&lt;td&gt;81 frames&lt;/td&gt;
&lt;td&gt;70s&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10s&lt;/td&gt;
&lt;td&gt;165 frames&lt;/td&gt;
&lt;td&gt;168s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;129 GB (near capacity)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;20s&lt;/td&gt;
&lt;td&gt;321 frames&lt;/td&gt;
&lt;td&gt;287s&lt;/td&gt;
&lt;td&gt;129 GB (KV cache capped at 42 frames)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is architecturally the cleanest answer, and we genuinely like it. But 10s already saturates an H200's VRAM, quality drops at 165 frames, the original model needs causal-attention fine-tuning, and true streaming also needs a Causal Conv3D in the VAE. &lt;/p&gt;

&lt;p&gt;Verdict: wait for the community to chip away at VRAM and quality. Not adopted for now.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 4 · Self Forcing++&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Paper: Self-Forcing++: Towards Minute-Scale High-Quality Video Generation (arXiv 2510.02283, Oct 2025).&lt;/p&gt;

&lt;p&gt;Builds on Self Forcing with three additions: &lt;em&gt;Backward Noise Initialization&lt;/em&gt; (each new chunk starts from noise back-integrated from already-generated frames, removing chunk-boundary discontinuities); &lt;em&gt;Extended DMD alignment&lt;/em&gt; (slice 5s windows from a long rollout and align them against a teacher's short-window output); and a &lt;em&gt;GRPO&lt;/em&gt; stage with optical-flow reward to push for more dynamic motion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1. &lt;/strong&gt;Self-rollout the student for far longer than 5 seconds, accumulating a long draft using a rolling KV cache. &lt;strong&gt;Step 2. &lt;/strong&gt;Slice random 5s windows out of that draft, run them through Extended DMD against the teacher's short-window distribution to align. &lt;strong&gt;Step 3. &lt;/strong&gt;Refine with GRPO using optical-flow magnitude as reward, nudging the model toward more dynamic motion. &lt;strong&gt;Trick. &lt;/strong&gt;Each new chunk starts from noise back-integrated from the previous chunk, not from fresh Gaussian — so chunk boundaries no longer pop.&lt;/p&gt;
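The boundary trick can be sketched with the simple flow-matching interpolation x_t = (1 − t)·x₀ + t·ε: instead of starting from fresh Gaussian noise, the next chunk starts from noise carrying a trace of the previous chunk's frames. A toy of the idea, not the paper's exact integrator:

```python
import numpy as np

def backward_noise_init(prev_latents, t, rng):
    """Start the next chunk from noise back-integrated from already-
    generated frames, so its starting point lies on the previous
    chunk's trajectory and boundaries no longer pop."""
    eps = rng.standard_normal(prev_latents.shape)
    return (1 - t) * prev_latents + t * eps

rng = np.random.default_rng(0)
prev = np.full((4, 16), 5.0)               # tail latents of the previous chunk
init = backward_noise_init(prev, t=0.8, rng=rng)
# the init retains a trace of prev: its mean is pulled toward (1-0.8)*5 = 1.0,
# whereas a fresh Gaussian init would have mean ~0
print(round(float(init.mean()), 2))
```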

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzt9goa52nn49svm9dyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzt9goa52nn49svm9dyo.png" alt="image5.png" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Left to right: CausVid (fixed training duration, train-inference mismatch) → Self Forcing (fixed duration + DMD alignment) → Self-Forcing++ (extended duration + Backward Noise Initialization + Extended DMD alignment). Bottom rows show training-stage and inference-stage correspondence.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Result: minute-scale video (up to about 4m15s) on a 1.3B Wan2.1. Great paper. For production we hit three walls: content is mostly static (low motion), the base model is 1.3B (a long way below Wan2.2 14B), and there is no released code or weights to bootstrap from. Verdict: not selected for now.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 5 · Infinite Talk (A2V)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A different shape of problem entirely — &lt;em&gt;Audio-to-Video&lt;/em&gt;, where audio drives continuous talking-head generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-chunk input bundle: &lt;/strong&gt;the new chunk's noisy latents, the audio features for that time window, the user-provided reference image, the last frame of the previous chunk, and a soft conditioning weight. &lt;strong&gt;Reference identity: &lt;/strong&gt;the reference image keeps long-term appearance stable. &lt;strong&gt;Adaptive constraint: &lt;/strong&gt;the soft weight tightens or relaxes the reference based on similarity drift. &lt;strong&gt;Motion bridge: &lt;/strong&gt;the previous chunk's last frame carries motion across boundaries.&lt;/p&gt;

&lt;p&gt;It is good for what it is — talking heads, indefinitely. But the architecture differs enough from Wan2.2 that it requires dedicated training, and it does not generalize to general scenes. Verdict: valuable in a narrow lane, not a general long-video solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Route 6 · Helios&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Paper: Helios: Real-Time Long Video Generation Model (PKU-YuanGroup, arXiv 2603.04379, Mar 2026).&lt;/p&gt;

&lt;p&gt;As of writing, Helios is the SOTA for long video — 14B params, 19.5 FPS real-time on a single H100. The trick is to compress historical frames into a three-level pyramid and inject them into the current frame's denoising, so the token budget stays constant no matter how long the video gets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Term Memory. &lt;/strong&gt;Short-term history (last 3 frames) keeps full resolution; mid-term (last 20 frames) gets moderate compression; long-term (everything earlier) gets heavy compression. Total token budget is constant regardless of video length. &lt;strong&gt;Guidance Attention. &lt;/strong&gt;Inside each DiT block, clean historical KVs and noisy current QKVs are processed separately so historical noise cannot contaminate current denoising. &lt;strong&gt;Pyramid Sampling. &lt;/strong&gt;Sample at low resolution first to define structure, then refine to high resolution to add detail — about 2.3× fewer tokens overall.&lt;/p&gt;
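A toy token accounting shows why the pyramid keeps the budget constant: short- and mid-term windows are fixed-size, and everything older is squeezed into a fixed long-term budget. All the numbers below are illustrative, not Helios's actual ratios:

```python
def memory_token_budget(total_frames, tokens_per_frame=1560,
                        short_n=3, mid_n=20, mid_ratio=4, long_budget=8000):
    """Three-level history pyramid: full-res short-term (last short_n
    frames), moderately compressed mid-term (next mid_n frames), and a
    fixed token budget for all earlier long-term history."""
    short = min(total_frames, short_n) * tokens_per_frame
    mid = max(0, min(total_frames - short_n, mid_n)) * tokens_per_frame // mid_ratio
    long = long_budget if total_frames > short_n + mid_n else 0
    return short + mid + long

# the budget stops growing once the long-term tier kicks in
print(memory_token_budget(100) == memory_token_budget(10_000))  # True
```

With the token budget flat, per-frame compute is flat, which is how throughput stays constant with length.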

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm464qp1zg2mehwzi3a5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm464qp1zg2mehwzi3a5.png" alt="image6.png" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Helios architecture. Left: Unified History Injection — short / mid / long-term history compressed at different ratios, concatenated with the current frame before entering the DiT. Right: Pyramid Unified Predictor-Corrector — low token count first to define structure, then high token count to refine details, reducing computation by ~2.3×.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feezjjsxe8o7jugai7foi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feezjjsxe8o7jugai7foi.png" alt="image7.png" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Helios paper systematically defines and visualizes three categories of drift in long-video generation: (a) position shift, (b) color shift, (c) restoration shift (noise), (d) restoration shift (blur). Guidance Attention is specifically designed to address all three.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Helios's measured throughput on H200 is striking — basically flat with length:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Length&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Time&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;240 frames (10s)&lt;/td&gt;
&lt;td&gt;24s&lt;/td&gt;
&lt;td&gt;~10 FPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;480 frames (20s)&lt;/td&gt;
&lt;td&gt;42s&lt;/td&gt;
&lt;td&gt;~11.4 FPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;960 frames (40s)&lt;/td&gt;
&lt;td&gt;82s&lt;/td&gt;
&lt;td&gt;~11.7 FPS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;H100 single GPU (Helios-Distilled)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;19.5 FPS&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
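&lt;p&gt;The table's "basically flat" claim is easy to verify from the numbers themselves — total time roughly doubles when length doubles, i.e. per-frame cost is near-constant:&lt;/p&gt;

```python
# Throughput computed from the H200 measurements in the table above.
measurements = [(240, 24), (480, 42), (960, 82)]  # (frames, seconds)

def fps(frames, seconds):
    return round(frames / seconds, 1)

for frames, seconds in measurements:
    print(f"{frames} frames in {seconds}s: {fps(frames, seconds)} FPS")
```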

&lt;p&gt;The catch is that Multi-Term Memory Patchification needs full retraining of a 14B model. There are no released weights — only a tech report — so we cannot just bolt on a LoRA. Verdict: a medium-to-long-term direction; not deployable today.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Route Comparison Summary&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;All six routes side by side, with SVI added as the row we ultimately committed to:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Approach&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Max Duration&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Quality&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Training Required&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Engineering Difficulty&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Generality&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Rec.&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TTT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 minute&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Heavy training needed&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;★★☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LoL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hour-scale&lt;/td&gt;
&lt;td&gt;Medium (static only)&lt;/td&gt;
&lt;td&gt;Training needed&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;★★☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self Forcing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Theoretically unlimited&lt;/td&gt;
&lt;td&gt;Medium (degrades past 10s)&lt;/td&gt;
&lt;td&gt;Existing model&lt;/td&gt;
&lt;td&gt;High (VRAM issues)&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;★★★&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self Forcing++&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minute-scale&lt;/td&gt;
&lt;td&gt;Low (mostly static)&lt;/td&gt;
&lt;td&gt;Training needed&lt;/td&gt;
&lt;td&gt;Very high (no code)&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;★☆☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Infinite Talk&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;td&gt;High (talking head)&lt;/td&gt;
&lt;td&gt;Training needed&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Low (A2V only)&lt;/td&gt;
&lt;td&gt;★★☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Helios&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Theoretically unlimited&lt;/td&gt;
&lt;td&gt;High (industry SOTA)&lt;/td&gt;
&lt;td&gt;Full retraining&lt;/td&gt;
&lt;td&gt;Very high (no weights)&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;★★★☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SVI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Unlimited&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Medium-High&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Open-source LoRA&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;High&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;★★★★&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A taxonomy that fell out of the survey&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you squint, every approach we surveyed falls into one of three buckets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type A — extend the attention range itself &lt;/strong&gt;(Self Forcing, LoL, TTT). Have the model directly process longer sequences. Highest theoretical quality. VRAM grows quadratically, so engineering hits a wall around 10s today.&lt;/p&gt;
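&lt;p&gt;The quadratic wall is visible in a two-line sketch: the naive self-attention matrix over all video tokens has N² entries, so doubling the clip length quadruples the cost. The tokens-per-frame figure below is an assumption for illustration:&lt;/p&gt;

```python
# Why Type A hits a wall: full self-attention over all video tokens.
TOKENS_PER_FRAME = 1560  # assumed latent tokens per frame

def attention_entries(num_frames):
    n = num_frames * TOKENS_PER_FRAME
    return n * n  # the N x N attention matrix dominates at long lengths
```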

&lt;p&gt;&lt;strong&gt;Type B — hierarchical history compression &lt;/strong&gt;(Helios). Compress past frames and inject them as conditioning. Bypasses VRAM. Costs a full retraining of a 14B model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type C — stateful rolling generation &lt;/strong&gt;(SVI, Infinite Talk). Decompose long video into short clips with overlapping state. Constant VRAM, unlimited length, LoRA-only training. The trade is possible discontinuities at clip boundaries and unbounded long-term drift you can manage but not eliminate.&lt;/p&gt;
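&lt;p&gt;The Type C shape reduces to a simple rolling loop. This is a minimal sketch of the pattern, not SVI's actual interface — &lt;code&gt;generate_clip&lt;/code&gt; and the contents of &lt;code&gt;state&lt;/code&gt; are placeholders:&lt;/p&gt;

```python
# Minimal sketch of stateful rolling generation (the Type C pattern).
def generate_long_video(prompt, reference_image, num_clips, generate_clip):
    frames = []
    state = None  # carried memory, e.g. the previous clip's last frame
    for _ in range(num_clips):
        # Each clip sees only the prompt, the reference, and the carried
        # state, so peak VRAM is constant regardless of total length.
        clip, state = generate_clip(prompt, reference_image, state)
        frames.extend(clip)
    return frames
```

&lt;p&gt;Everything interesting lives inside &lt;code&gt;generate_clip&lt;/code&gt; and the state it hands forward — which is exactly where SVI's memory transfer and Infinite Talk's conditioning bundle differ.&lt;/p&gt;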

&lt;p&gt;For this quarter, Type C is what we ship. For next year, Type B is where we are watching the literature.&lt;/p&gt;




&lt;p&gt;In the next post, we go into what shipping actually looked like: the clip-stitching loop, why Error-Recycling matters, and the production numbers from our first deployment on TurboWan. &lt;a href="https://www.atlascloud.ai/blog/guides/long-video-generation-blog-2" rel="noopener noreferrer"&gt;Read Part 2 →&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>mlengineering</category>
      <category>diffusion</category>
    </item>
  </channel>
</rss>
