Raj Murugan

From Windows/Corona to Linux V-Ray Standalone on AWS Deadline Cloud – Architecture That Actually Worked

Over the last few weeks I moved a real production scene from Windows/Corona to Linux V-Ray Standalone on AWS Deadline Cloud. This isn't a hello-world write-up—it's the practical path that got 400 frames moving reliably, with the guardrails that kept things from falling over.

Why V-Ray Standalone on Linux

  • Cost and scale: spot capacity on Linux is abundant and significantly cheaper; per-frame costs dropped materially.
  • Stability: lean workers, faster boot, fewer moving parts than full DCC stacks.
  • Portability: a well-formed .vrscene plus clean paths travels across environments.

Architecture at a glance

  • Submit: 3ds Max exports a .vrscene and a tiny JSON sidecar with frame range and metadata.
  • Queue: the render manager expands the frame list (e.g., 0-399x1) into one task per frame (see the expansion sketch after this list).
  • Workers: Linux images/containers with V-Ray Standalone and a small pre-task that rewrites Windows/UNC paths to Linux mounts and validates every reference.
  • Storage options:
    • Job Attachments: upload scene+assets once; content-hash dedupe; great for portability.
    • Sync/Mounts: Resilio/NAS to EFS/FSx mounts; great for giant libraries and rapid iteration.
  • Output: frames land in S3 (or mounted storage) and mirror back on-prem if needed.
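For intuition, the expansion the render manager performs is easy to model; a minimal sketch in Python (the function name and the "start-end x step" parsing are mine, mirroring the 0-399x1 spec above, not a Deadline Cloud API):

def expand_frames(spec: str) -> list[int]:
    """Expand a 'start-endxstep' spec like '0-399x1' into explicit frame numbers."""
    rng, _, step = spec.partition("x")
    start, _, end = rng.partition("-")
    return list(range(int(start), int(end) + 1, int(step) if step else 1))

# 400 tasks, one per frame, mirroring what the queue does with 0-399x1
frames = expand_frames("0-399x1")
assert len(frames) == 400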

What I shipped first (MVP)

  • One .vrscene
  • One asset bundle (attachments) or a mounted project share
  • One pre-task: path mapping + sanity checks
  • One queue with a clean frame list
  • Checkpointing enabled (tunable interval) to survive interruptions

Path mapping that kept me sane

I avoid "parse everything" and instead declare path intents. A simple JSON map drives a conservative rewrite; anything not matching known roots is left untouched, then validated.

Example path map (sidecar)

{
  "mappings": [
    { "win": "C:\\Projects\\", "linux": "/mnt/projects/" },
    { "win": "\\\\nas\\assets\\", "linux": "/mnt/assets/" }
  ],
  "fps": 25,
  "start": 0,
  "end": 399,
  "step": 1
}

Pre-task outline (Python)

  • Read scene.meta.json and path map.
  • Scan the .vrscene for Windows/UNC paths, rewrite to Linux.
  • Verify existence; if any missing, print a short remediation report and exit non-zero.
  • Hand the cleaned .vrscene to V-Ray.
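A minimal sketch of that pre-task, assuming the pathmap.json layout shown earlier and a text-format .vrscene; the names and the regex are illustrative, not the exact production script:

import json
import re
import sys
from pathlib import Path

def load_map(pathmap_file: str) -> list[tuple[str, str]]:
    # Longest Windows roots first so a deeper root wins over a shallower one
    data = json.loads(Path(pathmap_file).read_text())
    return sorted(((m["win"], m["linux"]) for m in data["mappings"]),
                  key=lambda m: len(m[0]), reverse=True)

def remap(path: str, mappings) -> str:
    for win_root, linux_root in mappings:
        if path.lower().startswith(win_root.lower()):
            return linux_root + path[len(win_root):].replace("\\", "/")
    return path  # unknown root: leave untouched; validation will flag it if it matters

def main(scene_in: str, scene_out: str, pathmap_file: str) -> int:
    mappings = load_map(pathmap_file)
    # Conservative: only rewrite quoted strings that look like drive-letter or UNC paths
    win_path = re.compile(r'"((?:[A-Za-z]:\\|\\\\)[^"]+)"')
    missing = []

    def repl(match: re.Match) -> str:
        mapped = remap(match.group(1), mappings)
        if not Path(mapped).exists():
            missing.append(mapped)
        return f'"{mapped}"'

    text = Path(scene_in).read_text(errors="replace")
    Path(scene_out).write_text(win_path.sub(repl, text))

    if missing:
        print("Missing after remap:")
        for p in sorted(set(missing)):
            print("  " + p)
        return 1  # non-zero exit keeps V-Ray from starting on a broken scene
    return 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:4]))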

Attachments vs Sync (when I pick which)

Attachments

  • Pros: portable, deduped, reproducible; perfect for contained shots (<20-30 GB).
  • Cons: pay upload cost; less ideal for very frequent micro-edits.

Sync/Mounts

  • Pros: great for giant shared libraries, instant edits; familiar artist workflow.
  • Cons: cold caches and path drift can bite; reproducibility depends on discipline.

Rule of thumb now

  • Shot-specific data → Attachments
  • Global/shared libraries → Mounts
  • Hybrid is fine

Licensing notes that saved time

  • Bring-your-own licenses work well if the server is reachable with low latency—preflight a license ping and fail fast if checkout trips (see the sketch after this list).
  • Usage-based licensing is a clean burst option when seats run tight.
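The preflight ping above can be as simple as a TCP connect with a short timeout; a sketch, where the host and port are placeholders for your own license server:

import socket
import sys

LICENSE_HOST = "licenses.example.internal"  # placeholder: your V-Ray license server
LICENSE_PORT = 30304                        # placeholder: whatever port your server listens on

def license_reachable(host: str, port: int, timeout_s: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

if not license_reachable(LICENSE_HOST, LICENSE_PORT):
    print(f"License server {LICENSE_HOST}:{LICENSE_PORT} unreachable; failing fast.")
    sys.exit(1)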

Checkpointing/resume (aka sticky rendering)

  • Keep it on, but measure the overhead. I start at 10 minutes; 5-15 minutes is the practical band depending on frame length and disk I/O.
  • Store checkpoints on local NVMe; avoid remote writes in the hot loop.
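On the overhead question, with assumed numbers: a checkpoint write that takes ~20 seconds costs roughly 3% of wall-clock time at a 10-minute interval, but closer to 7% at 5 minutes—which is why short frames push you toward the long end of that band.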

How I made ~400 frames feel easy

  • 1 task per frame → clean retries and metrics.
  • Target workers ≈ frame count, respecting license ceilings.
  • Guardrails: budget tags, per-queue caps, idle scale-in after the tail finishes.
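On sizing, with assumed numbers: for 400 frames against a 250-seat license ceiling, I target 250 workers and let the remaining 150 tasks queue behind the first wave rather than over-provisioning workers that would idle on license checkout.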

Observability that actually mattered

  • Per-frame: queue wait, time-to-first-pixel, render time, upload time, cost per frame (see the sketch after this list).
  • Per-fleet: desired vs healthy, interruptions, cache hit rates.
  • Logs I read first: pre-task "missing assets" summary, V-Ray headers/footers.
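Cost per frame falls out of render time and the instance rate; a minimal sketch, with the hourly rate and frame time as assumed inputs rather than values pulled from AWS:

def cost_per_frame(render_seconds: float, hourly_rate_usd: float,
                   frames_per_instance: int = 1) -> float:
    """Rough per-frame cost: wall-clock render time billed at the instance rate."""
    return (render_seconds / 3600.0) * hourly_rate_usd / frames_per_instance

# Assumed numbers: a 22-minute frame on a $0.45/hr spot instance ≈ $0.165
print(round(cost_per_frame(22 * 60, 0.45), 3))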

Things that failed (and fixes)

  • Mixed slashes/whitespace in texture paths → normalize separators and quote paths (helper sketch after this list).
  • "Works on my machine" exports → validation that lists all external refs by type before export.
  • UNC vs drive letter roots → always include both in mapping.
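The separator/whitespace fix is worth keeping as one shared helper so the pre-task and any ad-hoc tooling agree; a sketch (the function name is mine):

import re

def normalize_path(raw: str) -> str:
    """Trim stray whitespace/quotes and collapse mixed separators before remapping."""
    cleaned = raw.strip().strip('"').replace("\\", "/")
    prefix = "//" if cleaned.startswith("//") else ""   # keep a UNC-style leading double slash
    return prefix + re.sub(r"/{2,}", "/", cleaned[len(prefix):])

print(normalize_path(' C:\\Projects\\tex//wood.jpg '))  # C:/Projects/tex/wood.jpg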

Submission checklist (copy-paste)

  • Render Setup has the correct frame list (or the submitter enforces it).
  • Export .vrscene + scene.meta.json + pathmap.json.
  • Pick one storage mode per shot (Attachments vs Sync) and stick to it.
  • Select the Linux V-Ray queue; set frames (e.g., 0-399x1).
  • Enable checkpoints; pick interval.
  • Tag the job (project, shot, budget).
  • Submit; only open logs when a task fails—start with the pre-task.

If you're starting today

  • Ship the pre-task and one shot with attachments first; it removes 80% of unknowns.
  • Add sync mounts later if iteration speed demands it.
  • Keep exporters simple and submitters smart: the job owns the frame list; the .vrscene is the payload.

What I'd love to hear

  • Your path-mapping rules for mixed Windows/UNC environments.
  • Checkpoint intervals that worked best for long frames.
  • Any gotchas with VRMesh/proxy paths across platforms.

If there's interest, I'll post the pre-task template, a scene.meta.json schema, and a ready-to-use dashboard for frame-time and cost.
