Prompting Pixels

Train a Custom Z‑Image Turbo LoRA with the Ostris AI Toolkit (RunPod Edition)

What we’re building

A complete, reproducible workflow to train a Z‑Image Turbo LoRA with the Ostris AI Toolkit, running on a rented GPU (RunPod). We’ll go from blank slate to a downloadable .safetensors LoRA, then load it into a downstream workflow (e.g., ComfyUI) to test the results with a trigger token.

You’ll learn:

  • How to spin up the right environment on RunPod
  • How to assemble and configure a dataset for concept training
  • How to pick the right model, adapter, and sample prompts for monitoring
  • How to kick off and observe training progress
  • How to export and use your LoRA in your own pipeline

💡 Pro tip: Z‑Image Turbo is fast and surprisingly VRAM‑friendly. Even before the base model drops, the distilled weights already make for practical LoRA experimentation.

Check out the accompanying video on YouTube.


TL;DR (Quick Reference)

  1. Start a RunPod instance using the “Ostris AI Toolkit” template.
  2. Create a dataset (8–20 images is a good starting point). Optionally add captions.
  3. New job → select Z‑Image Turbo + LoRA target.
  4. Set a unique trigger token (e.g., myuniqueconcept) and configure sample prompts.
  5. Run ~3,000 steps to start; expect ~1 hour on a high-end GPU (e.g., RTX 5090).
  6. Download the resulting LoRA (.safetensors) from the job’s Checkpoints.
  7. Load the LoRA into your favorite workflow (ComfyUI, etc.) and prompt with the trigger.

Step 1 — Spin up the GPU workspace

On RunPod, search for and launch the Ostris AI Toolkit template. Keep disk size generous (datasets and samples eat space as you iterate).

RunPod 'Deploy a Pod' UI screenshot with red arrows: select 'AI Toolkit - ostris - ui - official' template, edit/change template, adjust disk size, and press purple 'Deploy On-Demand' button; shows On-Demand $0.89/hr and RTX 5090 pod summary (200 GB disk).

🧪 Debug tip: If you see 0% GPU utilization during training, your job likely didn’t start or is stuck on CPU. Check the Training Queue and logs.


Step 2 — Assemble a tiny but consistent dataset

Hop into Datasets → New Dataset. Name it something meaningful; I like a short handle that matches my future trigger token.

Dark-mode 'OSTRIS AI-TOOLKIT' web UI showing the Datasets page with the left sidebar (Dashboard, New Job, Training Queue, Datasets highlighted, Settings), an empty main area with a 'Refresh' button, and red annotated arrows marking the steps to create a new dataset.

Upload 8–20 representative images. Aim for variety in poses and contexts, but keep the subject identity consistent.

OSTRIS AI-Toolkit dataset 'teach3r' screenshot: 3x3 grid of teacher thumbnails with overlays and trash icons, left nav and Add Images button.

Captions are optional. If you add them, keep the phrasing consistent (e.g., always include your trigger token).

🧭 Guideline: Resolution 1024×1024 is a solid baseline with Z‑Image Turbo. If your source images vary wildly, consider pre-cropping/centering the subject.
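
If your source folder is messy, a few lines of Pillow can do the pre-cropping and captioning in one pass. This is a minimal sketch under a few assumptions: the folder names and the teach3r trigger are placeholders from this walkthrough, and it presumes your toolkit build picks up sidecar .txt captions sitting next to each image.

```python
# Minimal sketch: center-crop/resize originals to 1024x1024 and write sidecar
# captions that always include the trigger token. Folder names, the trigger,
# and the ".txt sidecar" convention are assumptions, not toolkit requirements.
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("raw_images")       # hypothetical folder of originals
DST = Path("dataset/teach3r")  # folder you'll upload as the dataset
TRIGGER = "teach3r"            # must match the trigger token in the job
SIZE = 1024

DST.mkdir(parents=True, exist_ok=True)

images = sorted(p for p in SRC.glob("*")
                if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"})

for i, src in enumerate(images):
    img = Image.open(src).convert("RGB")
    # Center-crop to a square, then resize to the training resolution.
    img = ImageOps.fit(img, (SIZE, SIZE),
                       method=Image.Resampling.LANCZOS, centering=(0.5, 0.5))
    out = DST / f"{TRIGGER}_{i:03d}.png"
    img.save(out)
    # Optional caption: keep phrasing consistent and lead with the trigger.
    out.with_suffix(".txt").write_text(f"{TRIGGER}, photo of the subject")
```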


Step 3 — Configure the training job like a pro

Head to New Job:

  • Training name: something short you’ll recognize later
  • Trigger token: a unique string (avoid real words; e.g., xqteachu, zimg_concept01; see the snippet after this list)
  • Architecture: Z‑Image Turbo (LoRA target)
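
If you'd rather not invent a trigger by hand, a couple of lines of Python will mint a token that's guaranteed not to be a real word; the zimg_ prefix is just a convention from this example, not something the toolkit requires.

```python
# Illustrative only: generate a short, non-dictionary trigger token.
import secrets

trigger = "zimg_" + secrets.token_hex(3)  # e.g. "zimg_a3f91c"
print(trigger)
```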

OSTRIS AI-TOOLKIT 'New Training Job' UI screenshot; red arrows highlight Training Name/Trigger 'teach3r' and Model Architecture dropdown set to 'Z-Image Turbo'. Fields show GPU #0, Steps 3000, Target LoRA.

You’ll see a training adapter path. There’s also a newer “v2” adapter rolling out. If it’s available in your build, you can switch the file name from v1 to v2 to try it out.

Screenshot of a tweet about a v2 z-image-turbo training adapter above a split image: left shows model settings selecting Z-Image Turbo and training_adapter_v2.safetensors with Low VRAM on; right shows config lines highlighting training_adapter_v1.safetensors and training_adapter_v2.safetensors

Attach your dataset and set preview sampling. Samples during training are clutch—they confirm your LoRA is “taking.”

OSTRIS New Training Job UI: Dataset 1 panel, red arrow 'Select your Dataset...', target teach3r, 1024x1024 selected, sample settings shown, prompt contains 'bomb'.

For samples, create two contrasting prompts so you can inspect generalization:

  • “{trigger}, cinematic portrait, soft light, 85mm, bokeh”
  • “{trigger}, full body action scene, dynamic pose, outdoor, golden hour”

Ostris AI-TOOLKIT New Training Job UI showing SAMPLE settings (Sample Every 250, Width/Height 1024, Seed 42), two sample prompts with seeds and LoRA scale, and a red arrow and large red note: 'Recommended to change the prompts to test LoRA outputs during training'.

💡 Pro tip: Keep the LoRA strength modest when previewing (e.g., 0.7–0.9). Pushing it higher can overcook the result and hide issues until it’s too late.

If your GPU is tight on VRAM, turn on the Low VRAM option in the model panel.


Step 4 — Start the job and watch it like a hawk

Create Job → Training Queue → Play → Start.

Dark OSTRIS AI-Toolkit view for 'teach3r' showing progress and GPU/CPU stats; red arrow and text 'Click the play button to start training' point to the play icon top-right.

On a 5090, ~3k steps typically finish in about an hour with the default settings. If samples are configured every 250 steps, you’ll see the subject “phase in” across iterations.

🧪 Debug tip: If loss flatlines suspiciously early or samples look unrelated to your subject after ~1k steps, your trigger might not be present in the sample prompts, or your dataset is too small/too noisy.


Step 5 — Evaluate progress and export the LoRA

Open the Samples tab to review the training trajectory. You’ll usually see early samples that ignore the trigger, with later ones progressively adapting to your subject.

Screenshot of OSTRIS AI-TOOLKIT 'Job: ma1a' Samples tab showing four illustrated teacher-classroom panels, a hand cursor over the teacher, and left navigation menu

When it’s done, jump to the job Overview → Checkpoints. Download the newest .safetensors file—this is your LoRA.

OSTRIS AI-TOOLKIT job 'ma1a' UI showing 'Training completed' banner, terminal logs and progress bar, right sidebar with CPU/GPU stats and a checkpoints list; red annotation arrow points to the ma1a.safetensors download icon and cursor.

📦 Housekeeping: Save the training config alongside the .safetensors so you can reproduce tweaks later (steps, adapter version, dataset size, etc.).
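
One way to make that housekeeping automatic is a small copy script. Everything below is a placeholder from this walkthrough (paths, step count, dataset size), not metadata the toolkit exports on its own; record whatever you actually ran.

```python
# Sketch: archive the downloaded LoRA next to a small notes file so the run
# can be reproduced later. All values here are placeholders.
import json
import shutil
from pathlib import Path

lora = Path("downloads/teach3r.safetensors")  # the checkpoint you downloaded
archive = Path("loras/teach3r_v1")
archive.mkdir(parents=True, exist_ok=True)

shutil.copy2(lora, archive / lora.name)
(archive / "training_notes.json").write_text(json.dumps({
    "base_model": "Z-Image Turbo",
    "trigger": "teach3r",
    "steps": 3000,
    "dataset_size": 12,
    "adapter": "training_adapter_v1.safetensors",
    "sample_every": 250,
}, indent=2))
```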


Step 6 — Try the LoRA in your workflow

I like to validate in ComfyUI with a simple graph: base Z‑Image Turbo → CLIP encode prompt (including trigger) → sampler → VAE → preview.

ComfyUI node graph showing Load models and CLIP Text Encode nodes with prompt 'mala, school teacher shooting a basketball, smiling', connected sampler and VAE nodes, and a right-side cartoon image preview of a woman shooting a basketball on an outdoor court

Example prompt:

  • “myuniqueconcept, cheerful portrait, natural light, editorial style”

If the result skews too strongly to the subject or artifacts creep in, lower the LoRA strength a bit and re‑sample.
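
If nothing seems to change no matter what strength you use, it’s worth a quick sanity check that the downloaded file actually contains LoRA tensors before debugging prompts. Here’s a short sketch using the safetensors package; the path is hypothetical, and exact key names vary by trainer.

```python
# Sanity check: open the LoRA file and list its tensors.
from safetensors import safe_open

path = "loras/teach3r_v1/teach3r.safetensors"  # hypothetical location

with safe_open(path, framework="pt") as f:
    keys = list(f.keys())

print(f"{len(keys)} tensors")
print("sample keys:", keys[:5])
# Expect pairs of low-rank matrices (names containing "lora_A"/"lora_B" or
# "lora_down"/"lora_up"); an empty or tiny list usually means a bad download.
```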

Final output from one of my runs:

Smiling girl in a yellow cardigan and blue jeans tossing a basketball toward a hoop on an outdoor court with trees and a building in the background
