Prompting Pixels

One-command ComfyUI on Cloud GPUs: A Practical, Repeatable Setup

What we're building

A repeatable way to boot a cloud GPU (RunPod or Vast.ai), paste a single command, grab the exact ComfyUI version you want, auto-install your favorite custom nodes, and download models from Hugging Face/Civitai into the correct folders. No more “did I put that LoRA in the right place?” or “why is this template six months behind?”.

We’ll use a free script generator to produce the one-liner and show you how to tweak, debug, and extend it for your workflow.

Screenshot: the Prompting Pixels AI Launcher, with provider selection (Vast.ai), the one-line install command, Deploy on RunPod / Deploy on Vast.ai buttons, and preset save/load controls. Contact: deploy@promptingpixels.com.

💡 Pro tip: Time is literally money on cloud GPUs. Automating the boring parts pays for itself on the first run.


Why this was a pain (until now)

If you’ve tried to stand up ComfyUI on a fresh GPU instance, you’ve probably done some combination of:

  • Opening a terminal, then manually git pulling ComfyUI to get a newer version

  • Downloading models piecemeal from Hugging Face or Civitai

  • Guessing model folder locations (checkpoints vs LoRA vs VAE)

  • Cloning custom nodes and hoping dependencies resolve

  • Restarting ComfyUI multiple times to make new nodes appear

That’s too many manual steps, which means it’s slow, error-prone, and easy to forget when you come back a week later.


The fix: generate a deployment command once, reuse forever

We’ll use the generator at https://deploy.promptingpixels.com/.

It outputs a one-line shell command that:

  • Installs or updates ComfyUI to a specific version

  • Downloads your selected models into the correct ComfyUI subfolders

  • Installs custom nodes you choose from the ComfyUI registry

  • Adapts paths to your provider (RunPod/Vast.ai) or your local OS

  • Supports tokens for gated downloads

Screenshot: the generator’s Installation Script panel on the One-Liner tab, showing the generated wget command, the Deploy buttons, and a configuration summary.

🧭 Heads up: The generated command is provider-aware. Pick the right target before copying.


Hands-on: from blank GPU to ComfyUI, step-by-step

We’ll demonstrate with Vast.ai, but I’ll call out RunPod differences as we go.

1) Launch a GPU instance

  • Vast.ai: Use an image/template that includes a Jupyter Terminal or shell access.

  • RunPod: Either the ComfyUI template or a general-purpose image with CUDA.

Open a terminal on the instance:

Screenshot: the Vast.ai Applications dashboard; open the Jupyter Terminal tile.

2) Generate your script

  • Visit https://deploy.promptingpixels.com/

  • Choose App: ComfyUI

  • Pick the provider (Vast.ai or RunPod)

  • Add Models:

    • Search from Hugging Face or Civitai
    • The generator will route each file to the correct ComfyUI directory
  • Add Custom Nodes:

    • Search popular nodes (e.g., Impact Pack) and add them
  • Optionally pin the ComfyUI version (handy for reproducible builds)

💡 Pro tip: Use presets to recreate environments from previous projects. Consistency saves debugging time.

3) Copy the one-liner and run it

Paste the generated command into your instance terminal. It’s typically a wget/curl command piped into bash. If you need tokens for gated models, export them first:

# Optional: tokens for gated downloads
export HF_TOKEN=hf_your_read_token_here
export CIVITAI_TOKEN=your_civitai_token_here

# Example shape of the generated command (yours will be specific)
bash <(wget -qO- https://deploy.promptingpixels.com/api/script/cmim9aus50008n5vdgw3g00yv) --hf-token=YOUR_HF_TOKEN

Screenshot: the instance’s Jupyter terminal with the virtual environment activated at /venv/main, running the generated wget command (token redacted).

Go grab a coffee. When it finishes, ComfyUI will be set up with your exact configuration.

4) Launch ComfyUI

Start ComfyUI from your provider’s UI (or the app menu). If new nodes don’t show up in the menu, do a quick restart of the app.
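If your provider’s UI doesn’t offer a launch button, you can start it by hand. A minimal sketch, assuming the common /workspace/ComfyUI layout and ComfyUI’s usual main.py entry point:

```shell
# Manual launch sketch - paths and flags are typical defaults, adjust as needed.
COMFYUI_ROOT="${COMFYUI_ROOT:-/workspace/ComfyUI}"
# --listen 0.0.0.0 exposes the UI outside the container; 8188 is the default port
LAUNCH_CMD="python main.py --listen 0.0.0.0 --port 8188"

if [ -d "$COMFYUI_ROOT" ]; then
  cd "$COMFYUI_ROOT" && $LAUNCH_CMD
else
  echo "ComfyUI not found at $COMFYUI_ROOT - set COMFYUI_ROOT first" >&2
fi
```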


Verify and debug like a developer

I like to sanity-check a fresh environment with a few quick commands.

Check the ComfyUI version you actually deployed

COMFYUI_ROOT="${COMFYUI_ROOT:-/workspace/ComfyUI}"  # adjust per provider if needed
git -C "$COMFYUI_ROOT" rev-parse --short HEAD

Confirm models landed in the right folders

ls -1 "$COMFYUI_ROOT/models/checkpoints" | head
ls -1 "$COMFYUI_ROOT/models/loras" | head
ls -1 "$COMFYUI_ROOT/models/vae" | head

See which custom nodes got installed

ls -1 "$COMFYUI_ROOT/custom_nodes" | sort | head -n 20

🧪 Tip: If a node has Python deps, open its README. Some custom nodes require an extra pip install or a build tool. The generator handles common cases, but niche nodes can have surprises.
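One way to mop up missed dependencies is to sweep every installed node for a requirements.txt. This is a generic sketch of that pattern, not something the generator itself emits:

```shell
# Install each custom node's declared Python deps, if it ships a requirements.txt.
COMFYUI_ROOT="${COMFYUI_ROOT:-/workspace/ComfyUI}"
for req in "$COMFYUI_ROOT"/custom_nodes/*/requirements.txt; do
  [ -f "$req" ] || continue   # unmatched glob leaves the literal pattern: skip it
  echo "Installing deps for $(basename "$(dirname "$req")")"
  pip install -r "$req"
done
```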


Provider-specific notes

  • Vast.ai common ComfyUI path:

    • /workspace/ComfyUI
  • RunPod common ComfyUI path:

    • /workspace/runpod-slim/ComfyUI

If you’re on a non-template image or your provider changed paths, set COMFYUI_ROOT manually and use the “Full Script” editor to update paths before running.
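To avoid hardcoding, a small sketch can probe the known layouts and fall back to an override (the $HOME/ComfyUI candidate is my own assumption for local installs):

```shell
# Print the first candidate directory that exists; callers pass the candidates.
detect_comfyui_root() {
  for candidate in "$@"; do
    [ -d "$candidate" ] && { echo "$candidate"; return 0; }
  done
  return 1
}

# Respect an explicit COMFYUI_ROOT; otherwise probe, defaulting to the Vast.ai path.
COMFYUI_ROOT="${COMFYUI_ROOT:-$(detect_comfyui_root \
  /workspace/ComfyUI \
  /workspace/runpod-slim/ComfyUI \
  "$HOME/ComfyUI" || echo /workspace/ComfyUI)}"
echo "Using COMFYUI_ROOT=$COMFYUI_ROOT"
```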

⚠️ Warning: Custom nodes appear after ComfyUI restarts. If you installed nodes while the UI was running, restart the service/app.


Full script mode: for tinkerers and control freaks

Click “Full Script” in the generator to see and edit everything it plans to run. This is great when you want to:

  • Pin exact commits for ComfyUI or nodes

  • Add extra pip packages

  • Change model destination directories

  • Integrate with a persistent volume

  • Insert health checks or post-install tests

Screenshot: the Full Script view with Vast.ai selected, showing the editable bash script including HF and Civitai token exports.

Example tweaks you might add:

# Pin a specific ComfyUI commit
git -C "$COMFYUI_ROOT" fetch --all
git -C "$COMFYUI_ROOT" checkout <commit-or-tag>

# Install extra Python deps required by a custom workflow
source /venv/bin/activate 2>/dev/null || true
pip install xformers==0.0.23 safetensors==0.4.3

# Verify GPU is visible
nvidia-smi || echo "No GPU found (driver/container mismatch?)"

Troubleshooting and “wish I knew this sooner”

  • Hugging Face 403? You probably need a token for that model/repo.

    export HF_TOKEN=hf_xxx
    
  • Slow model downloads: instance egress can be limited. Consider smaller test models first to validate paths.

  • Not enough disk: large checkpoints can exceed ephemeral storage. Use a larger volume or a persistent disk.

  • Node missing in the menu: restart ComfyUI after node install/update.

  • CUDA mismatch errors: make sure your image, driver, and PyTorch stack align. Templates help; bare images can drift.

  • Case-sensitive paths: ComfyUI model folders are strict: “checkpoints”, “loras”, “vae”, etc.

  • Port blocked? Verify the provider exposes the ComfyUI port (often 8188) and the service is running.
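For the port question specifically, a quick local probe from inside the instance (assuming curl is available and the default 8188 port):

```shell
# Probe the ComfyUI HTTP port from inside the instance.
PORT="${COMFYUI_PORT:-8188}"
if curl -sf --max-time 3 "http://127.0.0.1:${PORT}/" >/dev/null; then
  echo "ComfyUI is answering on port ${PORT}"
else
  echo "Nothing answering on port ${PORT} - check the service and the provider's port mapping"
fi
```

If this succeeds but the browser still can’t reach it, the problem is the provider’s port mapping, not ComfyUI.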

🛠️ Debug pattern I use: tail logs while launching the UI to spot import errors.

ps aux | grep -i "[c]omfy"   # is the process running? (bracket trick skips the grep line itself)
# Log location varies by image; fall back to the provider's logs panel if this path differs
tail -n 50 "${COMFYUI_ROOT:-/workspace/ComfyUI}/user/comfyui.log"

Developer tips

  • Use presets to create named environments for different workflows (e.g., “ControlNet Editing”, “Tiny SDXL Playground”).

  • Pin versions when collaborating so everyone runs the same stack.

  • For long downloads, wrap your terminal session in tmux/screen to avoid drops.

  • Cache model folders on a persistent volume to avoid re-downloading every session.

  • If you’re on Windows/macOS locally, point the generator to your ComfyUI path and generate a matching script for your OS.
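The caching tip above can be sketched as a one-time move-and-symlink; the persistent mount path is illustrative, so adjust it to your volume:

```shell
# Move the models folder onto a persistent volume once, then symlink it back,
# so later sessions reuse the downloads instead of refetching. First-run sketch:
# it assumes the persistent volume doesn't already hold a models/ directory.
cache_models_dir() {
  root="$1"; persist="$2"
  mkdir -p "$persist" 2>/dev/null || return 0   # bail quietly if the volume isn't mounted
  if [ -d "$root/models" ] && [ ! -L "$root/models" ]; then
    mv "$root/models" "$persist/models"
    ln -s "$persist/models" "$root/models"
  fi
}

cache_models_dir "${COMFYUI_ROOT:-/workspace/ComfyUI}" /workspace/persistent-models
```

Re-running it is a no-op: once models is a symlink, the function leaves it alone.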

💡 Pro tip: You can run the generator for local machines too. Set the install path and let it do the folder mapping for you.


Quick Reference (TL;DR)

1. Launch GPU instance (RunPod/Vast.ai) and open a terminal
2. In the generator, pick provider + ComfyUI version
3. Add models (HF/Civitai) and custom nodes
4. Copy the one-liner, set tokens if needed, paste into terminal
5. Launch ComfyUI; restart once to load new nodes
  • Helpful env vars:

    export HF_TOKEN=hf_xxx
    export CIVITAI_TOKEN=xxx
    export COMFYUI_ROOT=/workspace/ComfyUI  # adjust if your layout differs
    
  • Verify after install:

    git -C "$COMFYUI_ROOT" rev-parse --short HEAD
    ls "$COMFYUI_ROOT/models/checkpoints" | head
    ls "$COMFYUI_ROOT/custom_nodes" | head
    

If you’ve got feature ideas or run into edge cases, the tool’s maintained and open to feedback: deploy@promptingpixels.com

Happy building!
