Prompting Pixels
Ship ComfyUI on RunPod (Dev-Friendly): Cloud GPU, models, and zero local setup

What we’re building

A browser-based ComfyUI workstation running on a rented GPU (RunPod), preloaded with your favorite models and custom nodes, with no local installs. You’ll:

  • Launch a GPU pod with the official ComfyUI template
  • Install models, LoRAs, and custom nodes the fast way (one copy/paste one-liner)
  • Restart ComfyUI cleanly and verify everything works
  • Pull images off the pod and keep costs in check

If you like command-line control, I added optional CLI and debugging bits along the way.

TL;DR (Quick Reference)

  • Pick a GPU (3090 is great for learning; 5090 is fast if you need speed)
  • Use the RunPod ComfyUI template (no manual installs)
  • Enable the Web Terminal and paste a deployment one-liner from Prompting Pixels
  • Set Hugging Face and Civitai API tokens before running the script
  • Restart ComfyUI from Manager inside the UI
  • Download outputs via File Browser (port 8080)
  • Stop the pod when you’re done to avoid charges

The problem I was trying to solve

I wanted ComfyUI with good models and a clean environment without touching my local machine. My laptop can’t compete with a data center GPU, and I’m not babysitting CUDA installs or driver versions. RunPod + an official ComfyUI template + a model installer script is the shortest path I’ve found.

Plan of attack

1) Provision a GPU pod on RunPod

2) Select the official ComfyUI template

3) Wait for services to come up (ComfyUI on port 8188)

4) Use a one-liner to install models/custom nodes

5) Restart ComfyUI and generate images

6) Download results and pause billing


1) Provision the GPU pod

Sign into RunPod and open Pods. Filter for an affordable but capable GPU.

[Screenshot: RunPod "Deploy a Pod" instance-selection screen with the VRAM filter and featured GPU cards (RTX 5090, A40, H200 SXM)]

  • Starter pick: RTX 3090 (~$0.30–0.50/hr)
  • Faster iterations: RTX 5090 (~$0.75–0.90/hr)

💡 Pro tip: Don’t overspend on GPU at the start. You can always spin up a beefier pod later to re-run workflows faster.

Attach the right template

On the config screen, change the template to the official RunPod ComfyUI image.

[Screenshot: RunPod "Configure Deployment" screen with the default PyTorch 2.8.0 template selected]

Search “ComfyUI” and pick the RunPod-owned template (not a random user).

[Screenshot: "Explore Pod Templates" modal with a search for "ComfyUI" and the RunPod-owned template (runpod/comfyui:latest) highlighted]

Name your pod and deploy on-demand.

[Screenshot: deployment screen with the ComfyUI template, On-Demand pricing selected, and the "Deploy On-Demand" button; pod config shows 1x RTX 3090, 117 GB RAM, 200 GB disk]

⚠️ Warning: Keep an eye on allocated disk size. Models are big. 100–200 GB leaves room for a couple of XL checkpoints + LoRAs + outputs.


2) Wait for services, then open ComfyUI

RunPod will boot your pod and wire up HTTP ports. You want ComfyUI on port 8188 to be “Ready” (green).

[Screenshot: pod details showing HTTP Services: port 8080 FileBrowser, port 8188 ComfyUI, port 8888 JupyterLab, plus the SSH key setup box]

Click “ComfyUI” to open the UI in a new tab.

[Screenshot: ComfyUI Templates modal with image and video workflow template cards]

💡 Pro tip: JupyterLab (port 8888) and File Browser (port 8080) are also exposed. I use them for quick inspections/edits without SSH.


3) Install models and custom nodes (no manual path wrangling)

We’ll use a one-liner generator so you can pick models/nodes via checkboxes and paste a script once. Open:

  • deploy.promptingpixels.com
  • Choose your base models (e.g., JuggernautXL, RealVisXL)
  • Add LoRAs/styles and useful nodes (UltimateSDUpscale is solid)
  • Select “RunPod” as the provider
  • Copy the generated one-liner

Now in your pod’s Connect tab, enable the Web Terminal and open it:

[Screenshot: pod Connect tab with HTTP services listed and the "Enable Web Terminal" toggle]

Paste the command you copied. It will look conceptually like this:

# Example shape — use the exact one-liner from the site
bash <(wget -qO- https://deploy.promptingpixels.com/api/script/cmijdjpcm000knikfe33dqbc2) --hf-token=YOUR_HF_TOKEN --civitai-token=YOUR_CIVITAI_TOKEN

[Screenshot: Web Terminal running the deploy script with the --hf-token and --civitai-token values redacted]

⚠️ Warning: Replace YOUR_HF_TOKEN and YOUR_CIVITAI_TOKEN with real API tokens.

Prefer not to paste tokens in plain text?

export HF_TOKEN=hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXX
export CIVITAI_TOKEN=ct_XXXXXXXXXXXXXXXXXXXXXXXX
# Then paste the one-liner and change flags to:
# ... --hf-token "$HF_TOKEN" --civitai-token "$CIVITAI_TOKEN"
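Before pasting the installer, a ten-second pre-flight check that both variables actually made it into the shell can save you from a silent partial download. A bash sketch; the placeholder values mirror the exports above and must be swapped for real tokens:

```shell
# Demo values stand in for real tokens -- replace before running the installer.
export HF_TOKEN="hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
export CIVITAI_TOKEN="ct_XXXXXXXXXXXXXXXXXXXXXXXX"

missing=0
for var in HF_TOKEN CIVITAI_TOKEN; do
  # ${!var} is bash indirect expansion: the value of the variable named by $var.
  if [ -z "${!var:-}" ]; then
    echo "ERROR: $var is not set" >&2
    missing=1
  else
    echo "$var is set"
  fi
done
[ "$missing" -eq 0 ] && echo "tokens present -- safe to run the installer"
```

This only confirms the variables are non-empty; it doesn't validate the tokens against Hugging Face or Civitai.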

Go grab a drink; downloads can take a while.


4) Restart ComfyUI cleanly

After the installer finishes, switch to your ComfyUI tab and use the Manager to reload nodes and models.

[Screenshot: ComfyUI Manager v3.37.1 modal with the Restart button highlighted]

💡 Pro tip: If you see “Bad Gateway” once, wait ~30s and refresh. The service is coming back up.

At this point, models and nodes should show up in dropdowns and node pickers.


5) Generate and export results

Create a simple workflow, run it, and confirm outputs appear in ComfyUI’s output directory.

To batch-download images, use File Browser (port 8080).

[Screenshot: pod Connect view listing FileBrowser (8080), ComfyUI (8188), and JupyterLab (8888)]

Default credentials for File Browser:

  • Username: admin
  • Password: adminadmin12

Navigate: runpod-slim → ComfyUI → output, then right-click to download or multi-select.

💡 Pro tip: For automation, you can also zip the output folder and grab a single archive. Or sync to an object store if you want to get fancy.
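The zip-the-folder idea can be a one-liner with tar. A sketch: the output path assumes the template's default runpod-slim/ComfyUI/output layout, and the "demo setup" lines exist only to make the snippet runnable anywhere:

```shell
# Path used by the RunPod ComfyUI template (adjust if yours differs).
OUTPUT_DIR="runpod-slim/ComfyUI/output"
ARCHIVE="comfyui-outputs.tar.gz"

# Demo setup so the snippet runs standalone: fake an output dir with one file.
mkdir -p "$OUTPUT_DIR"
touch "$OUTPUT_DIR/sample.png"

# Bundle everything into a single archive you can grab via File Browser.
# -C changes into the parent dir so the archive contains just "output/...".
tar -czf "$ARCHIVE" -C "$(dirname "$OUTPUT_DIR")" "$(basename "$OUTPUT_DIR")"
ls -lh "$ARCHIVE"
```

One archive means one download click instead of a hundred, and it survives flaky multi-select downloads.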


Debugging and ops cheat sheet

When things go sideways, here’s what I actually run:

  • GPU sanity:
  nvidia-smi
  • Disk space:
  df -h
  du -h --max-depth=1 runpod-slim/ComfyUI/models | sort -h
  • Memory headroom:
  free -h
  • Permissions jank (rare, but happens with new files/folders):
  chmod -R u+rwX,g+rwX runpod-slim/ComfyUI
  • Quick reset if Manager restart isn’t enough:
    • Stop and start the pod from the RunPod dashboard
  • Verify ports:
    • ComfyUI on 8188, File Browser on 8080, JupyterLab on 8888

🧪 Tip for VRAM errors: Drop resolution or batch size, or disable high-memory nodes first. On a 24 GB card, SDXL at 1024px fits comfortably; it's large batches and heavy upscaling passes that usually blow the budget.
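The port checks above can be rolled into one loop. A sketch meant to run inside the pod's own terminal; it only tests HTTP reachability on localhost, and any response at all (even a 401 from File Browser) counts as "up":

```shell
# Ports wired up by the RunPod ComfyUI template.
report=""
for port in 8188 8080 8888; do
  # curl exits 0 on any HTTP response, nonzero if nothing is listening.
  if curl -s --max-time 2 -o /dev/null "http://localhost:$port"; then
    status="up"
  else
    status="down"
  fi
  echo "port $port: $status"
  report="$report $port=$status"
done
```

If 8188 shows "down" here but the dashboard says "Ready", the service is mid-restart; give it ~30 seconds.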


Gotchas (aka “wish I knew this earlier”)

  • Model sizes add up fast. An SDXL checkpoint runs roughly 7 GB; a few checkpoints plus LoRAs and a VAE can easily eat tens of gigabytes.
  • The first template boot can take a few minutes—don’t panic-refresh too early.
  • A “Ready” indicator for port 8188 is the source of truth. If it isn’t green, ComfyUI isn’t up yet.
  • After adding nodes, always restart via Manager before assuming “it didn’t install.”
  • Tokens matter: a bad Hugging Face or Civitai token silently causes partial downloads.
  • Billing is minute-by-minute. A forgotten running pod is an involuntary donation.

Keep costs under control

When you’re done, stop the pod to pause GPU charges while preserving your data. Termination deletes everything—use it only when you’re really done.

[Screenshot: RunPod Pods page with the pod's dropdown menu open and "Stop Pod" highlighted]

  • Stop = pause GPU billing, keep storage (a small disk-storage charge may still accrue on stopped pods)
  • Terminate = destroy pod and data
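For CLI fans, RunPod also ships a `runpodctl` tool that can stop a pod from any terminal. A sketch: the pod ID is a hypothetical stand-in for your own, the subcommand names are from `runpodctl`'s help as I remember them (double-check with `runpodctl --help`), and the snippet degrades gracefully if the CLI isn't installed:

```shell
# Hypothetical pod ID -- copy yours from the Pods page.
POD_ID="abc123xyz"

if command -v runpodctl >/dev/null 2>&1; then
  runpodctl stop pod "$POD_ID"       # pause billing, keep storage
  # runpodctl remove pod "$POD_ID"   # terminate: destroys pod AND data
  action="cli"
else
  echo "runpodctl not found -- use Stop Pod in the dashboard instead"
  action="dashboard"
fi
```

Handy when you want a cron job or a post-render script to shut things down so a forgotten pod can't keep billing overnight.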

💡 Pro tip: Name pods clearly (e.g., comfyui-sdxl-playground) so you don’t forget what’s safe to shut down.


Extras for power users

  • JupyterLab (8888) for quick scripts and file edits
  • SSH keys for terminal lovers (optional, Web Terminal works fine)
  • Volume planning: if you’re a model hoarder, allocate >200 GB up front
  • CI-like behavior: keep a lightweight pod for experimenting, spin a faster one for final renders

Quick Reference

  • GPU pick
    • 3090 = economical learning
    • 5090 = fast iteration
  • Services
    • ComfyUI: port 8188
    • File Browser: port 8080 (admin/adminadmin12)
    • JupyterLab: port 8888
  • Tokens
    • Set Hugging Face and Civitai API tokens before running the installer
  • Installer
    • Use deploy.promptingpixels.com, provider “RunPod”, copy the one-liner
  • Common commands
  nvidia-smi
  df -h
  free -h
  • Cost control
    • Stop pod when idle
    • Terminate only when you want a clean slate

What’s next

  • Add ControlNet and pose/edge preprocessors for surgical control
  • Wire up an S3 bucket to sync outputs automatically
  • Build a tiny HTTP service that triggers ComfyUI workflows (webhooks + queue)
  • Try LoRA training pods to create your own styles
  • Spin a second pod with a bigger GPU just for render days
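The S3 idea above is nearly a one-liner once the AWS CLI is configured. A sketch with a hypothetical bucket name, guarded so it degrades gracefully where the CLI or credentials are missing:

```shell
# Template's default output path; BUCKET is a hypothetical name -- use your own.
OUTPUT_DIR="runpod-slim/ComfyUI/output"
BUCKET="s3://my-comfyui-renders"

if command -v aws >/dev/null 2>&1; then
  # --size-only skips re-uploading unchanged images on repeated runs;
  # the || keeps the sketch from aborting if credentials aren't set up.
  aws s3 sync "$OUTPUT_DIR" "$BUCKET/outputs/" --size-only \
    || echo "sync failed -- check AWS credentials and bucket name"
  synced="attempted"
else
  echo "aws CLI not installed -- install it or use File Browser instead"
  synced="no"
fi
```

Run it on a loop (or after each render batch) and your images outlive the pod.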

If you ship anything cool with this setup, share your workflow JSON and model picks—I love seeing what people make with a clean ComfyUI rig.
