What we’re building
A browser-based ComfyUI workstation running on a rented GPU (RunPod), preloaded with your favorite models and custom nodes, with no local installs. You’ll:
- Launch a GPU pod with the official ComfyUI template
- Install models, LoRAs, and custom nodes the fast way (copy/paste one-liner)
- Restart ComfyUI cleanly and verify everything works
- Pull images off the pod and keep costs in check
If you like command-line control, I added optional CLI and debugging bits along the way.
TL;DR (Quick Reference)
- Pick a GPU (3090 is great for learning; 5090 is fast if you need speed)
- Use the RunPod ComfyUI template (no manual installs)
- Enable the Web Terminal and paste a deployment one-liner from Prompting Pixels
- Set Hugging Face and Civitai API tokens before running the script
- Restart ComfyUI from Manager inside the UI
- Download outputs via File Browser (port 8080)
- Stop the pod when you’re done to avoid charges
The problem I was trying to solve
I wanted ComfyUI with good models and a clean environment without touching my local machine. My laptop can’t compete with a data center GPU, and I’m not babysitting CUDA installs or driver versions. RunPod + an official ComfyUI template + a model installer script is the shortest path I’ve found.
Plan of attack
1) Provision a GPU pod on RunPod
2) Select the official ComfyUI template
3) Wait for services to come up (ComfyUI on port 8188)
4) Use a one-liner to install models/custom nodes
5) Restart ComfyUI and generate images
6) Download results and pause billing
1) Provision the GPU pod
Sign into RunPod and open Pods. Filter for an affordable but capable GPU.
- Starter pick: RTX 3090 (~$0.30–0.50/hr)
- Faster iterations: RTX 5090 (~$0.75–0.90/hr)
💡 Pro tip: Don’t overspend on GPU at the start. You can always spin up a beefier pod later to re-run workflows faster.
Attach the right template
On the config screen, change the template to the official RunPod ComfyUI image.
Search “ComfyUI” and pick the RunPod-owned template (not a random user).
Name your pod and deploy on-demand.
⚠️ Warning: Keep an eye on allocated disk size. Models are big. 100–200 GB leaves room for a couple of XL checkpoints + LoRAs + outputs.
2) Wait for services, then open ComfyUI
RunPod will boot your pod and wire up HTTP ports. You want ComfyUI on port 8188 to be “Ready” (green).
Click “ComfyUI” to open the UI in a new tab.
💡 Pro tip: JupyterLab (port 8888) and File Browser (port 8080) are also exposed. I use them for quick inspections/edits without SSH.
3) Install models and custom nodes (no manual path wrangling)
We’ll use a one-liner generator so you can pick models/nodes via checkboxes and paste a script once. Open:
- deploy.promptingpixels.com
- Choose your base models (e.g., JuggernautXL, RealVisXL)
- Add LoRAs/styles and useful nodes (UltimateSDUpscale is solid)
- Select “RunPod” as the provider
- Copy the generated one-liner
Now in your pod’s Connect tab, enable the Web Terminal and open it:
Paste the command you copied. It will look conceptually like this:
# Example shape — use the exact one-liner from the site
bash <(wget -qO- https://deploy.promptingpixels.com/api/script/cmijdjpcm000knikfe33dqbc2) --hf-token=YOUR_HF_TOKEN --civitai-token=YOUR_CIVITAI_TOKEN
⚠️ Warning: Replace YOUR_HF_TOKEN and YOUR_CIVITAI_TOKEN with real API tokens.
- Hugging Face: https://huggingface.co/settings/tokens
- Civitai: https://civitai.com/user/account
Prefer not to paste tokens in plain text?
export HF_TOKEN=hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXX
export CIVITAI_TOKEN=ct_XXXXXXXXXXXXXXXXXXXXXXXX
# Then paste the one-liner and change flags to:
# ... --hf-token "$HF_TOKEN" --civitai-token "$CIVITAI_TOKEN"
Go grab a drink; downloads can take a while.
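While you wait (or right after), it's worth confirming the files actually landed. A quick sketch — `runpod-slim/ComfyUI` matches the path File Browser shows later in this guide, but adjust it if your template lays things out differently:

```shell
# Assumed install path — same one File Browser shows later in this guide
COMFY=runpod-slim/ComfyUI
# List what arrived (SDXL fp16 checkpoints are roughly 6-7 GB each)
ls -lh "$COMFY/models/checkpoints" 2>/dev/null || echo "no checkpoints yet"
ls -lh "$COMFY/models/loras" 2>/dev/null || echo "no loras yet"
# Total model footprint — compare against your allocated disk
du -sh "$COMFY/models" 2>/dev/null || echo "models dir missing"
```

If a download silently failed (usually a bad token), you'll see it here as a suspiciously small or missing file.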
4) Restart ComfyUI cleanly
After the installer finishes, switch to your ComfyUI tab and use the Manager to reload nodes and models.
💡 Pro tip: If you see “Bad Gateway” once, wait ~30s and refresh. The service is coming back up.
At this point, models and nodes should show up in dropdowns and node pickers.
5) Generate and export results
Create a simple workflow, run it, and confirm outputs appear in ComfyUI’s output directory.
To batch-download images, use File Browser (port 8080).
Default credentials for File Browser:
- Username: admin
- Password: adminadmin12
Navigate: runpod-slim → ComfyUI → output, then right-click to download or multi-select.
💡 Pro tip: For automation, you can also zip the output folder and grab a single archive. Or sync to an object store if you want to get fancy.
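One way to build that single archive — a sketch assuming the same `runpod-slim/ComfyUI/output` path as above, using `tar` since it's preinstalled on most images:

```shell
# Assumed output path — the same folder File Browser navigates to
OUT=runpod-slim/ComfyUI/output
if [ -d "$OUT" ]; then
  # Bundle everything into one archive you can grab via File Browser
  tar czf comfy-outputs.tar.gz -C "$(dirname "$OUT")" "$(basename "$OUT")"
  ls -lh comfy-outputs.tar.gz
else
  echo "No output directory at $OUT yet"
fi
```

The archive lands in your current directory, so it shows up in File Browser like any other file.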
Debugging and ops cheat sheet
When things go sideways, here’s what I actually run:
- GPU sanity:
nvidia-smi
- Disk space:
df -h
du -h --max-depth=1 runpod-slim/ComfyUI/models | sort -h
- Memory headroom:
free -h
- Permissions jank (rare, but happens with new files/folders):
chmod -R u+rwX,g+rwX runpod-slim/ComfyUI
- Quick reset if Manager restart isn’t enough:
- Stop and start the pod from the RunPod dashboard
- Verify ports:
- ComfyUI on 8188, File Browser on 8080, JupyterLab on 8888
🧪 Tip for VRAM errors: Drop resolution, batch size, or disable high-memory nodes first. SDXL at 1024px fits comfortably on a 24GB card; it’s large upscales, big batches, and heavy node chains that exhaust VRAM.
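To see how close you are to the ceiling while a job runs, a small guarded one-liner (it degrades gracefully if you run it off-pod):

```shell
# Print used/total VRAM; append `-l 2` to refresh every 2 seconds
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found — run this on the GPU pod"
fi
```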
Gotchas (aka “wish I knew this earlier”)
- Model sizes add up fast. Plan roughly 7 GB per SDXL fp16 checkpoint, plus LoRAs, VAEs, and any upscalers.
- The first template boot can take a few minutes—don’t panic-refresh too early.
- A “Ready” indicator for port 8188 is the source of truth. If it isn’t green, ComfyUI isn’t up yet.
- After adding nodes, always restart via Manager before assuming “it didn’t install.”
- Tokens matter: a bad Hugging Face or Civitai token silently causes partial downloads.
- Billing is minute-by-minute. A forgotten running pod is an involuntary donation.
Keep costs under control
When you’re done, stop the pod to halt GPU charges while preserving your data. Termination deletes everything—use it only when you’re really done.
- Stop = pause GPU billing (disk storage may still accrue a small charge), keep your data
- Terminate = destroy the pod and its data
💡 Pro tip: Name pods clearly (e.g., comfyui-sdxl-playground) so you don’t forget what’s safe to shut down.
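If you prefer the CLI route mentioned at the start, RunPod’s `runpodctl` can stop pods without touching the dashboard. A sketch — the pod ID below is hypothetical, and it’s worth confirming the exact subcommands with `runpodctl --help` on your version:

```shell
# Hypothetical pod ID — list your real ones with `runpodctl get pod`
POD_ID=abc123xyz
if command -v runpodctl >/dev/null 2>&1; then
  runpodctl stop pod "$POD_ID"      # pause GPU billing, keep the volume
  # runpodctl remove pod "$POD_ID"  # destroy pod AND data — only when truly done
else
  echo "runpodctl not installed — see RunPod's docs for setup"
fi
```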
Extras for power users
- JupyterLab (8888) for quick scripts and file edits
- SSH keys for terminal lovers (optional, Web Terminal works fine)
- Volume planning: if you’re a model hoarder, allocate >200 GB up front
- CI-like behavior: keep a lightweight pod for experimenting, spin a faster one for final renders
Quick Reference
- GPU pick
- 3090 = economical learning
- 5090 = fast iteration
- Services
- ComfyUI: port 8188
- File Browser: port 8080 (admin/adminadmin12)
- JupyterLab: port 8888
- Tokens
- Hugging Face: https://huggingface.co/settings/tokens
- Civitai: https://civitai.com/user/account
- Installer
- Use deploy.promptingpixels.com, provider “RunPod”, copy the one-liner
- Common commands
nvidia-smi
df -h
free -h
- Cost control
- Stop pod when idle
- Terminate only when you want a clean slate
What’s next
- Add ControlNet and pose/edge preprocessors for surgical control
- Wire up an S3 bucket to sync outputs automatically
- Build a tiny HTTP service that triggers ComfyUI workflows (webhooks + queue)
- Try LoRA training pods to create your own styles
- Spin a second pod with a bigger GPU just for render days
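For the webhook idea above, note that ComfyUI already exposes an HTTP endpoint: you can POST a workflow (exported via the UI’s “Save (API Format)” option) to `/prompt`. A sketch, assuming you’re on the pod with a `workflow_api.json` export in the current directory:

```shell
# Assumes workflow_api.json was exported from the UI ("Save (API Format)")
# and ComfyUI is listening locally on port 8188
if [ -f workflow_api.json ]; then
  curl -s -X POST "http://127.0.0.1:8188/prompt" \
    -H "Content-Type: application/json" \
    -d "{\"prompt\": $(cat workflow_api.json)}" \
    || echo "ComfyUI not reachable on 8188"
else
  echo "Export workflow_api.json from the UI first"
fi
```

From there, a tiny HTTP service is mostly glue: receive a webhook, template the JSON, forward it to `/prompt`.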
If you ship anything cool with this setup, share your workflow JSON and model picks—I love seeing what people make with a clean ComfyUI rig.