ComfyUI WAN 2.5 Serverless Setup Guide (RunPod)
This tutorial covers the setup of ComfyUI WAN 2.5 on RunPod, including network storage, model downloads, JupyterLab configuration, and workflow testing.
1. Create Required Accounts
- RunPod – for GPU pods and serverless endpoints. Sign up via the referral link (depositing $10 may earn a $5–$500 credit bonus).
2. Fund Your RunPod Account
Ensure you have at least $10 in your RunPod account to spin up GPU pods.
3. Create Network Storage
- Navigate to Network Volumes in RunPod.
- Create a volume of at least 40 GB for persistent storage.
4. Deploy a Pod with GPU
- Click Deploy Pod from the dashboard.
- Select GPU (e.g., RTX A5000).
- Choose the template: ComfyUI Manager – Permanent Disk – torch 2.4
- Attach the network volume you created.
5. Launch the Environment
- Wait for the pod to finish installing.
- Open Jupyter Notebook via Connect.
- Run the terminal command:
./run_gpu.sh
6. Select Template
- Choose the WAN 2.2 Text to Video template for ComfyUI setup.
7. Install Necessary Models
Download and install all required WAN 2.2 models for video generation. The individual download commands are listed below, followed by an optional combined script.
a. High Noise Model
curl -L -o /workspace/ComfyUI/models/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors"
b. Low Noise Model
curl -L -o /workspace/ComfyUI/models/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors"
c. High Noise LoRA
curl -L -o /workspace/ComfyUI/models/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors"
d. Low Noise LoRA
curl -L -o /workspace/ComfyUI/models/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors"
e. VAE Model
curl -L -o /workspace/ComfyUI/models/vae/wan_2.1_vae.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors"
f. Text Encoder
curl -L -o /workspace/ComfyUI/models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"
8. Test the Workflow
- Ensure the model names referenced in the workflow match the downloaded files (a quick check is sketched after this list).
- Open ComfyUI and verify the workflow.
- Run the workflow to confirm there are no errors.
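Before queuing a job, it helps to list what actually landed on disk so the file names can be compared against the loader nodes in the workflow. A minimal check, assuming the default paths used above:

```bash
# List the downloaded models with their sizes for a quick sanity check.
ls -lh /workspace/ComfyUI/models/diffusion_models \
       /workspace/ComfyUI/models/loras \
       /workspace/ComfyUI/models/vae \
       /workspace/ComfyUI/models/text_encoders
```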
9. Move the Models Folder
Move the models folder to a higher-level directory for easier access:
mv /workspace/ComfyUI/models /workspace/
10. Clean Up Workspace
Remove the ComfyUI workspace folder to save space, keeping only the models:
rm -rf /workspace/ComfyUI
Tip: This ensures only the necessary models remain for serverless usage.
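Because the rm -rf step is destructive, a slightly safer variant of steps 9–10 is to chain the commands so the ComfyUI folder is only removed once the models directory has actually arrived at /workspace/models. A minimal sketch:

```bash
# Move the models out of ComfyUI, verify the move, then delete the rest.
mv /workspace/ComfyUI/models /workspace/ \
  && [ -d /workspace/models ] \
  && rm -rf /workspace/ComfyUI
ls /workspace/models   # should show diffusion_models, loras, text_encoders, vae
```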
11. Terminate the Pod
- Terminate the GPU pod to save costs.
- Your files will remain intact in Network Storage, ready for serverless deployment.
12. Upload to Private GitHub Repository
- Create a private GitHub repo.
- Upload the following:
- WAN serverless folder
- Dockerfile
- Snapshot
- Avoid tracking large model files unless necessary.
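To keep the repository small, one way to act on the "avoid tracking large model files" advice is a .gitignore that excludes weight files. A minimal sketch (the patterns are assumptions; adjust them to your repo layout):

```bash
# Append ignore rules so model weights never enter git history.
cat >> .gitignore <<'EOF'
*.safetensors
models/
EOF
```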
13. Deploy as Serverless Endpoint
- Connect the GitHub repo to RunPod.
- Add your HuggingFace token for model access.
14. Configure Endpoint Settings
- Attach the network storage.
- Add environment variables:
COMFY_POLLING_MAX_RETRIES=2000
COMFY_POLLING_INTERVAL_MS=500
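Assuming the worker polls ComfyUI every COMFY_POLLING_INTERVAL_MS milliseconds up to COMFY_POLLING_MAX_RETRIES times, these values allow roughly 2000 × 0.5 s ≈ 1,000 seconds (about 16–17 minutes) per job before the handler gives up; raise the retry count if your renders take longer.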
15. Save and Wait
- Save the configuration.
- Wait for the deployment to complete before testing.
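One way to confirm the endpoint is up (assuming the standard RunPod serverless API; ENDPOINT_ID and RUNPOD_API_KEY are placeholders you supply) is to query its health route:

```bash
# Check worker and job status for the serverless endpoint.
curl -H "Authorization: Bearer ${RUNPOD_API_KEY}" \
  "https://api.runpod.ai/v2/${ENDPOINT_ID}/health"
```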
16. Testing the Endpoint
Method 1: Postman
- Set Postman variables to match your serverless API endpoint.
- Update input keys and output paths according to the deployed workflow.
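For reference, here is a minimal request sketch against the standard RunPod serverless API, assuming your worker accepts a ComfyUI workflow under input.workflow (the exact input schema depends on the handler in your WAN serverless folder); ENDPOINT_ID, RUNPOD_API_KEY, and workflow_api.json are placeholders:

```bash
# Submit a workflow synchronously and wait for the result.
curl -X POST "https://api.runpod.ai/v2/${ENDPOINT_ID}/runsync" \
  -H "Authorization: Bearer ${RUNPOD_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "{\"input\": {\"workflow\": $(cat workflow_api.json)}}"
```

For long renders, the asynchronous /run route combined with polling /status/{id} may be a better fit than /runsync.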
Method 2: Custom Web App
- Update `.env` with the correct API URL and tokens.
- Ensure environment variables match the endpoint settings.
Notes & Best Practices
- If ComfyUI is modified, update Postman requests and web app code accordingly.
- Verify that model paths, API keys, and workflow names match your deployed setup.
- Regularly check endpoint status and update models if needed.
- Use network storage for persistent data between serverless invocations.
- For web apps, update `.env` variables such as:
  - COMFY_API_URL
  - HF_TOKEN
  - STORAGE_PATH
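An illustrative `.env` sketch (variable names follow the list above; all values are placeholders to replace with your own endpoint details):

```
COMFY_API_URL=<your-serverless-endpoint-url>
HF_TOKEN=<your-huggingface-token>
STORAGE_PATH=<path-to-models-on-network-volume>
```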
✅ End of ComfyUI WAN 2.5 Serverless Setup Guide