Shahtab Mohtasin
Complete Guide: ComfyUI WAN 2.2 on RunPod


ComfyUI WAN 2.2 Serverless Setup Guide (RunPod)

This tutorial covers the setup of ComfyUI WAN 2.2 on RunPod, including network storage, model downloads, JupyterLab configuration, and workflow testing.


1. Create Required Accounts

  • RunPod – for GPU pods and serverless endpoints. Sign up with the referral link (depositing $10 can earn a $5–$500 credit bonus).

2. Fund Your RunPod Account

Ensure you have at least $10 in your RunPod account to spin up GPU pods.


3. Create Network Storage

  1. Navigate to Network Volumes in RunPod.
  2. Create a volume of at least 40 GB for persistent storage.

4. Deploy a Pod with GPU

  1. Click Deploy Pod from the dashboard.
  2. Select GPU (e.g., RTX A5000).
  3. Choose template: ComfyUI Manager – Permanent Disk – torch 2.4
  4. Attach the network volume you created.

5. Launch the Environment

  1. Wait for the pod to finish installing.
  2. Open JupyterLab via the Connect button.
  3. Run the terminal command:
./run_gpu.sh

6. Select Template

  • Choose the WAN 2.2 Text to Video template for ComfyUI setup.

7. Install Necessary Models

Download all required WAN 2.2 models for video generation using the curl commands below; a consolidated download script follows at the end of this step.

a. High Noise Model

curl -L -o /workspace/ComfyUI/models/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors"

b. Low Noise Model

curl -L -o /workspace/ComfyUI/models/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors"

c. High Noise LoRA

curl -L -o /workspace/ComfyUI/models/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors"

d. Low Noise LoRA

curl -L -o /workspace/ComfyUI/models/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors"

e. VAE Model

curl -L -o /workspace/ComfyUI/models/vae/wan_2.1_vae.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors"

f. Text Encoder

curl -L -o /workspace/ComfyUI/models/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors \
"https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"
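
If you prefer a single script over running each download separately, here is a minimal bash sketch that fetches the same files to the same destinations as the commands above (it assumes the /workspace/ComfyUI/models layout created by the template and creates any missing subfolders):

#!/usr/bin/env bash
# Download every WAN 2.2 file listed above into the ComfyUI models directory.
set -euo pipefail

BASE22="https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files"
BASE21="https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files"
MODELS="/workspace/ComfyUI/models"

# "subdir|URL" pairs, matching the individual curl commands above.
FILES=(
  "diffusion_models|$BASE22/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors"
  "diffusion_models|$BASE22/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors"
  "loras|$BASE22/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors"
  "loras|$BASE22/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors"
  "vae|$BASE22/vae/wan_2.1_vae.safetensors"
  "text_encoders|$BASE21/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"
)

for entry in "${FILES[@]}"; do
  subdir="${entry%%|*}"
  url="${entry#*|}"
  mkdir -p "$MODELS/$subdir"
  # -C - resumes a partial download if the connection drops mid-file.
  curl -L -C - -o "$MODELS/$subdir/$(basename "$url")" "$url"
done

ls -lhR "$MODELS"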

8. Test the Workflow

  1. Ensure all model names match the downloaded files.
  2. Open ComfyUI and verify the workflow.
  3. Run the workflow to confirm there are no errors (a quick command-line check is sketched below).
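
As an optional sanity check from the JupyterLab terminal (assuming ComfyUI is running on its default port 8188), the server's built-in HTTP API can confirm that it is up and that the GPU is visible:

# Assumes ComfyUI is serving locally on the default port 8188.
curl -s http://127.0.0.1:8188/system_stats    # reports ComfyUI version plus GPU and VRAM details
curl -s http://127.0.0.1:8188/queue           # shows currently running and pending jobs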

9. Move the Models Folder

Move the models folder up one level on the network volume so that the models survive the cleanup in the next step:

mv /workspace/ComfyUI/models /workspace/

10. Clean Up Workspace

Remove the ComfyUI workspace folder to save space, keeping only the models:

rm -rf /workspace/ComfyUI

Tip: This ensures only the necessary models remain for serverless usage.
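
A quick way to verify the state of the volume after the cleanup (paths as used in the steps above):

ls /workspace/                 # should now show the models folder (plus anything else you chose to keep)
du -sh /workspace/models/*     # rough size check of each model subfolder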


11. Terminate the Pod

  • Terminate the GPU pod to save costs.
  • Your files will remain intact in Network Storage, ready for serverless deployment.

12. Upload to Private GitHub Repository

  1. Create a private GitHub repo.
  2. Upload the following:
  • WAN serverless folder
  • Dockerfile
  • Snapshot
  3. Avoid tracking large model files unless necessary (a minimal push sequence is sketched below).
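
Here is a minimal push sequence from a local machine, assuming a hypothetical folder and repo both named wan-serverless that hold the Dockerfile, snapshot, and worker code (adjust the names and remote to your setup):

cd wan-serverless                          # hypothetical local folder with the Dockerfile, snapshot, and worker code
git init
echo "*.safetensors" >> .gitignore         # keep large model files out of version control
git add .
git commit -m "WAN 2.2 serverless worker"
git branch -M main
git remote add origin git@github.com:<your-username>/wan-serverless.git   # the private repo created on GitHub
git push -u origin main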

13. Deploy as Serverless Endpoint

  1. Connect the GitHub repo to RunPod.
  2. Add your HuggingFace token for model access.

14. Configure Endpoint Settings

  • Attach the network storage.
  • Add environment variables:
COMFY_POLLING_MAX_RETRIES=2000
COMFY_POLLING_INTERVAL_MS=500
These settings control how long the worker keeps polling ComfyUI for a finished job (roughly 2000 retries × 500 ms ≈ 17 minutes here), which leaves headroom for longer video renders.

15. Save and Wait

  • Save the configuration.
  • Wait for the deployment to complete before testing (you can check endpoint readiness from the command line as shown below).
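
Once the build reports as completed, you can optionally check endpoint readiness from a terminal; RunPod serverless endpoints expose a health route (the endpoint ID below is a placeholder, and RUNPOD_API_KEY is your RunPod API key):

curl -s -H "Authorization: Bearer $RUNPOD_API_KEY" \
  "https://api.runpod.ai/v2/<ENDPOINT_ID>/health"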

16. Testing the Endpoint

Method 1: Postman

  • Set Postman variables to match your serverless API endpoint.
  • Update input keys and output paths according to the deployed workflow; an equivalent curl request is sketched below.
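
The same request can be sent with curl instead of Postman. The sketch below assumes the commonly used worker input shape of {"input": {"workflow": ...}}; the exact keys depend on your handler, so mirror whatever your deployed workflow expects:

# Placeholders: <ENDPOINT_ID>, $RUNPOD_API_KEY, and workflow_api.json (the API-format export of your ComfyUI workflow).
curl -s -X POST "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync" \
  -H "Authorization: Bearer $RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"input\": {\"workflow\": $(cat workflow_api.json)}}"

For long video renders, submitting to /run and then polling /status/<job_id> can be more practical than the blocking /runsync call.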

Method 2: Custom Web App

  • Update .env with the correct API URL and tokens.
  • Ensure environment variables match the endpoint settings.

Notes & Best Practices

  • If ComfyUI is modified, update Postman requests and web app code accordingly.
  • Verify that model paths, API keys, and workflow names match your deployed setup.
  • Regularly check endpoint status and update models if needed.
  • Use network storage for persistent data between serverless invocations.
  • For web apps, update .env variables such as:
COMFY_API_URL
HF_TOKEN
STORAGE_PATH
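
As a concrete illustration only, a hypothetical .env with the variable names above (all values are placeholders, and your app may use different or additional names):

# .env — placeholder values, adjust to your deployment
COMFY_API_URL=https://api.runpod.ai/v2/<ENDPOINT_ID>
HF_TOKEN=hf_xxxxxxxxxxxxxxxx
STORAGE_PATH=/runpod-volume/models    # assumption: the network volume mounts at /runpod-volume in serverless workers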

End of ComfyUI WAN 2.2 Serverless Setup Guide
