Thang Chung

Auto-Scaling ComfyUI-API and ComfyUI: Orchestrating GPU Workloads with Azure Kubernetes Service and KEDA

Introduction

I am currently working on a project that relies heavily on multiple AI features built on Stable Diffusion models. The AI team uses ComfyUI and ComfyUI-api as the workflow orchestrator for Stable Diffusion. This allows them to design and optimise workflows visually in ComfyUI, then export each AI feature as a workflow definition (in JSON format), for example:
https://github.com/SaladTechnologies/comfyui-api/blob/main/example-workflows/sd1.5/txt2img.json

comfyui

All model training and fine-tuning work (including LLM and VLM models) is currently performed on GPU-enabled virtual machines.

After joining the team as a Solutions Architect, I worked closely with the backend engineers to refactor large parts of the application layer. This included applying the Asynchronous Request–Reply pattern, agent workflow orchestration patterns, webhooks, Azure Blob Storage integration, and other architectural improvements. As a result, the application tier now scales significantly better.

However, a major bottleneck remained: all AI workloads were still running on a single GPU VM, exposed via docker-compose up. The same machine was used for training, fine-tuning, and inference, creating tight coupling, poor isolation, and limited scalability.

Since this domain was relatively new to me, I spent time surveying the ecosystem for tools that could support scalable inference. I evaluated Ollama, Foundry Local, vLLM, KServe, and KAITO. In practice, none of these solutions fit the requirement, because ComfyUI and ComfyUI-api are not simple LLM inference endpoints—they are a workflow-based Stable Diffusion orchestrator with their own API server.

In the end, I settled on a straightforward, production-ready solution: Azure Kubernetes Service (AKS) with KEDA enabled. I spent multiple nights provisioning, deploying, validating, and stress-testing each component.

This blog consolidates that deployment work. I will walk through:

  • Provisioning an AKS cluster with GPU nodes (Standard_NC4as_T4_v3)
  • Containerising ComfyUI and ComfyUI-api, including workflow and model downloads
  • Enabling KEDA-based autoscaling with HTTP add-on
  • Demonstrating the final results

Let’s get started.

Build the comfyui-api executable

My working environment is Windows 11 + WSL2 with Ubuntu 24.

First of all, you need to check out comfyui-api and build the executable:

git clone https://github.com/SaladTechnologies/comfyui-api.git
cd comfyui-api
npm install
npm run build-binary
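Before moving on, it's worth confirming the build produced the binary; the Dockerfile in the next section copies it from `bin/`:

```shell
# Sanity-check the build output; the Dockerfile below expects bin/comfyui-api
ls -lh bin/
file bin/comfyui-api
```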

Dockerize the comfyui and comfyui-api app

Package comfyui and comfyui-api, copy all the workflows in, and set up the model downloads (in this case, the Stable Diffusion checkpoints dreamshaper5.safetensors and dreamshaper_8.safetensors, plus the Qwen 3 VL custom node) in the Dockerfile:

# Custom ComfyUI API image with QwenVL support
# Base image: comfyui-api with ComfyUI 0.7.0, API 1.16.1, PyTorch 2.8.0, CUDA 12.8
FROM ghcr.io/saladtechnologies/comfyui-api:comfy0.7.0-api1.16.1-torch2.8.0-cuda12.8-runtime

# Set environment variables
ENV WORKFLOW_DIR=/workflows
ENV STARTUP_CHECK_MAX_TRIES=30

# Install ComfyUI-QwenVL custom node
WORKDIR /opt/ComfyUI/custom_nodes
RUN git clone https://github.com/1038lab/ComfyUI-QwenVL && \
    pip install --no-cache-dir -r ComfyUI-QwenVL/requirements.txt

# Optional: Install llama-cpp-python for GGUF support (uncomment if needed)
# RUN pip install --no-cache-dir llama-cpp-python

# Copy workflows into the image
COPY example-workflows/sd1.5 /workflows

# Copy the comfyui-api binary
COPY bin/comfyui-api /app/comfyui-api
RUN chmod +x /app/comfyui-api

# Copy the model download script (downloads models at runtime)
COPY docker/download-models.sh /app/download-models.sh
RUN chmod +x /app/download-models.sh

# Set working directory
WORKDIR /app

# Run the entrypoint script (downloads models then starts API)
CMD ["/app/download-models.sh"]

And download-models.sh:

#!/bin/bash
set -e

MODELS_DIR="/opt/ComfyUI/models/checkpoints"
mkdir -p "$MODELS_DIR"

# Download dreamshaper_8 if not exist
if [ ! -f "$MODELS_DIR/dreamshaper_8.safetensors" ]; then
  echo "Downloading dreamshaper_8.safetensors..."
  wget -q --show-progress -O "$MODELS_DIR/dreamshaper_8.safetensors" \
    "https://civitai.com/api/download/models/128713?type=Model&format=SafeTensor&size=pruned&fp=fp16"
  echo "✓ dreamshaper_8.safetensors downloaded"
else
  echo "✓ dreamshaper_8.safetensors already exists, skipping download"
fi

# Download dreamshaper5 if not exist
if [ ! -f "$MODELS_DIR/dreamshaper5.safetensors" ]; then
  echo "Downloading dreamshaper5.safetensors..."
  wget -q --show-progress -O "$MODELS_DIR/dreamshaper5.safetensors" \
    "https://huggingface.co/Lykon/DreamShaper/resolve/main/DreamShaper_5_beta2_noVae_half_pruned.safetensors?download=true"
  echo "✓ dreamshaper5.safetensors downloaded"
else
  echo "✓ dreamshaper5.safetensors already exists, skipping download"
fi

echo "All models ready. Starting comfyui-api..."

# Start the API server
exec /app/comfyui-api

The reason we use CMD ["/app/download-models.sh"] is that we don't want to bake all the big models into the Docker image, which would make the image really large and slow to pull; instead, the models are downloaded once at container start-up.

Now, go to GitHub developer settings and create a personal access token (it needs the write:packages scope to push to GHCR).

Then, you can build and push the Docker image to the GitHub Container Registry (GHCR):

docker login ghcr.io -u thangchung # it will ask for the GitHub access token just created
docker build -t ghcr.io/thangchung/agent-engineering-experiment/comfyui-api:qwenvl-1 -f docker/qwenvl.dockerfile .
docker push ghcr.io/thangchung/agent-engineering-experiment/comfyui-api:qwenvl-1

You can see the image I pushed earlier at https://github.com/thangchung/agent-engineering-experiment/pkgs/container/agent-engineering-experiment%2Fcomfyui-api/650944015?tag=qwenvl-1
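One gotcha, based on my understanding of GHCR defaults: newly pushed packages are private, while the Deployment manifests below do not set imagePullSecrets. Either make the package public in its settings, or create a pull secret and wire it into the pod spec. A sketch; the secret name ghcr-pull is my choice, and the comfyui namespace is created in the later sections:

```shell
# Hypothetical pull secret for a private GHCR image; reference it from the
# Deployment via spec.template.spec.imagePullSecrets[0].name = ghcr-pull
kubectl create secret docker-registry ghcr-pull \
  --namespace comfyui \
  --docker-server=ghcr.io \
  --docker-username=thangchung \
  --docker-password=<github-pat>
```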

Set up the AKS cluster with a GPU node pool

We follow the guidance at https://learn.microsoft.com/en-us/azure/aks/use-nvidia-gpu?tabs=add-ubuntu-gpu-node-pool#manually-install-the-nvidia-device-plugin to set up the AKS cluster. Remember that we need two node pools: one with GPUs, which will run the AI workload, and a normal CPU node pool, which we will use to run the rest of the workloads.

aks-nodepools

And the GPU node pool (because we are experimenting, we use Standard_NC4as_T4_v3; in reality, we might need a bigger SKU, and we may also need to request more quota from the Microsoft Azure support team to be able to create this GPU option):

aks-gpu-nodepool
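For reference, a GPU node pool along these lines can be added with the Azure CLI. This is a sketch with placeholder resource group and cluster names; the taint and label values are chosen to match the tolerations and nodeSelector used in the manifests below:

```shell
az aks nodepool add \
  --resource-group <my-rg> \
  --cluster-name <my-aks> \
  --name gpunp \
  --node-count 1 \
  --node-vm-size Standard_NC4as_T4_v3 \
  --node-taints sku=gpu:NoSchedule \
  --labels accelerator=nvidia
```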

Run comfyui and comfyui-api as a normal workload on AKS

After you finish provisioning the AKS cluster, just run:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: comfyui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: comfyui-api
  namespace: comfyui
  labels:
    app: comfyui-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: comfyui-api
  template:
    metadata:
      labels:
        app: comfyui-api
    spec:
      containers:
        - name: comfyui-api
          # GitHub Container Registry image (includes download-models.sh entrypoint)
          image: ghcr.io/thangchung/agent-engineering-experiment/comfyui-api:qwenvl-1
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 3000
              protocol: TCP
            - name: comfyui
              containerPort: 8188
              protocol: TCP
          env:
            - name: LOG_LEVEL
              value: "debug"
            - name: WORKFLOW_DIR
              value: "/workflows"
            - name: STARTUP_CHECK_MAX_TRIES
              value: "30"
          resources:
            limits:
              nvidia.com/gpu: 1
              memory: "16Gi"
              cpu: "4"
            requests:
              nvidia.com/gpu: 1
              memory: "8Gi"
              cpu: "2"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 120
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
      # GPU node pool tolerations
      tolerations:
        - key: "sku"
          operator: "Equal"
          value: "gpu"
          effect: "NoSchedule"
      # Schedule on GPU nodes
      nodeSelector:
        accelerator: nvidia
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: comfyui-api-service
  namespace: comfyui
  labels:
    app: comfyui-api
spec:
  type: LoadBalancer
  ports:
    - name: api
      port: 3000
      targetPort: 3000
      protocol: TCP
    - name: comfyui
      port: 8188
      targetPort: 8188
      protocol: TCP
  selector:
    app: comfyui-api
EOF

Now you could port-forward the comfyui-api pod and run a few curl requests against it, but wait a minute: we will do the real testing once auto-scaling is in place. Moving on.
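If you do want a quick sanity check right away, a sketch like this should do (the service and label names match the manifest above):

```shell
# Wait for the pod to pass its readiness probe (model downloads take a while)
kubectl wait --for=condition=ready pod -l app=comfyui-api -n comfyui --timeout=600s

# Port-forward the service and hit the health endpoint
kubectl port-forward -n comfyui svc/comfyui-api-service 3000:3000 &
sleep 2
curl http://localhost:3000/health
```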

Run comfyui and comfyui-api with autoscaling on AKS and KEDA

But GPU workloads are really expensive, so we need to think about how to save cost when no one is using the service. We can set up KEDA to auto-scale to zero in case there is no traffic.

To be able to do auto-scaling, we need to install KEDA:

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

Enable the KEDA HTTP add-on (because we rely on HTTP traffic to scale up and down):

helm install http-add-on kedacore/keda-add-ons-http --namespace keda
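Before deploying the app, it's worth checking that both charts came up; the exact pod names may vary slightly by chart version:

```shell
# Expect the KEDA operator pods plus the HTTP add-on's
# interceptor, external scaler, and controller pods in Running state
kubectl get pods -n keda
```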

Finally, we can run the app with KEDA enabled:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: comfyui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: comfyui-api
  namespace: comfyui
  labels:
    app: comfyui-api
spec:
  replicas: 0  # KEDA controls replicas
  selector:
    matchLabels:
      app: comfyui-api
  template:
    metadata:
      labels:
        app: comfyui-api
    spec:
      containers:
        - name: comfyui-api
          # GitHub Container Registry image (includes download-models.sh entrypoint)
          image: ghcr.io/thangchung/agent-engineering-experiment/comfyui-api:qwenvl-1
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 3000
              protocol: TCP
            - name: comfyui
              containerPort: 8188
              protocol: TCP
          env:
            - name: LOG_LEVEL
              value: "debug"
            - name: WORKFLOW_DIR
              value: "/workflows"
            - name: STARTUP_CHECK_MAX_TRIES
              value: "30"
          resources:
            limits:
              nvidia.com/gpu: 1
              memory: "16Gi"
              cpu: "4"
            requests:
              nvidia.com/gpu: 1
              memory: "8Gi"
              cpu: "2"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
      # GPU node pool tolerations
      tolerations:
        - key: "sku"
          operator: "Equal"
          value: "gpu"
          effect: "NoSchedule"
      # Schedule on GPU nodes
      nodeSelector:
        accelerator: nvidia
      restartPolicy: Always
---
# Internal service for KEDA HTTP Add-on to route traffic to
apiVersion: v1
kind: Service
metadata:
  name: comfyui-api-service
  namespace: comfyui
  labels:
    app: comfyui-api
spec:
  type: ClusterIP
  ports:
    - name: api
      port: 3000
      targetPort: 3000
      protocol: TCP
    - name: comfyui
      port: 8188
      targetPort: 8188
      protocol: TCP
  selector:
    app: comfyui-api
---
# KEDA HTTP Add-on - scales based on incoming HTTP requests
# Traffic flows: Client -> KEDA Interceptor -> comfyui-api-service -> Pod
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: comfyui-api-scaler
  namespace: comfyui
spec:
  hosts:
    - comfyui.local  # Use this hostname in requests (Host header)
  targetPendingRequests: 1   # Scale up on any pending request
  scaledownPeriod: 300       # 5 minutes idle before scale to zero
  scaleTargetRef:
    name: comfyui-api
    kind: Deployment
    apiVersion: apps/v1
    service: comfyui-api-service
    port: 3000
  replicas:
    min: 0   # Scale to ZERO when idle
    max: 2   # Maximum replicas
EOF

With this configuration, if there is no HTTP traffic for 5 minutes, KEDA will automatically scale the deployment down to zero.
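You can watch this happen from the CLI; a sketch, assuming the manifest above was applied as-is:

```shell
# Watch the replica count move between 0 and 2 as traffic starts and stops
kubectl get deployment comfyui-api -n comfyui -w

# Inspect the HTTPScaledObject driving the scaling decisions
kubectl get httpscaledobject -n comfyui
```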

Put it to the test

We need to port-forward the KEDA HTTP interceptor proxy:

kubectl port-forward svc/keda-add-ons-http-interceptor-proxy -n keda 3000:8080

Now try a curl request as below:

curl -H "Host: comfyui.local" http://localhost:3000/health

At first, you will get an error, because no pod is running yet:

error-comfyui-curl

But when you go back to the GPU node pool, you will see a node provisioning automatically.

aks-nodepool-running

And the pod will be created (in Pending status):

aks-pod-runing

After waiting around 2 minutes, the pod will be up and running:

aks-pod-run
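Instead of retrying the curl by hand, a small loop can poll until the cold start finishes (this assumes the interceptor port-forward from the previous step is still running):

```shell
until curl -sf -H "Host: comfyui.local" http://localhost:3000/health; do
  echo "still scaling up, retrying in 10s..."
  sleep 10
done
echo "comfyui-api is ready"
```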

Now you should see:

heathy-comfyui-curl

If you go to http://localhost:3000/docs, you should see the API documentation of the comfyui-api:

comfyui-api-docs

If you look at the red box, you will see the 3 workflows that we deployed: img2img, qwen3-vl-experiment, and txt2img.

Now, let's test the txt2img workflow with a payload like:

{
  "id": "7f350df6-49a9-4cd0-88de-5b53df870003",
  "webhook_v2": "https://webhook.site/b59a7434-c944-4897-91b8-5cd808219094",
  "input": {
    "prompt": "Create a photorealistic image of a woman standing outdoors on what appears to be a sunny autumn day. She has shoulder-length black hair and is wearing a Vietnamese Ao Dai. The background features blurred trees in the Tet holiday season. The lighting suggests natural sunlight, casting soft shadows that highlight her figure. The scene includes other people in casual attire, suggesting a public or social setting. The camera angle is at eye level, focusing on the woman's upper body and face while slightly capturing the background to provide context. The whole picture in Ho Chi Minh City",
    "negative_prompt": "",
    "checkpoint": "dreamshaper_8.safetensors"
  }
}

And do a curl request:

curl -X POST http://localhost:3000/workflow/txt2img \
  -H "Content-Type: application/json" \
  -H "Host: comfyui.local" \
  -d @payload.json

After a minute, I go to https://webhook.site/#!/view/b59a7434-c944-4897-91b8-5cd808219094/bfab5db6-c8e9-463e-8936-9bfe3dd4d289/1, copy the base64-encoded picture there, and paste it into a base64-to-image converter:

the girl
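If you prefer to stay in the terminal, the base64 payload can also be decoded locally. A minimal sketch; the string below is a stand-in ("hello world" encoded), not real image data, so substitute the value copied from webhook.site:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in base64 content; paste the real string from the webhook body instead
printf '%s' 'aGVsbG8gd29ybGQ=' > image.b64

# Decode to a file; for a real payload, name it output.png and open it
base64 -d image.b64 > output.bin
cat output.bin   # → hello world
```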

But after 5 minutes without HTTP traffic, the pod is gone:

aks-pod-gone

Conclusion

This brings me to the end of my experiment: scaling ComfyUI and ComfyUI-api on AKS using KEDA. From my perspective, the result is both practical and satisfying.

If you’ve read through this post and see opportunities for improvement, or have alternative approaches to scaling a workflow-based Stable Diffusion orchestrator like ComfyUI, I’d be interested to hear your thoughts in the comments.

Happy hacking.

Appendix - comfyui and comfyui-api logs

✓ dreamshaper5.safetensors downloaded
All models ready. Starting comfyui-api...
{"level":20,"time":1769010999301,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/img2img.js","msg":"Evaluating workflow file"}
{"level":40,"time":1769010999370,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","module":"RemoteStorageManager","error":{},"msg":"Error initializing storage provider S3StorageProvider"}
{"level":30,"time":1769010999302,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"img2img","file":"/workflows/img2img.js","msg":"Loaded workflow"}
{"level":20,"time":1769010999302,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/img2img.ts","msg":"Transpiling TypeScript workflow"}
{"level":20,"time":1769010999332,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/img2img.js","msg":"Evaluating workflow file"}
{"level":30,"time":1769010999333,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"img2img","file":"/workflows/img2img.js","msg":"Loaded workflow"}
{"level":20,"time":1769010999333,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/qwen3-vl-experiment.js","msg":"Evaluating workflow file"}
{"level":30,"time":1769010999334,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"qwen3-vl-experiment","file":"/workflows/qwen3-vl-experiment.js","msg":"Loaded workflow"}
{"level":20,"time":1769010999334,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/qwen3-vl-experiment.ts","msg":"Transpiling TypeScript workflow"}
{"level":20,"time":1769010999353,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/qwen3-vl-experiment.js","msg":"Evaluating workflow file"}
{"level":30,"time":1769010999353,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"qwen3-vl-experiment","file":"/workflows/qwen3-vl-experiment.js","msg":"Loaded workflow"}
{"level":20,"time":1769010999354,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/txt2img.js","msg":"Evaluating workflow file"}
{"level":30,"time":1769010999354,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"txt2img","file":"/workflows/txt2img.js","msg":"Loaded workflow"}
{"level":20,"time":1769010999354,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/txt2img.ts","msg":"Transpiling TypeScript workflow"}
{"level":20,"time":1769010999367,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","file":"/workflows/txt2img.js","msg":"Evaluating workflow file"}
{"level":30,"time":1769010999367,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","workflow":"txt2img","file":"/workflows/txt2img.js","msg":"Loaded workflow"}
{"level":40,"time":1769010999370,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","module":"RemoteStorageManager","error":{},"msg":"Error initializing storage provider AzureBlobStorageProvider"}
{"level":30,"time":1769010999370,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","module":"RemoteStorageManager","msg":"Initialized with 2 storage providers"}
{"level":30,"time":1769010999371,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","module":"RemoteStorageManager","msg":"Cache populated with 0 files, total size: 0.00 B"}
{"level":30,"time":1769010999371,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"Starting ComfyUI API 1.16.1 with ComfyUI 0.7.0"}
2082600K .......... .......... .......... .......... ..        100%  218M=9.4s2026-01-21 15:56:39 - root - DEBUG - Tracking command: launch with arguments: {'extra': ['--listen', '*'], 'background': False, 'frontend_pr': None}
2026-01-21 15:56:39 - root - DEBUG - tracking event called with event_name: launch and properties: {'extra': ['--listen', '*'], 'background': False, 'frontend_pr': None}
╭──────────────────────────── 🔔 Update Available! ────────────────────────────╮
│ ✨ Newer version of comfy-cli is available: 1.5.4.                           │
│ Current version: 1.5.3                                                       │
│ Update by running: 'pip install --upgrade comfy-cli' ⬆                       │
╰──────────────────────────────────────────────────────────────────────────────╯

Launching ComfyUI from: /opt/ComfyUI

[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-01-21 15:56:40.917
** Platform: Linux
** Python version: 3.11.13 | packaged by conda-forge | (main, Jun  4 2025, 14:48:23) [GCC 13.3.0]
** Python executable: /opt/conda/bin/python
** ComfyUI Path: /opt/ComfyUI
** ComfyUI Base Folder Path: /opt/ComfyUI
** User directory: /opt/ComfyUI/user
** ComfyUI-Manager config path: /opt/ComfyUI/user/__manager/config.ini
** Log path: /opt/ComfyUI/user/comfyui.log

Prestartup times for custom nodes:
   1.7 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 15931 MB, total RAM 28063 MB
pytorch version: 2.8.0+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla T4 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 26659.0
Using pytorch attention
Python version: 3.11.13 | packaged by conda-forge | (main, Jun  4 2025, 14:48:23) [GCC 13.3.0]
ComfyUI version: 0.7.0
ComfyUI frontend version: 1.35.9
[Prompt Server] web root: /opt/conda/lib/python3.11/site-packages/comfyui_frontend_package/static
Total VRAM 15931 MB, total RAM 28063 MB
pytorch version: 2.8.0+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla T4 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 26659.0
### Loading: ComfyUI-Manager (V3.39)
[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR #11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.
### ComfyUI Revision: 1 [f59f71cf] *DETACHED | Released on '2025-12-30'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
Error loading module AILab_QwenVL_GGUF_PromptEnhancer: No module named 'llama_cpp'

Import times for custom nodes:
   0.0 seconds: /opt/ComfyUI/custom_nodes/websocket_image_save.py
   0.1 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-Manager
   0.5 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-QwenVL

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://*:8188
{"level":30,"time":1769011009412,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"Comfy UI started"}
{"level":30,"time":1769011009412,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"ComfyUI 0.7.0 started."}
{"level":30,"time":1769011009477,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"Registered workflow /workflow/img2img"}
{"level":30,"time":1769011009477,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"Registered workflow /workflow/qwen3-vl-experiment"}
{"level":30,"time":1769011009477,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"Registered workflow /workflow/txt2img"}
{"level":30,"time":1769011009561,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","msg":"ComfyUI fully ready in 10.19s"}
FETCH ComfyRegistry Data: 5/120
{"level":30,"time":1769011010791,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-1","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":35108},"msg":"incoming request"}
{"level":30,"time":1769011010793,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-1","res":{"statusCode":200},"responseTime":1.678925999905914,"msg":"request completed"}
FETCH ComfyRegistry Data: 10/120
FETCH ComfyRegistry Data: 15/120
FETCH ComfyRegistry Data: 20/120
{"level":30,"time":1769011020790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-2","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":45030},"msg":"incoming request"}
{"level":30,"time":1769011020791,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-2","res":{"statusCode":200},"responseTime":0.429451999720186,"msg":"request completed"}
FETCH ComfyRegistry Data: 25/120
FETCH ComfyRegistry Data: 30/120
FETCH ComfyRegistry Data: 35/120
{"level":30,"time":1769011030790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-3","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":41768},"msg":"incoming request"}
{"level":30,"time":1769011030790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-3","res":{"statusCode":200},"responseTime":0.3221490001305938,"msg":"request completed"}
FETCH ComfyRegistry Data: 40/120
FETCH ComfyRegistry Data: 45/120
FETCH ComfyRegistry Data: 50/120
FETCH ComfyRegistry Data: 55/120
{"level":30,"time":1769011040789,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-4","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":54420},"msg":"incoming request"}
{"level":30,"time":1769011040790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-4","res":{"statusCode":200},"responseTime":0.3270169999450445,"msg":"request completed"}
FETCH ComfyRegistry Data: 60/120
FETCH ComfyRegistry Data: 65/120
FETCH ComfyRegistry Data: 70/120
{"level":30,"time":1769011050790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-5","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":48618},"msg":"incoming request"}
{"level":30,"time":1769011050790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-5","res":{"statusCode":200},"responseTime":0.33953100023791194,"msg":"request completed"}
FETCH ComfyRegistry Data: 75/120
FETCH ComfyRegistry Data: 80/120
FETCH ComfyRegistry Data: 85/120
{"level":30,"time":1769011059836,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-6","req":{"method":"GET","url":"/health","host":"comfyui.local","remoteAddress":"::ffff:10.244.0.245","remotePort":36602},"msg":"incoming request"}
{"level":30,"time":1769011059836,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-6","res":{"statusCode":200},"responseTime":0.3211170001886785,"msg":"request completed"}
{"level":30,"time":1769011060790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-7","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":59568},"msg":"incoming request"}
{"level":30,"time":1769011060790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-7","res":{"statusCode":200},"responseTime":0.35903899976983666,"msg":"request completed"}
FETCH ComfyRegistry Data: 90/120
FETCH ComfyRegistry Data: 95/120
FETCH ComfyRegistry Data: 100/120
FETCH ComfyRegistry Data: 105/120
{"level":30,"time":1769011070790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-8","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":56480},"msg":"incoming request"}
{"level":30,"time":1769011070791,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-8","res":{"statusCode":200},"responseTime":0.5209230002947152,"msg":"request completed"}
FETCH ComfyRegistry Data: 110/120
FETCH ComfyRegistry Data: 115/120
FETCH ComfyRegistry Data: 120/120
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
{"level":30,"time":1769011080790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-9","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":44198},"msg":"incoming request"}
{"level":30,"time":1769011080791,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-9","res":{"statusCode":200},"responseTime":0.5331540000624955,"msg":"request completed"}
{"level":30,"time":1769011090790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-a","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":37554},"msg":"incoming request"}
{"level":30,"time":1769011090790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-a","res":{"statusCode":200},"responseTime":0.4365039998665452,"msg":"request completed"}
{"level":30,"time":1769011100790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-b","req":{"method":"GET","url":"/health","host":"10.244.1.233:3000","remoteAddress":"::ffff:10.224.0.5","remotePort":43552},"msg":"incoming request"}
{"level":30,"time":1769011100790,"pid":1,"hostname":"comfyui-api-786bdbc-zz4nd","reqId":"req-b","res":{"statusCode":200},"responseTime":0.33777799969539046,"msg":"request completed"}
