Netanel Ben

Creating a Custom Node for ComfyUI Using React + TS

[App UI screenshot]

Introduction & Code

Hello dev.to community! In this article, we'll walk through the development of a custom node for ComfyUI, built with React + TypeScript, that uses a Stable Diffusion model to generate images from text prompts. This tool helps developers and artists produce creative visuals with a simple UI. The implementation showcases WebSocket communication, workflow API JSON integration, and real-time image-generation feedback, all key aspects of a powerful and intuitive user experience.

Source code

You can jump right in: clone the repo into ComfyUI's custom_nodes folder and run the React build from the /web folder.

Prerequisites

ComfyUI Installation

You need to have ComfyUI set up. ComfyUI is a powerful and modular UI that allows you to create workflows and pipelines for image generation using Stable Diffusion. You can follow the installation instructions on their GitHub page to get it up and running.

The model I'm using for this example is DreamShaper 8.

ComfyUI Custom Nodes

Custom nodes are user-defined nodes that extend the functionality of ComfyUI. They allow developers to create new types of operations or integrate different models and algorithms to achieve specific goals that aren’t possible with the built-in nodes. Custom nodes are highly useful for:
• Extending Capabilities: Integrate new models, image filters, or custom transformations.
• Personalized Workflows: Customize workflows to manipulate prompts or generate unique image variations.
• Enhancing User Experience: Add nodes for user control, feedback, or dynamic parameter adjustments.

Running the React App Server with ComfyUI API

On startup, when you run python main.py from the ComfyUI folder, ComfyUI scans the custom_nodes directory for Python modules and attempts to load them. Modules that export NODE_CLASS_MAPPINGS are identified as custom nodes.

The __init__.py Python module integrates a React frontend with the ComfyUI backend. It sets up routes to serve the React application, allowing users to interact with the custom nodes via a web interface.

• Web Root Setup: Defines the directory for serving the built React assets (web/dist)

WEBROOT = os.path.join(os.path.dirname(os.path.realpath(__file__)), "web/dist")

• Routes: Adds server routes to serve the React app (/Text2Image) and static assets (/assets).

@server.PromptServer.instance.routes.get("/Text2Image")
async def init(request):
    # Serve the built React app's entry point
    return web.FileResponse(os.path.join(WEBROOT, "index.html"))
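
The static route for /assets isn't shown above; here is a minimal sketch of one way to serve the built assets, using aiohttp's static route helper (the actual repo may wire this up differently):

# One way to expose the built React assets under /assets (aiohttp static route helper)
server.PromptServer.instance.routes.static("/assets", os.path.join(WEBROOT, "assets"))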

• Node Mappings: Exposes custom node class mappings for use in the workflow.
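
As a point of reference, here is a minimal sketch of what such an export can look like; the class, category, and input names below are hypothetical placeholders, not the actual node shipped with this repo:

# Hypothetical ComfyUI custom node and its module-level mappings
class Text2ImageNode:
    CATEGORY = "custom"
    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "generate"

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"text": ("STRING", {"multiline": True})}}

    def generate(self, text):
        # A real implementation returns a tuple matching RETURN_TYPES
        raise NotImplementedError

# ComfyUI picks these mappings up when it loads the module
NODE_CLASS_MAPPINGS = {"Text2ImageNode": Text2ImageNode}
NODE_DISPLAY_NAME_MAPPINGS = {"Text2ImageNode": "Text to Image (Custom)"}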

Read more about ComfyUI custom nodes.

React client-side code walkthrough

  1. Set up a WebSocket connection with the ComfyUI backend
  2. Listen for specific incoming messages
  3. Handle the workflow API JSON file
  4. Generate an image based on a text prompt

ComfyUI WebSocket connection

• hostname and protocol: Constructs the server address using the current window’s hostname and port. Chooses wss: for secure connections (https:) and ws: for others.
• wsClient: Initializes a WebSocket connection to /ws with a unique client ID.
• wsClient.onopen: Logs a message once the connection to the server is successfully established.
• You must provide a unique ID via the clientId param; I'm using uuidv4 for it (see the import in the snippet below).

import { v4 as uuidv4 } from "uuid";

// Each browser session identifies itself to ComfyUI with a unique client ID
const clientUniqueId = uuidv4();

const hostname = window.location.hostname + ":" + window.location.port;
const protocol = window.location.protocol === "https:" ? "wss:" : "ws:";
const wsClient = new WebSocket(
  `${protocol}//${hostname}/ws?clientId=${clientUniqueId}`
);

Listen to a response from ComfyUI

wsClient.addEventListener("message", (event) => {
  // ComfyUI can also push binary preview frames; only parse text frames as JSON
  if (typeof event.data !== "string") return;
  const data = JSON.parse(event.data);

  if (data.type === "executed") {
    if ("images" in data.data.output) {
      const image = data.data.output.images[0];
      const { filename, type, subfolder } = image;
      // Cache-busting query param so the image refreshes on every generation
      const rando = Math.floor(Math.random() * 1000);
      const imageSrc = `/view?filename=${filename}&type=${type}&subfolder=${subfolder}&rand=${rando}`;
      // e.g. update React state here: setImageSrc(imageSrc)
    }
  }
});
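
The listener above only reacts to the final executed message. The full source also tracks progress for the queued task; here is a minimal sketch of how that can look, assuming ComfyUI's standard progress payload with value and max fields:

wsClient.addEventListener("message", (event) => {
  if (typeof event.data !== "string") return;
  const data = JSON.parse(event.data);

  // Sampling progress arrives as { type: "progress", data: { value, max } }
  if (data.type === "progress") {
    const percent = Math.round((data.data.value / data.data.max) * 100);
    console.log(`Generation progress: ${percent}%`);
    // e.g. setProgress(percent) in a React component
  }
});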

Modifying the ComfyUI workflow API JSON

The workflow API JSON is the set of node configurations for ComfyUI. In the following code I search the JSON for the relevant node ID, since node IDs are randomly generated when new nodes are added.

In this example the seed is randomized and the prompt text is applied as well:

// Find the key of the KSampler node
const samplerNodeNumber = Object.entries(workflow).find(
  ([, value]) => value.class_type === "KSampler"
)![0] as keyof typeof workflow;

// The prompt-text node can be located the same way; here we assume the
// first CLIPTextEncode node in the workflow holds the positive prompt
const inputNodeNumber = Object.entries(workflow).find(
  ([, value]) => value.class_type === "CLIPTextEncode"
)![0] as keyof typeof workflow;

// Randomize the seed so every run produces a new image
workflow[samplerNodeNumber].inputs.seed = Math.floor(
  Math.random() * 9999999999
);

// Apply the user's prompt text, stripping line breaks
workflow[inputNodeNumber].inputs.text = prompt.replaceAll(
  /\r\n|\n|\r/gm,
  " "
);
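
If you want the lookups above to be type-safe, a minimal (hypothetical) shape for the workflow object could look like this; the repo's actual typing may differ:

// Hypothetical minimal typing for workflow API JSON nodes
type WorkflowNode = {
  class_type: string;
  // Input values are literals or [node_id, output_index] links
  inputs: Record<string, string | number | boolean | (string | number)[]>;
};

type Workflow = Record<string, WorkflowNode>;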

Generate an image based on a prompt

ComfyUI exposes API routes such as /prompt for queuing a generation and /interrupt for stopping a running one (a sketch of the latter follows the example below).

async function queuePrompt(workflow = {}) {
  const data = { prompt: workflow, client_id: clientUniqueId };

  const response = await fetch("/prompt", {
    method: "POST",
    cache: "no-cache",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(data),
  });

  return await response.json();
}
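
Stopping a running generation works the same way; here is a minimal sketch of calling the /interrupt route, assuming the default ComfyUI server (which accepts an empty POST):

async function interrupt() {
  // Ask the ComfyUI server to stop the currently running generation
  await fetch("/interrupt", { method: "POST", cache: "no-cache" });
}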

Check out the full source code for more tweaks such as progress tracking for the queue task.

Source code: https://github.com/netanelben/comfyui-text2image-customnode
