<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: GBieler</title>
    <description>The latest articles on DEV Community by GBieler (@gbieler).</description>
    <link>https://dev.to/gbieler</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2955077%2Fa8dcef39-8fab-42a9-b515-7cf948047e01.png</url>
      <title>DEV Community: GBieler</title>
      <link>https://dev.to/gbieler</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gbieler"/>
    <language>en</language>
    <item>
      <title>Speed up ComfyUI Image and Video generation with TeaCache</title>
      <dc:creator>GBieler</dc:creator>
      <pubDate>Thu, 03 Apr 2025 11:15:17 +0000</pubDate>
      <link>https://dev.to/gbieler/speed-up-comfyui-image-and-video-generation-with-teacache-3dpm</link>
      <guid>https://dev.to/gbieler/speed-up-comfyui-image-and-video-generation-with-teacache-3dpm</guid>
      <description>&lt;p&gt;One of the biggest problems when it comes to generating images or videos is how slow the process can be. Fortunately, we now have a few good tricks to help speed up generation. In this post, we’ll go over our preferred solution when using ComfyUI: TeaCache and model compiling. During testing, we managed to speed up generation time by 3X with FLUX and 2.8X with Wan2.1 with no loss in quality.&lt;/p&gt;

&lt;p&gt;Without going into the details, TeaCache uses clever caching to take advantage of the fact that the output of many of the attention blocks inside diffusion models is very similar to their input. Model compiling, meanwhile, speeds up inference by optimizing the model’s code. The great thing is that both techniques work out of the box with any ComfyUI workflow.&lt;/p&gt;

&lt;h2&gt;Performance improvement&lt;/h2&gt;

&lt;p&gt;TeaCache really feels like a free lunch and is perfect for speeding up API inference and ComfyUI workloads in general. However, model compiling has an important drawback that should be mentioned before we start: the first 2 to 3 generations of every session are much slower. This can make it hard to use effectively when running ComfyUI as an API unless servers are running for extended periods of time.&lt;/p&gt;

&lt;p&gt;We ran a few tests on Flux Dev and Wan2.1 text to video to quantify the performance gains:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz75xbvfqbj7o3dsuaa6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz75xbvfqbj7o3dsuaa6o.png" alt="Image description" width="711" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, TeaCache on its own roughly halves the generation time with Flux and Wan2.1. And when combined with model compiling, you are looking at 2.5X to 3X speed improvements.&lt;/p&gt;

&lt;h2&gt;Usage&lt;/h2&gt;

&lt;p&gt;The TeaCache node from the ComfyUI-TeaCache node pack comes with two parameters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqw9kn772otf11mrchdl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqw9kn772otf11mrchdl.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first one, rel_l1_thresh, controls how aggressively the cache is reused. At 0, caching is turned off, while at higher values, more steps reuse the cached output instead of recomputing it. The other parameter, max_skip_steps, sets the maximum number of consecutive steps that can be skipped using the cache.&lt;/p&gt;

&lt;p&gt;The higher those values, the faster the generation will be. That said, we found during testing that at values higher than 0.4 / 3 for Flux and 0.2 / 3 for Wan2.1 (rel_l1_thresh / max_skip_steps), generations started losing some detail.&lt;/p&gt;
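&lt;p&gt;To make the two parameters concrete, here is a minimal Python sketch of the kind of skip decision they control. This is an illustration of the idea only, not the actual TeaCache implementation; the function name and the exact distance formula are our own.&lt;/p&gt;

```python
# Illustrative sketch of the caching rule controlled by rel_l1_thresh and
# max_skip_steps. NOT the actual TeaCache code; the distance formula is assumed.
import numpy as np

def should_skip(current, cached, rel_l1_thresh, skips_in_a_row, max_skip_steps):
    """Reuse the cached block output when its input has barely changed."""
    if rel_l1_thresh == 0:                 # 0 turns caching off entirely
        return False
    if skips_in_a_row >= max_skip_steps:   # force a refresh after too many skips
        return False
    # Relative L1 distance between the current input and the cached one
    rel_l1 = np.abs(current - cached).sum() / (np.abs(cached).sum() + 1e-8)
    return bool(rel_l1 < rel_l1_thresh)    # small change -> safe to skip
```

A higher threshold lets more steps pass the check (faster, but with more risk of losing detail), while max_skip_steps bounds how stale the cache can get.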

&lt;p&gt;Here are some examples using Flux Dev with different threshold values and max_skip_steps set to 3:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pkeka7h2bjr8vkjyiia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pkeka7h2bjr8vkjyiia.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And again with Wan2.1:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6a6bgfy6u2za6ete9r2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6a6bgfy6u2za6ete9r2.gif" alt="Image description" width="600" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, with Flux, results hardly change between 0 and 0.4 and start losing quality after that. With Wan, results are much more sensitive to the threshold; we found that 0.2 gave good results, as in the example above.&lt;/p&gt;

&lt;h2&gt;Installation&lt;/h2&gt;

&lt;p&gt;All you have to do to use these techniques is install &lt;a href="https://github.com/welltop-cn/ComfyUI-TeaCache/tree/main" rel="noopener noreferrer"&gt;this&lt;/a&gt; node pack and add the TeaCache and/or Compile Model nodes after loading the diffusion model (if you are using LoRAs, the TeaCache node goes after the Load LoRA node).&lt;/p&gt;
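&lt;p&gt;If you prefer installing the node pack manually rather than through the ComfyUI Manager, the usual custom-node steps should work. This assumes the standard ComfyUI folder layout; adjust the paths for your setup.&lt;/p&gt;

```shell
# Clone the node pack into ComfyUI's custom_nodes folder
cd ComfyUI/custom_nodes
git clone https://github.com/welltop-cn/ComfyUI-TeaCache.git
# Install Python dependencies, if the pack ships a requirements.txt
cd ComfyUI-TeaCache
pip install -r requirements.txt
```

Restart ComfyUI afterwards so the new nodes are picked up.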

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsu6w9yrtyt6hroh6bp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsu6w9yrtyt6hroh6bp3.png" alt="Image description" width="727" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a quick start, we’ve set up a template with everything you need to get started using TeaCache with Flux on  &lt;a href="https://app.viewcomfy.com/" rel="noopener noreferrer"&gt;ViewComfy&lt;/a&gt;. You can also use the Wan2.1 template and install ComfyUI-TeaCache to skip the model installation and run the model on Cloud GPUs. Both those templates work out of the box with ViewComfy’s serverless API and can easily be integrated into applications.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Integrate ComfyUI Workflows into your apps via API: A Guide to ViewComfy</title>
      <dc:creator>GBieler</dc:creator>
      <pubDate>Tue, 18 Mar 2025 17:25:09 +0000</pubDate>
      <link>https://dev.to/gbieler/integrate-comfyui-workflows-into-your-apps-via-api-a-guide-to-viewcomfy-3f3d</link>
      <guid>https://dev.to/gbieler/integrate-comfyui-workflows-into-your-apps-via-api-a-guide-to-viewcomfy-3f3d</guid>
      <description>&lt;p&gt;This guide goes over all the steps to integrate a ComfyUI workflow into a Python or TypeScript application via the &lt;a href="https://www.viewcomfy.com/deploy-comfyui" rel="noopener noreferrer"&gt;ViewComfy&lt;/a&gt; API. The first section will go over the details of deploying the workflow and turning it into a serverless API. This should only take a few minutes with the ViewComfy dashboard. We will then go over how to call the API and make the integration.&lt;/p&gt;

&lt;h2&gt;Deploying your workflow&lt;/h2&gt;

&lt;p&gt;The first thing you will need to do is to deploy your ComfyUI workflow. In this example, we will deploy a custom workflow. You could also deploy a template from ViewComfy, which can be quicker if you find the right one.&lt;/p&gt;

&lt;p&gt;Once you have the workflow_api.json for the workflow you want to deploy, you can head to the &lt;a href="https://app.viewcomfy.com/" rel="noopener noreferrer"&gt;ViewComfy dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2x66azpeetffetnx4co.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2x66azpeetffetnx4co.png" alt="Image description" width="241" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After selecting “Deploy your own”, you will have the option to choose the GPU you want to run your workflow on. You can then name your deployment and drop your workflow_api.json file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7spth7ys3eg3u4wetp7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7spth7ys3eg3u4wetp7l.png" alt="Image description" width="512" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything is ready, you can click “Deploy”. The system will go through your workflow and install all the nodes you need to run it. It will also go through the models and download the ones from &lt;a href="https://github.com/ViewComfy/cloud-public/blob/main/supported_weights.md" rel="noopener noreferrer"&gt;this&lt;/a&gt; list automatically. If some models are missing from the list, no worries, you can add them before deploying using the “Add a model” button. You will also have the option to add new models once the deployment is live. For more info on how to add models, you can refer to &lt;a href="https://youtu.be/sRticjuabVQ" rel="noopener noreferrer"&gt;this&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;Depending on how many models the system has to download, it usually takes between 5 and 30 minutes to get your deployment ready. In some cases, such as when you need to download the full Flux model family, it can take longer.&lt;/p&gt;

&lt;p&gt;Once your workflow is deployed, you will be able to access it via the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv90ul6qtvcdd3cn3wfsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv90ul6qtvcdd3cn3wfsk.png" alt="Image description" width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can make sure that everything was installed properly by clicking on the “ComfyUI” link and testing your workflow via the usual Comfy interface*. If you need to install new nodes, you can use the Comfy Manager in the same way you would locally. And if you want to add new models, you can use the “add a model” button on the dashboard.&lt;/p&gt;

&lt;p&gt;Once everything is in place, you can start integrating the API using the “API endpoint” link from the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;*note that sometimes the workflow does not load in the UI automatically. If that happens, you can just drop it in the Comfy interface as you would normally do&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Calling the workflow with the API&lt;/h2&gt;

&lt;p&gt;The ViewComfy API is a REST API that can be called with a standard POST request but also supports streaming responses via Server-Sent Events. This second option allows for real-time tracking of the ComfyUI logs. In this guide, we will go over how to call the API with the streaming response.&lt;/p&gt;
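&lt;p&gt;As a rough illustration of the streaming option, here is a small stdlib-only Python sketch of consuming a Server-Sent Events response. The URL, payload, and event format here are placeholders, not the actual ViewComfy endpoints; the real request and authentication logic live in the example code covered next.&lt;/p&gt;

```python
# Generic SSE consumption sketch (placeholder URL/payload, no auth).
import json
import urllib.request

def parse_sse_line(line: str):
    """Return the payload of a 'data:' line, or None for other SSE lines."""
    if line.startswith("data:"):
        return line[len("data:"):].strip()
    return None

def stream_events(url: str, payload: dict):
    """POST `payload` and yield each SSE data payload as it arrives."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Accept": "text/event-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            data = parse_sse_line(raw.decode("utf-8").rstrip("\n"))
            if data is not None:
                yield data
```

The payoff of this pattern is that each log line can be handled the moment it arrives, instead of waiting for the whole generation to finish.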

&lt;p&gt;All the code you need to run the API can be found in &lt;a href="https://github.com/ViewComfy/cloud-public/tree/main/ViewComfy_API/Python" rel="noopener noreferrer"&gt;this&lt;/a&gt; GitHub folder. (This guide uses the Python example code; the TypeScript example code, available &lt;a href="https://github.com/ViewComfy/cloud-public/tree/main/ViewComfy_API/Node-TypeScript" rel="noopener noreferrer"&gt;here&lt;/a&gt;, works the same way.) After downloading all the files, you can install the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the rest of the examples, we will use the Wan 2.1 image-to-video template from ViewComfy. You can deploy it from your dashboard if you want to follow along.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Getting your API keys&lt;/strong&gt;&lt;br&gt;
In order to use your API endpoint, you will first need to create your API keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5jfkak05t6r1xx335l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk5jfkak05t6r1xx335l.png" alt="Image description" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After opening the API key menu from your dashboard, you can copy your “Client ID” and “Client Secret”. Keep them somewhere safe as you will need them to call the API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h9k13xg0r8iqnf63r7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h9k13xg0r8iqnf63r7k.png" alt="Image description" width="536" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Extracting your workflow parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first thing to do before setting up the request is to identify the parameters in your workflow. This is done by using workflow_parameters_maker.py to flatten your workflow_api.json. You can run the script directly from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python workflow_parameters_maker.py --workflow_api_path "&amp;lt;Path to your workflow_api.json file&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The flattened json file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "_3-node-class_type-info": "KSampler",
    "3-inputs-cfg": 6,

    …

    "_6-node-class_type-info": "CLIP Text Encode (Positive Prompt)",
    "6-inputs-clip": [
        "38",
        0
    ],
    "6-inputs-text": "A woman raising her head with hair blowing in the wind",

    …

    "_52-node-class_type-info": "Load Image",
    "52-inputs-image": "&amp;lt;path_to_my_image&amp;gt;",

    …

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This dictionary contains all the parameters in your workflow. The key for each parameter encodes the node id from your workflow_api.json file, the fact that it is an input, and the parameter’s input name. Keys that start with “_” are only there to give you context on the node corresponding to that id; they are not parameters.&lt;/p&gt;

&lt;p&gt;In this example, the first key-value pair shows that node 3 is the KSampler and that “3-inputs-cfg” sets its corresponding cfg value.&lt;/p&gt;
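&lt;p&gt;For illustration, here is a hypothetical re-implementation of that key scheme, showing how each “{node_id}-inputs-{name}” key maps back to workflow_api.json. The real workflow_parameters_maker.py may differ in its details; the title lookup below is an assumption.&lt;/p&gt;

```python
# Hypothetical sketch of the flattening scheme, not the actual script.
def flatten_workflow(workflow_api: dict) -> dict:
    """Flatten a workflow_api.json dict into '{id}-inputs-{name}' parameters."""
    flat = {}
    for node_id, node in workflow_api.items():
        # Context entry describing the node; "_"-prefixed keys are not parameters
        title = node.get("_meta", {}).get("title", node.get("class_type", ""))
        flat[f"_{node_id}-node-class_type-info"] = title
        for name, value in node.get("inputs", {}).items():
            flat[f"{node_id}-inputs-{name}"] = value
    return flat
```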

&lt;p&gt;&lt;strong&gt;3. Updating the script with your parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All the code you will need to call the API, parse the results, and save the outputs is in main.py and api.py. In most cases, the only file you will need to edit is main.py. This is where you will add the parameters you want to change, your API endpoint, and the directory to save your outputs.&lt;/p&gt;

&lt;p&gt;The first thing to do is to copy the ViewComfy endpoint from your dashboard and assign it to view_comfy_api_url. You should also take the “Client ID” and “Client Secret” you made earlier and set the client_id and client_secret values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;view_comfy_api_url = "&amp;lt;Your_ViewComfy_endpoint&amp;gt;"
client_id = "&amp;lt;Your_ViewComfy_client_id&amp;gt;"
client_secret = "&amp;lt;Your_ViewComfy_client_secret&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then set the parameters using the keys from the json file you created in the previous step. In this example, we will change the prompt and the input image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;params = {}
params["6-inputs-text"] = "A flamingo dancing on top of a server in a pink universe, masterpiece, best quality, very aesthetic"
params["52-inputs-image"] = open("/home/gbieler/GitHub/API_tests/input_img.png", "rb")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Calling the API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you are done adding your parameters to main.py, you can call the API by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will send your parameters to api.py where all the functions to call the API and handle the outputs are stored.&lt;/p&gt;

&lt;p&gt;By default the script runs the “infer_with_logs” function which returns the generation logs from ComfyUI via a streaming response. If you would rather call the API via a standard POST request, you can use “infer” instead, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # Call the API and wait for the results
    prompt_result = await infer(api_url=view_comfy_api_url, params=params, override_workflow_api=override_workflow_api)

...

  # prompt_result = await infer_with_logs(
  #     api_url=view_comfy_api_url,
  #     params=params,
  #     logging_callback=logging_callback,
  #     override_workflow_api=override_workflow_api
  # )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result object returned by the API will contain the workflow outputs as well as the generation details. It is formatted as follows (for the full definition, refer to “PromptResult” inside api.py):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt_id (str): Unique identifier for the prompt
status (str): Current status of the prompt execution
completed (bool): Whether the prompt execution is complete
execution_time_seconds (float): Time taken to execute the prompt
prompt (Dict): The original prompt configuration
outputs (List[Dict], optional): List of output file data. Defaults to empty list.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your outputs will automatically be saved in your working directory.&lt;/p&gt;
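&lt;p&gt;As a sketch of what handling that object can look like, the snippet below saves each output to disk. PromptResult here is a stand-in dataclass mirroring the fields above, and the output-dict keys (“filename”, “data”) are assumptions; refer to api.py for the real definitions.&lt;/p&gt;

```python
# Stand-in types and save logic; the real PromptResult lives in api.py.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List

@dataclass
class PromptResult:
    prompt_id: str                 # unique identifier for the prompt
    status: str                    # current status of the execution
    completed: bool                # whether execution is complete
    execution_time_seconds: float  # time taken to execute
    prompt: Dict                   # the original prompt configuration
    outputs: List[Dict] = field(default_factory=list)  # output file data

def save_outputs(result: PromptResult, out_dir: str = ".") -> list:
    """Write each output's bytes to disk and return the saved paths."""
    saved = []
    for output in result.outputs:
        path = Path(out_dir) / output["filename"]  # assumed key names
        path.write_bytes(output["data"])
        saved.append(str(path))
    return saved
```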

&lt;h2&gt;Editing the workflow&lt;/h2&gt;

&lt;p&gt;So far, we’ve seen how to make an API request with the workflow you uploaded when deploying. In this section, we will go over how to make a request using a different workflow. This approach works with any workflow that can run on your deployment, so make sure you have installed all the nodes and models the new workflow needs (see the “Deploying your workflow” section for how to add nodes and models to a deployment).&lt;/p&gt;

&lt;p&gt;The first step is to extract the workflow_api.json file for the new workflow. In this case, I will use &lt;a href="https://github.com/ViewComfy/cloud-public/blob/main/ViewComfy_API/example_workflow/wan_workflow_api_with_lora.json" rel="noopener noreferrer"&gt;this&lt;/a&gt; file, which is my Wan 2.1 workflow with the addition of a LoRA. You can then assign the path to the new workflow_api.json file to override_workflow_api_path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;override_workflow_api_path = "&amp;lt;path_to_your_new_workflow_api_file&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From that point onward, you can follow the steps from the previous section to get the parameters from the workflow and make the API call.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;ViewComfy provides a quick and flexible way to convert any ComfyUI workflow into an API. You can manage all of your deployments, edit them and add new models directly from the dashboard. They also come with a production-ready and scalable API out of the box.&lt;/p&gt;

&lt;p&gt;Using the example code from &lt;a href="https://github.com/ViewComfy/cloud-public/tree/main/ViewComfy_API" rel="noopener noreferrer"&gt;this&lt;/a&gt; repo, you can quickly integrate the API into your Python and TypeScript applications.&lt;/p&gt;

&lt;p&gt;Have questions or want to share your implementation? Join the discussion on our &lt;a href="https://discord.gg/wBuDqGsv" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or reach out to the team at &lt;a href="mailto:team@viewcomfy.com"&gt;team@viewcomfy.com&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
