<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Esha Sharma</title>
    <description>The latest articles on DEV Community by Esha Sharma (@aistudynow).</description>
    <link>https://dev.to/aistudynow</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3713614%2F52b1cf7b-2fd7-4fc7-93bb-a34bb912d9a2.jpg</url>
      <title>DEV Community: Esha Sharma</title>
      <link>https://dev.to/aistudynow</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aistudynow"/>
    <language>en</language>
    <item>
      <title>How to Copy Any Pose in ComfyUI and Fix AI Skin</title>
      <dc:creator>Esha Sharma</dc:creator>
      <pubDate>Thu, 05 Mar 2026 00:21:27 +0000</pubDate>
      <link>https://dev.to/aistudynow/how-to-copy-any-pose-in-comfyui-and-fix-ai-skin-4n57</link>
      <guid>https://dev.to/aistudynow/how-to-copy-any-pose-in-comfyui-and-fix-ai-skin-4n57</guid>
      <description>&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/f_UXuIFhAWY"&gt;&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This is a summarized guide. For the full JSON workflow and download files, check the original article on my site: &lt;a href="https://aistudynow.com/how-to-copy-any-pose-without-vnccs-pose-studio-comfyui-workflow/" rel="noopener noreferrer"&gt;https://aistudynow.com/how-to-copy-any-pose-without-vnccs-pose-studio-comfyui-workflow/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Manual 3D posing wastes time. You can extract poses directly from photos instead. This workflow auto-generates missing body parts and fixes plastic-looking skin. You control exactly what your character wears. The system offers a switch between GGUF and Safetensors models, and you can pick your image resolution instantly. Here is the exact process.&lt;/p&gt;

&lt;h2&gt;Set Up the Pose Recognition&lt;/h2&gt;

&lt;p&gt;The AI needs to understand the body position. You need a specific node for this.&lt;/p&gt;

&lt;p&gt;Add the following node created by aistudynow:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Qwen-VL&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open the node settings. Select this exact preset:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;prompt body posture&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This preset writes a perfect text prompt describing the body position. The model now knows what to do.&lt;/p&gt;

&lt;p&gt;Next, you need a specific LoRA. Download it and save it to your LoRA folder. Make sure it stays enabled. If you turn it off, you will get random backgrounds.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;VNCCS PoseStudio V5 LoRA&lt;/code&gt;&lt;/p&gt;
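
&lt;p&gt;If you prefer to script the download, here is a minimal sketch using only Python's standard library. The URL is a placeholder for wherever the LoRA is actually hosted, and the target path assumes a default ComfyUI install:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import urllib.request
from pathlib import Path

# Placeholder URL: substitute the real download link for the
# VNCCS PoseStudio V5 LoRA.
URL = "https://example.com/vnccs_posestudio_v5.safetensors"

# Default LoRA folder in a standard ComfyUI install.
lora_dir = Path("ComfyUI/models/loras")
lora_dir.mkdir(parents=True, exist_ok=True)

urllib.request.urlretrieve(URL, lora_dir / "vnccs_posestudio_v5.safetensors")
&lt;/code&gt;&lt;/pre&gt;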

&lt;h2&gt;Extract Poses From Complex Images&lt;/h2&gt;

&lt;p&gt;Let us test a difficult pose. We will use an image of a hand showing a middle finger. Most AI models fail at this.&lt;/p&gt;

&lt;p&gt;Enable "Camera 2" in the workflow. This camera processes external images. Upload your reference picture.&lt;/p&gt;

&lt;p&gt;Enter this exact trigger text into your prompt:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Draw character from image 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the workflow. The AI captures the posture perfectly.&lt;/p&gt;
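
&lt;p&gt;If you run this step often, you can queue the workflow through ComfyUI's HTTP API instead of clicking through the UI. A minimal sketch, assuming you exported the workflow with "Save (API Format)"; the node id "6" and input name "text" are placeholders you look up in your own export:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json
import urllib.request

# Load the workflow exported from ComfyUI in API format.
with open("workflow_api.json") as f:
    workflow = json.load(f)

# Hypothetical node id and input name: check your own export.
workflow["6"]["inputs"]["text"] = "Draw character from image 2"

# Queue the job on a default local ComfyUI server (port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
&lt;/code&gt;&lt;/pre&gt;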

&lt;h2&gt;Fix Plastic Skin and Enhance Details&lt;/h2&gt;

&lt;p&gt;The initial result might look like cheap plastic. The Qwen 2511 model often causes this issue. We will add details to fix it.&lt;/p&gt;

&lt;p&gt;Go to the detailer group:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;VNCCS Qwen Detailer&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Select the face target using this detector:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;face_yolov8&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the prompt again. The workflow crops and enhances the face. It looks better now, but it still has an artificial look.&lt;/p&gt;

&lt;p&gt;Go to the end of the workflow. Enable the final detail group:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Z-Image Turbo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run it one last time. The plastic skin vanishes. The character looks highly realistic. The exact hand pose remains intact.&lt;/p&gt;

&lt;h2&gt;Generate Missing Body Parts&lt;/h2&gt;

&lt;p&gt;Heavy detailers consume a lot of VRAM and can slow down your PC. Disable the "add details" and Z-Image groups for this next step.&lt;/p&gt;

&lt;p&gt;Load a half-body reference image. The AI detects the missing lower half and guesses the rest of the body automatically. But we want full control over the clothing.&lt;/p&gt;

&lt;p&gt;Go to your text prompt. Add this simple instruction:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;woman wearing jeans&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the workflow. The AI generates a full body. It dresses the woman in jeans perfectly.&lt;/p&gt;

&lt;h2&gt;Build Custom Poses From Scratch&lt;/h2&gt;

&lt;p&gt;You can also build custom poses using the VNCCS Pose Studio.&lt;/p&gt;

&lt;p&gt;Turn the 3D studio back on and flip the switch to "Camera 1". Look at the joints on the 3D dummy. Click the tiny yellow dot on any joint, and colored 3D circles will appear.&lt;/p&gt;

&lt;p&gt;Click and hold one circle. Drag your mouse. You can move the arm anywhere. Bend the elbow to create a perfect position.&lt;/p&gt;

&lt;p&gt;Go to your text prompt. Delete the clothing text. Leave only the magic trigger:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Draw character from image 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run it. The AI builds your exact custom pose.&lt;/p&gt;

&lt;h2&gt;Add Cinematic Lighting Automatically&lt;/h2&gt;

&lt;p&gt;You do not need to type long prompts for shadows. We control lighting directly in the node.&lt;/p&gt;

&lt;p&gt;Find the lighting hook on the Pose Studio. Connect this wire directly to your text box:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Lighting Prompt -&amp;gt; Text 1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open the lighting menu inside the 3D studio. Select a light. Change its direction. Drag it next to your 3D dummy.&lt;/p&gt;

&lt;p&gt;Run the generation again. The system reads your 3D lights. It adds beautiful and dramatic lighting to the final image. You get perfect shadows automatically.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>comfyui</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Running the 14B BitDance Model Locally on Low VRAM: Custom ComfyUI Nodes</title>
      <dc:creator>Esha Sharma</dc:creator>
      <pubDate>Mon, 23 Feb 2026 15:05:30 +0000</pubDate>
      <link>https://dev.to/aistudynow/running-the-14b-bitdance-model-locally-low-vram-created-custom-comfyui-nodes-gm4</link>
      <guid>https://dev.to/aistudynow/running-the-14b-bitdance-model-locally-low-vram-created-custom-comfyui-nodes-gm4</guid>
      <description>&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/4O9ATPbeQyg"&gt;&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Running massive 14-billion-parameter models locally often results in immediate Out of Memory (OOM) crashes on standard consumer GPUs. This is a summarized guide. For the full JSON workflow and download files, check the original article on my site.&lt;/p&gt;

&lt;h2&gt;The Architecture Problem&lt;/h2&gt;

&lt;p&gt;Unlike diffusion-style models that denoise a continuous latent, BitDance builds images token-by-token using a massive Binary Tokenizer capable of representing 2^256 states. Because it leverages a 14B language model, the text encoding phase is exceptionally heavy. This causes a massive VRAM spike that instantly crashes most hardware.&lt;/p&gt;
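
&lt;p&gt;To make that number concrete: if each image token is a vector of 256 independent bits, the implicit vocabulary is 2^256 distinct states, far more than any explicit VQ codebook could enumerate. A toy illustration (the shape is illustrative, not BitDance's actual layout):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch

# A toy binary token: 256 bits, so 2**256 possible states per token.
token = torch.randint(0, 2, (256,), dtype=torch.uint8)

# A typical VQ codebook enumerates a few thousand entries;
# 2**256 is roughly 1.16e77, far too many to store explicitly.
print(2**256)
&lt;/code&gt;&lt;/pre&gt;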

&lt;h2&gt;The Solution: FP8 Conversion &amp;amp; Dynamic Offloading&lt;/h2&gt;

&lt;p&gt;To bypass these memory limits, I built a custom ComfyUI node suite and converted the model weights to FP8. This significantly reduces the memory footprint while maintaining near-full visual fidelity.&lt;/p&gt;
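
&lt;p&gt;The core idea fits in a few lines. A minimal sketch, assuming PyTorch 2.1+ for the float8 dtype; the rule for which tensors stay in higher precision and the encode-then-offload flow are illustrative, not the exact logic my nodes use:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch

def convert_to_fp8(state_dict):
    """Cast large weight matrices to FP8 (e4m3), keeping small or
    sensitive tensors such as norms and biases in full precision."""
    out = {}
    for name, tensor in state_dict.items():
        if tensor.ndim &amp;gt;= 2 and "norm" not in name:
            out[name] = tensor.to(torch.float8_e4m3fn)
        else:
            out[name] = tensor
    return out

# Dynamic offloading: keep the heavy 14B text encoder on the GPU only
# for the encoding pass, then push it back to CPU before sampling.
# text_encoder and encode are placeholders for the real objects.
def encode_then_offload(text_encoder, prompt):
    text_encoder.to("cuda")
    embeddings = text_encoder.encode(prompt)
    text_encoder.to("cpu")
    torch.cuda.empty_cache()
    return embeddings
&lt;/code&gt;&lt;/pre&gt;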

&lt;p&gt;Read the full guide and get the workflow here: &lt;a href="https://aistudynow.com/how-to-fix-the-generic-face-bug-in-bitdance-14b-optimize-speed/" rel="noopener noreferrer"&gt;https://aistudynow.com/how-to-fix-the-generic-face-bug-in-bitdance-14b-optimize-speed/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Optimizing TeleStyle in ComfyUI: Zero-Morphing Style Transfer on 6GB VRAM</title>
      <dc:creator>Esha Sharma</dc:creator>
      <pubDate>Sun, 15 Feb 2026 21:14:59 +0000</pubDate>
      <link>https://dev.to/aistudynow/optimizing-telestyle-in-comfyui-zero-morphing-style-transfer-on-6gb-vram-6pm</link>
      <guid>https://dev.to/aistudynow/optimizing-telestyle-in-comfyui-zero-morphing-style-transfer-on-6gb-vram-6pm</guid>
      <description>&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/yHbaFDF083o"&gt;&lt;/iframe&gt;&lt;/p&gt;

&lt;p&gt;I recently developed a TeleStyle Custom Node for ComfyUI that uses the Wan 2.1 engine to transfer styles to images and videos without morphing. While the compressed model file is compact (5GB), standard workflows can still be sluggish or produce visual artifacts on lower-end hardware.&lt;/p&gt;

&lt;p&gt;This is a summarized guide. For the full JSON workflow and download files, check the original article on my site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aistudynow.com/how-to-fix-slow-style-transfer-in-comfyui-run-telestyle-on-6gb-vram/" rel="noopener noreferrer"&gt;https://aistudynow.com/how-to-fix-slow-style-transfer-in-comfyui-run-telestyle-on-6gb-vram/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aistudynow/Comfyui-tetestyle-image-video" rel="noopener noreferrer"&gt;https://github.com/aistudynow/Comfyui-tetestyle-image-video&lt;/a&gt;&lt;/p&gt;

</description>
      <category>comfyui</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>FLUX.2 Klein Guide: Run 9B Models on 12GB VRAM</title>
      <dc:creator>Esha Sharma</dc:creator>
      <pubDate>Thu, 22 Jan 2026 06:43:41 +0000</pubDate>
      <link>https://dev.to/aistudynow/flux2-klein-guide-run-9b-models-on-12gb-vram-2kj</link>
      <guid>https://dev.to/aistudynow/flux2-klein-guide-run-9b-models-on-12gb-vram-2kj</guid>
      <description>&lt;p&gt;
  &lt;iframe src="https://www.youtube.com/embed/jV7SNCMEKMw"&gt;&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I spent the last 48 hours stress-testing the new FLUX.2 Klein models, running them on everything from an RTX 4090 to a struggling 12GB RTX 3060. The verdict? They are incredibly fast, but the workflow is full of traps.&lt;/p&gt;

&lt;p&gt;If you tried loading this recently, you likely hit the &lt;code&gt;RuntimeError: mat1 and mat2 shapes cannot be multiplied&lt;/code&gt; crash, or saw generation speeds tank just by changing resolutions. I found the fixes.&lt;/p&gt;
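
&lt;p&gt;That error is a plain shape mismatch inside a linear layer: the incoming activations do not match the weight matrix the checkpoint expects, which is why changing resolutions or mixing mismatched model files can trigger it. A minimal reproduction of the same error class, with arbitrary sizes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch

a = torch.randn(1, 4096)     # activations with hidden size 4096
w = torch.randn(3072, 3072)  # a weight expecting hidden size 3072

# RuntimeError: mat1 and mat2 shapes cannot be multiplied
# (1x4096 and 3072x3072)
torch.matmul(a, w)
&lt;/code&gt;&lt;/pre&gt;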

&lt;p&gt;This is a summarized guide. For the full JSON workflow and download files, check the original article on my site.&lt;br&gt;
&lt;a href="https://aistudynow.com/flux-2-klein-fix-crashes-run-9b-on-6gb-vram-workflow-download/" rel="noopener noreferrer"&gt;https://aistudynow.com/flux-2-klein-fix-crashes-run-9b-on-6gb-vram-workflow-download/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>comfyui</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
