<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RHU</title>
    <description>The latest articles on DEV Community by RHU (@tomotto1296).</description>
    <link>https://dev.to/tomotto1296</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3836424%2Fc2f2b9a0-7d2a-4486-91cd-489354c15ff6.png</url>
      <title>DEV Community: RHU</title>
      <link>https://dev.to/tomotto1296</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tomotto1296"/>
    <language>en</language>
    <item>
      <title>anima_pipeline – browser UI + LLM ComfyUI Anima image generation automation</title>
      <dc:creator>RHU</dc:creator>
      <pubDate>Sat, 21 Mar 2026 04:01:01 +0000</pubDate>
      <link>https://dev.to/tomotto1296/animapipeline-browser-ui-llm-comfyui-anima-image-generation-automation-39eg</link>
      <guid>https://dev.to/tomotto1296/animapipeline-browser-ui-llm-comfyui-anima-image-generation-automation-39eg</guid>
      <description>&lt;p&gt;I've been generating anime-style images with ComfyUI's Anima workflow for a while, and the manual overhead kept bothering me: look up Danbooru tags for each character, hand-type hair/eye/outfit details, repeat for every variation.&lt;br&gt;
So I designed and shipped anima_pipeline — a single Python script that runs a local HTTP server (localhost:7860), serves a browser UI, and acts as middleware between the UI and ComfyUI. Most of the implementation was done with AI coding assistance (Claude/Codex), but the design decisions, specs, and all the testing were mine.&lt;br&gt;
Project page: &lt;a href="https://tomotto1296.github.io/anima-pipeline/index_en.html" rel="noopener noreferrer"&gt;https://tomotto1296.github.io/anima-pipeline/index_en.html&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/tomotto1296/anima-pipeline" rel="noopener noreferrer"&gt;https://github.com/tomotto1296/anima-pipeline&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;How it works&lt;/h3&gt;

&lt;p&gt;The server is a single file (anima_pipeline.py) built on Python's stdlib http.server plus the requests package. No frameworks, no build step.&lt;br&gt;
The browser UI talks to the server through a small REST API (POST /generate, GET /settings, GET /gallery, etc.).&lt;br&gt;
On generation, the server optionally calls an LLM (any /v1/chat/completions-compatible endpoint: LM Studio, Gemini, or another OpenAI-compatible server), injects the resulting prompt into a ComfyUI workflow JSON, and POSTs the patched graph to ComfyUI's queue API.&lt;br&gt;
Progress is streamed back to the browser via ComfyUI's WebSocket endpoint.&lt;/p&gt;
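&lt;p&gt;As a rough sketch of that flow (the helper names and model name are mine, not the project's; the endpoint paths follow the OpenAI-compatible and ComfyUI conventions described above):&lt;/p&gt;

```python
def build_tag_prompt(description: str) -> dict:
    """Payload for any /v1/chat/completions-compatible endpoint
    (LM Studio, a Gemini proxy, etc.)."""
    return {
        "model": "local-model",  # hypothetical model name
        "messages": [
            {"role": "system",
             "content": "Translate the description into Danbooru-style English tags."},
            {"role": "user", "content": description},
        ],
    }

def queue_generation(workflow: dict, comfy_url: str = "http://127.0.0.1:8188") -> str:
    """POST a patched workflow graph to ComfyUI's queue API and return the prompt id.
    8188 is ComfyUI's default port; adjust for your setup."""
    import requests  # the project's only non-stdlib dependency
    resp = requests.post(f"{comfy_url}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    return resp.json()["prompt_id"]
```

&lt;p&gt;The actual script adds settings, gallery, and progress plumbing around this core, but the round trip is just these two HTTP calls.&lt;/p&gt;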

&lt;h3&gt;Design decisions I'm happy to discuss&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No framework dependency.&lt;/strong&gt; The constraint I set from the start: stdlib + requests only. The target users run this on Windows alongside ComfyUI, so minimizing install friction mattered more than anything else.&lt;br&gt;
&lt;strong&gt;LLM as optional middleware.&lt;/strong&gt; The LLM's only job is translating Japanese character names and scene descriptions into Danbooru-style English tags. If you skip it, you type English tags directly, which is faster and fully offline.&lt;br&gt;
&lt;strong&gt;i18n without a JS library.&lt;/strong&gt; The UI supports a Japanese/English toggle. I used a lookup-table replacement pass over the DOM, with normalized matching for full-width/half-width equivalence. The tricky parts were dynamic labels and making round-trip switching (JA→EN→JA) leave no stale strings behind.&lt;br&gt;
&lt;strong&gt;Log masking for non-developer users.&lt;/strong&gt; I share this with friends who aren't developers. The requirement: mask token, api key, authorization, and Bearer ... patterns before writing to disk, so users can export a log ZIP from the UI without worrying about credential leaks.&lt;br&gt;
&lt;strong&gt;Workflow JSON injection.&lt;/strong&gt; ComfyUI's API format is a JSON graph of nodes. Parse the workflow, locate the prompt nodes and KSampler by ID, patch the values, and POST the modified graph. Node IDs are auto-detected on workflow selection, with manual override for edge cases.&lt;/p&gt;
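&lt;p&gt;The masking rule can be approximated with a couple of regexes (illustrative patterns, not the project's exact ones):&lt;/p&gt;

```python
import re

# Illustrative patterns: key=value / key: value pairs, plus bare Bearer tokens.
_SECRET_KEYS = re.compile(r'(?i)\b(token|api[_-]?key|authorization)\b(\s*[:=]\s*)(\S+)')
_BEARER = re.compile(r'(?i)\bBearer\s+[A-Za-z0-9._\-]+')

def mask_secrets(line: str) -> str:
    """Redact credential-looking values before a log line is written to disk."""
    line = _SECRET_KEYS.sub(lambda m: m.group(1) + m.group(2) + "***", line)
    return _BEARER.sub("Bearer ***", line)
```

&lt;p&gt;Running every line through a filter like this before it hits the log file is what makes the one-click ZIP export safe to hand over.&lt;/p&gt;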
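&lt;p&gt;The injection step might look roughly like this (the node class names follow ComfyUI's API-format JSON; the helper functions are hypothetical):&lt;/p&gt;

```python
import copy

def find_nodes(workflow: dict, class_type: str) -> list[str]:
    """Auto-detect node IDs by class_type in a ComfyUI API-format workflow."""
    return [nid for nid, node in workflow.items()
            if node.get("class_type") == class_type]

def patch_workflow(workflow: dict, prompt: str, seed: int) -> dict:
    """Return a copy of the graph with the prompt text and sampler seed patched in."""
    wf = copy.deepcopy(workflow)
    text_ids = find_nodes(wf, "CLIPTextEncode")
    sampler_ids = find_nodes(wf, "KSampler")
    if text_ids:
        wf[text_ids[0]]["inputs"]["text"] = prompt
    if sampler_ids:
        wf[sampler_ids[0]]["inputs"]["seed"] = seed
    return wf
```

&lt;p&gt;Auto-detection by class_type covers the common case; the manual override exists for workflows with several text encoders where picking the first one is wrong.&lt;/p&gt;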

&lt;h3&gt;A few things I noticed&lt;/h3&gt;

&lt;p&gt;LLM quality for Danbooru tag generation varies a lot by model. Smaller local models (3–7B) frequently hallucinate tag formats; Gemini free tier was more reliable for this narrow task.&lt;br&gt;
The hardest UX problem was making the tool debuggable for non-technical users without asking them to open a terminal. The log ZIP export came from hitting that wall repeatedly.&lt;/p&gt;

&lt;h3&gt;Current state&lt;/h3&gt;

&lt;p&gt;v1.4.7 released 2026-03-20&lt;br&gt;
Runs on Windows via a .bat launcher; cross-platform via &lt;code&gt;python anima_pipeline.py&lt;/code&gt;&lt;br&gt;
Requires ComfyUI + an Anima workflow (not bundled, for IP reasons)&lt;br&gt;
LLM is optional; works fully offline without it&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foundation refactor (v1.5.0)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Codebase split from a single anima_pipeline.py into core/ + frontend/ modules&lt;br&gt;
Generation history DB — browse past prompts, re-edit, and regenerate&lt;br&gt;
Hierarchical presets (character / scene / camera / quality / LoRA / composite)&lt;br&gt;
Named session multi-save&lt;br&gt;
Character &amp;amp; series name JA/EN split (name_en, series_en)&lt;br&gt;
Setup self-diagnostics UI (GET /diagnostics)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility &amp;amp; stability (v1.5.01–v1.5.11)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ComfyUI Portable launcher support (bundled python_embeded prioritized)&lt;br&gt;
Anima2 workflow templates added alongside existing Anima1 templates&lt;br&gt;
Minimal distribution ZIP for clean installs&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New features (v1.5.12–v1.5.20)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LoRA search, favorites, recommended weight persistence&lt;br&gt;
Theme toggle: Light / Dark / Device&lt;br&gt;
Today's Mood — one-click random character + scene generation&lt;br&gt;
Prompt diff viewer in history modal (compare against previous generation)&lt;br&gt;
Random character preset auto-generation&lt;br&gt;
Preset bundle Export/Import (GET /presets_export, POST /presets_import) — share your full preset set as a ZIP&lt;br&gt;
Mobile UI improvements&lt;/p&gt;
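&lt;p&gt;A prompt diff like the one in the history modal can be built with the stdlib alone; a minimal sketch (not the project's actual implementation):&lt;/p&gt;

```python
import difflib

def prompt_diff(old: str, new: str) -> list[str]:
    """Unified diff between two comma-separated tag prompts, one tag per line."""
    old_tags = [t.strip() for t in old.split(",")]
    new_tags = [t.strip() for t in new.split(",")]
    return list(difflib.unified_diff(old_tags, new_tags,
                                     "previous", "current", lineterm=""))
```

&lt;p&gt;Splitting on commas before diffing keeps the output tag-granular, so changing one tag shows as one removed and one added line instead of a whole-prompt change.&lt;/p&gt;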

&lt;p&gt;Existing settings and presets remain fully compatible.&lt;br&gt;
→ Latest release: &lt;a href="https://github.com/tomotto1296/anima-pipeline/releases/tag/v1.5.20" rel="noopener noreferrer"&gt;https://github.com/tomotto1296/anima-pipeline/releases/tag/v1.5.20&lt;/a&gt;&lt;br&gt;
→ Civitai: &lt;a href="https://civitai.com/models/2480257" rel="noopener noreferrer"&gt;https://civitai.com/models/2480257&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>opensource</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
