<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: chraem</title>
    <description>The latest articles on DEV Community by chraem (@musket_49876618cd11).</description>
    <link>https://dev.to/musket_49876618cd11</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3574586%2F1546755d-bd5a-448f-8330-c908f4c3f0af.webp</url>
      <title>DEV Community: chraem</title>
      <link>https://dev.to/musket_49876618cd11</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/musket_49876618cd11"/>
    <language>en</language>
    <item>
      <title>We Tried the Chat-on-Left, Output-on-Right Pattern for AI Figures. It Failed. Here's What Worked.</title>
      <dc:creator>chraem</dc:creator>
      <pubDate>Mon, 16 Feb 2026 18:14:11 +0000</pubDate>
      <link>https://dev.to/musket_49876618cd11/we-tried-the-chat-on-left-output-on-right-pattern-for-ai-figures-it-failed-heres-what-worked-2d8n</link>
      <guid>https://dev.to/musket_49876618cd11/we-tried-the-chat-on-left-output-on-right-pattern-for-ai-figures-it-failed-heres-what-worked-2d8n</guid>
      <description>&lt;p&gt;Hey DEV community!&lt;/p&gt;

&lt;p&gt;I'm Mert — I have a background in bioinformatics research and I recently launched an AI tool for creating scientific figures. I wanted to share some honest lessons about the UX decisions and failures we went through, because I think they apply to anyone building AI-powered creative tools.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Researchers spend an absurd amount of time making figures. A colleague of mine spent &lt;strong&gt;three days&lt;/strong&gt; in matplotlib just trying to match color palettes across six figures after a reviewer asked for revisions. Three days. For colors.&lt;/p&gt;

&lt;p&gt;The existing workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write Python/R code to generate each plot individually&lt;/li&gt;
&lt;li&gt;Export each as PNG&lt;/li&gt;
&lt;li&gt;Open PowerPoint or Illustrator&lt;/li&gt;
&lt;li&gt;Manually arrange panels A, B, C, D&lt;/li&gt;
&lt;li&gt;Realize the fonts/colors don't match&lt;/li&gt;
&lt;li&gt;Go back to code, tweak, re-export, re-arrange&lt;/li&gt;
&lt;li&gt;Repeat until deadline&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We built &lt;a href="https://ai.plottie.art" rel="noopener noreferrer"&gt;Plottie AI&lt;/a&gt; to fix this. But the first version was wrong.&lt;/p&gt;

&lt;h2&gt;Failure #1: The Chat Pattern Doesn't Work for Visual Composition&lt;/h2&gt;

&lt;p&gt;Every AI tool in 2025 had the same layout: chat on the left, result on the right. We copied it.&lt;/p&gt;

&lt;p&gt;The problem? Researchers don't create &lt;strong&gt;one&lt;/strong&gt; figure. They create &lt;strong&gt;8-24 panels&lt;/strong&gt; that need to look consistent. With a chat interface, each figure is an isolated conversation. You can't see them side by side. You can't compare color palettes. You always end up in PowerPoint anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What we built instead:&lt;/strong&gt; An infinite canvas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn8pynwwddixixm767l9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn8pynwwddixixm767l9.png" alt="Infinite canvas with multiple figure cards" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think Figma, not ChatGPT. Multiple AI-generated figures live on the same surface. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See all your figures at once&lt;/li&gt;
&lt;li&gt;Drag them into "Frames" for multi-panel composition&lt;/li&gt;
&lt;li&gt;Export the whole thing as one PNG/SVG/PDF&lt;/li&gt;
&lt;li&gt;Swap color palettes across 20+ journal-specific presets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: one beta tester's workflow went from &lt;strong&gt;90 minutes → 15 minutes&lt;/strong&gt; for a complete multi-panel figure.&lt;/p&gt;

&lt;h2&gt;Failure #2: AI Output Needs to Be Editable&lt;/h2&gt;

&lt;p&gt;Our V1 treated AI-generated figures as final. Generate → export → done.&lt;/p&gt;

&lt;p&gt;Researchers &lt;strong&gt;hated&lt;/strong&gt; it.&lt;/p&gt;

&lt;p&gt;AI gets you 80% there, but the last 20% matters: exact hex codes for Nature's style guide, specific font sizes for figure legends, precise axis label formatting. "Close enough" doesn't work in academic publishing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; We rebuilt everything to output editable SVGs. Every element (text, axes, legends, colors) is adjustable after generation. We also added 20+ one-click color palettes matching specific journals (Nature, Science, Cell, Lancet).&lt;/p&gt;
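
&lt;p&gt;If you haven't worked with SVG programmatically, the key point is that the exported figure stays a tree of addressable elements rather than a grid of pixels. A toy sketch of that idea in Python (the element ids and hex values here are made up for illustration, not Plottie's actual export format):&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

# Build a tiny figure the way an editable-SVG export might:
# every element is a node whose attributes you can retarget later.
# The id and colors are illustrative placeholders.
svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg")
title = ET.SubElement(svg, "text", id="panel-title", fill="#1b9e77")
title.text = "Panel A"

# Post-generation edit: swap in a different palette hex and font size
# without regenerating anything.
title.set("fill", "#e7298a")
title.set("font-size", "8pt")

out = ET.tostring(svg, encoding="unicode")
```

&lt;p&gt;A PNG would bake those choices in at export time; the SVG keeps them as attributes you can keep adjusting until they match the style guide.&lt;/p&gt;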

&lt;h2&gt;Failure #3: Single LLM = Single Point of Failure&lt;/h2&gt;

&lt;p&gt;V1 relied on a single AI provider. When that provider rate-limited us or had an outage, our whole service went down with it. For researchers racing a paper deadline, that's unacceptable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Multi-LLM architecture — Claude, Gemini, and GPT with task-aware routing. Data plots go through a code sandbox (E2B), diagrams go through an Excalidraw-based pipeline. If one provider is slow, requests route to another. The user never notices.&lt;/p&gt;
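
&lt;p&gt;The fallback logic is simpler than it sounds. Here is a minimal sketch of the idea; the provider names, task routing table, and &lt;code&gt;call_provider&lt;/code&gt; stub are illustrative placeholders, not our production code:&lt;/p&gt;

```python
# Sketch of task-aware routing with ordered fallback.
# Everything here is a placeholder, not a real provider SDK.

PROVIDERS = {
    "plot":    ["claude", "gemini", "gpt"],   # routed through the code sandbox
    "diagram": ["gemini", "gpt", "claude"],   # routed through the diagram pipeline
}

def call_provider(name, prompt):
    """Stand-in for a real API call; 'claude' raises to simulate an outage."""
    if name == "claude":
        raise TimeoutError(f"{name} is rate-limited")
    return f"{name}: figure for {prompt!r}"

def generate(task, prompt):
    """Try providers in task-specific order, falling through on any failure."""
    errors = []
    for name in PROVIDERS[task]:
        try:
            return call_provider(name, prompt)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

print(generate("plot", "volcano plot"))  # prints "gemini: figure for 'volcano plot'"
```

&lt;p&gt;In production you'd also want per-provider timeouts and health tracking, but the ordered-fallback loop is the core of why users never notice a slow provider.&lt;/p&gt;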

&lt;h2&gt;What Surprised Us&lt;/h2&gt;

&lt;p&gt;We built for data plots (bar charts, scatter plots, heatmaps). But the biggest surprise was demand for &lt;strong&gt;diagrams&lt;/strong&gt;: flowcharts, CONSORT diagrams, pathway diagrams, scientific illustrations. Researchers don't just plot data — they explain processes.&lt;/p&gt;

&lt;p&gt;So we integrated a full diagram editor (Excalidraw) next to the data plotting engine. You can have a volcano plot and a CONSORT flowchart on the same canvas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr785jk0nx3zkyx82k14k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr785jk0nx3zkyx82k14k.png" alt="Diagram editor" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Stack (for the curious)&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Engine&lt;/strong&gt;: Python + FastAPI + multi-LLM routing + E2B sandbox&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Next.js 15 + Excalidraw + Konva (canvas)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Go + Gin + Typesense (search) + Cloudflare R2 (storage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth&lt;/strong&gt;: Supabase (shared via cookie across subdomains)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt;: Cloudflare Pages (frontend) + Fly.io (backend) + Docker (AI engine)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Numbers So Far&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Launched: January 21, 2026&lt;/li&gt;
&lt;li&gt;Beta users: ~3,000&lt;/li&gt;
&lt;li&gt;Figures created: 3,000+&lt;/li&gt;
&lt;li&gt;Most popular: volcano plots, heatmaps, flowcharts&lt;/li&gt;
&lt;li&gt;Free tier: 15 credits/day (enough for several figures)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;If you want to play with it: &lt;strong&gt;&lt;a href="https://ai.plottie.art" rel="noopener noreferrer"&gt;ai.plottie.art&lt;/a&gt;&lt;/strong&gt; — free, no card required.&lt;/p&gt;

&lt;p&gt;If you're building AI creative tools and ran into similar UX challenges (chat vs. canvas, single vs. multi-model, editable vs. static output), I'd love to compare notes in the comments.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm &lt;a href="https://x.com/jianhuamert" rel="noopener noreferrer"&gt;Mert&lt;/a&gt;, building &lt;a href="https://plottie.art" rel="noopener noreferrer"&gt;Plottie&lt;/a&gt; — an AI platform for scientific figures. Previously a bioinformatics researcher who was really, really tired of matplotlib.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>science</category>
      <category>ux</category>
    </item>
    <item>
      <title>Learning Chinese as a Developer: A Minimal Practice Method That Works</title>
      <dc:creator>chraem</dc:creator>
      <pubDate>Sat, 01 Nov 2025 21:23:09 +0000</pubDate>
      <link>https://dev.to/musket_49876618cd11/learning-chinese-as-a-developer-a-minimal-practice-method-that-works-3nc1</link>
      <guid>https://dev.to/musket_49876618cd11/learning-chinese-as-a-developer-a-minimal-practice-method-that-works-3nc1</guid>
      <description>&lt;h1&gt;
  
  
  I Built TypingMandarin — Learn Chinese by Typing What You Hear (Because Flashcards Weren’t Working)
&lt;/h1&gt;

&lt;p&gt;Hey devs 👋&lt;/p&gt;

&lt;p&gt;This is a small side project I’ve been building to solve a problem I kept running into while learning Chinese (and helping others learn it).&lt;/p&gt;

&lt;p&gt;I could &lt;em&gt;recognize&lt;/em&gt; words when reading or listening…&lt;br&gt;
…but when I tried to &lt;em&gt;say&lt;/em&gt; them, especially with the correct &lt;strong&gt;pinyin + tones&lt;/strong&gt;, my brain would blank.&lt;/p&gt;

&lt;p&gt;Flashcards helped with recognition.&lt;br&gt;
But not &lt;strong&gt;recall&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So I started asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When does recall &lt;em&gt;actually&lt;/em&gt; happen?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For me, the breakthrough came when I practiced &lt;strong&gt;typing&lt;/strong&gt; what I heard instead of just reviewing cards. Typing forces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Listening&lt;/strong&gt; → accurate sound perception&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active recall&lt;/strong&gt; → pulling pinyin from memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Muscle memory&lt;/strong&gt; → reinforcing tones through repetition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That led me to build:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://typingmandarin.com" rel="noopener noreferrer"&gt;https://typingmandarin.com&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;A simple web app where you listen to short Chinese sentences and type what you hear.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;No accounts required.&lt;br&gt;
No gamified distractions.&lt;br&gt;
Just &lt;strong&gt;listen → type → reinforce&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Why this works (memory-wise)&lt;/h2&gt;

&lt;p&gt;There’s a well-documented principle in cognitive psychology:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Active recall + feedback strengthens long-term memory more than passive review.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Typing what you hear triggers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input processing (listening)&lt;/li&gt;
&lt;li&gt;Retrieval (recall)&lt;/li&gt;
&lt;li&gt;Precision correction (tones, spelling)&lt;/li&gt;
&lt;li&gt;Repetition (muscle memory)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s the same reason people who &lt;em&gt;take notes by hand&lt;/em&gt; remember more than people who highlight PDFs.&lt;br&gt;
&lt;strong&gt;Effort builds memory.&lt;/strong&gt;&lt;/p&gt;
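
&lt;p&gt;The check itself can be tiny. A sketch of the listen → type → check step using tone-number pinyin like &lt;code&gt;ni3 hao3&lt;/code&gt; (the syllables and feedback format here are illustrative, not the app's actual code):&lt;/p&gt;

```python
from itertools import zip_longest

def normalize(answer):
    """Lowercase and collapse whitespace so formatting doesn't count against you."""
    return " ".join(answer.lower().split())

def check(typed, expected):
    """Compare syllable by syllable so a tone slip is pinpointed, not just 'wrong'."""
    pairs = zip_longest(
        normalize(expected).split(),
        normalize(typed).split(),
        fillvalue="",
    )
    return [(exp, got, got == exp) for exp, got in pairs]

result = check("ni3 hao4", "ni3 hao3")
# → [("ni3", "ni3", True), ("hao3", "hao4", False)]
```

&lt;p&gt;Per-syllable feedback matters: it tells you whether you missed the sound or just the tone, which is exactly the correction loop flashcards skip.&lt;/p&gt;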




&lt;h2&gt;Who this is for&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Beginners who want &lt;strong&gt;pinyin / tones to finally make sense&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Intermediate learners who &lt;em&gt;understand more than they can say&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Heritage learners wanting to reconnect with Chinese in daily life&lt;/li&gt;
&lt;li&gt;Developers who just want one consistent, low-pressure practice habit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can do &lt;strong&gt;5 minutes a day&lt;/strong&gt; and it still works.&lt;/p&gt;




&lt;h2&gt;What I’m building next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Personal review mode&lt;/li&gt;
&lt;li&gt;Playback-speed control&lt;/li&gt;
&lt;li&gt;Shadowing mode&lt;/li&gt;
&lt;li&gt;Voice input (experimental)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If there’s anything you’d like to see — I’d genuinely love to hear it.&lt;/p&gt;




&lt;h2&gt;If you're learning Chinese (or have tried in the past)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What actually helped &lt;em&gt;you&lt;/em&gt; make progress?&lt;/strong&gt;&lt;br&gt;
Was it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immersion?&lt;/li&gt;
&lt;li&gt;Flashcards?&lt;/li&gt;
&lt;li&gt;Conversation practice?&lt;/li&gt;
&lt;li&gt;TV dramas?&lt;/li&gt;
&lt;li&gt;Music + lyrics?&lt;/li&gt;
&lt;li&gt;Something else entirely?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m collecting methods → testing them → turning them into small daily drills.&lt;/p&gt;

&lt;p&gt;Would love your thoughts 🙏&lt;br&gt;
Thanks for reading — and happy learning ✨&lt;/p&gt;

&lt;p&gt;—&lt;br&gt;
&lt;a href="https://typingmandarin.com" rel="noopener noreferrer"&gt;https://typingmandarin.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sideprojects</category>
      <category>webdev</category>
      <category>languagelearning</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
