<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Red Wolfman</title>
    <description>The latest articles on DEV Community by Red Wolfman (@red_wolfman_680dc4b3efc3e).</description>
    <link>https://dev.to/red_wolfman_680dc4b3efc3e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3613374%2F6715df89-587b-431c-8b59-8414a7b2988c.png</url>
      <title>DEV Community: Red Wolfman</title>
      <link>https://dev.to/red_wolfman_680dc4b3efc3e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/red_wolfman_680dc4b3efc3e"/>
    <language>en</language>
    <item>
      <title>How I Built Book-Writer-AI in a Few Days: Tech Stack, Architecture &amp; Challenges</title>
      <dc:creator>Red Wolfman</dc:creator>
      <pubDate>Sun, 16 Nov 2025 07:39:12 +0000</pubDate>
      <link>https://dev.to/red_wolfman_680dc4b3efc3e/how-i-built-book-writer-ai-in-a-few-days-tech-stack-architecture-challenges-2maa</link>
      <guid>https://dev.to/red_wolfman_680dc4b3efc3e/how-i-built-book-writer-ai-in-a-few-days-tech-stack-architecture-challenges-2maa</guid>
      <description>&lt;p&gt;Over the last few days, I built and launched a small SaaS called Book-Writer-AI — a tool that generates full books using AI, chapter by chapter, with controllable tone, pacing, characters and structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s available here: &lt;a href="https://book-writer-ai.com" rel="noopener noreferrer"&gt;https://book-writer-ai.com&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(Some books generated by users are already public and readable on the site — a surprisingly fun bonus feature.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is the story of how I built it fast using &lt;br&gt;
PHP, vanilla SQL, Bootstrap, Redis, Claude + OpenAI APIs, and Stripe&lt;/strong&gt;, and the technical challenges that came with generating long-form narratives using LLMs.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Tech Stack
&lt;/h2&gt;

&lt;p&gt;Because I wanted to ship fast, I used a very lean and predictable stack:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Backend:&lt;/em&gt; PHP (vanilla, no framework — to keep it fast &amp;amp; simple)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Database:&lt;/em&gt; MySQL with manually designed SQL tables&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cache / Queue:&lt;/em&gt; Redis&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Frontend:&lt;/em&gt; Bootstrap&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI Models:&lt;/em&gt; Claude 3.5 Sonnet, with OpenAI GPT-4.1 as a fallback&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Payments:&lt;/em&gt; Stripe&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hosting:&lt;/em&gt; A basic Ubuntu VPS&lt;/p&gt;

&lt;p&gt;I built almost everything in a few days — which forced me to focus only on what mattered for an MVP: &lt;strong&gt;structure, coherence, and predictable generation.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 The Challenge: LLMs Are Bad at Writing Long Books
&lt;/h2&gt;

&lt;p&gt;One of the first issues with using AI to write books is something every developer who works with LLMs knows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LLMs have short memories.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Even with large context windows (200k+), long-form consistency is still a problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Characters change personality mid-story&lt;/li&gt;
&lt;li&gt;Plot threads get forgotten&lt;/li&gt;
&lt;li&gt;Style and tone drift&lt;/li&gt;
&lt;li&gt;Previously generated sections become irrelevant&lt;/li&gt;
&lt;li&gt;“Context stuffing” becomes expensive and slow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Trying to generate a full 30k–50k-word book in a single long prompt is simply impossible — or at least unreliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So I needed a system capable of generating small, coherent pieces while keeping them connected to a global narrative structure.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📚 Solution: A Multi-Layered Story Architecture
&lt;/h2&gt;

&lt;p&gt;To deal with LLM limitations, I built the backend around two key SQL structures.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Overall Plot Structure Table
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;plot_structure&lt;/code&gt; contains the entire macro-structure of the book: acts, arcs, turning points, midpoint, climax, resolution, etc.&lt;/p&gt;

&lt;p&gt;The idea:&lt;br&gt;
→ The model should always know where we are in the story.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;plot_structure (
  act_1_percentage,
  act_1_description,
  act_1_key_events,
  act_2_description,
  midpoint,
  climax,
  resolution
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When generating chapters, I feed the relevant slice of this structure — not the whole book — keeping the prompt short, cheap, and focused.&lt;/p&gt;

&lt;p&gt;This prevents the “chapter 7 has nothing to do with chapter 3” syndrome.&lt;/p&gt;
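&lt;p&gt;As a rough illustration of that slicing (a Python sketch with made-up field names; the real backend is PHP, and the real logic is surely more nuanced), the idea looks something like this:&lt;/p&gt;

```python
# Hypothetical sketch (field names are illustrative, the real backend is PHP):
# pick only the slice of plot_structure relevant to one chapter, based on
# how far through the book that chapter falls.
def plot_slice(plot, chapter_index, total_chapters):
    progress = 100.0 * chapter_index / total_chapters
    if progress > plot["act_1_percentage"]:
        # past act 1: feed the act 2 description plus the next big beat
        return {"act": plot["act_2_description"], "next_beat": plot["midpoint"]}
    # still inside act 1: feed its description and key events only
    return {"act": plot["act_1_description"], "key_events": plot["act_1_key_events"]}
```

&lt;p&gt;The point is that each prompt carries only the structural context it needs, nothing more.&lt;/p&gt;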




&lt;h2&gt;
  
  
  2. Fine-Grained Chapter Part Table
&lt;/h2&gt;

&lt;p&gt;Instead of generating an entire chapter at once, I split chapters into smaller parts, each with targeted metadata.&lt;br&gt;
This is stored in the &lt;code&gt;chapter_parts&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;Each part includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POV&lt;/li&gt;
&lt;li&gt;Characters involved&lt;/li&gt;
&lt;li&gt;Setting&lt;/li&gt;
&lt;li&gt;Atmosphere&lt;/li&gt;
&lt;li&gt;Key events&lt;/li&gt;
&lt;li&gt;Word-count targets&lt;/li&gt;
&lt;li&gt;Ratios for tone, tension, dialogue, pace&lt;/li&gt;
&lt;li&gt;Writing instructions&lt;/li&gt;
&lt;li&gt;And finally: &lt;strong&gt;the generated content&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lets me ask the LLM to focus on a 300–500 word micro-scene with very specific goals instead of a massive 2k–4k word chapter.&lt;/p&gt;
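&lt;p&gt;As a sketch of that idea (Python with illustrative column names; the real schema and backend are SQL and PHP), a single part record can be turned into a tightly scoped micro-scene prompt:&lt;/p&gt;

```python
# Hypothetical sketch: build a focused micro-scene prompt from one
# chapter_parts row (column names here are illustrative, not the real schema).
def part_prompt(part):
    lines = [
        f"Write a {part['word_target']}-word scene from {part['pov']}'s point of view.",
        f"Setting: {part['setting']}. Atmosphere: {part['atmosphere']}.",
        f"Characters present: {', '.join(part['characters'])}.",
        f"Key events to cover: {part['key_events']}.",
    ]
    return "\n".join(lines)
```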




&lt;h2&gt;
  
  
  🎚️ Tone, Pacing &amp;amp; Style via Ratio-Based Controls
&lt;/h2&gt;

&lt;p&gt;One of the features I added is a lightweight “ratio-based tone system”.&lt;/p&gt;

&lt;p&gt;Every chapter part contains numeric weights like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tension&lt;/li&gt;
&lt;li&gt;descriptive_tone&lt;/li&gt;
&lt;li&gt;character_development&lt;/li&gt;
&lt;li&gt;action_level&lt;/li&gt;
&lt;li&gt;emotional_intensity&lt;/li&gt;
&lt;li&gt;dialogue_ratio&lt;/li&gt;
&lt;li&gt;pacing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All values are normalized to a 0–1 scale (I used a “1-based ratio” design for rapid prototyping).&lt;/p&gt;

&lt;p&gt;These values are injected into the prompt like:&lt;/p&gt;

&lt;p&gt;“Increase dialogue to 0.70, reduce descriptive tone to 0.30, maintain tension at 0.55.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This gives the LLM guidance without micromanagement, resulting in more consistent stylistic identity across the book.&lt;/strong&gt;&lt;/p&gt;
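&lt;p&gt;Sketched in Python (the key names are illustrative and the actual backend is PHP), the injection step is just serializing the weights into one instruction line:&lt;/p&gt;

```python
# Hypothetical sketch: serialize the 0-1 ratio weights of a chapter part
# into a single instruction line for the prompt (key names are illustrative).
RATIO_KEYS = ["tension", "descriptive_tone", "dialogue_ratio", "pacing"]

def ratio_instructions(ratios):
    parts = [
        f"{key.replace('_', ' ')}: {ratios[key]:.2f}"
        for key in RATIO_KEYS
        if key in ratios
    ]
    return "Target style ratios (0-1 scale): " + ", ".join(parts)
```

&lt;p&gt;Numeric targets like these tend to survive paraphrasing by the model better than long stylistic descriptions do.&lt;/p&gt;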




&lt;h2&gt;
  
  
  ⚡ Redis for Speed &amp;amp; Retry Logic
&lt;/h2&gt;

&lt;p&gt;Since generation is slow, expensive, and sometimes fails, Redis handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job queues&lt;/li&gt;
&lt;li&gt;Status tracking (pending, writing, finished, failed)&lt;/li&gt;
&lt;li&gt;Retry logic&lt;/li&gt;
&lt;li&gt;Caching previously generated plot elements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the PHP backend extremely lean.&lt;/p&gt;
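&lt;p&gt;The status and retry flow can be sketched like this (Python for illustration, with a plain dict standing in for the Redis hashes; the production code is PHP talking to real Redis):&lt;/p&gt;

```python
# Hypothetical sketch of the status / retry bookkeeping; a plain dict stands
# in for the Redis hashes (HSET/HGET), and the real backend is PHP, not Python.
MAX_RETRIES = 3

def run_job(jobs, job_id, generate):
    job = jobs[job_id]
    job["status"] = "writing"
    try:
        job["result"] = generate()
        job["status"] = "finished"
    except Exception:
        job["retries"] = job.get("retries", 0) + 1
        # re-queue until the retry budget is exhausted, then mark as failed
        job["status"] = "failed" if job["retries"] >= MAX_RETRIES else "pending"
```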




&lt;h2&gt;
  
  
  👀 Making Books Publicly Readable (A Surprisingly Good Feature)
&lt;/h2&gt;

&lt;p&gt;I wasn’t planning it originally, but I added the ability for users to make their generated books publicly readable and shareable.&lt;/p&gt;

&lt;p&gt;This turned out to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A discovery feature&lt;/li&gt;
&lt;li&gt;A social-proof feature&lt;/li&gt;
&lt;li&gt;A traffic generator&lt;/li&gt;
&lt;li&gt;A retention loop (users return to see each other’s books)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve already seen users browse other AI-generated books just out of curiosity.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⏱️ Built in a Few Days
&lt;/h2&gt;

&lt;p&gt;The entire system — plot generator, chapter generator, database schema, UI, payment logic — was built in a few days.&lt;/p&gt;

&lt;p&gt;It is absolutely not perfect.&lt;br&gt;
But it works.&lt;br&gt;
It generates readable multi-chapter stories with decent consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And most importantly:&lt;br&gt;
It ships.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧪 What I Learned About AI Book Generation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;LLMs need structure&lt;/strong&gt;, not freedom&lt;/li&gt;
&lt;li&gt;Long-context models still drift, even with 200k tokens&lt;/li&gt;
&lt;li&gt;Breaking everything into small parts is essential&lt;/li&gt;
&lt;li&gt;Coherence is an architectural problem, not just a prompting problem&lt;/li&gt;
&lt;li&gt;SQL is a great “memory extension mechanism” for LLMs&lt;/li&gt;
&lt;li&gt;Tone ratios give more control than “write like X” prompts, though I used those too&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📬 If You Want to Try It
&lt;/h2&gt;

&lt;p&gt;You can check it out here:&lt;br&gt;
&lt;a href="https://book-writer-ai.com" rel="noopener noreferrer"&gt;https://book-writer-ai.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some books are already publicly readable — feel free to explore or generate your own.&lt;/p&gt;




&lt;h2&gt;
  
  
  ❓ Question to the Dev.to Community
&lt;/h2&gt;

&lt;p&gt;For those of you with experience in SaaS or indie projects:&lt;/p&gt;

&lt;p&gt;What’s the best way to promote something like this effectively?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-form content?&lt;/li&gt;
&lt;li&gt;Reddit?&lt;/li&gt;
&lt;li&gt;YouTube?&lt;/li&gt;
&lt;li&gt;Partnerships?&lt;/li&gt;
&lt;li&gt;SEO?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Or a completely different approach?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I’d love your honest thoughts.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>saas</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
