How I Built Book-Writer-AI in a Few Days: Tech Stack, Architecture & Challenges

Over the last few days, I built and launched a small SaaS called Book-Writer-AI — a tool that generates full books using AI, chapter by chapter, with controllable tone, pacing, characters and structure.

It’s available here: https://book-writer-ai.com

(Some books generated by users are already public and readable on the site — a surprisingly fun bonus feature.)

This is the story of how I built it fast using PHP, vanilla SQL, Bootstrap, Redis, the Claude and OpenAI APIs, and Stripe, and of the technical challenges that came with generating long-form narratives using LLMs.


🚀 Tech Stack

Because I wanted to ship fast, I used a very lean and predictable stack:

Backend: PHP (vanilla, no framework — to keep it fast & simple)

Database: MySQL with manually designed SQL tables

Cache / Queue: Redis

Frontend: Bootstrap

AI Models: Claude 3.5 Sonnet, with OpenAI GPT-4.1 as a fallback (sketched below)

Payments: Stripe

Hosting: A basic Ubuntu VPS

I built almost everything in a few days, which forced me to focus only on what mattered for an MVP: **structure, coherence, and predictable generation.**
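The Claude-first, GPT-4.1-fallback wiring looks roughly like this. This is a minimal sketch assuming direct cURL calls: the Anthropic endpoint, headers, and model ID are real, but `callOpenAI` and the error handling are my assumptions, not the production code.

```php
<?php
// Minimal fallback sketch (assumed helper names, not the production code):
// try Claude first, fall back to OpenAI when the call fails.

function callClaude(string $prompt): ?string {
    $ch = curl_init('https://api.anthropic.com/v1/messages');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'x-api-key: ' . getenv('ANTHROPIC_API_KEY'),
            'anthropic-version: 2023-06-01',
            'content-type: application/json',
        ],
        CURLOPT_POSTFIELDS => json_encode([
            'model'      => 'claude-3-5-sonnet-20241022',
            'max_tokens' => 2048,
            'messages'   => [['role' => 'user', 'content' => $prompt]],
        ]),
    ]);
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
    curl_close($ch);
    if ($body === false || $code !== 200) {
        return null; // signal "try the fallback"
    }
    return json_decode($body, true)['content'][0]['text'] ?? null;
}

function generateText(string $prompt): string {
    $text = callClaude($prompt);
    if ($text === null) {
        $text = callOpenAI($prompt); // hypothetical twin helper for GPT-4.1
    }
    if ($text === null) {
        throw new RuntimeException('Both providers failed');
    }
    return $text;
}
```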


🧠 The Challenge: LLMs Are Bad at Writing Long Books

One of the first issues with using AI to write books is something every developer who works with LLMs knows:

LLMs have short memories.

Even with large context windows (200k+), long-form consistency is still a problem:

Characters change personality mid-story

Plot threads get forgotten

Style and tone drift

Previously generated sections become irrelevant

“Context stuffing” becomes expensive and slow

Trying to generate a full 30k–50k-word book in a single long prompt is simply impossible — or at least unreliable.

So I needed a system capable of generating small, coherent pieces while keeping them connected to a global narrative structure.


📚 Solution: A Multi-Layered Story Architecture

To deal with LLM limitations, I built the backend around two key SQL structures.


1. Overall Plot Structure Table

plot_structure contains the entire macro-structure of the book: acts, arcs, turning points, midpoint, climax, resolution, etc.

The idea:
→ The model should always know where we are in the story.

```sql
-- Column types and keys are illustrative; the post only lists the fields.
CREATE TABLE plot_structure (
    book_id           INT NOT NULL,
    act_1_percentage  TINYINT,
    act_1_description TEXT,
    act_1_key_events  TEXT,
    act_2_description TEXT,
    midpoint          TEXT,
    climax            TEXT,
    resolution        TEXT
);
```

When generating chapters, I feed the relevant slice of this structure — not the whole book — keeping the prompt short, cheap, and focused.

This prevents the “chapter 7 has nothing to do with chapter 3” syndrome.
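To make that concrete, here is a rough sketch of how a "relevant slice" prompt could be assembled. The column names follow the table above, but the query shape and prompt wording are my guesses, not the actual code.

```php
<?php
// Sketch: feed the model only the slice of plot_structure it needs right now.
// $pdo is an open PDO connection; everything else is an assumption.

function buildChapterContext(PDO $pdo, int $bookId): string {
    $stmt = $pdo->prepare(
        'SELECT act_1_description, act_1_key_events, midpoint
           FROM plot_structure
          WHERE book_id = ?'
    );
    $stmt->execute([$bookId]);
    $plot = $stmt->fetch(PDO::FETCH_ASSOC);

    // Only the relevant slice goes into the prompt: short, cheap, focused.
    return "You are writing Act 1.\n"
         . "Act summary: {$plot['act_1_description']}\n"
         . "Key events to hit: {$plot['act_1_key_events']}\n"
         . "The story must build toward this midpoint: {$plot['midpoint']}";
}
```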


2. Fine-Grained Chapter Part Table

Instead of generating an entire chapter at once, I split chapters into smaller parts, each with targeted metadata.
This is stored in the chapter_parts table.

Each part includes:

  • POV
  • Characters involved
  • Setting
  • Atmosphere
  • Key events
  • Word-count targets
  • Ratios for tone, tension, dialogue, pace
  • Writing instructions
  • And finally: the generated content

This lets me ask the LLM to focus on a 300–500 word micro-scene with very specific goals instead of a massive 2k–4k word chapter.
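In practice, a single part might look something like this. The field names mirror the list above; the values and the prompt template are invented for the example.

```php
<?php
// Illustrative chapter_parts row (field names mirror the list above;
// values and prompt wording are made up for this sketch).
$part = [
    'pov'               => 'third-person limited, following Mara',
    'characters'        => 'Mara, the ferryman',
    'setting'           => 'river crossing at dusk',
    'atmosphere'        => 'uneasy, fog-bound',
    'key_events'        => 'Mara discovers the ferryman knows her name',
    'word_count_target' => 400,
    'instructions'      => 'end on an unanswered question',
];

// One focused micro-scene request instead of a 2k-4k word chapter.
$prompt = "Write a {$part['word_count_target']}-word scene.\n"
        . "POV: {$part['pov']}\n"
        . "Characters: {$part['characters']}\n"
        . "Setting: {$part['setting']} ({$part['atmosphere']})\n"
        . "Key event: {$part['key_events']}\n"
        . "Instruction: {$part['instructions']}";
```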


🎚️ Tone, Pacing & Style via Ratio-Based Controls

One of the features I added is a lightweight “ratio-based tone system”.

Every chapter part contains numeric weights like:

  • tension
  • descriptive_tone
  • character_development
  • action_level
  • emotional_intensity
  • dialogue_ratio
  • pacing

All values are normalized to a 0–1 scale (a simple flat-ratio design I chose for rapid prototyping).

These values are injected into the prompt like:

“Increase dialogue to 0.70, reduce descriptive tone to 0.30, maintain tension at 0.55.”

This gives the LLM guidance without micromanagement, resulting in more consistent stylistic identity across the book.
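Turning the stored weights into that instruction line is a few lines of PHP. The weight names come from the list above; the sentence template is an assumption about how the real prompts are phrased.

```php
<?php
// Sketch: render per-part ratio weights as one prompt instruction line.
function ratioInstructions(array $weights): string {
    $parts = [];
    foreach ($weights as $name => $value) {
        $parts[] = sprintf('%s at %.2f', str_replace('_', ' ', $name), $value);
    }
    return 'Target these stylistic levels (0-1 scale): '
         . implode(', ', $parts) . '.';
}

echo ratioInstructions([
    'dialogue_ratio'   => 0.70,
    'descriptive_tone' => 0.30,
    'tension'          => 0.55,
]);
// Target these stylistic levels (0-1 scale): dialogue ratio at 0.70,
// descriptive tone at 0.30, tension at 0.55.
```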


⚡ Redis for Speed & Retry Logic

Since generation is slow, expensive, and sometimes fails, Redis handles:

Job queues

Status (pending, writing, finished, failed)

Retry logic

Caching previously generated plot elements

This keeps the PHP backend extremely lean.
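The whole job flow fits in one worker loop. A sketch using the phpredis extension; the key names, retry limit, and `generatePart` helper are assumptions, since the post doesn't show the real queue code.

```php
<?php
// Sketch of the Redis job flow: queue, status tracking, retry on failure.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Producer: enqueue a chapter-part generation job.
$redis->lPush('jobs:chapter_parts', json_encode(['part_id' => 42]));
$redis->hSet('job:42', 'status', 'pending');

// Worker loop: pop a job, track its status, retry up to 3 times.
while (true) {
    $popped = $redis->brPop(['jobs:chapter_parts'], 5);
    if (!$popped) continue; // timeout, poll again

    $job = json_decode($popped[1], true);
    $key = 'job:' . $job['part_id'];
    $redis->hSet($key, 'status', 'writing');

    try {
        generatePart($job['part_id']); // hypothetical generation helper
        $redis->hSet($key, 'status', 'finished');
    } catch (Throwable $e) {
        if ($redis->hIncrBy($key, 'attempts', 1) < 3) {
            $redis->lPush('jobs:chapter_parts', $popped[1]); // re-queue
        } else {
            $redis->hSet($key, 'status', 'failed');
        }
    }
}
```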


👀 Making Books Publicly Readable (A Surprisingly Good Feature)

I wasn’t planning it originally, but I added the ability for users to make their generated books publicly readable and shareable.

This turned out to be:

A discovery feature

A social proof feature

A traffic generator

A retention loop (users return to see each other’s books)

I’ve already seen users browse other AI-generated books just out of curiosity.


⏱️ Built in a Few Days

The entire system — plot generator, chapter generator, database schema, UI, payment logic — was built in a few days.

It is absolutely not perfect.
But it works.
It generates readable multi-chapter stories with decent consistency.

And most importantly:
It ships.


🧪 What I Learned About AI Book Generation

LLMs need structure, not freedom

Long-context models still drift, even with 200k tokens

Breaking everything into small parts is essential

Coherence is an architectural problem, not just a prompting problem

SQL is a great “memory extension mechanism” for LLMs

Tone ratios give more control than trying to force "write like X" prompts (though I used those too)


📬 If You Want to Try It

You can check it out here:
https://book-writer-ai.com

Some books are already publicly readable — feel free to explore or generate your own.


❓ Question to the Dev.to Community

For those of you with experience in SaaS or indie projects:

What’s the best way to promote something like this effectively?

Long-form content?

Reddit?

YouTube?

Partnerships?

SEO?

Or a completely different approach?

I’d love your honest thoughts.
