<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kefa</title>
    <description>The latest articles on DEV Community by kefa (@aiwriterk).</description>
    <link>https://dev.to/aiwriterk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3761152%2Fd94a889b-8205-4563-814d-93be030885ec.png</url>
      <title>DEV Community: kefa</title>
      <link>https://dev.to/aiwriterk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aiwriterk"/>
    <language>en</language>
    <item>
      <title>I built a “deterministic” LLM text rephraser with a validation pipeline - looking for architectural feedback</title>
      <dc:creator>kefa</dc:creator>
      <pubDate>Mon, 09 Feb 2026 06:16:48 +0000</pubDate>
      <link>https://dev.to/aiwriterk/i-built-a-deterministic-llm-text-rephraser-with-a-validation-pipeline-looking-for-architectural-1046</link>
      <guid>https://dev.to/aiwriterk/i-built-a-deterministic-llm-text-rephraser-with-a-validation-pipeline-looking-for-architectural-1046</guid>
      <description>&lt;p&gt;Most LLM apps that “rewrite text” are thin wrappers around an API call.&lt;br&gt;
You send text → you get text back.&lt;br&gt;
That works for demos.&lt;br&gt;
It breaks down quickly when you want predictable behavior, quotas, abuse resistance, and quality guarantees without storing user data.&lt;br&gt;
I built a prototype called AI Text Rephrase to explore a question:&lt;br&gt;
Can you make an LLM text transformation service behave like a deterministic backend system instead of a probabilistic chatbot?&lt;br&gt;
This post is about the architecture and trade-offs, not the product.&lt;br&gt;
App: &lt;a href="https://app.aitechfuture.net" rel="noopener noreferrer"&gt;https://app.aitechfuture.net&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The core problem&lt;/strong&gt;&lt;br&gt;
LLM rewriting is non-deterministic and unbounded by default:&lt;br&gt;
  • Output style drifts&lt;br&gt;
  • Sometimes it summarizes instead of rephrasing&lt;br&gt;
  • Sometimes it changes meaning&lt;br&gt;
  • Sometimes it ignores the requested tone&lt;br&gt;
  • Sometimes it returns explanations, lists, or commentary&lt;br&gt;
  • Sometimes it fails silently&lt;br&gt;
If you expose this directly as an API, you get:&lt;br&gt;
  • inconsistent UX&lt;br&gt;
  • hard-to-debug failures&lt;br&gt;
  • quota abuse&lt;br&gt;
  • unpredictable cost&lt;br&gt;
  • no way to enforce “this is a rephrase, not a rewrite”&lt;br&gt;
So instead of trusting the model, I wrapped it in a fixed pipeline with validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The design principle&lt;/strong&gt;&lt;br&gt;
The LLM is not trusted. It is treated as an unreliable subsystem whose output must pass validation before it is accepted.&lt;br&gt;
Every request goes through this flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rate limit&lt;/li&gt;
&lt;li&gt;Tier identification (anonymous vs authenticated)&lt;/li&gt;
&lt;li&gt;Quota check&lt;/li&gt;
&lt;li&gt;Input validation (length bounds)&lt;/li&gt;
&lt;li&gt;Text preprocessing&lt;/li&gt;
&lt;li&gt;LLM inference (temperature = 0, single output)&lt;/li&gt;
&lt;li&gt;Semantic validation&lt;/li&gt;
&lt;li&gt;Tone adherence validation&lt;/li&gt;
&lt;li&gt;Response assembly&lt;/li&gt;
&lt;li&gt;Quota increment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If validation fails, inference is retried once; then the request fails.&lt;br&gt;
No heuristics. No “looks good”. Pure thresholds.&lt;/p&gt;
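&lt;p&gt;The flow above boils down to a retry-once-then-fail loop around inference. A minimal sketch, where the helper bodies are illustrative stubs and not the actual implementation:&lt;/p&gt;

```python
# Sketch of the fixed pipeline: infer, validate, retry at most once, fail hard.
# infer() and validate() are stand-ins for the real LLM call and checks.

class RephraseFailed(Exception):
    pass

def infer(text: str) -> str:
    # Stand-in for the temperature-0, single-output LLM call.
    return text.replace("fast", "quick")

def validate(original: str, candidate: str) -> bool:
    # Stand-in for the semantic / tone / length-ratio checks.
    return candidate != original

def handle_rephrase(text: str, max_retries: int = 1) -> str:
    """Run inference; accept only validated output; no heuristics."""
    for _ in range(max_retries + 1):
        candidate = infer(text)
        if validate(text, candidate):
            return candidate
    raise RephraseFailed("output rejected after retry")

print(handle_rephrase("a fast rewrite"))  # "a quick rewrite"
```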

&lt;p&gt;&lt;strong&gt;The interesting part: the validation layer&lt;/strong&gt;&lt;br&gt;
After inference, three checks happen:&lt;br&gt;
&lt;strong&gt;1) Semantic similarity check&lt;/strong&gt;&lt;br&gt;
Using sentence embeddings:&lt;br&gt;
cosine_similarity(original, rephrased) ≥ threshold&lt;br&gt;
If meaning drifts → reject.&lt;br&gt;
&lt;strong&gt;2) Tone adherence check&lt;/strong&gt;&lt;br&gt;
Simple linguistic heuristics:&lt;br&gt;
  • average word length&lt;br&gt;
  • formality markers&lt;br&gt;
  • structure patterns&lt;br&gt;
If tone is wrong → reject.&lt;br&gt;
&lt;strong&gt;3) Output format check&lt;/strong&gt;&lt;br&gt;
Length ratio must be within bounds.&lt;br&gt;
If the model summarizes or expands too much → reject.&lt;br&gt;
This turned out to matter more than prompt engineering.&lt;/p&gt;
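&lt;p&gt;Checks 1 and 3 can be sketched in a few lines. The post uses sentence embeddings; a bag-of-words vector stands in here so the example has no dependencies, and the thresholds are assumed values, not the real ones:&lt;/p&gt;

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    # Bag-of-words cosine; the real pipeline would embed full sentences.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values()))
    norm *= math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def accept(original: str, rephrased: str,
           sim_threshold: float = 0.5,
           ratio_lo: float = 0.6, ratio_hi: float = 1.6) -> bool:
    # 1) semantic check: reject if meaning drifts
    sim = cosine_similarity(original, rephrased)
    # 3) format check: reject summaries and over-expansions via length ratio
    ratio = len(rephrased) / max(len(original), 1)
    return sim >= sim_threshold and ratio >= ratio_lo and ratio_hi >= ratio
```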

&lt;p&gt;&lt;strong&gt;Deterministic constraints (hard rules)&lt;/strong&gt;&lt;br&gt;
These cannot change at runtime:&lt;br&gt;
  • very low temperature&lt;br&gt;
  • single output only&lt;br&gt;
  • fixed set of tones&lt;br&gt;
  • validation always enabled&lt;br&gt;
  • no dynamic prompt mutation&lt;br&gt;
  • max 1 retry on failure&lt;br&gt;
The goal is to make the system behave predictably across requests.&lt;/p&gt;
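&lt;p&gt;One way to enforce “cannot change at runtime” is to freeze the rules in an immutable config object. The values below are assumptions for illustration; the post fixes the shape of the constraints, not the exact numbers:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # any runtime mutation raises FrozenInstanceError
class PipelineConfig:
    temperature: float = 0.0                         # very low temperature
    num_outputs: int = 1                             # single output only
    tones: tuple = ("neutral", "formal", "casual")   # fixed set of tones
    validation_enabled: bool = True                  # cannot be toggled
    max_retries: int = 1                             # max 1 retry on failure

CONFIG = PipelineConfig()
```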

&lt;p&gt;&lt;strong&gt;Why SQLite?&lt;/strong&gt;&lt;br&gt;
This is controversial.&lt;br&gt;
I intentionally used SQLite because:&lt;br&gt;
  • Single-file persistence&lt;br&gt;
  • No external DB&lt;br&gt;
  • Zero infrastructure overhead&lt;br&gt;
  • Prototype constraint: single instance, single writer&lt;br&gt;
The database stores only:&lt;br&gt;
  • users&lt;br&gt;
  • sessions&lt;br&gt;
  • quota counters&lt;br&gt;
  • OTPs&lt;br&gt;
It does not store:&lt;br&gt;
  • input text&lt;br&gt;
  • output text&lt;br&gt;
  • history&lt;br&gt;
This forces the system to be stateless regarding content and simplifies privacy concerns.&lt;/p&gt;
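&lt;p&gt;A schema along these lines fits in one script: four small tables and deliberately no content tables. Table and column names below are assumptions, not the actual schema:&lt;/p&gt;

```python
import sqlite3

# In production this would be a single file (e.g. app.db); in-memory here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users    (id INTEGER PRIMARY KEY, email TEXT UNIQUE NOT NULL);
CREATE TABLE sessions (token TEXT PRIMARY KEY,
                       user_id INTEGER REFERENCES users(id),
                       expires_at TEXT NOT NULL);
CREATE TABLE quotas   (user_id INTEGER PRIMARY KEY REFERENCES users(id),
                       used INTEGER NOT NULL DEFAULT 0,
                       period_start TEXT NOT NULL);
CREATE TABLE otps     (email TEXT NOT NULL, code TEXT NOT NULL,
                       expires_at TEXT NOT NULL);
-- deliberately no table for input text, output text, or history
""")
```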

&lt;p&gt;&lt;strong&gt;API gateway before business logic&lt;/strong&gt;&lt;br&gt;
All cross-cutting concerns live before the pipeline:&lt;br&gt;
  • OTP authentication&lt;br&gt;
  • quota manager&lt;br&gt;
  • sliding window rate limiter&lt;br&gt;
  • request routing&lt;br&gt;
The rephrase pipeline never knows who the user is.&lt;br&gt;
It only receives validated input.&lt;br&gt;
This separation made debugging and reasoning about failures much easier.&lt;/p&gt;
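&lt;p&gt;The sliding window rate limiter can be sketched with a per-client deque of timestamps; the limit and window size here are assumed values:&lt;/p&gt;

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client within a sliding window."""

    def __init__(self, limit: int = 10, window_s: float = 60.0):
        self.limit = limit
        self.window_s = window_s
        self.hits = defaultdict(deque)  # client key -> request timestamps

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # evict timestamps that fell out of the window
        while q and now - q[0] >= self.window_s:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over quota: reject before the pipeline runs
        q.append(now)
        return True
```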

&lt;p&gt;&lt;strong&gt;Why a minimal frontend?&lt;/strong&gt;&lt;br&gt;
No framework. No build step.&lt;br&gt;
Because this is not a frontend problem.&lt;br&gt;
The goal was to reduce moving parts and make Docker deployment trivial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this design prevents&lt;/strong&gt;&lt;br&gt;
This architecture prevents:&lt;br&gt;
  • prompt injection via user text&lt;br&gt;
  • quota exhaustion by bots&lt;br&gt;
  • style drift&lt;br&gt;
  • meaning drift&lt;br&gt;
  • random output shapes&lt;br&gt;
  • cost spikes from multi-output retries&lt;br&gt;
  • storing user content for debugging&lt;br&gt;
It behaves more like a compiler pipeline than an AI app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Known limitations (by design)&lt;/strong&gt;&lt;br&gt;
  • SQLite single-writer model&lt;br&gt;
  • No horizontal scaling&lt;br&gt;
  • In-memory embedding model load at startup&lt;br&gt;
  • No streaming responses&lt;br&gt;
  • No rephrase history&lt;br&gt;
All intentional for this stage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I’m looking for feedback on&lt;/strong&gt;&lt;br&gt;
I’m not looking for UI or feature feedback.&lt;br&gt;
I’d love input from people who’ve built LLM systems on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is semantic + tone validation a reasonable guardrail, or would you do this differently?&lt;/li&gt;
&lt;li&gt;Is “retry once then fail” the right trade-off?&lt;/li&gt;
&lt;li&gt;Would you move any validation before inference?&lt;/li&gt;
&lt;li&gt;Is SQLite acceptable here given the constraints?&lt;/li&gt;
&lt;li&gt;Any architectural smell in the pipeline separation?&lt;/li&gt;
&lt;li&gt;How would you evolve this toward multi-instance without breaking the design?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can try to break it here:&lt;br&gt;
&lt;a href="https://app.aitechfuture.net" rel="noopener noreferrer"&gt;https://app.aitechfuture.net&lt;/a&gt;&lt;br&gt;
I’d really appreciate thoughts from folks working on LLM infra and backend systems.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>discuss</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
