<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matthew Karaula</title>
    <description>The latest articles on DEV Community by Matthew Karaula (@karamatt_).</description>
    <link>https://dev.to/karamatt_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870896%2F694b2914-e442-40b8-8150-719fdf1e96fb.png</url>
      <title>DEV Community: Matthew Karaula</title>
      <link>https://dev.to/karamatt_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karamatt_"/>
    <language>en</language>
    <item>
      <title>Building a PDF Parser for Financial Data: Lessons from Arbiter V2</title>
      <dc:creator>Matthew Karaula</dc:creator>
      <pubDate>Fri, 01 May 2026 09:40:59 +0000</pubDate>
      <link>https://dev.to/karamatt_/building-a-pdf-parser-for-financial-data-lessons-from-arbiter-v2-4ddo</link>
      <guid>https://dev.to/karamatt_/building-a-pdf-parser-for-financial-data-lessons-from-arbiter-v2-4ddo</guid>
      <description>&lt;p&gt;I’m Matthew, building Arbiter Briefs — an AI engine that helps founders make high-stakes decisions. This week we shipped financial PDF ingestion, and I want to walk through the architecture, the gotchas, and why we chose regex over ML for extraction.&lt;br&gt;
The Problem&lt;br&gt;
Our v1 was generating rulings based on web research + user input. But founders kept saying the same thing: “This would be way more useful if you actually read my financial data.”&lt;br&gt;
So we added PDF upload. But now we had a new problem: how do you reliably extract structured financial metrics from PDFs that could be formatted a hundred different ways?&lt;br&gt;
We could’ve gone full ML pipeline. Instead, we went pragmatic.&lt;/p&gt;

&lt;p&gt;Architecture Overview&lt;/p&gt;

&lt;p&gt;PDF Upload (multer) &lt;br&gt;
  ↓&lt;br&gt;
Storage (Railway volume)&lt;br&gt;
  ↓&lt;br&gt;
Parse (pdf-parse)&lt;br&gt;
  ↓&lt;br&gt;
Extract (regex + heuristics)&lt;br&gt;
  ↓&lt;br&gt;
Store (PostgreSQL JSONB)&lt;br&gt;
  ↓&lt;br&gt;
Use in Ruling (context injection)&lt;/p&gt;

&lt;p&gt;Simple. Async. Testable.&lt;/p&gt;

&lt;p&gt;Step 1: Upload (Multer)&lt;br&gt;
We use multer for file handling — it’s simple, battle-tested, and handles multipart form data without fuss.&lt;/p&gt;

&lt;p&gt;Upload constraints:&lt;br&gt;
    • Max 10MB per file (covers P&amp;amp;Ls, balance sheets, cap tables)&lt;br&gt;
    • Max 5 files per analysis (prevents abuse)&lt;br&gt;
    • Only PDF files accepted&lt;br&gt;
    • In-memory buffer (files are saved to disk immediately after)&lt;/p&gt;
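
&lt;p&gt;A minimal sketch of that accept/reject logic (names are illustrative, not our production code; in practice the size and count caps go into multer’s limits option and the type check fits its fileFilter hook):&lt;/p&gt;

```javascript
// Illustrative upload gate encoding the constraints above.
const MAX_FILE_BYTES = 10 * 1024 * 1024; // 10MB per file
const MAX_FILES = 5;                     // per analysis

function acceptUpload(file, existingCount) {
  // Only PDFs, by declared MIME type or filename extension.
  const isPdf =
    file.mimetype === "application/pdf" ||
    file.originalname.toLowerCase().endsWith(".pdf");
  if (!isPdf) { return { ok: false, reason: "only_pdf" }; }
  // Math.max(size, cap) equals the cap only when size is within the cap.
  if (Math.max(file.size, MAX_FILE_BYTES) !== MAX_FILE_BYTES) {
    return { ok: false, reason: "too_large" };
  }
  // Reject once the analysis already holds MAX_FILES documents.
  if (Math.min(existingCount, MAX_FILES) === MAX_FILES) {
    return { ok: false, reason: "too_many_files" };
  }
  return { ok: true };
}
```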

&lt;p&gt;Why these limits?&lt;br&gt;
    • 10MB keeps parsing under 5 seconds&lt;br&gt;
    • 5 files per analysis is enough context without overwhelming the system&lt;br&gt;
    • Railway volumes can handle it without quota issues&lt;/p&gt;

&lt;p&gt;Step 2: Storage (Railway Persistent Volume)&lt;br&gt;
We’re on Railway. Persistent volumes are simple: you mount a folder at /app/uploads, and files survive deploys.&lt;br&gt;
Folder structure: /uploads/{userId}/{analysisId}/{uuid}-filename.pdf&lt;/p&gt;

&lt;p&gt;This approach:&lt;br&gt;
    • Keeps files organized and private (users can’t enumerate each other’s documents)&lt;br&gt;
    • Makes cleanup easy (delete an analysis folder, files are gone)&lt;br&gt;
    • Survives deploys without S3 complexity&lt;br&gt;
Why not S3?&lt;br&gt;
    • We’re pre-launch. S3 adds cost (~$0.023/GB/month) and infrastructure overhead&lt;br&gt;
    • Railway volumes are free up to 5GB&lt;br&gt;
    • We can migrate to S3 in 30 minutes when we hit scale&lt;br&gt;
Why this folder structure?&lt;br&gt;
    • Privacy isolation: each user’s files are in their own path&lt;br&gt;
    • Easy multi-tenant if we ever need it&lt;br&gt;
    • Simple to debug (“ls /uploads/{userId}/{analysisId}” shows what’s there)&lt;/p&gt;

&lt;p&gt;Step 3: Parse (pdf-parse Library)&lt;br&gt;
We use pdf-parse (npm package) to extract text and metadata from PDFs. It handles the heavy lifting — text extraction, page count, embedded metadata.&lt;/p&gt;

&lt;p&gt;Why pdf-parse?&lt;br&gt;
    • Lightweight (~50KB, no external dependencies)&lt;br&gt;
    • Fast (parses a 20-page PDF in &amp;lt;1 second)&lt;br&gt;
    • Good enough for searchable PDFs&lt;br&gt;
Caveat: pdf-parse struggles with:&lt;br&gt;
    • Scanned PDFs (images, not text)&lt;br&gt;
    • Heavily formatted tables (preserves layout, loses structure)&lt;br&gt;
    • Non-standard encodings&lt;/p&gt;

&lt;p&gt;For Alpha 2, we’re assuming users upload searchable PDFs. If we get complaints, we’ll upgrade to a heavier library or integrate GPT-4o’s PDF API.&lt;/p&gt;

&lt;p&gt;Step 4: Extract (Regex + Heuristics)&lt;/p&gt;

&lt;p&gt;We detect document type first (P&amp;amp;L vs. balance sheet vs. cap table) by scanning for keyword signals. Then we extract relevant metrics using regex patterns.&lt;/p&gt;

&lt;p&gt;Document type detection:&lt;br&gt;
Look for keywords like “profit and loss” → P&amp;amp;L, “balance sheet” → balance sheet, “cap table” → cap table. If we hit 2+ keywords for a type, that’s our label.&lt;/p&gt;

&lt;p&gt;Metric extraction:&lt;/p&gt;

&lt;p&gt;Once we know the type, we target specific line items:&lt;br&gt;
    • P&amp;amp;L: Revenue, COGS, gross profit, operating expenses, EBITDA, net income&lt;br&gt;
    • Balance Sheet: Total assets, cash, liabilities, equity, debt&lt;br&gt;
    • Cap Table: Share classes, fully diluted, option pool&lt;/p&gt;
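
&lt;p&gt;The two-keyword voting rule from document type detection can be sketched like this (the keyword lists here are illustrative and shorter than our real ones):&lt;/p&gt;

```javascript
// Keyword signals per document type; first match with 2+ hits wins.
const TYPE_SIGNALS = {
  p_and_l: ["profit and loss", "income statement", "cost of goods sold", "gross profit"],
  balance_sheet: ["balance sheet", "total assets", "total liabilities", "retained earnings"],
  cap_table: ["cap table", "fully diluted", "option pool", "share class"],
};

function detectDocumentType(text) {
  const haystack = text.toLowerCase();
  const scores = Object.keys(TYPE_SIGNALS).map(function (type) {
    let hits = 0;
    for (const keyword of TYPE_SIGNALS[type]) {
      if (haystack.includes(keyword)) { hits += 1; }
    }
    return { type: type, hits: hits };
  });
  // Highest keyword count first.
  scores.sort(function (a, b) { return b.hits - a.hits; });
  const top = scores[0];
  // The 2+ rule: only commit to a label with at least two keyword hits.
  if (Math.min(top.hits, 2) === 2) { return top.type; }
  return "unknown";
}
```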

&lt;p&gt;We also extract all dollar amounts ($1.2M, $1,234,567, $2B, etc.) and store them as detected numbers.&lt;/p&gt;

&lt;p&gt;Why regex instead of ML?&lt;br&gt;
    • Speed: Regex runs in milliseconds. ML models take seconds.&lt;br&gt;
    • Cost: We’re pre-launch. No API spend yet.&lt;br&gt;
    • Simplicity: One founder + one engineer can own it.&lt;br&gt;
    • Good enough: Catches ~80% of cases correctly.&lt;/p&gt;

&lt;p&gt;The trade-off: Regex fails on non-standard formatting, scanned PDFs, and non-English text. Week 4 plan: Feed extracted text through GPT-4o with a structured prompt to handle edge cases.&lt;/p&gt;
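
&lt;p&gt;A sketch of the dollar-amount pass, normalizing tokens like $1.2M to plain numbers (the pattern is illustrative, not our exact regex):&lt;/p&gt;

```javascript
// Matches $1.2M, $1,234,567, $2B, $500k and normalizes to plain numbers.
const DOLLAR_RE = /\$\s?([\d][\d,]*(?:\.[\d]+)?)([kmb])?\b/gi;
const MULTIPLIERS = { k: 1e3, m: 1e6, b: 1e9 };

function extractDollarAmounts(text) {
  const amounts = [];
  for (const match of text.matchAll(DOLLAR_RE)) {
    // Strip thousands separators, then apply the K/M/B multiplier if present.
    const base = Number(match[1].replace(/,/g, ""));
    const suffix = (match[2] || "").toLowerCase();
    const factor = MULTIPLIERS[suffix] || 1;
    amounts.push(base * factor);
  }
  return amounts;
}
```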

&lt;p&gt;Step 5: Store (PostgreSQL JSONB)&lt;br&gt;
We store extracted metrics in a financial_documents table with an extracted_data JSONB column. This gives us flexibility (new metrics don’t require migrations) and queryability (can index on specific fields).&lt;/p&gt;

&lt;p&gt;What extracted data looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "documentType": "p&amp;amp;l",
  "keyMetrics": {
    "revenue": 2400000,
    "cogs": 800000,
    "grossProfit": 1600000,
    "ebitda": 400000
  }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Why JSONB?&lt;br&gt;
    • Flexible schema (add new metrics without schema changes)&lt;br&gt;
    • Queryable (can build reports filtering on specific extracted values)&lt;br&gt;
    • Easy to version (old data remains valid when extraction logic evolves)&lt;/p&gt;

&lt;p&gt;Step 6: Async Parsing&lt;br&gt;
Critical architecture decision: Parsing happens asynchronously in the background.&lt;/p&gt;

&lt;p&gt;When a user uploads a PDF:&lt;br&gt;
    1.  File gets saved to disk immediately&lt;br&gt;
    2.  We insert a “pending” record in the database&lt;br&gt;
    3.  Return 201 OK to the frontend in ~200ms&lt;br&gt;
    4.  Parsing runs in the background (takes 5–10 seconds)&lt;br&gt;
    5.  Frontend polls every 3 seconds to check status&lt;br&gt;
    6.  When parsing completes, status badge updates from “Pending” → “Parsed” or “Failed”&lt;/p&gt;

&lt;p&gt;Why async?&lt;br&gt;
    • Upload returns fast (good UX)&lt;br&gt;
    • Parsing doesn’t block other requests&lt;br&gt;
    • User doesn’t wait for slow PDFs&lt;br&gt;
    • If parsing fails, user can click “Retry”&lt;br&gt;
This approach is essential for any file-based feature. A request that blocked on parsing would time out at 30 seconds and frustrate users.&lt;/p&gt;

&lt;p&gt;Frontend (React)&lt;br&gt;
On the frontend, users see:&lt;br&gt;
    • Drag-and-drop zone for uploading PDFs&lt;br&gt;
    • Status badges for each document (Pending → Parsed → Failed)&lt;br&gt;
    • Retry button if parsing fails&lt;br&gt;
    • Delete button to remove a document&lt;/p&gt;

&lt;p&gt;The UI polls the backend every 3 seconds while any document is “Pending”. As parsing completes, badges update in real-time. No page refresh needed.&lt;/p&gt;

&lt;p&gt;It’s simple. Minimal. Exactly what you need.&lt;/p&gt;

&lt;p&gt;What We Learned&lt;br&gt;
    1.  Start with regex. It’s fast, debuggable, and good enough for MVP. Upgrade to ML when you have clear signals it’s failing.&lt;br&gt;
    2.  Async is essential. If parsing blocks the response, your UX suffers. Let it run in the background.&lt;br&gt;
    3.  JSONB is your friend. Don’t try to normalize financial data into relational tables. Store as JSON, query as needed.&lt;br&gt;
    4.  Test with real PDFs early. Every PDF format is slightly different. Our regex catches ~80% of P&amp;amp;Ls correctly. The other 20% need manual tweaks or GPT-4o.&lt;br&gt;
    5.  Storage matters at scale. Railway volumes are great for &amp;lt;5GB. If you grow past that, migrate to S3 preemptively.&lt;/p&gt;

&lt;p&gt;What’s Next&lt;br&gt;
    • Week 4: Replace regex with GPT-4o structured extraction (handles edge cases, learns from failures)&lt;br&gt;
    • Week 5–6: Financial modeling (sensitivity analysis using extracted metrics)&lt;br&gt;
    • Week 7: MiroFish integration (stakeholder simulation)&lt;br&gt;
    • Week 8: Visual graphs (tornado charts, waterfall charts)&lt;/p&gt;

&lt;p&gt;If you’re building something similar, happy to answer questions in the comments. And if you’re a founder facing high-stakes decisions, we’re building the tool for you. Early access: arbiterbriefs.com&lt;/p&gt;

</description>
      <category>saas</category>
      <category>pdf</category>
      <category>node</category>
      <category>ai</category>
    </item>
    <item>
      <title>How I Rebuilt My AI Decision Tool From a Summarizer Into a Constraint-Driven Arbitrator</title>
      <dc:creator>Matthew Karaula</dc:creator>
      <pubDate>Fri, 10 Apr 2026 04:47:49 +0000</pubDate>
      <link>https://dev.to/karamatt_/how-i-rebuilt-my-ai-decision-tool-from-a-summarizer-into-a-constraint-driven-arbitrator-5fc7</link>
      <guid>https://dev.to/karamatt_/how-i-rebuilt-my-ai-decision-tool-from-a-summarizer-into-a-constraint-driven-arbitrator-5fc7</guid>
      <description>&lt;p&gt;A few weeks ago, I shipped a tool called Arbiter that takes a business decision, runs it through GPT-4o, and returns a structured analysis. The output looked impressive. Recommendation, confidence score, pros and cons, risk ratings, next steps. Everything you'd expect from an AI decision tool.&lt;/p&gt;

&lt;p&gt;Then I posted it on Reddit and got destroyed in the comments.&lt;br&gt;
Not because the output was wrong. Because the output was vague. One commenter pointed out that the AI was just hand-waving its way to a conclusion. Another asked how it handled contradictory evidence between different perspectives. A third said the confidence scores felt arbitrary: there was no mechanism that would actually drop confidence when the evidence was weak.&lt;/p&gt;

&lt;p&gt;They were right. I was running a single LLM call with a clever prompt and pretending it was decision intelligence.&lt;/p&gt;

&lt;p&gt;This post is about how I rebuilt the pipeline to actually adjudicate decisions instead of summarizing them, and the architectural decisions that made the difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The original and how it failed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first version was simple. One system prompt, one user prompt, one JSON response.&lt;br&gt;
User input → GPT-4o (with structured prompt) → JSON output&lt;br&gt;
The prompt asked the model to play "senior strategy analyst," analyze options, return pros and cons, and assign a confidence score. It worked in the sense that it produced reasonable-looking output. It failed in three specific ways.&lt;/p&gt;

&lt;p&gt;First, the model could justify any conclusion with confident-sounding prose. There was no internal mechanism forcing it to actually weigh evidence; it just had to sound like it did.&lt;/p&gt;

&lt;p&gt;Second, confidence scores were cosmetic. The model would output 85% confidence on a vague decision and 75% on a well-defined one, with no consistent logic. I couldn't trace where the score came from.&lt;/p&gt;

&lt;p&gt;Third, when the same decision was run twice, the recommendations would sometimes flip. A single LLM call has no internal debate mechanism; whichever framing the model latched onto first won.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The redesign: separating extraction from advocacy from adjudication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core insight was that real decision-making isn't a single act of reasoning. It's at least three distinct cognitive operations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defining what success looks like&lt;/strong&gt; (constraints, criteria, non-negotiables)&lt;br&gt;
&lt;strong&gt;Building the strongest case for each option&lt;/strong&gt; (advocacy)&lt;br&gt;
&lt;strong&gt;Evaluating each case against the success criteria&lt;/strong&gt; (adjudication)&lt;/p&gt;

&lt;p&gt;A single LLM call was trying to do all three at once, which is why it could rationalize any answer. The fix was to separate them into distinct stages where each stage's output became a hard input to the next.&lt;br&gt;
Here's the new pipeline:&lt;br&gt;
User input&lt;br&gt;
  ↓&lt;br&gt;
Stage 1: Constraint Extraction&lt;br&gt;
  ↓&lt;br&gt;
Stage 2: Research (with web search)&lt;br&gt;
  ↓&lt;br&gt;
Stage 3: Independent Advocates (parallel)&lt;br&gt;
  ↓&lt;br&gt;
Stage 4: Arbitrator&lt;br&gt;
  ↓&lt;br&gt;
Decision Brief&lt;/p&gt;

&lt;p&gt;Each stage has a specific job, runs as its own LLM call with its own system prompt, and passes structured JSON to the next stage. Let me walk through what each one does and why it matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 1: Constraint extraction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the biggest unlock. Before any reasoning happens, the system extracts a normalized constraint framework from the user's inputs.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "hard_constraints": [
    {"id": "HC1", "constraint": "Budget capped at $300K", "source": "user_input"}
  ],
  "soft_constraints": [
    {"id": "SC1", "constraint": "Minimize disruption to existing team", "weight": "high"}
  ],
  "decision_criteria": [
    {"id": "DC1", "criterion": "Operational within 4 months", "measurable": "go-live date"}
  ],
  "risk_tolerance": "moderate",
  "non_negotiables": ["No customer downtime"],
  "unknown_critical_inputs": ["Current team capacity"]
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The point isn't the format. The point is that every downstream stage now references the same constraint IDs. When an advocate argues for an option, they have to explicitly show which constraints their option satisfies. When the Arbitrator scores options, it scores them against the same constraint set, not against free-form prose.&lt;br&gt;
This single change eliminated about 80% of the hand-waving. The model couldn't just say "this option seems best" anymore. It had to point to specific constraints and show satisfaction.&lt;br&gt;
The other useful thing constraint extraction does is identify what the user didn't tell you. The unknown_critical_inputs field forces the model to flag missing information. That data later becomes input to the confidence calculation.&lt;/p&gt;
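
&lt;p&gt;One concrete payoff of shared constraint IDs is that downstream output can be validated mechanically. A sketch (shapes mirror the JSON above; the function itself is illustrative, not the production pipeline):&lt;/p&gt;

```javascript
// Returns any constraint IDs an advocate references that do not exist in
// the extracted framework, so a bad reference fails loudly, not silently.
function unknownConstraintRefs(framework, advocate) {
  const known = new Set();
  for (const c of framework.hard_constraints || []) { known.add(c.id); }
  for (const c of framework.soft_constraints || []) { known.add(c.id); }
  for (const c of framework.decision_criteria || []) { known.add(c.id); }
  const missing = [];
  for (const entry of advocate.constraint_satisfaction || []) {
    if (!known.has(entry.constraint_id)) { missing.push(entry.constraint_id); }
  }
  return missing;
}
```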

&lt;p&gt;&lt;strong&gt;Stage 2: Research with real web search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The original tool relied entirely on training data for "industry context." The output looked authoritative but was completely ungrounded — citing statistics that may or may not exist, referencing competitor moves the model imagined.&lt;/p&gt;

&lt;p&gt;The fix was Tavily, a search API designed for LLM consumption. The Research Agent generates three focused search queries from the decision context, executes them in parallel, and synthesizes the results into structured findings.&lt;/p&gt;

&lt;p&gt;The key design decision was how to handle uncertainty about source quality. Rather than pretending every claim is equally evidenced, every finding gets tagged:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "claim": "Australian SaaS NRR averaged 112% in Q4 2025",
  "evidence_strength": "high",
  "source_type": "cited",
  "source_url": "https://...",
  "source_title": "..."
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;source_type is one of cited, inference, or model_knowledge. evidence_strength is high, medium, or low. The rule baked into the prompt: a claim cannot be marked as high-strength evidence unless it has a real URL backing it.&lt;br&gt;
This sounds obvious but it took multiple iterations to get the model to actually respect it. Models have a strong default behavior of confidently asserting things. Breaking that habit required restating the rules in three different places in the prompt and explicitly forbidding fabricated citations.&lt;/p&gt;
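
&lt;p&gt;The URL-backing rule can also be enforced in code after the model responds, as a belt-and-suspenders check on top of the prompt. A sketch (field names follow the finding shape above; the function is illustrative):&lt;/p&gt;

```javascript
// A finding cannot keep "high" evidence_strength unless it is cited with a
// concrete URL; otherwise it is downgraded rather than dropped.
function enforceEvidenceRule(finding) {
  const hasUrl = typeof finding.source_url === "string"
    ? finding.source_url.startsWith("http")
    : false;
  if (finding.evidence_strength === "high") {
    if (finding.source_type !== "cited" || !hasUrl) {
      // Downgrade: the claim survives, its weight does not.
      return Object.assign({}, finding, { evidence_strength: "medium", downgraded: true });
    }
  }
  return finding;
}
```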

&lt;p&gt;&lt;strong&gt;Stage 3: Parallel advocates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For each option the user provides, an advocate LLM call runs in parallel, building the strongest possible case. The system prompt instructs them to be persuasive but honest, and crucially:&lt;/p&gt;

&lt;p&gt;Your argument must be structured around the CONSTRAINTS defined by the decision analyst. You cannot hand-wave, you must explicitly show how your option satisfies each hard constraint, decision criterion, and key soft constraint.&lt;/p&gt;

&lt;p&gt;Each advocate returns:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "option": "Option A",
  "executive_argument": "...",
  "constraint_satisfaction": [
    {"constraint_id": "HC1", "satisfied": "yes|partial|no", "reasoning": "..."}
  ],
  "supporting_evidence": [
    {"point": "...", "evidence_strength": "high|medium|low", "source_ref": "..."}
  ],
  "acknowledged_weaknesses": [...]
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The acknowledged_weaknesses field matters. Without it, advocates produced suspiciously one-sided arguments. Forcing them to acknowledge their own option's weaknesses produced more honest output, and gave the Arbitrator material to work with in the next stage.&lt;br&gt;
Running advocates in parallel was an obvious win for latency. Three options means three concurrent LLM calls instead of three sequential ones.&lt;/p&gt;
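
&lt;p&gt;The fan-out itself is a plain Promise.all (the advocate call here is a stub standing in for the real LLM request):&lt;/p&gt;

```javascript
// One in-flight advocate call per option; total latency is the slowest
// single call, not the sum of all three.
async function runAdvocates(options, runAdvocate) {
  const calls = options.map(function (option) { return runAdvocate(option); });
  return Promise.all(calls);
}
```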

&lt;p&gt;&lt;strong&gt;Stage 4: The Arbitrator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where the real adjudication happens. The Arbitrator receives the constraint framework, the research findings, and all advocate arguments. Its system prompt explicitly tells it that its job is not to summarize:&lt;/p&gt;

&lt;p&gt;You are NOT summarizing the advocates. You are ADJUDICATING.&lt;br&gt;
Your process:&lt;/p&gt;

&lt;p&gt;    1.  Score each option against the constraints&lt;br&gt;
    2.  Identify contradictions between advocate arguments and resolve them with evidence&lt;br&gt;
    3.  Assess evidence strength for each advocate's claims&lt;br&gt;
    4.  Deliver a clear ruling. Do not hedge.&lt;br&gt;
    5.  Assess your own confidence based on constraint clarity, evidence quality, advocate agreement, and unknown critical inputs&lt;/p&gt;

&lt;p&gt;The output includes a constraint scorecard that maps every constraint to a pass/partial/fail rating per option, a list of contradictions between advocates with how they were resolved, sensitivity variables (concrete values that would flip the ruling), and the actual ruling itself.&lt;br&gt;
The most important field is certainty_rationale. The model has to explain why its confidence is what it is. This makes the score legible — you can see whether the 72% confidence comes from "strong evidence but advocates disagree" or "weak evidence but clear constraint winner." Two different stories that should produce different actions from the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this cost me&lt;/strong&gt;&lt;br&gt;
A single LLM call with the original architecture was about $0.02 per analysis on GPT-4o-mini. The new pipeline runs five LLM calls (constraint extraction, research synthesis, three advocates, arbitrator) plus three Tavily searches. Cost per brief is now closer to $0.10 on the same model. Latency went from ~15 seconds to ~45 seconds.&lt;/p&gt;

&lt;p&gt;That's a 5x cost increase and 3x latency increase. For most consumer products it would be a bad trade. For a tool whose entire value proposition is "give me a structured ruling I can act on," it's worth it. Users will wait 45 seconds for output that actually helps them. They won't pay for output that looks like a ChatGPT response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Three things from the rebuild that I'd apply to any multi-stage LLM system.&lt;/p&gt;

&lt;p&gt;Separation of concerns matters more than prompt engineering. I spent weeks trying to make a single prompt produce better output. Splitting that prompt into four prompts each with a narrow job did more in a day than the prompt tweaks did in two weeks. Each stage gets to specialize. Each stage's output becomes a hard constraint for the next stage instead of a suggestion. &lt;/p&gt;

&lt;p&gt;Models will fabricate confidence unless you make confidence expensive. The original tool happily output 90% confidence because nothing in the prompt punished it for being overconfident. The new tool ties certainty to specific factors (evidence strength, advocate agreement, missing inputs) and forces the model to justify its score in writing. When the model has to explain its confidence, it gets more conservative.&lt;/p&gt;

&lt;p&gt;Adversarial structure produces better reasoning than collaborative structure. The original prompt asked the model to "consider all perspectives." The new architecture has independent advocates each arguing their case, then a neutral arbitrator weighing them against criteria. The adversarial setup produces sharper arguments because each advocate is incentivized to make the strongest case. The arbitrator then has real material to weigh instead of mush.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The output&lt;/strong&gt;&lt;br&gt;
Here's a screenshot of a real Decision Brief from the new pipeline. The constraint scorecard at the top is the most visually distinctive thing — every option scored against every extracted constraint. Below it, the research section shows cited findings with evidence strength badges and clickable source URLs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiit39zdon26mgpqr1vae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiit39zdon26mgpqr1vae.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83481rx3vj3iw7oggg44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83481rx3vj3iw7oggg44.png" alt=" " width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pipeline still has weak spots. Constraint extraction is fragile when user inputs are sparse: garbage in, garbage out. I'm working on a constraint review step where the user can edit the extracted framework before advocates run. Evidence strength calibration is also conservative; the model defaults to "medium" for almost everything unless there's a clearly cited stat. I'm experimenting with explicit calibration examples in the prompt.&lt;/p&gt;

&lt;p&gt;If you want to play with the tool, it's at &lt;a href="https://arbiter-frontend-iota.vercel.app/" rel="noopener noreferrer"&gt;https://arbiter-frontend-iota.vercel.app/&lt;/a&gt;. Free tier gives you a few briefs per month, no credit card. Genuinely interested in feedback on where the pipeline breaks for your use case.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>architecture</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
