<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: OnChainAIIntel</title>
    <description>The latest articles on DEV Community by OnChainAIIntel (@onchainaiintel).</description>
    <link>https://dev.to/onchainaiintel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3860854%2F81f61c1a-a40c-46f4-a8b1-3cda482a1707.png</url>
      <title>DEV Community: OnChainAIIntel</title>
      <link>https://dev.to/onchainaiintel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/onchainaiintel"/>
    <language>en</language>
    <item>
      <title>I built an npm middleware that scores your LLM prompts before they hit your agent workflow</title>
      <dc:creator>OnChainAIIntel</dc:creator>
      <pubDate>Sat, 04 Apr 2026 12:14:05 +0000</pubDate>
      <link>https://dev.to/onchainaiintel/i-built-an-npm-middleware-that-scores-your-llm-prompts-before-they-hit-your-agent-workflow-53ci</link>
      <guid>https://dev.to/onchainaiintel/i-built-an-npm-middleware-that-scores-your-llm-prompts-before-they-hit-your-agent-workflow-53ci</guid>
      <description>&lt;p&gt;The problem with most LLM agent workflows is that nobody is checking the quality of the prompts going in.&lt;/p&gt;

&lt;p&gt;Garbage in, garbage out. At scale, with agents firing hundreds of prompts per day, the garbage compounds fast.&lt;/p&gt;

&lt;p&gt;I built &lt;code&gt;x402-pqs&lt;/code&gt; to fix this. It's an Express middleware that intercepts prompts before they hit any LLM endpoint, scores them for quality, and adds the score to the request headers.&lt;/p&gt;

&lt;h2&gt;Install&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;x402-pqs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Usage&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pqsMiddleware&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;x402-pqs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;pqsMiddleware&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c1"&gt;// warn if prompt scores below 10/40&lt;/span&gt;
  &lt;span class="na"&gt;vertical&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// scoring context&lt;/span&gt;
  &lt;span class="na"&gt;onLowScore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;warn&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// warn | block | ignore&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/chat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Prompt score:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pqs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pqs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;grade&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ok&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every request gets these headers added automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;X-PQS-Score&lt;/code&gt; → numeric score (0-40)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-PQS-Grade&lt;/code&gt; → letter grade (A-F)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-PQS-Out-Of&lt;/code&gt; → maximum score (40)&lt;/li&gt;
&lt;/ul&gt;
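&lt;p&gt;Downstream handlers can gate on those headers. Here's a minimal sketch — the header names come from the list above, but the gating logic and fallback behavior are my own assumptions, not part of &lt;code&gt;x402-pqs&lt;/code&gt;:&lt;/p&gt;

```javascript
// Reads the X-PQS-* headers that x402-pqs adds to a request and decides
// whether to forward the prompt. Threshold and fallback are assumptions.
function gateByPqsHeaders(headers, minScore = 10) {
  const score = Number(headers["x-pqs-score"]);
  const outOf = Number(headers["x-pqs-out-of"] ?? 40);
  if (Number.isNaN(score)) {
    // Header missing or malformed: let the request through rather than fail.
    return { allowed: true, reason: "unscored" };
  }
  return score >= minScore
    ? { allowed: true, reason: `score ${score}/${outOf}` }
    : { allowed: false, reason: `score ${score}/${outOf} is below ${minScore}` };
}
```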

&lt;h2&gt;How the scoring works&lt;/h2&gt;

&lt;p&gt;PQS scores prompts across 8 dimensions using 5 cited academic frameworks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt-side (4 dimensions):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specificity → does the prompt define what it wants precisely?&lt;/li&gt;
&lt;li&gt;Context → does it give the model enough to work with?&lt;/li&gt;
&lt;li&gt;Clarity → are the directives unambiguous?&lt;/li&gt;
&lt;li&gt;Predictability → would different runs produce consistent results?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output-side (4 dimensions):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Completeness, Relevancy, Reasoning depth, Faithfulness&lt;/li&gt;
&lt;/ul&gt;
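&lt;p&gt;To make the prompt-side dimensions concrete, here's a deliberately naive heuristic scorer. This is purely my illustration — the post doesn't publish the real x402-pqs rubric, and every rule below is an assumption:&lt;/p&gt;

```javascript
// Toy scorer: awards up to 2 points per prompt-side dimension (8 max here,
// versus the real 40-point scale). Each check is a crude stand-in.
function toyPromptScore(prompt) {
  let score = 0;
  const words = prompt.trim().split(/\s+/);
  // Specificity: concrete numbers or timeframes pin down what is wanted.
  if (/\d/.test(prompt)) score += 2;
  // Context: a longer prompt usually carries more background (rough proxy).
  if (words.length >= 15) score += 2;
  // Clarity: an explicit directive verb up front reduces ambiguity.
  if (/^(list|summarize|compare|return|write|explain)\b/i.test(prompt)) score += 2;
  // Predictability: asking for a fixed output format constrains variance.
  if (/\b(json|table|csv|bullet)\b/i.test(prompt)) score += 2;
  return score;
}
```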

&lt;p&gt;Source frameworks: PEEM (Dongguk University, 2026) · RAGAS · MT-Bench · G-Eval · ROUGE&lt;/p&gt;

&lt;h2&gt;Real example&lt;/h2&gt;

&lt;p&gt;This prompt: &lt;code&gt;"who are the smartest wallets on solana right now"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Scored &lt;strong&gt;9/40&lt;/strong&gt; → Grade D.&lt;/p&gt;

&lt;p&gt;The optimized version scored &lt;strong&gt;35/40&lt;/strong&gt; → Grade A.&lt;/p&gt;

&lt;p&gt;That's a 26-point jump, recovering 84% of the available headroom (26 of the 31 points between 9 and the 40-point maximum).&lt;/p&gt;

&lt;p&gt;Same model. Same API. Completely different output quality.&lt;/p&gt;
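&lt;p&gt;The post doesn't show the actual optimized prompt, but a rewrite in the same spirit might look like this — entirely hypothetical, with each added line targeting one of the dimensions above:&lt;/p&gt;

```javascript
// Hypothetical before/after pair. The "optimized" text is my invention,
// not the prompt that actually scored 35/40.
const vague = "who are the smartest wallets on solana right now";

const optimized = [
  "List the 10 Solana wallets with the highest realized PnL over the last 30 days.", // specificity
  "Only consider wallets with at least 50 trades in that window.",                   // context
  "Exclude known exchange and market-maker addresses.",                              // clarity
  "Return a JSON array of {address, realizedPnlUsd, tradeCount}.",                   // predictability
].join(" ");
```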

&lt;h2&gt;The payment layer&lt;/h2&gt;

&lt;p&gt;The scoring API uses &lt;a href="https://x402.org" rel="noopener noreferrer"&gt;x402&lt;/a&gt;, an HTTP-native micropayment protocol now governed by the Linux Foundation, with Coinbase, Cloudflare, AWS, Stripe, Google, Microsoft, Visa, and Mastercard as founding members.&lt;/p&gt;

&lt;p&gt;Agents can call and pay for scoring autonomously — no API keys, no subscriptions. Just a wallet and $0.001 USDC per score.&lt;/p&gt;
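&lt;p&gt;In practice that means handling an HTTP 402 round-trip. The sketch below reflects my reading of the general x402 flow (a 402 response carries payment requirements; the client pays and retries with an &lt;code&gt;X-PAYMENT&lt;/code&gt; header) — the actual PQS endpoint may differ, and &lt;code&gt;payFn&lt;/code&gt; stands in for whatever your wallet library uses to sign:&lt;/p&gt;

```javascript
// fetchImpl is injectable so the flow can be exercised without a network.
async function scoreWithPayment(fetchImpl, prompt, payFn) {
  const url = "https://pqs.onchainintel.net/api/score";
  const body = JSON.stringify({ prompt, vertical: "general" });
  const headers = { "Content-Type": "application/json" };

  let res = await fetchImpl(url, { method: "POST", headers, body });
  if (res.status === 402) {
    // The 402 body describes what to pay and where.
    const requirements = await res.json();
    const payment = await payFn(requirements); // wallet produces a signed payment
    res = await fetchImpl(url, {
      method: "POST",
      headers: { ...headers, "X-PAYMENT": payment },
      body,
    });
  }
  return res.json();
}
```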

&lt;p&gt;There's also a &lt;strong&gt;free tier&lt;/strong&gt; with no payment required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://pqs.onchainintel.net/api/score/free &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"prompt": "your prompt here", "vertical": "general"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"out_of"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"grade"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"D"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"upgrade"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Get full dimension breakdown at /api/score for $0.001 USDC"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;The data angle&lt;/h2&gt;

&lt;p&gt;Every scored prompt pair goes into a corpus. At scale, this becomes training data for a domain-specific prompt-quality model. The thesis is similar to what Andrej Karpathy recently described about LLM knowledge bases: the data compounds in value over time.&lt;/p&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;npm: &lt;a href="https://npmjs.com/package/x402-pqs" rel="noopener noreferrer"&gt;x402-pqs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/OnChainAIIntel/x402-pqs" rel="noopener noreferrer"&gt;OnChainAIIntel/x402-pqs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;API: &lt;a href="https://pqs.onchainintel.net" rel="noopener noreferrer"&gt;pqs.onchainintel.net&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Free endpoint: &lt;code&gt;POST https://pqs.onchainintel.net/api/score/free&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Would love feedback from anyone building agent workflows. What scoring dimensions would you add?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>node</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
