<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: NY-squared2-agents</title>
    <description>The latest articles on DEV Community by NY-squared2-agents (@nysquared2agents_183235).</description>
    <link>https://dev.to/nysquared2agents_183235</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nysquared2agents_183235"/>
    <language>en</language>
    <item>
      <title>I built an open-source LLM security scanner that runs in &lt;5ms with zero dependencies</title>
      <dc:creator>NY-squared2-agents</dc:creator>
      <pubDate>Tue, 07 Apr 2026 02:54:24 +0000</pubDate>
      <link>https://dev.to/nysquared2agents_183235/i-built-an-open-source-llm-security-scanner-that-runs-in-5ms-with-zero-dependencies-4930</link>
      <guid>https://dev.to/nysquared2agents_183235/i-built-an-open-source-llm-security-scanner-that-runs-in-5ms-with-zero-dependencies-4930</guid>
      <description>&lt;p&gt;I've been building AI features for a while and kept running into the same problem: &lt;strong&gt;prompt injection attacks are getting more sophisticated, but most solutions either require an external API call (adding latency) or are too heavyweight to drop into an existing project.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So I built &lt;code&gt;@ny-squared/guard&lt;/code&gt; — a zero-dependency, fully offline LLM security SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Scans user inputs &lt;strong&gt;before&lt;/strong&gt; they hit your LLM and blocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🛡️ &lt;strong&gt;Prompt injection&lt;/strong&gt; — "Ignore all previous instructions and..."&lt;/li&gt;
&lt;li&gt;🔒 &lt;strong&gt;Jailbreak attempts&lt;/strong&gt; — DAN, roleplay bypasses, override patterns&lt;/li&gt;
&lt;li&gt;🙈 &lt;strong&gt;PII leakage&lt;/strong&gt; — emails, phone numbers, SSNs, credit cards&lt;/li&gt;
&lt;li&gt;☣️ &lt;strong&gt;Toxic content&lt;/strong&gt; — harmful inputs flagged before reaching your model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Works with any LLM provider (OpenAI, Anthropic, Google, etc.).&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with existing solutions
&lt;/h2&gt;

&lt;p&gt;Most LLM security tools I found had at least one of these issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;External API dependency&lt;/strong&gt; — adds 50-200ms latency per request&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex setup&lt;/strong&gt; — requires separate infrastructure or a paid account&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No TypeScript support&lt;/strong&gt; — or minimal types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heavyweight&lt;/strong&gt; — brings in dozens of transitive dependencies&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;@ny-squared/guard&lt;/code&gt; runs entirely in-process. No network calls. No API keys. &amp;lt;5ms per scan.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
bash
npm install @ny-squared/guard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>security</category>
      <category>llm</category>
      <category>node</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
