<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RESK</title>
    <description>The latest articles on DEV Community by RESK (@resk).</description>
    <link>https://dev.to/resk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3297057%2Fcd81c560-a475-451a-9ef7-eae4c1f01567.jpg</url>
      <title>DEV Community: RESK</title>
      <link>https://dev.to/resk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/resk"/>
    <language>en</language>
    <item>
      <title>Securing Your LLM Integrations in JavaScript with Resk-LLM-TS: A Practical Guide</title>
      <dc:creator>RESK</dc:creator>
      <pubDate>Wed, 05 Nov 2025 15:34:13 +0000</pubDate>
      <link>https://dev.to/resk/securing-your-llm-integrations-in-javascript-with-resk-llm-ts-a-practical-guide-5g2b</link>
      <guid>https://dev.to/resk/securing-your-llm-integrations-in-javascript-with-resk-llm-ts-a-practical-guide-5g2b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Learn how to protect your JavaScript/TypeScript LLM applications from prompt injections, PII leaks, and data exfiltration using &lt;strong&gt;Resk-LLM-TS&lt;/strong&gt; — an open-source security toolkit that wraps OpenAI-compatible APIs with enterprise-grade protection.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Growing Risk of LLM-Powered Apps&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are everywhere — chatbots, content generators, internal tools, and automation agents. But with great power comes great responsibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common threats include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Injection&lt;/strong&gt; → Attackers trick your model into ignoring instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII Leakage&lt;/strong&gt; → Sensitive user data (emails, SSNs) gets exposed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Exfiltration&lt;/strong&gt; → Your system prompt or training data leaks into responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Violations&lt;/strong&gt; → Toxic, harmful, or off-brand outputs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You &lt;em&gt;can’t&lt;/em&gt; just trust the model to "be safe." You need &lt;strong&gt;defense in depth&lt;/strong&gt;.&lt;/p&gt;
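
&lt;p&gt;Concretely, defense in depth means independent checks on both sides of the model call: screen the input before it reaches the model, and screen the output before it reaches the user. As a rough, hand-rolled TypeScript sketch (purely illustrative, not this library's API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Illustrative only: naive patterns for PII and prompt-injection phrases.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const INJECTION_RE = /ignore (all|previous) instructions/i;

// Layer 1: sanitize user input before it reaches the model.
function checkInput(prompt: string): string {
  if (INJECTION_RE.test(prompt)) {
    throw new Error("Possible prompt injection detected");
  }
  return prompt.replace(EMAIL_RE, "[REDACTED_EMAIL]");
}

// Layer 2: scrub the completion before it reaches the user.
function checkOutput(completion: string): string {
  return completion.replace(EMAIL_RE, "[REDACTED_EMAIL]");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Writing and maintaining patterns like these by hand gets brittle fast.&lt;/p&gt;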

&lt;p&gt;That’s where &lt;strong&gt;&lt;a href="https://github.com/Resk-Security/resk-llm-ts" rel="noopener noreferrer"&gt;Resk-LLM-TS&lt;/a&gt;&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;h2&gt;What is Resk-LLM-TS?&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;resk-llm-ts&lt;/code&gt; is a &lt;strong&gt;security wrapper&lt;/strong&gt; for OpenAI-compatible APIs (OpenAI, OpenRouter, etc.) that adds multiple layers of protection &lt;strong&gt;before&lt;/strong&gt; and &lt;strong&gt;after&lt;/strong&gt; your LLM calls.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;resk-llm-ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
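
&lt;p&gt;From there, you swap your direct OpenAI client for the wrapper. Below is a minimal sketch of that pattern; the class, option, and method names are illustrative assumptions, so check the project README for the exact API.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Sketch only: identifiers below are assumptions, not the verified resk-llm-ts API.
import { ReskLLMClient } from "resk-llm-ts"; // assumed export name

const client = new ReskLLMClient({
  openaiApiKey: process.env.OPENAI_API_KEY, // assumed option name
  securityConfig: {
    piiDetection: { enabled: true },        // assumed feature toggles
    promptInjection: { enabled: true },
  },
});

// Assumed to mirror the OpenAI chat completions call shape.
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize this support ticket for me." }],
});

console.log(response.choices[0].message.content);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;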



</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
