<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: maruakshay</title>
    <description>The latest articles on DEV Community by maruakshay (@maruakshay).</description>
    <link>https://dev.to/maruakshay</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maruakshay"/>
    <language>en</language>
    <item>
      <title>18 Ways Your LLM App Can Be Hacked (And How to Fix Them)</title>
      <dc:creator>maruakshay</dc:creator>
      <pubDate>Wed, 29 Apr 2026 05:47:15 +0000</pubDate>
      <link>https://dev.to/maruakshay/18-ways-your-llm-app-can-be-hacked-and-how-to-fix-them-11mc</link>
      <guid>https://dev.to/maruakshay/18-ways-your-llm-app-can-be-hacked-and-how-to-fix-them-11mc</guid>
      <description>&lt;p&gt;You spent weeks building your LLM-powered app. You tested the happy path. Users love it.&lt;/p&gt;

&lt;p&gt;But did you ask: &lt;em&gt;what happens when someone tries to break it?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most teams don't. And that's a problem — because LLM apps have a completely new attack surface that traditional security tools don't cover.&lt;/p&gt;

&lt;p&gt;Here are 18 real ways attackers go after LLM systems right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prompt Attacks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Direct Prompt Injection&lt;/strong&gt;&lt;br&gt;
User types instructions that override your system prompt. "Ignore previous instructions and..."  — classic. Still works on most apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Indirect Prompt Injection&lt;/strong&gt;&lt;br&gt;
Malicious instructions hidden inside documents, emails, or web pages your LLM reads. The user never types anything. The attack comes from your data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Jailbreaking&lt;/strong&gt;&lt;br&gt;
Role-playing, fictional framing, or encoded text used to bypass your safety guardrails. "Pretend you're DAN..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Prompt Leaking&lt;/strong&gt;&lt;br&gt;
Attacker tricks the model into revealing your system prompt. Your carefully crafted instructions — exposed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Few-Shot Injection&lt;/strong&gt;&lt;br&gt;
Attacker poisons the examples inside your prompt to shift model behavior across the entire session.&lt;/p&gt;


&lt;h2&gt;
  
  
  Memory &amp;amp; Context Attacks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;6. Memory Poisoning&lt;/strong&gt;&lt;br&gt;
In apps with persistent memory, attacker plants false beliefs early. The model carries them forward forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Context Window Stuffing&lt;/strong&gt;&lt;br&gt;
Flood the context with noise to push your system instructions out. Model forgets who it's supposed to be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Session Hijacking&lt;/strong&gt;&lt;br&gt;
Steal or reuse another user's conversation context. Read their history. Impersonate them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Cross-Session Leakage&lt;/strong&gt;&lt;br&gt;
In multi-tenant setups, one user's data bleeds into another's context. Happens more than people admit.&lt;/p&gt;


&lt;h2&gt;
  
  
  RAG &amp;amp; Tool Attacks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;10. RAG Poisoning&lt;/strong&gt;&lt;br&gt;
Inject malicious documents into your vector store. When retrieved, they manipulate the model's response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11. Embedding Inversion&lt;/strong&gt;&lt;br&gt;
Reconstruct original text from vector embeddings. Your "anonymized" data — reconstructed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12. Tool Abuse&lt;/strong&gt;&lt;br&gt;
LLM has access to tools (search, code exec, APIs). Attacker crafts inputs that make the model call tools it shouldn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;13. SQL / Command Injection via LLM&lt;/strong&gt;&lt;br&gt;
Model generates queries or shell commands from user input. Classic injection — new delivery method.&lt;/p&gt;


&lt;h2&gt;
  
  
  Agentic &amp;amp; Supply Chain Attacks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;14. Agent Hijacking&lt;/strong&gt;&lt;br&gt;
In multi-agent systems, one compromised agent issues malicious instructions to others. Trust boundary collapse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;15. Privilege Escalation&lt;/strong&gt;&lt;br&gt;
Agent starts with limited permissions. Attacker chains tool calls to gain broader system access step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;16. Model Supply Chain Attack&lt;/strong&gt;&lt;br&gt;
You download a fine-tuned model or adapter. It has backdoors baked in. You ship it to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;17. Plugin / MCP Poisoning&lt;/strong&gt;&lt;br&gt;
Third-party plugins or MCP servers your LLM connects to are compromised. Your app becomes the delivery mechanism.&lt;/p&gt;


&lt;h2&gt;
  
  
  Output Attacks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;18. Insecure Output Handling&lt;/strong&gt;&lt;br&gt;
LLM output rendered directly in UI without sanitization. Attacker uses the model to generate XSS payloads, malicious links, or social engineering content.&lt;/p&gt;


&lt;h2&gt;
  
  
  So What Do You Do?
&lt;/h2&gt;

&lt;p&gt;Security for LLM apps isn't one tool. It's a mindset applied at every layer — prompts, memory, RAG, tools, agents, and output.&lt;/p&gt;

&lt;p&gt;I built &lt;strong&gt;miii-security&lt;/strong&gt;: a set of 18 SKILL.md packs that cover every category above. Each skill gives your AI system the context to review, audit, and harden LLM applications — mapped to OWASP and MITRE frameworks.&lt;/p&gt;

&lt;p&gt;No 50-page whitepapers. No expensive consultants. Just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i miii-security
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fetch a skill → apply its checks → ship safer.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://github.com/maruakshay/mii-ai-security" rel="noopener noreferrer"&gt;github.com/maruakshay/mii-ai-security&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
👉 &lt;strong&gt;&lt;a href="https://www.npmjs.com/package/miii-security" rel="noopener noreferrer"&gt;npmjs.com/package/miii-security&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're building with LangChain, LlamaIndex, OpenAI APIs, or any agentic framework — this is for you. Star the repo, open issues, tell me what I missed.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>opensource</category>
      <category>claude</category>
    </item>
  </channel>
</rss>
