📰 Originally published on SecurityElites — the canonical, fully-updated version of this article.
🤖 AI/LLM HACKING COURSE
FREE
Part of the AI/LLM Hacking Course — 90 Days
Day 5 of 90 · 5.5% complete
⚠️ Authorised Targets Only: Indirect prompt injection testing — including document injection, web page injection, and RAG poisoning — must only be performed against systems you have explicit written authorisation to test. The techniques here are for authorised bug bounty programmes with AI scope and sanctioned red team engagements only. SecurityElites.com accepts no liability for misuse.
The scariest finding I have ever demonstrated to a client was one where the victim did absolutely nothing wrong. They received an email from an external contact, their AI email assistant processed it automatically overnight, and by morning the assistant had forwarded a summary of the last 30 days of their inbox to an external address. The victim never clicked anything suspicious. The AI never showed them the injected instruction. The email that triggered it looked completely normal — the injection was in the footer, formatted as small print, the kind of legal boilerplate nobody reads.
That is what makes indirect prompt injection the most dangerous variant of LLM01. In direct injection, the attacker is the victim — they type the payload themselves and get their own AI to misbehave. In indirect injection, the attacker plants instructions in data that someone else will cause the AI to process. The victim’s action is entirely normal. The AI’s response is entirely invisible. The attack succeeds without a single suspicious click. Day 5 covers every indirect injection surface: documents, web pages, database records, and email bodies — with the attack chain methodology and the defence analysis that makes the finding undeniable in a professional report.
🎯 What You’ll Master in AI LLM Hacking Course Day 5
Understand how indirect injection differs from direct injection and why it is harder to defend
Execute document injection via uploaded PDFs — visible and hidden text variants
Build a web page that hijacks any AI browsing agent that visits it
Test RAG pipeline injection by submitting poisoned knowledge base entries
Demonstrate email AI assistant hijacking via a crafted email payload
Build the zero-interaction evidence package for a Critical severity report
⏱️ Day 5 · 3 exercises · Browser + Think Like Hacker + Kali Terminal ### ✅ Prerequisites - Day 4 — LLM01 Prompt Injection — direct injection methodology and the payload library are the foundation; indirect injection is the escalated variant - Day 2 — How LLMs Work — the flat context window concept explains why data from any source can act as instructions - A simple web hosting service (GitHub Pages or Netlify free tier work) — for Exercise 1’s web page injection test ### 📋 Indirect Prompt Injection Attacks — Day 5 Contents 1. Why Indirect Injection Is the Higher-Severity Variant 2. Document Injection — PDFs, Word Files and Uploaded Content 3. Web Page Injection — Hijacking Browsing Agents 4. RAG Pipeline Injection — Poisoning the Knowledge Base 5. Email Assistant Injection — Zero-Click Agent Hijack 6. Building the Zero-Interaction Evidence Package In Day 4 you built the direct injection payload library and the automated test suite. Every payload required you — the attacker — to type something into the user interface. Day 5 removes that requirement entirely. The techniques here land injections through data that victims process as part of their normal workflow. Day 6 extends this into LLM02 — what sensitive data leaks out once the injection succeeds.
Why Indirect Injection Is the Higher-Severity Variant
Direct prompt injection requires the attacker to interact with the AI system directly. Every direct injection is, at some level, the attacker doing something to their own session. Indirect injection is different because the attack is separated from the trigger. The attacker plants instructions in a data source. A different user — the victim — causes the AI to retrieve and process that data source. The AI follows the attacker’s instructions while operating in the victim’s session, with the victim’s credentials, and with access to the victim’s data and the victim’s agent capabilities.
This separation has two critical implications. First: CVSS severity. User interaction is “None” when the victim performs a normal, expected action — uploading a document for summarisation, asking an agent to check a URL, receiving an email. The distinction between user interaction “Required” and “None” is often the difference between High and Critical severity in the CVSS calculation. Second: detectability. Direct injection shows up in the AI’s conversation history as a suspicious user message. Indirect injection shows up as a normal user action — “summarised document” — with no visible trace of the injected instruction in the user-facing logs.
DIRECT VS INDIRECT — CVSS IMPACT ON SEVERITYCopy
Direct injection — agent takes attacker-directed action
Attack Vector: Network · Attack Complexity: Low
Privileges Required: Low (attacker has account)
User Interaction: None (attacker IS the user)
Scope: Changed · C:High · I:High · A:None
CVSS Base Score: 9.0 CRITICAL
Indirect injection — victim triggers, attacker benefits
Attack Vector: Network · Attack Complexity: Low
Privileges Required: None (attacker only needs to plant data)
User Interaction: None (victim’s normal action is not suspicious)
Scope: Changed · C:High · I:High · A:None
CVSS Base Score: 9.3 CRITICAL
📖 Read the complete guide on SecurityElites
This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. Read the full article on SecurityElites →
This article was originally written and published by the SecurityElites team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit SecurityElites.

Top comments (0)