<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KRISHNAKAANTH REDDY YEDUGURU</title>
    <description>The latest articles on DEV Community by KRISHNAKAANTH REDDY YEDUGURU (@krishnakaanth_reddyyedug).</description>
    <link>https://dev.to/krishnakaanth_reddyyedug</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krishnakaanth_reddyyedug"/>
    <language>en</language>
    <item>
      <title>RedSOC: Open-source framework to benchmark adversarial attacks on AI-powered SOCs — 100% detection rate across 15 attack scenarios [paper + code]</title>
      <dc:creator>KRISHNAKAANTH REDDY YEDUGURU</dc:creator>
      <pubDate>Thu, 09 Apr 2026 07:47:48 +0000</pubDate>
      <link>https://dev.to/krishnakaanth_reddyyedug/redsoc-open-source-framework-to-benchmark-adversarial-attacks-on-ai-powered-socs-100-detection-1lj4</link>
      <guid>https://dev.to/krishnakaanth_reddyyedug/redsoc-open-source-framework-to-benchmark-adversarial-attacks-on-ai-powered-socs-100-detection-1lj4</guid>
      <description>&lt;p&gt;I've been working on a problem that I think is underexplored: what happens when you actually attack the AI assistant inside a SOC?&lt;br&gt;
Most organizations are now running RAG-based LLM systems for alert triage, threat intelligence, and incident response. But almost nobody is systematically testing how these systems fail under adversarial conditions.&lt;br&gt;
So I built RedSOC — an open-source adversarial evaluation framework specifically for LLM-integrated SOC environments.&lt;br&gt;
What it does:&lt;br&gt;
Three attack types are implemented and benchmarked:&lt;/p&gt;

&lt;p&gt;Corpus poisoning (PoisonedRAG threat model) — inject malicious documents into the knowledge base to steer analyst responses toward dangerous advice&lt;br&gt;
Direct prompt injection — embed override instructions in the user query&lt;br&gt;
Indirect prompt injection — hide adversarial instructions inside retrieved documents (Greshake et al. threat model)&lt;/p&gt;

&lt;p&gt;The detection layer runs three mechanisms in parallel without requiring model internals:&lt;/p&gt;

&lt;p&gt;Semantic anomaly scoring (cosine similarity between query and retrieved docs)&lt;br&gt;
Provenance tracking (whitelist-based source verification)&lt;br&gt;
Response consistency checking (answer vs source divergence)&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark results (15 scenarios, Llama 3.2, fully local via Ollama)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attack Class&lt;/th&gt;
&lt;th&gt;Attack Success Rate&lt;/th&gt;
&lt;th&gt;Detection Rate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Corpus poisoning&lt;/td&gt;
&lt;td&gt;80%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Direct injection&lt;/td&gt;
&lt;td&gt;60%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Indirect injection&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Overall&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;80%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Indirect prompt injection succeeds 100% of the time against an undefended RAG pipeline. The detection layer catches everything at 100% with zero misses across all 15 scenarios.&lt;br&gt;
Stack: Python, LangChain, FAISS, Ollama (Llama 3.2) — runs fully local, no API keys needed.&lt;br&gt;
The accompanying survey paper maps the full adversarial threat landscape (RAG poisoning, prompt injection, multi-agent hijacking, concept drift) with 16 citations including PoisonedRAG, AgentPoison, MemoryGraft, and the recent DarkSide paper.&lt;br&gt;
Code: &lt;a href="https://github.com/krishnakaanthreddyy1510-cell/RedSOC" rel="noopener noreferrer"&gt;https://github.com/krishnakaanthreddyy1510-cell/RedSOC&lt;/a&gt;&lt;br&gt;
Paper: [arXiv link — pending, will update]&lt;br&gt;
Happy to answer questions about the detection architecture or the benchmark methodology. Feedback welcome — especially from anyone who's seen these attack patterns in production.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25ra0rt7a6b61iyys9a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25ra0rt7a6b61iyys9a3.png" alt=" " width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>python</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
