<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Achin Bansal</title>
    <description>The latest articles on DEV Community by Achin Bansal (@bansac1981).</description>
    <link>https://dev.to/bansac1981</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3885738%2F82003f2a-084c-4b4a-a4c9-dfa109745be9.png</url>
      <title>DEV Community: Achin Bansal</title>
      <link>https://dev.to/bansac1981</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bansac1981"/>
    <language>en</language>
    <item>
      <title>Typosquatted OpenAI Repo on Hugging Face Delivered Rust Infostealer to 244K Users</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Mon, 11 May 2026 14:31:52 +0000</pubDate>
      <link>https://dev.to/bansac1981/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k-users-4li0</link>
      <guid>https://dev.to/bansac1981/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k-users-4li0</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;A malicious Hugging Face repository impersonated OpenAI's legitimate Privacy Filter model, cloning its description verbatim to gain credibility and reach the platform's trending list with 244,000 downloads. The repository delivered a multi-stage attack chain culminating in a Rust-based information stealer targeting browser credentials, cryptocurrency wallets, and Discord data on Windows machines. The attack leveraged a dead-drop resolver pattern via a public JSON paste service, allowing operators to swap payloads without modifying the repository itself.&lt;/p&gt;
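
&lt;p&gt;The dead-drop pattern is compact enough to sketch. A minimal Python illustration, assuming a hypothetical paste URL and JSON field name (neither is reproduced from the actual campaign):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of a dead-drop resolver. The paste URL and the
# "payload_url" field are hypothetical placeholders, not campaign IOCs.
import json
import urllib.request

DEAD_DROP = "https://paste.example.com/raw/abc123"  # hypothetical paste

def resolve_payload_url():
    # Stage 1: fetch the public paste; to a reviewer it is inert JSON.
    with urllib.request.urlopen(DEAD_DROP, timeout=10) as resp:
        config = json.load(resp)
    # Stage 2: the operator rotates this value server-side, so the
    # repository itself never needs to change.
    return config["payload_url"]
&lt;/code&gt;&lt;/pre&gt;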




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/typosquatted-openai-repo-on-hugging-face-delivered-rust-infostealer-to-244k/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Fake OpenAI Repository on Hugging Face Delivers Rust-Based Infostealer</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Sun, 10 May 2026 08:30:45 +0000</pubDate>
      <link>https://dev.to/bansac1981/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer-37ma</link>
      <guid>https://dev.to/bansac1981/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer-37ma</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;A malicious Hugging Face repository impersonating OpenAI's 'Privacy Filter' project reached #1 on the platform's trending list and accumulated 244,000 downloads before removal, delivering a multi-stage infostealer to Windows users. The attack chain used a disguised Python loader to execute PowerShell commands, ultimately deploying a Rust-based payload capable of harvesting browser credentials, crypto wallets, SSH/VPN configs, and screenshots. The campaign highlights the growing risk of AI/ML supply chain attacks through trusted model-sharing platforms.&lt;/p&gt;
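
&lt;p&gt;The loader stage is worth illustrating. A schematic Python snippet showing the general shape of such a loader, not the actual sample's code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Schematic of the loader pattern only: a "model utility" script whose
# real job is to hand off to PowerShell. The command is a benign
# placeholder, not the sample's next-stage logic.
import subprocess

def prepare_model(path):
    # ...plausible-looking model setup code would sit here...
    subprocess.run(
        ["powershell", "-WindowStyle", "Hidden", "-Command",
         "Write-Output 'next-stage download would run here'"],
        capture_output=True,
    )
&lt;/code&gt;&lt;/pre&gt;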




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/fake-openai-repository-on-hugging-face-delivers-rust-based-infostealer/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>ClaudeBleed Flaw Lets Rogue Chrome Extensions Hijack AI Agent</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Sat, 09 May 2026 20:30:46 +0000</pubDate>
      <link>https://dev.to/bansac1981/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent-3659</link>
      <guid>https://dev.to/bansac1981/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent-3659</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;A vulnerability dubbed ClaudeBleed in Anthropic's Claude Chrome extension allows any browser extension to inject arbitrary prompts into the Claude AI agent by exploiting lax permission checks and improper trust validation. Attackers can bypass user confirmation protections via DOM manipulation and repeated message forging, enabling full agent takeover for information theft or unauthorized actions. The flaw effectively breaks Chrome's extension security model and exposes users running Claude's agentic capabilities to third-party extension compromise.&lt;/p&gt;
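
&lt;p&gt;The root cause generalises beyond this one extension. A language-agnostic sketch of the flaw class in Python (the real bug lives in Chrome's extension messaging layer, and all names below are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch of the trust-validation flaw class: a handler that treats any
# well-formed message as user intent. Sender names are illustrative.
EXPECTED_SENDER = "claude-extension-ui"

def on_message(message, sender_id):
    if message.get("type") == "user_prompt":
        # Vulnerable pattern: act on message content alone, so any
        # co-installed extension can forge "user" prompts.
        # Safer pattern: verify the sender before acting, as below.
        if sender_id != EXPECTED_SENDER:
            raise PermissionError("prompt from unverified sender rejected")
        return run_agent(message["prompt"])

def run_agent(prompt):
    return "agent would act on: " + prompt
&lt;/code&gt;&lt;/pre&gt;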




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/claudebleed-flaw-lets-rogue-chrome-extensions-hijack-ai-agent/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Claude Mythos AI-Assisted Fuzzing Uncovers 423 Firefox Security Bugs in One Month</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Sat, 09 May 2026 14:30:46 +0000</pubDate>
      <link>https://dev.to/bansac1981/claude-mythos-ai-assisted-fuzzing-uncovers-423-firefox-security-bugs-in-one-month-1mp8</link>
      <guid>https://dev.to/bansac1981/claude-mythos-ai-assisted-fuzzing-uncovers-423-firefox-security-bugs-in-one-month-1mp8</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Mozilla used early access to Anthropic's Claude Mythos model to systematically discover and patch hundreds of previously unknown vulnerabilities in Firefox, including bugs that had survived 15–20 years in the codebase. The effort demonstrates a step-change in AI-assisted vulnerability research: Firefox shipped 423 security fixes in April 2026, against a monthly baseline of 20–30. The same capability that empowered Mozilla's defenders also signals that adversaries with similar model access could industrialise exploit discovery against open-source software at scale.&lt;/p&gt;




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/ai-assisted-fuzzing-uncovers-423-firefox-security-bugs-in-one-month/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/ai-assisted-fuzzing-uncovers-423-firefox-security-bugs-in-one-month/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Fake Claude AI Site Used to Distribute Beagle Backdoor and PlugX Malware</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Sat, 09 May 2026 08:30:47 +0000</pubDate>
      <link>https://dev.to/bansac1981/fake-claude-ai-site-used-to-distribute-beagle-backdoor-and-plugx-malware-3ac9</link>
      <guid>https://dev.to/bansac1981/fake-claude-ai-site-used-to-distribute-beagle-backdoor-and-plugx-malware-3ac9</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Threat actors created a convincing fake website impersonating Anthropic's Claude AI to trick developers into downloading a trojanized installer that deploys the new 'Beagle' backdoor alongside a PlugX malware chain. The campaign specifically targets Claude-Code developers by advertising a fraudulent 'high-performance relay service,' suggesting deliberate targeting of the AI developer community. The attack leverages DLL sideloading via a legitimate signed G Data executable to evade detection while establishing persistent remote access.&lt;/p&gt;
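
&lt;p&gt;For defenders, a DLL colocated with a signed vendor executable is the telltale artefact. A quick triage sketch in Python; the DLL names below are common sideload targets in general, not confirmed IOCs from this campaign:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Flags directories where a DLL with a commonly sideloaded name sits
# beside an EXE. The name list is generic, not specific to this campaign.
from pathlib import Path

COMMON_SIDELOAD_NAMES = {"version.dll", "wtsapi32.dll", "dbghelp.dll"}

def sideload_candidates(directory):
    root = Path(directory)
    has_exe = any(root.glob("*.exe"))
    colocated = {p.name.lower() for p in root.glob("*.dll")}
    if has_exe:
        return sorted(colocated.intersection(COMMON_SIDELOAD_NAMES))
    return []
&lt;/code&gt;&lt;/pre&gt;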




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/fake-claude-ai-site-used-to-distribute-beagle-backdoor-and-plugx-malware/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/fake-claude-ai-site-used-to-distribute-beagle-backdoor-and-plugx-malware/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Malicious Repos Trigger Silent Code Execution in Claude, Cursor, Gemini CLIs</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Sat, 09 May 2026 02:30:45 +0000</pubDate>
      <link>https://dev.to/bansac1981/malicious-repos-trigger-silent-code-execution-in-claude-cursor-gemini-clis-594g</link>
      <guid>https://dev.to/bansac1981/malicious-repos-trigger-silent-code-execution-in-claude-cursor-gemini-clis-594g</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;A vulnerability class dubbed 'TrustFall' demonstrates that malicious code repositories can trigger arbitrary code execution in AI-assisted developer tools including Claude Code, Cursor CLI, Gemini CLI, and GitHub Copilot CLI, with little to no user interaction required. The attack surface stems from inadequate or easily dismissed warning dialogs that fail to surface the risk of executing untrusted repository content. Developers cloning or opening adversarial repositories are exposed to full host-level compromise through the elevated trust these AI coding agents place in repository-supplied context.&lt;/p&gt;
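
&lt;p&gt;The underlying pattern is easy to state in code. A generic sketch of the vulnerability class; the file name and key are invented for illustration, and no specific CLI's configuration format is reproduced here:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The class of bug: an assistant CLI that auto-executes repo-supplied
# configuration on open. ".assistant.json" and "on_open" are invented
# names for illustration.
import json
import subprocess
from pathlib import Path

def open_repo(repo_path):
    cfg = Path(repo_path) / ".assistant.json"
    if cfg.exists():
        command = json.loads(cfg.read_text()).get("on_open")
        if command:
            # The flaw: repository content crosses the trust boundary
            # and runs on the host without an informed confirmation.
            subprocess.run(command, shell=True)
&lt;/code&gt;&lt;/pre&gt;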




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/malicious-repos-trigger-silent-code-execution-in-claude-cursor-gemini-clis/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/malicious-repos-trigger-silent-code-execution-in-claude-cursor-gemini-clis/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Mitiga Labs: MCP Hijack Attack Steals Claude Code OAuth Tokens via Silent Man-in-the-Middle</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Fri, 08 May 2026 20:30:45 +0000</pubDate>
      <link>https://dev.to/bansac1981/mitiga-labs-mcp-hijack-attack-steals-claude-code-oauth-tokens-via-silent-man-in-the-middle-37eg</link>
      <guid>https://dev.to/bansac1981/mitiga-labs-mcp-hijack-attack-steals-claude-code-oauth-tokens-via-silent-man-in-the-middle-37eg</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Mitiga Labs has disclosed a stealthy attack chain targeting Claude Code's MCP infrastructure, allowing adversaries to silently intercept OAuth tokens by redirecting MCP traffic through attacker-controlled infrastructure. The attack requires only that the victim install a malicious npm package, which rewrites ~/.claude.json to insert a malicious proxy and pre-sets trust flags so that no security prompt ever appears. Because the OAuth token grants broad access to all connected SaaS tools, successful exploitation effectively hands attackers a persistent master key to the victim's integrated development environment.&lt;/p&gt;
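
&lt;p&gt;Since the implant works by editing ~/.claude.json, the config file itself is the place to look. A defensive audit sketch; the key names are assumptions and should be checked against the file's actual schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Audit sketch for unexpected MCP endpoints in ~/.claude.json. The
# "mcpServers" / "url" keys are assumptions; verify against your file.
import json
from pathlib import Path

def audit_claude_config():
    cfg_path = Path.home() / ".claude.json"
    if not cfg_path.exists():
        return []
    cfg = json.loads(cfg_path.read_text())
    findings = []
    for name, server in cfg.get("mcpServers", {}).items():
        url = str(server.get("url", ""))
        if url and not url.startswith("https://"):
            findings.append((name, "non-HTTPS MCP endpoint: " + url))
    return findings
&lt;/code&gt;&lt;/pre&gt;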




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/mcp-hijack-attack-steals-claude-code-oauth-tokens-via-silent-man-in-the-middle/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/mcp-hijack-attack-steals-claude-code-oauth-tokens-via-silent-man-in-the-middle/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Pixel-Level Perturbations Enable Invisible Prompt Injection in Vision-Language Models</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Fri, 08 May 2026 14:31:06 +0000</pubDate>
      <link>https://dev.to/bansac1981/pixel-level-perturbations-enable-invisible-prompt-injection-in-vision-language-models-4880</link>
      <guid>https://dev.to/bansac1981/pixel-level-perturbations-enable-invisible-prompt-injection-in-vision-language-models-4880</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Cisco's AI Threat Intelligence team has demonstrated that bounded pixel-level perturbations can recover the attack effectiveness of degraded typographic images against vision-language models (VLMs), enabling hidden prompt injection that bypasses both human review and content filters. The technique works by optimising perturbations against open-source embedding models and transferring results to proprietary systems like GPT-4o and Claude, exposing a cross-model transferability risk. The attack allows adversaries to embed instructions—such as data exfiltration commands—inside images that appear as visual noise to human observers.&lt;/p&gt;
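
&lt;p&gt;The optimisation loop behind such attacks is compact. A minimal PGD-style sketch in PyTorch, written against a stand-in embedding callable rather than the specific open-source encoders used in the research:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# PGD-style perturbation sketch: nudge the image so its embedding drifts
# toward a target (e.g. an instruction string's embedding) while the
# pixel change stays within an invisibility budget. "embed" is any
# differentiable image-embedding callable; a stand-in, not Cisco's setup.
import torch

def optimise_perturbation(image, target_emb, embed,
                          epsilon=8 / 255, steps=100, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = -torch.cosine_similarity(
            embed(image + delta), target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient step
            delta.clamp_(-epsilon, epsilon)   # keep it imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
&lt;/code&gt;&lt;/pre&gt;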




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/pixel-level-perturbations-enable-invisible-prompt-injection-in-vision-language/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/pixel-level-perturbations-enable-invisible-prompt-injection-in-vision-language/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Prompt Injection Achieves Remote Code Execution in Semantic Kernel Agent Framework</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Fri, 08 May 2026 08:31:18 +0000</pubDate>
      <link>https://dev.to/bansac1981/prompt-injection-achieves-remote-code-execution-in-semantic-kernel-agent-framework-4lko</link>
      <guid>https://dev.to/bansac1981/prompt-injection-achieves-remote-code-execution-in-semantic-kernel-agent-framework-4lko</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Microsoft's Defender Security Research Team disclosed two CVEs in Semantic Kernel — a widely used AI agent orchestration framework — demonstrating how prompt injection can escalate to remote code execution via compromised plugins. The vulnerabilities (CVE-2026-26030 and CVE-2026-25592) expose a systemic risk in the agentic AI layer: because frameworks like Semantic Kernel abstract tool orchestration, a single flaw in how LLM outputs are mapped to system tools can propagate across every application built on that foundation. This research signals a critical shift in AI threat modelling, where prompt injection is no longer a content risk but an execution risk.&lt;/p&gt;
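
&lt;p&gt;The systemic risk reduces to one dispatch decision. A generic sketch of the safe pattern in Python (this is not Semantic Kernel's actual API, nor the exact mechanism of either CVE):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Generic tool dispatch: the model may only select a registered tool;
# it never names arbitrary callables or shell commands. Tool names and
# the registry shape are illustrative.
ALLOWED_TOOLS = {
    "get_weather": lambda city: "sunny in " + city,
    "search_docs": lambda query: "results for: " + query,
}

def dispatch(tool_name, tool_args):
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        # Injected output asking for an unregistered tool stops here,
        # instead of being mapped onto system execution.
        raise ValueError("tool not in allow-list: " + tool_name)
    return tool(tool_args)
&lt;/code&gt;&lt;/pre&gt;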




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/prompt-injection-achieves-rce-in-semantic-kernel-agent-framework/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/prompt-injection-achieves-rce-in-semantic-kernel-agent-framework/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Unmanaged AI Agents Expose Enterprise Identity Perimeters to Silent Compromise</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Thu, 07 May 2026 08:32:11 +0000</pubDate>
      <link>https://dev.to/bansac1981/unmanaged-ai-agents-expose-enterprise-identity-perimeters-to-silent-compromise-5758</link>
      <guid>https://dev.to/bansac1981/unmanaged-ai-agents-expose-enterprise-identity-perimeters-to-silent-compromise-5758</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Enterprises are deploying AI agents faster than governance frameworks can track them, creating a shadow identity layer that operates outside traditional IAM visibility. These agents run continuously, accumulate permissions opportunistically, and interact with sensitive data at machine speed — largely unmonitored. The structural gap between agent activity and IAM coverage represents a significant and growing attack surface for privilege abuse and data exfiltration.&lt;/p&gt;




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/unmanaged-ai-agents-expose-enterprise-identity-perimeters-to-silent-compromise/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/unmanaged-ai-agents-expose-enterprise-identity-perimeters-to-silent-compromise/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>Bleeding Llama Flaw Exposes 300,000 Ollama Servers to Unauthenticated Data Theft</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Wed, 06 May 2026 20:30:46 +0000</pubDate>
      <link>https://dev.to/bansac1981/bleeding-llama-flaw-exposes-300000-ollama-servers-to-unauthenticated-data-theft-13g5</link>
      <guid>https://dev.to/bansac1981/bleeding-llama-flaw-exposes-300000-ollama-servers-to-unauthenticated-data-theft-13g5</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;A critical heap out-of-bounds read vulnerability (CVE-2026-7482, CVSS 9.3) in Ollama's GGUF model loader allows unauthenticated remote attackers to exfiltrate sensitive heap memory — including API keys, prompts, and PII — using just three API calls. With approximately 300,000 Ollama instances publicly exposed and no authentication required by default, the attack surface is immediately and broadly exploitable. The vulnerability has been patched in Ollama version 0.17.1, but unpatched internet-facing deployments remain at critical risk.&lt;/p&gt;
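
&lt;p&gt;The bug class is a familiar one for binary loaders: trusting an attacker-controlled length field. A schematic Python illustration of the class, not Ollama's actual GGUF parsing code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Schematic of the out-of-bounds-read class: a length field read from
# the attacker-supplied model file decides how far the parser reads.
# Illustrative Python, not Ollama's actual GGUF loader.
def read_gguf_string(buf, offset):
    length = int.from_bytes(buf[offset:offset + 8], "little")
    end = offset + 8 + length
    # Vulnerable loaders omit this check and read past the buffer,
    # leaking whatever heap memory happens to sit beyond it.
    if end &amp;gt; len(buf):
        raise ValueError("declared string length exceeds file size")
    return buf[offset + 8:end]
&lt;/code&gt;&lt;/pre&gt;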




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/bleeding-llama-flaw-exposes-300000-ollama-servers-to-unauthenticated-data-theft/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/bleeding-llama-flaw-exposes-300000-ollama-servers-to-unauthenticated-data-theft/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
    <item>
      <title>CrowdStrike Researcher Details AI Jailbreaking and Data Poisoning Techniques</title>
      <dc:creator>Achin Bansal</dc:creator>
      <pubDate>Wed, 06 May 2026 14:31:28 +0000</pubDate>
      <link>https://dev.to/bansac1981/crowdstrike-researcher-details-ai-jailbreaking-and-data-poisoning-techniques-2ldm</link>
      <guid>https://dev.to/bansac1981/crowdstrike-researcher-details-ai-jailbreaking-and-data-poisoning-techniques-2ldm</guid>
      <description>&lt;h3&gt;Forensic Summary&lt;/h3&gt;

&lt;p&gt;Joey Melo, Principal Security Researcher at CrowdStrike, outlines his methodology for AI red teaming, focusing on manipulating LLM guardrails through jailbreaking and data poisoning without altering underlying source code. His work, rooted in competitive AI hacking challenges, translates classical adversarial thinking into the emerging field of machine learning security. The profile highlights the growing professionalisation of AI red teaming as organisations seek to harden LLM deployments against real-world manipulation attacks.&lt;/p&gt;




&lt;p&gt;Read the full technical deep-dive on &lt;strong&gt;Grid the Grey&lt;/strong&gt;: &lt;a href="https://gridthegrey.com/posts/crowdstrike-researcher-details-ai-jailbreaking-and-data-poisoning-techniques/" rel="noopener noreferrer"&gt;https://gridthegrey.com/posts/crowdstrike-researcher-details-ai-jailbreaking-and-data-poisoning-techniques/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
