This is a follow-up to our earlier analysis of the @bitwarden/cli compromise. The attack was worse than it first appeared.
When @bitwarden/cli@2026.4.0 was compromised on April 22, 2026, the initial analysis focused on the supply chain mechanics: a GitHub Actions exploit, OIDC-backed npm credentials, exfiltrated SSH keys and GitHub tokens.
The follow-up forensics revealed something different.
The malware specifically hunted for your AI coding tool credentials. Claude Code, Gemini CLI, Codex CLI, Kiro, Aider, OpenCode — it checked for each one. And when it found them, it stole their configuration files.
Developer AI tool configs are no longer collateral damage in supply chain attacks. They're the target.
The AI Targeting Module
Security researchers analyzing the compromised payload identified a dedicated component focused on AI development tool detection. The module's approach was direct: it invoked each suspected AI tool with a test prompt —
"Hey! Just making sure you're here. If you are can you respond with Hello and nothing else?"
A successful response confirmed the tool was authenticated and active on the developer's machine. That confirmation triggered a targeted credential sweep.
Six tools were in the crosshairs:
- Claude Code (Anthropic)
- Gemini CLI (Google)
- Codex CLI (OpenAI)
- Kiro (Amazon)
- Aider
- OpenCode
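A quick defensive counterpart to that detection logic: inventory which of these tools are even present on a machine. The binary names below are assumptions (install methods and names vary), so adjust them for your setup:

```shell
# Check which targeted AI CLI tools are on this machine's PATH.
# Binary names are assumptions -- adjust for your install method.
for tool in claude gemini codex kiro aider opencode; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "FOUND: $tool ($(command -v "$tool"))"
  else
    echo "absent: $tool"
  fi
done
```

Anything reported as found is a tool whose config files belong in your incident-response scope.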
This wasn't opportunistic. The attacker knew which tools developers use and built explicit detection logic for each one.
What Got Stolen
JFrog's security research team identified the specific files targeted in the harvest:
~/.claude.json # Claude Code authentication
.claude.json # Local Claude configuration
~/.claude/mcp.json # MCP server definitions
~/.kiro/settings/mcp.json # Kiro MCP configuration
These are not standard "developer credentials." Understanding what's in them matters.
~/.claude.json contains your Anthropic API key, which grants full access to Claude Code sessions. The file also carries workspace context, repository awareness, and any secrets Claude has seen during coding sessions.
~/.claude/mcp.json and ~/.kiro/settings/mcp.json are MCP configuration files. They define which Model Context Protocol servers your AI coding tool can reach: databases, GitHub integrations, internal APIs, filesystem access, custom tools. These aren't just credentials — they're a map of your AI's access topology.
An attacker with your MCP configuration knows:
- What databases your AI can query
- What APIs it can call
- What files and repositories it can read
Combined with a stolen API key, they can impersonate your AI coding environment entirely — exfiltrating code and secrets through legitimate API calls that look like normal developer activity.
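You can audit that same topology yourself. The sketch below lists the servers defined in a Claude Code MCP config; it assumes the common `mcpServers` JSON layout, which varies by tool and version, so treat it as a starting point:

```shell
# Sketch: enumerate the MCP servers defined for Claude Code.
# Assumes the common "mcpServers" JSON layout -- adjust for your tool.
CFG="${CFG:-$HOME/.claude/mcp.json}"
if [ -f "$CFG" ]; then
  python3 -c '
import json, sys
cfg = json.load(open(sys.argv[1]))
for name, srv in cfg.get("mcpServers", {}).items():
    target = srv.get("command") or srv.get("url") or "?"
    print(f"{name} -> {target}")
' "$CFG"
fi
```

If the output surprises you, the attacker reading the same file would not be surprised for long.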
The harvest ran across multiple parallel collectors: SSH keys, npm tokens, GitHub PATs, AWS/GCP/Azure credentials, shell environment variables, GitHub Actions secrets, and now AI tool configurations. All AES-256-GCM encrypted and exfiltrated to a C2 server disguised as Checkmarx telemetry (audit.checkmarx.cx/v1/telemetry).
The Persistence Layer
Once active AI tools were confirmed, the malware injected a persistent block into ~/.bashrc and ~/.zshrc. This means:
Every future terminal session on the compromised machine runs the hook. Every time you launch Claude Code, Aider, or Codex CLI, the injected code is already there.
This transforms a credential theft into a beachhead. The attacker isn't just taking what's there today — they're positioned inside your AI-assisted development workflow going forward.
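One low-tech triage step: shell startup files rarely change on their own, so a modification timestamp you can't account for is worth a closer look. A minimal sketch (the 30-day window is an arbitrary choice, not part of any published IOC):

```shell
# Quick triage: list shell startup files changed in the last 30 days.
# A recent modification you don't remember making deserves inspection.
for f in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile" "$HOME/.bash_profile"; do
  if [ -f "$f" ]; then
    find "$f" -mtime -30 -print
  fi
done
```

This won't tell you *what* changed, only that something did; pair it with the grep check in the response section below, or with a diff against dotfiles you keep in version control.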
Why This Was Inevitable
TeamPCP has been running campaigns since September 2025, deliberately targeting developer tooling: Trivy (vulnerability scanning), Checkmarx KICS (security scanning), LiteLLM (AI gateway), and now Bitwarden CLI (credential management). The pattern is consistent — they target the infrastructure developers trust most.
AI coding tools are the newest entry in that list. And they're a particularly high-value target:
- API key value: A stolen Anthropic API key has direct financial value — the attacker burns your credit quota, or resells API access.
- Context value: Your AI coding tool has been working with your codebase. It knows your architecture, your internal APIs, your business logic. That context is accessible via API once the key is stolen.
- Pivot value: MCP configurations give an attacker a precise inventory of what they can reach via your AI tool's access — often broader than your own SSH key grants.
The Signal That Was There
Last week, we noted that the legitimate @bitwarden/cli@2026.3.0 scored 92/100 on Commit — structurally sound, active maintainers, clean dependency graph. The CI/CD pipeline attack bypassed all structural signals.
But the compromised version (2026.4.0) left a tamper fingerprint:
The root package.json declared version 2026.4.0. The embedded build artifact metadata inside build/bw.js said 2026.3.0.
A version mismatch between the package manifest and the compiled artifact is a reliable indicator that the packaging layer was modified post-build — that someone added code on top of a legitimate release rather than rebuilding from source. Artifact integrity verification would have surfaced this before installation.
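That check is easy to automate. The sketch below compares the manifest version against version strings found in the built artifact; the path assumes a global npm install and a simple `YYYY.M.P` version format, so adjust both for your environment:

```shell
# Sketch: compare the manifest version with version strings embedded
# in the built artifact. Paths assume a global npm install.
PKG_DIR="${PKG_DIR:-$(npm root -g)/@bitwarden/cli}"
manifest=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["version"])' \
  "$PKG_DIR/package.json")
embedded=$(grep -oE '20[0-9]{2}\.[0-9]+\.[0-9]+' "$PKG_DIR/build/bw.js" | sort -u)
echo "manifest: $manifest"
echo "embedded: $embedded"
# Any embedded version that differs from the manifest is a tamper signal.
```

On the compromised package, this would have printed `manifest: 2026.4.0` against an embedded `2026.3.0` — the fingerprint described above, visible before a single line of the payload ran.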
Structural scoring tells you whether a package is trustworthy in steady state. Provenance verification tells you whether this specific artifact is what the maintainer built. Both are now table stakes.
Protect Your AI Dev Environment Now
If you installed @bitwarden/cli@2026.4.0: Assume full compromise. The credential rotation list now includes your AI tool API keys:
# Rotate these immediately:
# - Anthropic API key (console.anthropic.com → API Keys)
# - OpenAI API key
# - Google AI API key
# - Any MCP server credentials listed in ~/.claude/mcp.json
# - And: SSH keys, npm tokens, GitHub PATs, AWS/GCP/Azure creds
Check for shell persistence:
# Look for injected blocks in your shell config
grep -nE "bw|bitwarden|setup" ~/.bashrc ~/.zshrc
# A large heredoc block you didn't add = compromise indicator
Downgrade:
npm install -g @bitwarden/cli@2026.3.0
# Or use official signed binaries from bitwarden.com
Verify npm package signatures:
npm audit signatures
Harden your MCP configurations: Treat ~/.claude/mcp.json and ~/.kiro/settings/mcp.json the same as ~/.aws/credentials — minimal permissions, no write access to sensitive systems unless explicitly required. Audit what servers your AI tools can reach.
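A minimal first step, assuming the default file locations named above: make these configs readable by you alone, exactly as you would ~/.aws/credentials or an SSH private key.

```shell
# Treat AI tool configs like ~/.aws/credentials: owner-readable only.
for f in "$HOME/.claude.json" "$HOME/.claude/mcp.json" "$HOME/.kiro/settings/mcp.json"; do
  if [ -f "$f" ]; then
    chmod 600 "$f" && echo "locked down: $f"
  fi
done
```

File permissions would not have stopped this particular malware (it ran as you), but they stop the broader class of same-machine snooping, and they make these files show up in the permission audits you hopefully already run.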
The New Attack Surface
The Dune-themed naming throughout this campaign — "Shai-Hulud" for the worm, the AI-targeting module referencing the Butlerian Jihad (the fictional war against thinking machines) — suggests attackers who see AI development tooling as the next frontier. They're not wrong.
Your .claude/ directory, your MCP server configs, your AI tool API keys: these now belong on the same threat model as your .ssh/ directory.
Supply chain security has always been about the trust you extend when you run someone else's code. AI coding tools are new vectors for that trust — and they're now explicitly in scope for sophisticated threat actors.
Commit scores npm packages for supply chain trust signals. The @bitwarden/cli attack reinforces that structural scoring needs to be paired with artifact provenance verification — version consistency between manifest and compiled artifact is a tamper signal that should surface before installation, not after.
Check any package: npx proof-of-commitment@latest <package-name>