On May 12, 2026, Microsoft Threat Intelligence, along with security firms including Aikido, Wiz, and Socket, disclosed that `mistralai==2.4.6` on PyPI contained malicious code. This package is the official Python client library for Mistral AI's large language models.
The malicious version remained live for only a few hours but may have been downloaded by thousands of developers working on AI agents, trading bots, smart contract tools, RAG pipelines, and internal applications.
Key facts:
- Only version 2.4.6 was affected. All other versions are clean.
- The package has been removed from PyPI.
- This attack is part of the ongoing "Mini Shai-Hulud" campaign that has already compromised many popular packages across PyPI and npm.
## How the Malware Worked (Technical Breakdown)
The attack was stealthy and effective:
**Execution on import.** Malicious code was injected into `src/mistralai/client/__init__.py`. Simply running `import mistralai` on a Linux system triggered the payload.
**Payload delivery.** It silently downloaded `https://83.142.209.194/transformers.pyz` to `/tmp/transformers.pyz` and executed it in the background. The filename was chosen to mimic the legitimate Hugging Face `transformers` library.
**Credential harvesting.** The malware searched the system for:
- GitHub tokens
- Cloud credentials (AWS, GCP, Azure)
- API keys
- Passwords stored in common locations
- Potentially crypto-wallet-related files
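As a quick triage step, you can check which of these common credential files even exist on a machine. A minimal sketch follows; the paths are typical defaults (my assumption), not the malware's confirmed target list, and the `check_files` helper name is illustrative:

```bash
# List common credential files present on this machine.
# Paths are typical defaults -- an assumption, not confirmed IoCs.
check_files() {
    for f in "$@"; do
        [ -f "$f" ] && echo "present: $f"
    done
    return 0
}

check_files ~/.aws/credentials ~/.netrc ~/.git-credentials \
    ~/.config/gcloud/application_default_credentials.json
```

Anything flagged here should be treated as potentially exposed if the malicious version ever ran on that machine.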
**Evasion techniques.** The malware skipped systems whose language was set to Russian. On systems appearing to be in Israel or Iran, it had a random chance to run destructive commands that could wipe files.
## Immediate Actions for Developers (Do This Today)
### 1. Check if you installed the malicious version
```bash
# Check the installed version
pip list | grep mistralai

# Search dependency files for the bad pin
grep -E "mistralai==2.4.6" \
    requirements*.txt pyproject.toml uv.lock poetry.lock Pipfile Pipfile.lock 2>/dev/null
```
### 2. Revert to a Safe Version
```bash
# Downgrade to a clean version
pip install mistralai==2.4.5 --force-reinstall

# Or install the latest clean version
pip install mistralai --upgrade
```
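To confirm which version is actually active in the current environment, you can query package metadata directly. The `pkg_version` helper below is my own convenience wrapper, not official tooling:

```bash
# Print the installed version of a package, or nothing if it is absent.
pkg_version() {
    python3 -c "import importlib.metadata as m, sys; print(m.version(sys.argv[1]))" "$1" 2>/dev/null
}

pkg_version mistralai
```

If this prints `2.4.6`, the environment is still compromised and should be rebuilt, not just upgraded.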
### 3. Scan for Indicators of Compromise
```bash
# Check for the dropped payload
ls -l /tmp/transformers.pyz

# Look for suspicious files with similar names
find /tmp -name "transformers*" -type f 2>/dev/null
```
### 4. Rotate All Secrets
- Rotate GitHub Personal Access Tokens (especially those with broad scopes)
- Rotate cloud access keys
- Change API keys used with Mistral services
- Update secrets in CI/CD pipelines
## Long-Term Defenses Every AI/Python Developer Should Adopt
Dependency Security Checklist:
- Always pin exact versions in production (never use an unpinned requirement like `mistralai` or a loose range like `mistralai>=2.0`).
- Use lock files (poetry.lock, uv.lock, etc.) and regularly audit them.
- Add dependency scanning in CI/CD (pip-audit, safety, osv-scanner, Dependabot).
- Generate SBOMs for critical projects.
- Use virtual environments or containers for all experiments.
- Wait 24-48 hours before adopting newly released versions of popular packages.
- Consider internal package mirrors (Artifactory, Nexus, or a simple PyPI cache) for team projects.
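The pinning and scanning points above can be combined into a cheap CI gate. A minimal sketch, assuming the environment under test is already installed; the `block_bad_version` helper is illustrative, not a standard tool:

```bash
# Fail the build if a known-bad pin is present in the environment.
block_bad_version() {
    bad="$1"
    if pip freeze 2>/dev/null | grep -qx "$bad"; then
        echo "blocked: $bad is installed"
        return 1
    fi
    echo "ok: $bad not installed"
}

block_bad_version "mistralai==2.4.6"
```

Dedicated scanners like `pip-audit` or `osv-scanner` go further by checking every dependency against vulnerability databases, but a denylist check like this catches a freshly disclosed bad version before the databases update.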
For teams heavily using Mistral AI:
- Audit all code using `from mistralai import Mistral`.
- Review automated dependency update tools and add allow-lists for critical AI packages.
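For the audit step, a recursive grep is usually enough to find every file to review. The `find_mistral_imports` wrapper below is my own convenience function, not official tooling:

```bash
# List every Python file under a directory that imports mistralai.
find_mistral_imports() {
    grep -rln --include="*.py" \
        -E "^[[:space:]]*(from mistralai import|import mistralai)" "$1"
}

find_mistral_imports . || echo "no mistralai imports found"
```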
## Why This Keeps Happening
Supply chain attacks are increasing because developers often install packages with a single command without verification. Attackers now target widely used tools, especially in the fast-moving AI and crypto development space. The "Mini Shai-Hulud" campaign proves that even official packages from reputable companies can be compromised.
## Conclusion
No package is completely safe, even from well-known AI companies like Mistral AI. Security must be part of every developer's daily workflow.
Verify. Pin versions. Scan regularly. Rotate secrets.