So, this week I stumbled across something that instantly caught my eye: PromptLock. Researchers at ESET dug it up, and it's basically the first ransomware prototype that runs on top of a local AI model.
Yeah, not ChatGPT through an API, but an actual LLM sitting right there on your machine, helping the malware decide what to do.
What's Going On Here:
PromptLock uses a local model, gpt-oss:20b (OpenAI's open-weight release), running via Ollama.
Instead of shipping one fixed payload, it prompts the model at runtime: "Hey, write me some Lua code to scan these folders." Because generation isn't deterministic, the next run can produce a slightly different script (there's a rough sketch of the call pattern right after the list below).
Those scripts can:
- list and filter your files
- pick "valuable" stuff
- encrypt or even exfiltrate it
Basically, it doesn't look the same each time, which is a nightmare for signature-based antivirus tools.
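To make that concrete, here's a minimal, deliberately harmless sketch of the pattern, assuming a local Ollama instance on its default port. The prompt is a benign stand-in (not anything PromptLock actually sends); the point is just that code comes back from a model at runtime, and the same prompt can return slightly different code on every run.

```python
# Ask a locally served model (Ollama's documented /api/generate endpoint)
# to write code at runtime instead of shipping it. Benign stand-in prompt.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = {
    "model": "gpt-oss:20b",  # the model PromptLock reportedly drives
    "prompt": "Write a short Lua function that reverses a string.",  # benign stand-in
    "stream": False,         # get one JSON object back instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    generated = json.loads(resp.read())["response"]

# Run this twice and diff the output: same prompt, usually not the same Lua.
print(generated)
```

That non-determinism is exactly what breaks signature matching: there's no fixed payload to fingerprint.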
Why It Matters:
- It's cross-platform - Windows, Linux, macOS.
- It's adaptive - behavior shifts each time.
- And it's local - so there's no outbound traffic to a cloud AI API for your network tools to flag (a quick way to check for a local Ollama endpoint is sketched below).
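On that last point, one cheap self-audit is simply checking whether anything on the machine is answering on Ollama's default port (11434). A listener there only means some local LLM service exists, not that anything is wrong, but on a box where you never installed Ollama it's worth a closer look.

```python
# Quick self-audit: is something answering on Ollama's default local port?
import socket

def local_port_open(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if local_port_open(11434):  # 11434 is Ollama's default serving port
    print("Something is listening on 127.0.0.1:11434 - check what installed it.")
else:
    print("No local service on port 11434.")
```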
Now, not everyone's going to be hit by this. Most people aren't running a 20B-parameter model on their personal laptop.
This smells way more like a corporate-level threat: servers, research labs, or any environment where local AI models are being tested.
For everyday users, your bigger risk is still the usual phishing emails and classic ransomware.
What We Can Actually Do:
- Backups (the ones you test and actually restore).
- Least privilege (don't run daily stuff as root/admin).
- Watch your scripts - random Lua popping up on your box is never a good sign (see the small watcher sketch right after this list).
- Prefer behavior-based security tools over purely signature-based ones.
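For the "watch your scripts" point, here's a rough, stdlib-only sketch of the idea: poll a couple of directories and report any .lua files that appear after the script starts. The watched paths are made-up examples, and a real behavior-based EDR does this job far better; this is just the concept in a few lines.

```python
# Crude watcher for the "random Lua popping up" case: poll some directories
# and report .lua files that appear after startup. WATCH_DIRS is a placeholder.
import os
import time
from pathlib import Path

WATCH_DIRS = [Path("/tmp"), Path.home() / "Downloads"]  # hypothetical examples
POLL_SECONDS = 10

def lua_files(dirs):
    """Return the set of .lua paths currently visible under the given dirs."""
    found = set()
    for d in dirs:
        if not d.is_dir():
            continue
        for root, _subdirs, files in os.walk(d):
            for name in files:
                if name.lower().endswith(".lua"):
                    found.add(os.path.join(root, name))
    return found

baseline = lua_files(WATCH_DIRS)
print(f"Baseline: {len(baseline)} Lua files. Watching...")

while True:
    time.sleep(POLL_SECONDS)
    current = lua_files(WATCH_DIRS)
    for new_file in sorted(current - baseline):
        print(f"[!] New Lua file appeared: {new_file}")
    baseline = current
```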
Nothing revolutionary here, but PromptLock shows how fast attackers adapt.
My Take
To me, the interesting bit isn't "oh no, AI is evil," but that malware authors are now treating LLMs as toolchains. They don't even need to write perfect malware anymore - they just need a model that can generate and tweak code on demand.
Feels like a new chapter in the cat-and-mouse game. But again, this isn't a doomsday scenario for every PC out there. It's a warning shot for enterprises and anyone experimenting with self-hosted AI.