DEV Community

Eldor Zufarov
Trust as a Vector: What the EtherRAT Campaign Reveals About Security's Blind Spot

The technical analysis of EtherRAT by Atos TRC is detailed and useful. SEO poisoning, fake GitHub repositories, Node.js payloads, blockchain-based C2 — all of this is correctly identified.

Source: LinkedIn

Source: CyberPress

But there is a pattern beneath these techniques that the report does not name.

The attackers did not exploit a cryptographic flaw. They did not break a protocol. They exploited trust.

Trust in search engines. Trust in GitHub. Trust in code signing. Trust in the behaviour of an administrator.

Here is how it works, step by step, from the outside.


1. Trust in search rankings

Search engines — Bing, Yahoo, DuckDuckGo, Yandex — decide what to show based on relevance and authority. This is not a security mechanism. This is a popularity contest.

The attackers poisoned search results for administrative tools. A victim searches for "psexec download" or "sysmon tool". The malicious GitHub repository appears near the top.

Why does this work? Because the search engine assumes that what is popular is trustworthy. The attacker manipulates popularity. The search engine trusts its own algorithm. The victim trusts the search engine.

Three layers of trust. No verification.
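The missing verification can be cheap. A minimal sketch of the idea, assuming a hypothetical allowlist of approved download hosts (in practice this would live in managed configuration, not in code): ranking is ignored entirely, so a poisoned top result fails exactly like an obscure one.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this team accepts tool downloads from.
APPROVED_HOSTS = {
    "download.sysinternals.com",  # official Sysinternals distribution point
    "learn.microsoft.com",
}

def is_approved_download(url: str) -> bool:
    """Return True only if the URL's host is on the explicit allowlist.

    Search ranking plays no role: popularity is not provenance.
    """
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_HOSTS

# A look-alike GitHub release fails; the vendor host passes.
print(is_approved_download("https://github.com/ms-sysadmin-tools/psexec/releases"))  # False
print(is_approved_download("https://download.sysinternals.com/files/PSTools.zip"))   # True
```

The point of the sketch is the shape of the control, not the list itself: the decision is made against an explicit set, so the search engine's opinion never enters the trust chain.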


2. Trust in GitHub — presence without purpose

GitHub is a platform for code collaboration. It is not a security attestation service. Any user can create a repository. Any repository can look legitimate.

The EtherRAT campaign used two repositories. The first looked like a clean storefront. It redirected to a second repository hosting a malicious MSI.

GitHub verifies that an account exists. It does not verify why it exists. The attacker does not need to fake an identity — they only need to create one that resembles the expected. The platform confirms presence. It cannot confirm purpose.

An account named ms-sysadmin-tools is technically verified as an existing account. It is not Microsoft. The distance between existence and legitimacy is exactly where the attack lives.

The administrator trusts GitHub because everyone uses it. Another layer of trust — with nothing beneath it.
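The gap between existence and legitimacy can at least be narrowed with cheap signals before anything is downloaded. A sketch of that triage, written as a pure function over repository metadata — the field names mirror what the GitHub REST API returns for a repository object (`created_at`, `stargazers_count`, `owner.login`), while the thresholds are illustrative, not a standard:

```python
from datetime import datetime, timezone

def repo_risk_flags(repo: dict, now=None) -> list[str]:
    """Collect cheap legitimacy signals for a repository.

    Flags are prompts for review, not proof of anything —
    they only make "presence without purpose" visible.
    """
    now = now or datetime.now(timezone.utc)
    flags = []
    created = datetime.fromisoformat(repo["created_at"].replace("Z", "+00:00"))
    if (now - created).days < 30:
        flags.append("repository created less than 30 days ago")
    if repo.get("stargazers_count", 0) < 5:
        flags.append("almost no community history")
    owner = repo.get("owner", {}).get("login", "")
    if "microsoft" in owner.lower() or owner.lower().startswith("ms-"):
        flags.append("name imitates a vendor; vendor tools ship from vendor domains")
    return flags

# A fresh storefront repo like the one in the campaign trips every flag:
suspect = {
    "created_at": "2025-01-10T00:00:00Z",
    "stargazers_count": 2,
    "owner": {"login": "ms-sysadmin-tools"},
}
for flag in repo_risk_flags(suspect, now=datetime(2025, 1, 20, tzinfo=timezone.utc)):
    print(flag)
```

None of these signals is hard to fake individually; their value is that they force the question the platform cannot answer — why does this account exist?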


3. Trust in code signing (or in MSI files)

Windows does not block unsigned MSI files by default. Even signed ones only prove who signed them, not that the content is safe. The EtherRAT MSI dropped a Node.js payload.

Why does this work? Because the operating system trusts that if a user runs an installer, they intended to. It does not verify intent. It does not verify the provenance of the file beyond a signature — and that signature only ties to an identity, not to safety.

The victim trusts the file because it came from GitHub. Because it is an MSI. Because nothing stopped it.
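The absence of a block is not a check. The cheapest real control at this step is comparing the artifact against a digest published out of band by the vendor — over a channel the attacker does not control, not the same page that served the file. A minimal sketch (the published digest a real workflow would use is not shown here; the demo hashes throwaway bytes):

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large installers stay out of memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, published_sha256: str) -> bool:
    """Accept the installer only if it matches the vendor-published digest."""
    return hmac.compare_digest(sha256_of(path), published_sha256.lower())
```

This does not replace signature checking; it pins the file's provenance to something the victim obtained independently, which is exactly the step the campaign relied on nobody taking.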


4. Trust in the administrator's behaviour

The entire attack assumes that an administrator will download a tool from a search result, run an MSI, and not investigate the repository beyond its surface appearance.

This is not a technical failure. This is a failure of assumptions. Security training often tells administrators to download tools from official sources. But what is an official source? Microsoft does not distribute PsExec through GitHub search results. The attacker mimics the expected behaviour, and the administrator follows their training — which points them to a place that was never designed to be a secure distribution channel.


What the defenders miss

Security standards, frameworks, and best practices are written from the inside. They assume that platforms like GitHub, search engines, and code signing authorities are trustworthy because they are reputable. They assume that users will behave correctly.

Attackers do not write standards. They read them. Not to follow them — to find where the standards assume trust instead of requiring proof.

In the EtherRAT campaign, every successful step was a trust assumption:

  • Search engine → I trust the ranking
  • GitHub → I trust the repository author
  • MSI → I trust the file because Windows ran it
  • Administrator → I trust my training

None of these trust assumptions was verified. None could be verified with the tools or processes that defenders normally use.


The deeper problem: uniformity of trust

The issue is not only that trust is misplaced. It is that trust is uniform.

When every administrator follows the same training, uses the same tools, downloads from the same platforms — the attacker only needs to understand one pattern. Standardisation, sold as security, becomes a targeting system.

If verification paths differ between teams, between roles, between contexts — the attacker cannot build a single exploit that scales. The unpredictability of the defender becomes the cost the attacker cannot absorb.

Security standards are written so that defenders behave consistently. The attacker reads that consistency as a map. Every place the standard says trust X, the attacker hears: here is your entry point.


Reading from the outside

Security work tends to reward familiarity with established vocabulary. This is natural — shared language makes coordination possible. But it also creates a gravitational pull toward the inside of the framework. The attacker has no such pull.

While defenders debate frameworks and classify techniques, the attacker is not in that room. He is outside, watching the room itself. He is not interested in terminology — he is interested in the connections that terminology assumes are safe. The gap between two trusted systems rarely has a name. It is not in any framework. That is precisely why it works.

This is not a methodology. It is a different kind of interest. The defender protects what he is assigned to protect. The attacker studies the whole journey — not the checkpoints, but the spaces between them. He is not looking for a vulnerability in a system. He is looking for the moment when no system is watching.

One way to develop this perspective: stop asking what is broken, and start asking what is assumed. Every assumption of safety is a question the attacker has already answered differently.


Closing

This is not a critique of the original analysis. The technical breakdown by Atos TRC is accurate and necessary.

But reading a report from the inside gives you techniques. Reading it from the outside gives you principles.

The principle here is simple: trust is not a control. It is the absence of a control.

Until security engineering treats trust as a vulnerability to be eliminated — not as a convenience to be accepted — campaigns like EtherRAT will keep working. Not because the code is sophisticated. Because the assumptions are weak.
