DEV Community

rim dinov

How a Single JavaScript File Bypassed a $1.5B Multi-Sig: Anatomy of the Bybit Hack


On February 21, 2025, the crypto world witnessed the largest single-event heist in history: $1.5 billion (401,347 ETH) was drained from Bybit's cold wallet in a matter of minutes.

The most terrifying part? The smart contracts worked perfectly. The Gnosis Safe multi-sig wallet, widely considered the gold standard of on-chain security, didn't have a single bug. Cryptography didn't fail. Instead, the hackers—officially attributed to the notorious state-sponsored Lazarus Group—exploited a massive blind spot that exists in almost every dApp today: the web interface supply chain.

As security researchers and developers, we need to treat this as a watershed moment. Here is exactly how they did it, why traditional smart contract audits miss this, and how we can prevent it from ever happening again.

🗺️ The Setup: Targeting the Weakest Link (The Web UI)
Multi-signature wallets require multiple authorized key holders (signers) to approve any outgoing transaction. Bybit’s setup required at least three signers using hardware wallets (like Ledger or Trezor) to sign transactions through the Safe{Wallet} web interface.

Lazarus realized they didn’t need to steal the private keys. They just needed to change what the signers were signing.

Step 1: Infiltrating the Developer Supply Chain
Before the main attack, Lazarus targeted the deployment infrastructure of the Safe{Wallet} web interface (which was hosted using AWS S3 buckets). Through sophisticated phishing or stolen credentials, they managed to gain write access to the static assets.

They subtly injected a malicious script into a deeply nested React bundle on February 19, 2025:
_app-52c9031bfa03da47.js

Step 2: UI Spoofing (What You See Is NOT What You Sign)
The malicious JavaScript was highly targeted. It remained completely dormant for regular users. But when it detected a session associated with Bybit's cold wallet trying to initiate a transfer:

On the signer's computer screen, the Safe{Wallet} UI displayed a routine internal transfer to Bybit's legitimate "warm" wallet. The address was correct, and the amount was correct.

Under the hood, the script hijacked the transaction generation payload. It swapped the destination address to the attacker’s contract and set the transfer amount to 401,347 ETH.
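Conceptually, the display/sign split takes only a few lines of JavaScript. The sketch below is a hypothetical reconstruction of the technique, not the actual injected code; the addresses and the `hijackTransaction` name are placeholders:

```javascript
// Hypothetical reconstruction of the payload-swap technique, NOT the real
// Lazarus payload. TARGET_SAFE and ATTACKER are illustrative placeholders.
const TARGET_SAFE = "0xbybit-cold-wallet"; // victim Safe (placeholder)
const ATTACKER = "0xattacker-contract";    // attacker's contract (placeholder)

// The UI renders from `displayTx`; the wallet receives `signedTx`.
// A compromised bundle can split the two with no visible change.
function hijackTransaction(tx) {
  if (tx.from !== TARGET_SAFE) {
    return { displayTx: tx, signedTx: tx }; // dormant for everyone else
  }
  return {
    displayTx: tx,                     // what the signer sees on screen
    signedTx: { ...tx, to: ATTACKER }, // what actually gets hashed and signed
  };
}
```

The targeting check is what made the injection so hard to detect: for every session except the victim's, the bundle behaved identically to the legitimate one.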

🚨 The Fatal Flaw: Blind Signing on Hardware Wallets
When the three signers looked at their browser screens, everything looked perfectly normal. They connected their hardware wallets and initiated the signing process.

This is where the catastrophic failure occurred.

Because Gnosis Safe transactions are routed through complex proxy contracts and execTransaction calls, the raw data sent to the hardware wallet is not a simple "Send X ETH to Address Y" message. It’s a complex, hashed hex string (calldata).

Since the hardware wallet screen cannot natively decode and display this complex contract payload, the physical devices displayed a generic warning: "Blind Signing Enabled" followed by a raw, unreadable hash.
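To see why the device screen is useless here, consider what a decoder must do even in the simplest case. The sketch below handles only a bare ERC-20 `transfer(address,uint256)` call (selector `0xa9059cbb`); a real Safe `execTransaction` payload wraps further layers of ABI encoding, which is exactly why the device gives up and shows a raw hash:

```javascript
// Minimal sketch of calldata decoding for one known selector. Anything the
// decoder does not recognize is an opaque hex blob -- the blind-signing gap.
function decodeTransferCalldata(calldata) {
  const hex = calldata.replace(/^0x/, "");
  const selector = hex.slice(0, 8);
  if (selector !== "a9059cbb") return null;    // unknown function: blind spot
  const to = "0x" + hex.slice(8 + 24, 8 + 64); // last 20 bytes of word 1
  const amount = BigInt("0x" + hex.slice(8 + 64, 8 + 128)); // word 2
  return { to, amount };
}
```

Every function the signer might call needs its own decoding rule and on-device rendering, which is why clear signing requires real engineering effort rather than a settings toggle.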

Trusting the web interface, all three signers clicked "Approve" on their physical devices.

The cryptography was flawless. The signatures were valid. But they had just signed a death warrant for $1.5 billion.

🔍 The Auditor’s Perspective: Why Unit Tests and Audits Miss This
As a security researcher and smart contract auditor, I constantly see projects spending hundreds of thousands of dollars on Solidity audits, fuzzing, and formal verification. They write exhaustive unit test suites with 100% line coverage.

Yet, almost all of these efforts fail to catch a vulnerability like the Bybit exploit. Why?

Because traditional smart contract audits focus on internal invariants—rules like:

"The total supply of shares must always equal the sum of user balances."

"No one should be able to withdraw more than their deposited collateral minus debt."

These are mathematical rules. They assume that if a transaction is cryptographically signed by an authorized key, it is intended and correct. The smart contract acts as a "dumb machine": it checks if 1+1=2 and if the signature is valid. It has no context about the human intent behind that signature.
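For concreteness, here is the shape of such an invariant expressed as a check (a minimal sketch; the state field names are illustrative). Notice that it validates bookkeeping arithmetic only; it has no way to see whether the signer intended the transaction that produced the state:

```javascript
// Sketch of a classic audit invariant: total supply must equal the sum of
// balances. It checks state arithmetic, not human intent.
function checkSupplyInvariant(state) {
  const sumOfBalances = Object.values(state.balances).reduce(
    (acc, bal) => acc + bal,
    0n
  );
  return state.totalSupply === sumOfBalances;
}
```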

When we audit, we must expand our threat modeling beyond the blockchain state. We need to audit the entire system flow, asking:

What is the trust assumption of the frontend?

How is the transaction payload generated, and can it be manipulated before it reaches the signing device?

If our Gnosis Safe is compromised via a rogue frontend, does the protocol have any on-chain circuit breakers (like rate limits or timelocks) to mitigate the damage?
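As a sketch of that last question, a rolling withdrawal cap might look roughly like this (the window bookkeeping is illustrative, not production logic). Even if a rogue frontend obtains valid signatures, the damage is bounded by one day's limit instead of the whole treasury:

```javascript
// Illustrative circuit breaker: cap total outflow per rolling 24h window.
const DAY_MS = 24 * 60 * 60 * 1000;

function makeRateLimiter(cap) {
  const log = []; // { amount, at } entries inside the current window
  return function allow(amount, now) {
    while (log.length && now - log[0].at >= DAY_MS) log.shift(); // expire old
    const spent = log.reduce((acc, e) => acc + e.amount, 0n);
    if (spent + amount > cap) return false; // breaker trips: block transfer
    log.push({ amount, at: now });
    return true;
  };
}
```

On-chain, the same idea is typically enforced by a Safe guard module or a timelock, so the check cannot be bypassed by compromising the UI.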

Case Study: Logic vs. Integration Flaws
To illustrate this gap, look at my recent audit research on the Panoptic protocol: rdin777/Permanent-loss-of-user-funds-Panoptic.

In that project, the vulnerability was purely logical—a rounding error in math that could lead to the permanent loss of user funds. It's a classic smart contract bug. We can catch those with fuzzing, math invariant checks, and static analysis.

But the Bybit hack belongs to a completely different class of bugs: Integration & Infrastructure Vulnerabilities. You can have the most mathematically secure contracts in the world (like Panoptic or Gnosis Safe), but if your user-facing inputs are compromised, the secure system will execute the malicious command perfectly.

🛠️ Actionable Security Recommendations for Developers & Custodians
If we want to stop this from happening to our projects, we must change how we handle high-value transactions. Here are my top recommendations:

  1. Kill "Blind Signing" Forever. Hardware wallets should never be used to blindly sign raw hashes for institutional movements.

The Fix: Implement custom transaction decoders (like Ledger's Clear Signing or custom metadata providers) so that the physical device screen explicitly decodes the contract call and displays the actual destination address and amount before the user presses the physical button.
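The principle reduces to a small gate: the device refuses to sign anything it cannot fully decode into human-readable fields. In this sketch, `decode` is an assumed helper that returns `{ to, amount }` or `null` for unrecognized payloads:

```javascript
// Sketch of the clear-signing rule: no decode, no signature.
// `decode` is a hypothetical calldata parser supplied by the integration.
function renderForDevice(calldata, decode) {
  const parsed = decode(calldata);
  if (!parsed) {
    return { sign: false, screen: "REJECT: unrecognized payload" };
  }
  return { sign: true, screen: `Send ${parsed.amount} wei to ${parsed.to}` };
}
```

Had the Bybit signers' devices enforced this rule, the swapped destination address would have appeared on the trusted screen, or the signature would simply have been refused.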

  2. Implement "In-Flight" Transaction Simulation. Never trust the frontend UI to tell you what a transaction does.

The Fix: Before sending a signature to the network, routing infrastructure must run a localized, independent dry-run (using Tenderly, Foundry's anvil, or custom simulation APIs) to verify exactly how the state changes. If the simulation shows funds leaving to an unknown address, block the execution immediately.
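A minimal version of that gate, with the simulator stubbed out (in practice the `simulate` call would hit Tenderly or a local anvil fork), might look like this. The allowlist and the shape of the simulation result are assumptions for the sketch:

```javascript
// Sketch of a pre-signature simulation gate: block any transaction whose
// simulated value flows reach an address outside a hard-coded allowlist.
const ALLOWED_DESTINATIONS = new Set(["0xwarm-wallet"]); // illustrative

function shouldBlock(tx, simulate) {
  const { transfers } = simulate(tx); // assumed shape: [{ to, amount }]
  return transfers.some((t) => !ALLOWED_DESTINATIONS.has(t.to));
}
```

Crucially, this check must run on infrastructure independent of the web frontend; a gate served from the same compromised bundle can be patched out by the same attacker.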

  3. Strict Supply Chain and Content Security Policies (CSP). If your dApp interacts with smart contracts, your frontend security must match your smart contract audits.

The Fix:

Use strict Subresource Integrity (SRI) hashes for all loaded scripts.

Implement rigorous Content Security Policies (CSP) to prevent unauthorized scripts from executing or exfiltrating data.

Transition from simple S3 bucket hosting to decentralized, immutable hosting (like IPFS/Arweave accompanied by ENS) for highly sensitive admin interfaces.

📌 Conclusion
The Bybit hack proved that on-chain security is only as strong as its off-chain gateway. It doesn't matter if your smart contracts are verified, audited, and mathematically flawless if your front-end can be manipulated to lie to your users or your admins.

As developers and auditors, we must treat the user interface as an active threat vector. Stop blind signing. Simulate every state change. Secure your supply chain.

What are your thoughts on frontend security in Web3? Have you implemented clear signing or transaction simulations in your workflows yet? Let's discuss in the comments below!

https://github.com/rdin777/Permanent-loss-of-user-funds-Panoptic

Top comments (1)

Rahul S

The infrastructure budget asymmetry here is staggering. $1.5B protected by multi-sig, formal verification, hardware wallets — accessed through a React bundle in an S3 bucket with whatever IAM permissions someone configured during initial deployment. Most Web3 teams spend six figures on Solidity audits and close to nothing on the infrastructure serving the signing interface. The attack didn't need a zero-day or a novel technique — it needed write access to a static asset bucket, which is one stolen AWS credential away. You don't even need to compromise S3 directly if the CI/CD pipeline deploying to it uses a long-lived access key stored in a GitHub Actions secret. The real lesson isn't 'secure your frontend' in the abstract — it's that the security investment between on-chain and off-chain components was probably off by two orders of magnitude, and attackers will always find the seam.