
Erick Fernandez for Extropy.IO

Beyond the Code: Advanced Human-Led Techniques in DeFi Security Auditing

(This is the second article in our series on protocol security. In Article 1: The 3 Subtle Bugs We Found, we showed you real-world examples of critical bugs we've uncovered. Now, we'll explain the expert-led methodology we use to find them.)

The Human Element in Protocol Security

The evolution of Decentralized Finance (DeFi) has been paralleled by an evolution in security practices. Automated analysis tools and, more recently, artificial intelligence, have become adept at identifying known vulnerabilities, common bug patterns, and code-level errors. These tools excel at verification: checking that a smart contract's code is free from a predefined list of common flaws.

However, this addresses only one facet of security. The most profound risks in DeFi protocols often do not lie in syntactically incorrect code but in a perfectly correct implementation of a flawed or exploitable idea. This is the domain of validation: assessing whether a protocol's business logic, economic design, and architectural assumptions are sound, secure, and robust against a creative, motivated adversary.

Business logic flaws are not merely code errors; they represent a deviation from a protocol's intended functionality. More critically, they can manifest as intended functionality that produces an unintended and exploitable economic outcome. Automated tools, which lack an understanding of intent, cannot reliably find such flaws.

Recent security audits of DeFi projects illustrate this distinction. An audit may identify a critical vulnerability, not in a complex algorithm, but in the system's fundamental reliance on a centralised backend. This is not a code bug; it is an architectural flaw creating a single point of failure. Similarly, an audit might flag a protocol's handling of non-standard rebase tokens as a high-risk logical flaw. The code for handling standard tokens may be perfect, but the economic assumption that all tokens behave identically is a critical, human-found vulnerability.

This demonstrates that the role of the expert human auditor has evolved. It has shifted from being a code-level "bug catcher" to a systemic "risk analyst." While automated tools check if the code is correct, the human auditor checks if the design is sound. It is this distinction—verification versus validation—that establishes the indispensable and continuing value of human-led security analysis.

The Auditor's Mindset: A Prerequisite for Analysis

The single most important tool in an advanced security audit is not software, but a specialised mode of thought: the adversarial mindset. This mindset represents the fundamental difference between a developer and an auditor. A developer's primary goal is constructive—"How do I build this to make it work?" An auditor's primary goal is deconstructive—"How do I break this?"

This is not simple pessimism but a structured, creative, and adversarial application of systems thinking. The auditor views the protocol not as an isolated piece of code, but as a single, exploitable node within a complex, interconnected, and inherently hostile financial network. Every function, every variable, and every line of logic is treated as a potential attack surface.

This approach is sometimes formalised as "red team testing." Unlike a standard audit that follows a systematic checklist of known vulnerabilities (e.g., reentrancy, integer overflows), this mindset involves actively probing for creative attack vectors and novel attack patterns that are not yet on any checklist. The auditor's goal is to invent new ways to exploit the protocol's specific, unique logic.

This difference in perspective is critical for a project's security. An internal development team, no matter how skilled, is biased towards a constructive viewpoint. They are, by necessity, focused on the "happy path" where a user acts as intended. The auditor's sole focus is the "unhappy path," modelling a malicious actor who will do the unexpected, the illogical, and the "stupid" if it results in a profit.

The following table articulates the practical differences in perspective:

| Perspective | Developer (The "Builder") | Human Auditor (The "Attacker") |
| --- | --- | --- |
| Primary Goal | Functionality: Make the system work as intended. | Security: Make the system fail in unintended ways. |
| Core Question | "Does this code meet the requirements?" | "How can these requirements be abused?" |
| View of Code | A solution to a problem. | A collection of assumptions to be broken. |
| View of Failure | A bug to be fixed. | An "exploit" or "attack vector" to be proven. |
| Approach | Linear and constructive. | Holistic and "systems-based." |
| External Code | A "dependency" to be used. | A "composability risk" or attack vector. |

This adversarial mindset is a creative and hypothesis-driven process that seeks to find unknown unknowns. While an AI can be trained on a library of all past exploits, the human auditor's value is in discovering the next class of exploit, one that is entirely bespoke to the protocol's novel design.

Technique 1: Economic and Game-Theoretic Modelling

The first advanced, human-led technique is to move beyond code and audit the protocol's economy. A DeFi protocol is a micro-economy defined by code, and its rules—interest rates, reward mechanisms, liquidation parameters—are its monetary policy. The auditor's job is to assess if this economy can be manipulated into insolvency.

This technique involves modelling the protocol as a set of rules in a game. The auditor then "plays" this game as a profit-maximising, adversarial actor. The goal is to find scenarios where the protocol can be drained of value while operating completely within its own rules. This is distinct from a code audit, which just checks if the rules are written correctly; this technique checks if the rules themselves are safe.

Application 1: Flash Loan Attack Simulation

A primary tool for this modelling is the simulation of a flash loan. A flash loan provides an attacker with near-infinite capital for the duration of a single transaction. The human auditor asks: "If I had one billion dollars for one second, what could I do?"

This model allows them to test for:

  • Oracle Manipulation: Using the borrowed capital to manipulate the price of an asset on a decentralized exchange (DEX), thereby tricking the protocol's price oracle.
  • Incentive Exploitation: Using the capital to perform an action at a massive scale that exploits a flawed reward or incentive mechanism.
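The oracle-manipulation scenario can be sketched with a toy constant-product AMM whose spot price doubles as a naive oracle. This is a minimal Python model; the pool sizes and loan amount are illustrative assumptions, not figures from any audited protocol:

```python
class ConstantProductPool:
    """Toy x * y = k AMM; its instantaneous reserve ratio is the 'oracle'."""

    def __init__(self, reserve_token, reserve_usd):
        self.reserve_token = reserve_token
        self.reserve_usd = reserve_usd

    def spot_price(self):
        # Naive oracle read: current reserve ratio, trivially manipulable
        return self.reserve_usd / self.reserve_token

    def swap_usd_for_token(self, usd_in):
        # Constant-product swap: k stays fixed, reserves rebalance
        k = self.reserve_token * self.reserve_usd
        self.reserve_usd += usd_in
        new_reserve_token = k / self.reserve_usd
        out = self.reserve_token - new_reserve_token
        self.reserve_token = new_reserve_token
        return out


pool = ConstantProductPool(1_000_000, 1_000_000)  # spot price = 1.00
price_before = pool.spot_price()

# Flash-loan step: dump $10M of borrowed capital into the pool
pool.swap_usd_for_token(10_000_000)
price_after = pool.spot_price()  # the "oracle" now reports ~121x the real price
```

In a real attack the price spike and a borrow against now-overvalued collateral happen in the same transaction, before the flash loan is repaid. This is why protocols prefer time-weighted average price (TWAP) oracles over single spot reads.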

Application 2: Non-Standard Asset Simulation (The "Weird ERC20" Problem)

This is a more subtle and critical application of economic modelling. DeFi protocols, for the sake of simplicity, often operate on a powerful but flawed assumption: that all tokens of a given standard (e.g., ERC20) behave identically.

Human auditors, through experience, know this is false. The "Weird ERC20" problem refers to tokens with non-standard implementations, such as:

  • Fee-on-transfer tokens: A portion of every transfer is taken as a fee.
  • Rebase tokens: The total supply and a user's balance can change automatically.
  • Upgradable or pausable tokens: A central administrator can change the token's logic.

An auditor will meticulously "fuzz" the protocol's logic with these non-standard behaviours. For example, a recent audit of the PWN protocol identified a risk related to the integration of rebase tokens. The protocol's logic involved caching a user's balance. The auditor's economic model showed that when a rebase event occurred, the token's true balance would "desynchronise" from the protocol's cached balance. This discrepancy could lead to critical accounting errors, stuck user funds, or logical failures in the loan-tendering process.

An automated tool, which assumes a standard ERC20, would never find this vulnerability. It is a flaw in an economic assumption, found only by a human auditor asking, "What happens if this token is not what it seems?"
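The desynchronisation between a cached balance and a rebasing token can be shown in a few lines. The `RebaseToken` and `Vault` classes below are hypothetical illustrations of the failure mode, not the audited protocol's actual code:

```python
class RebaseToken:
    """Token whose balances can change for every holder at once."""

    def __init__(self):
        self.balances = {}

    def mint(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def rebase(self, factor):
        # A rebase event scales every holder's balance simultaneously
        for who in self.balances:
            self.balances[who] *= factor

    def transfer(self, src, dst, amount):
        assert self.balances.get(src, 0) >= amount, "insufficient balance"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class Vault:
    """Flawed design: caches the deposited amount instead of re-reading it."""

    def __init__(self, token):
        self.token = token
        self.cached = {}

    def deposit(self, user, amount):
        self.token.transfer(user, "vault", amount)
        self.cached[user] = self.cached.get(user, 0) + amount

    def withdraw(self, user):
        # After a negative rebase the vault no longer holds this much
        amount = self.cached.pop(user)
        self.token.transfer("vault", user, amount)


token = RebaseToken()
token.mint("alice", 100)
vault = Vault(token)
vault.deposit("alice", 100)

token.rebase(0.5)  # negative rebase: the vault's true balance halves to 50
try:
    vault.withdraw("alice")  # tries to pay out the stale cached 100
    stuck = False
except AssertionError:
    stuck = True  # withdrawal reverts: user funds are stuck
```

The cached value of 100 and the true balance of 50 can never reconcile, which is exactly the class of accounting error the auditor's economic model surfaces.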

Technique 2: Composability and Integration Risk Analysis

DeFi protocols are often described as "money legos." Their core strength is composability—the ability to snap together different, independent protocols to create a new, more complex service.

This composability, however, is a primary source of systemic risk. The security of a protocol is no longer self-contained; it becomes dependent on the security of every "lego" it touches. A human auditor must therefore perform a composability and integration risk analysis, auditing the "blast radius" of every external component.

This technique involves manually mapping all external dependencies, creating a "trust map" of the protocol. This map reveals two main categories of risk: risk from centralisation and risk from composition.

Application 1: Mapping Centralisation Risk (Trust Assumptions)

This analysis audits dependencies on single, privileged actors or components.

  • Privileged Roles: The auditor meticulously checks for any "privileged roles," such as an owner or admin address. They then ask critical questions: Can this address drain all user funds? Can it pause the contract indefinitely? Can it change critical parameters (like interest rates or fees) without warning?
  • Centralised Components: The auditor maps dependencies on off-chain infrastructure. A security review of the ZkNoid project, for example, identified a risk in its reliance on a centralised backend. This is not a code flaw, but an architectural flaw. The auditor's analysis revealed that this central component creates a "single point of failure."

Application 2: Mapping Composability Risk (External Logic)

This analysis audits dependencies on other (often decentralised) protocols. The flaw may not be in the protocol's code at all, but in a dangerous interaction between the protocol and its dependency.

A clear example of this was a critical "Inflation Attack" risk identified during an audit of PWN Adapters.

  1. The Integration: The protocol's code correctly implemented an ERC4626Adapter to interact with ERC-4626 tokenised vaults.
  2. The Flaw: The ERC-4626 standard itself contains a subtle, known (to experts) mathematical vulnerability.
  3. The Attack: An attacker can "front-run" the very first deposit into a new, empty vault. By sending a tiny, "dust" amount of the asset as a donation to the vault just moments before the first real deposit, the attacker can manipulate the vault's internal share-price calculation.
  4. The Result: When the legitimate user's large deposit arrives one second later, the manipulated math and a rounding error cause the user to receive zero shares. The user's entire deposit is lost.
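The share-price math behind these four steps can be reproduced with a simplified Python model of ERC-4626 share minting; the integer division mirrors Solidity's rounding, and the amounts are illustrative:

```python
class Vault4626:
    """Simplified ERC-4626-style vault: shares = assets * supply / totalAssets."""

    def __init__(self):
        self.total_assets = 0
        self.total_shares = 0
        self.shares = {}

    def deposit(self, user, assets):
        if self.total_shares == 0:
            minted = assets  # first depositor sets the share price
        else:
            # Integer division is where the rounding error lives
            minted = assets * self.total_shares // self.total_assets
        self.total_assets += assets
        self.total_shares += minted
        self.shares[user] = self.shares.get(user, 0) + minted
        return minted

    def donate(self, assets):
        # Direct token transfer to the vault, bypassing deposit()
        self.total_assets += assets


vault = Vault4626()
# (1) Attacker front-runs the first real deposit with 1 wei...
vault.deposit("attacker", 1)
# (2) ...then "donates" a large amount, inflating the price of that one share
vault.donate(10_000 * 10**18)
# (3) The victim's 5,000-token deposit now rounds down to zero shares
victim_shares = vault.deposit("victim", 5_000 * 10**18)
```

The attacker, holding the vault's only share, can now redeem the entire balance, victim's deposit included. Common mitigations are minting "dead shares" at vault creation or adding a virtual assets/shares offset, as OpenZeppelin's ERC-4626 implementation does.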

This vulnerability is impossible to find by looking only at the protocol's own code. It requires a human auditor with deep domain experience who, upon seeing the ERC4626Adapter integration, immediately knows to model this specific, sophisticated attack vector.

Technique 3: State and Temporal Logic Analysis

The third technique involves analysing how the timing and order of operations can be manipulated. In DeFi, when a function is called is often more important than what it does. Smart contracts are state machines, and a human auditor hunts for edge cases where the protocol's internal state can be corrupted.

The key question is: "What happens if I call functions in an unexpected order, or with minimal time passing between them?"

Application 1: Timing and State Flaws (Temporal Logic)

This analysis challenges the protocol's assumptions about time. A recent audit of the ZkNoid lottery protocol uncovered a flaw in its round-validation logic. The auditor hypothesised a timing-based edge case and found that if the checkCurrentRound() function was called within a specific, narrow window during a slot transition, the logic was ambiguous enough to validate multiple rounds simultaneously. This is a classic flaw in temporal logic, where the state machine becomes vulnerable during a transition.
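The class of bug is easy to illustrate. In the hypothetical sketch below (a reconstruction of the failure mode, not ZkNoid's actual code), a round-membership check with an inclusive upper bound makes the boundary slot validate against two rounds at once:

```python
ROUND_LENGTH = 100  # slots per round (illustrative)


def is_in_round(slot, round_id):
    """Ambiguous boundary check: both ends inclusive."""
    start = round_id * ROUND_LENGTH
    end = start + ROUND_LENGTH
    return start <= slot <= end  # bug: should be `start <= slot < end`


# During the narrow window at a slot transition, two rounds validate at once:
boundary_slot = 100
round0_valid = is_in_round(boundary_slot, 0)  # True
round1_valid = is_in_round(boundary_slot, 1)  # True
```

Any slot that lands exactly on a round boundary satisfies both predicates, which is precisely the "state machine vulnerable during a transition" pattern the auditor hypothesised.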

Application 2: Front-Running and Order Manipulation

This is one of the most common and damaging temporal attacks. Because a blockchain's transaction queue (the "mempool") is public, an attacker can see a user's transaction before it is confirmed.

  • The "Sandwich Attack": An attacker sees a user's large "buy" order. The attacker (1) front-runs the user by buying the same token, (2) lets the user's large purchase execute, pushing the price up, and then (3) "back-runs" the user by selling the token back at the new, higher price.
  • The "Zero-Time" Exploit: An audit of the PWN protocol identified a logic flaw based on a similar assumption. A developer may assume a reasonable amount of time will pass between a user accepting a loan and repaying it. The auditor challenged this, asking "what if the time is zero?" The analysis found that a malicious user could accept a loan offer and repay it in the same transaction. Because no time had passed, no interest would accrue. This "griefing" attack would not steal funds, but it would cost the legitimate lender a transaction fee (gas) for no purpose, effectively censoring them from the protocol.
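The three steps of the sandwich attack can be quantified with the same kind of toy constant-product pool. This is a fee-less model with illustrative sizes; real pools charge swap fees that shrink, but rarely eliminate, the attacker's profit:

```python
class Pool:
    """Toy constant-product pool: x (token) and y (USD) reserves."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def buy_token(self, usd_in):
        k = self.x * self.y
        self.y += usd_in
        out = self.x - k / self.y
        self.x -= out
        return out

    def sell_token(self, token_in):
        k = self.x * self.y
        self.x += token_in
        out = self.y - k / self.x
        self.y -= out
        return out


pool = Pool(1_000_000, 1_000_000)

# (1) Attacker front-runs the victim with a $50k buy
atk_tokens = pool.buy_token(50_000)
# (2) Victim's large $500k buy executes, pushing the price up
pool.buy_token(500_000)
# (3) Attacker back-runs, selling into the inflated price
atk_usd_out = pool.sell_token(atk_tokens)

profit = atk_usd_out - 50_000  # positive: extracted from the victim's slippage
```

The attacker's profit comes directly out of the victim's execution price, which is why slippage limits and private transaction relays are the standard defences.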

The Future: Scaling Human Expertise with Knowledge Graphs

The human-led, systems-based techniques described above are potent, but they are also bespoke, time-consuming, and scale poorly. The core challenge for the security industry is to codify and scale this expert intuition.

This is the promise of Knowledge Graphs (KGs). A knowledge graph represents data not as a table, but as a network of entities and their relationships. For DeFi, this is a perfect fit. Instead of parsing code (which is what automated tools do), a DeFi Knowledge Graph maps the ecosystem.

This research is being actively developed to help agents—both human and, eventually, AI—understand complex exploit patterns. A DeFi-specific KG connects:

  • Entities: Protocols, tokens, smart contracts, wallets, and known exploits.
  • Relationships: "is-a-fork-of," "integrates-with," "was-exploited-by," "admin-key-is-held-by."

This KG, once populated, will for the first time allow an agent to quantify systemic risk. It scales the human auditor's "systems thinking" by allowing them to ask complex, high-level questions:

  • "Show me all protocols that integrate the ERC-4626 standard and do not have a known mitigation for the inflation attack."
  • "This new protocol just launched. Is it a fork of a protocol that has a past incident involving flash loans?"
  • "Map the dependency chain for this protocol. Which other protocols would fail if this one is exploited?"
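A toy version of the first two queries needs nothing more than a set of (subject, relation, object) triples. All protocol names below are hypothetical placeholders; a production knowledge graph would use a graph database and a curated schema:

```python
# Toy DeFi knowledge graph as (subject, relation, object) triples.
# Every protocol name here is a hypothetical placeholder.
triples = [
    ("LendCo",  "integrates-with",  "ERC-4626"),
    ("YieldCo", "integrates-with",  "ERC-4626"),
    ("YieldCo", "has-mitigation",   "inflation-attack"),
    ("SwapCo",  "is-a-fork-of",     "LendCo"),
    ("LendCo",  "was-exploited-by", "flash-loan-incident"),
]


def subjects(relation, obj):
    """All entities standing in `relation` to `obj`."""
    return {s for s, r, o in triples if r == relation and o == obj}


# Query 1: ERC-4626 integrators with no known inflation-attack mitigation
at_risk = (subjects("integrates-with", "ERC-4626")
           - subjects("has-mitigation", "inflation-attack"))

# Query 2: is the newly launched SwapCo a fork of a protocol with a
# past flash-loan incident?
parents = {o for s, r, o in triples if s == "SwapCo" and r == "is-a-fork-of"}
risky_fork = bool(parents & subjects("was-exploited-by", "flash-loan-incident"))
```

Even this crude relational view answers questions that no amount of single-contract code analysis can, because the risk lives in the edges between protocols, not in any one node.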

The Knowledge Graph is the bridge between human intuition and AI analysis. It is not an AI auditor. It is a tool for codifying the systemic, contextual knowledge that only human auditors currently possess.

Conclusion: The Enduring Value of the Adversarial Human

This report has demonstrated that an audit by an expert human is not a simple "code review." It is a multi-faceted, adversarial analysis of a protocol's economic design, architectural soundness, and systemic dependencies.

We have explored three key human-led techniques:

  1. Economic and Game-Theoretic Modelling: Treating the protocol as a game to find flaws in its incentive structure, such as the vulnerability to non-standard rebase tokens.
  2. Composability and Integration Risk Analysis: Mapping all external dependencies to find risks in centralisation (like an insecure backend) or unsafe integrations (like the ERC-4626 inflation attack).
  3. State and Temporal Logic Analysis: Modelling the protocol's logic over time to find state-machine flaws and timing attacks, such as the zero-time loan griefing exploit.

Automated tools are for verification: checking if the code does what it says. Human auditors are for validation: checking if what the code says is economically sound, architecturally robust, and safe against a creative, adversarial actor. As long as DeFi protocols are built on novel economic logic and complex human incentives, the adversarial, systems-based human auditor will remain the ultimate, indispensable defence.

This research into Knowledge Graphs is the key to scaling human expertise. In our final article, "Scaling the Adversarial Mindset," we'll explore exactly how we are building these tools to create a future of pre-emptive, continuous security.


Request an Audit Consultation or visit Extropy Audits

Originally published on: security.extropy.io
