Every few months, there is another headline. A protocol gets exploited. Liquidity disappears. Users panic. Founders post apology threads.
After a while, it stops feeling shocking.
What makes it frustrating is that most of these exploits are not the result of impossible mathematics or some mysterious cryptographic breakthrough. They are usually ordinary development mistakes. The kind that happens when teams move quickly, reuse code, or assume something is safe because it worked somewhere else.
Blockchain technology itself is not fragile. Smart contracts are deterministic. The weakness almost always comes from human decisions.
Let us walk through seven common mistakes that continue to cost Web3 millions.
1. Reentrancy still shows up
Reentrancy is not new. The exploit on The DAO made that painfully clear years ago. Yet it still appears in audits today.
The issue is simple in theory. A contract sends funds to an external address before updating its internal state. If that external address is malicious, it can call back into the contract before the balance changes. Funds can be withdrawn multiple times in a single transaction.
The pattern is well known. Update the state first. Interact with external contracts last. Use reentrancy guards where appropriate.
And still, it slips in. Sometimes during refactoring. Sometimes when someone rearranges logic and forgets why the original order mattered. Security failures rarely look dramatic while you are writing them.
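The pattern is easier to see in a toy model. The sketch below simulates it in Python rather than Solidity: a vault that pays out before updating its ledger can be drained by a recipient that calls back in, while the checks-effects-interactions ordering stops the attack. The class and function names are illustrative, not from any real codebase.

```python
# Toy Python model of reentrancy. The vault performs the "external call"
# (the receive callback) BEFORE zeroing the user's balance.

class Vault:
    def __init__(self, balances):
        self.balances = dict(balances)   # user -> deposited amount
        self.pool = sum(balances.values())

    def withdraw(self, user, receive):
        amount = self.balances[user]
        if amount == 0:
            return
        self.pool -= amount
        receive(amount)                  # external interaction FIRST (the bug)
        self.balances[user] = 0          # state update LAST

class SafeVault(Vault):
    def withdraw(self, user, receive):
        amount = self.balances[user]
        if amount == 0:
            return
        self.balances[user] = 0          # checks-effects-interactions:
        self.pool -= amount              # update state first...
        receive(amount)                  # ...interact with the outside last

def attack(vault, user):
    stolen = []
    def receive(amount):
        stolen.append(amount)
        if vault.pool >= amount:         # re-enter while our balance is not yet zeroed
            vault.withdraw(user, receive)
    vault.withdraw(user, receive)
    return sum(stolen)
```

With a 10-unit deposit, the attacker drains the entire 100-unit pool from `Vault`, but only recovers their own 10 units from `SafeVault`.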
2. Arithmetic assumptions
Solidity now protects against overflow and underflow by default, which removes a major historical issue. But arithmetic problems did not disappear. They simply evolved.
Developers still assume values will stay within certain ranges. They assume a subtraction will never underflow because some other function checks it. They assume reward formulas cannot produce extreme outcomes.
Attackers do not make those assumptions. They test boundaries. They push numbers to edges no one thought about.
Financial logic requires paranoia. If a variable can reach an extreme value, someone will try to make it happen.
3. Weak access control
It is surprising how often exploits trace back to a missing restriction.
An administrative function is left public. A role check is misconfigured. An initialization function can be called twice. A privileged address is not properly protected.
These are not glamorous vulnerabilities. They are not technically complex. They are permission problems.
Access control deserves the same attention as cryptography. Libraries like OpenZeppelin provide solid patterns, but they only work if developers apply them carefully and consistently. One exposed function can undo months of engineering work.
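Two of the failures above, a missing owner check and a re-callable initializer, can be sketched in a few lines. This is a Python analogue of patterns like OpenZeppelin's `onlyOwner` modifier, not its actual implementation; all names here are illustrative.

```python
# Minimal sketch of owner-gated functions and an initialize-once guard.

from functools import wraps

def only_owner(fn):
    @wraps(fn)
    def guarded(self, caller, *args, **kwargs):
        if caller != self.owner:
            raise PermissionError("caller is not the owner")
        return fn(self, caller, *args, **kwargs)
    return guarded

class Token:
    def __init__(self):
        self.owner = None
        self.paused = False
        self.initialized = False

    def initialize(self, caller):
        # guard against the classic "initializer can be called twice" bug
        if self.initialized:
            raise RuntimeError("already initialized")
        self.owner = caller
        self.initialized = True

    @only_owner
    def pause(self, caller):
        self.paused = True
```

Forget the `@only_owner` line on one function, or the `initialized` check, and anyone can pause the token or seize ownership. The fix is one line; the damage from its absence is not.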
4. Oracle manipulation
Many DeFi protocols depend on external price feeds. If that price can be manipulated even briefly, the consequences can be severe.
Flash loans allow attackers to borrow large amounts of capital within a single transaction. With that capital, they can distort prices on low liquidity pools, trigger liquidations or flawed calculations, and exit with profit before the system corrects itself.
The contract may function exactly as written. The flaw is in trusting a price that can be temporarily distorted.
Robust oracle design is not optional in DeFi. It is foundational.
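The manipulation above can be demonstrated with a toy constant-product pool (x * y = k). A protocol that reads the instantaneous spot price from pool reserves can be fed an arbitrary price by one large, flash-loaned swap inside the same transaction. The numbers below are illustrative.

```python
# Toy constant-product AMM. Reading spot price from reserves is the flaw.

class Pool:
    def __init__(self, token_reserve, usd_reserve):
        self.token = token_reserve
        self.usd = usd_reserve

    def spot_price(self):
        return self.usd / self.token     # USD per token, from raw reserves

    def swap_usd_for_token(self, usd_in):
        k = self.token * self.usd        # invariant before the swap
        self.usd += usd_in
        tokens_out = self.token - k / self.usd
        self.token -= tokens_out
        return tokens_out
```

Starting from 1000 tokens and 1000 USD (price 1.0), a single 9000 USD swap moves the spot price to 100.0: a 100x spike that exists only for the duration of the transaction, yet is fully visible to any contract that trusts the reserves. Time-weighted prices or independent feeds exist precisely to blunt this.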
5. Ignoring transaction ordering
Every pending transaction is visible before it is confirmed. That transparency creates opportunity for front-running.
Bots monitor the mempool for profitable trades. If they see one, they can insert their own transaction before it, or surround it. Users end up with worse prices, while the bot extracts value.
Developers sometimes design contracts without considering adversarial ordering. They assume transactions will execute in a neutral environment. That assumption does not hold in public networks.
If value can be extracted by reordering transactions, someone will attempt it.
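A sandwich on the same kind of constant-product pool makes the extraction concrete: the bot buys just before the victim, lets the victim buy at the worsened price, then sells straight back. This is a simplified Python sketch with made-up numbers and no fees.

```python
# Sandwich sketch: front-run, let the victim trade, back-run.

class AMM:
    def __init__(self, token, usd):
        self.token, self.usd = token, usd
        self.k = token * usd             # constant-product invariant

    def buy_token(self, usd_in):
        self.usd += usd_in
        out = self.token - self.k / self.usd
        self.token -= out
        return out

    def sell_token(self, token_in):
        self.token += token_in
        out = self.usd - self.k / self.token
        self.usd -= out
        return out

def victim_alone():
    p = AMM(1000, 1000)
    return p.buy_token(100)              # tokens received with neutral ordering

def victim_sandwiched():
    p = AMM(1000, 1000)
    bot_tokens = p.buy_token(100)        # bot front-runs the visible trade
    got = p.buy_token(100)               # victim executes at a worse price
    profit = p.sell_token(bot_tokens) - 100   # bot back-runs and pockets the gap
    return got, profit
```

The victim receives fewer tokens than in the neutral ordering, and the bot exits with a profit funded entirely by that difference. Slippage limits and private transaction relays are mitigations, not afterthoughts.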
6. Business logic flaws
These are the hardest to detect because the code itself works.
The vulnerability is in the design.
Maybe the incentive structure allows someone to accumulate rewards disproportionately. Maybe governance tokens can be acquired cheaply before a proposal. Maybe liquidity incentives can be gamed in cycles.
Static analysis tools will not warn you about flawed economics. This requires modeling behavior, not just checking syntax.
Security in Web3 is partly software engineering and partly game theory.
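One of the reward scenarios above can be made concrete. The sketch below is a hypothetical snapshot-based reward pool: rewards go to whoever holds a balance at the instant of the snapshot, so a just-in-time deposit captures a full period's rewards with almost no exposure. Every line of it "works"; the flaw is the design.

```python
# Snapshot-based rewards: correct code, gameable economics.

class RewardPool:
    def __init__(self, reward_per_period):
        self.balances = {}
        self.reward = reward_per_period

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        return self.balances.pop(user, 0)

    def distribute(self):
        # pro-rata by balance AT THE SNAPSHOT, ignoring how long it was held
        total = sum(self.balances.values())
        return {u: self.reward * b / total for u, b in self.balances.items()}
```

An honest user who held 100 units all period splits the reward with an attacker who deposited 900 units one block before the snapshot and withdraws one block after. Weighting rewards by time held, or locking deposits across a period, changes the game.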
7. Inadequate testing and overconfidence
Many teams test extensively. Some do not. In both cases, overconfidence can be dangerous.
Tests often focus on what should happen, not on what should never happen. Edge cases are sometimes ignored because they seem unrealistic.
Audits help, but they are not magic shields. They reduce risk. They do not eliminate it.
Smart contracts manage real money in adversarial environments. The testing mindset must reflect that reality.
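Testing for "what should never happen" usually means testing invariants under randomized, adversarial sequences rather than single happy paths. A minimal sketch of that mindset, on a toy ledger with a hypothetical conservation invariant:

```python
# Invariant fuzzing: random operations, with properties that must ALWAYS hold.

import random

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

def fuzz_invariants(steps=1000, seed=0):
    rng = random.Random(seed)
    ledger = Ledger({"a": 50, "b": 50})
    for _ in range(steps):
        src, dst = rng.sample(["a", "b"], 2)
        try:
            ledger.transfer(src, dst, rng.randint(0, 100))
        except ValueError:
            pass                                  # rejected moves are fine
        # these must hold after EVERY operation, valid or rejected
        assert sum(ledger.balances.values()) == 100
        assert all(v >= 0 for v in ledger.balances.values())
    return True
```

Frameworks like Foundry's invariant testing or Echidna apply this idea directly to Solidity; the point here is only the shape of the test: properties checked after every state transition, not assertions on one expected path.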
Why do these mistakes repeat?
Most developers in Web3 are intelligent and capable. The repetition of these vulnerabilities is not about incompetence. It is about incentives and pressure.
Teams race to launch. Competitors move fast. Investors expect progress. Code is copied from previous projects that may have hidden flaws.
In Web2, a serious bug can be patched. In Web3, a serious bug can empty a treasury in minutes.
Immutability raises the stakes.
A practical mindset
Before deployment, it helps to ask uncomfortable questions.
If I had unlimited capital for one transaction, how would I try to exploit this system?
If I could reorder transactions, what behavior becomes profitable? If a privileged key were compromised, how much damage could be done?
Thinking this way feels pessimistic, but it is realistic. Public blockchains are adversarial by design.
Final reflection
Smart contracts remove intermediaries, but they do not remove responsibility. If anything, they increase it.
Every exploit damages trust, not just in a protocol but in the ecosystem as a whole.
The technology is powerful. The math is sound. The blockchain works as expected.
The weak point is almost always the assumptions we make while writing code.
That is uncomfortable to admit, but it is also empowering because assumptions can be questioned, code can be reviewed, and incentives can be stress tested.
And the next exploit can be prevented before it becomes another headline.