beefed.ai

Posted on • Originally published at beefed.ai

Gas Optimization for Solidity: Patterns and Tradeoffs

  • How to measure and benchmark gas usage accurately
  • Designing storage layout: packing, types, and access patterns
  • Choosing calldata, memory and ABI strategies to save gas
  • Selective inline assembly and gas-saving micro-patterns
  • Balancing gas savings with security and readability
  • Practical Application: a reproducible checklist and protocol
  • Sources

Gas is the single most tangible constraint on adoption for any EVM app: users notice costs immediately and drop off fast if every interaction feels expensive. Effective Solidity gas optimization is a discipline of measurement, targeted refactors, and disciplined tradeoffs — not a grab-bag of clever one-off tricks.

You’re seeing the operational symptoms: feature rollouts delayed because gas costs exceed budget, users abandoning flows where a single call costs several USD, and PRs blocked by unmeasured performance regressions. The root causes are usually predictable — careless storage layout, copying large arrays into memory repeatedly, heavy on-chain loops, or untested inline optimizations — but teams fix the wrong lines of code because they lack robust gas benchmarking and repeatable measurement.

How to measure and benchmark gas usage accurately

Start with instrumentation before refactoring: the single highest-leverage move is adding deterministic gas measurement to your test suite and CI so regressions are visible and attributable. Use unit tests that assert gasUsed for each important function and keep a baseline snapshot for each release candidate. Tooling that I rely on regularly includes Hardhat’s gas reporter, Foundry’s gas reporting, and cloud profilers like Tenderly for visual traces and forking-based comparisons.

Practical patterns:

  • Capture gasUsed from receipts in integration tests and record them as part of CI artifacts. Example with ethers.js:
const tx = await contract.heavyOp(...);
const receipt = await tx.wait();
console.log('gasUsed', receipt.gasUsed.toString());
  • Run tests under a consistent compiler optimization setting and EVM environment. Use mainnet forking for interactions that depend on external contracts so gas behavior is realistic. Hardhat and Foundry both support mainnet forking modes.
  • Gate PRs with a gas delta threshold: if a function’s gas increases beyond X% or Y gas units, fail CI. Store baseline snapshots in the repo (or artifact storage) and compare.
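The delta gate can be sketched as a small script run in CI. The snapshot shape here (a plain map from function name to gas used) and the numbers are assumptions for illustration, not any specific tool’s output format:

```javascript
// Sketch of a CI gas gate. Assumes baseline and current snapshots are
// plain objects mapping function names to gas used (illustrative shape).
function checkGasDeltas(baseline, current, maxDeltaPct) {
  const failures = [];
  for (const [fn, base] of Object.entries(baseline)) {
    const now = current[fn];
    if (now === undefined) continue; // function removed or renamed
    const deltaPct = ((now - base) / base) * 100;
    if (deltaPct > maxDeltaPct) {
      failures.push(`${fn}: ${base} -> ${now} gas (+${deltaPct.toFixed(1)}%)`);
    }
  }
  return failures;
}

// Illustrative snapshots: mint regressed by ~8%, transfer stays in budget.
const baseline = { transfer: 51000, mint: 92000 };
const current = { transfer: 51200, mint: 99500 };
const failures = checkGasDeltas(baseline, current, 5);
console.log(failures);
```

In CI, a nonzero failures list would fail the job (for example by setting the process exit code) and print the offending functions alongside the stored baseline.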

Use gas profilers to find hotspots: a profiler shows where SSTOREs, SLOADs, and copies happen during a call; target the highest-cost 20% of code that produces ~80% of the cost. For stack traces and per-op insights, map profiler output to source lines and tests.

Designing storage layout: packing, types, and access patterns

Storage dominates cost. The core principle is: minimize the number of storage slots touched and the number of writes. Reordering fields to enable storage packing often yields the biggest payback with the least semantic change.

Example — before and after packing:

// BEFORE: 3 slots. The uint256 between the small fields blocks packing.
struct UserBefore {
    bool active;     // slot 0 (alone)
    uint256 id;      // slot 1
    uint8 rating;    // slot 2, packed with account
    address account; // slot 2
}

// AFTER: 2 slots. account (20 bytes), rating and active pack into one slot.
struct UserAfter {
    uint256 id;      // slot 0
    address account; // slot 1
    uint8 rating;    // slot 1
    bool active;     // slot 1
}

Small types (uint8, bool, bytes1) pack into 32-byte slots when adjacent, reducing SSTORE/SLOAD slot counts. The Solidity storage layout rules explain packing behavior and ordering implications.

Design notes and tradeoffs:

  • Pack for storage, but prefer uint256 for arithmetic/loop counters used in tight loops to avoid extra masking/moves that the compiler might generate for smaller integer sizes; small types save storage, not necessarily compute.
  • Use mapping for sparse or large collections to avoid linear iteration costs; use arrays only when ordered iteration is required and design removal with swap-and-pop to keep O(1) removals.
  • When you have many boolean flags, a single uint256 bitmap is often far cheaper than many separate bool fields.
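The swap-and-pop removal mentioned above is easy to get wrong around index bookkeeping; a minimal sketch, with illustrative names:

```solidity
// Sketch: O(1) removal from an array by swapping with the last element
// and popping. Ordering is not preserved. Names are illustrative.
contract Registry {
    address[] public members;
    mapping(address => uint256) private indexOf; // member => index + 1 (0 = absent)

    function _remove(address member) internal {
        uint256 idx = indexOf[member];
        require(idx != 0, "not a member");
        address moved = members[members.length - 1];
        members[idx - 1] = moved;   // overwrite the removed slot
        indexOf[moved] = idx;       // moved element inherits the index
        members.pop();
        delete indexOf[member];
    }
}
```

Removing the last element also works here: the swap becomes a self-assignment and the final delete clears the mapping entry.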

Leverage immutable and constant for values that never change at runtime — the compiler inlines these into bytecode and eliminates an SLOAD. That’s a low-risk, high-payoff optimization.
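A minimal sketch, with illustrative names:

```solidity
// Sketch: constant is inlined at compile time, immutable is embedded in
// the deployed bytecode at construction; neither costs an SLOAD at runtime.
contract FeeConfig {
    uint256 public constant MAX_BPS = 10_000; // compile-time constant
    address public immutable treasury;        // set once in the constructor

    constructor(address _treasury) {
        treasury = _treasury;
    }

    function fee(uint256 amount, uint256 bps) external pure returns (uint256) {
        return (amount * bps) / MAX_BPS; // no storage read for MAX_BPS
    }
}
```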

Choosing calldata, memory and ABI strategies to save gas

Choosing between calldata, memory, and storage is a practical lever for gas-efficient contracts. For external entry points that accept large arrays or bytes, prefer calldata because it avoids an automatic copy into memory; this commonly converts a multi-kilobyte copy into a cheap pointer read.

Example:

function batchTransfer(address[] calldata tos, uint256[] calldata amounts) external {
    require(tos.length == amounts.length, "length mismatch");
    for (uint256 i = 0; i < tos.length; ++i) {
        _transfer(tos[i], amounts[i]);
    }
}

Avoid unnecessary copies such as bytes memory b = data;, which triggers a full copy of the calldata into memory. Iterate or slice calldata directly where possible.
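Calldata slices (available since Solidity 0.6) let you read ranges without a memory copy; an illustrative helper:

```solidity
// Sketch: read the first four bytes of a calldata blob via a slice,
// avoiding any copy into memory.
function selectorOf(bytes calldata data) external pure returns (bytes4) {
    require(data.length >= 4, "too short");
    return bytes4(data[:4]);
}
```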

ABI design guidelines:

  • Declare hot entry points external with calldata parameters for large inputs; before Solidity 0.6.9 only external functions could take calldata, and external still documents intent and avoids an implicit copy into memory.
  • If you need to mutate input, copy only the minimal portion into memory rather than the whole argument.
  • Consider packing arguments (e.g., pass a tightly-packed bytes and decode in assembly) for extreme cases, but measure first — encoding/decoding complexity often offsets the gas saved on transmission.
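As an illustration of the packed-arguments idea: the layout below (a 20-byte address packed with a 12-byte amount into one 32-byte word) and the _transfer helper are assumptions for this sketch, and only benchmarks can show whether the decode cost is worth it:

```solidity
// Sketch: decode one tightly packed word instead of two ABI-encoded args.
// Assumed layout: [20-byte address | 12-byte uint96 amount].
function packedTransfer(bytes32 packed) external {
    address to = address(uint160(uint256(packed) >> 96));
    uint96 amount = uint96(uint256(packed)); // low 96 bits
    _transfer(to, uint256(amount));
}
```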

Reference the Solidity data location rules for exact conversion costs and semantics.

Selective inline assembly and gas-saving micro-patterns

Inline assembly can deliver real savings in focused hot paths: batch memory copies, tight parsing of calldata, or bespoke serialization/deserialization. Use it only when you have a solid benchmark showing a meaningful win and when the code can be isolated and covered by tests.

Common micro-optimizations I’ve used safely:

  • unchecked blocks for loop counters and accumulated arithmetic where overflow is provably impossible:
for (uint i = 0; i < n; ) {
    // do work
    unchecked { ++i; }
}

Use unchecked sparingly; the cost saving is real and measurable.

  • Assembly-guided memory copy for large bytes blobs when the Solidity copy is the dominant cost. An illustrative pattern that copies len bytes from one memory pointer to another in 32-byte words:
assembly {
    // dest and src are memory pointers; len is the byte length.
    // Illustrative only: test every boundary condition exhaustively,
    // including lengths that are not multiples of 32.
    for { let i := 0 } lt(i, len) { i := add(i, 32) } {
        mstore(add(dest, i), mload(add(src, i)))
    }
}
  • Avoid reinventing cryptographic primitives in assembly; use the built-in keccak256 function in Solidity, or the keccak256 opcode in assembly, rather than custom hashing.

A strong guardrail: every assembly block must have a post-change test that reproduces the expected gas profile and the exact functional behavior. Document why the assembly is necessary and include a short comment mapping assembly lines to the equivalent high-level operation.

Important: assembly removes language-level safety checks and makes formal reasoning harder. Isolate assembly into tiny helper functions, then audit them thoroughly.

Balancing gas savings with security and readability

A pattern that’s safe today can be a liability tomorrow if it reduces readability or complicates upgrades. Balance is the operational metric: prioritize optimizations that produce large, repeatable wins and keep complex micro-optimizations behind clear abstractions.

How I decide what to optimize:

  • Prioritize changes that remove storage writes or slots, or that avoid copying large calldata arrays into memory.
  • Reject micro-optimizations that make the codebase fragile or that create edge cases for auditors.
  • Require that any assembly or low-level trick has a unit test, a gas benchmark, and a short rationale comment in the codebase.

Static analysis and fuzzing belong in the pipeline: run Slither and a fuzzer (Echidna / Foundry fuzzing strategies) after optimization to catch corner-case miscompilations or reentrancy windows introduced by reordering or packing. Use OpenZeppelin’s well-audited library patterns where appropriate and avoid reimplementing battle-tested primitives unless strictly necessary.

Practical Application: a reproducible checklist and protocol

Follow a reproducible sequence that you can run in CI and on-demand:

  1. Baseline:
    • Add gas-reporting to your test suite (hardhat-gas-reporter or forge test --gas-report) and commit a baseline snapshot. Tools: Hardhat gas reporter, Foundry gas reports, Tenderly trace profiler.
  2. Local profiling:
    • Run hotspots locally with mainnet forking when external dependencies matter.
    • Identify the top 3 functions by gas per user flow.
  3. Target low-hanging fruit:
    • Convert external large-array parameters to calldata and avoid unnecessary copies.
    • Make constants constant or immutable where relevant.
    • Reorder struct fields for packing and reduce SSTORE count.
  4. Apply a focused refactor:
    • Make the smallest change that eliminates a storage write or a memory copy, then rerun benchmarks.
  5. Safety gates:
    • Add unit tests that assert functional equivalence.
    • Add fuzz tests and static analysis (Slither, Echidna).
  6. CI and PR rules:
    • Fail PRs if gas for any critical function exceeds baseline by a configured delta.
    • Store gas baselines as artifacts so every change is auditable.

Example: measuring gas in a deploy-and-call script (Hardhat):

// scripts/measure.js
const { ethers } = require("hardhat");

async function main() {
  const Factory = await ethers.getContractFactory("MyContract");
  const c = await Factory.deploy();
  await c.deployed(); // ethers v5 API; v6 uses waitForDeployment()
  const tx = await c.heavyFunction(/* args */);
  const receipt = await tx.wait();
  console.log("gasUsed:", receipt.gasUsed.toString());
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});

Example: pack a struct, add tests that assert storage slot contents and gas delta, then submit a patch with the test and the gasUsed snapshot in CI.
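A Foundry-style version of such a test might look like the following sketch; it assumes forge-std, a hypothetical Packed contract with the field order from the packing example, and a public account getter:

```solidity
// Sketch of a Foundry test asserting slot contents after packing.
// Assumes Packed declares: uint256 id; address account; uint8 rating; bool active;
import "forge-std/Test.sol";

contract PackingTest is Test {
    function testSharedSlot() public {
        Packed p = new Packed(); // hypothetical contract under test
        // With that field order, slot 1 holds account + rating + active.
        bytes32 raw = vm.load(address(p), bytes32(uint256(1)));
        assertEq(address(uint160(uint256(raw))), p.account());
    }
}
```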

A short checklist to keep in your PR template:

  • [ ] Is there a gas baseline test for modified functions?
  • [ ] Did you run the profiler to show the hotspot before/after?
  • [ ] Did the change reduce SSTOREs or eliminate memory copies?
  • [ ] Are assembly/unchecked uses covered by unit and fuzz tests?
  • [ ] Did static analysis run and pass?

Sources

Solidity — Layout of State Variables in Storage - Rules and behavior for how Solidity packs state variables into 32-byte storage slots; used to justify packing examples and field ordering.

Solidity — Data Location: memory, storage and calldata - Explanation of calldata vs memory, external function parameter behavior, and copying semantics referenced in the calldata section.

Solidity — Inline Assembly - Reference for assembly syntax, semantics, and recommended safety practices referenced in the assembly section.

Solidity — Constant and Immutable State Variables - Documentation on constant and immutable variables and why they reduce runtime SLOADs.

Solidity — Checked and Unchecked Arithmetic - Details about unchecked blocks and the gas tradeoffs for skipping overflow checks.

hardhat-gas-reporter (GitHub) - Tool used to add gas reporting to Hardhat test suites and CI.

Foundry Book - Foundry documentation and commands for testing, fuzzing, and gas reporting (forge test --gas-report guidance).

Tenderly Documentation - Profiler and forking-based tracing that helps identify costly storage/opcode operations in real-world scenarios.

OpenZeppelin Contracts Documentation - Audited contract patterns and recommendations that influence decisions about replacing custom code with well-tested libraries.

Slither — Static Analysis (GitHub) - Static analysis tooling for detecting security and correctness patterns after low-level optimizations.

The practical constraint is simple: measure before you change, target the biggest-cost operations (SSTOREs and large copies), and keep any low-level work narrowly scoped, well-tested, and documented.
