Ethereum’s scaling roadmap has a bottleneck that’s been obvious for years:
rollups need enormous data availability bandwidth, and full nodes can’t keep downloading entire blob payloads forever.
EIP-4844 introduced temporary blob DA, but every node still had to download and verify all blob data in every block. That design simply can’t scale to the multi-rollup future.
EIP-7594 — PeerDAS (Peer Data Availability Sampling) — fixes this by letting nodes download only a small random portion of the data while still verifying that all data is available on the network.
This is one of the most important upgrades since the Merge.
Why PeerDAS Exists
Rollups dominate Ethereum execution. Their bottleneck isn’t compute — it’s DA.
- Proof verification → cheap
- Execution → cheap
- Data availability → expensive
- Blob throughput → the limiting factor for the next decade
Even with blobs, Ethereum cannot safely push DA bandwidth higher if every node must download everything.
The new model:
Every node downloads 1/N of the data, but the network still guarantees full data availability with overwhelming probability.
This single design shift unlocks 10×–30× future DA scaling without turning validators into datacenters.
How PeerDAS Organizes Blob Data
PeerDAS transforms block blob data into a structure that makes sampling possible:
- 1D erasure-coded extended blobs
- A 2D matrix of rows and columns
- Column → subnet mapping
- Nodes responsible for deterministic column indices
Let’s break it down.
1. 1D Erasure-Coded Extended Blobs
Before PeerDAS, a blob was just raw data.
Now, each blob is extended with parity using one-dimensional erasure coding:
Original blob: [d0, d1, d2, d3]
Extended blob: [d0, d1, d2, d3, p0, p1, p2, p3]
Why 1D?
- Fast
- Simple proofs
- Rows remain independent
- Any ≥50% of columns reconstruct the blob
The extension is deterministic and identical for all nodes.
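The idea can be sketched with a toy polynomial over a small prime field. This is illustrative only: real PeerDAS uses Reed-Solomon coding over the BLS scalar field with KZG commitments, and the field size and cell counts here are made up for readability.

```python
P = 257  # tiny prime field for illustration; the real field is ~2**255

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def extend_blob(data):
    """Treat data as evaluations of a polynomial at x = 0..n-1,
    and append parity evaluations at x = n..2n-1."""
    points = list(enumerate(data))
    parity = [lagrange_eval(points, x) for x in range(len(data), 2 * len(data))]
    return data + parity

blob = [10, 20, 30, 40]
ext = extend_blob(blob)

# Any half of the 2n cells pins down the polynomial, hence the data --
# here we recover the original blob from the parity cells alone:
sample = [(x, ext[x]) for x in range(4, 8)]
recovered = [lagrange_eval(sample, x) for x in range(4)]
assert recovered == blob
```

This is exactly why "any ≥50% of columns reconstruct the blob": the extension doubles the evaluations of a fixed-degree polynomial, so any half of them determine it.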
2. Extended Blob Matrix
Each extended blob becomes a row in a 2D matrix.
If a block has 6 blobs:
Row 0 → extended blob 0
Row 1 → extended blob 1
…
Row 5 → extended blob 5
Each row is chunked into equal-sized cells.
Cells at the same index across rows form a column.
This gives Ethereum a grid of small, independently verifiable units.
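The row/cell/column layout is plain list manipulation. A minimal sketch (the `CELLS_PER_ROW` value here is illustrative; the spec fixes the real cell count per extended blob):

```python
CELLS_PER_ROW = 8  # illustrative; the spec constant is larger

def build_matrix(extended_blobs):
    """Each extended blob is one row; each row is split into equal-sized cells."""
    matrix = []
    for blob in extended_blobs:
        cell_size = len(blob) // CELLS_PER_ROW
        row = [blob[i * cell_size:(i + 1) * cell_size] for i in range(CELLS_PER_ROW)]
        matrix.append(row)
    return matrix

def column(matrix, idx):
    """Cells at the same index across all rows form one column."""
    return [row[idx] for row in matrix]

# A block with 6 extended blobs of 16 elements each:
blobs = [list(range(r * 100, r * 100 + 16)) for r in range(6)]
m = build_matrix(blobs)
col3 = column(m, 3)  # one cell from each of the 6 rows
assert len(col3) == 6
```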
3. Column Subnets
Each column is assigned to a deterministic gossip subnet.
Nodes compute:
my_responsible_columns = f(node_id)
my_subnets = g(node_id)
This distributes storage responsibility without coordination.
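A hash-based sketch of what `f(node_id)` could look like. The constants and the exact derivation are assumptions for illustration, not the spec's custody algorithm, but the key property is the same: any peer can recompute any other peer's column duties from its node ID alone.

```python
import hashlib

NUM_COLUMNS = 128        # illustrative column count
CUSTODY_REQUIREMENT = 4  # illustrative number of custody columns per node

def responsible_columns(node_id: bytes):
    """Derive custody columns deterministically from the node ID."""
    cols, counter = [], 0
    while len(cols) < CUSTODY_REQUIREMENT:
        digest = hashlib.sha256(node_id + counter.to_bytes(8, "little")).digest()
        col = int.from_bytes(digest[:8], "little") % NUM_COLUMNS
        if col not in cols:
            cols.append(col)
        counter += 1
    return sorted(cols)

a = responsible_columns(b"node-a")
assert a == responsible_columns(b"node-a")  # deterministic: no coordination needed
```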
(Diagrams in the original: network-layer architecture; node custody responsibilities.)
What Nodes Actually Do in PeerDAS
Each full node has two responsibilities:
1. Custody
- Store the columns assigned by node ID
- Gossip them to peers
- Serve column requests
2. Sampling
Every slot, nodes run a data availability sampling loop:
- pick random column indices
- request those columns from random peers
- verify cells via KZG
- ensure >50% of columns are known to exist
Nodes do not reconstruct entire blobs unless sampling fails.
The DAS Loop (Sampling Process)
This is what guarantees DA without downloading the whole dataset.
(Diagram in the original: DAS workflow.)
Steps:
- Randomly choose a set of column indices
- Select random peers
- Request column cells
- Verify KZG proofs
- Aggregate results
- Conclude “data available” if >50% columns confirmed
The math behind this is extremely strong. With thousands of nodes sampling independently, the probability of an unavailable block passing sampling is astronomically small.
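A back-of-envelope version of that math, computed in log space to avoid float underflow (the sample counts and node counts are illustrative, not spec parameters):

```python
import math

def log10_pass_probability(available, samples_per_node, nodes):
    """log10 of the chance that every sample by every node happens to
    land on an available column, fooling the whole network."""
    return nodes * samples_per_node * math.log10(available)

# A block with only 40% of columns available (below the 50%
# reconstruction threshold), 16 samples per node:
one_node = log10_pass_probability(0.40, 16, nodes=1)      # ~ -6.4
network = log10_pass_probability(0.40, 16, nodes=1000)    # ~ -6367
```

One node alone is fooled with probability under one in a million; a thousand independently sampling nodes push that to an astronomically small number.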
Data Reconstruction Rules
A node can reconstruct the entire dataset if:
it has ≥ 50% of columns.
If not, it requests missing columns via RPC:
- get_data_column_sidecar
- get_specific_columns
- get_cells_for_indices
Reconstruction is rare — it only happens if the node misses gossip.
PeerDAS Changes the Transaction Format
One of the biggest practical changes:
Transaction senders must generate all blob cell proofs. Builders no longer compute them.
The new transaction wrapper looks like:
[
tx_payload_body,
wrapper_version = 1,
blobs,
commitments,
cell_proofs
]
Key rules:
- wrapper_version must be 1
- number of blobs must match the versioned blob hashes
- cell_proofs.length == CELLS_PER_EXT_BLOB × blob_count
- commitments must match the blob versioned hashes
- all cell proofs must verify under KZG
This shifts work to sequencers and wallets, reducing builder overhead.
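The structural checks above can be sketched as follows. The function name and argument layout are illustrative; the final KZG batch verification is omitted because real clients delegate it to their KZG library. The commitment-to-versioned-hash binding (0x01 prefix plus SHA-256) is the EIP-4844 scheme, which is unchanged.

```python
import hashlib

CELLS_PER_EXT_BLOB = 128  # cells per extended blob

def validate_wrapper(wrapper_version, blobs, commitments, cell_proofs,
                     versioned_hashes):
    """Structural sanity checks on a version-1 blob transaction wrapper."""
    if wrapper_version != 1:
        return False
    # One commitment and one versioned hash per blob:
    if not (len(blobs) == len(commitments) == len(versioned_hashes)):
        return False
    # One proof per cell of every extended blob:
    if len(cell_proofs) != CELLS_PER_EXT_BLOB * len(blobs):
        return False
    # Versioned hash = 0x01 || sha256(commitment)[1:]  (EIP-4844):
    for c, h in zip(commitments, versioned_hashes):
        if b"\x01" + hashlib.sha256(c).digest()[1:] != h:
            return False
    # Final step (omitted here): batch-verify every cell proof under KZG.
    return True
```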
Userflow: How a Blob Transaction Moves Through PeerDAS
Lifecycle:
- Wallet/sequencer generates extended blob + cell proofs
- Wraps transaction in 7594 format
- Gossips it
- Builder includes tx in a block
- Consensus layer computes row extension
- Data matrix is split into columns
- Columns go to subnets
- Nodes sample and verify
This completes the full DA pipeline.
Column Sampling vs. Row Sampling
Ethereum deliberately chose column sampling because it gives:
- smaller proofs
- simpler commitments
- easier reconstruction
- better distribution across nodes
- ability to compute extensions locally
Row sampling lacks these properties.
Security Model
The main attack is data withholding — hiding some columns while publishing valid commitments.
PeerDAS defends against this with:
- randomized peer selection
- deterministic column responsibilities
- KZG proofs
- thousands of sampling nodes
- erasure coding that recovers full data from ≥50% columns
Failure probabilities (from EIP-7594)
| Missing Data | Probability Block Passes Sampling |
|---|---|
| 1% | ~10⁻¹⁰⁰ |
| 2% | ~10⁻²⁰ |
| 3% | ~10⁻¹⁰¹ |
| 4% | ~10⁻¹⁹⁸ |
| 5% | ~10⁻³⁰⁶ |
Ethereum gets extremely strong DA guarantees.
Where Developers Directly Interact With PeerDAS
PeerDAS lives mostly in the networking layer, but developers touch it in three places:
1. Rollup Sequencers
Must now generate:
- extended blob rows
- all cell proofs
- commitments
- wrapper version 1 format
Every modern L2 stack (OP Stack, Arbitrum, CDK, zkSync, Starknet L3s) will update accordingly.
2. Wallets & Client Libraries
Wallets that send blob txs must generate:
- extension cells
- cell KZG proofs
Libraries adding 7594 support:
- viem
- ethers.js
- web3.js
- Alloy (Rust)
- go-ethereum bindings
3. Light Clients
PeerDAS is a breakthrough:
- mobile-friendly
- browser-friendly
- trust-minimized
They finally get efficient DA verification in a rollup-centric Ethereum.
Code Examples
Rust (Alloy) — Build & Submit a PeerDAS Transaction
let sidecar = SidecarBuilder::<SimpleCoder>::from_slice(b"Rollups scale now!")
.build()?;
// Create an EIP-4844 tx
let tx = TransactionRequest::default()
.with_to(bob)
.with_blob_sidecar(sidecar);
// Convert into 7594 wrapper
let env = provider.fill(tx).await?.try_into_envelope()?;
let tx_7594: EthereumTxEnvelope<_> =
env.try_into_pooled()?.try_map_eip4844(|tx| {
tx.try_map_sidecar(|s| s.try_into_7594(EnvKzgSettings::Default.get()))
})?;
let raw = tx_7594.encoded_2718();
provider.send_raw_transaction(&raw).await?;
Python — Validate Cell Proofs
def verify_peer_das_data(blobs, commitments, proofs):
    # One commitment per blob; CELLS_PER_EXT_BLOB cells (and proofs) per blob.
    cells_per_blob = [compute_cells(blob) for blob in blobs]
    assert len(proofs) == CELLS_PER_EXT_BLOB * len(blobs)
    # Flatten to the per-cell lists the batch verifier expects:
    # one (commitment, cell_index, cell, proof) tuple per cell.
    flat_commitments, cell_indices, flat_cells = [], [], []
    for blob_idx, cells in enumerate(cells_per_blob):
        for idx, cell in enumerate(cells):
            flat_commitments.append(commitments[blob_idx])
            cell_indices.append(idx)
            flat_cells.append(cell)
    assert verify_cell_kzg_proof_batch(
        flat_commitments,
        cell_indices,
        flat_cells,
        proofs,
    )
Go — Sampling Loop
// Sketch only: a real client would retry other peers on a failed
// sample before drawing any conclusion, rather than panicking.
for i := 0; i < SAMPLES_PER_SLOT; i++ {
peer := chooseRandomPeer()
col := randomColumnIndex()
cell, proof := requestColumnCell(peer, col)
if !kzg.VerifyCell(cell, proof, col) {
panic("DAS sampling failed")
}
}
How PeerDAS Fits into Ethereum’s Future
PeerDAS is Proto-Danksharding’s natural evolution.
It unlocks:
- 8×–30× blob throughput
- cheaper L2 fees
- more decentralized nodes
- full light-client functionality
- scalable DA without new trust assumptions
It preserves compatibility with:
- EIP-4844
- existing mempools
- existing clients
- non-blob transactions
This is the base layer that future Danksharding upgrades will build on.
Final Takeaway
PeerDAS gives Ethereum what it needs to scale:
Massive DA bandwidth without sacrificing decentralization.
By letting each node download only a small sample while keeping cryptographic and probabilistic guarantees extremely strong, Ethereum becomes capable of supporting thousands of rollups — without turning into a datacenter chain.
This is the foundation of Ethereum’s long-term rollup future.