Jet Halo

How to Create a Zero Knowledge DApp: From Zero to Production, Case 1: zk Escrow

This article is about understanding a zk application from a full-stack, end-to-end perspective.

It uses the ZK Escrow Release repository as the concrete example, and walks through one escrow flow from the business problem to the contract, the circuit, the frontend, the verification layer, and finally on-chain consumption.

The goal here is not to walk through the code line by line, and it is not to repeat the install, run, and deployment commands. Those are better kept in step-by-step docs. This article is focused on a different question: if you want to build a complete zk app yourself, what should you think about first, what should you build next, what is each layer responsible for, and where exactly zkVerify fits in the whole path.

This article will cover:

  • where this escrow project is similar to Tornado Cash, and where it is not
  • what parts usually make up a complete zk app
  • what the contract, circuit, frontend, backend, and verification layer each do
  • how to think about zkVerify once it is part of the system

If, by the end, the reader can explain from a full-system perspective which parts a zk app needs, why those parts exist, how they connect to each other, and why zkVerify appears at the verification layer, then this article has done its job.

You can treat it as a development map, not as an operations manual.

Suggested reading approach
It is best to first follow the zkVerify installation guide and get the project running locally, then read this article alongside the code. The installation steps are here: Tutorial 01: Operations Only. The code repository used in this tutorial is here: ZK Escrow Release. That way, when you reach each section, you can jump straight to the matching module in the repo and compare the contract, circuit, frontend, and API together. It makes the whole end-to-end flow much easier to understand.

Understanding What Tornado Cash Is

Tornado Cash had a huge influence on ZK. Even today, many people first think of the controversy around it, but for developers, what it really left behind is a way of thinking:

you do not have to hand the secret directly to the chain in order for the chain to accept a valid spend.

That is why it is still hard to talk about how a zk app is built without talking about Tornado Cash first. Not because this article is trying to retell Tornado Cash itself, but because many later ZK applications, especially projects that use structures like commitment, nullifier, and a Merkle tree, are easiest to understand when you start from there.

Of course, Tornado Cash is not only those pieces. It is a complete protocol with its own deposit pools, withdrawal flow, verifier, relayer, and frontend interaction. Here, the point is to focus on its core cryptographic skeleton, because the escrow project in this repo borrows that skeleton while changing the business goal.

What Tornado Cash Actually Does

Start with Tornado Cash.

If you compress its core idea into one sentence, it looks like this:

a user deposits assets into a pool and gets a note that only they know. Later, when withdrawing, the user does not reveal the original deposit directly to the chain. Instead, they use that note to generate a proof and show the contract that they really do have the right to withdraw from the pool.

The key point is that the funds can be withdrawn to a completely new address. The contract knows the withdrawal is valid, but it does not know which original deposit inside the pool it came from. What Tornado Cash is really doing is cutting that link.

To do that, Tornado Cash naturally grows a structure like this:

  • generate a note at deposit time
  • derive a commitment from that note
  • place all commitments into a Merkle tree
  • at withdrawal time, use a proof to show “I know the secret behind one commitment, and that commitment belongs to this tree”
  • use a nullifier or nullifier hash to prevent the same credential from being spent twice

At that point, Tornado Cash already gives enough of a reference model for the rest of this article.
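That skeleton can be modeled in a few lines. This is a conceptual sketch only: sha256 and a plain set stand in for the Pedersen hash and the Merkle tree, the withdraw function sees the note directly instead of a zk proof, and every name here is illustrative, not from any real codebase.

```typescript
import { createHash, randomBytes } from "node:crypto";

// stand-in hash; the real protocol uses Pedersen inside a circuit
const hash = (...parts: string[]) =>
  createHash("sha256").update(parts.join("|")).digest("hex");

interface Note { nullifier: string; secret: string; }

const commitments = new Set<string>();         // stand-in for the Merkle tree
const usedNullifierHashes = new Set<string>(); // spent one-time codes

// deposit: the user keeps the note; the pool stores only the commitment
function deposit(): Note {
  const note = {
    nullifier: randomBytes(31).toString("hex"),
    secret: randomBytes(31).toString("hex"),
  };
  commitments.add(hash(note.nullifier, note.secret));
  return note;
}

// withdraw: in the real protocol only a proof plus the nullifier hash is
// revealed; here the note is passed in directly to keep the sketch short
function withdraw(note: Note): boolean {
  const commitment = hash(note.nullifier, note.secret);
  const nullifierHash = hash(note.nullifier);
  if (!commitments.has(commitment)) return false;           // membership check
  if (usedNullifierHashes.has(nullifierHash)) return false; // double spend
  usedNullifierHashes.add(nullifierHash);
  return true;
}
```

The point of the sketch is the division of knowledge: the pool never sees the note, only the commitment at deposit time and the nullifier hash at withdrawal time.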

Now look at this project.

This project borrows the same cryptographic skeleton, but it is not doing anonymous withdrawals.

The flow here is more direct:

  • user A deposits funds into the contract
  • at deposit time, the recipient address B is written on-chain
  • the frontend first generates a credential locally and computes the corresponding commitment
  • later, whoever holds that credential and successfully proves it can trigger release
  • but the funds can only go to the originally bound B

That is the most fundamental difference from Tornado Cash.

In Tornado Cash, the withdrawal address can be a fresh new address.

This project does not work like that. Here, the recipient is already locked in when the deposit happens. The later proof is not deciding who gets paid. It is proving that the funds can now be released to the address that was bound from the beginning.

So even though both systems contain commitment, nullifier, a Merkle tree, and a proof, those pieces are serving different goals.

Tornado Cash core flow diagram showing deposit, private note, commitment, nullifier hash, proof, and withdrawal to a new address

Reconstructing the Product Flow from the User's Point of View

Do not rush into the circuit yet, and do not rush into zkVerify yet either.

First, walk through one concrete transfer inside this project. After that, it becomes much easier to come back and break down what each layer is doing.

Suppose we have this escrow:

  • user A deposits 0.1 ETH into the contract
  • at deposit time, the recipient address B is written on-chain together with it

After this step, the chain remembers two kinds of things.

The first kind is the escrow's own data, such as the amount, the recipient, and whether the funds have already been spent.

The second kind is data related to the proof, namely the commitment derived from the credential. That commitment is inserted into the on-chain Merkle tree and becomes one of the members that later proofs will refer to.

Before the deposit transaction is sent, the frontend first generates a credential locally and computes the corresponding commitment. When the transaction goes on-chain, what actually enters the contract is the commitment and the recipient address B. The contract does not keep the credential for the user, and it does not write the raw secret on-chain. Later, if anyone wants to trigger release, they must first have that local credential.

At release time, the browser uses that credential to do several things:

  • compute the commitment that belongs to this credential
  • find its position in the Merkle tree locally
  • generate a zk proof

At that point, the proof is not sent straight into the contract yet.

In this project, the proof first goes to the server API, then to Kurier, and then into zkVerify's verification path. Only after zkVerify returns a result that the chain can consume does the frontend continue.

At the end of the flow, the frontend calls the contract's finalize() with the aggregation result returned by zkVerify together with the publicInputs for this proof.

When the contract receives those parameters, it does not release the funds immediately. It first checks:

  • whether this commitment has a matching deposit in the contract
  • whether this credential has already been used
  • whether the statement behind the current proof is correct
  • whether zkVerify's aggregation result can pass on-chain verification

Only after all of those checks pass does the contract send the original 0.1 ETH to the B that was bound at deposit time, rather than to the person currently calling finalize().

So, at a rough level, the full path can be remembered like this:

A deposits 0.1 ETH -> B is locked in -> the frontend generates a local credential -> the browser produces a local proof -> the proof enters zkVerify's verification path -> the contract calls finalize -> 0.1 ETH is sent to B

zk escrow release flow diagram showing local credential creation, deposit with locked recipient B, on-chain state, browser proof generation, zkVerify validation path, and final release to B

What commitment Is

If you send the raw credential to the chain, the secret is gone. So the chain does not store nullifier and secret directly. It only stores the result derived from them, and that result is the commitment.

One good way to think about commitment is this:

you put the raw credential into a sealed envelope. Other people cannot see what is inside, but they can see the unique mark on the outside. Later, that mark is what the system recognizes.

When we say “you cannot reverse it back to the original content,” that does not mean it is mathematically impossible in an absolute sense. It means:

if all you can see is the commitment, it is very hard to recover the original nullifier and secret. That is exactly the job of the hash function here. It is easy to go from input to output, and hard to go from output back to input.

This project uses the Pedersen hash. For now, you can think of it as a hash function designed to be friendly to ZK circuits. That is why it is common in zk projects and fits naturally into Circom-style constraints.

In the frontend, the source of pedersen is straightforward. It is loaded from circomlibjs:

const circomlib = await import('circomlibjs');
pedersenCache = await circomlib.buildPedersenHash();

Code location: apps/web/src/zk/escrow/prover.ts in loadPedersen()

In this project, the frontend first generates nullifier and secret locally, then combines them and computes the commitment:

const commitmentBytes = new Uint8Array(62);
commitmentBytes.set(nullifierBytes, 0);
commitmentBytes.set(secretBytes, 31);

const commitmentPoint = pedersen.hash(commitmentBytes);
const commitmentUnpacked = babyJub.unpackPoint(commitmentPoint);
const commitment = babyJub.F.toObject(commitmentUnpacked[0]);

Code location: apps/web/src/zk/escrow/prover.ts in computeCommitment()
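As a runnable illustration of that byte layout, here is a sketch that mirrors the 31 + 31 packing, with sha256 standing in for the Pedersen hash. The real code hashes the 496 bits with Pedersen and unpacks a Baby Jubjub point; none of that cryptography is reproduced here, only the packing idea.

```typescript
import { createHash, randomBytes } from "node:crypto";

// The 62-byte buffer mirrors the real layout: 31 bytes of nullifier at
// offset 0, 31 bytes of secret at offset 31 (496 bits total), one hash call.
function computeCommitmentSketch(
  nullifierBytes: Uint8Array,
  secretBytes: Uint8Array,
): string {
  const buf = new Uint8Array(62);
  buf.set(nullifierBytes, 0);
  buf.set(secretBytes, 31);
  // stand-in for pedersen.hash + babyJub.unpackPoint in the real code
  return createHash("sha256").update(buf).digest("hex");
}
```

The properties that matter carry over: the same credential always derives the same commitment, and changing either half changes the result.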

Once it is computed, what actually gets written into the contract is not the raw credential, but that commitment:

function deposit(bytes32 commitment, address recipient) external payable {
    deposits[commitment] = DepositRecord({
        recipient: recipient,
        amount: msg.value,
        spent: false
    });

    uint32 leafIndex = _insert(commitment);
}

Code location: contracts/src/ZKEscrowRelease.sol in deposit()

So the role of commitment is very direct: the frontend keeps the real secret, while the chain stores only a verifiable result that is hard to reverse.

Looking at nullifierHash

commitment alone is not enough.

Because once someone gets the credential, they could in theory keep using it to trigger release again and again. The system still needs a way to know whether that credential has already been used.

That is what nullifierHash is for.

You can think of it as a one-time redemption code.

The most important point is this: the same credential always produces the same nullifierHash. That is because it is derived only from nullifier, and nullifier is a fixed part of the credential.

So the whole thing works a lot like a one-time ticket:

  • the first time, the system sees a redemption code it has never seen before, so it allows it
  • after allowing it, the system records that code
  • the second time, the same ticket produces the same code again
  • the system checks its record, sees that the code was already used, and rejects it

This means the system can detect “this is the same spend again, not a new valid spend” without exposing the raw credential itself.

In the frontend, nullifierHash is not computed from nullifier + secret. It is computed from nullifier alone:

const nullifierPoint = pedersen.hash(nullifierBytes);
const nullifierUnpacked = babyJub.unpackPoint(nullifierPoint);
const nullifierHash = babyJub.F.toObject(nullifierUnpacked[0]);

Code location: apps/web/src/zk/escrow/prover.ts in computeCommitment()

In the contract, finalize() reads publicInputs[1] as nullifierHash, then checks whether it has already been used:

bytes32 nullifierHash = bytes32(publicInputs[1]);
require(!nullifierUsed[nullifierHash], "nullifier used");

nullifierUsed[nullifierHash] = true;
dep.spent = true;

Code location: contracts/src/ZKEscrowRelease.sol in finalize()
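That guard logic can be modeled in isolation. The sketch below mirrors the contract's field names, but it is a TypeScript illustration, not the Solidity code:

```typescript
// names mirror the contract fields; the types are illustrative
type DepositRecord = { recipient: string; amount: bigint; spent: boolean };

const deposits = new Map<string, DepositRecord>();
const nullifierUsed = new Map<string, boolean>();

function finalizeGuard(commitment: string, nullifierHash: string): boolean {
  const dep = deposits.get(commitment);
  if (!dep || dep.spent) return false;                // no matching live escrow
  if (nullifierUsed.get(nullifierHash)) return false; // "nullifier used"
  nullifierUsed.set(nullifierHash, true);             // burn the one-time code
  dep.spent = true;                                   // escrow is released
  return true;
}
```

Note that both flags flip in the same step, so a replayed credential fails on either check, whichever is reached first.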

The circuit also hard-binds this relationship: the private nullifier that you submit must really derive the public nullifierHash. That means you cannot swap in a fresh nullifierHash and pretend it is a first-time use.

hasher.nullifierHash === nullifierHash;
hasher.commitment === commitment;

Code location: circuits/escrow/circom/escrowRelease.circom

So nullifierHash is not solving generic “deduplication.” It solves a very specific problem:

when the same credential comes back a second time, it exposes the same one-time redemption code, so the contract can stop it.

Finally, the Merkle Tree

commitment solves “you cannot store the raw secret on-chain,” and nullifierHash solves “the same credential cannot be used twice,” but one more piece is still missing:

how does the system know that the commitment you are presenting really belongs to the set of valid on-chain deposits, rather than being something you just invented locally?

That is what the Merkle tree is doing.

You can think of it as a very long membership list. You do not have to bring the entire list with you. You only need to provide one path from “my item” up to the final summary, and the system can check whether you really are a member.

In this project, every time the contract receives a new deposit, it inserts the corresponding commitment into the tree:

uint32 leafIndex = _insert(commitment);
emit MerkleRootUpdated(getLastRoot(), leafIndex);

Code location: contracts/src/ZKEscrowRelease.sol in deposit()

The tree maintenance logic itself lives in MerkleTreeWithHistory.sol. After a new leaf is inserted, the root is recomputed upward. Later, finalize() uses isKnownRoot() to check whether the root provided by the proof is a root the contract has seen before.

function _insert(bytes32 _leaf) internal returns (uint32 index) { ... }

function isKnownRoot(bytes32 _root) public view returns (bool) { ... }

Code location: contracts/src/MerkleTreeWithHistory.sol

Before generating a proof locally, the frontend also computes the path for this leaf from the current commitment list:

const { root, pathElements, pathIndices } = await buildMerkleProof(
  leaves,
  leafIndex,
);

Code location: apps/web/src/zk/escrow/prover.ts in buildMerkleProof()
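To make the path construction and the later upward recomputation concrete, here is a self-contained sketch with sha256 in place of the circuit's hash. The function names are illustrative, not the repo's, and the zero-padding value is an assumption of the sketch.

```typescript
import { createHash } from "node:crypto";

const h2 = (l: string, r: string) =>
  createHash("sha256").update(l + r).digest("hex");

const ZERO = "0".repeat(64); // padding leaf for a fixed-depth tree

// build the sibling path for one leaf, recomputing each layer upward
function buildMerkleProofSketch(leaves: string[], leafIndex: number, levels: number) {
  let layer = leaves.slice();
  while (layer.length < 2 ** levels) layer.push(ZERO);
  const pathElements: string[] = [];
  const pathIndices: number[] = [];
  let idx = leafIndex;
  for (let i = 0; i < levels; i++) {
    const sibling = idx % 2 === 0 ? layer[idx + 1] : layer[idx - 1];
    pathElements.push(sibling);
    pathIndices.push(idx % 2); // 0 = current node is the left child
    const next: string[] = [];
    for (let j = 0; j < layer.length; j += 2) next.push(h2(layer[j], layer[j + 1]));
    layer = next;
    idx = Math.floor(idx / 2);
  }
  return { root: layer[0], pathElements, pathIndices };
}

// what the membership check does: recompute upward from the leaf
function recomputeRoot(leaf: string, pathElements: string[], pathIndices: number[]): string {
  let cur = leaf;
  for (let i = 0; i < pathElements.length; i++) {
    cur = pathIndices[i] === 0 ? h2(cur, pathElements[i]) : h2(pathElements[i], cur);
  }
  return cur;
}
```

A valid path recomputes exactly the stored root; a wrong leaf, a fake sibling, or a flipped left-right index changes the result at some level and the final comparison fails.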

Inside the circuit, the actual membership check looks like this:

tree.leaf <== hasher.commitment;
tree.root <== merkleRoot;
tree.pathElements[i] <== merklePath[i];
tree.pathIndices[i] <== merkleIndex[i];

Code location: circuits/escrow/circom/escrowRelease.circom

If you translate those lines directly into plain English:

  • tree.leaf <== hasher.commitment; tells the circuit that the leaf being checked is the commitment derived from this credential
  • tree.root <== merkleRoot; tells the circuit that the target root is the public merkleRoot
  • tree.pathElements[i] <== merklePath[i]; gives the circuit the sibling node for each level
  • tree.pathIndices[i] <== merkleIndex[i]; tells the circuit whether the current node is on the left or on the right at each level

Taken together, what the circuit is really doing is:

starting from the commitment leaf, recompute upward level by level following merklePath and merkleIndex. If the final result really equals the public merkleRoot, then this commitment really belongs to the tree.

If the path is fake, or the leaf is not really in the tree, the recomputed root will not match, and the proof will fail.

So the Merkle tree solves: how do you prove that this credential really belongs to one of the system's deposits?

Put the three pieces together and their roles become clear:

  • commitment turns the raw credential into something the chain can store
  • nullifierHash makes sure the credential can only be used once
  • the Merkle tree proves that the credential really belongs to the set of valid on-chain deposits

What State the Contract Actually Stores On-Chain

As explained earlier, this project is not “if the proof passes, then decide who gets paid.” Instead, the contract first records an escrow on-chain at deposit time, and the later proof only authorizes release.

So this section focuses on one question: what exactly does the contract store on-chain?

The First Kind of State: the Data of Each Escrow

The most important part is the deposits mapping:

mapping(bytes32 => DepositRecord) public deposits;

struct DepositRecord {
    address recipient;
    uint256 amount;
    bool spent;
}

Code location: contracts/src/ZKEscrowRelease.sol

Its key is commitment, and its value is the record for that escrow.

Only three pieces of data are actually stored there:

  • recipient: who the funds are allowed to go to
  • amount: how much money is locked
  • spent: whether the escrow has already been released

That is why this article has kept repeating that the recipient is locked at deposit time. The chain really stores it. Later, finalize() does not decide the recipient again. It reads that existing record.

The Second Kind of State: Which Commitments Belong to Valid Deposits

Storing deposits alone is not enough.

Because the proof is not only proving “I know a credential.” It also needs to prove “the commitment behind this credential really belongs to one of the on-chain deposits.”

So inside deposit(), the contract does two things:

deposits[commitment] = DepositRecord({
    recipient: recipient,
    amount: msg.value,
    spent: false
});

uint32 leafIndex = _insert(commitment);
emit MerkleRootUpdated(getLastRoot(), leafIndex);

Code location: contracts/src/ZKEscrowRelease.sol in deposit()

The first half stores the escrow record. The second half inserts the commitment into the Merkle tree.

The tree does not keep only the latest root. It also keeps a short history of roots:

mapping(uint256 => bytes32) public roots;
uint32 public constant ROOT_HISTORY_SIZE = 30;
uint32 public currentRootIndex = 0;
uint32 public nextIndex = 0;

Code location: contracts/src/MerkleTreeWithHistory.sol

The reason is straightforward: when a user generates a proof locally, they may not happen to do it at the exact moment the latest root exists. As long as the proof's root still belongs to the contract's known root history, finalize() can accept it.
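The idea of a short root history can be sketched as a ring buffer. The real contract walks backwards from currentRootIndex when checking membership; a plain membership check is equivalent for this illustration.

```typescript
// ring buffer of the last ROOT_HISTORY_SIZE roots, sketch only
const ROOT_HISTORY_SIZE = 30;
const roots: string[] = new Array(ROOT_HISTORY_SIZE).fill("");
let currentRootIndex = 0;

// called after every insert: the oldest root is silently overwritten
function recordRoot(root: string): void {
  currentRootIndex = (currentRootIndex + 1) % ROOT_HISTORY_SIZE;
  roots[currentRootIndex] = root;
}

function isKnownRoot(root: string): boolean {
  if (root === "") return false; // empty slots never match
  return roots.includes(root);
}
```

So a proof built against a slightly stale root still passes, as long as fewer than ROOT_HISTORY_SIZE deposits landed in between.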

That is why finalize() starts with this check:

require(isKnownRoot(merkleRoot), "root not known");

Code location: contracts/src/ZKEscrowRelease.sol in finalize()

The Third Kind of State: Which Credentials Have Already Been Consumed

The same credential must not be used twice, so the contract also keeps a list of already-used credentials:

mapping(bytes32 => bool) public nullifierUsed;

Code location: contracts/src/ZKEscrowRelease.sol

Inside finalize(), the contract first checks whether this nullifierHash has already appeared:

require(!nullifierUsed[nullifierHash], "nullifier used");

nullifierUsed[nullifierHash] = true;
dep.spent = true;

Code location: contracts/src/ZKEscrowRelease.sol in finalize()

Two things are updated here at the same time:

  • nullifierUsed[nullifierHash] = true
  • dep.spent = true

The first prevents the same credential from being used again. The second marks the escrow itself as already released.

The Fourth Kind of State: Which Business Context This Proof Is Supposed to Belong To

This project does not only validate “whether the proof is formally valid.” It also validates “whether the proof belongs to this business flow.”

So the contract stores several expected configuration values:

IZKVerifyAggregation public zkVerify;
bytes32 public vkHash;
uint256 public expectedDomain;
uint256 public expectedAppId;
uint256 public expectedChainId;

Code location: contracts/src/ZKEscrowRelease.sol

These fields do different jobs:

  • zkVerify: which aggregation verification contract the chain should call
  • vkHash: which verification key this statement is tied to
  • expectedDomain / expectedAppId / expectedChainId: which business domain, application, and chain this proof is supposed to belong to

So before actually releasing the funds, finalize() also checks:

require(domain == expectedDomain, "domain mismatch");
require(appId == expectedAppId, "appId mismatch");
require(chainId == expectedChainId && chainId == block.chainid, "chainId mismatch");

Code location: contracts/src/ZKEscrowRelease.sol in finalize()

Putting Those Kinds of State Together

If you think of the contract as a state container, it is really doing four things:

  • using deposits to store the recipient, amount, and spent flag for each escrow
  • using the Merkle tree and root history to store which commitments belong to valid deposits
  • using nullifierUsed to store which credentials have already been consumed
  • using zkVerify / vkHash / expectedDomain / expectedAppId / expectedChainId to store what the verification path is supposed to look like

So later, finalize() is not “release funds if there is a proof.” It compares the current proof against these on-chain states, one by one, and decides whether this is really a release the contract is willing to accept.
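Those four kinds of state can be combined into one illustrative guard sequence. Everything below, including the stand-in aggregationOk flag for the zkVerify aggregation check, is a sketch in TypeScript, not the actual Solidity:

```typescript
interface EscrowState {
  deposits: Map<string, { recipient: string; amount: bigint; spent: boolean }>;
  knownRoots: Set<string>;      // stand-in for the root history
  nullifierUsed: Set<string>;
  expectedDomain: bigint;
  expectedAppId: bigint;
  expectedChainId: bigint;
}

// returns the bound recipient on success, null on any failed check
function canRelease(
  s: EscrowState,
  p: { commitment: string; nullifierHash: string; merkleRoot: string;
       domain: bigint; appId: bigint; chainId: bigint;
       aggregationOk: boolean }, // stand-in for the on-chain zkVerify call
): string | null {
  if (!s.knownRoots.has(p.merkleRoot)) return null;      // "root not known"
  const dep = s.deposits.get(p.commitment);
  if (!dep || dep.spent) return null;                    // matching live deposit?
  if (s.nullifierUsed.has(p.nullifierHash)) return null; // "nullifier used"
  if (p.domain !== s.expectedDomain || p.appId !== s.expectedAppId ||
      p.chainId !== s.expectedChainId) return null;      // context mismatch
  if (!p.aggregationOk) return null;                     // aggregation result
  s.nullifierUsed.add(p.nullifierHash);
  dep.spent = true;
  return dep.recipient; // funds go to the recipient bound at deposit time
}
```

The return value makes the design visible: the caller never chooses the payout address; the guard only ever returns the recipient that was stored at deposit time.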

What the Circuit Actually Proves, and How the Browser Computes the Proof

For many people, the moment they see the circuit, they immediately fall into details: which signals are public, which are private, which line does hashing, which line creates constraints.

But if the overall structure is not clear first, those details quickly turn into a pile of symbols that are hard to follow.

So start with the conclusion.

This circuit is not proving:

  • “I am the recipient B”
  • “I am the person who originally made the deposit”
  • “I can now choose any address to receive the funds”

What this circuit is really proving, in plain language, is only two things:

  • you really do hold the original credential
  • the commitment derived from that credential really belongs to the on-chain Merkle tree

If those two facts hold, the system knows this is not a fake credential invented on the spot. It is a real credential tied to a real on-chain deposit.

In other words, the circuit is only deciding one thing: whether this release request is backed by a valid credential that corresponds to an on-chain deposit. Who the funds are finally sent to is not decided here. That was already written into the contract record earlier, during deposit.

That is also exactly what the comment in the circuit file says:

// Authorization-only escrow release (no recipient in circuit)

Code location: circuits/escrow/circom/escrowRelease.circom

First Look at the Two Kinds of Inputs in This Circuit

Inside EscrowRelease(levels), the inputs are explicitly split into public and private.

The public inputs are:

signal input merkleRoot;
signal input nullifierHash;
signal input commitment;
signal input domain;
signal input appId;
signal input chainId;
signal input timestamp;

The private inputs are:

signal input nullifier;
signal input secret;
signal input merklePath[levels];
signal input merkleIndex[levels];

Code location: circuits/escrow/circom/escrowRelease.circom

In plain language:

  • public inputs are values the outside world can see and verify against
  • private inputs are values the prover knows but does not reveal

Applied to this project:

  • nullifier and secret are raw credential data, so they must be private
  • merklePath and merkleIndex are used for membership proof, so they are also private
  • commitment, nullifierHash, and merkleRoot are values the verifier must use, so they must be public
  • domain, appId, chainId, and timestamp are the business context for this proof, so they must also be public, otherwise the later statement and contract checks cannot bind the proof to the correct scenario

Part One of the Circuit: Recompute the Public Results from the Private Credential

The circuit does not start by checking the tree. It starts by recomputing:

can the private nullifier and secret in this witness really derive the public commitment and nullifierHash?

That logic lives in CommitmentHasher():

template CommitmentHasher() {
    signal input nullifier;
    signal input secret;
    signal output commitment;
    signal output nullifierHash;

    component commitmentHasher = Pedersen(496);
    component nullifierHasher = Pedersen(248);

Code location: circuits/escrow/circom/escrowRelease.circom

You can split that logic into two steps.

First, nullifier and secret are broken into bits.

Second, commitmentHasher computes commitment from nullifier + secret, while nullifierHasher computes nullifierHash from nullifier alone.

These two lines are the key constraints:

hasher.nullifierHash === nullifierHash;
hasher.commitment === commitment;

They do not mean “recompute it once and see what happens.” They mean:

the circuit forces the private witness and the public inputs to match.

If someone fills in a public commitment arbitrarily, but the nullifier and secret they hold cannot really derive it, this part fails.

If someone tries to change nullifierHash into a fresh, unused value to dodge replay protection, this part also fails, because it must really come from the same nullifier.

Part Two of the Circuit: Check That This Commitment Really Belongs to the Tree

The previous part proves only one thing: you know a raw credential that really derives the public commitment and nullifierHash.

But one more step is still missing.

The system still has to know that this commitment is not something you just invented locally, but something that really belongs to the set of valid on-chain deposits.

So the circuit next calls MerkleTreeChecker(levels):

component tree = MerkleTreeChecker(levels);
tree.leaf <== hasher.commitment;
tree.root <== merkleRoot;
for (var i = 0; i < levels; i++) {
    tree.pathElements[i] <== merklePath[i];
    tree.pathIndices[i] <== merkleIndex[i];
}

Code location: circuits/escrow/circom/escrowRelease.circom

The meaning is clear:

  • the leaf is the commitment derived from this credential
  • the target root is the public merkleRoot
  • the prover must provide the path from that leaf up to the root

MerkleTreeChecker simply recomputes upward level by level:

selectors[i].in[0] <== i == 0 ? leaf : hashers[i - 1].hash;
selectors[i].in[1] <== pathElements[i];
selectors[i].s <== pathIndices[i];

hashers[i].left <== selectors[i].out[0];
hashers[i].right <== selectors[i].out[1];

Code location: circuits/escrow/circom/merkleTree.circom

There are two key points here:

  • pathElements[i] gives the sibling node at each level
  • pathIndices[i] tells the circuit whether the current node is on the left or on the right

Because the left-right order matters, the final hash changes when the order changes. So if any path element is fake, or any left-right order is wrong, the final root will not match.

The last line in MerkleTreeChecker is:

root === hashers[levels - 1].hash;

Its meaning is very direct: the root recomputed by the circuit must equal the public merkleRoot.

Part Three of the Circuit: Bind the Proof to the Current Business Context

If the circuit only checked commitment and Merkle membership, one piece would still be missing.

Because the same membership proof could be reused elsewhere if there were no context binding.

So this circuit also includes domain, appId, chainId, and timestamp in the public inputs:

signal input domain;
signal input appId;
signal input chainId;
signal input timestamp;

Then it forces those fields to participate in constraints:

signal d2;
signal a2;
signal c2;
signal t2;
d2 <== domain * domain;
a2 <== appId * appId;
c2 <== chainId * chainId;
t2 <== timestamp * timestamp;

Code location: circuits/escrow/circom/escrowRelease.circom

Those lines look simple, but they matter.

The point is not to add a complicated new layer of business logic. The point is to make sure those public fields are actually used by the circuit, so they really become part of the statement behind this proof. Later, the contract checks them against expectedDomain, expectedAppId, and the current chainId, and only then is the proof truly bound to this business flow.

So What Is This Circuit Really Proving?

Put the three parts together, and the circuit is really proving:

  1. the prover knows nullifier and secret
  2. those private values really derive the public commitment and nullifierHash
  3. that commitment really belongs to the public merkleRoot
  4. the proof also carries the public context fields domain / appId / chainId / timestamp

So this is not a vague “I am allowed to release.”

A more accurate sentence is:

I know a valid credential, the commitment derived from it really belongs to an on-chain root, and this proof belongs to the current business context.

How the Browser Computes This Proof

Once the circuit is clear, the order inside the frontend proveEscrow() becomes much easier to follow.

It does not call snarkjs immediately. It first prepares every input the circuit needs:

const { nullifier, secret } = parseCredential(credential);
const { commitment, nullifierHash } = await computeCommitment(nullifier, secret);
const { root, pathElements, pathIndices } = await buildMerkleProof(
  leaves,
  leafIndex,
);

Code location: apps/web/src/zk/escrow/prover.ts in proveEscrow()

Those three steps line up directly with the three things the circuit needs:

  • the raw private credential: nullifier, secret
  • the public results derived from it: commitment, nullifierHash
  • the membership path: root, pathElements, pathIndices

After that, the frontend packs those values into one circuit input object:

const input = {
  merkleRoot: root.toString(),
  nullifierHash: nullifierHash.toString(),
  commitment: commitment.toString(),
  domain: domain.toString(),
  appId: appId.toString(),
  chainId: chainId.toString(),
  timestamp: timestamp.toString(),
  nullifier: nullifier.toString(),
  secret: secret.toString(),
  merklePath: pathElements.map((v) => v.toString()),
  merkleIndex: pathIndices,
};

Code location: apps/web/src/zk/escrow/prover.ts in proveEscrow()

Only then does it actually generate the proof:

const { proof, publicSignals } = await snarkjs.groth16.fullProve(
  input,
  wasmPath,
  zkeyPath,
);

Code location: apps/web/src/zk/escrow/prover.ts in proveEscrow()

So browser-side proving is not “throw some data into a library and magically get a proof.”

It is doing something very specific:

prepare the full witness and public inputs that the circuit needs, then let snarkjs generate a proof that an external verifier can check.

At that point, the meaning of the proof is finally complete.

Where zkVerify Enters the Flow After the Proof Is Generated

By now, the contract and the circuit have each been explained on their own. The next step is to go back to the whole business flow, because that makes the later verification path much easier to understand. In a zk project, the circuit and the contract are only two layers. The full flow also includes proof generation, proof verification, and the way the verification result is finally consumed on-chain. So first look at the general zk flow, then map this project onto it.

Before going further, it helps to place the overall flow in one frame.

A zk project usually goes through two stages:

  • first, generate a proof
  • second, verify that proof

The first stage happens on the prover side. The browser feeds the inputs into the circuit, computes the witness, and calls the proving tool to generate a proof. That is where wasm and zkey are used. What is actually produced here is a fresh proof for this user action.

The second stage happens on the verifier side. The verification system takes that proof together with the public inputs and checks whether it is valid. This stage uses the verification material tied to the circuit, namely vk, or its identifier vkHash.

One timing detail matters here: proof is generated fresh every time a user asks for a proof. vk is not generated at that moment. It belongs to the earlier circuit preparation phase, where it is paired with zkey for this circuit.

Once you look at the timeline that way, the logic becomes much clearer.
The first timeline is the “prepare the circuit” phase, which usually happens once.

You first write the Circom circuit and compile it. After compilation, you get the wasm and the corresponding constraint system. Then you run the Groth16 setup for that circuit, which gives you the two artifacts you need later: the zkey and the corresponding vk.

So zkey and vk do not appear out of nowhere. They both come from the same circuit, but they serve different roles: zkey is for the prover to generate proofs, and vk is for the verifier to check them.

The second timeline is what happens every time a user actually asks for a proof.

For example, when a user unlocks an escrow, the browser first prepares the inputs, then uses wasm + zkey to generate a new proof.

What is new every time is the proof, not the vk. The vk was already prepared during the earlier setup phase, and it stays tied to this circuit.

As for vkHash, you can think of it as an identifier for the vk.
Many systems do not pass around the full vk on every request at runtime. Instead, they register the verification material up front and later use only an identifier such as vkHash.
This project works that way: the browser generates the proof, while the later verification path is built around proof, publicInputs, and vkHash.

In other words, what is generated fresh for each user action in this project is the proof, while vk / vkHash points to the verification material that was already prepared for this circuit.
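The idea of "an identifier for the vk" can be sketched in a few lines. Note that this is purely illustrative: the exact serialization and hash function zkVerify uses to derive a vkHash are defined by zkVerify, not by this sketch. The only property that matters here is that the same vk always maps to the same short identifier.

```typescript
import { createHash } from "node:crypto";

// Illustration of "register once, reference by hash later":
// hash a canonical serialization of the vk into a fixed-size identifier.
// The real vkHash scheme is zkVerify's, not this sha256-over-JSON sketch.
function vkIdentifier(vk: object): string {
  const canonical = JSON.stringify(vk); // assumes a stable key order
  return "0x" + createHash("sha256").update(canonical).digest("hex");
}
```

Once the vk is registered under this identifier, every later submission only needs to carry the short hash, and the verifier looks up the full verification material on its side.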

In this project, the browser is responsible for generating the proof;
the later verification path goes through Kurier and zkVerify;
and what the chain finally consumes is not simply “the raw proof that just came out of the browser,” but the verification result returned by zkVerify.

At this point, the browser can already do something powerful:

it can generate a proof locally that says “I know a valid credential, and it belongs to some on-chain root.”

But that is still not the end of the flow.

Because what the business logic needs is not “a proof appeared in the browser.” It needs “the contract is willing to execute a real release based on that proof.”

There is still one verification layer between those two things.

That is exactly the layer zkVerify fills in this system.

Flow diagram: circuit setup → browser proof generation → server API submission → proof status tracking → aggregation result retrieval → gateway verification → escrow finalize on the target chain.

Why Generating the Proof Is Still Not Enough

If the system used the route where the contract verifies the Groth16 proof directly, then once the browser generated the proof, the next step would simply be to send that proof into a verifier contract.

But this project does not work like that.

The route here is:

  • the browser first generates the proof locally
  • the server forwards the proof to Kurier
  • Kurier and zkVerify process that proof
  • the frontend receives an aggregation result that the chain can consume
  • finally, the contract uses verifyProofAggregation(...) to decide whether finalize() can pass

So zkVerify here is not the prover, and it is not the place where the credential is generated.

Its job is this: take an already-generated proof and turn it into a verification result that an on-chain contract can rely on.

What /api/submit-proof Does

Once the proof comes out of the browser, the first stop is not the contract. It is the server-side /api/submit-proof.

This layer has two jobs.

First, it keeps the Kurier API key on the server rather than exposing it to the frontend.

Second, before forwarding the proof, it cross-checks several key fields so that the frontend does not send out a proof whose context is inconsistent.

The most important checks are:

const chainFromInputs = normalize(publicInputs[5]);
if (chainFromInputs && String(chainId) !== chainFromInputs) { ... }

const domainFromInputs = normalize(publicInputs[3]);
if (domainFromInputs && String(domain) !== domainFromInputs) { ... }

const appIdFromInputs = normalize(publicInputs[4]);
if (appIdFromInputs && String(appId) !== appIdFromInputs) { ... }

const nullifierFromInputs = normalize(publicInputs[1]);
if (nullifierFromInputs && String(antiReplay.nullifier) !== nullifierFromInputs) { ... }

Code location: apps/web/src/pages/api/submit-proof.ts

In other words, the server is not blindly forwarding the proof. It first confirms:

  • the request chainId and the proof's publicInputs[5] are the same value
  • the request domain and appId really match the proof's public inputs
  • the anti-replay nullifier also matches the proof's nullifierHash
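Those cross-checks can be restated as one small validation helper. This is a hypothetical restatement, not the project's actual code; the public input ordering is assumed from the circuit input object shown earlier (merkleRoot, nullifierHash, commitment, domain, appId, chainId, timestamp).

```typescript
// Hypothetical restatement of the /api/submit-proof cross-checks:
// every context field the client sends alongside the proof must equal
// the value already bound inside the proof's public inputs.
interface SubmitRequest {
  // assumed order: [root, nullifierHash, commitment, domain, appId, chainId, timestamp]
  publicInputs: string[];
  chainId: number | string;
  domain: string;
  appId: string;
  antiReplay: { nullifier: string };
}

function checkContext(req: SubmitRequest): string[] {
  const errors: string[] = [];
  const pi = req.publicInputs;
  if (String(req.chainId) !== pi[5]) errors.push("chainId mismatch");
  if (req.domain !== pi[3]) errors.push("domain mismatch");
  if (req.appId !== pi[4]) errors.push("appId mismatch");
  if (req.antiReplay.nullifier !== pi[1]) errors.push("nullifier mismatch");
  return errors; // empty array means the context is consistent
}
```

The design point: the proof already commits to its context, so the server only has to compare, never to trust what the client claims about the context separately.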

Only after those checks pass does the server build the Kurier submission payload:

return {
  proofType: 'groth16',
  chainId: Number(chainId),
  vkRegistered: true,
  proofOptions: getProofOptions(),
  proofData: {
    proof,
    publicSignals: publicInputs,
    vk: vkHash,
  },
};

Code location: apps/web/src/pages/api/submit-proof.ts in buildKurierSubmitPayload()

What /api/proof-status Does

After the proof is submitted, the frontend needs to know where it is in the pipeline.

That is the job of /api/proof-status. It queries Kurier for the job status, then normalizes several possible raw fields into a frontend-friendly status:

const statusRaw =
  result.data.status ||
  result.data.state ||
  result.data.jobStatus ||
  result.data.data?.status ||
  result.data.data?.state ||
  result.data.data?.jobStatus;

res.status(200).json({
  proofId,
  status: normalizeKurierStatus(statusRaw),
  rawStatus: statusRaw ?? 'unknown',
  updatedAt,
  statement,
  aggregationDetails,
});

Code location: apps/web/src/pages/api/proof-status.ts
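A normalizer like `normalizeKurierStatus` could look like the sketch below. The concrete raw status strings are assumptions (Kurier's actual values may differ); what matters is collapsing many possible raw strings into the small set of states the UI cares about.

```typescript
// Possible UI-facing states; the raw strings matched below are assumed,
// not taken from Kurier's documentation.
type UiStatus = "pending" | "verified" | "aggregated" | "failed" | "unknown";

function normalizeStatus(raw: string | undefined): UiStatus {
  switch ((raw ?? "").toLowerCase()) {
    case "queued":
    case "submitted":
    case "inprogress":
      return "pending"; // still moving through the pipeline
    case "verified":
      return "verified"; // proof checked, aggregation not yet published
    case "aggregated":
      return "aggregated"; // aggregation tuple can now be fetched
    case "failed":
    case "rejected":
      return "failed";
    default:
      return "unknown"; // surface the raw value separately for debugging
  }
}
```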

One point matters here.

The Kurier status string is useful, but it is not the final condition that decides whether finalize() can pass on-chain. It is mostly telling the frontend where the proof is in the pipeline.

What really determines whether the contract can consume the result is whether the correct aggregation tuple can be retrieved later, and whether that tuple can pass the on-chain verifyProofAggregation(...) check.

What /api/proof-aggregation Does

When the proof reaches the aggregated status, the frontend requests /api/proof-aggregation.

This layer is not verifying the proof again. It is extracting the exact fields from Kurier's response that the contract actually needs later:

  • domainId
  • aggregationId
  • leafCount
  • index
  • merklePath
  • leaf

Code location: apps/web/src/pages/api/proof-aggregation.ts
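One possible typing of that extracted result, with a runtime guard, is sketched below. The field names follow the list above; the types are assumptions about how the project represents them.

```typescript
// Assumed typing of the aggregation result /api/proof-aggregation returns.
interface AggregationTuple {
  domainId: number;      // zkVerify aggregation domain, NOT the circuit's domain input
  aggregationId: number; // which published aggregation this proof landed in
  leafCount: number;     // size of the aggregation tree
  index: number;         // position of this proof's leaf in the tree
  merklePath: string[];  // sibling hashes from leaf to aggregation root
  leaf: string;          // the statement committed for this proof
}

// Runtime guard so the frontend fails loudly on a malformed response
// instead of passing undefined fields into a contract call.
function isAggregationTuple(x: any): x is AggregationTuple {
  return (
    typeof x?.domainId === "number" &&
    typeof x?.aggregationId === "number" &&
    typeof x?.leafCount === "number" &&
    typeof x?.index === "number" &&
    Array.isArray(x?.merklePath) &&
    typeof x?.leaf === "string"
  );
}
```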

The most important point here is:

the domainId here is not the same thing as the circuit public input domain.

The code explicitly says:

// zkVerify domainId (aggregation domain) is NOT the same semantic field as
// circuit public input domain/app domain. Never fallback to DOMAIN=1 here.

Code location: apps/web/src/pages/api/proof-aggregation.ts

Those two concepts have completely different roles:

  • domain in the circuit is part of the business context
  • domainId here is the zkVerify aggregation domain

If those two ideas get mixed up, finalize() is very likely to fail.

What zkVerify Ultimately Provides

By this point, what zkVerify gives this flow is not an abstract “verified” badge. It gives a set of data that the contract can actually consume.

From the frontend's point of view, the two most important outputs are:

  • the aggregation tuple: domainId / aggregationId / leafCount / index / merklePath
  • the corresponding leaf in the aggregation tree

Once the frontend has those two things, it can first compare the statement locally, then let the contract call:

zkVerify.verifyProofAggregation(
    domainId,
    aggregationId,
    leaf,
    merklePath,
    leafCount,
    index
)

Code location: contracts/src/ZKEscrowRelease.sol in finalize()
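Conceptually, the check behind `verifyProofAggregation(...)` is a Merkle membership proof: rebuild the aggregation root from `(leaf, merklePath, index)` and compare it against the root stored for `(domainId, aggregationId)`. The sketch below shows only that idea; sha256 stands in for the real hash, and zkVerify's actual hashing and sibling-ordering rules are not reproduced here.

```typescript
import { createHash } from "node:crypto";

// Stand-in pair hash; zkVerify's real scheme differs.
function hashPair(a: string, b: string): string {
  return createHash("sha256").update(a).update(b).digest("hex");
}

// Walk from the leaf to the root. At each level, the low bit of `index`
// says whether our node is the left or the right child.
function recomputeRoot(leaf: string, path: string[], index: number): string {
  let node = leaf;
  for (const sibling of path) {
    node = index % 2 === 0 ? hashPair(node, sibling) : hashPair(sibling, node);
    index = Math.floor(index / 2);
  }
  return node;
}
```

This is why a wrong `index` or a path copied from a different proof fails: the recomputed root simply will not match the published aggregation root.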

So if you had to summarize zkVerify's role here in one sentence, it would be:

the browser generates the proof, and zkVerify turns that proof into a verification result that the contract can consume.

How the Frontend Uses the zkVerify Result on the Target Chain

Now the previous sections connect into one flow.

Look at it in the real order.

Step 1: Start from the Credential and Produce the Local Proof

When the unlock flow starts, the frontend reads the user-provided credential, fetches the on-chain commitments, and finds the leaf that corresponds to that credential:

const commitments = await fetchCommitments();
const parsed = parseCredential(withdrawCredential);
const { commitment } = await computeCommitment(
  parsed.nullifier,
  parsed.secret,
);
const leafIndex = commitments.findIndex((c) => c === commitment);

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

Once it finds the leaf, the frontend calls proveEscrow():

const bundle = await proveEscrow({
  credential: withdrawCredential,
  leaves: commitments,
  leafIndex,
  domain: domainId,
  appId,
  chainId: BigInt(activeChainId),
  timestamp,
});

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

At that point, the browser already holds the local proof and the publicInputs for that proof.

Step 2: Run Local Prechecks First

After the proof is generated, the frontend still does not submit it immediately.

It first checks two things:

  • whether the proof's merkleRoot is a root the contract recognizes
  • whether the proof's chainId matches the chain the wallet is currently connected to

The corresponding code is:

const known = await publicClient.readContract({
  address: escrowAddress,
  abi: escrowAbi,
  functionName: 'isKnownRoot',
  args: [proofRoot],
});

if (proofChainId !== activeChainId) {
  throw new Error(
    `chainId mismatch (wallet=${activeChainId}, proof=${proofChainId})`,
  );
}

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

The purpose is simple: stop proofs that are obviously impossible before they go any further.

Step 3: Submit the Proof, Then Poll for Status

After the prechecks pass, the frontend calls /api/submit-proof:

const submitRes = await fetch('/api/submit-proof', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    proofId: `proof_${Date.now()}`,
    proof: bundle.proof,
    publicInputs: bundle.publicInputs,
    appId: appId.toString(),
    domain: domainId.toString(),
    userAddr: address,
    chainId: proofChainId,
    timestamp: Number(timestamp),
    antiReplay: {
      nullifier: bundle.nullifierHash.toString(),
    },
  }),
});

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

Once submission succeeds, the frontend keeps calling /api/proof-status until the status becomes aggregated. That status does not mean the business flow is finished. It means the frontend can now fetch the aggregation tuple.

One more time: aggregated is not the end of the flow. It only means zkVerify's side has finished preparing the verification result. The step that actually executes the business logic has not happened yet: the frontend still needs to send a finalize() transaction to the escrow contract on the target chain. Only when that on-chain call succeeds is the release really complete.
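The polling step can be sketched as a small loop. This is a minimal illustration, assuming a `getStatus()` callback that wraps the call to /api/proof-status; the interval and attempt limit are arbitrary, and injecting the callback keeps the sketch testable.

```typescript
// Poll a status callback until it reports "aggregated", failing fast on a
// rejected proof and giving up after a bounded number of attempts.
async function pollUntilAggregated(
  getStatus: () => Promise<string>,
  { intervalMs = 3000, maxAttempts = 100 } = {},
): Promise<void> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getStatus();
    if (status === "aggregated") return; // tuple can now be fetched
    if (status === "failed") throw new Error("proof rejected by the verification pipeline");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for aggregation");
}
```

Bounding the attempts matters: without a limit, a proof that silently stalls in the pipeline would leave the unlock UI spinning forever.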

Step 4: Fetch the Aggregation Tuple and Confirm It Really Belongs to This Proof

Once the status becomes aggregated, the frontend calls /api/proof-aggregation:

const aggRes = await fetch('/api/proof-aggregation', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    proofId: submitData.proofId,
  }),
});

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

What comes back is not “another proof.” It is the aggregation tuple that the later on-chain verification will use.

But the frontend also performs one very important comparison:

it reads the on-chain vkHash, recomputes the statement locally from the current publicInputs, and compares that to the leaf returned from the aggregation:

const onChainVkHash = await publicClient.readContract({
  address: escrowAddress,
  abi: escrowAbi,
  functionName: 'vkHash',
});
const localStatement = computeLocalStatement(publicInputs, onChainVkHash);
if (localStatement.toLowerCase() !== aggLeaf.toLowerCase()) {
  throw new Error(
    `Proof/job mismatch (statement=${localStatement}, leaf=${aggLeaf}).`,
  );
}

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

What this step is asking is:

is the leaf returned by zkVerify really the statement produced by this exact proof?

If that does not match, then even with the tuple in hand, the later contract call will not pass.

Step 5: Run the zkVerify Precheck, Then Call finalize()

After the statement matches, the frontend still does not send a transaction immediately.

It first uses that tuple to call the zkVerify contract through a read-only precheck. The zkVerifyAddr used here is not just any address. It is the official zkVerify gateway/proxy address configured in the escrow contract:

Note
The zkVerifyAddr read here is not just some hardcoded contract address inside the project. It is the official zkVerify gateway/proxy address provided on the target chain. For the Base Sepolia setup used in this tutorial, the official address is 0x0807C544D38aE7729f8798388d89Be6502A1e8A8. For the full list of addresses, see the zkVerify docs: Contract Addresses.

zkvOk = await publicClient.readContract({
  address: zkVerifyAddr,
  abi: zkVerifyAbi,
  functionName: 'verifyProofAggregation',
  args: [domainIdFromAgg, aggregationId, localStatement, merklePath, leafCount, index],
});

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

This call is an eth_call. Its purpose is to ask one question first: if the frontend now really sends finalize() to the target chain, will the official zkVerify gateway/proxy contract accept this aggregation check?

Only after that precheck passes does the frontend simulate finalize(), then actually send the transaction:

await publicClient.simulateContract({
  address: escrowAddress,
  abi: escrowAbi,
  functionName: 'finalize',
  args: [
    domainIdFromAgg,
    aggregationId,
    merklePath,
    leafCount,
    index,
    publicInputs,
  ],
  account: address,
});

const txHash = await writeContractAsync({
  address: escrowAddress,
  abi: escrowAbi,
  functionName: 'finalize',
  args: [domainIdFromAgg, aggregationId, merklePath, leafCount, index, publicInputs],
});

Code location: apps/web/src/pages/escrow.tsx in handleUnlock()

In other words, the real allow path is:

  1. the local proof is generated
  2. the Kurier job reaches aggregated
  3. the aggregation tuple is fetched successfully
  4. the local statement matches the aggregation leaf
  5. the verifyProofAggregation(...) precheck passes
  6. only then is finalize() actually sent on-chain

One more thing needs to be said here: the read-only precheck does not mean the business flow is complete. Once finalize() is really sent, the escrow contract on the target chain will internally call the official zkVerify gateway/proxy contract's verifyProofAggregation(...) again. Only if that on-chain call also passes does finalize() continue to release the funds.

Final Summary

At this point, the frontend has already prepared everything it can.

The parameters received by finalize() are:

  • domainIdFromAgg
  • aggregationId
  • merklePath
  • leafCount
  • index
  • publicInputs

Then the contract runs one final round of checks:

  • whether publicInputs has the correct length
  • whether merkleRoot is a known root
  • whether domain / appId / chainId / timestamp are correct
  • whether verifyProofAggregation(...) passes
  • whether nullifierHash has already been used
  • whether deposits[commitment] exists and is still not spent

Only after all of those checks pass does the contract actually send the funds to the recipient that was locked at deposit time.
