Adam

Posted on • Originally published at rustarians.com

Execution Traces: Packing Data for Polynomials

The original version of this post has interactive code editors and animations, where you can run and modify every Rust snippet in the browser.


Welcome, fellow zk-enjoyer.

If you've been following along, you know how polynomials work as a data structure and why roots of unity make evaluating them fast. Now comes the question everyone skips: before the polynomial checks anything, where does the data come from?

This is post #3 in the "first principles" ZK series:

  1. Polynomials and finite fields
  2. Roots of unity
  3. Execution traces <- you are here
  4. FFT/NTT interpolation
  5. Constraints
  6. Polynomial packing
  7. KZG commitment
  8. Proof and verification

You already know how to do this

Here's a function you've written a hundred times:

fn is_valid_age(x: u32) -> bool {
    x >= 18 && x <= 90
}

The CPU runs it, checks the condition, returns true or false. The result is immediate. You trust the machine.

This isn't hypothetical. Poland's digital ID (e-dowod) on mobile phones could generate exactly this kind of proof: a hotel checks you're 18+ from a QR code without seeing your birth date. The circuit behind it is a range check like the one we're building here. (More on real-world ZK use cases in the Krakow meetup Q&A.)

Now imagine you need to prove to someone that this returned true for a specific value of x, without revealing x. The verifier can't run your code on your private inputs. They get a proof, a small piece of data, and they verify it mathematically.

To produce that proof, you need an execution trace.


What is an execution trace?

An execution trace is a table. Each row captures a value that was relevant to your computation. You fill it in while running the program, before any proving happens.

For is_valid_age(45), the two conditions x >= 18 and x <= 90 get rewritten as subtractions: x - 18 (should be non-negative if x >= 18) and 90 - x (should be non-negative if x <= 90). This rewriting is part of translating code into polynomials. It's usually the hardest part of writing circuits, because the equations have to catch every corner case and enforce the same conditions as the original code. More on how constraints work in later posts. For now, these are the values we track:

| step   | value |
| ------ | ----- |
| x      | 45    |
| x - 18 | 27    |
| 90 - x | 45    |
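Before any field arithmetic enters the picture, recording the trace is just running the program and writing down the intermediates. A minimal sketch with plain integers (the `trace` array is a hypothetical stand-in for the table above):

```rust
// Sketch: recording the trace values for is_valid_age(45)
// with plain integers, before any field arithmetic is involved.
fn main() {
    let x: u32 = 45;
    let lower = x - 18; // 27, non-negative iff x >= 18
    let upper = 90 - x; // 45, non-negative iff x <= 90

    // the three tracked values, in trace order
    let trace = [x, lower, upper];
    println!("{:?}", trace); // [45, 27, 45]
}
```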

That's it. You're writing down every intermediate value so a polynomial can encode them later. The polynomial doesn't execute your logic. It encodes this table, and constraints check that the numbers are consistent.

OK, so we rewrote the comparison from the if as subtractions. But we still need to check that the result is non-negative, which looks like the same kind of check we started with. It is, almost, but not quite.


Why not just compare directly?

Here's where it gets interesting.

ZK proofs operate over finite fields. In a finite field with modulus p, every number is in the range [0, p-1] and arithmetic wraps around. There is no concept of "greater than" or "less than" in the algebraic sense.

This means x - 18 >= 0 is meaningless. If x = 5, then x - 18 doesn't equal -13. It equals p - 13, which is a large positive-looking number:

fn main() {
    let p: i64 = 101;

    // invalid: x = 5, below lower bound
    // in F_101: 5 - 18 = -13, and -13 mod 101 = 88
    println!("x=5:  x-18 mod p = {}", (5  - 18_i64).rem_euclid(p)); // 88

    // valid: x = 45, inside [18, 90]
    println!("x=45: x-18 mod p = {}", (45 - 18_i64).rem_euclid(p)); // 27
}

Both results look non-negative; the field can't tell them apart by sign.

So at this point it looks like rewriting comparisons as subtractions and checking for non-negative results was completely pointless. In a finite field every result looks non-negative. Bear with me, we're getting there. (If you want more on why circuits only speak addition and multiplication, see Q4 in the meetup Q&A.)


Do you even range-check?

You use bit decomposition.

Wait what???

You take the result of the subtraction and break it down into bits. If it fits in k bits, you know it's a small number. If it doesn't fit, it's huge (because the finite field wrapped it around, and those wrapped numbers are huuuge), and the check fails. That's it. 8 bits can hold 0 to 255, so if x - 18 fits in 8 bits, x is somewhere between 18 and 273.

One check isn't enough. You need two: one for the lower bound (x - 18 fits in k bits, so x >= 18) and one for the upper bound (90 - x fits in k bits, so x <= 90). Both together prove x is between 18 and 90. That's why the trace ends up with two bit decompositions, one per bound.
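The two-bound logic can be sketched with plain `u64` values standing in for field elements (a real circuit enforces this with bit constraints; `fits_in_k_bits` is a hypothetical helper name):

```rust
// Sketch: the "fits in k bits" idea with u64 stand-ins for field elements.
fn fits_in_k_bits(v: u64, k: u32) -> bool {
    v < (1u64 << k) // for k = 8: v must be in [0, 255]
}

fn main() {
    let x: u64 = 45;
    let lower = x - 18; // 27
    let upper = 90 - x; // 45
    // both checks pass only when 18 <= x <= 90
    println!("{}", fits_in_k_bits(lower, 8) && fits_in_k_bits(upper, 8)); // true
}
```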

The gap between 18 and 90 is 72, so 8 bits is more than enough (8 bits can hold up to 255). But each check alone has a range that is too wide:

  • col_lower alone accepts x from 18 up to 273 (because x - 18 just needs to fit in 8 bits)
  • col_upper alone accepts x from -165 up to 90 (because 90 - x just needs to fit in 8 bits)
  • both together accept only x from 18 to 90

Try x = 91 (one above the upper bound). col_lower: 91 - 18 = 73, fits in 8 bits, passes. col_upper: 90 - 91 = p - 1, a ~254-bit number, does not fit in 8 bits, fails. The lower check alone would have let x = 91 through. You need both.

Try x = 17 (one below the lower bound). col_upper: 90 - 17 = 73, fits, passes. col_lower: 17 - 18 = p - 1, does not fit in 8 bits, fails.

use ark_bn254::Fr;

fn main() {
    // x = 91 (one above upper bound)
    let x = Fr::from(91u64);
    println!("x = 91:");
    println!("  x - 18 = {}", x - Fr::from(18u64));       // 73, small
    println!("  90 - x = {}", Fr::from(90u64) - x);       // huge wrapped number
    println!();

    // x = 17 (one below lower bound)
    let x = Fr::from(17u64);
    println!("x = 17:");
    println!("  x - 18 = {}", x - Fr::from(18u64));       // huge wrapped number
    println!("  90 - x = {}", Fr::from(90u64) - x);       // 73, small
}

For x = 91, the lower check gives 73 (small, fits in 8 bits), but the upper check wraps around to a ~254-bit number. For x = 17, it's the other way around. Those giant numbers will never fit in 8 bits.

Here's why that's a hard constraint, not just a flag. To prove a k-bit decomposition, the prover has to provide k individual bits, each either 0 or 1, that add up to the original value when weighted by powers of two. For 27 that looks like:

1*1 + 1*2 + 0*4 + 1*8 + 1*16 + 0*32 + 0*64 + 0*128 = 27

The most you can reach with 8 bits is 1+2+4+8+16+32+64+128 = 255. If the value is 256 or more, no combination of 0s and 1s adds up to it. So we effectively have a non-negative check that works over a finite field, in the form of a polynomial: if the number is genuinely small (a valid subtraction result), it can be represented as the polynomial above. If the field wrapped it around into a giant number, it can't.
There's one more thing needed: a constraint that forces each bit to actually be 0 or 1, not some other field element. More on that in the constraints post.
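The weighted-sum equation above is easy to check mechanically. A sketch that recomputes 27 from its little-endian bits, the same sum the constraint system verifies:

```rust
// Sketch: recomposing a value from its bits weighted by powers of two.
fn main() {
    let bits = [1u64, 1, 0, 1, 1, 0, 0, 0]; // 27 in little-endian bits
    let value: u64 = bits
        .iter()
        .enumerate()
        .map(|(i, b)| b << i) // b * 2^i
        .sum();
    println!("{}", value); // 27
}
```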


The full trace

Now you can see why the trace looks the way it does. For is_valid_age(45):

| row | col_x | col_lower (x-18) | col_upper (90-x) |
| --- | ----- | ---------------- | ---------------- |
| 0   | 45    | 27               | 45               |
| 1   | 0     | 1                | 1                |
| 2   | 0     | 1                | 0                |
| 3   | 0     | 0                | 1                |
| 4   | 0     | 1                | 1                |
| 5   | 0     | 1                | 0                |
| 6   | 0     | 0                | 1                |
| 7   | 0     | 0                | 0                |
| 8   | 0     | 0                | 0                |

col_x is there because without it there's no way to prove that col_lower[0] actually equals x - 18 and not some arbitrary value. Same for col_upper[0] and 90 - x.

You didn't write if. You wrote down what had to be true for the if to pass, for the specific case of your secret value x.


In arkworks: the trace as field elements

Throughout this series we build PLONK by hand using ark-ff, just field arithmetic. Each cell in the trace is a field element.

The snippet below computes the subtraction results and breaks them into bits, the same bit decomposition from the previous section, but now in arkworks code. to_bits_le extracts k least-significant bits from a field element.

use ark_bn254::Fr;
use ark_ff::PrimeField;

fn to_bits_le(val: Fr, k: usize) -> Vec<Fr> {
    let n = val.into_bigint().0[0]; // first u64 limb, valid for small values
    (0..k).map(|i| Fr::from((n >> i) & 1)).collect()
}

let x = Fr::from(45u64);
let lower_bound = x - Fr::from(18u64); // 27
let upper_bound = Fr::from(90u64) - x; // 45

let bits_lower = to_bits_le(lower_bound, 8); // [1, 1, 0, 1, 1, 0, 0, 0]
let bits_upper = to_bits_le(upper_bound, 8); // [1, 0, 1, 1, 0, 1, 0, 0]

Now we assemble the trace. Each column holds the value in row 0 followed by its 8 bits.

// each column: [value, b0, b1, b2, b3, b4, b5, b6, b7]
// col_x carries x itself, needed to prove col_lower[0] = x - 18

let col_x:     Vec<Fr> = [vec![x],           vec![Fr::zero(); 8]].concat();
let col_lower: Vec<Fr> = [vec![lower_bound],  to_bits_le(lower_bound, 8)].concat();
let col_upper: Vec<Fr> = [vec![upper_bound], to_bits_le(upper_bound, 8)].concat();

let trace: Vec<Vec<Fr>> = vec![col_x, col_lower, col_upper];

Visualized:

| row | col_x | col_lower (x-18) | col_upper (90-x) |
| --- | ----- | ---------------- | ---------------- |
| 0   | 45    | 27               | 45               |
| 1   | 0     | 1                | 1                |
| 2   | 0     | 1                | 0                |
| 3   | 0     | 0                | 1                |
| 4   | 0     | 1                | 1                |
| 5   | 0     | 1                | 0                |
| 6   | 0     | 0                | 1                |
| 7   | 0     | 0                | 0                |
| 8   | 0     | 0                | 0                |

What the trace is for

"OK I built this trace, but what's it for?"

Glad you asked.

A common confusion: people think proving IS executing. It's not. These are two separate steps. First, the prover runs the program on the inputs and records every intermediate value into a table; that's exactly what the first snippet in the arkworks section does. That's the execution trace. At this point the prover knows everything: every input, every output, every step.

Then the prover takes the completed trace and turns it into a proof. The constraints don't walk through the execution trace row by row, discovering what happens next. The full trace is already there. They just check that all the numbers are correct. This separation is what allows the verifier to check correctness without re-running the computation.

You pack the data, and the constraints check it. And the data can be packed if and only if the constraints are satisfied.
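To make the separation concrete, here is a sketch of the checks a constraint system performs on the completed trace, written as plain Rust over `u64` stand-ins (`check_trace` is a hypothetical name; the real checks are polynomial identities, not Rust):

```rust
// Sketch: the consistency checks on a completed trace, as plain Rust.
fn check_trace(col_x: &[u64], col_lower: &[u64], col_upper: &[u64]) -> bool {
    let x = col_x[0];
    // row 0 must hold the actual subtraction results
    let links = col_lower[0] == x - 18 && col_upper[0] == 90 - x;
    // every bit cell must be 0 or 1
    let booleans = col_lower[1..].iter().chain(col_upper[1..].iter()).all(|&b| b <= 1);
    // the bits must recompose to the row-0 value (the k-bit range check)
    let sum = |bits: &[u64]| bits.iter().enumerate().map(|(i, b)| b << i).sum::<u64>();
    links && booleans
        && sum(&col_lower[1..]) == col_lower[0]
        && sum(&col_upper[1..]) == col_upper[0]
}

fn main() {
    let col_x     = [45u64, 0, 0, 0, 0, 0, 0, 0, 0];
    let col_lower = [27u64, 1, 1, 0, 1, 1, 0, 0, 0];
    let col_upper = [45u64, 1, 0, 1, 1, 0, 1, 0, 0];
    println!("{}", check_trace(&col_x, &col_lower, &col_upper)); // true
}
```

Notice that nothing here re-runs is_valid_age: the checks only look at numbers already in the table.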


Next up

Post #4, FFT/NTT interpolation: how to turn this table into polynomials. And do some cool stuff with it.

Coming soon.

The FFT/NTT algorithm is covered in Roots of Unity.


If you have questions or want to discuss this post, join the ZK for Rust Devs Discord.


This is post #3 in my "first principles" ZK series for Rust devs. Interactive version with runnable code and animations on rustarians.com.

I'm running a free live session on March 17: From Polynomials to Proof in 60 Minutes. Range check circuit built step by step, with a real use case.

Questions? Found a bug? linkedin.com/in/adam-smolarek
