The original version of this post has interactive code editors where you can run and modify every Rust snippet in the browser.
Welcome, fellow zk-enjoyer.
You've spent evenings staring at tutorials that don't compile. Explanations that assume you have a PhD in algebra. AI-generated content that looks legit until you run the code.
I've been in blockchain since 2016. I've written production zk code. This guide teaches polynomials the way I wish someone taught me.
No PhD required. Code that runs. Let's go.
What to Expect
Non-formal introductions to zkSNARKs internals. Yes, with scary names: Fast Fourier Transform, KZG commitment scheme, univariate polynomials over finite fields. But without eye-bleeding math.
I include fancy names so you'll recognize them in whitepapers later.
The code is written in Rust, edition 2024, using Arkworks libraries. As of January 2026, it compiles and runs.
If something breaks, I'll update it. That's the difference between a real human and a content mill.
The Road Map
- Polynomials over finite fields <- you are here
- Roots of unity
- Execution trace
- Polynomial interpolation via FFT/NTT
- Applying constraints to polynomials
- Packing into a single polynomial
- Commitment scheme (starting with KZG)
- Proof AND Verification
Eight posts. But you'll understand what happens instead of copy-pasting circuits and praying.
Let's go
Remember polynomials from high school? The things nobody explained why they existed?
Turns out they're the fundamental building block of ZK.
Polynomials work like a data structure. Think of them as Maps that only accept numbers.
Your data is bits at a low level anyway, so translation is doable.
In addition to storage, polynomials allow you to constrain exactly which numbers go where.
Today: storing stuff (encoding). Future posts: the constraints magic (business logic).
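To make the "data is just numbers" point concrete, here's a tiny sketch (the placeholder message "zk" is my own example, not part of the scheme):

```rust
// A message is just bytes, and bytes are just small numbers --
// exactly the kind of values a polynomial can hold.
fn main() {
    let message = "zk";
    let as_numbers: Vec<u64> = message.bytes().map(u64::from).collect();
    println!("{:?}", as_numbers);
    // and back again
    let recovered: String = as_numbers.iter().map(|&n| n as u8 as char).collect();
    println!("{}", recovered);
}
```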
Your First Polynomial
Here's a polynomial that encodes a message:
2x^3 - 15x^2 + 28x + 96
If you did not know how it was created, it would look like a regular equation. But plug in the x values 3, 1, 5, and 2, and you get the message back.
use ark_bn254::Fr as Bn254Fr;
use ark_poly::{DenseUVPolynomial, Polynomial, univariate::DensePolynomial};
use std::ops::Neg;
use ark_ff::{BigInteger, PrimeField};
fn main() {
type ScalarField = Bn254Fr;
// P(x) = 2x^3 - 15x^2 + 28x + 96 mod Fr_curve::MODULUS
let polynomial: DensePolynomial<ScalarField> = DensePolynomial::from_coefficients_vec(vec![
ScalarField::from(96),
ScalarField::from(28),
ScalarField::from(15).neg(),
ScalarField::from(2),
]);
let evaluation1 = polynomial.evaluate(&ScalarField::from(3));
let evaluation2 = polynomial.evaluate(&ScalarField::from(1));
let evaluation3 = polynomial.evaluate(&ScalarField::from(5));
let evaluation4 = polynomial.evaluate(&ScalarField::from(2));
println!("polynomial evaluations (x,y) = {:?}",
[(3,evaluation1), (1,evaluation2), (5,evaluation3), (2,evaluation4)]);
let c1: char = evaluation1.into_bigint().to_bytes_le()[0].into();
let c2: char = evaluation2.into_bigint().to_bytes_le()[0].into();
let c3: char = evaluation3.into_bigint().to_bytes_le()[0].into();
let c4: char = evaluation4.into_bigint().to_bytes_le()[0].into();
println!("polynomial evaluations y as letters = {:?}", [c1,c2,c3,c4]);
println!("what the polynomial actually looks like: {:?}", polynomial);
}
You use specific x values (3, 1, 5, 2) as indices to encode the data.
Later, we'll encode business logic (constraints) as polynomials too.
OK, cool, so we've saved something in the form of a polynomial. But WHY?
Because there is this thing called Schwartz-Zippel lemma.
Do not be afraid young padawan, another fancy name, but tool very useful it is.
The one and only, the OG: Schwartz-Zippel lemma
Schwartz-Zippel simplified:
Two different polynomials will almost never give the same output for a random input.
If you know hash functions, this is similar: different inputs almost always produce different outputs. With hashes, the input is data. With Schwartz-Zippel, the input is a polynomial plus a random value.
Why does this matter? You can verify someone has a specific polynomial by asking for ONE evaluation at a random point. One number checks an entire polynomial. Insanely efficient.
Below: two different polynomials evaluated at a random point. Run it a few times and see that evaluations are always different.
use ark_bn254::Fr as Bn254Fr;
use ark_poly::{DenseUVPolynomial, Polynomial, univariate::DensePolynomial};
use ark_std::{
UniformRand,
rand::{SeedableRng, prelude::StdRng},
};
use std::time::{SystemTime, UNIX_EPOCH};
fn main() {
type ScalarField = Bn254Fr;
let polynomial1: DensePolynomial<ScalarField> =
DensePolynomial::from_coefficients_vec(vec![
ScalarField::from(6),
ScalarField::from(2)
]);
let polynomial2: DensePolynomial<ScalarField> =
DensePolynomial::from_coefficients_vec(vec![
ScalarField::from(7),
ScalarField::from(3)
]);
let since_the_epoch = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("I am expecting that some time has passed since EPOCH");
let mut rng: StdRng = SeedableRng::seed_from_u64(since_the_epoch.as_secs());
let x = ScalarField::rand(&mut rng);
let evaluation1 = polynomial1.evaluate(&x);
let evaluation2 = polynomial2.evaluate(&x);
println!("polynomial1 evaluation (x,y) = {:?}", (x, evaluation1));
println!("polynomial2 evaluation (x,y) = {:?}", (x, evaluation2));
assert_ne!(evaluation1, evaluation2)
}
The assertion should never fail. The collision probability is less than 1 in 10^76. For scale: the observable universe has about 10^80 atoms.
Back to "cool" example. You can verify I have the same polynomial by picking random x (say, 42), asking me to evaluate at that point, and comparing results. Same value = same polynomial (with overwhelming probability).
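Here's that verification flow as a toy sketch. I'm using plain integers mod 97 instead of the real 254-bit field, and the coefficients mirror the earlier polynomial (82 plays the role of -15 mod 97):

```rust
// Both sides evaluate their polynomial at one agreed challenge point
// and compare a single number.
fn eval(coeffs: &[u64], x: u64, p: u64) -> u64 {
    // Horner's rule, coefficients ordered from the constant term upwards
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % p)
}

fn main() {
    let p = 97u64;
    let mine = [96, 28, 82, 2];  // 82 acts like -15 mod 97
    let yours = [96, 28, 82, 2]; // claimed to be the same polynomial
    let challenge = 42u64;
    assert_eq!(eval(&mine, challenge, p), eval(&yours, challenge, p));
    println!("match at x = {challenge}: same polynomial (with overwhelming probability)");
}
```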
This extends to constraints and business logic. But that's for future posts.
Key takeaway: Polynomials + Schwartz-Zippel = encode stuff and verify with a single number.
"Wait, Why Does -15 Look So Weird?"
When you print the polynomial, -15 shows up as:
21888242871839275222246405745257275088548364400416034343698204186575808495602
What the hell?
use ark_bn254::Fr as Bn254Fr;
use ark_poly::{DenseUVPolynomial, univariate::DensePolynomial};
use std::ops::Neg;
fn main() {
let polynomial: DensePolynomial<Bn254Fr> =
DensePolynomial::from_coefficients_vec(vec![
Bn254Fr::from(96),
Bn254Fr::from(28),
Bn254Fr::from(15).neg(),
Bn254Fr::from(2),
]);
println!("what the polynomial should look like: \n96 + \n28 * x - \n15 * x^2 + \n2 * x^3");
println!("what the polynomial actually looks like: {:?}", polynomial);
}
That giant number IS -15. BUT in modular arithmetic.
An example with smaller numbers first. Regular numbers go to infinity in both directions. Modular arithmetic gives you finitely many numbers: with mod 29, you have exactly 29 of them (0 through 28). To keep arithmetic working on a finite set, the numbers form a circle (a ring). After 28 comes 0, then 1, 2, 3... Behind 0 comes 28, then 27...
So with mod 29, what acts like -15? You need something that, added to 15, gives 0. That's 14, because 15 + 14 = 29, and there is no 29: after 28 you wrap back to 0. So 15 + 14 = 0 mod 29, which means 14 "acts like" -15.
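The mod-29 arithmetic above can be checked with plain integers, no field library needed:

```rust
// Mod 29 there is no literal -15; 14 plays its role.
fn main() {
    let m: u64 = 29;
    let acts_like_neg_15 = m - 15; // 14
    assert_eq!((15 + acts_like_neg_15) % m, 0); // 15 + 14 wraps to 0
    println!("-15 mod 29 is {}", acts_like_neg_15);
}
```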
Let's check what -15 looks like with the big modulus:
use ark_bn254::Fr as Bn254Fr;
use ark_ff::{BigInteger, PrimeField};
use std::ops::Neg;
fn main() {
type ScalarField = Bn254Fr;
let negative = ScalarField::from(15).neg();
let positive = ScalarField::from(15);
println!(
"negative + positive in modular arithmetic: {:?}",
negative + positive
);
let negative = negative.into_bigint();
let mut positive = positive.into_bigint();
positive.add_with_carry(&negative);
println!("negative + positive in regular arithmetic: {:?}", positive);
println!("ScalarField::MODULUS: {:?}", ScalarField::MODULUS);
}
Same trick works for multiplication. Normally, if you multiply a number by its inverse you get 1. With regular numbers, inverse of 5 is 1/5. Multiply them: 5 x 1/5 = 1.
But with modular arithmetic? No such thing as a fraction (1/5). Instead, there's some integer that "acts like" 1/5. Finding it is much harder, but we can do it with the extended Euclidean algorithm (another fancy name).
use ark_bn254::Fr as Bn254Fr;
use ark_ff::Field;
fn main() {
type ScalarField = Bn254Fr;
let five = ScalarField::from(5);
let five_inverse = five.inverse().unwrap();
println!("5 inverse: {:?}", five_inverse);
println!("5 * 5_inverse: {:?}", five * five_inverse);
}
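If you're curious what `.inverse()` does conceptually, here's a small-number sketch of the extended Euclidean algorithm (my own illustration, not Arkworks internals, which use optimized variants):

```rust
// Inverse of 5 mod 29: find x, y such that 5*x + 29*y = 1;
// then x mod 29 is the inverse of 5.
fn ext_gcd(a: i64, b: i64) -> (i64, i64, i64) {
    if b == 0 {
        (a, 1, 0)
    } else {
        let (g, x, y) = ext_gcd(b, a % b);
        (g, y, x - (a / b) * y)
    }
}

fn main() {
    let (g, x, _) = ext_gcd(5, 29);
    assert_eq!(g, 1); // gcd must be 1 for an inverse to exist
    let inv = x.rem_euclid(29);
    println!("inverse of 5 mod 29: {}", inv); // 6, because 5 * 6 = 30 = 1 mod 29
    assert_eq!(5 * inv % 29, 1);
}
```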
Why do we care?
Addition/subtraction and multiplication/division work a bit differently. Instead of regular numbers we have numbers modulo some prime.
Because we have all those things we can use another fancy name: Finite Field. Just a name you'll see in white papers. Now you know what it means.
We can call it many other names, like Knight of the Square Table, or Humpty Dumpty, but someone was faster and already called it Finite Field, and it kinda stuck. The guy was Évariste Galois.
Those finite fields have a critical property: multiplication and exponentiation are easy to compute, but going backwards (taking discrete logarithms) is stupidly hard. ZK security depends on this asymmetry. Post #7 (KZG commitment scheme) shows exactly how.
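A toy demo of that asymmetry mod 29 (real systems use ~254-bit primes, where the backwards direction goes from "slow" to "infeasible"):

```rust
fn main() {
    let (p, g) = (29u64, 2u64);
    // forward: computing g^x mod p is a simple loop
    // (or fast square-and-multiply in real libraries)
    let pow = |x: u32| (0..x).fold(1u64, |acc, _| acc * g % p);
    let y = pow(17);
    // backward: recovering x from y means searching -- a discrete logarithm
    let x = (0..28u32).find(|&e| pow(e) == y).unwrap();
    println!("2^{} mod 29 = {}", x, y); // 2^17 mod 29 = 21
    assert_eq!(x, 17);
}
```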
What You Just Did
You computed evaluation of univariate polynomials over a finite field. Specifically, the scalar field of the elliptic curve BN254.
Told you at the beginning: these are names. They can't hurt you.
Next Up: Roots of Unity
How do we efficiently put data INTO polynomials? By using something called "roots of unity."
It's a bunch of numbers with useful properties. The name sounds intimidating. It isn't.
This is post #1 in my "first principles" ZK series for Rust devs. Interactive version with runnable code on rustarians.com.
I'm running a free live session on March 17: From Polynomials to Proof in 60 Minutes. Range check circuit built step by step, with a real use case.
Questions? Found a bug? linkedin.com/in/adam-smolarek