
Lattice-Based Cryptography: Foundation of PQC


Why Lattices Matter

Three of the four algorithms NIST selected for post-quantum standardization in 2022 are built on lattice mathematics: ML-KEM (FIPS 203) for key encapsulation, ML-DSA (FIPS 204) for digital signatures, and FN-DSA (FIPS 206, still in draft) for compact signatures. Measured by the number of selected algorithms, lattice-based cryptography is the dominant foundation of the post-quantum era.

This article explains what lattices are, why the mathematical problems they pose are believed to resist quantum computers, and how ML-KEM and ML-DSA translate those hard problems into practical encryption and signing algorithms. Every claim about hardness, history, and algorithm design is grounded in published academic research and NIST documentation.

What Is a Lattice?

A lattice is a regular, repeating grid of points in multi-dimensional space. In two dimensions, think of an infinite sheet of graph paper where every intersection is a lattice point. Mathematically, a lattice is defined by a set of basis vectors: you generate every point in the lattice by taking integer combinations of those basis vectors.

For example, in two dimensions, if your basis vectors are (1, 0) and (0, 1), you get the standard integer grid. But basis vectors do not have to be perpendicular or unit length. The vectors (2, 1) and (1, 3) also define a lattice, just one with a different shape and spacing.
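
To make this concrete, here is a minimal Python sketch (assuming NumPy is available) that enumerates a few points of the lattice spanned by (2, 1) and (1, 3) by taking integer combinations of the basis vectors; the names and parameters are illustrative only.

```python
# Minimal sketch: generate lattice points as integer combinations of basis vectors.
# Illustrative only; cryptographic lattices live in hundreds of dimensions.
import numpy as np

B = np.array([[2, 1],    # basis vector b1 = (2, 1)
              [1, 3]])   # basis vector b2 = (1, 3)

points = []
for a in range(-2, 3):          # integer coefficient for b1
    for b in range(-2, 3):      # integer coefficient for b2
        points.append(a * B[0] + b * B[1])

for p in points[:5]:
    print(p)   # e.g. [-6 -8], every point is a*b1 + b*b2 for integers a, b
```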

Cryptographic lattices exist in hundreds or thousands of dimensions, not just two. This is crucial, because the problems that are easy in low dimensions become extraordinarily hard in high dimensions.

Hard Problems on Lattices

The security of lattice-based cryptography rests on the computational difficulty of certain geometric problems. The two foundational problems are the Shortest Vector Problem and the Closest Vector Problem.

The Shortest Vector Problem (SVP)

Given a lattice (defined by its basis), find the shortest non-zero vector in the lattice. In two dimensions, this is trivial: you can visually inspect the grid. In 500 dimensions, no known algorithm can solve it efficiently. The best known classical algorithms, based on lattice sieving (a technique that searches for short vectors by combining slightly longer ones), run in exponential time: roughly 2^(0.292·n), where n is the lattice dimension.

Critically, the best known quantum algorithms for SVP do not do fundamentally better. Quantum sieving achieves roughly 2^(0.265·n), a modest improvement in the exponent's constant rather than the exponential speedup that Shor's algorithm provides against RSA and elliptic curve cryptography. There is no known polynomial-time quantum algorithm for SVP, and the problem has been studied for over 40 years.
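
As a rough, purely illustrative calculation (real parameter analysis for ML-KEM uses the more involved core-SVP methodology with BKZ block sizes, not this direct formula), plugging a sample dimension into these asymptotic exponents shows why both attacks remain far out of reach:

```python
# Illustrative only: asymptotic sieving exponents evaluated at a sample dimension.
# Real security estimates use the core-SVP methodology and BKZ block sizes.
n = 400
classical_bits = 0.292 * n   # best known classical sieving: ~2^(0.292*n)
quantum_bits = 0.265 * n     # best known quantum sieving:   ~2^(0.265*n)

print(f"classical sieving cost ~ 2^{classical_bits:.0f}")  # ~2^117
print(f"quantum sieving cost   ~ 2^{quantum_bits:.0f}")    # ~2^106
```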

The Closest Vector Problem (CVP)

Given a lattice and an arbitrary target point in space, find the lattice point closest to the target. CVP is at least as hard as SVP (there are known reductions from SVP to CVP), and it similarly resists both classical and quantum computation in high dimensions. CVP is the geometric intuition behind the Learning With Errors problem described below.
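
In low dimensions with a good basis, a simple heuristic (Babai's rounding) solves CVP well. The sketch below, assuming NumPy, finds the lattice point nearest a target for the 2-D basis used earlier; the same heuristic breaks down badly in high dimensions with the skewed bases an attacker actually faces, which is exactly the gap lattice cryptography exploits.

```python
# Babai's rounding heuristic for the Closest Vector Problem (toy 2-D example).
import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # lattice basis (rows are basis vectors)
target = np.array([4.3, 6.8])       # arbitrary point in the plane

coeffs = np.linalg.solve(B.T, target)        # express the target in the basis
lattice_point = B.T @ np.rint(coeffs)        # round the coefficients to integers

print(lattice_point)   # nearby lattice point: [4. 7.] = 1*b1 + 2*b2
```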

Learning With Errors: The Core Hard Problem

The Learning With Errors (LWE) problem was introduced by Oded Regev in 2005. Regev proved a remarkable result: solving LWE on average is at least as hard as solving several standard lattice problems (including approximate SVP) in the worst case. This worst-case-to-average-case reduction is one of the strongest security guarantees in all of cryptography.

Here is how LWE works. You have a secret vector s with n integer components. You also have a public random matrix A. You compute the product A * s and then add a small random error vector e to get:

b = A * s + e (mod q)
where A is the public random matrix, s is the secret, e is the small error vector, and all arithmetic is done modulo an integer q. Given (A, b), recovering s is believed to be hard, even for quantum computers.

The addition of the small error e is what makes the problem hard. Without error, recovering s from A and b is straightforward linear algebra (Gaussian elimination). But the error transforms it into a problem equivalent to finding the closest lattice point to a noisy target, which is a variant of CVP.
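
Here is a minimal NumPy sketch of a toy LWE instance (deliberately tiny, insecure parameters) showing the relationship b = A·s + e and why the published pair (A, b) only reveals the secret through a noisy system of equations:

```python
# A toy Learning With Errors (LWE) instance. Parameters are illustrative only.
import numpy as np

n, m, q = 8, 16, 97          # secret length, number of samples, modulus
rng = np.random.default_rng(0)

s = rng.integers(0, q, size=n)              # secret vector
A = rng.integers(0, q, size=(m, n))         # public random matrix
e = rng.integers(-2, 3, size=m)             # small error, entries in {-2..2}

b = (A @ s + e) % q                          # public vector b = A*s + e (mod q)

# Without the error term, s could be recovered from (A, b) by Gaussian
# elimination mod q. With the error, the attacker must find the lattice
# point A*s closest to the noisy target b -- a CVP-style problem.
print("public (A, b) published; secret s kept private")
print("residual (b - A*s), re-centered:", ((b - A @ s + q // 2) % q) - q // 2)
```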

The genius of Regev's 2005 result is the connection: anyone who can efficiently solve LWE can also solve worst-case lattice problems like approximate SVP. Since no one has found efficient algorithms for SVP in over four decades of trying (including with quantum computers), we have strong confidence that LWE is hard.

From LWE to Module-LWE

Plain LWE works, but it has a practical drawback: the public matrix A is enormous. For cryptographically relevant dimensions (n = 512 or higher), storing and transmitting a fully random matrix requires hundreds of kilobytes to megabytes of data. This is impractical for real-world cryptography.

The solution is structured lattices. In 2010, Vadim Lyubashevsky, Chris Peikert, and Oded Regev introduced Ring-LWE, which replaces the random matrix with a structured matrix derived from polynomial rings. This dramatically reduces key sizes while preserving provable security reductions (though to slightly different problems, namely worst-case problems on ideal lattices).

Module-LWE (MLWE), used by both ML-KEM and ML-DSA, is a middle ground between plain LWE and Ring-LWE. Instead of a single polynomial ring element, Module-LWE works over small matrices of polynomial ring elements. This provides:

  • Efficiency: Keys and ciphertexts are orders of magnitude smaller than with plain LWE
  • Flexibility: Security levels can be adjusted by changing the module rank (the matrix dimension) rather than the underlying ring
  • Conservative security: Module-LWE is believed to be at least as hard to break as Ring-LWE, whose security rests on ideal-lattice problems (Ring-LWE is the rank-1 special case of Module-LWE), so assuming Module-LWE hardness is the more conservative choice

NIST chose Module-LWE as the basis for both ML-KEM and ML-DSA precisely because of this balance between efficiency and security conservatism.
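
To make the "structured matrix" idea concrete, here is a minimal sketch of arithmetic in the polynomial ring Z_q[x]/(x^n + 1) that Ring-LWE and Module-LWE build on. One ring element (n coefficients) stands in for an entire n × n negacyclic block of the plain-LWE matrix, which is where the size savings come from. The toy parameters below are illustrative, not the ML-KEM values (n = 256, q = 3329), and real implementations multiply via the number-theoretic transform rather than the schoolbook loop shown here.

```python
# Toy arithmetic in the ring Z_q[x]/(x^n + 1) used by Ring-LWE and Module-LWE.
# Illustrative parameters; ML-KEM uses n = 256, q = 3329 and NTT-based multiplication.
import numpy as np

n, q = 8, 97

def ring_mul(a, b):
    """Multiply two ring elements (length-n coefficient vectors) mod (x^n + 1, q)."""
    full = np.zeros(2 * n, dtype=int)
    for i in range(n):
        for j in range(n):
            full[i + j] += a[i] * b[j]
    # Reduce modulo x^n + 1: the coefficient of x^(n+k) wraps around as -x^k.
    return (full[:n] - full[n:]) % q

rng = np.random.default_rng(1)
a = rng.integers(0, q, n)            # one ring element replaces an n x n matrix block
s = rng.integers(0, 3, n)            # small secret polynomial
e = rng.integers(-1, 2, n)           # small error polynomial
b = (ring_mul(a, s) + e) % q         # a Ring-LWE sample: b = a*s + e in the ring

# A Module-LWE matrix of rank k needs only k*k*n coefficients instead of (k*n)^2.
print(b)
```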

How ML-KEM Uses Lattice Cryptography

ML-KEM (FIPS 203) is a key encapsulation mechanism: it allows two parties to establish a shared secret key over an insecure channel. Here is a simplified description of how it works (a toy numerical sketch follows the steps below):

  1. Key generation: The receiver generates a random secret vector s and a public matrix A (derived from a seed for compactness). The public key is (A, b = A*s + e) where e is small noise. The private key is s.
  2. Encapsulation: The sender generates a random message m, derives a random vector r, and computes ciphertext components using the public key. The ciphertext encodes m in a way that is obscured by the MLWE problem.
  3. Decapsulation: The receiver uses the private key s to compute an approximation of the encoded message. Because s is the secret, the noise can be stripped away and m recovered exactly. The shared secret is then derived from m using a hash function.
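
The following is a toy, Regev-style sketch of the LWE encryption underlying this flow, written with NumPy and deliberately tiny parameters. Real ML-KEM works over module lattices of polynomials, compresses ciphertexts, derives the shared secret with hash functions, and wraps everything in the Fujisaki-Okamoto transform; none of that is shown here.

```python
# Toy Regev-style encryption of a single bit, illustrating the KEM flow above.
# Deliberately insecure toy parameters; not ML-KEM.
import numpy as np

n, m, q = 16, 32, 3329
rng = np.random.default_rng(2)

# 1. Key generation: public (A, b = A*s + e), private s.
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)
b = (A @ s + e) % q

# 2. Encapsulation: encode a random bit, obscured by the LWE structure.
bit = int(rng.integers(0, 2))
r = rng.integers(0, 2, size=m)                      # random binary selector
u = (A.T @ r) % q                                   # ciphertext part 1
v = (int(b @ r) + bit * (q // 2)) % q               # ciphertext part 2

# 3. Decapsulation: use s to strip the noise and recover the bit.
d = (v - int(u @ s)) % q                            # equals e.r + bit*(q//2) mod q
recovered = 1 if q // 4 < d < 3 * q // 4 else 0     # decode: near q/2 means bit = 1
assert recovered == bit
print("shared bit recovered:", recovered)
```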

An attacker who intercepts the public key and ciphertext faces the MLWE problem: they would need to recover s from (A, b), which requires solving approximate SVP on a high-dimensional lattice. No known classical or quantum algorithm can do this efficiently.

How ML-DSA Uses Lattice Cryptography

ML-DSA (FIPS 204) is a digital signature algorithm based on a "Fiat-Shamir with Aborts" framework applied to the MLWE problem. The construction works differently from ML-KEM but relies on the same underlying lattice hardness (a toy sketch follows the steps below):

  1. Key generation: Generate a secret signing key consisting of short polynomial vectors (s1, s2). The public verification key contains a matrix A and the value t = A*s1 + s2.
  2. Signing: To sign a message, the signer picks a random masking vector y, computes w = A*y, hashes (w, message) to get a challenge c, and computes the response z = y + c*s1. The critical step is a rejection sampling check: if z would leak information about s1 (because it is too large or has the wrong distribution), the signer discards the attempt and tries again with a new y. This "abort" mechanism ensures that published signatures reveal nothing about the secret key.
  3. Verification: The verifier checks that A*z - c*t is close to the original commitment w and that z is within acceptable bounds. This can be done using only the public key.
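
Below is a toy sketch of this "Fiat-Shamir with Aborts" loop over plain integer vectors, assuming NumPy and hashlib. It follows the simplified description above (the commitment w travels with the signature and the verifier checks that A·z - c·t stays close to it); real ML-DSA instead recomputes the challenge from the high bits of A·z - c·t, works over polynomial modules, and uses carefully chosen bounds.

```python
# Toy "Fiat-Shamir with Aborts" signature, matching the simplified steps above.
# Not ML-DSA: real ML-DSA uses polynomial modules and never transmits w directly.
import hashlib
import numpy as np

n, q = 16, 8380417
ETA, TAU = 2, 39                 # secret coefficient bound, challenge size
GAMMA = 1 << 17                  # masking range for y
BETA = TAU * ETA                 # bound on the entries of c*s1 and c*s2

rng = np.random.default_rng(3)

def hash_to_challenge(w, msg):
    """Derive a small integer challenge c in [1, TAU] from the commitment and message."""
    h = hashlib.sha256(w.tobytes() + msg).digest()
    return 1 + int.from_bytes(h[:4], "big") % TAU

# Key generation: t = A*s1 + s2 with short secrets s1, s2.
A = rng.integers(0, q, size=(n, n))
s1 = rng.integers(-ETA, ETA + 1, size=n)
s2 = rng.integers(-ETA, ETA + 1, size=n)
t = (A @ s1 + s2) % q

def sign(msg):
    while True:                                   # rejection-sampling ("abort") loop
        y = rng.integers(-GAMMA + 1, GAMMA, size=n)
        w = (A @ y) % q
        c = hash_to_challenge(w, msg)
        z = y + c * s1
        if np.max(np.abs(z)) < GAMMA - BETA:      # reject any z that could leak s1
            return w, c, z

def verify(msg, sig):
    w, c, z = sig
    if np.max(np.abs(z)) >= GAMMA - BETA or c != hash_to_challenge(w, msg):
        return False
    diff = (A @ z - c * t - w) % q                # equals -c*s2 mod q, which is small
    diff = np.minimum(diff, q - diff)             # centered magnitude
    return bool(np.max(diff) <= BETA)             # A*z - c*t must be close to w

sig = sign(b"hello lattice")
print("signature valid:", verify(b"hello lattice", sig))
```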

The security argument is that forging a signature requires either recovering the short secret vectors (s1, s2) from the public key (which is the MLWE problem) or finding a valid (z, c) pair without knowing the secret (which requires solving a related hard lattice problem).

NIST Security Levels

NIST defines five security levels for post-quantum algorithms, based on the computational effort required to break them. The levels are defined relative to the cost of breaking specific symmetric algorithms:

NIST Level | Equivalent Security                  | ML-KEM Parameter Set | ML-DSA Parameter Set
Level 1    | At least as hard as breaking AES-128 | ML-KEM-512           | ML-DSA-44
Level 3    | At least as hard as breaking AES-192 | ML-KEM-768           | ML-DSA-65
Level 5    | At least as hard as breaking AES-256 | ML-KEM-1024          | ML-DSA-87

The numbers in the parameter set names (512, 768, 1024 for ML-KEM; 44, 65, 87 for ML-DSA) refer to internal parameters: for ML-KEM they are the module rank (2, 3, or 4) multiplied by the polynomial degree 256, and for ML-DSA they encode the dimensions of the public matrix A (4×4, 6×5, 8×7). Higher values mean larger keys and signatures but stronger security margins.

For most applications, NIST Level 1 (ML-KEM-512 / ML-DSA-44) provides ample security. Level 3 and Level 5 are recommended for classified data, long-term secrets (data that must remain confidential for 30+ years), or environments with stringent regulatory requirements.

Advantages of Lattice-Based Cryptography

  • Performance: Lattice operations involve matrix-vector multiplication over polynomial rings, which is highly parallelizable and fast on modern hardware. ML-KEM key generation, encapsulation, and decapsulation each complete in microseconds on standard processors.
  • Reasonable key sizes: ML-KEM-768 public keys are 1,184 bytes and ciphertexts are 1,088 bytes. While larger than X25519's 32-byte keys, these sizes are practical for all modern applications including TLS, email, and data encryption.
  • Strong theoretical foundations: Worst-case-to-average-case reductions (Regev 2005, Lyubashevsky-Peikert-Regev 2010) provide security guarantees that most other cryptographic assumptions lack. Breaking the average-case cryptosystem implies solving worst-case lattice problems.
  • Versatility: The same lattice framework supports KEMs, signatures, fully homomorphic encryption, and other advanced primitives. This makes lattice cryptography a long-term investment for the research community and implementers.
  • Extensive analysis: Lattice problems have been studied in mathematics and computer science since the work of Minkowski in the 1890s. The cryptographic applications have been under intense scrutiny since at least 1996 (Ajtai's worst-case hardness results). CRYSTALS-Kyber and CRYSTALS-Dilithium specifically have been publicly analyzed since their submission to NIST in 2017.

Limitations and Open Questions

No cryptographic system is without tradeoffs. Lattice-based cryptography has several limitations worth understanding:

  • Larger keys and ciphertexts than classical crypto: ML-KEM-768 keys are about 37 times larger than X25519 keys. For most applications this is acceptable, but for extremely constrained devices (smart cards, IoT sensors with minimal memory) it can be a challenge.
  • Newer than RSA and ECC: While lattice problems have a long mathematical history, their use in deployed cryptographic systems is recent. RSA has been in production since the 1970s and ECC since the 2000s. Lattice-based systems have only been standardized since 2024.
  • Structured lattice assumptions: The efficiency of ML-KEM and ML-DSA comes from using structured (Module-LWE) rather than unstructured (plain LWE) lattices. While no attack exploiting this structure has been found, it remains a theoretical concern. This is one reason NIST selected the hash-based SLH-DSA (FIPS 205) and the code-based HQC as non-lattice alternatives.
  • No unconditional security proof: LWE's security is based on the assumed hardness of lattice problems. If someone discovers a polynomial-time algorithm for SVP (classical or quantum), all lattice-based cryptography would be broken. The cryptographic community considers this extremely unlikely given decades of failed attempts, but it is not mathematically proven to be impossible.

Why Hybrid Deployment Is Essential

Given these considerations, the security community recommends hybrid deployment: combining a lattice-based algorithm with a classical algorithm in every operation. QNSQY implements this by pairing ML-KEM with X25519 for key encapsulation and ML-DSA with Ed25519 for signatures.

In a hybrid scheme, an attacker must break both algorithms to compromise the operation. If a breakthrough breaks lattice cryptography, X25519 and Ed25519 still protect the data (until quantum computers large enough to run Shor's algorithm are built). If quantum computers break X25519 and Ed25519, ML-KEM and ML-DSA still protect the data. The combined system is at least as strong as the stronger of the two components.
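
As an illustration of the combiner idea (not QNSQY's actual implementation), the sketch below derives one shared key from both an X25519 exchange and an ML-KEM encapsulation. The X25519 and HKDF calls use the real pyca/cryptography API; mlkem768_encapsulate is a hypothetical placeholder standing in for whichever post-quantum library provides ML-KEM-768, since its name and signature are assumptions.

```python
# Hybrid key establishment sketch: X25519 + ML-KEM shared secrets combined with HKDF.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def mlkem768_encapsulate(public_key: bytes) -> tuple[bytes, bytes]:
    """Placeholder: returns (ciphertext, shared_secret) from an ML-KEM-768 library."""
    raise NotImplementedError("plug in your ML-KEM implementation here")

def hybrid_encapsulate(peer_x25519_pub: x25519.X25519PublicKey, peer_mlkem_pub: bytes):
    # Classical half: ephemeral X25519 exchange.
    eph = x25519.X25519PrivateKey.generate()
    classical_secret = eph.exchange(peer_x25519_pub)

    # Post-quantum half: ML-KEM encapsulation against the peer's public key.
    mlkem_ciphertext, pq_secret = mlkem768_encapsulate(peer_mlkem_pub)

    # Combine both secrets; an attacker must break both to recover the key.
    key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid x25519+mlkem768 example",
    ).derive(classical_secret + pq_secret)

    return key, eph.public_key(), mlkem_ciphertext
```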

This defense-in-depth approach is consistent with guidance from NIST, NSA (CNSA 2.0), and the European Union Agency for Cybersecurity (ENISA). It is the safest strategy during the transition period when both classical and post-quantum algorithms are being deployed and tested at scale.


Protect Your Data with Lattice-Based Cryptography

QNSQY uses ML-KEM + X25519 hybrid encryption and ML-DSA + Ed25519 hybrid signatures on every tier, including free.


Originally published at quantumsequrity.com.
