tags: blockchain quantumcomputing python machinelearning
Every blockchain security article talks about the same threat: Shor's algorithm breaking ECDSA keys. Quantum computer cracks your private key, steals your funds. That's real. NIST finalised its first post-quantum cryptography standards in 2024 precisely because this is coming.
But nobody talks about the second problem.
If you replace ECDSA with a post-quantum signature scheme and keep everything else the same — you have still left the door open for quantum hardware to dominate block production. Whoever builds the fastest quantum computer controls the chain. The US and China are both spending billions on this. That is a different kind of centralisation, and arguably worse because it is invisible until it is already too late.
I spent several months building a blockchain from scratch that solves both problems. It is called XEQUES. Here is exactly how it works.
The two problems most post-quantum blockchains ignore
Problem 1 — Keys
Bitcoin and Ethereum use ECDSA. Shor's algorithm breaks ECDSA completely. Every address that has ever broadcast a transaction has its public key on-chain. Those keys are being harvested right now. The harvest-now-decrypt-later attack is already happening: adversaries store blockchain data today to decrypt when hardware catches up.
XEQUES uses Lamport One-Time Signatures over SHA3-256. Lamport is the simplest and most auditable post-quantum signature scheme — it relies only on the hardness of inverting SHA3. No elliptic curves, no lattices, no trusted setup. Under Grover's algorithm (the best quantum attack on hash functions) it retains 2^128 security.
Each wallet address is a Merkle tree of Lamport keypairs. You get a tree of one-time keys. Each transaction consumes one leaf. When all leaves are used you generate a new wallet. The Merkle root is the wallet address.
# Wallet generation
wallet = MerkleWallet(depth=10) # 2^10 = 1024 one-time keys per wallet
address = wallet.root_hash # Merkle root is the address
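If you have never seen Lamport signatures, the whole scheme fits in a dozen lines. This is a textbook sketch over SHA3-256, not the XEQUES wallet code, but it shows why the security reduces entirely to hash preimage resistance: the private key is 256 pairs of random preimages, the public key is their hashes, and signing reveals one preimage per bit of the message digest.

```python
import hashlib
import secrets

H = lambda b: hashlib.sha3_256(b).digest()

def keygen():
    # 256 pairs of 32-byte secrets; the public key is their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    # the 256 bits of SHA3-256(msg), most significant bit first
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # reveal one preimage per digest bit; a key must NEVER sign twice
    return [pair[bit] for pair, bit in zip(sk, bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"send 5 XEQ to bob")
assert verify(pk, b"send 5 XEQ to bob", sig)
assert not verify(pk, b"send 500 XEQ to mallory", sig)
```

The one-time restriction is exactly why the Merkle tree exists: each leaf is one of these keypairs, and the tree root lets a single address authorise 1024 signatures.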
Problem 2 — Consensus
This is the one nobody talks about.
If consensus is a speed race — first node to solve the puzzle gets the block — then quantum hardware wins eventually. A quantum computer running at 1000x classical speed does not need 51% of stake. It just needs to be faster. Block production centralises around whoever has the best hardware.
The fix is to use a puzzle that is equally hard for everyone, including quantum computers.
The #P-hard consensus mechanism
XEQUES uses a Proof of Quantum Control VDF (PoQC-VDF). The puzzle exploits a result from quantum complexity theory that most blockchain developers have never encountered.
Simulating a random quantum circuit in the anti-concentration regime is #P-hard.
Not just NP-hard. #P-hard. This is the same hardness property behind Google's Sycamore experiment — the reason Google's paper claimed quantum advantage was that no classical algorithm can efficiently simulate their random circuit. But here is the critical insight:
A quantum computer cannot efficiently simulate another quantum computer's random circuit either.
This hardness is symmetric. Classical hardware cannot do it fast. Quantum hardware cannot do it fast. The playing field is physically level.
The circuit is constructed deliberately to land in the hard regime:
import math
import random

def random_vdf(n_qubits=5, depth=8):
    """
    Anti-concentration circuit construction:
    - Hadamard layer per round (non-diagonal, spreads amplitude across the basis)
    - T gates on every qubit per layer (non-Clifford, breaks Clifford shortcuts)
    - Random Rz rotations (continuous, blocks algebraic attacks)
    - Alternating brick CZ entanglement (drives anti-concentration)
    """
    circuit = []
    for layer in range(depth):
        # Hadamard layer: T, Rz and CZ are all diagonal, so without a
        # non-diagonal gate the output would stay pinned to |00...0>
        for q in range(n_qubits):
            circuit.append(('H', q))
        # T gates — takes circuit outside Clifford group
        for q in range(n_qubits):
            circuit.append(('T', q))
        # Random Rz rotations — continuous parameter space
        for q in range(n_qubits):
            theta = random.uniform(0, 2 * math.pi)
            circuit.append(('Rz', q, theta))
        # Brick CZ entanglement — alternating pattern (Sycamore style)
        pairs = [(q, q + 1) for q in range(0 if layer % 2 == 0 else 1, n_qubits - 1, 2)]
        for pair in pairs:
            circuit.append(('CZ', pair))
    return circuit
Every submitted answer is checked for correctness AND for whether the circuit was actually in the hard regime. XEQUES measures the second moment of the output distribution against the Porter-Thomas target (2/2^n). Puzzles outside the hard regime are rejected.
def is_in_hard_regime(distribution):
    n = len(distribution)  # number of outcomes, 2^n_qubits
    second_moment = sum(p**2 for p in distribution.values())
    pt_target = 2 / n  # Porter-Thomas prediction for random circuits
    return second_moment <= 3.0 * pt_target  # reject if too concentrated
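At 5 qubits a brute-force statevector check is cheap (32 amplitudes), which makes the regime filter easy to test. The sketch below is my own toy simulator, not XEQUES code; note that it inserts a Hadamard layer each round, because a circuit built only from the diagonal gates T, Rz and CZ would leave all amplitude on |00000⟩ and could never reach Porter-Thomas statistics:

```python
import math
import random
import numpy as np

def apply_1q(state, U, q, n):
    """Apply a 2x2 gate U to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(U, psi, axes=([1], [q]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(state, a, b, n):
    """Flip the phase of basis states where qubits a and b are both 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a] = idx[b] = 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]], dtype=complex) / math.sqrt(2)
T = np.diag([1, np.exp(1j * math.pi / 4)])

def simulate(n_qubits=5, depth=8):
    """Toy random circuit in the article's gate set, plus a Hadamard
    layer per round so the circuit is not purely diagonal."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    for layer in range(depth):
        for q in range(n_qubits):
            state = apply_1q(state, H, q, n_qubits)
            state = apply_1q(state, T, q, n_qubits)
            theta = random.uniform(0, 2 * math.pi)
            Rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
            state = apply_1q(state, Rz, q, n_qubits)
        start = 0 if layer % 2 == 0 else 1
        for q in range(start, n_qubits - 1, 2):
            state = apply_cz(state, q, q + 1, n_qubits)
    return np.abs(state) ** 2

probs = simulate()
second_moment = float(np.sum(probs ** 2))
pt_target = 2 / len(probs)  # Porter-Thomas prediction, 2 / 2^n
print(f"second moment = {second_moment:.4f}, PT target = {pt_target:.4f}")
```

For a deep enough random circuit the second moment should hover near the Porter-Thomas target of 2/32 = 0.0625 rather than the uniform-distribution floor of 1/32.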
Proof of Command Correctness — ordering without a central authority
Standard signatures prove who sent a transaction. They do not prove when or in what order relative to other transactions from the same sender.
XEQUES adds a per-sender hash chain to every transaction. Every command carries a chain hash that links it to all previous commands from that address.
chain_hash[0] = SHA3(address + command_hash[0])
chain_hash[n] = SHA3(chain_hash[n-1] + command_hash[n])
This is called Proof of Command Correctness (PoCC).
What it proves simultaneously:
- Authorisation — Lamport signature proves who sent it
- Ordering — chain hash proves sequence relative to all prior commands
- Integrity — any field tampered with breaks the chain hash
- Replay attacks insert a copy of a previous transaction. PoCC rejects them because the chain hash would duplicate a used position.
- Reorder attacks try to move transaction N before transaction N-1. PoCC rejects them because the chain breaks.
- Field tampering changes an amount or address after signing. PoCC rejects it because the command hash changes.
All of this is caught at mempool admission, before a block is ever produced.
class PoCCRegistry:
    def __init__(self):
        self.chains = {}  # sender -> last accepted chain_hash

    def verify(self, command):
        last = self.chains.get(command.sender)
        expected = (sha3(last + command.command_hash) if last
                    else sha3(command.sender + command.command_hash))
        return command.chain_hash == expected
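To make this concrete, here is a self-contained sketch of the chain-hash scheme. The Command class and field names are stand-ins I made up for illustration, not the real XEQUES types, but the hash chain itself follows the two formulas above:

```python
import hashlib
from dataclasses import dataclass

def sha3(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

@dataclass
class Command:  # hypothetical stand-in for a XEQUES command
    sender: bytes
    command_hash: bytes
    chain_hash: bytes

class PoCCRegistry:
    def __init__(self):
        self.chains = {}  # sender -> last accepted chain_hash

    def verify(self, command):
        last = self.chains.get(command.sender)
        seed = last if last is not None else command.sender
        return command.chain_hash == sha3(seed + command.command_hash)

    def accept(self, command):
        # mempool admission: verify, then advance the sender's chain head
        if not self.verify(command):
            return False
        self.chains[command.sender] = command.chain_hash
        return True

# Two honest commands from alice, then a replay of the first.
registry = PoCCRegistry()
alice = b"alice-address"
c0_hash = sha3(b"pay bob 5")
c0 = Command(alice, c0_hash, sha3(alice + c0_hash))
c1_hash = sha3(b"pay carol 3")
c1 = Command(alice, c1_hash, sha3(c0.chain_hash + c1_hash))

assert registry.accept(c0)      # chain_hash[0] = SHA3(address + command_hash[0])
assert registry.accept(c1)      # chain_hash[1] = SHA3(chain_hash[0] + command_hash[1])
assert not registry.accept(c0)  # replay: chain head has moved on, hash mismatches
```

The replay fails because the registry's chain head now points at c1, so re-verifying c0 computes a different expected hash.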
A spiking neural network runs inside every validator node
Before PoCC even runs, every transaction is screened by a Leaky Integrate-and-Fire spiking neural network.
Architecture:
- Input: 7 features (amount ratio, fee ratio, velocity, time delta, balance ratio, network deviation, is-new-address)
- Hidden: 20 LIF neurons
- Output: 3 neurons — VALID / SUSPICIOUS / FRAUDULENT
# LIF dynamics
tau_m = 20e-3    # membrane time constant (20 ms)
R_m = 80e6       # membrane resistance (80 MΩ)
V_rest = -65e-3  # resting potential (-65 mV)
V_th = -50e-3    # spike threshold (-50 mV)

dV = (-(V - V_rest) + R_m * I) / tau_m * dt
if V >= V_th:
    spike()
    V = V_rest
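Plugging those constants into a plain Euler loop shows the neuron charging toward threshold and firing. The 250 pA input current and 0.1 ms step are values I picked for illustration, not taken from the XEQUES code:

```python
tau_m = 20e-3    # membrane time constant (20 ms)
R_m = 80e6       # membrane resistance (80 MΩ)
V_rest = -65e-3  # resting potential
V_th = -50e-3    # spike threshold

dt = 1e-4        # 0.1 ms integration step (illustrative)
I = 2.5e-10      # 250 pA constant input current (illustrative)

V = V_rest
spike_times = []
for step in range(2000):  # simulate 200 ms
    V += (-(V - V_rest) + R_m * I) / tau_m * dt
    if V >= V_th:
        spike_times.append(step * dt)  # record the spike time
        V = V_rest                     # reset membrane potential
print(f"{len(spike_times)} spikes in 200 ms")
```

With these numbers the drive R_m·I is 20 mV above rest while the threshold sits 15 mV above rest, so the neuron fires regularly, roughly every τ·ln(20/5) ≈ 28 ms.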
The network learns via Spike-Timing-Dependent Plasticity (STDP). After each confirmed block, weights update based on which transactions were actually confirmed as valid. No external labels. No central model. Each node learns from its own observed history.
Δt = t_post - t_pre

Δw = +A+ · exp(-Δt/τ+)   if Δt > 0   # pre fires before post → strengthen
Δw = -A- · exp(+Δt/τ-)   if Δt < 0   # post fires before pre → weaken
This means nodes that see different transaction patterns develop slightly different weight distributions. The fraud detection emerges from local learning — closer to how biological neural populations work.
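The pair-based update can be written straight from the two STDP formulas. The amplitudes and time constants below are illustrative placeholders, not the values XEQUES ships with:

```python
import math

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes (illustrative)
tau_plus = tau_minus = 20e-3    # STDP time constants (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, Δt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post: causal pairing, strengthen
        return A_plus * math.exp(-dt / tau_plus)
    if dt < 0:   # post fired before pre: anti-causal pairing, weaken
        return -A_minus * math.exp(dt / tau_minus)
    return 0.0
```

The exponential makes the update local in time: spike pairs separated by much more than τ barely move the weight, which is what lets each node learn online from its own recent transaction history.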
Running it takes one command
git clone https://github.com/prajwalaher33/XEQUES
cd XEQUES
pip install numpy
python main.py
The demo spins up a 4-node testnet, mines 5 blocks via PoQC-VDF, runs PoCC verification on transactions from 3 wallets, fires the SNN on each transaction, and shows the stake and governance system.
Output:
=== XEQUES v0.2.0 — 4-node testnet ===
[PoCC] alice command_0: chain_hash=a3f8... VERIFIED
[PoCC] Replay attack detected and rejected
[PoQC-VDF] Block 1 puzzle: anti_concentration_score=1.84 (HARD REGIME ✓)
[PoQC-VDF] Block 1 mined. ΣP=1.000000000000
[SNN] tx_001: VALID (58 STDP weight updates applied)
[Governance] Quadratic vote: FAILED (insufficient support)
What is still missing
This is a research prototype, not a production blockchain. To be honest about what is not built yet:
- No persistent storage — chain lives in memory, restart wipes it
- No real P2P networking — validators are simulated in-process
- No economic incentive layer — staking exists but rewards are not enforced by the protocol
- No test suite — pytest coverage is next on the roadmap
The Python implementation is the reference. A Rust production implementation (xeques-core) is planned once the architecture is stable.
The question I genuinely want answered
Are there known approximate classical or quantum simulation methods that sidestep this hardness at realistic circuit sizes? (At the prototype's current scale of 5 qubits, exact statevector simulation is trivial, only 32 amplitudes, so the real question is how the hardness holds up at depths 5-15 as qubit count grows.) Tensor network methods? Matrix product state approximations?
I have read the literature but I want to hear from people who work on this directly. If the anti-concentration regime can be broken efficiently at these parameters, the consensus mechanism needs to be redesigned. I would rather know now.
GitHub: https://github.com/prajwalaher33/XEQUES
If you find this interesting, a star on the repo means a lot for a solo project at this stage. And if you spot a flaw in the #P-hard argument — open an issue. That is exactly the kind of feedback that makes research better.
Built with Python and numpy. No dependencies beyond the standard library except numpy for the quantum simulation.