Two Papers Just Changed the Quantum Threat Conversation. Here's What They Actually Say

On 30 March 2026, two papers from overlapping author groups landed within hours of each other. Together, they represent the most significant single-day contribution to quantum cryptanalysis of blockchain systems to date. The community should engage with them seriously, and precisely.

technical

2nd April 2026

On Sunday evening, Google Quantum AI published a 57-page whitepaper titled “Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations.” The author list reads like a who’s who of the field: Ryan Babbush, Craig Gidney, and Hartmut Neven from Google Quantum AI, Justin Drake from the Ethereum Foundation, and Dan Boneh from Stanford. Hours earlier, a separate paper from Madelyn Cain, John Preskill, Hsin-Yuan Huang, Dolev Bluvstein and others at Harvard, Caltech, and Google demonstrated that Shor’s algorithm could run on as few as 10,000 reconfigurable neutral-atom qubits. Preskill and Boneh alone represent decades of foundational work in quantum information and cryptography respectively. This is not a startup whitepaper. The weight of these contributions demands careful reading.

Google’s involvement in these papers should come as no surprise given the 2029 deadline the company set last week for its transition to PQC. What follows is my attempt to unpack what the papers claim, where the nuances lie, and what the blockchain community should take from them.

The headline numbers

The Google whitepaper presents new resource estimates for breaking the 256-bit Elliptic Curve Discrete Logarithm Problem (ECDLP-256) on secp256k1, the curve that secures transaction signatures on both Bitcoin and Ethereum. They report two optimised quantum circuits: one using no more than 1,200 logical qubits and 90 million Toffoli gates, the other using no more than 1,450 logical qubits and 70 million Toffoli gates.

Compiled onto a superconducting architecture using surface code error correction, with standard hardware assumptions (10⁻³ physical error rate, planar degree-four connectivity, 10 microsecond control system reaction time), these circuits require fewer than 500,000 physical qubits. That is approximately a 20-fold reduction over the previous best estimate for ECDLP-256, which was Litinski’s 2023 figure of roughly 9 million physical qubits on a photonic architecture.

The Cain et al. paper, working with neutral-atom architectures and high-rate quantum error-correcting codes, claims Shor’s algorithm can execute at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. At 26,000 physical qubits, the runtime for discrete logarithms on the P-256 elliptic curve could be a few days. RSA-2048, for comparison, is one to two orders of magnitude more expensive.

These are substantial reductions. They continue a well-documented pattern: resource estimates for quantum cryptanalysis have fallen by roughly an order of magnitude per publication cycle over the last decade. The paper includes a figure tracking this trend for RSA-2048, and the trajectory is striking. Whether this trend continues at the same rate is an open question, one I will return to, but the direction is unambiguous.

A new taxonomy of attacks

One of the whitepaper’s most useful contributions is a clear framework for thinking about quantum attacks on blockchains. They define three categories:

On-spend attacks target transactions in transit. The attacker derives the private key from a public key exposed in the mempool before the transaction is confirmed on-chain. This requires solving ECDLP within the blockchain’s settlement window: roughly 10 minutes for Bitcoin, 12 seconds for Ethereum, 400 milliseconds for Solana.

At-rest attacks target public keys that are already exposed on-chain or off-chain. The attacker has days, weeks, or longer to derive the private key. Dormant wallets, reused addresses, and legacy P2PK outputs are the primary targets.

On-setup attacks target fixed public protocol parameters to produce a permanent, reusable classical backdoor. This is the most novel category. The attacker uses a CRQC once, offline, to recover secrets from a trusted setup ceremony (such as the “toxic waste” from a KZG ceremony). The resulting exploit is classical, meaning it can be used repeatedly without further quantum access.
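The operational difference between the first two categories comes down to time budget. A minimal sketch of that constraint (windows taken from the paper's taxonomy; the feasibility check is my simplification and ignores propagation delay and fee dynamics):

```python
# Approximate settlement windows from the whitepaper's taxonomy, in seconds.
SETTLEMENT_WINDOWS = {
    "Bitcoin": 600,    # ~10-minute mean block time
    "Ethereum": 12,    # deterministic slot time
    "Solana": 0.4,     # ~400 milliseconds
}

def on_spend_feasible(derivation_seconds: float) -> dict:
    """For each chain, does a key derivation of the given duration fit
    inside the settlement window? This is a necessary condition for an
    on-spend attack, not a sufficient one."""
    return {chain: derivation_seconds <= window
            for chain, window in SETTLEMENT_WINDOWS.items()}

# A 9-minute (540 s) primed derivation fits only Bitcoin's window.
print(on_spend_feasible(540))
```

Anything slower than the window forces the attacker back into the at-rest category, which is the core of the fast-clock/slow-clock distinction discussed below.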

This taxonomy matters because different quantum architectures enable different attack types. The paper introduces a distinction between “fast-clock” platforms (superconducting, photonic, silicon spin qubits) with gate times in the nanosecond range, and “slow-clock” platforms (neutral atoms, trapped ions) with operations roughly two to three orders of magnitude slower. The argument is that the first fast-clock CRQCs would enable both on-spend and at-rest attacks near-simultaneously, while slow-clock CRQCs would only enable at-rest attacks, at least initially.

The 9-minute on-spend attack

The most widely reported claim is that a superconducting CRQC could derive a secp256k1 private key in approximately 9 minutes from a “primed” state. This deserves unpacking.

The full computation takes 18 to 23 minutes depending on which circuit variant is used. But Shor’s algorithm for ECDLP can be split into two phases. The first depends only on the curve parameters (which are public and fixed), not on the specific target key. This means an attacker can pre-compute the first half and hold the quantum computer in a ready state, waiting for a public key to appear. When one does, only the second phase needs to run, roughly halving the time from public key exposure to private key derivation.

The paper models this against Bitcoin’s stochastic block time (exponentially distributed with a 10-minute mean) and arrives at approximately a 41% probability that the attacker derives the key and submits a competing transaction before the legitimate transaction is confirmed. The model assumes instant syndication of the public key to the attacker and successful fee displacement via Replace-By-Fee, which favours the attacker. It also assumes zero network congestion, which favours the defender. These are explicitly stated simplifying assumptions, not hidden ones.
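The 41% figure follows directly from that exponential model. A quick reproduction (my own arithmetic restating the model as described, not the paper's code):

```python
import math

MEAN_BLOCK_TIME_MIN = 10.0  # Bitcoin block times modelled as Exp(mean 10 min)
DERIVATION_MIN = 9.0        # primed second-phase runtime from the paper

# Probability that no block confirms the legitimate transaction before
# the attacker finishes: P(T > t) = exp(-t / mean) for exponential T.
p_attacker_wins = math.exp(-DERIVATION_MIN / MEAN_BLOCK_TIME_MIN)
print(f"{p_attacker_wins:.1%}")  # ≈ 40.7%, matching the paper's ~41%
```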

An important nuance: the base model assumes a single machine, but the paper notes that multiple primed machines operating in parallel reduce the computation further. Eleven primed machines would yield a 6.5x speedup, compressing the attack well inside Bitcoin’s block time.
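Plugging the paper's 6.5x figure into the same exponential block-time model shows how dramatic that compression is. The resulting success probability is my own extrapolation from the stated model, not a number the paper reports:

```python
import math

SINGLE_MACHINE_MIN = 9.0    # primed second-phase runtime, one machine
SPEEDUP_11_MACHINES = 6.5   # figure quoted in the paper for 11 primed machines
MEAN_BLOCK_TIME_MIN = 10.0  # Bitcoin's exponential block-time mean

parallel_min = SINGLE_MACHINE_MIN / SPEEDUP_11_MACHINES  # ≈ 1.4 minutes
p_win = math.exp(-parallel_min / MEAN_BLOCK_TIME_MIN)    # ≈ 87%
print(f"{parallel_min:.2f} min, {p_win:.0%} success probability")
```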

It is also worth noting that the paper’s own analysis finds this on-spend threat essentially does not apply to Ethereum. Ethereum produces blocks in deterministic 12-second slots, with most transactions processed in under a minute. A 9-minute key derivation cannot fit inside that window. Combined with Ethereum’s existing private mempool infrastructure (such as TEE-based BuilderNet), on-spend attacks against Ethereum are, in the paper’s own assessment, unlikely from early fast-clock CRQCs.

Two papers, two architectures, two threat profiles

This is where careful reading matters. The Cain et al. paper and the Google whitepaper were released on the same day, from overlapping author groups, and have been covered together. But they describe fundamentally different threat profiles.

Cain et al. is a neutral-atom paper. Neutral atoms are a slow-clock architecture. The Google whitepaper explicitly argues that slow-clock platforms cannot perform on-spend attacks because their elementary operations are two to three orders of magnitude slower than superconducting systems. So the “10,000 qubits” headline from Cain et al. applies to at-rest attacks with runtimes measured in days, not the “9 minutes” figure from the Google whitepaper.

These are complementary findings, not interchangeable ones. Cain et al. shows that the qubit floor for at-rest attacks may be remarkably low. The Google whitepaper shows that fast-clock architectures could, in principle, compress runtimes into the on-spend window. Treating them as a single announcement, as much of the coverage has done, conflates two very different capabilities.

The Ethereum analysis nobody is talking about

Most of the public reaction has focused on the Bitcoin “9 minutes” headline. That is understandable but unfortunate, because the Google whitepaper’s analysis of Ethereum introduces genuinely novel territory.

The paper identifies five distinct vulnerability classes across Ethereum’s architecture:

Account Vulnerability arises from Ethereum’s persistent account model. Unlike Bitcoin’s UTXO system, Ethereum accounts are long-lived and the public key is permanently exposed after the first outbound transaction. The paper estimates the top 1,000 accounts by ETH balance (approximately 20.5 million ETH) could be cracked in under nine days by a fast-clock CRQC. Since these are at-rest attacks against static, exposed keys, there is nothing stopping an attacker from targeting all of them simultaneously.
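A back-of-envelope consistency check on the nine-day figure, assuming strictly serial derivations on a single machine (my assumption for illustration; the paper's point is that these at-rest attacks can also be parallelised):

```python
# If the top 1,000 exposed accounts can be cracked in under nine days,
# the implied per-key budget on one serial machine is:
N_ACCOUNTS = 1_000
DAYS = 9
minutes_per_key = DAYS * 24 * 60 / N_ACCOUNTS
print(f"{minutes_per_key:.1f} minutes per key")  # ≈ 13 minutes per key
```

Roughly 13 minutes per key is in the same range as the primed runtimes discussed above, which is what makes the nine-day figure plausible without any parallelism at all.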

Admin Vulnerability affects smart contracts controlled by admin keys or multisig arrangements. The paper identifies over 70 major contracts, including those backing key stablecoins, where admin keys are exposed on-chain.

Code Vulnerability relates to the use of ECDLP-based cryptographic primitives within smart contract logic itself, including alt_bn128 precompiles used by L2 zk-rollups.

Consensus Vulnerability targets Ethereum’s Proof-of-Stake validator signatures (BLS on BLS12-381). The paper notes that concentration of stake in large pools such as Lido, at roughly 20% of total stake, means targeting a single provider’s infrastructure could dramatically shorten the timeline to compromise consensus.

Data Availability Vulnerability is, in my view, the most significant finding in the entire paper. Ethereum’s Data Availability Sampling (DAS) mechanism depends on KZG polynomial commitments, which were established through a one-time trusted setup ceremony. The “toxic waste” from that ceremony was supposed to be destroyed. A quantum computer could recover it from publicly available parameters. Once recovered, the resulting exploit is classical and permanent. It can forge data availability proofs indefinitely without further quantum access. The paper describes this as “potentially tradable.” Every L2 that depends on Ethereum’s blob data system would be affected.

This is qualitatively different from key derivation. It is a one-time quantum computation that produces a perpetual classical exploit. The implications for Ethereum’s L2 ecosystem are severe and, as far as I can tell, have received almost no attention in the public discussion.

The zero-knowledge proof approach

The paper introduces a genuinely novel approach to responsible disclosure. Rather than publishing the optimised circuits (which could serve as attack blueprints), the authors provide a zero-knowledge proof that the circuits exist and perform correctly. This is, to my knowledge, the first time a ZK proof has been used to disclose a novel cryptographic vulnerability rather than to verify a previously known one.

The specific statement the ZK proof demonstrates is: the authors possess a classical reversible circuit of a specified size that correctly computes elliptic curve point addition on secp256k1 for 9,000 random inputs. This subroutine is the primary bottleneck in Shor’s algorithm for ECDLP, and the cost of the full algorithm can be derived from it in a straightforward way (given the use of windowed arithmetic, which the paper acknowledges).

Precision matters here. The ZK proof verifies the point addition subroutine, not an end-to-end implementation of Shor’s algorithm. The relationship between the subroutine cost and the full algorithm cost is well-understood and the derivation is standard, but the proof itself covers the subroutine. The paper is transparent about this. It is worth understanding the distinction, not because it undermines the result, but because responsible engagement with these claims requires knowing exactly what has been verified and what has been inferred.
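For readers unfamiliar with the subroutine in question, a classical sketch of affine point addition on secp256k1 shows what operation the ZK proof attests to. This is the textbook group law over the curve's public parameters, not the authors' optimised reversible circuit:

```python
# secp256k1 domain parameters (public and fixed, per SEC 2).
P = 2**256 - 2**32 - 977  # field prime
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(p1, p2):
    """Affine addition on y^2 = x^3 + 7 over F_P.
    Handles doubling; omits the point at infinity for brevity."""
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P     # chord slope
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return x3, y3

def on_curve(pt):
    x, y = pt
    return (y * y - (x**3 + 7)) % P == 0

G2 = point_add((GX, GY), (GX, GY))  # 2G
print(on_curve((GX, GY)) and on_curve(G2))
```

Shor's algorithm for ECDLP repeats this addition (in reversible, windowed form) enormously many times, which is why the cost of this one subroutine dominates the cost of the whole attack.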

The trend line and its limits

The paper presents a compelling graph showing how physical qubit estimates for RSA-2048 have fallen by roughly 20x per publication cycle over the past decade. It frames this as evidence that “attacks always get better,” borrowing a well-known aphorism from the cryptography community.

This is historically accurate and the graph is striking. But extrapolation from a log-scale trend line requires some care. Algorithmic optimisation does improve over time, but it is not unbounded. There are circuit-complexity lower bounds for these problems. The question is how close current ECDLP estimates are to those bounds. Justin Drake, a co-author and Ethereum Foundation researcher, commented that “low-hanging fruit is still being picked” and that “AI was not yet tasked to find optimisations.” That suggests the authors themselves believe further reductions are likely. But neither the paper nor Drake quantifies how much room remains. The trend is real. Whether it continues at the same gradient is an assumption, not a finding.
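To see why the gradient cannot hold indefinitely, it is enough to run the trend forward. This is pure arithmetic on the historical gradient, not a forecast:

```python
LITINSKI_2023 = 9_000_000    # previous best physical-qubit estimate
REDUCTION_PER_CYCLE = 20     # approximate historical gradient from the paper's figure
LOGICAL_QUBIT_FLOOR = 1_200  # the new circuits themselves need ~1,200 logical qubits

estimates = []
current = LITINSKI_2023
for _ in range(3):
    current //= REDUCTION_PER_CYCLE
    estimates.append(current)

print(estimates)  # [450000, 22500, 1125]
# One cycle reproduces the new ~500,000 figure; by the third, the
# "physical" count falls below the circuits' own logical-qubit
# requirement, a concrete reason the trend must eventually flatten.
```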

The reaction-limited execution assumption

The 9-minute runtime estimate assumes that the circuits execute in a “reaction-limited” fashion, where the speed is primarily constrained by the control system reaction time rather than the gate speed itself. The paper uses a 10 microsecond figure, which is standard for current superconducting systems.

Whether reaction-limited execution at that speed is sustained coherently across a computation involving 500,000 physical qubits and real-time surface code decoding for the duration of the algorithm is an open engineering question. The paper’s runtime estimates are only as reliable as this assumption. It is not a physics impossibility, but it is not a demonstrated capability either. The paper does not claim otherwise, but it is worth understanding that the “minutes” in the headline is downstream of this specific parameter.

What this changes

These papers shift the landscape in ways that I think are genuinely important.

First, the ECDLP-256 resource estimate is now an order of magnitude lower than it was six months ago. That matters regardless of when hardware catches up. It shrinks the gap between what exists and what is needed, and it does so on the algorithmic side, where progress is harder to predict.

Second, the fast-clock/slow-clock taxonomy is a real contribution to threat modelling. The blockchain community has tended to discuss quantum threats as a single monolithic event. These papers make clear that different architectures create different threat timelines with different mitigation requirements.

Third, the Ethereum analysis expands the attack surface beyond key derivation into consensus, data availability, and smart contract governance. The on-setup attack on KZG/DAS is a new class of threat that the community needs to reason about carefully.

Fourth, the responsible disclosure approach via ZK proof sets a precedent. If further algorithmic improvements are found, this framework provides a way to signal their existence without publishing blueprints.

I will not pretend to be neutral about the broader conclusion. If you build or hold assets on systems secured by elliptic curve cryptography, these papers should sharpen your sense of urgency. The authors include a line in the whitepaper that is worth sitting with: “It is conceivable that the existence of early CRQCs may first be detected on the blockchain rather than announced.”

That is not a prediction. It is a possibility. And it comes from the team building the hardware.


Disclosure: QRL is named in the Google whitepaper as an example of a post-quantum blockchain. Dr. Joseph Kearney is Technical Advisor at QRL and holds a PhD in post-quantum cryptography.



WRITTEN BY

Joseph Kearney

Dr. Joseph Kearney is Technical Advisor at the Quantum Resistant Ledger (QRL). He holds a PhD in post-quantum cryptography.