
[$] A more efficient implementation of Shor's algorithm

1 Comment
By Daroc Alden
April 17, 2026

Shor's algorithm is the main practical example of an algorithm that runs more quickly on a quantum computer than on a classical computer — at least in theory. Shor's algorithm allows large numbers to be factored into their component prime factors quickly; a variant of it also solves the discrete logarithms that underpin elliptic-curve cryptography. In reality, existing quantum computers do not have nearly enough memory to factor interesting numbers using Shor's algorithm, despite decades of research. A new paper provides a major step in that direction, however. While still impractical on today's quantum computers, the recent discovery cuts the amount of memory needed to attack 256-bit elliptic-curve cryptography by a factor of 20. More interesting, however, is that the researchers chose to publish a zero-knowledge proof demonstrating that they know a quantum circuit that achieves these improvements, rather than publishing the circuit itself.

Quantum background

Quantum computers store their information in qubits, each of which essentially stores a two-dimensional vector with length one — a superposition. Quantum logic gates can do operations such as rotating a qubit or reflecting it around some other vector. These operations can be chained together to create a quantum circuit that performs a useful operation, in the same way that logic gates in a classical computer are combined into a circuit. The exact result that is measured at the end of a quantum circuit, however, can be quite sensitive to noise from the environment. This is the major practical challenge in building a quantum computer: isolating it from the environment sufficiently well that the information stored in its qubits doesn't degrade to the point of uselessness. The difficulty of this isolation goes up with the size and complexity of the quantum circuit, and with the number of qubits.
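
To make the vector picture concrete, here is a minimal Python sketch of a single qubit being rotated and then measured. It is illustrative only: real simulators use complex amplitudes and many entangled qubits.

    import math

    # A qubit's state: a length-one vector (a, b) of amplitudes for 0 and 1.
    # Real amplitudes suffice for this illustration; in general they are complex.
    state = (1.0, 0.0)  # starts as a definite 0

    def rotate(state, theta):
        """A simple quantum gate: rotate the state vector by theta radians."""
        a, b = state
        return (math.cos(theta) * a - math.sin(theta) * b,
                math.sin(theta) * a + math.cos(theta) * b)

    state = rotate(state, math.pi / 4)  # an equal superposition of 0 and 1

    # The probability of measuring each outcome is the squared amplitude.
    p0, p1 = state[0] ** 2, state[1] ** 2
    print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # P(0) = 0.50, P(1) = 0.50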

Other recent research has shown how to break 256-bit elliptic-curve cryptography using only 1,098 logical qubits, but that paper used 2³⁸ quantum gates to do it. This requires a circuit that is roughly eight orders of magnitude larger than modern quantum computers can handle.

One technique that is used to reduce the precise tolerances required inside a quantum computer is error correction. A quantum error-correction scheme works a bit like ECC memory: it emulates a single "logical" qubit using a group of less-precise "physical" qubits, in the same way that ECC memory produces more reliable memory storage by combining multiple potentially noisy memory cells. The nine researchers behind the new paper — who come from Google, the University of California, Berkeley, the Ethereum Foundation, and Stanford University — have produced a quantum circuit for breaking 256-bit elliptic-curve keys using fewer than 1,200 logical qubits and 90 million quantum gates. Depending on architecture, this corresponds to around 500,000 physical qubits. IBM's Condor quantum computer, among the largest publicly known quantum computers, has 1,121 physical qubits, meaning that engineers will need to give quantum computers around 500 times more memory before an attack using the new technique becomes practical. They'll also need to increase the number of gates, but that is believed to be less of a limiting factor.
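
The quantum codes involved are far subtler than anything classical, but the flavor of the logical/physical split shows up even in a classical repetition code (a sketch of the analogy, not the scheme any real quantum computer uses):

    from collections import Counter

    def encode(bit, copies=3):
        """Store one 'logical' bit across several noisy 'physical' cells."""
        return [bit] * copies

    def decode(cells):
        """A majority vote recovers the logical bit despite a minority of errors."""
        return Counter(cells).most_common(1)[0][0]

    cells = encode(1)
    cells[0] ^= 1            # one physical cell flips due to noise
    print(decode(cells))     # still decodes to 1

Using more physical cells per logical bit tolerates more noise, which is why physical-qubit counts dwarf logical-qubit counts.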

[Readers who are not interested in the details of how the researchers proved that they had such a breakthrough may wish to skip to the conclusion.]

Zero-knowledge proofs

It isn't possible to say the exact number of logical or physical qubits needed, however, because the researchers have not published their actual improved quantum circuit. Citing concerns that bad actors could use the research to attack digital security, including the security of the Bitcoin blockchain, they instead chose to publish a machine-verifiable proof that they know a particular circuit that lives up to their claims. This is not new territory for mathematics; 16th-century mathematicians would keep their breakthroughs secret and challenge each other to "duels" in which they proved that they knew an analytical solution to a type of equation (famously, the cubic) by quickly solving difficult instances of the problem.

It is new in the modern era of mathematics, however. This is the first quantum-computing paper to use this style of zero-knowledge proof. The way the proof works is quite interesting. The researchers wrote a simulator for quantum circuits (available from their published reproduction data) that reads in a quantum circuit, generates thousands of random inputs, simulates the behavior of the circuit on those inputs, and checks the results against a reference implementation. Ordinarily, this would not suffice to prove that the circuit was correct — what if it was correct for 99% of inputs, but incorrect for the remaining 1%? In this case, however, that doesn't matter. Shor's algorithm is, by design, robust to occasional small errors in its intermediate computations; therefore, it doesn't matter whether the quantum circuit being simulated is right in every case, only that it is right in a sufficiently high fraction of cases. The researchers target a circuit that is accurate in 99% of cases. As long as the researchers cannot cherry-pick the "random" inputs, running enough random trials shows that it is correct enough.
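
A toy version of that testing loop, with hypothetical stand-ins for the circuit simulator and the reference implementation, shows why random sampling is convincing here:

    import random

    def estimate_accuracy(candidate, reference, inputs, trials=10_000):
        """Estimate how often `candidate` agrees with `reference` on random inputs."""
        agree = sum(candidate(x) == reference(x)
                    for x in (random.choice(inputs) for _ in range(trials)))
        return agree / trials

    # Hypothetical stand-ins: a reference function, and a candidate that is
    # wrong on roughly 1% of inputs.
    reference = lambda x: (x * x) % 257
    candidate = lambda x: reference(x) if x % 100 != 0 else 0

    inputs = list(range(1, 10_001))
    print(f"estimated accuracy: {estimate_accuracy(candidate, reference, inputs):.3f}")

With enough independent trials, the estimate concentrates tightly around the true agreement rate, so a circuit that met the 99% target only on paper would almost surely be caught.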

In order to ensure the inputs are really random, the researchers use a pseudorandom-number generator seeded from the hash of the description of the circuit being tested. As long as the hash function in question (SHA-256 in this case) and the pseudorandom-number generator are both secure, this ensures that the researchers couldn't have manipulated the randomly chosen test inputs.
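
The idea looks roughly like the following sketch; Python's default Mersenne Twister stands in here for the cryptographically secure generator the researchers would actually need, and the circuit description is a placeholder.

    import hashlib, random

    circuit_description = b"...the serialized circuit being tested..."

    # Seed the generator from a hash of the artifact under test, so the test
    # inputs cannot be chosen after the fact to favor a bad circuit.
    seed = int.from_bytes(hashlib.sha256(circuit_description).digest(), "big")
    rng = random.Random(seed)

    test_inputs = [rng.randrange(2 ** 256) for _ in range(5)]
    print(hex(test_inputs[0]))  # anyone re-running this derives identical inputs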

So, in theory, running this program and feeding it the unreleased quantum circuit would prove that the circuit was good enough for use inside Shor's algorithm. But how can the researchers prove that they ran the experiment correctly if nobody else has access to the input to the program? The solution is SP1, a "zero-knowledge virtual machine" that allows users to produce cryptographically secure proofs (STARKs, or "scalable transparent arguments of knowledge") that they ran a program and that it produced a given output, without revealing what the input to the program was. SP1 works by emulating a RISC-V chip that produces a trace of the CPU's execution as it goes, and then proving that the trace must correspond to a valid execution of the RISC-V program in question, whatever the input values were.

Note that the functions discussed in this section all operate over a cyclic multiplicative subgroup of a finite field — so the "fast Fourier transform" is actually a discrete Fourier transform over a ring (a number-theoretic transform). This detail doesn't invalidate general intuitions about the behavior of a Fourier transform, however.

SP1's CPU trace lists, for every register and memory location, what its value was at each time step. Taking the inverse fast Fourier transform of each list produces a "trace polynomial" for each register. The trace polynomials are a representation of how the register changes over time; by combining these polynomials in the right way, SP1 produces "constraint polynomials". These polynomials represent whether the rules of the emulated RISC-V CPU were violated. They include constraints such as "the output of the arithmetic logic unit must be A + B when the mode is set to addition" or "the next program counter must be one more than the previous, unless there was a jump instruction". Each constraint polynomial is constructed such that it must be zero at every time step. At this point, if the person constructing the zero-knowledge proof could show that property held, they would have demonstrated that they executed the program correctly.
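
The following sketch builds a trace polynomial for one register over a toy prime field and checks a transition constraint on it. SP1 interpolates over a multiplicative subgroup using FFTs, as the note above describes; plain Lagrange interpolation is used here for clarity.

    P = 97  # a small prime field stands in for SP1's much larger one

    def interpolate(xs, ys):
        """Return a function evaluating the unique polynomial through the
        given points, with all arithmetic modulo P (Lagrange interpolation)."""
        def poly(x):
            total = 0
            for i, (xi, yi) in enumerate(zip(xs, ys)):
                num, den = 1, 1
                for j, xj in enumerate(xs):
                    if i != j:
                        num = num * (x - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total
        return poly

    # A four-step trace of one register that doubles at every step.
    steps = [1, 2, 3, 4]           # the time steps (the evaluation domain)
    trace = [3, 6, 12, 24]         # the register's value at each step
    f = interpolate(steps, trace)  # the register's "trace polynomial"

    # Constraint: the next value must be twice the current one,
    # i.e. f(t + 1) - 2 * f(t) == 0 at every step but the last.
    for t in steps[:-1]:
        assert (f(t + 1) - 2 * f(t)) % P == 0
    print("constraint holds at every time step")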

To show this, the person constructing the proof can divide the combined constraint polynomials by a (fixed) polynomial that equals zero at each time step. If they were telling the truth, the result is still a polynomial. If they were lying about any of the time steps, the division results in a rational function that can eventually be distinguished from a proper polynomial. Imagine a heavily simplified example where the constraint polynomial is just "x - 1" and the fixed polynomial is also "x - 1". Dividing one by the other gives the constant function 1 (a polynomial of degree zero) with a hole at the point x = 1. On the other hand, if the constraint polynomial had been "x + 1", then the result of the division wouldn't be a polynomial: it would have an asymptote at x = 1. Since those functions have different shapes, it's easier to construct a protocol that reliably distinguishes them.
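
A small sketch of that division, using exact rational arithmetic: an honest constraint polynomial divides cleanly by the vanishing polynomial, while a cheated one leaves a remainder.

    from fractions import Fraction

    def polydiv(num, den):
        """Divide polynomials given as coefficient lists (highest degree
        first); returns (quotient, remainder)."""
        num = [Fraction(c) for c in num]
        quot = []
        while len(num) >= len(den):
            c = num[0] / den[0]
            quot.append(c)
            padded = den + [0] * (len(num) - len(den))
            num = [a - c * b for a, b in zip(num, padded)][1:]
        return quot, num

    # Vanishing polynomial for time steps {1, 2}: (x - 1)(x - 2) = x^2 - 3x + 2
    Z = [1, -3, 2]

    honest = [1, -6, 11, -6]    # (x - 1)(x - 2)(x - 3): zero at every time step
    print(polydiv(honest, Z))   # quotient x - 3, remainder zero: a polynomial

    cheated = [1, -6, 11, -5]   # off by one: no longer zero at the time steps
    print(polydiv(cheated, Z))  # nonzero remainder: a rational function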

In order to prove that the result of the division is a polynomial, the prover splits the function into even (symmetrical about the y-axis) and odd (symmetrical about the origin) parts, multiplies the odd part by a randomly chosen constant, squares the inputs to both parts, and adds them back together — an operation known as folding. If the input to this folding was indeed a polynomial, this produces a polynomial of half the degree over a domain of half the size; if the input to this operation was a rational function, however, the resulting function is still a rational function with high probability. Eventually, the domain shrinks down to a single point and the function becomes a constant function.
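
Here is one folding step applied to a polynomial's coefficients, a simplification of the real protocol (which commits to evaluations and draws a fresh random constant for every round):

    P = 97

    def fold(coeffs, beta):
        """One folding step. With f(x) = f_even(x^2) + x * f_odd(x^2),
        produce g(y) = f_even(y) + beta * f_odd(y): half the degree."""
        f_even, f_odd = coeffs[0::2], coeffs[1::2]
        f_odd += [0] * (len(f_even) - len(f_odd))
        return [(e + beta * o) % P for e, o in zip(f_even, f_odd)]

    f = [5, 1, 4, 2, 7, 3, 8, 6]  # degree 7, lowest-order coefficient first
    beta = 11                     # the random folding constant
    g = fold(f, beta)             # degree 3
    h = fold(g, beta)             # degree 1
    c = fold(h, beta)             # a single constant
    print(g, h, c)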

To demonstrate that they are performing the folding correctly, the prover picks random points and writes down what the function evaluates to there. Importantly, the hashes of these points are incorporated in later steps of the proof, ensuring that the prover cannot go back and modify them afterward. Later, someone interested in verifying the proof can challenge them for some of these randomly chosen points and check how they relate to one another before and after folding, making it difficult for the prover to get away with folding the function incorrectly. By repeatedly folding the function, the prover can eventually produce a single number that is shared as part of the proof. As long as the random numbers were indeed chosen randomly, it is probabilistically difficult for the prover to create a matching chain of folds that ends at the publicly shared constant without having a valid polynomial to start with.

Obtaining properly random numbers is the same problem that the researchers faced above, and is solved in the same way: the random numbers are generated according to a hash of the publicly known portion of the proof (including the machine code of the RISC-V program, and the hashes of the random numbers used at previous steps of the proof). This lets the verifier take the input, the list of commitments, and the final output, and check that each of the folding steps was performed correctly. If each of the folding steps was performed correctly and the results match, the verifier learns that the RISC-V program was emulated correctly, without learning details about the actual register values.
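
This trick of replacing a live verifier's coin flips with hashes of the growing transcript is known as the Fiat-Shamir transformation; a minimal sketch:

    import hashlib

    class Transcript:
        """Derive each 'random' challenge from a hash of everything committed
        so far, so the prover can neither predict nor steer the challenges."""
        def __init__(self, label):
            self.state = hashlib.sha256(label).digest()

        def absorb(self, data):
            self.state = hashlib.sha256(self.state + data).digest()

        def challenge(self):
            self.state = hashlib.sha256(self.state + b"challenge").digest()
            return int.from_bytes(self.state, "big")

    t = Transcript(b"risc-v program machine code")
    t.absorb(b"commitment to round 1")
    beta1 = t.challenge()        # folding constant for round 1
    t.absorb(b"commitment to round 2")
    beta2 = t.challenge()        # depends on everything absorbed before it
    print(beta1 != beta2)        # True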

There is one problem, however: for a program that executes millions of RISC-V instructions, the generated trace and the resulting proof can be quite large — too large for people to verify quickly on their own computers. So, the researchers perform one more transformation of the data to compress the proof into a minimal form that is more efficiently checkable.

STARK to SNARK

There is a different kind of zero-knowledge proof called a succinct non-interactive argument of knowledge (SNARK) that is less flexible than a STARK. Instead of being able to handle arbitrary computations, it only handles computations that can be expressed in a limited domain. In this case, the researchers used a SNARK system called Groth16, which only handles computations expressible as a set of constraints on values in an abstract circuit made from classical logic gates, where each constraint involves at most one multiplication operation. It also requires a completely random (not pseudorandom, as above) number to be securely agreed upon, used to create a kind of public key, and then completely forgotten. In exchange for this loss of flexibility, SNARKs produce much shorter, simpler proofs. In particular, the size of a SNARK proof is constant, regardless of what it is proving.
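
Such constraint systems are less exotic than they sound. A sketch of checking a tiny one, proving knowledge of an x with x^3 + x + 5 = 35, where every constraint is two linear combinations of the wires multiplied together and compared to a third:

    # Each constraint has the form (A . w) * (B . w) == (C . w): arbitrary
    # linear combinations of wire values, but only one multiplication apiece.
    def dot(row, w):
        return sum(a * b for a, b in zip(row, w))

    w = [1, 3, 9, 27]  # wires: the constant 1, then x, x^2, x^3 (here x = 3)
    constraints = [
        #  A              B              C
        ([0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]),    # x * x   == x^2
        ([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]),    # x * x^2 == x^3
        ([5, 1, 0, 1], [1, 0, 0, 0], [35, 0, 0, 0]),   # (5 + x + x^3) * 1 == 35
    ]
    for A, B, C in constraints:
        assert dot(A, w) * dot(B, w) == dot(C, w)
    print("all constraints satisfied")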

As it turns out, however, the process of validating a STARK proof can be expressed as a circuit diagram that Groth16 can handle. Unlike running the RISC-V simulator program (which runs for a variable number of time steps depending on the input), the process of validating a STARK can be formulated to take a constant amount of time, letting it be represented as a fixed circuit. The researchers took a verifier that could check the large existing proof, and compiled it to a set of Groth16 constraints. Evaluating this system of constraints produces values for each intermediate wire in the circuit (the analog of the CPU trace in a more complicated STARK proof). These intermediate values are converted to polynomials in much the same way as the trace polynomials inside a STARK. These polynomials are again combined into a single function such that if all of the constraints were satisfied, the function remains a polynomial — but if any were violated, it becomes a rational function instead.

Ironically, an adversary who can efficiently break 256-bit elliptic-curve cryptography using a quantum computer is also able to forge SNARK proofs, since the security of the unknown random location rests on the difficulty of elliptic-curve discrete logarithms, a problem that Shor's algorithm also solves.

The method by which the verifier checks that the resulting function is a polynomial is fairly different, however. The prover uses the public key referenced above to calculate three values, each of which is a different way of combining information from the trace of the circuit with elements of that key. The public key consists of precomputed numbers that can be used to evaluate a polynomial at a specific random location that nobody knows. It is structured such that the numbers can't be used to derive what the unknown location was (without being able to efficiently compute discrete logarithms). The verifier can then check the relationship between the public key and the three generated numbers to show that the prover's function, minus a polynomial, evaluated at that unknown random location, equals zero. According to the Schwartz-Zippel lemma, this is incredibly unlikely to happen by chance unless the prover's function was a polynomial to begin with.
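
The lemma itself is easy to see experimentally: two distinct polynomials of degree d can agree on at most d points, so they almost never agree at a randomly chosen field element. A quick check:

    import random

    P = 2 ** 61 - 1  # a prime modulus; real systems use the SNARK curve's field

    f = lambda x: (3 * x ** 3 + x + 7) % P
    g = lambda x: (3 * x ** 3 + 2 * x + 7) % P  # differs in one coefficient

    # f - g is a nonzero polynomial of degree 1, so f and g agree at a random
    # point with probability at most 1/P -- about 4e-19 here.
    trials = 100_000
    hits = sum(f(x) == g(x) for x in (random.randrange(P) for _ in range(trials)))
    print(hits)  # almost certainly 0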

The security of that construction hinges heavily on the random location being truly unknown — an adversary who did know it could just evaluate the function directly at that location in order to fake a SNARK proof. In this case, the researchers used the key generated by a secure multi-party computation done by Aztec Labs as part of setting up their Ethereum-based cryptocurrency. That computation was a joint venture in late 2019 between 176 different people: each one generated a random number and added it to a communal pool of randomness that was used to calculate the necessary public key using a distributed algorithm. As long as at least one of those people was diligent enough to properly erase their random number after the process was conducted, the random location used to construct and validate these Groth16 SNARKs is unrecoverable.
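
The structure of such a ceremony can be sketched with modular exponentiation standing in for the elliptic-curve operations actually used (the details of Aztec's protocol differ): each participant folds a fresh secret into the running "powers of tau", and recovering the combined secret from the output is a discrete-logarithm problem.

    import secrets

    P = 2 ** 127 - 1  # a prime; its multiplicative group stands in for the curve
    g = 3

    def contribute(powers):
        """One participant multiplies a fresh secret into every power of tau,
        then (crucially) erases the secret."""
        s = secrets.randbelow(P - 2) + 1
        return [pow(elem, pow(s, i, P - 1), P) for i, elem in enumerate(powers)]

    # Element i holds g^(tau^i); start with tau = 1.
    powers = [g] * 4
    for participant in range(5):  # the Aztec ceremony had 176 participants
        powers = contribute(powers)

    print(powers[1])  # g^tau, for a tau nobody knows if any participant was honest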

The result of all of this complexity is a 1.7MB proof file that (in conjunction with the source code of their simulator) shows that the researchers behind this paper did actually produce a usable quantum circuit meeting their stated claims. Of course, I had to try this out for myself. Downloading and compiling their source code was straightforward, and verifying the proof took just under 14 minutes of CPU time on my laptop. The code for the simulator, prover, and verifier comes to just under 1,500 lines of Rust — but there are over 2,800 external libraries used in support of that code, so auditing the code remains somewhat daunting. The correctness of the verifier code is not something that can be guaranteed cryptographically; it relies on the normal process of publishing the artifacts and letting interested people point out any problems.

In summary: assuming that the published source code the researchers shared is correct, and that at least one person out of a list of 176 cryptography enthusiasts in 2019 was honest, then it is almost certainly the case that the verifier in the paper's reproduction data was fed a valid STARK transcript. In turn, that means that it was almost certainly the case that their simulator, when fed some input known only to the researchers, validated that the input was a quantum circuit that provides a substantial speedup over prior work on Shor's algorithm. Here "almost certainly" means that it is not theoretically impossible that they could have created a forgery by random chance — but doing so would be as difficult as breaking many other well-trusted cryptographic systems by chance. Winning the lottery would be roughly 10⁶⁸ times more likely.

Now what?

I am personally delighted by this kind of careful, complicated cryptographic construction, but it does raise some concerns for the future of open mathematics. If the paper's authors had chosen to release their circuit, they would certainly have been recognized for the important progress they made in the science of quantum computing. Other researchers would have gone on to build on their work, and the entire scientific community would be richer for it.

As it is, the researchers haven't really published a breakthrough. Instead, they have published a cryptographic proof that they have a breakthrough, but they aren't going to share it. It's certainly exciting to know that more efficient quantum circuits for Shor's algorithm exist — but do the researchers deserve the same level of praise for finding it, when the rest of the scientific community won't be able to learn from and build on their work?

Worse, it's impossible to say how their work combines with other advances in the field. Another paper, published the same week, introduced an exciting technique for reducing the number of physical qubits required to emulate a logical qubit in some circumstances — albeit in a slightly convoluted way that increases the number of required quantum gates. Does that work apply to the unknown quantum circuit? If so, it could reduce the number of physical qubits required to attack 256-bit elliptic-curve cryptography to around 25,000 — still 25 times larger than any existing quantum computer, but much smaller than the best estimates from a year ago. It's impossible to say for sure whether the potential reduction in physical qubits applies, since the details of the new quantum circuit are unknown.

Practical quantum computers always seem to be years away. This paper probably doesn't change that, but it does make it much harder to tell exactly how many years away they may be. Perhaps the best we can do is continue working to adopt post-quantum cryptography, and hope that when practical quantum computers become available we get an actual explanation of how they work, and not just a cryptographic proof that they do.

Read the whole story
jepler
58 minutes ago
reply
meh. claiming you have a secret algorithm, and then creating a whole complicated edifice that's supposed to prove you have the secret, is silly. not science.
Earth, Sol system, Western spiral arm

They Might Be Giants - Get Down (Official Video)

1 Comment
From: ParticleMen
Duration: 2:48
Views: 12,631

Directed by Paul Sahre

From They Might Be Giants' all-new album The World Is to Dig. Get the album on vinyl, CD, or cassette at https://tmbgshop.com/ and get an immediate download. Bundles available. There is even a shirt! Go to www.TMBGshop.com.

They Might Be Giants on tour! Multi-night stands in Indianapolis, Detroit, Chicago, Philadelphia, Boston, and Brooklyn. Woodstock shows are SOLD OUT. www.TMBG.com for direct ticket links. Avoid resellers. They aren't real.

Read the whole story
jepler
1 day ago
reply
.. or cassette ...
Earth, Sol system, Western spiral arm

: ()

1 Comment
Date:
Venue:Unknown
City:
Time:
Ages:
Opener:
Tickets:
Notes:Unknown
Other Links: Tell a friend!
Read the whole story
jepler
2 days ago
reply
: ()
Earth, Sol system, Western spiral arm

NewsBlur v14 for Android: Redesigned reading experience, Ask AI, Discover, Daily Briefing, and more

2 Comments

A few weeks ago I shipped NewsBlur v14 for iOS and Mac, a major redesign of the Apple apps. Today, Android gets the same treatment. Every screen has been reworked: the feed list, the story list, the reading view, preferences, and menus. Along with the visual overhaul, several features that were previously web-only are now on Android: Ask AI, Discover Related Sites, and the Daily Briefing.

Here’s what’s new.

Ask AI

Ask AI brings the same AI-powered Q&A from the web and iOS to Android. Open any story, tap Ask AI, and ask questions about it. Summarize a long article, get background on a developing situation, or fact-check a claim. Pick your preferred AI model and keep the conversation going with follow-ups. The Ask AI sheet matches your current theme and slides up as a bottom sheet, consistent with the share and trainer dialogs.

Discover Related Sites

Discover Related Sites lets you find new feeds related to any feed you’re already subscribed to. Tap the Discover button in the story list header bar, browse what’s available, and preview a feed before subscribing. Duplicate feeds are filtered out so you only see new options.

Daily Briefing

The Daily Briefing generates a personalized summary of your news, organized into sections like Top Stories, Based on Your Interests, and Long Reads. It uses native Android story rows, so it feels like a regular feed rather than a bolted-on feature. Configure your briefing frequency, writing style, and sections from the briefing view in your sidebar.

Sepia theme and refined dark themes

A new Sepia theme brings warmer tones for comfortable long reading sessions. The Dark theme has been lightened to match the iOS gray/medium palette, and the Black theme now uses true absolute black backgrounds for feed and story cells, making it ideal for OLED screens.

Story list header bar

The top of the story list now has a header bar with quick access to Discover, search, display options, and settings. The display and settings controls are split into separate menus so you can change the view without wading through unrelated options.

Redesigned reading experience

The reading view has been rethought from top to bottom. Story traversal buttons are lifted above the bottom edge for easier thumb access. A new traverse bar with refined icons shows your position and unread count. Story actions are hidden until the story finishes rendering, so you never tap a button before the content is ready. Opening a story from the list now animates smoothly into the reader, and swiping back uses an interactive gesture that tracks your finger.

Redesigned preferences and menus

Preferences have been rebuilt as a modern settings screen with inline segments instead of separate dialog pickers. The feed list menu, reading menu, and folder menus have all been redesigned with cleaner styling and better organization. Menus now scale with your device font size, so they stay readable at any accessibility setting.

Premium Archive and Pro subscriptions

You can now subscribe to Premium Archive and Premium Pro directly from the Android app. An upgrade banner appears in the story list when you’re on a lower tier, showing what you’d unlock by upgrading.

Everything else

Beyond the headline features, this release includes a long list of improvements and fixes.

Improvements

  • Interactive swipe-back gesture in both the story list and reading view with predictive back support on Android 14+.
  • Feed list aligned with iOS styling, with new collapse-all and expand-all toggles.
  • Story header pills with compact layout and title case formatting.
  • Active reading time tracking per story, synced to your account.
  • Full text and regex classifiers for the Intelligence Trainer.
  • Feed search field themed to match your current theme with autofill disabled.
  • Sync done pill delayed until feeds actually render, so you see the update happen.
  • Story thumbnails enlarged for small sizes and cropping fixed.
  • Status banners at the top of the story list for loading and error states.
  • Mute Sites redesigned with upgrade card and progress bar.
  • Custom folder and feed icon support.

Fixes

  • Fixed TransactionTooLargeException crash in the reading pager.
  • Fixed database version mismatch crash on launch.
  • Fixed ItemListMenuPopup crash on small and split-screen displays.
  • Fixed login autofill and app switching losing input.
  • Fixed story row thumbnail cropping.
  • Fixed search pill text vanishing.
  • Fixed story list edge back gesture interference.
  • Brightened story feed titles for better readability.

Coming up next: v14.2 will bring story clustering to Android, so duplicate stories across your feeds get grouped together automatically, just like on the web.

NewsBlur v14 for Android is available now on the Google Play Store. If you have feedback or run into issues, I’d love to hear about it on the NewsBlur forum.

Read the whole story
jepler
15 days ago
reply
Pleaseeeee
* make it as contrasty as the old version
* make the "daily briefing" item disappear when disabled. An enable in preferences is enough.

It nuked my swipe preferences but i was at least able to put them back.
Earth, Sol system, Western spiral arm
satadru
15 days ago
reply
This is really great!
New York, NY

Peter Bengtsson: pytest "import file mismatch"

1 Comment
Make sure your test files in a pytest-tested project sit in directories with a __init__.py
Read the whole story
jepler
15 days ago
reply
Ah looks like an accidental use of "namespace modules"...
Earth, Sol system, Western spiral arm

WheatForce: Learning From CPU Architecture Mistakes

1 Comment

Nothing ever made is truly perfect, and indeed CPU architectures like x86, RISC-V, ARM, and PowerPC all have their own upsides and downsides. Today, I aim to make an architecture that learns from all these mistakes and improves architecture design for everyone.

I've consulted with many people with opinions on the matter, both from a software perspective and from a hardware perspective. I have taken all their feedback into account while creating this initial draft of the WheatForce architecture (PDF). It is inspired by pieces from many architectures: segmentation inspired by x86, hash table-like paging from PowerPC, dynamic endianness control from RISC-V and PowerPC, and more. Let's look into each feature in a little bit more detail.

Segmentation

The local descriptor table (left) points to main memory (right) via its segment descriptors
x86's segmentation scheme, by [John] on Wikipedia
Segmentation is a powerful virtual-memory feature that is tragically underused today. I believe this is due to limited flexibility, so I have added an improvement over the model that x86 used: every single register can now use its own segment selector. With this added flexibility, one can surely make better use of the address translation powers of segmentation with minimal extra overhead.

Hash Table-Like Paging

PowerPC’s hash table-like paging makes its paging vastly superior to the likes of x86, RISC-V and ARM by decreasing the number of required cache line fetches drastically. Much like a true hash table, the keys (or input addresses) are hashed and then used as an index into the table. From there, that row of the table is searched for a cell with a matching virtual address, which can be accelerated greatly due to superior cache locality of the entries in this row.
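
A toy model of the lookup (hypothetical parameters; real PowerPC page-table entries are more involved) shows why it touches so little memory: one hash picks one bucket, and the matching entry is found within a single cache-line-sized group.

    # Toy hashed page table: hash the virtual page number to pick one bucket,
    # then linearly scan that bucket's (cache-line-sized) group of entries.
    NUM_BUCKETS = 1024
    ENTRIES_PER_BUCKET = 8

    page_table = [[] for _ in range(NUM_BUCKETS)]

    def bucket_for(vpn):
        return page_table[(vpn * 2654435761) % NUM_BUCKETS]  # multiplicative hash

    def map_page(vpn, pfn):
        bucket = bucket_for(vpn)
        if len(bucket) >= ENTRIES_PER_BUCKET:
            raise MemoryError("bucket full: evict an entry or grow the table")
        bucket.append((vpn, pfn))

    def translate(vpn):
        # One hash and one bucket fetch -- no chain of dependent fetches as
        # in a multi-level radix-tree walk.
        for entry_vpn, pfn in bucket_for(vpn):
            if entry_vpn == vpn:
                return pfn
        raise LookupError("page fault")

    map_page(0x1234, 0x5678)
    print(hex(translate(0x1234)))  # 0x5678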

Dynamic Endianness Control

A diagram of PowerPC's paging structures, from the PowerPC manual

RISC-V and PowerPC both have some real potential for better compatibility with their dynamic endianness control. However, both these architectures can only change the endianness from a privileged context. To make this more flexible, WheatForce can change the data endianness at any time with a simple instruction. Now, user software can directly interoperate between big-endian and little-endian data structures, eliminating the need for a costly byte-swap sequence that would need many instructions. Finally, you can have your cake and eat it too!
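
For a sense of what that byte-swap sequence costs in software, here is the shift-and-mask dance a 32-bit swap expands to when there is no hardware help (illustrative Python; a compiler would emit the equivalent instructions):

    def bswap32(x):
        """The multi-instruction shuffle that one byte swap costs."""
        return ((x >> 24) & 0x000000FF) | ((x >> 8) & 0x0000FF00) | \
               ((x << 8) & 0x00FF0000) | ((x << 24) & 0xFF000000)

    # A big-endian word, loaded naively on a little-endian machine:
    raw = int.from_bytes(b"\xDE\xAD\xBE\xEF", "little")   # reads as 0xEFBEADDE
    print(hex(bswap32(raw)))                              # 0xdeadbeef again

With a per-load endianness mode, the load itself would return the correct value and the whole sequence disappears from the hot path.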

Conclusion

WheatForce has observed the mistakes of all architectures before it, and integrates parts of all its predecessors. You can read the full specification on GitHub. After you’ve read it, do let me know what you think of it.

Read the whole story
jepler
15 days ago
reply
> WheatForce has observed the mistakes of all architectures before it, and integrates parts of all its predecessors

More like WheaTForce
Earth, Sol system, Western spiral arm