Microsoft And Quantinuum Improve Quantum Error Rates By 800x

Quantum computing researchers at Microsoft and Quantinuum just announced a major advance in error-rate reduction using a technique called qubit virtualization. The method combines Quantinuum’s high-precision H-2 ion-trap quantum computer with Microsoft’s syndrome-extraction methodology. The breakthrough sets the stage for larger and more reliable quantum computers that can solve problems far beyond the reach of classical machines.

This is the great promise of quantum computing: the potential to tackle challenges that even classical supercomputers cannot solve. However, error rates of existing quantum hardware must significantly improve to ever achieve that goal, which is why error correction is such a vital area for quantum computing research.

Let’s take a look at how Quantinuum’s hardware and Microsoft’s qubit virtualization system worked together to achieve this important breakthrough.

From Physical Qubits To Logical Qubits

Scientists from the two companies created a symbiotic relationship between Quantinuum’s high-fidelity hardware and a Microsoft software system to create four stable logical qubits out of 30 physical qubits. The result was a record-breaking logical error rate 800 times lower than the underlying physical error rate.

[Figure: Illustration of the 800x improvement in error rate for logical qubits versus physical qubits. Source: Microsoft]

A few years ago, Google scientists speculated it would take 1,000 physical qubits to create a single logical qubit. That number has proven to be much lower in practice, as demonstrated by the error-correction performance achieved in this new research.

A better logical error rate than the underlying physical error rates may be a signal that fault-tolerant quantum computers could be closer at hand than previously thought. Microsoft estimates that a quantum machine equipped with 100 reliable logical qubits could solve many scientific problems currently intractable for classical computers.

Keep in mind that Microsoft and Quantinuum still have work to do to achieve that. Ongoing efforts will correct some of the limiting factors discovered during this pioneering research, which should make future results even better.

Shared Visions

During a briefing, Matt Zanner, principal program manager at Microsoft, and Dr. Jenni Strabley, senior director of offering management at Quantinuum, reviewed the four-year history of quantum collaboration between their organizations. Both companies are focused on achieving quantum computing at scale with a shared vision of creating a hybrid classical-quantum supercomputer using fault-tolerant computations that can solve world-class problems.

“Microsoft is fully aligned on a path to quantum at scale,” Zanner said. “We have a lot of different pillars of work that align to that overall mission, and quantum computing is first and foremost.”

Microsoft plans to integrate quantum computing into its existing Azure Quantum Elements product, which already incorporates HPC and AI. That will likely run on a Quantinuum machine. You can read more about Azure Quantum Elements in my earlier Forbes article describing how Microsoft researchers used HPC and AI to create 32 million novel materials in the search for a more efficient lithium-ion battery material.

Microsoft and Quantinuum also share an interest in chemistry and materials science. Quantinuum offers a cutting-edge quantum chemistry platform known as Inquanto that can perform intricate simulations of molecules and materials. That platform complements Microsoft’s Azure Quantum Elements.

According to Dr. Strabley, a key factor enabling these advancements is the close collaboration between Quantinuum and Microsoft as full-stack companies with expertise spanning hardware and software. Their respective error-correction and logical-qubit-mapping teams have worked hand in hand, exchanged ideas and jointly created new solutions to push quantum computing forward.

Silencing Quantum Noise

We must have a workable solution for quantum error correction before we can build quantum machines capable of solving complex problems in areas such as climate modeling, large financial optimizations and advanced physics simulations. Yet error correction is elusive and complex because of a natural restriction called the no-cloning theorem, which makes it impossible to copy quantum information the way classical computers copy bits. Microsoft and Quantinuum’s joint research might lead to a practical way around that barrier.

Since quantum information cannot be copied directly, correcting qubit errors relies on an alternative approach using logical qubits. In this experiment, quantum information was encoded across 30 entangled physical qubits on Quantinuum’s hardware, which together formed four dependable logical qubits. To be useful, logical qubits must have lower error rates than the physical qubits used to create them. Microsoft’s qubit-virtualization system combines error-correction techniques that enhance qubit reliability.

Microsoft used a method called active syndrome extraction to diagnose and repair qubit errors without collapsing quantum states. Depending on which QEC code is used, a syndrome measurement can determine whether an error occurred, and often its location and type as well. Because Microsoft’s method addresses noise at the logical-qubit level, overall reliability is significantly improved. The result is similar to the signal improvement provided by noise-canceling headphones: noisy physical qubits are transformed into highly reliable logical qubits.
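Microsoft has not published its syndrome-extraction protocol in code form, but the core idea can be sketched with a classical toy analogy: a three-bit repetition code whose parity checks play the role of stabilizer measurements, locating an error without ever reading the encoded value. The `encode`, `syndrome` and `correct` helpers below are purely illustrative, not the actual method:

```python
# Toy classical analogy for syndrome extraction: a 3-bit repetition code.
# In real QEC, stabilizers are measured via ancilla qubits so the data is
# never read directly; here, parity checks play that role.

def encode(bit):
    """Encode one logical bit as three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Two parity checks locate any single bit-flip without revealing the data value."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Map each nonzero syndrome to the bit it implicates and flip it back."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(bits)
    if s in flip:
        bits[flip[s]] ^= 1
    return bits

word = encode(1)
word[2] ^= 1                      # inject a single bit-flip error
assert correct(word) == [1, 1, 1]  # the syndrome located and repaired it
```

A real quantum code must additionally handle phase-flip errors and perform these checks with quantum circuits, but the detect-then-repair loop is the same in spirit.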

The success of this experiment also relied on having a high-performance quantum computer. Quantinuum’s H-2 employs a state-of-the-art trapped-ion, shuttling-based processor and has a best-in-class two-qubit gate fidelity of 99.8%, along with 32 fully connected qubits and a unique quantum charge-coupled device architecture.

It should be noted that Quantinuum also has plenty of experience with logical qubits. It published the first research paper that demonstrated a fault-tolerant end-to-end circuit with entangled logical qubits using real-time error correction. That was also the first time that two error-corrected logical qubits performed a circuit with higher fidelity than the constituent physical qubits. You can read my article about it here.

Prior to the release of the H-2, I was invited to a Quantinuum briefing in Broomfield, Colorado. I also wrote a detailed white paper on its capabilities and features that you can read here. In short, the H-2’s benchmarking results are very impressive.

Real-Time Versus Post-Processing Error Correction

This research not only provided valuable quantum error-correction information; it also produced interesting results because it applied two error-correction codes in two different ways, allowing a comparison between methods. Specifically, the Steane code was used for real-time error correction, while the Carbon code was used with post-selection.

The Steane code uses seven physical qubits to encode one logical qubit. The researchers used this code to implement active real-time error correction, which required two additional ancilla qubits to detect and correct any errors that occurred during computation.
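For a rough sense of why seven qubits suffice, the Steane code inherits the structure of the classical [7,4] Hamming code, whose parity-check matrix pinpoints any single bit-flip. The snippet below is a classical illustration only; in the quantum code, the same checks are measured via ancilla qubits, for phase-flips as well as bit-flips:

```python
# The Steane code builds on the classical [7,4] Hamming code; one logical
# qubit lives in seven physical qubits, and the same parity-check matrix H
# defines both the X-type and Z-type stabilizer checks.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(err):
    """Three parity checks (H @ err mod 2) over a 7-bit error pattern."""
    return tuple(sum(h * e for h, e in zip(row, err)) % 2 for row in H)

# A single flip on qubit i yields a syndrome that, read as a binary number,
# equals i + 1 -- the syndrome pinpoints the error location exactly.
for i in range(7):
    err = [0] * 7
    err[i] = 1
    s = syndrome(err)
    assert int("".join(map(str, s)), 2) == i + 1
```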

Meanwhile, circuits for the Carbon code are more efficient and its code distance is larger, allowing for post-selection when necessary. The greater the code distance, the more robust the code is against errors. Its circuit efficiency and error-correction capability also keep the number of runs discarded during post-selection to a minimum.

The Carbon code has a much higher threshold compared to the Steane code and can tolerate higher error rates. To help maintain the integrity of quantum information, the Carbon code’s construction is such that when errors occur, certain states or syndromes are produced that can be identified and corrected through post-selection.

Insights Gained From Running Two Error-Correction Methods

While both codes demonstrated an ability to suppress logical error rates significantly below physical error rates, the Carbon code exhibited a larger gain, yielding up to an 800x reduction compared to the (still impressive) 500x reduction for the Steane code. The difference in performance between the two codes was likely due to the greater error-correcting power of the Carbon code. The Carbon code syndrome extraction is much more efficient, so it introduces fewer errors, and because the code distance is larger it can also tolerate more errors.
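For a rough sense of scale, the numbers quoted in this article can be combined. The simplification below, taking the H-2’s 99.8% two-qubit gate fidelity as a stand-in for the physical error rate, is mine, not the paper’s, but it suggests logical error rates on the order of a few per million operations:

```python
# Back-of-envelope scale check using figures quoted in this article:
# the H-2's 99.8% two-qubit gate fidelity implies a physical error rate of
# roughly 2e-3, and the reported suppression factors are 800x (Carbon)
# and 500x (Steane). (Assumption: gate infidelity stands in for the
# physical error rate; the paper measures logical error rates directly.)
physical_error = 1 - 0.998
for code, factor in [("Carbon", 800), ("Steane", 500)]:
    logical_error = physical_error / factor
    print(f"{code} code: roughly {logical_error:.1e} logical error rate")
```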

One reason for using post-selection was to demonstrate that there are some errors that can be detected but cannot be corrected reliably. So if we detect those errors in a run, we can discard that run with assurance that it did contain an error.

Under some conditions, post-selection can be more robust to noise. For example, if a false positive is measured in error-correction mode, any unnecessary corrective action taken would itself introduce noise. In error-detection mode, by contrast, that run would simply be discarded without further action.
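This trade-off can be made concrete with a toy probability model. All of the parameters below are hypothetical, not from the paper: independent errors with probability p on each of seven qubits, weight-1 errors corrected, weight-2 errors detected but not reliably correctable, and weight-3 and above missed entirely:

```python
from math import comb

# Toy model of correction vs. detection for a distance-3, 7-qubit code.
# Assumptions (illustrative only): independent errors with probability p per
# qubit; weight-1 errors are corrected, weight-2 errors are detectable but
# not reliably correctable, weight-3+ errors go unnoticed.
p, n = 0.002, 7

def p_weight(k):
    """Probability that exactly k of the n qubits suffer an error."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Correction mode: weight-2+ errors risk being miscorrected into logical errors.
err_correction = sum(p_weight(k) for k in range(2, n + 1))

# Detection mode: flagged runs (weight 1-2) are discarded; only weight-3+
# errors slip through, at the cost of throwing away some data.
err_detection = sum(p_weight(k) for k in range(3, n + 1))
discard_rate = sum(p_weight(k) for k in range(1, 3))

assert err_detection < err_correction  # fewer residual errors, but fewer kept runs
```

In this toy model, detection leaves fewer residual errors precisely because the ambiguous runs are thrown away, mirroring the trade-off described above.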

In the experiment, error correction was applied successfully to a large fraction of the runs. For a small fraction of the runs, the researchers were able to diagnose errors that couldn’t be corrected by the code, so those runs were discarded. The vast majority of errors in this research were corrected before data could be corrupted, and only a small fraction of the errors was uncorrectable.

According to the research team, there is no technical reason why real-time decoding couldn’t be used for all the experiments. Having two methods provided a way for the scientists to compare the impact of each method.

Next Steps For Quantinuum And Microsoft

In addition to their joint efforts, both Microsoft and Quantinuum have their own internal roadmaps that drive future developments. In the distant future, Quantinuum is looking at the prospect of creating a quantum machine with 1,000 logical qubits. Using today’s ratio, that would require 7,500 physical qubits.

In 2025, Quantinuum plans to introduce a new H-Series quantum computer called Helios. Dr. Strabley explained that Helios will be a cloud-based system offered both as-a-service and on-premises. Building on the recent announcement with Microsoft, she anticipates Helios will have 10 or more logical qubits. She regards this as rapid progress in scaling up the capabilities of the system compared to previous generations.

Meanwhile, once Microsoft has integrated highly reliable logical qubits into Azure Quantum Elements, the product will have the combined high performance of cloud computing, advanced AI models and improved quantum-computing capabilities. Microsoft plans to use logical qubits to scale a hybrid supercomputer to the point where its performance limits errors to one per 100 million operations.

Both companies also share an interest in topological research. Quantinuum’s topological interest is focused on the use of non-Abelian states for quantum information processing and ways to use non-Abelian braiding to create universal gates. Meanwhile, Microsoft’s research is centered on the development of topological qubits to take advantage of built-in error protection and digital controls. So far, Microsoft’s research team has made significant progress in its study of topological qubits.

Wrapping Up

The combination of Microsoft’s qubit-virtualization system and Quantinuum’s trapped-ion quantum computer with its QCCD architecture has done what wasn’t possible just a year ago: 14,000 experiments, flawlessly executed, without a single error. That is more than incremental progress; it is a major step forward in quantum error correction.

The success of this research doesn’t benefit only these two companies; it impacts the entire quantum ecosystem and provides evidence that reliable logical qubits will likely play a significant role in solving future problems. This work points to a future where thousands or hundreds of thousands of reliable logical qubits will help create solutions for complex scientific puzzles, from chemistry and materials science to drug discovery, clean energy research, financial modeling, logistics optimization and climate prediction.

Paul Smith-Goodson

Paul Smith-Goodson is the Moor Insights & Strategy Vice President and Principal Analyst for quantum computing and artificial intelligence. His early interest in quantum computing began while he was working on a joint AT&T and Bell Labs project, when Peter Shor, during a 360-degree overview of advanced projects at Murray Hill, presented his groundbreaking research in quantum error correction.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experience and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including cloud, enterprise SaaS, collaboration, client computing and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq (now HP) and AMD) leading strategy, product management, product marketing and corporate marketing, including three industry board appointments.