Catalogue Search | MBRL
Explore the vast range of titles available.
12 result(s) for "Arute, Frank"
Quantum supremacy using a programmable superconducting processor
by Boixo, Sergio; Quintana, Chris; Rieffel, Eleanor G.
in 639/766/483; 639/766/483/481; Algorithms
2019
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor^1. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits^2-7 to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy^8-14 for this specific computational task, heralding a much-anticipated computing paradigm.
Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute.
Journal Article
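As a quick sanity check on the figures quoted in the abstract above, a few lines of Python reproduce the state-space size and the implied speedup (a sketch only; the 10,000-year figure is the paper's own benchmark estimate, not something derived here):

```python
# Arithmetic check of the numbers quoted in the abstract.
dim = 2 ** 53                      # Hilbert-space dimension for 53 qubits
print(f"2^53 = {dim:.2e}")         # ~9.01e15, i.e. "about 10^16"

quantum_seconds = 200              # Sycamore sampling time per instance
classical_years = 10_000           # quoted classical estimate
classical_seconds = classical_years * 365.25 * 24 * 3600
speedup = classical_seconds / quantum_seconds
print(f"implied speedup ~ {speedup:.1e}x")   # roughly a billion-fold
```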
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
by Quintana, Chris; Erickson, Catherine; Mi, Xiao
in 639/766/483/2802; 639/766/483/481; Algorithms
2022
Scalable quantum computing can become a reality with error correction, provided that coherent qubits can be constructed in large arrays^1,2. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, impacts from cosmic rays and latent radioactivity violate these assumptions. An impinging particle can ionize the substrate and induce a burst of quasiparticles that destroys qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices^3-5, but the effect on large-scale algorithms and error correction remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales necessary for error correction. Here, we use space- and time-resolved measurements of a large-scale quantum processor to identify bursts of quasiparticles produced by high-energy rays. We track the events from their initial localized impact as they spread, simultaneously and severely limiting the energy coherence of all qubits and causing chip-wide failure. Our results provide direct insights into the impact of these damaging error bursts and highlight the necessity of mitigation to enable quantum computing to scale.
Cosmic rays flying through superconducting quantum devices create bursts of excitations that destroy qubit coherence. Rapid, spatially resolved measurements of qubit error rates make it possible to observe the evolution of the bursts across a chip.
Journal Article
Quantum supremacy using a programmable superconducting processor
by Arute, Frank; Bardin, Joseph C.; Babbush, Ryan
in Equipment and supplies; Quantum computing; Superconductors
2019
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor^1. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits^2-7 to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy^8-14 for this specific computational task, heralding a much-anticipated computing paradigm.
Journal Article
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
2021
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but lacking a measurement technique able to resolve a single event in detail, the effect on large scale algorithms and error correction in particular remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event's evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
Exponential suppression of bit or phase flip errors with repetitive error correction
by Hilton, Jeremy; Boixo, Sergio; Quintana, Chris
in Correlation analysis; Depolarization; Error analysis
2021
Realizing the potential of quantum computing will require achieving sufficiently low logical error rates. Many applications call for error rates in the 10^-15 regime, but state-of-the-art quantum platforms typically have physical error rates near 10^-3. Quantum error correction (QEC) promises to bridge this divide by distributing quantum logical information across many physical qubits so that errors can be detected and corrected. Logical errors are then exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold. QEC also requires that the errors are local and that performance is maintained over many rounds of error correction, two major outstanding experimental challenges. Here, we implement 1D repetition codes embedded in a 2D grid of superconducting qubits which demonstrate exponential suppression of bit or phase-flip errors, reducing logical error per round by more than 100× when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analyzing error correlations with high precision, and characterize the locality of errors in a device performing QEC for the first time. Finally, we perform error detection using a small 2D surface code logical qubit on the same device, and show that the results from both 1D and 2D codes agree with numerical simulations using a simple depolarizing error model. These findings demonstrate that superconducting qubits are on a viable path towards fault tolerant quantum computing.
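The "more than 100×" figure in the abstract above can be turned into a rough bound under a standard repetition-code scaling model (an assumption here, not stated in the abstract) in which each added pair of physical qubits suppresses the logical error per round by a constant factor Λ:

```python
# Hedged sketch: assume each added pair of qubits suppresses the logical
# error per round by a constant factor Lambda. Qubit counts (5 -> 21) and
# the >100x suppression are from the abstract; Lambda is inferred.
n_small, n_large = 5, 21
added_pairs = (n_large - n_small) // 2        # 8 extra qubit pairs
suppression = 100                              # ">100x" from the abstract
lambda_lower_bound = suppression ** (1 / added_pairs)
print(f"implied Lambda > {lambda_lower_bound:.2f}")   # ~1.78 per pair
```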
Quantum Approximate Optimization of Non-Planar Graph Problems on a Planar Superconducting Processor
2021
We demonstrate the application of the Google Sycamore superconducting qubit quantum processor to combinatorial optimization problems with the quantum approximate optimization algorithm (QAOA). Like past QAOA experiments, we study performance for problems defined on the (planar) connectivity graph of our hardware; however, we also apply the QAOA to the Sherrington-Kirkpatrick model and MaxCut, both high dimensional graph problems for which the QAOA requires significant compilation. Experimental scans of the QAOA energy landscape show good agreement with theory across even the largest instances studied (23 qubits) and we are able to perform variational optimization successfully. For problems defined on our hardware graph we obtain an approximation ratio that is independent of problem size and observe, for the first time, that performance increases with circuit depth. For problems requiring compilation, performance decreases with problem size but still provides an advantage over random guessing for circuits involving several thousand gates. This behavior highlights the challenge of using near-term quantum computers to optimize problems on graphs differing from hardware connectivity. As these graphs are more representative of real world instances, our results advocate for more emphasis on such problems in the developing tradition of using the QAOA as a holistic, device-level benchmark of quantum processors.
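For readers unfamiliar with the objective the QAOA optimizes, here is a minimal classical sketch of MaxCut on a small hypothetical graph (the graph and its size are illustrative only, not taken from the paper); a QAOA run is scored by the ratio of its expected cut value to this brute-force optimum:

```python
from itertools import product

# Toy MaxCut instance: a 4-node example graph (hypothetical, not from the paper).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_value(bits):
    """Number of edges whose endpoints fall on opposite sides of the cut."""
    return sum(1 for u, v in edges if bits[u] != bits[v])

# Brute-force over all 2^4 partitions; feasible only for tiny graphs.
best = max(cut_value(b) for b in product((0, 1), repeat=4))
print("max cut =", best)
# The QAOA approximation ratio is <cut value of sampled bitstrings> / best.
```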
Information Scrambling in Computationally Complex Quantum Circuits
2021
Interaction in quantum systems can spread initially localized quantum information into the many degrees of freedom of the entire system. Understanding this process, known as quantum scrambling, is the key to resolving various conundrums in physics. Here, by measuring the time-dependent evolution and fluctuation of out-of-time-order correlators, we experimentally investigate the dynamics of quantum scrambling on a 53-qubit quantum processor. We engineer quantum circuits that distinguish the two mechanisms associated with quantum scrambling, operator spreading and operator entanglement, and experimentally observe their respective signatures. We show that while operator spreading is captured by an efficient classical model, operator entanglement requires exponentially scaled computational resources to simulate. These results open the path to studying complex and practically relevant physical observables with near-term quantum processors.
Hartree-Fock on a superconducting qubit quantum computer
by Boixo, Sergio; Quintana, Chris; Gidney, Craig
in Computer simulation; Entangled states; Experiments
2020
As the search continues for useful applications of noisy intermediate scale quantum devices, variational simulations of fermionic systems remain one of the most promising directions. Here, we perform a series of quantum simulations of chemistry, the largest of which involved a dozen qubits, 78 two-qubit gates, and 114 one-qubit gates. We model the binding energy of H2, H6, H8, H10 and H12 chains as well as the isomerization of diazene. We also demonstrate error-mitigation strategies based on N-representability which dramatically improve the effective fidelity of our experiments. Our parameterized ansatz circuits realize the Givens rotation approach to non-interacting fermion evolution, which we variationally optimize to prepare the Hartree-Fock wavefunction. This ubiquitous algorithmic primitive corresponds to a rotation of the orbital basis and is required by many proposals for correlated simulations of molecules and Hubbard models. Because non-interacting fermion evolutions are classically tractable to simulate, yet still generate highly entangled states over the computational basis, we use these experiments to benchmark the performance of our hardware while establishing a foundation for scaling up more complex correlated quantum simulations of chemistry.
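The Givens-rotation primitive mentioned in the abstract above can be sketched classically: a 2×2 rotation mixing one pair of orbitals, embedded in an identity matrix. This minimal numpy illustration (matrix size, indices, and angle are arbitrary choices for the example) shows that such a rotation is orthogonal and therefore maps one orbital basis to another without changing inner products:

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n identity with a 2x2 rotation by theta mixing rows/cols i and j."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i], g[j, j] = c, c
    g[i, j], g[j, i] = -s, s
    return g

g = givens(4, 1, 2, 0.3)
# A product of such rotations stays orthogonal: an orbital-basis change.
print("orthogonal:", np.allclose(g @ g.T, np.eye(4)))
```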