1,258 results for "Jeffrey, Evan"
Quantum approximate optimization of non-planar graph problems on a planar superconducting processor
Faster algorithms for combinatorial optimization could prove transformative for diverse areas such as logistics, finance and machine learning. Accordingly, the possibility of quantum enhanced optimization has driven much interest in quantum technologies. Here we demonstrate the application of the Google Sycamore superconducting qubit quantum processor to combinatorial optimization problems with the quantum approximate optimization algorithm (QAOA). Like past QAOA experiments, we study performance for problems defined on the planar connectivity graph native to our hardware; however, we also apply the QAOA to the Sherrington–Kirkpatrick model and MaxCut, non-native problems that require extensive compilation to implement. For hardware-native problems, which are classically efficient to solve on average, we obtain an approximation ratio that is independent of problem size and observe that performance increases with circuit depth. For problems requiring compilation, performance decreases with problem size. Circuits involving several thousand gates still present an advantage over random guessing but not over some efficient classical algorithms. Our results suggest that it will be challenging to scale near-term implementations of the QAOA for problems on non-native graphs. As these graphs are closer to real-world instances, we suggest more emphasis should be placed on such problems when using the QAOA to benchmark quantum processors. It is hoped that quantum computers may be faster than classical ones at solving optimization problems. Here the authors implement a quantum optimization algorithm over 23 qubits but find more limited performance when an optimization problem structure does not match the underlying hardware.
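The QAOA named above is easy to prototype classically at small sizes. Below is a minimal illustrative sketch of depth-1 (p = 1) QAOA for MaxCut via dense statevector simulation; it is a generic textbook rendering of the algorithm, not the paper's Sycamore implementation, and the five-edge graph and the angle grid are made-up assumptions.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # hypothetical example graph
n = 4

# Diagonal cost operator: C|x> = cut(x)|x> over the computational basis.
costs = np.array([sum(((x >> i) & 1) != ((x >> j) & 1) for i, j in edges)
                  for x in range(2 ** n)], dtype=float)

def qaoa_expectation(gamma, beta):
    """<C> after one cost layer e^{-i*gamma*C} and one X-mixer layer."""
    state = np.full(2 ** n, 2.0 ** (-n / 2), dtype=complex)  # |+...+>
    state = np.exp(-1j * gamma * costs) * state              # cost phases
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])      # e^{-i*beta*X}
    mixer = np.array([[1.0 + 0j]])
    for _ in range(n):
        mixer = np.kron(mixer, rx)                           # same gate on every qubit
    state = mixer @ state
    return float(np.real(costs @ (np.abs(state) ** 2)))

# Coarse grid search over the two angles; report the approximation ratio.
grid = np.linspace(0, np.pi, 40)
best = max(qaoa_expectation(g, b) for g in grid for b in grid)
print(f"best <C> = {best:.3f} of max cut {costs.max():.0f} "
      f"(ratio {best / costs.max():.3f})")
```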
Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits
Scalable quantum computing can become a reality with error correction, provided that coherent qubits can be constructed in large arrays [1,2]. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, impacts from cosmic rays and latent radioactivity violate these assumptions. An impinging particle can ionize the substrate and induce a burst of quasiparticles that destroys qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices [3-5], but the effect on large-scale algorithms and error correction remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales necessary for error correction. Here, we use space- and time-resolved measurements of a large-scale quantum processor to identify bursts of quasiparticles produced by high-energy rays. We track the events from their initial localized impact as they spread, simultaneously and severely limiting the energy coherence of all qubits and causing chip-wide failure. Our results provide direct insights into the impact of these damaging error bursts and highlight the necessity of mitigation to enable quantum computing to scale. Cosmic rays flying through superconducting quantum devices create bursts of excitations that destroy qubit coherence. Rapid, spatially resolved measurements of qubit error rates make it possible to observe the evolution of the bursts across a chip.
Suppressing quantum errors by scaling a surface code logical qubit
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation. A study demonstrating increasing error suppression with larger surface code logical qubits, implemented on a superconducting quantum processor.
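For context on the per-cycle rates quoted above, a per-cycle logical error rate can be related to a cumulative error probability with the standard fidelity-decay model F(n) = (1 − 2ε)ⁿ, a common convention in this literature; the paper's exact fitting procedure may differ, so treat this as a sketch under that assumption.

```python
# Per-cycle logical error rates quoted in the abstract above.
eps_d5, eps_d3 = 0.02914, 0.03028

def logical_error_prob(eps, n_cycles):
    # Probability the logical qubit has flipped after n cycles under the
    # fidelity-decay model: flips with probability eps per cycle can
    # cancel in pairs, giving P(n) = (1 - (1 - 2*eps)**n) / 2.
    return 0.5 * (1.0 - (1.0 - 2.0 * eps) ** n_cycles)

for label, eps in [("d=5", eps_d5), ("d=3", eps_d3)]:
    print(f"{label}: per cycle {eps:.4%}, over 25 cycles "
          f"{logical_error_prob(eps, 25):.2%}")
```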
Time-crystalline eigenstate order on a quantum processor
Quantum many-body systems display rich phase structure in their low-temperature equilibrium states [1]. However, much of nature is not in thermal equilibrium. Remarkably, it was recently predicted that out-of-equilibrium systems can exhibit novel dynamical phases [2-8] that may otherwise be forbidden by equilibrium thermodynamics, a paradigmatic example being the discrete time crystal (DTC) [7,9-15]. Concretely, dynamical phases can be defined in periodically driven many-body-localized (MBL) systems via the concept of eigenstate order [7,16,17]. In eigenstate-ordered MBL phases, the entire many-body spectrum exhibits quantum correlations and long-range order, with characteristic signatures in late-time dynamics from all initial states. It is, however, challenging to experimentally distinguish such stable phases from transient phenomena, or from regimes in which the dynamics of a few select states can mask typical behaviour. Here we implement tunable controlled-phase (CPHASE) gates on an array of superconducting qubits to experimentally observe an MBL-DTC and demonstrate its characteristic spatiotemporal response for generic initial states [7,9,10]. Our work employs a time-reversal protocol to quantify the impact of external decoherence, and leverages quantum typicality to circumvent the exponential cost of densely sampling the eigenspectrum. Furthermore, we locate the phase transition out of the DTC with an experimental finite-size analysis. These results establish a scalable approach to studying non-equilibrium phases of matter on quantum processors. A study establishes a scalable approach to engineer and characterize a many-body-localized discrete time crystal phase on a superconducting quantum processor.
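The period-doubling signature described above can be illustrated with the standard kicked-Ising toy model of a DTC: imperfect global X flips interleaved with random Ising couplings and fields, after which single-qubit magnetizations oscillate at twice the drive period. The sketch below uses that generic model with invented parameters, not the experiment's CPHASE gate set, so only the qualitative sign alternation should be read into it.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
dim = 2 ** n
g = 0.94                                    # imperfect pi-pulse (assumed value)

bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1   # (dim, n) bit table
z = 1 - 2 * bits                                        # Z eigenvalues, +-1

# Random diagonal part of the Floquet period: nearest-neighbour ZZ
# couplings plus onsite Z fields (disorder ranges are made up).
phi = rng.uniform(0.5, 1.5, n - 1)
h = rng.uniform(-np.pi, np.pi, n)
diag_phase = np.exp(-1j * ((z[:, :-1] * z[:, 1:]) @ phi + z @ h))

# Global imperfect X flip: e^{-i*(pi*g/2)*X} on every qubit.
rx = np.array([[np.cos(np.pi * g / 2), -1j * np.sin(np.pi * g / 2)],
               [-1j * np.sin(np.pi * g / 2), np.cos(np.pi * g / 2)]])
ux = np.array([[1.0 + 0j]])
for _ in range(n):
    ux = np.kron(ux, rx)

state = np.zeros(dim, complex)
state[rng.integers(dim)] = 1.0              # random initial bitstring
for step in range(1, 21):
    state = diag_phase * (ux @ state)       # one Floquet period
    mz = float(np.abs(state) ** 2 @ z[:, 0])     # <Z> on qubit 0
    print(f"t={step:2d}  <Z_0>={mz:+.2f}")  # sign alternates each period
```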
Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves
Kepler provides light curves of 156,000 stars with unprecedented precision. However, the raw data as they come from the spacecraft contain significant systematic and stochastic errors. These errors, which include discontinuities, systematic trends, and outliers, obscure the astrophysical signals in the light curves. To correct these errors is the task of the Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline. The original version of PDC in Kepler did not meet the extremely high performance requirements for the detection of minuscule planet transits or highly accurate analysis of stellar activity and rotation. One particular deficiency was that astrophysical features were often removed as a side effect of the removal of errors. In this article we introduce the completely new and significantly improved version of PDC, which was implemented in Kepler SOC version 8.0. This new PDC version, which utilizes a Bayesian approach for removal of systematics, reliably corrects errors in the light curves while at the same time preserving planet transits and other astrophysically interesting signals. We describe the architecture and the algorithms of this new PDC module, show typical errors encountered in Kepler data, and illustrate the corrections using real light curve examples.
Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction
With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The presearch data conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves.
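The MAP cotrending idea above reads naturally as regularized least squares: basis vectors from an SVD over quiet reference stars, then a fit whose Gaussian prior pulls coefficients toward ensemble-derived values. The toy below sketches only that structure with synthetic data; the real PDC pipeline is far more elaborate, and every shape, rate, and prior strength here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_ref, n_basis = 500, 40, 4
t = np.linspace(0, 1, n_time)

# Hidden systematics shared across the ensemble (synthetic stand-ins).
trend = np.stack([t, t ** 2, np.sin(9 * t), np.cos(4 * t)])
ref = (rng.normal(size=(n_ref, 4)) @ trend
       + 0.01 * rng.normal(size=(n_ref, n_time)))

# Cotrending basis vectors: leading right singular vectors of the
# mean-subtracted quiet-star ensemble.
_, _, vt = np.linalg.svd(ref - ref.mean(axis=1, keepdims=True),
                         full_matrices=False)
V = vt[:n_basis].T                        # (n_time, n_basis)

# Target star: transit-like dips plus its own dose of the systematics.
signal = -0.02 * (((t * 20) % 4) < 0.2)
y = signal + rng.normal(size=4) @ trend + 0.01 * rng.normal(size=n_time)

# MAP fit = least squares with a Gaussian prior on the coefficients:
#   theta = argmin ||y - V@theta||^2/sigma^2 + (theta-theta0)^T Lam (theta-theta0)
sigma2 = 0.01 ** 2
theta0 = np.zeros(n_basis)      # prior mean; PDC derives this from ensemble fits
lam = 1e2 * np.eye(n_basis)     # prior precision (invented strength)
theta = np.linalg.solve(V.T @ V / sigma2 + lam,
                        V.T @ y / sigma2 + lam @ theta0)
corrected = y - V @ theta       # systematics removed, transits preserved
print("rms error before:", np.std(y - signal),
      " after:", np.std(corrected - signal))
```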
Quantum supremacy using a programmable superconducting processor
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor [1]. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits [2-7] to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2⁵³ (about 10¹⁶). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy [8-14] for this specific computational task, heralding a much-anticipated computing paradigm. Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute.
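The classical verification mentioned above was done with linear cross-entropy benchmarking (XEB), which estimates fidelity as F = 2ⁿ · E[p_ideal(x)] − 1 over measured bitstrings x. The sketch below illustrates that estimator with a toy Porter-Thomas distribution standing in for the simulated circuit probabilities; the size and both samplers are made-up stand-ins, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                  # toy size (the experiment used 53 qubits)
dim = 2 ** n

# Porter-Thomas-like "ideal" output distribution, as expected for
# random circuits; this stands in for classically simulated probabilities.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
p_ideal = np.abs(amps) ** 2
p_ideal /= p_ideal.sum()

def xeb_fidelity(samples):
    # Linear XEB: F = dim * mean(p_ideal over measured bitstrings) - 1.
    return dim * p_ideal[samples].mean() - 1.0

perfect = rng.choice(dim, size=100_000, p=p_ideal)   # noiseless sampler
uniform = rng.integers(dim, size=100_000)            # fully depolarized sampler
print("XEB, ideal sampler  :", round(xeb_fidelity(perfect), 3))  # ~1
print("XEB, uniform sampler:", round(xeb_fidelity(uniform), 3))  # ~0
```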
Exponential suppression of bit or phase errors with cyclic error correction
Realizing the potential of quantum computing requires sufficiently low logical error rates [1]. Many applications call for error rates as low as 10⁻¹⁵ (refs. 2-9), but state-of-the-art quantum platforms typically have physical error rates near 10⁻³ (refs. 10-14). Quantum error correction [15-17] promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected. Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation. Here we implement one-dimensional repetition codes embedded in a two-dimensional grid of superconducting qubits that demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round more than 100-fold when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analysing error correlations with high precision, allowing us to characterize error locality while performing quantum error correction. Finally, we perform error detection with a small logical qubit using the 2D surface code on the same device [18,19] and show that the results from both one- and two-dimensional codes agree with numerical simulations that use a simple depolarizing error model. These experimental demonstrations provide a foundation for building a scalable fault-tolerant quantum computer with superconducting qubits. Repetition codes running many cycles of quantum error correction achieve exponential suppression of errors with increasing numbers of qubits.
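The exponential suppression reported above is easy to reproduce in the simplest classical caricature: an n-qubit bit-flip repetition code with majority-vote decoding, which fails only when more than half the qubits flip in a round. The calculation below ignores measurement errors and multi-round decoding, and the physical error rate is an invented illustrative value.

```python
from math import comb

p = 0.05  # hypothetical physical flip probability per round
for n in (5, 9, 13, 17, 21):
    # Majority vote fails when more than n//2 qubits flip; sum the
    # binomial tail exactly.
    p_logical = sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                    for k in range(n // 2 + 1, n + 1))
    print(f"n={n:2d} qubits: logical error per round ~ {p_logical:.2e}")
```

Each step up in code size multiplies the logical error down by a roughly constant factor, which is the exponential suppression the abstract describes.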
Overcoming leakage in quantum error correction
The leakage of quantum information out of the two computational states of a qubit into other energy states represents a major challenge for quantum error correction. During the operation of an error-corrected algorithm, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of the logical error with scale, thus challenging the feasibility of quantum error correction as a path towards fault-tolerant quantum computation. Here, we demonstrate a distance-3 surface code and distance-21 bit-flip code on a quantum processor for which leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a tenfold reduction in the steady-state leakage population of the data qubits encoding the logical state and an average leakage population of less than 1 × 10⁻³ throughout the entire device. Our leakage removal process efficiently returns the system back to the computational basis. Adding it to a code circuit would prevent leakage from inducing correlated error across cycles. With this demonstration that leakage can be contained, we have resolved a key challenge for practical quantum error correction at scale. Physical realizations of qubits are often vulnerable to leakage errors, where the system ends up outside the basis used to store quantum information. A leakage removal protocol can suppress the impact of leakage on quantum error-correcting codes.
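The steady-state behaviour described above follows from a simple rate argument: if each cycle injects leakage at rate γ and a removal step drains a fraction r of what is present, the leaked population saturates near γ/r instead of growing with circuit depth. A toy sketch with invented rates:

```python
# Toy rate model: inject leakage at rate gamma per cycle, then remove a
# fraction r of it. Both values below are made-up illustrative numbers.
gamma, r = 2e-4, 0.5
p = 0.0
for cycle in range(1, 51):
    p = p * (1 - r) + gamma            # one cycle: decay toward gamma / r
    if cycle in (1, 5, 10, 50):
        print(f"cycle {cycle:2d}: leakage population {p:.2e}")
print("steady state ~ gamma / r =", gamma / r)
```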
Expanding the Scope, Structure, and Function of Synthetic Biologics with Diels-Alder Cycloadditions and Other Strategies
Peptides offer distinct advantages as a protein targeting modality for mechanistic probe and drug development. Given their large surface area and the potential for highly tailored design, peptide ligands can have exquisite target-binding affinities and specificities. To counteract conformational and metabolic instabilities, many peptide stabilization strategies have been deployed, in some cases leading to cell-permeable bioactive compounds. A noteworthy peptide stabilization chemistry is ring-closing olefin metathesis, used in the formation of stapled peptides, constrained alpha-helical peptides that serve as protein-protein interaction inhibitors. Herein we report first-in-class cell-active stapled peptide inhibitors of RAB25, a recalcitrant protein target implicated in the pathogenesis of a variety of cancers. This work exemplifies the potential of stabilized peptides to engage challenging targets in relevant biological contexts; however, the current repertoire of peptide stabilization chemistries has several limitations, including: the requirement of exogenous reagents or harsh conditions often incompatible with unprotected peptides, full-length proteins, or aqueous conditions; a narrow scope of applicable folds and resulting stabilized structures; and limited diversity in stabilizing linker structure, which rigidifies peptide conformation and can itself impact target-binding. To begin addressing these shortcomings, we applied the Diels-Alder reaction as a bioorthogonal chemistry for peptide cyclization. Our studies confirm its suitability for both on-resin and in-solution peptide cyclization, in organic and aqueous solutions, respectively. Cyclization is rapid and high-yielding across a range of diene and dienophile functional groups and peptide folds, predominantly resulting in cycloadducts with endo stereochemistry as ascertained by NMR and X-ray crystallographic studies. Further, Diels-Alder cyclized peptides display enhanced bioactivity, with cycloadduct composition and geometry having differential impacts on target-binding, confirmed by the observation of substantial cycloadduct-protein contacts in a crystal structure of an SRC2-derived Diels-Alder cyclized peptide bound to its target estrogen receptor alpha. Additionally, Diels-Alder peptide cyclization is shown to be compatible with both ring-closing olefin metathesis and disulfide formation, suggesting the broad applicability of this chemistry alongside other stabilization chemistries. Separate work reports on the design and screening of peptide inhibitors targeting a highly conserved coronavirus methyltransferase complex, nsp16/nsp10, central to the infectivity of SARS-CoV-2, the cause of the ongoing COVID-19 global pandemic. Finally, we present a robust, multi-readout cell penetration, compound stability, and membrane disruption assay for peptides and other synthetic biologics. This allows for conclusive reporting of key pharmacologic properties of these promising protein-targeting modalities. Taken together, these works serve to expand the application, function, and study of synthetic biologics and related compounds for probing biology and treating disease.