128 results for "Proctor, Timothy"
Detecting and tracking drift in quantum information processors
If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed. Time-dependent errors are one of the main obstacles to fully-fledged quantum information processing. Here, the authors develop a general methodology to monitor time-dependent errors, which could be used to make other characterisation protocols time-resolved, and demonstrate it on a trapped-ion qubit.
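The spectral approach described above lends itself to a compact illustration. Below is a minimal sketch, not the authors' exact pipeline: it turns a binary success/fail time series for one circuit into the frequency domain and flags any Fourier mode whose power clears a significance threshold. The `detect_drift` function and its Bonferroni-style threshold are illustrative choices, not the paper's tuned statistics.

```python
import numpy as np
from scipy.fft import dct
from scipy.stats import chi2

def detect_drift(outcomes, alpha=0.05):
    """Flag time dependence in a binary outcome series (1 = success).

    Sketch of spectral drift detection: standardize the series, take its
    discrete cosine transform, and compare each mode's power against a
    multiple-testing-corrected chi-squared threshold. For a constant
    underlying probability, each power is approximately chi-squared(1).
    """
    outcomes = np.asarray(outcomes, dtype=float)
    p = outcomes.mean()
    if p in (0.0, 1.0):  # an all-0 or all-1 series carries no drift signal
        return False, np.zeros(len(outcomes) - 1)
    z = (outcomes - p) / np.sqrt(p * (1 - p))
    powers = dct(z, norm='ortho')[1:] ** 2  # drop the DC (mean) mode
    threshold = chi2.ppf(1 - alpha / len(powers), df=1)
    return bool(np.any(powers > threshold)), powers

# Example: an oscillating success probability is flagged, a stable one is not.
rng = np.random.default_rng(7)
t = np.arange(2000)
drifting = rng.random(2000) < 0.8 + 0.1 * np.sin(2 * np.pi * t / 500)
stable = rng.random(2000) < 0.8
print(detect_drift(drifting)[0], detect_drift(stable)[0])  # True False (up to sampling noise)
```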
Precision tomography of a three-qubit donor quantum processor in silicon
Nuclear spins were among the first physical platforms to be considered for quantum information processing [1,2], because of their exceptional quantum coherence [3] and atomic-scale footprint. However, their full potential for quantum computing has not yet been realized, owing to the lack of methods with which to link nuclear qubits within a scalable device combined with multi-qubit operations with sufficient fidelity to sustain fault-tolerant quantum computation. Here we demonstrate universal quantum logic operations using a pair of ion-implanted ³¹P donor nuclei in a silicon nanoelectronic device. A nuclear two-qubit controlled-Z gate is obtained by imparting a geometric phase to a shared electron spin [4], and used to prepare entangled Bell states with fidelities up to 94.2(2.7)%. The quantum operations are precisely characterized using gate set tomography (GST) [5], yielding one-qubit average gate fidelities up to 99.95(2)%, two-qubit average gate fidelity of 99.37(11)% and two-qubit preparation/measurement fidelities of 98.95(4)%. These three metrics indicate that nuclear spins in silicon are approaching the performance demanded in fault-tolerant quantum processors [6]. We then demonstrate entanglement between the two nuclei and the shared electron by producing a Greenberger–Horne–Zeilinger three-qubit state with 92.5(1.0)% fidelity. Because electron spin qubits in semiconductors can be further coupled to other electrons [7–9] or physically shuttled across different locations [10,11], these results establish a viable route for scalable quantum information processing using donor nuclear and electron spins. Universal quantum logic operations with fidelity exceeding 99%, approaching the threshold of fault tolerance, are realized in a scalable silicon device comprising an electron and two phosphorus nuclei, and a fidelity of 92.5% is obtained for a three-qubit entangled state.
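For readers comparing the quoted figures: GST-style reports conventionally interconvert process (entanglement) fidelity and average gate fidelity via the standard textbook relation below, where d is the Hilbert-space dimension (d = 4 for a two-qubit gate). This is general background, not a formula specific to this paper.

```latex
F_{\mathrm{avg}} \;=\; \frac{d\,F_{\mathrm{pro}} + 1}{d + 1},
\qquad \text{so for a two-qubit gate } (d = 4):\quad
F_{\mathrm{avg}} \;=\; \frac{4\,F_{\mathrm{pro}} + 1}{5}.
```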
Measuring the capabilities of quantum computers
Quantum computers can now run interesting programs, but each processor’s capability—the set of programs that it can run successfully—is limited by hardware errors. These errors can be complicated, making it difficult to accurately predict a processor’s capability. Benchmarks can be used to measure capability directly, but current benchmarks have limited flexibility and scale poorly to many-qubit processors. We show how to construct scalable, efficiently verifiable benchmarks based on any program by using a technique that we call circuit mirroring. With it, we construct two flexible, scalable volumetric benchmarks based on randomized and periodically ordered programs. We use these benchmarks to map out the capabilities of twelve publicly available processors, and to measure the impact of program structure on each one. We find that standard error metrics are poor predictors of whether a program will run successfully on today’s hardware, and that current processors vary widely in their sensitivity to program structure. Evaluations of quantum computers across architectures need reliable benchmarks. A class of benchmarks that can directly reflect the structure of any algorithm shows that different quantum computers have considerable variations in performance.
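The mirroring construction is easy to sketch. The fragment below is a simplified illustration, not the paper's full protocol (which also inserts a random central Pauli layer and randomizes the target bitstring): it builds a circuit followed by its exact inverse, so an ideal device returns the all-zeros string and any deviation measures hardware error. Qiskit's stock `random_circuit` generator is used here purely for convenience.

```python
from qiskit import QuantumCircuit
from qiskit.circuit.random import random_circuit

def mirror_circuit(n_qubits: int, depth: int, seed: int = 0) -> QuantumCircuit:
    """Build a simplified mirror benchmark circuit.

    The first half is a random unitary circuit; the second half is its exact
    inverse, so the ideal output is |0...0> and the observed frequency of the
    all-zeros bitstring is a direct success metric.
    """
    half = random_circuit(n_qubits, depth, seed=seed)  # unitary gates only, by default
    qc = half.compose(half.inverse())
    qc.measure_all()
    return qc

qc = mirror_circuit(4, depth=8)
print(qc.depth(), qc.num_qubits)
```

Sweeping such circuits over width and depth, and recording success probabilities, yields the volumetric capability maps the abstract describes.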
Measuring error rates of mid-circuit measurements
High-fidelity mid-circuit measurements, which read out the state of specific qubits in a multiqubit processor without destroying them or disrupting their neighbors, are a critical component for useful quantum computing. They enable fault-tolerant quantum error correction, dynamic circuits, and other paths to solving classically intractable problems. But there are few methods to assess their performance comprehensively. In this work, we address this gap by introducing the first randomized benchmarking protocol that measures the rate at which mid-circuit measurements induce errors in many-qubit circuits. Using this protocol, we detect and eliminate previously undetected measurement-induced crosstalk in a 20-qubit trapped-ion quantum computer. Then, we use the same protocol to measure the rate of measurement-induced crosstalk error on a 27-qubit IBM Q processor, and quantify how much of that error is eliminated by dynamical decoupling. Characterisation of quantum operations is fundamental in quantum technologies - quantum computing in particular - but there’s currently no reliably efficient method to assess mid-circuit measurements, which are a key component for subfields like quantum error correction. Here, the authors fill this gap, integrating MCMs into the framework of randomized benchmarking.
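One way to make the measurement-crosstalk question concrete (an illustrative A/B comparison, not the authors' randomized-benchmarking protocol) is to run matched circuits that differ only in whether a spectator qubit is measured mid-circuit, and attribute any drop in the data qubits' success probability to the mid-circuit measurement:

```python
from qiskit import QuantumCircuit
from qiskit.circuit.random import random_circuit

def mcm_probe_pair(n_data: int, depth: int, seed: int = 0):
    """Yield two circuits identical on the data qubits; the second measures a
    spectator qubit at the midpoint. The gap between their all-zeros success
    probabilities (estimated on hardware) probes measurement-induced crosstalk."""
    half = random_circuit(n_data, depth, seed=seed)
    for with_mcm in (False, True):
        qc = QuantumCircuit(n_data + 1, n_data + 1)
        qc.compose(half, qubits=range(n_data), inplace=True)
        if with_mcm:
            qc.measure(n_data, n_data)  # mid-circuit measurement of the spectator
        qc.compose(half.inverse(), qubits=range(n_data), inplace=True)
        qc.measure(range(n_data), range(n_data))
        yield qc

idle_circuit, mcm_circuit = mcm_probe_pair(3, depth=6)
```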
Probing Context-Dependent Errors in Quantum Processors
Gates in error-prone quantum information processors are often modeled using sets of one- and two-qubit process matrices, the standard model of quantum errors. However, the results of quantum circuits on real processors often depend on additional external “context” variables. Such contexts may include the state of a spectator qubit, the time of data collection, or the temperature of control electronics. In this article, we demonstrate a suite of simple, widely applicable, and statistically rigorous methods for detecting context dependence in quantum-circuit experiments. They can be used on any data that comprise two or more “pools” of measurement results obtained by repeating the same set of quantum circuits in different contexts. These tools may be integrated seamlessly into standard quantum device characterization techniques, like randomized benchmarking or tomography. We experimentally demonstrate these methods by detecting and quantifying crosstalk and drift on the publicly accessible 16-qubit ibmqx3.
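The "two pools" comparison maps naturally onto a standard contingency-table test. Here is a minimal sketch; the paper's actual suite aggregates many such per-circuit tests with multiple-testing corrections, which this omits:

```python
import numpy as np
from scipy.stats import chi2_contingency

def context_test(counts_a, counts_b, alpha=0.05):
    """Test whether one circuit's outcome frequencies differ between two
    contexts (e.g., spectator-qubit state, or early vs. late data collection).

    counts_a, counts_b: dicts mapping measured bitstrings to observed counts.
    """
    outcomes = sorted(set(counts_a) | set(counts_b))
    table = np.array([[counts_a.get(o, 0) for o in outcomes],
                      [counts_b.get(o, 0) for o in outcomes]])
    stat, pvalue, dof, _ = chi2_contingency(table)
    return pvalue < alpha, pvalue

# Example: the same circuit measured with a spectator prepared in |0> vs. |1>.
flagged, p = context_test({'00': 480, '01': 20}, {'00': 430, '01': 70})
print(flagged, p)  # True, p << 0.05: outcome statistics depend on context
```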
Efficient flexible characterization of quantum processors with nested error models
We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and to each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
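The iterate-and-test loop is simple to write down. Below is a schematic version: the hypothetical `fit` and `loglikelihood` callables stand in for a real estimator, and the wildcard-error computation, which is specific to the authors' framework, is noted only in a comment. Each larger model is kept only if a likelihood-ratio test says its extra parameters significantly improve the fit.

```python
from scipy.stats import chi2

def select_model(models, fit, loglikelihood, data, alpha=0.05):
    """Walk a nested model sequence and return the last statistically justified fit.

    models: list of (model, n_params) pairs, ordered fewest to most parameters.
    fit(model, data) -> fitted model;  loglikelihood(fitted, data) -> float.
    """
    best_model, best_k = models[0]
    best_fit = fit(best_model, data)
    best_ll = loglikelihood(best_fit, data)
    for model, k in models[1:]:
        fitted = fit(model, data)
        ll = loglikelihood(fitted, data)
        # Likelihood-ratio test: under the smaller model, 2 * delta(log L) is
        # approximately chi-squared with dof = number of added parameters.
        if 2 * (ll - best_ll) > chi2.ppf(1 - alpha, df=k - best_k):
            best_model, best_k, best_fit, best_ll = model, k, fitted, ll
    return best_fit  # the real technique also reports this model's wildcard error
```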
Benchmarking quantum logic operations relative to thresholds for fault tolerance
Contemporary methods for benchmarking noisy quantum processors typically measure average error rates or process infidelities. However, thresholds for fault-tolerant quantum error correction are given in terms of worst-case error rates—defined via the diamond norm—which can differ from average error rates by orders of magnitude. One method for resolving this discrepancy is to randomize the physical implementation of quantum gates, using techniques like randomized compiling (RC). In this work, we use gate set tomography to perform precision characterization of a set of two-qubit logic gates to study RC on a superconducting quantum processor. We find that, under RC, gate errors are accurately described by a stochastic Pauli noise model without coherent errors, and that spatially correlated coherent errors and non-Markovian errors are strongly suppressed. We further show that the average and worst-case error rates are equal for randomly compiled gates, and measure a maximum worst-case error of 0.0197(3) for our gate set. Our results show that randomized benchmarks are a viable route to both verifying that a quantum processor’s error rates are below a fault-tolerance threshold, and to bounding the failure rates of near-term algorithms, if—and only if—gates are implemented via randomization methods which tailor noise.
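The gap the abstract refers to can be stated compactly. For a d-dimensional gate with average infidelity r, a purely stochastic Pauli error gives a diamond-norm (worst-case) error of the same order as r, whereas a purely coherent (unitary) error scales as the square root of r. These are the standard textbook scalings, quoted for orientation rather than taken from this paper:

```latex
\text{stochastic Pauli noise:}\quad
\epsilon_\diamond = \frac{d+1}{d}\, r,
\qquad\qquad
\text{coherent (unitary) error:}\quad
\epsilon_\diamond \sim \sqrt{r}.
```

So a gate with r near 10^-4 can have a worst-case rate near 10^-2 if its error is coherent, which is why tailoring noise into stochastic Pauli form via randomized compiling closes the average/worst-case gap.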
Minimal ancilla mediated quantum computation
Schemes of universal quantum computation in which the interactions between the computational elements, in a computational register, are mediated by some ancillary system are of interest due to their relevance to the physical implementation of a quantum computer. Furthermore, reducing the level of control required over both the ancillary and register systems has the potential to simplify any experimental implementation. In this paper we consider how to minimise the control needed to implement universal quantum computation in an ancilla-mediated fashion. Considering computational schemes which require no measurements and hence evolve by unitary dynamics for the global system, we show that when employing an ancilla qubit there are certain fixed-time ancilla-register interactions which, along with ancilla initialisation in the computational basis, are universal for quantum computation with no additional control of either the ancilla or the register. We develop two distinct models based on locally inequivalent interactions and we then discuss the relationship between these unitary models and the measurement-based ancilla-mediated models known as ancilla-driven quantum computation.
Benchmarking quantum computers
The rapid pace of development in quantum computing technology has sparked a proliferation of benchmarks to assess the performance of quantum computing hardware and software. However, not all benchmarks are of equal merit. Good ones empower scientists, engineers, programmers and users to understand the power of a computing system, whereas bad ones can misdirect research and inhibit progress. In this Perspective, we survey the science of quantum computer benchmarking. We discuss the role of benchmarks and benchmarking and how good benchmarks can drive and measure progress towards the long-term goal of useful quantum computations, known as quantum utility. We explain how different kinds of benchmark quantify the performance of different parts of a quantum computer, discuss existing benchmarks, examine recent trends in benchmarking, and highlight important open research questions in this field. Although quantum computers are still in their infancy, their computational power is growing rapidly. This Perspective surveys and critiques the known ways to benchmark quantum computer performance, highlighting new challenges anticipated on the road to utility-scale quantum computing.
Unremitting Health-Care-Utilization Outcomes of Tertiary Rehabilitation of Patients with Chronic Musculoskeletal Disorders
BACKGROUND: Unremitting health-care-seeking behaviors have only infrequently been addressed in the literature as an outcome of treatment for chronic disabling work-related musculoskeletal disorders. The limited research has never focused on the patient as the "driver" of health-care utilization, to our knowledge. As a result, little attention has been paid to the differences between treated patients who seek additional health care from a new provider and those who do not. The purpose of this project was to examine the demographic and socioeconomic outcome variables that characterize patients with a chronic disabling work-related musculoskeletal disorder who pursue additional health-care services from a new provider following the completion of a tertiary rehabilitation treatment program. A prospective comparison cohort design was employed to assess characteristics and outcomes of these patients, all of whom were treated with the same interdisciplinary protocol.
METHODS: A cohort of 1316 patients who had been consecutively treated with a rehabilitation program for functional restoration was divided into two groups on the basis of whether they had sought treatment from a new health-care provider in the year following completion of treatment. Group 0 (966 patients) did not visit a new health-care provider for treatment of their original occupational injury, and Group 1 (350 patients) visited a new provider on at least one occasion. A structured clinical interview to assess socioeconomic outcomes was carried out one year after discharge from the treatment program; this interview addressed pain, health-care utilization, work status, recurrent injury, and whether the Workers' Compensation case had been closed.
RESULTS: The percentage of Group-0 patients who had undergone pre-rehabilitation surgery was significantly lower than the percentage of Group-1 patients who had done so (12% compared with 21%; odds ratio = 1.9 [95% confidence interval, 1.3 to 2.7]; p < 0.001). One year after treatment, 90% of the Group-0 patients had returned to work compared with only 78% of the Group-1 patients (odds ratio = 2.6 [95% confidence interval, 1.9 to 3.6]; p < 0.001). Similarly, 88% of the Group-0 patients were still working at one year compared with only 62% of the patients in Group 1 (odds ratio = 4.5 [95% confidence interval, 3.3 to 6.0]; p < 0.001). Whereas 96% of the Group-0 patients had resolved all related legal and/or financial disputes by one year, only 77% of the Group-1 patients had done so (odds ratio = 6.9 [95% confidence interval, 4.5 to 10.5]; p < 0.001). Only a negligible percentage (0.4%) of the patients in Group 0 had undergone a new operation at the site of the original injury, whereas 12% of the Group-1 patients had done so (odds ratio = 31.0 [95% confidence interval, 11.0 to 87.3]; p < 0.001). When the above outcome variables were analyzed by dividing Group 1 according to the number of visits to a new service provider, poorer socioeconomic outcomes tended to be associated with an increasing number of health-care visits.
CONCLUSIONS: To our knowledge, the present study represents the first large-scale examination of patients with a chronic disabling work-related musculoskeletal disorder who persist in seeking health care following the completion of tertiary rehabilitation. The results demonstrate that about 25% of patients with a chronic disabling work-related musculoskeletal disorder pursue new health-care services after completing a course of treatment, and this subgroup accounts for a significant proportion of lost worker productivity, unremitting disability payments, and excess health-care consumption.
LEVEL OF EVIDENCE: Prognostic study, Level I-1 (prospective study). See Instructions to Authors for a complete description of levels of evidence.
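As a quick arithmetic check, the first reported odds ratio can be reproduced from the quoted 12% and 21% pre-rehabilitation surgery rates alone:

```latex
\mathrm{OR} \;=\; \frac{0.21/(1-0.21)}{0.12/(1-0.12)}
\;=\; \frac{0.266}{0.136} \;\approx\; 1.9 .
```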