Search Results

23 results for "Shutty, Noah"
Overcoming leakage in quantum error correction
The leakage of quantum information out of the two computational states of a qubit into other energy states represents a major challenge for quantum error correction. During the operation of an error-corrected algorithm, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of the logical error with scale, thus challenging the feasibility of quantum error correction as a path towards fault-tolerant quantum computation. Here, we demonstrate a distance-3 surface code and distance-21 bit-flip code on a quantum processor for which leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a tenfold reduction in the steady-state leakage population of the data qubits encoding the logical state and an average leakage population of less than \(1 \times 10^{-3}\) throughout the entire device. Our leakage removal process efficiently returns the system to the computational basis. Adding it to a code circuit would prevent leakage from inducing correlated errors across cycles. With this demonstration that leakage can be contained, we have resolved a key challenge for practical quantum error correction at scale.

Physical realizations of qubits are often vulnerable to leakage errors, where the system ends up outside the basis used to store quantum information. A leakage removal protocol can suppress the impact of leakage on quantum error-correcting codes.
Nonlocal Games, Distributed Storage, and Quantum Error Correction: Excursions in Fault-Tolerant Computation
In this thesis, we consider three computing systems afflicted by noise, which causes their behavior to deviate unpredictably from idealized theoretical models. In each system, we model the effects of noise, and characterize the extent to which fault-tolerance techniques allow the computation to proceed efficiently despite the presence of the noise. First, we consider two parties who wish to implement a computation with little communication by making use of nonlocal correlations, a task called nonlocal computation. Second, we investigate a data center computing scenario where data are stored across many nodes in an error-correcting code, and we wish to evaluate functions of this data despite unpredictable node failures. Third, we consider building a fault-tolerant quantum computer using near-term hardware, in which qubits are afflicted by a high rate of noise, and two-qubit gates are constrained to act on pairs of nearby qubits in a planar layout.

A common theme in these three settings is that, in each case, the data are encoded in some code. In the nonlocal computation setup, this encoding takes the form of linear shares or distributed bits, and arises as a result of the distributed nature of the computation. In the data center computing setting, the encoding is a Reed-Solomon or other linear error-correcting code, whose purpose is to protect against catastrophic data loss due to worst-case node failures. In the fault-tolerant quantum computer, the data are encoded in quantum error-correcting codes, which protect the encoded quantum information from the unpredictable noise of the underlying hardware.

We measure our fault-tolerance techniques by various notions of efficiency. In the first two systems, we aim to minimize the communication cost. In the third system, we aim to minimize the space and time overhead cost. We show both positive and negative results that better characterize the trade-off between noise level and efficiency in these three systems.
Decoding Merged Color-Surface Codes and Finding Fault-Tolerant Clifford Circuits Using Solvers for Satisfiability Modulo Theories
Universal fault-tolerant quantum computers will require the use of efficient protocols to implement encoded operations necessary in the execution of algorithms. In this work, we show how solvers for satisfiability modulo theories (SMT solvers) can be used to automate the construction of Clifford circuits with certain fault-tolerance properties and we apply our techniques to a fault-tolerant magic-state-preparation protocol. Part of the protocol requires converting magic states encoded in the color code to magic states encoded in the surface code. Since the teleportation step involves decoding a color code merged with a surface code, we develop a decoding algorithm that is applicable to such codes.
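To illustrate the general pattern of casting circuit construction as a satisfiability query, here is a minimal sketch using the Z3 SMT solver's Python bindings. The candidate gate list, the GF(2) propagation model, and the target constraint below are illustrative toy choices, not the paper's encoding of fault-tolerance conditions.

```python
# A minimal sketch of "circuit construction as a satisfiability query" with Z3.
# Choice variables decide which optional CNOTs to apply; the solver finds a
# selection whose combined GF(2) action maps an input bit pattern to a target.
from z3 import Bool, BoolVal, If, Solver, Xor, is_true, sat

candidates = [(0, 1), (1, 2), (0, 2)]          # optional CNOTs (control, target), applied in this order
use = [Bool(f"use_{i}") for i in range(len(candidates))]

# Symbolically propagate a GF(2) bit pattern through the optional CNOTs.
state = [BoolVal(True), BoolVal(False), BoolVal(False)]   # input pattern (1, 0, 0)
for (c, t), u in zip(candidates, use):
    state[t] = If(u, Xor(state[t], state[c]), state[t])   # CNOT XORs the control into the target

target = [True, True, True]                    # pattern the chosen circuit must produce
solver = Solver()
for bit, want in zip(state, target):
    solver.add(bit == BoolVal(want))

if solver.check() == sat:
    model = solver.model()
    chosen = [g for g, u in zip(candidates, use)
              if is_true(model.evaluate(u, model_completion=True))]
    print("gate selection found:", chosen)
else:
    print("no selection of the candidate gates satisfies the constraint")
```

In the paper's setting the asserted constraints encode fault-tolerance properties of Clifford circuits rather than a single bit pattern, but the workflow of declaring choice variables, asserting constraints, and asking the solver for a satisfying model is the same.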
Low-bandwidth recovery of linear functions of Reed-Solomon-encoded data
We study the problem of efficiently computing on encoded data. More specifically, we study the question of low-bandwidth computation of functions \(F:\mathbb{F}^k \to \mathbb{F}\) of some data \(x \in \mathbb{F}^k\), given access to an encoding \(c \in \mathbb{F}^n\) of \(x\) under an error-correcting code. In our model -- relevant in distributed storage, distributed computation and secret sharing -- each symbol of \(c\) is held by a different party, and we aim to minimize the total amount of information downloaded from each party in order to compute \(F(x)\). Special cases of this problem have arisen in several domains, and we believe that it is fruitful to study this problem in generality. Our main result is a low-bandwidth scheme to compute linear functions for Reed-Solomon codes, even in the presence of erasures. More precisely, let \(\epsilon > 0\) and let \(\mathcal{C}: \mathbb{F}^k \to \mathbb{F}^n\) be a full-length Reed-Solomon code of rate \(1 - \epsilon\) over a field \(\mathbb{F}\) with constant characteristic. For any \(\gamma \in [0, \epsilon)\), our scheme can compute any linear function \(F(x)\) given access to any \((1 - \gamma)\)-fraction of the symbols of \(\mathcal{C}(x)\), with download bandwidth \(O(n/(\epsilon - \gamma))\) bits. In contrast, the naive scheme that involves reconstructing the data \(x\) and then computing \(F(x)\) uses \(\Theta(n \log n)\) bits. Our scheme has applications in distributed storage, coded computation, and homomorphic secret sharing.
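To make the contrast concrete, here is a toy sketch of the naive baseline mentioned in the abstract: download enough symbols to reconstruct \(x\) by interpolation, then evaluate the linear function. The field, code parameters, and linear map below are illustrative; the paper's low-bandwidth scheme is not reproduced here.

```python
# Naive baseline for computing a linear function F(x) = <a, x> of
# Reed-Solomon-encoded data: download k full symbols, interpolate to recover
# the message, then evaluate F. Toy parameters over the prime field F_97.
p = 97                      # prime field F_p
k, n = 3, 7                 # message length and code length
alphas = list(range(n))     # distinct evaluation points

def rs_encode(x):
    """Evaluate the degree-<k message polynomial at the n points."""
    return [sum(x[j] * pow(a, j, p) for j in range(k)) % p for a in alphas]

def interpolate(points):
    """Recover the k message coefficients from any k (point, value) pairs."""
    # Solve the k x k Vandermonde system by Gaussian elimination over F_p.
    pts = points[:k]
    A = [[pow(a, j, p) for j in range(k)] + [v % p] for a, v in pts]
    for i in range(k):
        piv = next(r for r in range(i, k) if A[r][i] % p != 0)
        A[i], A[piv] = A[piv], A[i]
        inv = pow(A[i][i], -1, p)
        A[i] = [c * inv % p for c in A[i]]
        for r in range(k):
            if r != i and A[r][i]:
                A[r] = [(cr - A[r][i] * ci) % p for cr, ci in zip(A[r], A[i])]
    return [A[i][k] for i in range(k)]

x = [5, 12, 7]
c = rs_encode(x)
a = [1, 2, 3]                                   # coefficients of the linear map F
x_rec = interpolate(list(zip(alphas, c)))       # downloads k full field symbols
F = sum(ai * xi for ai, xi in zip(a, x_rec)) % p
print(x_rec, F)
```

The download cost of this baseline is \(k\) full field symbols, which for a full-length, high-rate code works out to the \(\Theta(n \log n)\) bits quoted above.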
Tight Limits on Nonlocality from Nontrivial Communication Complexity; a.k.a. Reliable Computation with Asymmetric Gate Noise
It has long been known that the existence of certain superquantum nonlocal correlations would cause communication complexity to collapse. The absurdity of a world in which any nonlocal binary function could be evaluated with a constant amount of communication in turn provides a tantalizing way to distinguish quantum mechanics from incorrect theories of physics; the statement "communication complexity is nontrivial" has even been conjectured to be a concise information-theoretic axiom for characterizing quantum mechanics. We directly address the viability of that perspective with two results. First, we exhibit a nonlocal game such that communication complexity collapses in any physical theory whose maximal winning probability exceeds the quantum value. Second, we consider the venerable CHSH game that initiated this line of inquiry. In that case, the quantum value is about 0.85 but it is known that a winning probability of approximately 0.91 would collapse communication complexity. We provide evidence that the 0.91 result is the best possible using a large class of proof strategies, suggesting that the communication complexity axiom is insufficient for characterizing CHSH correlations. Both results build on new insights about reliable classical computation. The first exploits our formalization of an equivalence between amplification and reliable computation, while the second follows from an upper bound on the threshold for reliable computation with formulas of noisy XOR and AND gates.
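For reference on the numbers quoted above (standard facts about the CHSH game, not results of this paper): classical strategies win with probability at most 3/4, while the quantum value of roughly 0.85 is given by Tsirelson's bound.

```latex
% Quantum value of the CHSH game (Tsirelson's bound), versus the classical bound 3/4.
\[
  p^{*}_{\mathrm{CHSH}}
  \;=\; \cos^{2}\!\frac{\pi}{8}
  \;=\; \frac{2+\sqrt{2}}{4}
  \;\approx\; 0.8536 .
\]
```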
Computing Representations for Lie Algebraic Networks
Recent work has constructed neural networks that are equivariant to continuous symmetry groups such as 2D and 3D rotations. This is accomplished using explicit Lie group representations to derive the equivariant kernels and nonlinearities. We present three contributions motivated by frontier applications of equivariance beyond rotations and translations. First, we relax the requirement for explicit Lie group representations with a novel algorithm that finds representations of arbitrary Lie groups given only the structure constants of the associated Lie algebra. Second, we provide a self-contained method and software for building Lie group-equivariant neural networks using these representations. Third, we contribute a novel benchmark dataset for classifying objects from relativistic point clouds, and apply our methods to construct the first object-tracking model equivariant to the Poincaré group.
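The first contribution rests on the fact that structure constants alone already determine one representation, the adjoint. The sketch below is that standard construction (not the paper's algorithm for finding arbitrary representations), built for so(3) and checked numerically with NumPy.

```python
# From structure constants to a representation: the adjoint matrices are
# (ad_i)_{kj} = f_{ij}^k. For so(3) the structure constants are the
# Levi-Civita symbol; we build the matrices and verify the commutation relations.
import numpy as np
from itertools import permutations

dim = 3
f = np.zeros((dim, dim, dim))                  # f[i, j, k] = f_{ij}^k
for (i, j, k), sign in zip(permutations(range(dim)), (1, -1, -1, 1, 1, -1)):
    f[i, j, k] = sign                          # epsilon_{ijk}

ad = np.einsum("ijk->ikj", f)                  # ad[i][k, j] = f[i, j, k]

# Any representation rho must satisfy [rho_i, rho_j] = sum_k f_{ij}^k rho_k.
for i in range(dim):
    for j in range(dim):
        commutator = ad[i] @ ad[j] - ad[j] @ ad[i]
        expected = np.einsum("k,kab->ab", f[i, j], ad)
        assert np.allclose(commutator, expected)
print("adjoint matrices of so(3) satisfy the commutation relations")
```

The check in the loop is exactly the consistency condition any representation produced from structure constants must satisfy; the paper's algorithm goes beyond the adjoint to find other representations of arbitrary Lie groups from the same input.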
Efficient near-optimal decoding of the surface code through ensembling
We introduce harmonization, an ensembling method that combines several "noisy" decoders to generate highly accurate decoding predictions. Harmonized ensembles of MWPM-based decoders achieve lower logical error rates than their individual counterparts on repetition and surface code benchmarks, approaching maximum-likelihood accuracy at large ensemble sizes. We can use the degree of consensus among the ensemble as a confidence measure for a layered decoding scheme, in which a small ensemble flags high-risk cases to be checked by a larger, more accurate ensemble. This layered scheme can realize the accuracy improvements of large ensembles with a relatively small constant factor of computational overhead. We conclude that harmonization provides a viable path towards highly accurate real-time decoding.
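The sketch below is a schematic of the layered consensus idea described in the abstract, with stand-in decoders and made-up data; it is not the paper's harmonization procedure, only an illustration of majority voting plus consensus-as-confidence escalation.

```python
# Schematic layered ensemble decoding: a small ensemble votes on the logical
# correction; if consensus is low, the shot is escalated to a larger, slower
# ensemble. The decoders here are dummies standing in for e.g. MWPM runs.
import random

def decode(syndrome, seed):
    """Stand-in for one noisy decoder run returning a logical-flip prediction."""
    random.seed(hash((tuple(syndrome), seed)))
    return random.random() < 0.5

def ensemble_decode(syndrome, n_decoders):
    votes = [decode(syndrome, seed) for seed in range(n_decoders)]
    flips = sum(votes)
    prediction = flips * 2 >= n_decoders                 # majority vote
    consensus = max(flips, n_decoders - flips) / n_decoders
    return prediction, consensus

def layered_decode(syndrome, small=5, large=51, threshold=0.8):
    prediction, consensus = ensemble_decode(syndrome, small)
    if consensus < threshold:                            # low consensus: escalate
        prediction, _ = ensemble_decode(syndrome, large)
    return prediction

print(layered_decode([0, 1, 1, 0]))
```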
Fault-tolerant qubit from a constant number of components
With gate error rates in multiple technologies now below the threshold required for fault-tolerant quantum computation, the major remaining obstacle to useful quantum computation is scaling, a challenge greatly amplified by the huge overhead imposed by quantum error correction itself. We propose a fault-tolerant quantum computing scheme that can nonetheless be assembled from a small number of experimental components, potentially dramatically reducing the engineering challenges associated with building a large-scale fault-tolerant quantum computer. Our scheme has a threshold of 0.39% for depolarising noise, assuming that memory errors are negligible. In the presence of memory errors, the logical error rate decays exponentially with \(\sqrt{T/\tau}\), where \(T\) is the memory coherence time and \(\tau\) is the timescale for elementary gates. Our approach is based on a novel procedure for fault-tolerantly preparing three-dimensional cluster states using a single actively controlled qubit and a pair of delay lines. Although a circuit-level error may propagate to a high-weight error, the effect of this error on the prepared state is always equivalent to that of a constant-weight error. We describe how the requisite gates can be implemented using existing technologies in quantum photonic and phononic systems. With continued improvements in only a few components, we expect these systems to be promising candidates for demonstrating fault-tolerant quantum computation with a comparatively modest experimental effort.
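One way to read the scaling quoted above (a direct consequence of the stated formula, not an additional claim of the paper): because the suppression is exponential in \(\sqrt{T/\tau}\) rather than in \(T/\tau\), quadratic improvements in the coherence-to-gate-time ratio are needed for linear gains in the exponent.

```latex
% If p_L scales as A * exp(-c*sqrt(T/tau)), then improving T/tau by a factor
% of 4 doubles the exponent and squares the suppression factor:
\[
  p_L(T/\tau) \approx A\, e^{-c\sqrt{T/\tau}}
  \quad\Longrightarrow\quad
  p_L\!\left(4\,T/\tau\right) \approx A\, e^{-2c\sqrt{T/\tau}}
  = A\left(\frac{p_L(T/\tau)}{A}\right)^{2}.
\]
```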
Repairing Reed-Solomon Codes over Prime Fields via Exponential Sums
This paper presents two repair schemes for low-rate Reed-Solomon (RS) codes over prime fields that can repair any node by downloading a constant number of bits from each surviving node. The total bandwidth resulting from these schemes is greater than that incurred during trivial repair; however, this is particularly relevant in the context of leakage-resilient secret sharing. In that framework, our results provide attacks showing that \(k\)-out-of-\(n\) Shamir's Secret Sharing over prime fields for small \(k\) is not leakage-resilient, even when the parties leak only a constant number of bits. To the best of our knowledge, these are the first such attacks. Our results are derived from a novel connection between exponential sums and the repair of RS codes. Specifically, we establish that non-trivial bounds on certain exponential sums imply the existence of explicit nonlinear repair schemes for RS codes over prime fields.
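For context on the attack target named above: \(k\)-out-of-\(n\) Shamir secret sharing over a prime field is exactly a Reed-Solomon encoding of a random polynomial whose constant term is the secret, which is why RS repair bandwidth translates directly into leakage. The toy sketch below shows only that standard sharing scheme; the field and parameters are illustrative, and the paper's repair schemes are not reproduced.

```python
# k-out-of-n Shamir secret sharing over a prime field: shares are evaluations
# of a random degree-(k-1) polynomial with constant term equal to the secret.
import random

p = 2**61 - 1                      # a prime field
k, n = 3, 7                        # any k of the n shares reconstruct the secret

def share(secret):
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    return [(i, sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 using any k shares.
    total = 0
    for i, (xi, yi) in enumerate(shares[:k]):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares[:k]):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

s = 123456789
assert reconstruct(share(s)) == s
```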
Polynomial Identities on Eigenforms
In this paper, we fix a polynomial with complex coefficients and determine the eigenforms for \(\mathrm{SL}_2(\mathbb{Z})\) which can be expressed as the fixed polynomial evaluated at other eigenforms. In particular, we show that when one excludes trivial cases, only finitely many such identities hold for a fixed polynomial.
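A classical identity of this flavor, included purely as an illustration (it is a standard fact about level-one Eisenstein series, not a statement about which identities the paper counts as nontrivial): with the polynomial \(P(x) = x^2\), the weight-8 Eisenstein series is the square of the weight-4 one, because the space of weight-8 modular forms for \(\mathrm{SL}_2(\mathbb{Z})\) is one-dimensional.

```latex
% E_4 and E_8 are Hecke eigenforms (normalized Eisenstein series), and
% dim M_8(SL_2(Z)) = 1 forces the polynomial identity E_4^2 = E_8.
\[
  E_4(\tau)^2 = E_8(\tau),
  \qquad
  E_k(\tau) = 1 - \frac{2k}{B_k}\sum_{n \ge 1} \sigma_{k-1}(n)\, q^{n},
  \quad q = e^{2\pi i \tau}.
\]
```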