Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
7,440 result(s) for "Run time (computers)"
Towards real-time photorealistic 3D holography with deep neural networks
by Shi, Liang; Li, Beichen; Matusik, Wojciech
in 639/624/1075/146; 639/624/1107/1110; 639/705/1042
2021
The ability to present three-dimensional (3D) scenes with continuous depth sensation has a profound impact on virtual and augmented reality, human–computer interaction, education and training. Computer-generated holography (CGH) enables high-spatio-angular-resolution 3D projection via numerical simulation of diffraction and interference [1]. Yet, existing physically based methods fail to produce holograms with both per-pixel focal control and accurate occlusion [2,3]. The computationally taxing Fresnel diffraction simulation further places an explicit trade-off between image quality and runtime, making dynamic holography impractical [4]. Here we demonstrate a deep-learning-based CGH pipeline capable of synthesizing a photorealistic colour 3D hologram from a single RGB-depth image in real time. Our convolutional neural network (CNN) is extremely memory efficient (below 620 kilobytes) and runs at 60 hertz for a resolution of 1,920 × 1,080 pixels on a single consumer-grade graphics processing unit. Leveraging low-power on-device artificial intelligence acceleration chips, our CNN also runs interactively on mobile (iPhone 11 Pro at 1.1 hertz) and edge (Google Edge TPU at 2.0 hertz) devices, promising real-time performance in future-generation virtual and augmented-reality mobile headsets. We enable this pipeline by introducing a large-scale CGH dataset (MIT-CGH-4K) with 4,000 pairs of RGB-depth images and corresponding 3D holograms. Our CNN is trained with differentiable wave-based loss functions [5] and physically approximates Fresnel diffraction. With an anti-aliasing phase-only encoding method, we experimentally demonstrate speckle-free, natural-looking, high-resolution 3D holograms. Our learning-based approach and the Fresnel hologram dataset will help to unlock the full potential of holography and enable applications in metasurface design [6,7], optical and acoustic tweezer-based microscopic manipulation [8–10], holographic microscopy [11] and single-exposure volumetric 3D printing [12,13].
A deep-learning-based approach, termed tensor holography, uses a convolutional neural network to synthesize photorealistic colour three-dimensional holograms from a single RGB-depth image in real time.
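The entry above describes the network only in prose; the PyTorch sketch below shows what a comparably tiny fully convolutional RGB-D-to-hologram model can look like. The layer count and channel widths are illustrative assumptions, not the authors' released tensor-holography model.

```python
# Minimal sketch (not the authors' released model): a small fully
# convolutional network mapping a 4-channel RGB-D image to a 6-channel
# output (amplitude + phase per colour). Widths are assumptions chosen
# to stay well inside a sub-megabyte weight budget.
import torch
import torch.nn as nn

class TinyHologramCNN(nn.Module):
    def __init__(self, width=24, depth=6):
        super().__init__()
        layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 6, 3, padding=1)]  # amp + phase, RGB
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd):                  # rgbd: (B, 4, H, W)
        out = self.net(rgbd)
        amp, phase = out[:, :3], out[:, 3:]   # split amplitude / phase
        return torch.sigmoid(amp), torch.pi * torch.tanh(phase)

model = TinyHologramCNN()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params * 4 / 1024:.0f} KiB of float32 weights")  # far below 620 KB
```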
Journal Article
Combining quantum processors with real-time classical communication
by Carrera Vazquez, Almudena; Woerner, Stefan; Tornow, Caroline
in 639/766/483/3926; 639/766/483/481; Boundary conditions
2024
Quantum computers process information with the laws of quantum mechanics. Current quantum hardware is noisy, can only store information for a short time and is limited to a few quantum bits, that is, qubits, typically arranged in a planar connectivity [1]. However, many applications of quantum computing require more connectivity than the planar lattice offered by the hardware on more qubits than is available on a single quantum processing unit (QPU). The community hopes to tackle these limitations by connecting QPUs using classical communication, which has not yet been proven experimentally. Here we experimentally realize error-mitigated dynamic circuits and circuit cutting to create quantum states requiring periodic connectivity using up to 142 qubits spanning two QPUs with 127 qubits each connected in real time with a classical link. In a dynamic circuit, quantum gates can be classically controlled by the outcomes of mid-circuit measurements within run-time, that is, within a fraction of the coherence time of the qubits. Our real-time classical link enables us to apply a quantum gate on one QPU conditioned on the outcome of a measurement on another QPU. Furthermore, the error-mitigated control flow enhances qubit connectivity and the instruction set of the hardware, thus increasing the versatility of our quantum computers. Our work demonstrates that we can use several quantum processors as one with error-mitigated dynamic circuits enabled by a real-time classical link.
A 142-qubit processor can be realized by connecting two smaller quantum processors using classical communications and circuit cutting.
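The mid-circuit measurement and feed-forward primitive described here can be sketched on a single device with Qiskit's control-flow API. This is a minimal single-QPU illustration; the paper's contribution is running the classically controlled gate on a second QPU over a real-time link.

```python
# Sketch of a dynamic circuit (assumes Qiskit >= 1.0): a gate on qubit 1
# is classically conditioned on a mid-circuit measurement of qubit 0.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 1)
qc.h(0)                                # put qubit 0 in superposition
qc.measure(0, 0)                       # mid-circuit measurement into clbit 0
with qc.if_test((qc.clbits[0], 1)):    # classically controlled on outcome 1
    qc.x(1)                            # feed-forward correction on qubit 1
qc.measure_all()
print(qc.draw())
```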
Journal Article
Trading Classical and Quantum Computational Resources
2016
We propose examples of a hybrid quantum-classical simulation where a classical computer assisted by a small quantum processor can efficiently simulate a larger quantum system. First, we consider sparse quantum circuits such that each qubit participates in O(1) two-qubit gates. It is shown that any sparse circuit on n+k qubits can be simulated by sparse circuits on n qubits and a classical processing that takes time 2^{O(k)} · poly(n). Second, we study Pauli-based computation (PBC), where allowed operations are nondestructive eigenvalue measurements of n-qubit Pauli operators. The computation begins by initializing each qubit in the so-called magic state. This model is known to be equivalent to the universal quantum computer. We show that any PBC on n+k qubits can be simulated by PBCs on n qubits and a classical processing that takes time 2^{O(k)} · poly(n). Finally, we propose a purely classical algorithm that can simulate a PBC on n qubits in a time 2^{αn} · poly(n), where α ≈ 0.94. This improves upon the brute-force simulation method, which takes time 2^n · poly(n). Our algorithm exploits the fact that n-fold tensor products of magic states admit a low-rank decomposition into n-qubit stabilizer states.
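As a quick sanity check on the quoted exponents, the snippet below compares the brute-force 2^n cost against the improved 2^{0.94n} cost for a few problem sizes; the poly(n) factors are ignored, so the ratios are illustrative only.

```python
# Back-of-the-envelope comparison of the two simulation costs quoted in
# the abstract (poly(n) factors dropped).
for n in (20, 40, 60):
    brute = 2.0 ** n
    improved = 2.0 ** (0.94 * n)
    print(f"n={n:3d}: speedup factor ~ 2^{0.06 * n:.1f} = {brute / improved:.1f}x")
```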
Journal Article
Efficient Classical Simulation of Random Shallow 2D Quantum Circuits
by Brandão, Fernando G. S. L.; Harrow, Aram W.; Dalzell, Alexander M.
in Algorithms; Brickwork; Circuits
2022
A central question of quantum computing is determining the source of the advantage of quantum computation over classical computation. Even though simulating quantum dynamics on a classical computer is thought to require exponential overhead in the worst case, efficient simulations are known to exist in several special cases. It was widely assumed that these easy-to-simulate cases as well as any yet-undiscovered ones could be avoided by choosing a quantum circuit at random. We prove that this intuition is false by showing that certain families of constant-depth, 2D random circuits can be approximately simulated on a classical computer in time only linear in the number of qubits and gates, even though the same families are capable of universal quantum computation and are hard to exactly simulate in the worst case (under standard hardness assumptions). While our proof applies to specific random circuit families, we demonstrate numerically that typical instances of more general families of sufficiently shallow constant-depth 2D random circuits are also efficiently simulable. We propose two classical simulation algorithms. One is based on first simulating spatially local regions which are then “stitched” together via recovery maps. The other reduces the 2D simulation problem to a problem of simulating a form of 1D dynamics consisting of alternating rounds of random local unitaries and weak measurements. Similar processes have recently been the subject of an intensive research focus, which has observed that the dynamics generally undergo a phase transition from a low-entanglement (and efficient-to-simulate) regime to a high-entanglement (and inefficient-to-simulate) regime as measurement strength is varied. Via a mapping from random quantum circuits to classical statistical mechanical models, we give analytical evidence that a similar computational phase transition occurs for both of our algorithms as parameters of the circuit architecture like the local Hilbert space dimension and circuit depth are varied and, additionally, that the effective 1D dynamics corresponding to sufficiently shallow random quantum circuits falls within the efficient-to-simulate regime. Implementing the latter algorithm for the depth-3 “brickwork” architecture, for which exact simulation is hard, we find that a laptop could simulate typical instances on a 409 × 409 grid with a total variation distance error less than 0.01 in approximately one minute per sample, a task intractable for previously known circuit simulation algorithms. Numerical results support our analytic evidence that the algorithm is asymptotically efficient.
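To make the circuit family concrete, the NumPy sketch below brute-force simulates a depth-3 brickwork circuit of Haar-random two-qubit gates on a small 1D chain. This exact statevector approach costs 2^n and is precisely what the paper's approximate algorithms avoid; it is illustration only, not either of the proposed methods.

```python
# Brute-force statevector simulation of a small 1D brickwork circuit.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    # QR decomposition of a Ginibre matrix, with phases fixed
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit(state, gate, i, n):
    # expose qubits i, i+1 as the leading axes, contract with the gate
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (i, i + 1), (0, 1)).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), (0, 1), (i, i + 1))
    return psi.reshape(-1)

n, depth = 10, 3
state = np.zeros(2 ** n, dtype=complex); state[0] = 1.0
for layer in range(depth):
    for i in range(layer % 2, n - 1, 2):       # alternating "bricks"
        state = apply_two_qubit(state, haar_unitary(4), i, n)
print("norm:", abs(np.vdot(state, state)))     # ~1.0
```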
Journal Article
TS-CHIEF: a scalable and accurate forest algorithm for time series classification
2020
Time Series Classification (TSC) has seen enormous progress over the last two decades. HIVE-COTE (Hierarchical Vote Collective of Transformation-based Ensembles) is the current state of the art in terms of classification accuracy. HIVE-COTE recognizes that time series data are a specific data type for which the traditional attribute-value representation, used predominantly in machine learning, fails to provide a relevant representation. HIVE-COTE combines multiple types of classifiers: each extracting information about a specific aspect of a time series, be it in the time domain, frequency domain or summarization of intervals within the series. However, HIVE-COTE (and its predecessor, FLAT-COTE) is often infeasible to run on even modest amounts of data. For instance, training HIVE-COTE on a dataset with only 1500 time series can require 8 days of CPU time. It has polynomial runtime with respect to the training set size, so this problem compounds as data quantity increases. We propose a novel TSC algorithm, TS-CHIEF (Time Series Combination of Heterogeneous and Integrated Embedding Forest), which rivals HIVE-COTE in accuracy but requires only a fraction of the runtime. TS-CHIEF constructs an ensemble classifier that integrates the most effective embeddings of time series that research has developed in the last decade. It uses tree-structured classifiers to do so efficiently. We assess TS-CHIEF on 85 datasets of the University of California Riverside (UCR) archive, where it achieves state-of-the-art accuracy with scalability and efficiency. We demonstrate that TS-CHIEF can be trained on 130 k time series in 2 days, a data quantity that is beyond the reach of any TSC algorithm with comparable accuracy.
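As a rough illustration of interval-based tree ensembles for TSC, the sketch below extracts mean/std/slope features over fixed intervals of each series and trains a scikit-learn random forest on synthetic data. It is a stand-in for the general idea only; TS-CHIEF instead embeds similarity-, dictionary- and interval-based splitters directly inside its trees.

```python
# Stand-in for interval-based tree-ensemble TSC (not TS-CHIEF itself).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def interval_features(X, n_intervals=8):
    # mean / std / slope over fixed intervals of each series
    bounds = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    feats = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = X[:, lo:hi]
        t = np.arange(seg.shape[1])
        slope = ((seg - seg.mean(1, keepdims=True)) * (t - t.mean())).sum(1) \
                / ((t - t.mean()) ** 2).sum()
        feats += [seg.mean(1), seg.std(1), slope]
    return np.column_stack(feats)

# toy data: class 1 carries a bump in the middle of the series
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)
X[y == 1, 24:40] += 1.0
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(interval_features(X[:150]), y[:150])
print("holdout accuracy:", clf.score(interval_features(X[150:]), y[150:]))
```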
Journal Article
Deep learning for liver tumor diagnosis part I: development of a convolutional neural network classifier for multi-phasic MRI
by Hamm, Charlie A; Schobert, Isabel; Schlachter, Todd
in Artificial intelligence; Artificial neural networks; Classification
2019
Objectives: To develop and validate a proof-of-concept convolutional neural network (CNN)–based deep learning system (DLS) that classifies common hepatic lesions on multi-phasic MRI.
Methods: A custom CNN was engineered by iteratively optimizing the network architecture and training cases, finally consisting of three convolutional layers with associated rectified linear units, two maximum pooling layers, and two fully connected layers. Four hundred ninety-four hepatic lesions with typical imaging features from six categories were utilized, divided into training (n = 434) and test (n = 60) sets. Established augmentation techniques were used to generate 43,400 training samples. An Adam optimizer was used for training. Monte Carlo cross-validation was performed. After model engineering was finalized, classification accuracy for the final CNN was compared with two board-certified radiologists on an identical unseen test set.
Results: The DLS demonstrated a 92% accuracy, a 92% sensitivity (Sn), and a 98% specificity (Sp). Test set performance in a single run of random unseen cases showed an average 90% Sn and 98% Sp. The average Sn/Sp on these same cases for radiologists was 82.5%/96.5%. Results showed a 90% Sn for classifying hepatocellular carcinoma (HCC) compared to 60%/70% for radiologists. For HCC classification, the true positive and false positive rates were 93.5% and 1.6%, respectively, with a receiver operating characteristic area under the curve of 0.992. Computation time per lesion was 5.6 ms.
Conclusion: This preliminary deep learning study demonstrated feasibility for classifying lesions with typical imaging features from six common hepatic lesion types, motivating future studies with larger multi-institutional datasets and more complex imaging appearances.
Key Points:
• Deep learning demonstrates high performance in the classification of liver lesions on volumetric multi-phasic MRI, showing potential as an eventual decision-support tool for radiologists.
• Demonstrating a classification runtime of a few milliseconds per lesion, a deep learning system could be incorporated into the clinical workflow in a time-efficient manner.
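The abstract fully specifies the layer inventory (three convolutions with ReLUs, two max-pooling layers, two fully connected layers, six classes), so a skeleton is easy to sketch in PyTorch; the input resolution and channel widths below are assumptions, not the authors' configuration.

```python
# Skeleton matching the layer inventory in the abstract; sizes assumed.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, in_ch=3, n_classes=6):  # e.g. 3 MRI phases as channels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 64), nn.ReLU(),   # for 64x64 inputs
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                       # x: (B, 3, 64, 64)
        return self.classifier(self.features(x))

logits = LesionCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)                             # torch.Size([1, 6])
```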
Journal Article
What is the Computational Value of Finite-Range Tunneling?
by Boixo, Sergio; Isakov, Sergei V.; Babbush, Ryan
in Algorithms; Computer simulation; Connectivity
2016
Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite-range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to simulated annealing (SA). For instances with 945 variables, this results in a time-to-99%-success-probability that is ∼10^8 times faster than SA running on a single processor core. We also compare physical QA with the quantum Monte Carlo algorithm, an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to ∼10^8 times faster than an optimized implementation of the quantum Monte Carlo algorithm on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera structured problems in a time scale comparable to the D-Wave 2X. However, it is well known that such solvers will become ineffective for sufficiently dense connectivity graphs. To investigate whether finite-range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that algorithms designed to simulate QA scale better than SA. We discuss the implications of these findings for the design of next-generation quantum annealers.
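For readers unfamiliar with the classical baseline, the sketch below is a bare-bones simulated-annealing loop for a random Ising instance with single-spin flips and geometric cooling; the paper's SA baselines are far more heavily optimized than this.

```python
# Bare-bones simulated annealing on a random fully connected Ising problem.
import numpy as np

rng = np.random.default_rng(0)
n = 64
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)  # random couplings
J = J + J.T                                              # symmetric, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n).astype(float)
T = 3.0
for step in range(20000):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)       # energy change of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                   # accept the flip
    T *= 0.9997                        # geometric cooling schedule
print("final energy:", energy(s))
```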
Journal Article
Safety Assessment of Maritime Autonomous Surface Ships: A Scenario-Based Approach
by Hake, Georg; Wetzig, Nina; Putze, Lina
in Formal specifications; Hazard assessment; Run time (computers)
2025
Maritime Autonomous Surface Ships (MASS) promise enhanced efficiency in shipping operations, but ensuring their safety presents significant challenges. Traditional distance-based testing, where safety is assumed after travelling a predetermined distance without incident, is impractical for MASS due to prohibitively large distance requirements. Drawing inspiration from the automotive domain, we apply scenario-based testing, where the space of operating conditions is divided into traffic scenarios. Scenario-based testing makes it possible to increase the efficiency of testing by specifically targeting high-risk scenarios. We employ maritime Traffic Sequence Charts (mTSCs) to formally specify test scenarios and corresponding requirements. Our approach encompasses three key elements. First, in a hazard analysis, risk-triggering scenario properties are systematically identified through expert-guided brainstorming and investigation of causal relationships. This results in abstract test cases and safety requirements, which are formally specified as mTSCs. In a second step, concrete test scenarios are generated for each abstract test case by converting mTSCs into SMT problems and solving these for vessel movements modelled as Bézier splines. Finally, runtime monitors derived from mTSCs are used to continuously evaluate requirement satisfaction during testing. We find that systematic hazard analysis, automated scenario generation, and runtime monitoring can successfully be applied to the verification of maritime systems. The formal specification enables automatic test execution and evaluation in both simulation and real-world environments.
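The runtime-monitoring step can be illustrated with a hand-written predicate. The sketch below checks a hypothetical minimum-separation requirement over a position trace; the paper's monitors are instead derived from formal mTSC specifications.

```python
# Hypothetical runtime monitor: minimum separation between two vessels.
import math

def separation_monitor(trace, d_min=50.0):
    """trace: iterable of (t, (x_own, y_own), (x_tgt, y_tgt)) samples."""
    for t, own, tgt in trace:
        d = math.dist(own, tgt)
        if d < d_min:
            return f"VIOLATION at t={t}s: separation {d:.1f} m < {d_min} m"
    return "requirement satisfied over the whole trace"

# toy trace: two converging straight-line tracks that meet around t = 6 s
trace = [(t, (10.0 * t, 0.0), (60.0, 10.0 * (t - 6.0))) for t in range(13)]
print(separation_monitor(trace))
```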
Journal Article
Time-Efficient Constant-Space-Overhead Fault-Tolerant Quantum Computation
2024
Scaling up quantum computers to attain substantial speedups over classical computing requires fault tolerance. Conventionally, protocols for fault-tolerant quantum computation demand excessive space overheads by using many physical qubits for each logical qubit. A more recent protocol using quantum analogues of low-density parity-check codes needs only a constant space overhead that does not grow with the number of logical qubits. However, the overhead in the processing time required to implement this protocol grows polynomially with the number of computational steps. To address these problems, here we introduce an alternative approach to constant-space-overhead fault-tolerant quantum computing using a concatenation of multiple small-size quantum codes rather than a single large-size quantum low-density parity-check code. We develop techniques for concatenating different quantum Hamming codes with growing size. As a result, we construct a low-overhead protocol to achieve constant space overhead and only quasi-polylogarithmic time overhead simultaneously. Our protocol is fault tolerant even if a decoder has a non-constant runtime, unlike the existing constant-space-overhead protocol. This code concatenation approach will make possible a large class of quantum speedups with feasibly bounded space overhead yet negligibly short time overhead.
Large quantum computers will require error correcting codes, but most proposals have prohibitive requirements for overheads in the number of qubits, processing time or both. A way to combine smaller codes now gives a much more efficient protocol.
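The worked numbers below make the space-overhead claim concrete: the quantum Hamming codes used for concatenation have parameters [[2^r − 1, 2^r − 1 − 2r, 3]], so their rate k/n approaches 1 as r grows, which is what keeps the overall qubit overhead bounded. The range of r is an arbitrary choice for illustration.

```python
# Parameters and rates of the quantum Hamming code family.
for r in range(3, 9):
    n = 2 ** r - 1
    k = n - 2 * r
    print(f"r={r}: [[{n}, {k}, 3]]  rate k/n = {k / n:.3f}")
```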
Journal Article
A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems
by Kröger, Thorben; Andres, Bjoern; Kappes, Jörg H.
in Algorithms; Analysis; Artificial Intelligence
2015
Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov random fields. This study provided valuable insights in choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved have changed significantly. Specifically, the models today often include higher order interactions, flexible connectivity structures, large label-spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
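For readers new to such benchmarks, the sketch below solves the one tractable special case exactly: min-sum dynamic programming on a chain-structured model with random energy tables. It is a baseline illustration, not one of the surveyed solvers, which target loopy models where no such exact recursion exists.

```python
# Exact min-sum dynamic programming on a chain MRF: O(n * L^2).
import numpy as np

rng = np.random.default_rng(0)
n, L = 10, 5                                   # 10 variables, 5 labels
unary = rng.random((n, L))                     # unary energy tables
pair = rng.random((n - 1, L, L))               # pairwise energy tables

# forward pass: msg[i, l] = min energy of x_0..x_i given x_i = l
msg = unary.copy()
back = np.zeros((n, L), dtype=int)
for i in range(1, n):
    costs = msg[i - 1][:, None] + pair[i - 1]  # (prev label, cur label)
    back[i] = costs.argmin(axis=0)
    msg[i] += costs.min(axis=0)

# backtrack the optimal labelling
labels = [int(msg[-1].argmin())]
for i in range(n - 1, 0, -1):
    labels.append(int(back[i][labels[-1]]))
labels.reverse()
print("min energy:", msg[-1].min(), "argmin labelling:", labels)
```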
Journal Article