Catalogue Search | MBRL
8,923 result(s) for "639/166/987"
Suppressing quantum errors by scaling a surface code logical qubit
by Lill, Alexander; Hilton, Jeremy; Boixo, Sergio
in 639/166/987, 639/766/483/2802, 639/766/483/481
2023
Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1, 2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
A study demonstrating increasing error suppression with larger surface code logical qubits, implemented on a superconducting quantum processor.
Journal Article
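The abstract above reports logical error per cycle for distance-3 (3.028%) and distance-5 (2.914%) codes. As a rough illustrative sketch (not taken from the paper), the standard surface-code suppression model ε_d ≈ A / Λ^((d+1)/2) lets one back out the error-suppression factor Λ from those two numbers, since going from distance d to d+2 divides the error rate by Λ:

```python
# Logical error per cycle, as quoted in the abstract.
eps_d3 = 0.03028  # distance-3 ensemble average
eps_d5 = 0.02914  # distance-5 logical qubit

# In the standard model eps_d ~ A / Lambda**((d + 1) / 2),
# so Lambda is simply the ratio between consecutive code distances.
lam = eps_d3 / eps_d5
print(f"Lambda = {lam:.4f}")

# Illustrative projection to distance 7 under the same model;
# the paper itself does not report a distance-7 number.
eps_d7 = eps_d5 / lam
print(f"projected d=7 logical error per cycle ~ {eps_d7:.4%}")
```

A Λ only slightly above 1 matches the abstract's wording that the distance-5 qubit "modestly" outperforms distance-3: error suppression has begun, but far larger Λ is needed for algorithmically relevant error rates.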
Champion-level drone racing using deep reinforcement learning
by Bauersfeld, Leonard; Kaufmann, Elia; Müller, Matthias
in 639/166/984, 639/166/987, 639/166/988
2023
First-person view (FPV) drone racing is a televised sport in which professional competitors pilot high-speed aircraft through a 3D circuit. Each pilot sees the environment from the perspective of their drone by means of video streamed from an onboard camera. Reaching the level of professional pilots with an autonomous drone is challenging because the robot needs to fly at its physical limits while estimating its speed and location in the circuit exclusively from onboard sensors [1]. Here we introduce Swift, an autonomous system that can race physical vehicles at the level of the human world champions. The system combines deep reinforcement learning (RL) in simulation with data collected in the physical world. Swift competed against three human champions, including the world champions of two international leagues, in real-world head-to-head races. Swift won several races against each of the human champions and demonstrated the fastest recorded race time. This work represents a milestone for mobile robotics and machine intelligence [2], which may inspire the deployment of hybrid learning-based solutions in other physical systems.
An autonomous system is described that combines deep reinforcement learning with onboard sensors collecting data from the physical world, enabling it to fly faster than human world champion drone pilots around a race track.
Journal Article
Roadmapping the next generation of silicon photonics
2024
Silicon photonics has developed into a mainstream technology driven by advances in optical communications. The current generation has led to a proliferation of integrated photonic devices from thousands to millions, mainly in the form of communication transceivers for data centers. Products in many exciting applications, such as sensing and computing, are around the corner. What will it take to increase the proliferation of silicon photonics from millions to billions of units shipped? What will the next generation of silicon photonics look like? What are the common threads in the integration and fabrication bottlenecks that silicon photonic applications face, and which emerging technologies can solve them? This perspective article is an attempt to answer such questions. We chart the generational trends in silicon photonics technology, drawing parallels from the generational definitions of CMOS technology. We identify the crucial challenges that must be solved to make giant strides in CMOS-foundry-compatible devices, circuits, integration, and packaging. We identify challenges critical to the next generation of systems and applications in communication, signal processing, and sensing. By identifying and summarizing such challenges and opportunities, we aim to stimulate further research on devices, circuits, and systems for the silicon photonics ecosystem.
In order to complete the transition to the era of large-scale integration, silicon photonics will have to overcome several challenges. Here, the authors outline what these challenges are and what it will take to tackle them.
Journal Article
Finger-inspired rigid-soft hybrid tactile sensor with superior sensitivity at high frequency
2022
Among the various kinds of flexible tactile sensors, the piezoelectric tactile sensor has the advantage of fast response for dynamic force detection. However, it suffers from low sensitivity to high-frequency dynamic stimuli. Here, inspired by the structure of the finger, in which a rigid skeleton is embedded in muscle, we report a piezoelectric tactile sensor using a rigid-soft hybrid force-transmission layer in combination with a soft bottom substrate, which not only greatly enhances force transmission but also triggers a significantly magnified effect in the d₃₁ working mode of the piezoelectric sensory layer, instead of the conventional d₃₃ mode. Experiments show that this sensor exhibits a super-high sensitivity of 346.5 pC N⁻¹ (at 30 Hz), a wide bandwidth of 5–600 Hz and a linear force detection range of 0.009–4.3 N; this sensitivity is ~17 times the theoretical sensitivity of the d₃₃ mode. Furthermore, the sensor is able to detect multiple force directions with high reliability, and shows great potential in robotic dynamic tactile sensing.
Designing tactile sensors that remain efficient under high-frequency dynamic stimuli remains a challenge. Here, the authors demonstrate a piezoelectric tactile sensor with a sensitivity of 346.5 pC N⁻¹, a wide bandwidth of 5–600 Hz and a linear force detection range of 0.009–4.3 N, using a rigid-soft hybrid force-transmission layer in combination with a soft bottom substrate.
Journal Article
An on-chip photonic deep neural network for image classification
by Ashtiani, Farshid; Aflatouni, Firooz; Geers, Alexander J.
in 639/166/987, 639/624, 639/624/399/1099
2022
Deep neural networks with applications from computer vision to medical diagnosis [1–5] are commonly implemented using clock-based processors [6–14], in which computation speed is mainly limited by the clock frequency and the memory access time. In the optical domain, despite advances in photonic computation [15–17], the lack of scalable on-chip optical non-linearity and the loss of photonic devices limit the scalability of optical deep networks. Here we report an integrated end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification through direct processing of the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons. In each neuron, linear computation is performed optically and the non-linear activation function is realized opto-electronically, allowing a classification time of under 570 ps, which is comparable with a single clock cycle of state-of-the-art digital platforms. A uniformly distributed supply light provides the same per-neuron optical output range, allowing scalability to large-scale PDNNs. Two-class and four-class classification of handwritten letters with accuracies higher than 93.8% and 89.8%, respectively, is demonstrated. Direct, clock-less processing of optical data eliminates analogue-to-digital conversion and the requirement for a large memory module, allowing faster and more energy efficient neural networks for the next generations of deep learning systems.
Using a three-layer opto-electronic neural network, direct, clock-less sub-nanosecond image classification on a silicon photonics chip is demonstrated, achieving a classification time comparable with a single clock cycle of state-of-the-art digital implementations.
Journal Article
Data-driven capacity estimation of commercial lithium-ion batteries from voltage relaxation
by Ehrenberg, Helmut; Senyshyn, Anatoliy; Heere, Michael
in 639/166/987, 639/301/299, 639/4077/4079/891
2022
Accurate capacity estimation is crucial for the reliable and safe operation of lithium-ion batteries. In particular, exploiting the relaxation voltage curve features could enable battery capacity estimation without additional cycling information. Here, we report the study of three datasets comprising 130 commercial lithium-ion cells cycled under various conditions to evaluate the capacity estimation approach. One dataset is collected for model building from batteries with LiNi0.86Co0.11Al0.03O2-based positive electrodes. The other two datasets, used for validation, are obtained from batteries with LiNi0.83Co0.11Mn0.07O2-based positive electrodes and batteries with blended Li(NiCoMn)O2–Li(NiCoAl)O2 positive electrodes. Base models that use machine learning methods are employed to estimate the battery capacity using features derived from the relaxation voltage profiles. The best model achieves a root-mean-square error of 1.1% for the dataset used for the model building. A transfer learning model is then developed by adding a featured linear transformation to the base model. This extended model achieves a root-mean-square error of less than 1.7% on the datasets used for the model validation, indicating the successful applicability of the capacity estimation approach utilizing cell voltage relaxation.
Accurate capacity estimation is crucial for the reliable and safe operation of lithium-ion batteries. Here, the authors propose an approach exploiting features of the relaxation voltage curve to estimate battery capacity without requiring previous cycling information.
Journal Article
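The idea in the entry above can be sketched in a few lines: summarize each relaxation-voltage curve with simple statistical features, then regress capacity on those features. This is a minimal illustration on synthetic data, not the authors' pipeline; the toy `relaxation_curve` model, the feature set, and all numbers below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxation_curve(capacity, t):
    # Toy stand-in for a measured rest-period voltage trace: the voltage
    # relaxes toward an open-circuit level whose amplitude and time
    # constant drift with capacity fade (illustrative only).
    return 3.6 + 0.4 * capacity * np.exp(-t / (600.0 * capacity))

t = np.linspace(0.0, 1800.0, 200)          # 30-minute rest, in seconds
caps = rng.uniform(0.8, 1.0, size=60)      # normalized cell capacities
curves = np.array([relaxation_curve(c, t) for c in caps])

# Per-curve features: mean, variance, and final (settled) voltage.
X = np.column_stack([curves.mean(axis=1),
                     curves.var(axis=1),
                     curves[:, -1]])

# Ordinary least squares with an intercept, standing in for the paper's
# machine-learning base models.
A = np.column_stack([X, np.ones(len(caps))])
coef, *_ = np.linalg.lstsq(A, caps, rcond=None)
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - caps) ** 2)))
print(f"in-sample RMSE: {rmse:.4%} of normalized capacity")
```

The appeal of the approach is that these features come from a rest period rather than a controlled charge/discharge cycle, so no extra cycling protocol is needed at estimation time.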
Fully hardware-implemented memristor convolutional neural network
2020
Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks [1–4]. However, convolutional neural networks (CNNs), one of the most important models for image recognition [5], have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices [6–9]. Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST [10] image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing.
A fully hardware-based memristor convolutional neural network using a hybrid training method achieves an energy efficiency more than two orders of magnitude greater than that of graphics-processing units.
Journal Article
The future transistors
2023
The metal–oxide–semiconductor field-effect transistor (MOSFET), a core element of complementary metal–oxide–semiconductor (CMOS) technology, represents one of the most momentous inventions since the industrial revolution. Driven by the requirements for higher speed, energy efficiency and integration density of integrated-circuit products, in the past six decades the physical gate length of MOSFETs has been scaled to sub-20 nanometres. However, the downscaling of transistors while keeping the power consumption low is increasingly challenging, even for the state-of-the-art fin field-effect transistors. Here we present a comprehensive assessment of the existing and future CMOS technologies, and discuss the challenges and opportunities for the design of FETs with sub-10-nanometre gate length based on a hierarchical framework established for FET scaling. We focus our evaluation on identifying the most promising sub-10-nanometre-gate-length MOSFETs based on the knowledge derived from previous scaling efforts, as well as the research efforts needed to make the transistors relevant to future logic integrated-circuit products. We also detail our vision of beyond-MOSFET future transistors and potential innovation opportunities. We anticipate that innovations in transistor technologies will continue to have a central role in driving future materials, device physics and topology, heterogeneous vertical and lateral integration, and computing technologies.
The challenges and opportunities for the design of field-effect transistors are discussed and a vision of future transistors and potential innovation opportunities is provided.
Journal Article
Thousands of conductance levels in memristors integrated on CMOS
2023
Neural networks based on memristive devices [1–3] have the ability to improve throughput and energy efficiency for machine learning [4, 5] and artificial intelligence [6], especially in edge applications [7–21]. Because training a neural network model from scratch is costly in terms of hardware resources, time and energy, it is impractical to do it individually on billions of memristive neural networks distributed at the edge. A practical approach would be to download the synaptic weights obtained from the cloud training and program them directly into memristors for the commercialization of edge applications. Some post-tuning in memristor conductance could be done afterwards or during applications to adapt to specific situations. Therefore, in neural network applications, memristors require high-precision programmability to guarantee uniform and accurate performance across a large number of memristive networks [22–28]. This requires many distinguishable conductance levels on each memristive device, not only in laboratory-made devices but also in devices fabricated in factories. Analog memristors with many conductance states also benefit other applications, such as neural network training, scientific computing and even ‘mortal computing’ [25, 29, 30]. Here we report 2,048 conductance levels achieved with memristors in fully integrated chips with 256 × 256 memristor arrays monolithically integrated on complementary metal–oxide–semiconductor (CMOS) circuits in a commercial foundry. We have identified the underlying physics that previously limited the number of conductance levels that could be achieved in memristors and developed electrical operation protocols to avoid such limitations. These results provide insights into the fundamental understanding of the microscopic picture of memristive switching as well as approaches to enable high-precision memristors for various applications.
Chips with 256 × 256 memristor arrays that were monolithically integrated on complementary metal–oxide–semiconductor (CMOS) circuits in a commercial foundry achieved 2,048 conductance levels in individual memristors.
Journal Article
A graph placement methodology for fast chip design
2021
Chip floorplanning is the engineering task of designing the physical layout of a computer chip. Despite five decades of research [1], chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area. To achieve this, we pose chip floorplanning as a reinforcement learning problem, and develop an edge-based graph convolutional neural network architecture capable of learning rich and transferable representations of the chip. As a result, our method utilizes past experience to become better and faster at solving new instances of the problem, allowing chip design to be performed by artificial agents with more experience than any human designer. Our method was used to design the next generation of Google’s artificial intelligence (AI) accelerators, and has the potential to save thousands of hours of human effort for each new generation. Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
Machine learning tools are used to greatly accelerate chip layout design, by posing chip floorplanning as a reinforcement learning problem and using neural networks to generate high-performance chip layouts.
Journal Article