Catalogue Search | MBRL
Explore the vast range of titles available.
211 result(s) for "Le Gallo, Manuel"
Memory devices and applications for in-memory computing
by Eleftheriou, Evangelos; Le Gallo, Manuel; Khaddam-Aljameh, Riduan
in Artificial intelligence; Computation; Computer applications
2020
Traditional von Neumann computing systems involve separate processing and memory units. However, data movement is costly in terms of time and energy, and this problem is aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from the traditional systems, and one such non-von Neumann computational approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
This Review provides an overview of memory devices and the key computational primitives for in-memory computing, and examines the possibilities of applying this computing approach to a wide range of applications.
Journal Article
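The core primitive behind the in-memory computing approach surveyed above is performing a matrix-vector multiplication directly in a memory array: conductances encode the matrix, Ohm's law multiplies, and Kirchhoff's current law sums along each row. A minimal NumPy sketch (illustrative only, not code from the review; the noise level is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target weight matrix, mapped onto device conductances (arbitrary units).
W = rng.uniform(0.0, 1.0, size=(4, 3))

# Device programming is imprecise: model conductance variability as
# additive Gaussian noise on each programmed conductance.
G = W + rng.normal(0.0, 0.02, size=W.shape)

v = np.array([0.5, 1.0, 0.25])  # input voltages applied to the columns

# Kirchhoff's current law: the current collected on each row is I = G @ v,
# i.e. the matrix-vector product is computed "in place" in the array.
i_analog = G @ v
i_exact = W @ v

print("analog result:", np.round(i_analog, 3))
print("exact result: ", np.round(i_exact, 3))
```

The point of the sketch is that the analog result tracks the exact product only up to device variability, which is why the applications listed in the review tolerate or compensate for approximate computation.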
Accurate deep neural network inference using computational phase-change memory
by Boybat, Irem; Eleftheriou, Evangelos; Joshi, Vinay
in 639/705/117; 639/925/927/1007; Accuracy
2020
In-memory computing using resistive memory devices is a promising non-von Neumann approach for making energy-efficient deep learning inference hardware. However, due to device variability and noise, the network needs to be trained in a specific way so that transferring the digitally trained weights to the analog resistive memory devices will not result in significant loss of accuracy. Here, we introduce a methodology to train ResNet-type convolutional neural networks that results in no appreciable accuracy loss when transferring weights to phase-change memory (PCM) devices. We also propose a compensation technique that exploits the batch normalization parameters to improve the accuracy retention over time. We achieve a classification accuracy of 93.7% on CIFAR-10 and a top-1 accuracy of 71.6% on ImageNet benchmarks after mapping the trained weights to PCM. Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above 93.5% retained over a one-day period, where each of the 361,722 synaptic weights is programmed on just two PCM devices organized in a differential configuration.
Designing deep learning inference hardware based on in-memory computing remains a challenge. Here, the authors propose a strategy to train ResNet-type convolutional neural networks which results in reduced accuracy loss when transferring weights to in-memory computing hardware based on phase-change memory.
Journal Article
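The differential configuration mentioned in the abstract, where each synaptic weight is stored on two PCM devices so that the weight is read as the difference of their conductances, can be sketched as follows (a toy model with an assumed noise level, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

weights = np.array([0.8, -0.3, 0.0, 0.5, -0.9])

# Differential pair: each weight w is encoded as G_plus - G_minus, so both
# signs can be represented with non-negative conductances.
g_plus = np.where(weights > 0, weights, 0.0)
g_minus = np.where(weights < 0, -weights, 0.0)

# Programming noise on each device of the pair.
g_plus_prog = g_plus + rng.normal(0.0, 0.02, size=weights.shape)
g_minus_prog = g_minus + rng.normal(0.0, 0.02, size=weights.shape)

# Read-out recovers the signed weight up to device-level noise.
w_read = g_plus_prog - g_minus_prog
print(np.round(w_read, 2))
```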
Robust high-dimensional memory-augmented neural networks
by Cherubini, Giovanni; Schmuck, Manuel; Benini, Luca
in 639/166/987; 639/705/117; Artificial neural networks
2021
Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their abilities for relearning and adapting to new data. Memory-augmented neural networks enhance neural networks with an explicit memory to overcome these issues. Access to this explicit memory, however, occurs via soft read and write operations involving every individual memory entry, resulting in a bottleneck when implemented using the conventional von Neumann computer architecture. To overcome this bottleneck, we propose a robust architecture that employs a computational memory unit as the explicit memory performing analog in-memory computation on high-dimensional (HD) vectors, while closely matching 32-bit software-equivalent accuracy. This is achieved by a content-based attention mechanism that represents unrelated items in the computational memory with uncorrelated HD vectors, whose real-valued components can be readily approximated by binary, or bipolar components. Experimental results demonstrate the efficacy of our approach on few-shot image classification tasks on the Omniglot dataset using more than 256,000 phase-change memory devices. Our approach effectively merges the richness of deep neural network representations with HD computing that paves the way for robust vector-symbolic manipulations applicable in reasoning, fusion, and compression.
The implementation of memory-augmented neural networks using conventional computer architectures is challenging due to a large number of read and write operations. Here, Karunaratne, Schmuck et al. propose an architecture that enables analog in-memory computing on high-dimensional vectors at accuracy matching 32-bit software equivalent.
Journal Article
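The key property exploited above, that random bipolar high-dimensional vectors are quasi-orthogonal and so support robust content-based attention via a simple dot product, can be illustrated with a small sketch (dimensionality and noise rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10_000  # high-dimensional (HD) vector dimensionality

# Unrelated items are represented by random bipolar HD vectors, which are
# quasi-orthogonal with overwhelming probability at this dimensionality.
memory = rng.choice([-1, 1], size=(5, d))

query_clean = memory[3]
# Corrupt 20% of the query's components to model a noisy lookup.
flip = rng.random(d) < 0.2
query = np.where(flip, -query_clean, query_clean)

# Content-based attention: for bipolar vectors, cosine similarity reduces
# to a dot product normalized by the dimensionality.
sims = memory @ query / d
best = int(np.argmax(sims))
print(best, np.round(sims, 2))
```

Even with a fifth of the components flipped, the corrupted query remains far closer to its stored vector than to any unrelated entry, which is what makes binary or bipolar approximation of the real-valued components workable.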
Subthreshold electrical transport in amorphous phase-change materials
by Le Gallo, Manuel; Krebs, Daniel; Kaes, Matthias
in amorphous chalcogenides; Amorphous materials; Coulomb potential
2015
Chalcogenide-based phase-change materials play a prominent role in information technology. In spite of decades of research, the details of electrical transport in these materials are still debated. In this article, we present a unified model based on multiple-trapping transport together with 3D Poole-Frenkel emission from a two-center Coulomb potential. With this model, we are able to explain electrical transport both in as-deposited phase-change material thin films, similar to experimental conditions in early work dating back to the 1970s, and in melt-quenched phase-change materials in nanometer-scale phase-change memory devices typically used in recent studies. Experimental measurements on two widely different device platforms show remarkable agreement with the proposed mechanism over a wide range of temperatures and electric fields. In addition, the proposed model is able to seamlessly capture the temporal evolution of the transport properties of the melt-quenched phase upon structural relaxation.
Journal Article
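For orientation, the classic single-center Poole-Frenkel picture (the article's model extends this to a two-center Coulomb potential) predicts field-assisted barrier lowering proportional to the square root of the field. A numerical sketch with assumed material parameters:

```python
import numpy as np

# Simplified one-center Poole-Frenkel emission: an applied field F lowers
# the Coulombic trap barrier by beta * sqrt(F). Parameter values below
# (barrier height, permittivity) are illustrative assumptions only.
q = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_r = 16.0           # assumed relative permittivity of the chalcogenide
kT = 1.381e-23 * 300   # thermal energy at 300 K, J

beta = np.sqrt(q**3 / (np.pi * eps0 * eps_r))  # Poole-Frenkel constant

E_b = 0.3 * q          # assumed zero-field trap barrier, J

F = np.logspace(6, 8, 5)  # electric field, V/m
# Relative emission rate: exponential in the field-lowered barrier.
rate = np.exp(-(E_b - beta * np.sqrt(F)) / kT)

print(rate)
```

The monotonic super-linear rise of the emission rate with field is the signature the article's measurements probe across temperatures and two device platforms.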
Stochastic phase-change neurons
by Le Gallo, Manuel; Eleftheriou, Evangelos; Tuma, Tomas
in 639/301/1005/1008; 639/301/1034/1037; 639/925/929/115
2016
Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.
A nanoscale phase-change device can be used to create an artificial neuron that exhibits integrate-and-fire functionality with stochastic dynamics.
Journal Article
Experimental validation of state equations and dynamic route maps for phase change memristive devices
2022
Phase Change Memory (PCM) is an emerging technology exploiting the rapid and reversible phase transition of certain chalcogenides to realize nanoscale memory elements. PCM devices are being explored as non-volatile storage-class memory and as computing elements for in-memory and neuromorphic computing. It is well known that PCM exhibits several characteristics of a memristive device. In this work, based on the essential physical attributes of PCM devices, we exploit the concept of the Dynamic Route Map (DRM) to capture the complex physics underlying these devices and to describe them as memristive devices defined by a state-dependent Ohm's law. The efficacy of the DRM has been proven by comparing numerical results with experimental data obtained on PCM devices.
Journal Article
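A memristive device of the kind described above is defined by a state-dependent Ohm's law i = G(x)·v together with a state equation dx/dt = f(x, v). The functions below are toy choices for illustration, not the PCM physics fitted in the article:

```python
import numpy as np

# Generic memristive device: state-dependent Ohm's law plus a state equation.
def G(x):
    """Conductance grows linearly with the internal state x in [0, 1]."""
    g_min, g_max = 1e-5, 1e-3
    return g_min + (g_max - g_min) * x

def f(x, v):
    """Toy state equation: positive bias drives the state toward 1."""
    return v * x * (1.0 - x)

# Forward-Euler integration of the state equation under constant bias.
dt, x = 1e-3, 0.5
for _ in range(2000):
    v = 1.0
    x += dt * f(x, v)

x_final = x
print("state:", round(x_final, 3), "current:", G(x_final) * v)
```

Plotting i against v while x evolves would trace out the route-map-style trajectories the article validates against measurements.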
Crystal growth within a phase change memory cell
2014
In spite of the prominent role played by phase change materials in information technology, a detailed understanding of the central property of such materials, namely the phase change mechanism, is still lacking mostly because of difficulties associated with experimental measurements. Here, we measure the crystal growth velocity of a phase change material at both the nanometre length and the nanosecond timescale using phase-change memory cells. The material is studied in the technologically relevant melt-quenched phase and directly in the environment in which the phase change material is going to be used in the application. We present a consistent description of the temperature dependence of the crystal growth velocity in the glass and the super-cooled liquid up to the melting temperature.
Phase change materials play a key role in information technology. Here, the authors measure the crystal growth velocity in doped Ge2Sb2Te5 up to the melting temperature, exploiting the nanoscale dimensions and the fast thermal dynamics of a phase change memory cell.
Journal Article
Monatomic phase change memory
by Salinga, Martin; Jonnalagadda, Vara Prasad; Vu, Xuan Thang
in Antimony; Computer memory; Data storage
2018
Phase change memory has been developed into a mature technology capable of storing information in a fast and non-volatile way, with potential for neuromorphic computing applications. However, its future impact in electronics depends crucially on how the materials at the core of this technology adapt to the requirements arising from continued scaling towards higher device densities. A common strategy to fine-tune the properties of phase change memory materials, reaching reasonable thermal stability in optical data storage, relies on mixing precise amounts of different dopants, often resulting in quaternary or even more complicated compounds. Here we show how the simplest material imaginable, a single element (in this case, antimony), can become a valid alternative when confined in extremely small volumes. This compositional simplification eliminates problems related to unwanted deviations from the optimized stoichiometry in the switching volume, which become increasingly pressing when devices are aggressively miniaturized. Removing compositional optimization issues may allow one to capitalize on nanosize effects in information storage.
Journal Article
Neuromorphic computing with multi-memristive synapses
by Boybat, Irem; Eleftheriou, Evangelos; Moraitis, Timoleon
in 631/378/116/1925; 639/705/1042; 639/925/927/1007
2018
Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems.
Memristive technology is a promising avenue towards realizing efficient non-von Neumann neuromorphic hardware. Boybat et al. propose a multi-memristive synaptic architecture with a counter-based global arbitration scheme to address challenges associated with the non-ideal memristive device behavior.
Journal Article
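The multi-memristive idea above, representing one synapse with several devices and using a global counter to decide which device receives each update, can be sketched as follows (device counts and update magnitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

n_synapses, n_devices = 3, 4
# Each synapse's effective weight is the sum of several device conductances.
g = np.zeros((n_synapses, n_devices))

counter = 0  # global arbitration counter shared by all synapses

def potentiate(syn):
    """Apply an update to only one device of the synapse, selected by the
    global counter rather than per-synapse bookkeeping."""
    global counter
    dev = counter % n_devices
    g[syn, dev] += 0.1 + rng.normal(0.0, 0.01)  # noisy conductance change
    counter += 1

for _ in range(8):
    potentiate(0)

weight = g[0].sum()
print("devices:", np.round(g[0], 2), "weight:", round(float(weight), 2))
```

Spreading updates across devices increases the effective dynamic range and averages out per-device granularity, at the cost of a single shared counter rather than any per-synapse selection logic.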
Temporal correlation detection using computational phase-change memory
by Eleftheriou, Evangelos; Sebastian, Abu; Kull, Lukas
in 639/301/1005/1008; 639/705/1042; 639/925/927/1007
2017
Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data-centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. One such fascinating approach is computational memory, where the physics of nanoscale memory devices is used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. The result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively parallel computing systems.
New computing paradigms, such as in-memory computing, are expected to overcome the limitations of conventional computing approaches. Sebastian et al. report a large-scale demonstration of computational phase change memory (PCM) by performing high-level computational primitives using one million PCM devices.
Journal Article
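The correlation-detection primitive described above can be caricatured in software: each data stream drives one device, and events that arrive coincidentally across many streams crystallize their devices faster, so correlated streams end up with the highest accumulated state. Event rates and the accumulation rule below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

n_streams, n_corr, T = 20, 5, 2000
# The first n_corr streams copy a common reference process (correlated);
# the remaining streams fire independently at the same rate.
ref = rng.random(T) < 0.05
events = rng.random((n_streams, T)) < 0.05
events[:n_corr] = ref

state = np.zeros(n_streams)  # accumulated crystalline fraction per device

for t in range(T):
    active = events[:, t]
    # Pulse strength grows with how many streams fired together, so
    # coincident (correlated) events crystallize their devices faster.
    strength = active.sum() / n_streams
    state += active * strength * 0.1

# The devices with the largest accumulated state flag the correlated streams.
detected = set(np.argsort(state)[-n_corr:].tolist())
print(sorted(detected))
```

The accumulated device state thus acts as both the computation and the stored result, which is the "co-existence of computation and storage" the abstract refers to.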