Search Results

523 results for "analog computing"
Solving matrix equations in one step with cross-point resistive arrays
Conventional digital computers execute advanced operations through sequences of elementary Boolean functions of two or more bits. As a result, complicated tasks such as solving a linear system or a differential equation require a large number of computing steps and extensive use of memory units to store individual bits. To accelerate such advanced tasks, in-memory computing with resistive memories provides a promising avenue, thanks to analog data storage and physical computation in the memory. Here, we show that a cross-point array of resistive memory devices can directly solve a system of linear equations or find the eigenvectors of a matrix. These operations are completed in a single step, thanks to physical computing with Ohm's and Kirchhoff's laws and to the negative-feedback connection in the cross-point circuit. Algebraic problems are demonstrated in hardware and applied to classical computing tasks, such as ranking webpages and solving the Schrödinger equation in one step.
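A minimal numerical sketch of the idealized mapping this abstract describes (illustrative only; the conductance and current values below are arbitrary, not the authors' device data): with Ohm's law and Kirchhoff's current law, an ideal feedback loop forces the array's node voltages to the solution of the stored linear system.

```python
import numpy as np

# Idealized cross-point array: the conductance matrix G (siemens) stores the
# coefficients and the injected currents i_in store the known vector. With an
# ideal negative-feedback loop, Kirchhoff's current law at the columns enforces
# G @ v = i_in, so the node voltages v settle directly to the solution.
G = np.array([[3.0, 1.0, 0.5],
              [1.0, 4.0, 1.5],
              [0.5, 1.5, 2.0]])      # positive conductances only; a real array
                                     # needs an extra mapping to encode signs
i_in = np.array([1.0, 2.0, 0.5])     # injected currents (amperes)

v = np.linalg.solve(G, i_in)         # the voltage vector the circuit settles to
print(v, np.allclose(G @ v, i_in))
```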
Perspective: an optoelectronic future for heterogeneous, dendritic computing
With the increasing number of applications reliant on large neural network models, the pursuit of more suitable computing architectures is becoming increasingly relevant. Progress toward co-integrated silicon photonic and CMOS circuits provides new opportunities for computing architectures with high bandwidth optical networks and high-speed computing. In this paper, we discuss trends in neuromorphic computing architecture and outline an optoelectronic future for heterogeneous, dendritic neuromorphic computing.
Performing mathematical operations using high-index acoustic metamaterials
The recent breakthrough in metamaterial-based optical computing devices (2014 Science 343 160) has inspired a quest for similar systems in acoustics that perform mathematical operations on sound waves. So far, acoustic analog computing has been demonstrated using thin planar metamaterials that carry out the operator of choice in the Fourier domain. These so-called filtering metasurfaces, however, are always accompanied by additional Fourier-transform sub-blocks, which enlarge the computing system and prevent its use in miniaturized architectures. Here, employing a simple high-index acoustic slab waveguide, we propose a highly compact and potentially integrable acoustic computing system and demonstrate its proper functioning through numerical simulations. The system performs the mathematical operation directly in the spatial domain and is therefore free of any bulk Fourier lens. Such a compact computing system is highly promising for applications including high-throughput image processing, ultrafast equation solving, and real-time signal processing.
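A generic way to picture spatial-domain analog computation of this kind (my own toy model, not the paper's simulations): the output field is the incident field convolved with the structure's impulse response, so a kernel approximating a second derivative turns the device into a differentiator with no Fourier sub-blocks.

```python
import numpy as np

# Toy spatial-domain processor: output = input field convolved with the
# device's impulse response (its Green's function). Here the kernel
# approximates d^2/dx^2, a common operator for equation solving.
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
field_in = np.exp(-x**2)                     # example incident profile

kernel = np.array([1.0, -2.0, 1.0]) / dx**2  # discrete second-derivative stencil
field_out = np.convolve(field_in, kernel, mode="same")

# Compare with the analytic second derivative of the Gaussian input.
analytic = (4 * x**2 - 2) * np.exp(-x**2)
print(np.max(np.abs(field_out[2:-2] - analytic[2:-2])))
```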
Meta-optics for spatial optical analog computing
Rapidly growing demands for high-performance computing, powerful data processing, and big data necessitate the advent of novel optical devices to perform demanding computing processes effectively. Due to its unprecedented growth in the past two decades, the field of meta-optics offers a viable solution for spatially, spectrally, and/or even temporally sculpting amplitude, phase, polarization, and/or dispersion of optical wavefronts. In this review, we discuss state-of-the-art developments, as well as emerging trends, in computational metastructures as disruptive platforms for spatial optical analog computation. Two fundamental approaches based on general concepts of spatial Fourier transformation and Green’s function (GF) are discussed in detail. Moreover, numerical investigations and experimental demonstrations of computational optical surfaces and metastructures for solving a diverse set of mathematical problems (e.g., integrodifferentiation and convolution equations) necessary for on-demand information processing (e.g., edge detection) are reviewed. Finally, we explore the current challenges and the potential resolutions in computational meta-optics followed by our perspective on future research directions and possible developments in this promising area.
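To make the Fourier-transform route discussed in this review concrete, here is a hedged one-dimensional sketch of the standard recipe such metastructures implement optically: transform the field to the spatial-frequency domain, multiply by the transfer function of the desired operator (ik for a first derivative, which highlights edges), and transform back.

```python
import numpy as np

# Fourier-domain analog computing recipe: FFT -> multiply by the operator's
# transfer function H(k) -> inverse FFT. For a first derivative H(k) = i*k,
# which suppresses flat regions of an image line and emphasizes its edges.
N = 512
x = np.linspace(0, 1, N, endpoint=False)
signal = (x > 0.3).astype(float) - (x > 0.7).astype(float)  # a bright bar

k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
H = 1j * k                                   # transfer function of d/dx
edges = np.fft.ifft(H * np.fft.fft(signal)).real

print(x[np.argmax(edges)], x[np.argmin(edges)])  # peaks near 0.3 and 0.7
```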
A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability and efficiency.
Analog nanophotonic computing going practical: silicon photonic deep learning engines for tiled optical matrix multiplication with dynamic precision
Analog photonic computing is a promising candidate for accelerating the linear operations of deep neural networks (DNNs), since it offers ultrahigh bandwidth, low footprint, and low power consumption. However, the confined photonic hardware size, along with the limited bit precision of high-speed electro-optical components, imposes stringent requirements for surpassing the performance of current digital processors. Herein, we propose and experimentally demonstrate speed-optimized, dynamic-precision neural network (NN) inference via tiled matrix multiplication (TMM) on a low-radix silicon photonic processor. We introduce a theoretical model that relates the noise figure of a photonic neuron to the bit-precision requirements of each neural layer. The inference of an NN trained to classify the IRIS dataset is then experimentally performed on a silicon coherent photonic neuron that supports optical TMM at up to 50 GHz while simultaneously allowing dynamic-precision calculations. Targeting high-accuracy, speed-optimized classification, we experimentally applied the model-derived mixed-precision NN inference scheme by adjusting the compute rate of each neural layer. This dynamic-precision NN inference yielded a 55% decrease in the execution time of the linear operations compared to a fixed-precision scheme, without degrading accuracy.
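A small sketch of what tiled matrix multiplication with adjustable precision can look like in software (illustrative only; the tile size, quantizer, and variable names are my assumptions, not the authors' hardware model): a large weight matrix is split into tiles matching the low-radix core, each tile-vector product is quantized to the bit width chosen for that layer, and partial results are accumulated.

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of a tensor to the given bit width over its range."""
    scale = np.max(np.abs(x)) or 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def tiled_matmul(W, x, tile=4, bits=6):
    """Emulate a low-radix analog core: tile-by-tile products, each read out
    at a finite analog precision before accumulation."""
    y = np.zeros(W.shape[0])
    for r in range(0, W.shape[0], tile):
        for c in range(0, W.shape[1], tile):
            partial = W[r:r + tile, c:c + tile] @ x[c:c + tile]
            y[r:r + tile] += quantize(partial, bits)
    return y

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 8)), rng.normal(size=8)
for bits in (4, 6, 8):                        # "dynamic precision" per layer
    err = np.linalg.norm(tiled_matmul(W, x, bits=bits) - W @ x)
    print(bits, round(err, 4))
```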
Spike-based time-domain analog weighted-sum calculation model for extremely low power VLSI implementation of multi-layer neural networks
In deep neural network (DNN) models, the weighted summation, or multiply-and-accumulate (MAC) operation, is an essential and heavy calculation task that leads to high power consumption in current digital processors. The use of analog operation in complementary metal-oxide-semiconductor (CMOS) very-large-scale integration (VLSI) circuits is a promising route to extremely low-power execution of such calculations. In this paper, a time-domain analog weighted-sum calculation model is proposed based on an integrate-and-fire spiking neuron model. The model is applied to multi-layer feedforward networks, in which weighted summations with positive and negative weights are performed separately and each layer produces two timings proportional to the positive and negative sums, respectively. The timings are then fed into the next layer without an explicit subtraction operation. We also propose VLSI circuits that implement the model. Unlike conventional analog voltage- or current-mode circuits, the time-domain analog circuits use the transient charging and discharging of capacitors. Since the circuits can be designed without operational amplifiers, they can operate with extremely low power consumption. We designed a proof-of-concept (PoC) CMOS circuit to verify the weighted-sum operation with the same weights. Simulation results showed that the precision was above 4 bits and that the energy efficiency of the weighted-sum calculation was 237.7 tera operations per second per watt (TOPS/W), more than an order of magnitude higher than that of state-of-the-art digital AI processors. Our model promises to be a suitable, highly energy-efficient approach for intensive in-memory computing (IMC) of DNNs with moderate precision while reducing analog-to-digital-converter (ADC) overhead.
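The separation into positive and negative partial sums described above can be sketched behaviorally as follows (a simplified model under my own assumptions about the timing encoding, not the proposed circuit): each output neuron accumulates the positive-weight and negative-weight contributions on two paths and emits two timings proportional to those sums, which the next layer consumes without an explicit subtraction.

```python
import numpy as np

def time_domain_weighted_sum(x, W, t_max=1.0):
    """Behavioral sketch: each output neuron emits two timings proportional to
    its positive-weight and negative-weight partial sums (no subtraction)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.maximum(-W, 0.0)
    s_pos, s_neg = W_pos @ x, W_neg @ x
    norm = max(s_pos.max(), s_neg.max(), 1e-12)     # map the sums onto [0, t_max]
    return t_max * s_pos / norm, t_max * s_neg / norm

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=16)                      # non-negative activations
W = rng.normal(size=(4, 16))
t_pos, t_neg = time_domain_weighted_sum(x, W)

# The conventional weighted sum W @ x is recovered, up to a common positive
# scale factor, from the difference of the two timing vectors.
recovered, reference = t_pos - t_neg, W @ x
print(np.allclose(recovered / np.linalg.norm(recovered),
                  reference / np.linalg.norm(reference)))
```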
Chip-Based High-Dimensional Optical Neural Network
Highlights: A high-dimensional optical neural network is achieved by introducing an on-chip soliton microcomb source and wavelength-division multiplexing. A programmable electro-optic nonlinear layer and optical meshes enable the implementation of a multi-layer optical neural network. Ultra-low coupling loss of around 1 dB per facet is realized between the functional chips and the fiber array.
Parallel multi-thread processing is at the core of high-speed, high-capacity signal processing in advanced intelligent processors. Optical neural networks (ONNs) have native advantages of high parallelization, large bandwidth, and low power consumption that meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from the microcomb source, each carrying digit datasets, are simultaneously imposed on and intelligently recognized by the ONN. We successfully achieve digit classification of the different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization with wavelength-division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel, high-capacity optical analog computing.
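As a rough sketch of the linear part of such an ONN (a generic textbook MZI model under my assumptions, not this paper's chip or calibration), each Mach-Zehnder interferometer applies a 2x2 unitary set by its phase shifters, a mesh of them composes an N x N unitary on the waveguide modes, and photodetection between layers supplies a nonlinearity via optical-electronic conversion.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 unitary of a single MZI (external phase shifter plus a coupler)."""
    ext = np.diag([np.exp(1j * phi), 1.0])
    return ext @ np.array([[np.cos(theta), 1j * np.sin(theta)],
                           [1j * np.sin(theta), np.cos(theta)]])

def mesh(params, n):
    """Compose an n x n unitary from MZIs acting on neighbouring waveguide pairs."""
    U = np.eye(n, dtype=complex)
    for (i, theta, phi) in params:            # i is the upper waveguide index
        block = np.eye(n, dtype=complex)
        block[i:i + 2, i:i + 2] = mzi(theta, phi)
        U = block @ U
    return U

params = [(0, 0.3, 1.0), (2, 0.7, 0.2), (1, 1.1, 0.5), (0, 0.4, 0.9)]
U = mesh(params, n=4)
x = np.array([1.0, 0.5, 0.0, 0.2], dtype=complex)   # input optical field
y = np.abs(U @ x) ** 2                              # photodetection as nonlinearity
print(np.allclose(U.conj().T @ U, np.eye(4)), y)    # the mesh is unitary
```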
Advances in photonic reservoir computing
We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
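A compact sketch of the reservoir-computing principle this review builds on (a software echo-state network, my own toy example rather than an optical implementation): the reservoir's weights stay fixed and random, and only a linear readout is trained, here by ridge regression on a one-step-ahead prediction task.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 200, 2000
u = np.sin(np.linspace(0, 60, n_steps))                 # input: a sine wave
target = np.roll(u, -1)                                  # task: predict next sample

# Fixed, untrained reservoir (the part an optical system implements physically).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))          # spectral radius below 1

states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])               # high-dimensional transient
    states[t] = x

# Only the linear readout is trained (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print(np.sqrt(np.mean((pred[200:] - target[200:]) ** 2)))  # small test error
```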
Update Disturbance‐Resilient Analog ReRAM Crossbar Arrays for In‐Memory Deep Learning Accelerators
Resistive memory (ReRAM) technologies with crossbar array architectures hold significant potential for analog AI accelerator hardware, enabling both in-memory inference and training. Recent developments have successfully demonstrated inference acceleration by offloading compute-heavy training workloads to off-chip digital processors. In-memory acceleration of the training algorithms themselves is crucial for more sustainable and power-efficient AI, but it is still at an early stage of research. This study addresses in-memory training acceleration using analog ReRAM arrays, focusing on a key challenge during fully parallel weight updates: disturbance of the weight values in cross-point devices. A ReRAM device solution is presented on 350 nm silicon technology, utilizing a resistive-switching conductive metal oxide (CMO) formed on a nanoscale conductive filament within a HfOx layer. The devices not only exhibit fast (60 ns), non-volatile analog switching but also demonstrate outstanding resilience to update disturbances, enduring over 100k pulses. The disturbance tolerance of the ReRAM is analyzed using COMSOL Multiphysics simulations, modeling the filament-induced thermoelectric energy concentration that results in highly nonlinear device responses to input voltage amplitudes. Disturbance-free parallel weight mapping is also demonstrated on the back-end-of-line integrated ReRAM array chip. Finally, comprehensive hardware-aware neural network simulations validate the potential of the ReRAM for in-memory deep learning accelerators capable of fully parallel weight updates. In summary, a conductive metal oxide/HfOx analog ReRAM on 350 nm technology is presented for in-memory deep learning accelerators. The device exhibits analog, non-volatile conductance switching and high resilience to update disturbances, supported by COMSOL Multiphysics simulations. Disturbance-free parallel weight mapping is demonstrated on the ReRAM array, and hardware-aware neural network simulations validate its potential for in-memory training acceleration.
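To illustrate the fully parallel weight update this abstract refers to, here is a schematic simulation (my own simplification with an assumed threshold-style nonlinear response, not the paper's device data or COMSOL model): an analog crossbar updates all weights at once as an outer product of the layer's input and error vectors, and a strongly nonlinear dependence of the conductance change on pulse amplitude keeps half-selected devices from being disturbed.

```python
import numpy as np

def parallel_update(G, x, err, lr=0.1, threshold=0.5):
    """Rank-one (outer-product) update applied to every cross-point at once.
    A thresholded, nonlinear response to the combined row/column amplitudes
    mimics a device that ignores half-select (sub-threshold) disturb pulses."""
    amplitude = np.abs(np.outer(err, x))             # per-device programming stress
    effective = np.where(amplitude > threshold, amplitude, 0.0)
    return G + lr * np.sign(np.outer(err, x)) * effective

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(4, 6))               # analog conductance states
x = rng.uniform(0, 1, size=6)                        # layer input
err = rng.normal(size=4)                             # backpropagated error

G_new = parallel_update(G, x, err)
disturbed = np.sum((G_new != G) & (np.abs(np.outer(err, x)) <= 0.5))
print("devices updated:", np.sum(G_new != G), "sub-threshold disturbs:", disturbed)
```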