2,054 results for "spiking neuron"
Firing Frequency Maxima of Fast-Spiking Neurons in Human, Monkey, and Mouse Neocortex
Cortical fast-spiking (FS) neurons generate high-frequency action potentials (APs) without apparent frequency accommodation, thus providing fast and precise inhibition. However, the maximal firing frequency that they can reach, particularly in primate neocortex, remains unclear. Here, by recording in human, monkey, and mouse neocortical slices, we revealed that FS neurons in human association cortices (mostly temporal) could generate APs at a maximal mean frequency (Fmean) of 338 Hz and a maximal instantaneous frequency (Finst) of 453 Hz, both of which increase with age. The maximal firing frequency of FS neurons in the association cortices (frontal and temporal) of monkey was even higher (Fmean 450 Hz, Finst 611 Hz), whereas in the association cortex (entorhinal) of mouse it was much lower (Fmean 215 Hz, Finst 342 Hz). Moreover, FS neurons in mouse primary visual cortex (V1) could fire at higher frequencies (Fmean 415 Hz, Finst 582 Hz) than those in association cortex. We further validated our data by examining spikes of putative FS neurons in behaving monkey and mouse. Together, our results demonstrate that the maximal firing frequency of FS neurons varies between species and cortical areas.
Boost event-driven tactile learning with location spiking neurons
Tactile sensing is essential for a variety of daily tasks. Inspired by the event-driven nature and sparse spiking communication of biological systems, recent advances in event-driven tactile sensors and Spiking Neural Networks (SNNs) spur research in related fields. However, SNN-enabled event-driven tactile learning is still in its infancy due to the limited representation abilities of existing spiking neurons and the high spatio-temporal complexity of event-driven tactile data. In this paper, to improve the representation capability of existing spiking neurons, we propose a novel neuron model called the "location spiking neuron," which enables us to extract features of event-based data in a novel way. Specifically, based on the classical Time Spike Response Model (TSRM), we develop the Location Spike Response Model (LSRM); based on the most commonly used Time Leaky Integrate-and-Fire (TLIF) model, we develop the Location Leaky Integrate-and-Fire (LLIF) model. Moreover, to demonstrate the representation effectiveness of our proposed neurons and capture the complex spatio-temporal dependencies in event-driven tactile data, we exploit the location spiking neurons to propose two hybrid models for event-driven tactile learning. The first hybrid model combines a fully-connected SNN with TSRM neurons and a fully-connected SNN with LSRM neurons, and the second fuses a spatial spiking graph neural network with TLIF neurons and a temporal spiking graph neural network with LLIF neurons. Extensive experiments demonstrate the significant improvements of our models over state-of-the-art methods on event-driven tactile learning, including event-driven tactile object recognition and event-driven slip detection.
Moreover, compared to the counterpart artificial neural networks (ANNs), our SNN models are 10× to 100× energy-efficient, which shows the superior energy efficiency of our models and may bring new opportunities to the spike-based learning community and neuromorphic engineering. Finally, we thoroughly examine the advantages and limitations of various spiking neurons and discuss the broad applicability and potential impact of this work on other spike-based learning applications.
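The paper's TLIF/LLIF formulations are not reproduced in this abstract; as a rough orientation, the classic discrete-time leaky integrate-and-fire dynamics that such models build on can be sketched as follows (the function name `lif_simulate` and all parameter values are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def lif_simulate(current, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Minimal discrete-time leaky integrate-and-fire neuron.

    current : input current per time step (1-D array)
    Returns a binary spike train of the same length.
    """
    v = 0.0
    spikes = np.zeros_like(current)
    for t, i_t in enumerate(current):
        # Leaky integration: the membrane potential decays toward the input.
        v += dt / tau * (-v + i_t)
        if v >= v_th:          # threshold crossing emits a spike
            spikes[t] = 1.0
            v = v_reset        # hard reset after the spike
    return spikes

# A suprathreshold constant drive produces regular spiking.
train = lif_simulate(np.full(200, 1.5))
```

A "location" variant, as described above, would index this update over sensor taxel locations rather than over time steps.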
Inhibition recruitment in prefrontal cortex during sleep spindles and gating of hippocampal inputs
During light slow-wave sleep, the thalamo-cortical network oscillates in waxing-and-waning patterns at about 7 to 14 Hz, lasting 500 ms to 3 s, called spindles, with the thalamus rhythmically sending strong excitatory volleys to the cortex. Concurrently, hippocampal activity is characterized by transient and strong excitatory events, Sharp-Wave Ripples (SPWRs), directly affecting neocortical activity—in particular the medial prefrontal cortex (mPFC)—which receives monosynaptic fibers from the ventral hippocampus and subiculum. Both spindles and SPWRs have been shown to be strongly involved in memory consolidation. However, the dynamics of the cortical network during natural sleep spindles, and how prefrontal circuits simultaneously process hippocampal and thalamo-cortical activity, remain largely undetermined. Using multisite neuronal recordings in rat mPFC, we show that during sleep spindles, oscillatory responses of cortical cells differ across cell types and cortical layers. Superficial neurons are more phase-locked and tonically recruited during spindle episodes. Moreover, in a given layer, interneurons were always more modulated than pyramidal cells, in both firing rate and phase, suggesting that the dynamics are dominated by inhibition. In the deep layers, where most of the hippocampal fibers make contacts, pyramidal cells respond phasically to SPWRs, but not during spindles. Similar observations were obtained when analyzing γ-oscillation modulation in the mPFC. These results demonstrate that during sleep spindles, the cortex is functionally "deafferented" from its hippocampal inputs, based on processes of cortical origin, and presumably mediated by the strong recruitment of inhibitory interneurons. The interplay between hippocampal and thalamic inputs may underlie a global mechanism involved in the consolidation of recently formed memory traces.
Extending the Functional Subnetwork Approach to a Generalized Linear Integrate-and-Fire Neuron Model
Engineering neural networks to perform specific tasks often represents a monumental challenge in determining network architecture and parameter values. In this work, we extend our previously developed method for tuning networks of nonspiking neurons, the "Functional subnetwork approach" (FSA), to the tuning of networks composed of spiking neurons. This extension enables the direct assembly and tuning of networks of spiking neurons and synapses based on the network's intended function, without the use of global optimization or machine learning. To extend the FSA, we show that the dynamics of a generalized linear integrate-and-fire (GLIF) neuron model have fundamental similarities to those of a nonspiking leaky integrator neuron model. We derive analytical expressions that show functional parallels between: 1) a spiking neuron's steady-state spiking frequency and a nonspiking neuron's steady-state voltage in response to an applied current; 2) a spiking neuron's transient spiking frequency and a nonspiking neuron's transient voltage in response to an applied current; and 3) a spiking synapse's average conductance during steady spiking and a nonspiking synapse's conductance. The models become more similar as additional spiking neurons are added to each population "node" in the network. We apply the FSA to model a neuromuscular reflex pathway in two different ways: via nonspiking components and then via spiking components. These results provide a concrete example of how a single nonspiking neuron may model the average spiking frequency of a population of spiking neurons. The resulting model also demonstrates that, using the FSA, models can be constructed that incorporate both spiking and nonspiking units.
This work facilitates the construction of large networks of spiking neurons and synapses that perform specific functions, for example, those implemented with neuromorphic computing hardware, by providing an analytical method for directly tuning their parameters without time-consuming optimization or learning.
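The derivations themselves are not shown in this abstract, but the flavor of such an analytical current-to-frequency expression can be illustrated with the textbook steady-state firing rate of a basic leaky integrate-and-fire neuron under constant current (the helper name `lif_steady_rate` and all parameter values are hypothetical, not taken from the paper):

```python
import math

def lif_steady_rate(i_app, r=1.0, tau=0.02, v_th=1.0, t_ref=0.002):
    """Closed-form steady-state firing rate (Hz) of a leaky
    integrate-and-fire neuron driven by constant current i_app.
    Returns 0 below rheobase, i.e. when r * i_app <= v_th."""
    drive = r * i_app
    if drive <= v_th:
        return 0.0
    # Interspike interval = refractory period plus the time to charge
    # exponentially from reset (0) up to threshold.
    t_spike = t_ref + tau * math.log(drive / (drive - v_th))
    return 1.0 / t_spike
```

A mapping like this is what lets a single nonspiking "rate" variable stand in for the average firing frequency of a spiking population.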
Sharp Tuning of Head Direction and Angular Head Velocity Cells in the Somatosensory Cortex
Head direction (HD) cells form a fundamental component in the brain's spatial navigation system and are intricately linked to spatial memory and cognition. Although HD cells have been shown to act as an internal neuronal compass in various cortical and subcortical regions, the neural substrate of HD cells is incompletely understood. It is reported that HD cells in the somatosensory cortex comprise regular‐spiking (RS, putative excitatory) and fast‐spiking (FS, putative inhibitory) neurons. Surprisingly, somatosensory FS HD cells fire in bursts and display much sharper head‐directionality than RS HD cells. These FS HD cells are nonconjunctive, rarely theta rhythmic, sparsely connected and enriched in layer 5. Moreover, sharply tuned FS HD cells, in contrast with RS HD cells, maintain stable tuning in darkness; FS HD cells’ coexistence with RS HD cells and angular head velocity (AHV) cells in a layer‐specific fashion through the somatosensory cortex presents a previously unreported configuration of spatial representation in the neocortex. Together, these findings challenge the notion that FS interneurons are weakly tuned to sensory stimuli, and offer a local circuit organization relevant to the generation and transmission of HD signaling in the brain. Head direction cells act as an internal neuronal compass in the brain's spatial navigation system. However, the neuronal substrate of head direction cells remains poorly understood. Long and co‐workers first identify sharply tuned fast‐spiking head direction cells in somatosensory cortex that fire in bursts. Their findings uncover the cellular basis for somatosensory head direction cells different from their classical hippocampal counterpart.
Cortical Neurons Adjust the Action Potential Onset Features as a Function of Stimulus Type
Pyramidal neurons and interneurons play critical roles in regulating the neuronal activities in the mammalian cortex, where they exhibit different firing patterns. Pyramidal neurons mainly exhibit regular-spiking firing patterns, while interneurons have fast-spiking firing patterns. Cortical neurons have distinct action potential onset dynamics, in which the evoked action potential is rapid and highly variable. However, it is still unclear how cortical regular-spiking and fast-spiking neurons discriminate between different types of stimuli by changing their action potential onset parameters. Thus, we used intracellular recordings of regular-spiking and fast-spiking neurons, taken from layer 2/3 in the somatosensory cortex of adult mice, to investigate changes in the action potential waveform in response to two distinct stimulation protocols: the conventional step-and-hold and frozen noise. The results show that the frozen noise stimulation paradigm evoked more rapid action potential with lower threshold potential in both neuron types. Nevertheless, the difference in the action potential rapidity in response to different stimuli was significant in regular-spiking pyramidal neurons while insignificant in fast-spiking interneurons. Furthermore, the threshold variation was significantly higher for regular-spiking neurons than for fast-spiking neurons. Our findings demonstrate that different types of cortical neurons exhibit various onset dynamics of the action potentials, implying that different mechanisms govern the initiation of action potentials across cortical neuron subtypes.
Hybrid spin‐CMOS stochastic spiking neuron for high‐speed emulation of In vivo neuron dynamics
The spintronic stochastic spiking neuron (S3N) developed herein realises biologically mimetic stochastic spiking characteristics observed within in vivo cortical neurons, while operating several orders of magnitude more rapidly and exhibiting a favourable energy profile. This work leverages a novel probabilistic spintronic switching element device that provides thermally‐driven and current‐controlled tunable stochasticity in a compact, low‐energy, and high‐speed package. In order to close the loop, the authors utilise a second‐order complementary metal‐oxide‐semiconductor (CMOS) synapse with variable weight control that accumulates incoming spikes into second‐order transient current signals, which resemble the excitatory post‐synaptic potentials found in biological neurons, and can be used to drive post‐synaptic S3Ns. Simulation program with integrated circuit emphasis (SPICE) simulation results indicate that the equivalent of 1 s of in vivo neuronal spiking characteristics can be generated on the order of nanoseconds, enabling the feasibility of extremely rapid emulation of in vivo neuronal behaviours for future statistical models of cortical information processing. Their results also indicate that the S3N can generate spikes on the order of ten picoseconds while dissipating only 0.6–9.6 μW, depending on the spiking rate. Additionally, they demonstrate that an S3N can implement perceptron functionality, such as AND‐gate‐ and OR‐gate‐based logic processing, and provide future extensions of the work to more advanced stochastic neuromorphic architectures.
Multiple forms of working memory emerge from synapse-astrocyte interactions in a neuron-glia network model
Persistent activity in populations of neurons, time-varying activity across a neural population, or activity-silent mechanisms carried out by hidden internal states of the neural population have been proposed as different mechanisms of working memory (WM). Whether these mechanisms could be mutually exclusive or occur in the same neuronal circuit remains, however, elusive, and so do their biophysical underpinnings. While WM is traditionally regarded to depend purely on neuronal mechanisms, cortical networks also include astrocytes that can modulate neural activity. We propose and investigate a network model that includes both neurons and glia and show that glia–synapse interactions can lead to multiple stable states of synaptic transmission. Depending on parameters, these interactions can lead in turn to distinct patterns of network activity that can serve as substrates for WM.
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide "when" to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
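As a rough illustration of the review's central idea (a generic sketch, not a specific rule from the text), a minimal eligibility-trace-based three-factor update might look like this, where the pre/post coincidence is stored in a decaying trace and the weight only changes when the neuromodulatory signal arrives:

```python
import numpy as np

def three_factor_update(pre, post, modulator, eta=0.01, tau_e=10.0):
    """Neo-Hebbian three-factor rule sketch: a decaying eligibility
    trace e stores the Hebbian pre/post coincidence, and the weight
    is consolidated only when the third factor (modulator) is active."""
    w, e = 0.0, 0.0
    for x, y, m in zip(pre, post, modulator):
        e += -e / tau_e + x * y   # Hebbian coincidence feeds the trace
        w += eta * m * e          # neuromodulator gates consolidation
    return w

T = 50
pre, post, mod = np.zeros(T), np.zeros(T), np.zeros(T)
pre[10] = post[10] = 1.0          # coincident pre/post activity
mod[20] = 1.0                     # reward arrives 10 steps later
w_rewarded = three_factor_update(pre, post, mod)
```

The decaying trace is what bridges the temporal gap between the sensory-driven coincidence and the delayed neuromodulatory signal that the review emphasizes.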
Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges
The two possible pathways towards artificial intelligence – (i) neuroscience-oriented neuromorphic computing (like spiking neural networks, SNNs) and (ii) computer-science-driven machine learning (like deep learning) – differ widely in their fundamental formalism and coding schemes (Pei et al. 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features like computation using spikes. This holds the promise of improving the energy efficiency of computing platforms. In order to achieve much higher areal and energy efficiency compared to today's hardware implementations of SNNs, we need to go beyond the traditional route of relying on CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) are being explored as a promising alternative for building neuromorphic hardware by utilizing their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway towards building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms.
Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and limited bit precision of on-chip synaptic weights (available analog states) on the classification accuracy.
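Surrogate gradient learning, mentioned above, substitutes a smooth function for the derivative of the non-differentiable spike threshold during backpropagation. A minimal sketch assuming a fast-sigmoid surrogate (the paper's exact surrogate function and parameters are not specified in this abstract):

```python
import numpy as np

def spike(v, v_th=1.0):
    """Non-differentiable forward pass: Heaviside step at threshold."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Fast-sigmoid surrogate for the spike derivative, used only in
    the backward pass so loss gradients can flow through spikes."""
    return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2

# The surrogate peaks exactly at threshold and decays smoothly away
# from it, unlike the true derivative (zero almost everywhere).
v = np.linspace(0.0, 2.0, 201)
peak_voltage = float(v[np.argmax(surrogate_grad(v))])
```

Device-level stochasticity and limited analog weight precision, as studied in the co-design above, would enter such a training loop as noise and quantization on the stored weights.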