652 result(s) for "spiking network models"
Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models
Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.
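The timestep sensitivity described in this abstract is easy to reproduce in miniature. The sketch below is generic forward-Euler integration of a single Izhikevich regular-spiking neuron (not the study's code; all constants are the textbook values): the same neuron under the same constant drive produces different spike times when only the integration step changes.

```python
def izhikevich(T=200.0, dt=1.0, I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron with forward-Euler steps of dt (ms).

    Returns the list of spike times in ms.
    """
    v, u = -65.0, b * -65.0   # membrane potential and recovery variable
    spikes = []
    t = 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike cutoff used in the original formulation
            spikes.append(t)
            v, u = c, u + d    # reset
        t += dt
    return spikes

# Identical neuron and drive, different integration steps:
coarse = izhikevich(dt=1.0)
fine = izhikevich(dt=0.1)
```

Even this single-neuron case yields diverging spike trains for the two step sizes; in a recurrent network such discrepancies compound, which is why an unreported timestep can make a model irreproducible.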
A prefrontal network model operating near steady and oscillatory states links spike desynchronization and synaptic deficits in schizophrenia
Schizophrenia results in part from a failure of prefrontal networks, but we lack a full understanding of how disruptions at a synaptic level cause failures at the network level. This is a crucial gap in our understanding because it prevents us from discovering how genetic mutations and environmental risks that alter synaptic function cause prefrontal networks to fail in schizophrenia. To address that question, we developed a recurrent spiking network model of prefrontal local circuits that can explain the link between NMDAR synaptic and 0-lag spike synchrony deficits we recently observed in a pharmacological monkey model of prefrontal network failure in schizophrenia. We analyze how the balance between AMPA and NMDA components of recurrent excitation and GABA inhibition in the network influences oscillatory spike synchrony to inform the biological data. We show that reducing recurrent NMDAR synaptic currents prevents the network from shifting from a steady to an oscillatory state in response to extrinsic inputs such as might occur during behavior. These findings strongly parallel dynamic modulation of 0-lag spike synchrony we observed between neurons in monkey prefrontal cortex during behavior, as well as the suppression of this 0-lag spiking by administration of NMDAR antagonists. As such, our cortical network model provides a plausible mechanism explaining the link between NMDAR synaptic and 0-lag spike synchrony deficits observed in a pharmacological monkey model of prefrontal network failure in schizophrenia. Schizophrenia is a long-term mental health condition that can cause a person to see, hear or believe things that are not real. Although researchers do not fully understand the causes of schizophrenia, it is known to disrupt synapses, which connect neurons in the brain to form circuits that carry out a specific function when activated.
This disruption alters the pattern of activity among the neurons, distorting the way that information is processed and leading to symptoms. Development of schizophrenia is thought to be due to interactions between many factors, including genetic makeup, changes in how the brain matures during development, and environmental stress. Despite animal studies revealing how neural circuits can fail at the level of individual cells, it remains difficult to predict or understand the complex ways that this damage affects advanced brain functions. Previous research in monkeys showed that mimicking schizophrenia using a drug that blocks a particular type of synapse prevented neurons from coordinating their activity. However, this did not address how synaptic and cellular changes lead to disrupted neural circuits. To better understand this, Crowe et al. developed a computational model of neural circuits to study how they respond to synapse disruption. To replicate the brain, the model consisted of two types of neurons – those that activate connecting cells in response to received signals and those that suppress them. This model could replicate the complex network behavior that causes brain cells to respond to sensory inputs. Increasing the strength of inputs to the network caused it to switch from a state in which the cells fired independently to one where the cells fired at the same time. As was previously seen in monkeys, blocking a particular type of synapse thought to be involved in schizophrenia prevented the cells from coordinating their signaling. The findings suggest that schizophrenia-causing factors can reduce the ability of neurons to fire at the same instant. Disrupting this process could lead to weaker and fewer synapses forming during brain development or loss of synapses in adults. 
If that is the case, and scientists can understand how factors combine to trigger this process, the mechanism of coordinated activity failure revealed by the model could help identify treatments that prevent or reverse the synapse disruption seen in schizophrenia.
Simulation of a Human-Scale Cerebellar Network Model on the K Computer
Computer simulation of the human brain at an individual neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum will be a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of creating a simulation of a human-scale cerebellar model with the aid of the MONET simulator.
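The tile partitioning named in this abstract can be illustrated schematically (this is not MONET code; the function name and shapes are illustrative). A layered 2D neuron sheet is cut into rectangular tiles, one per compute node; because connectivity in a layered sheet is mostly local, each node only exchanges spikes with neighboring tiles, which is what makes good weak scaling achievable.

```python
def tile_partition(width, height, tiles_x, tiles_y):
    """Split a width x height neuron sheet into tiles_x * tiles_y rectangles.

    Returns (x0, y0, x1, y1) bounds per tile; integer arithmetic guarantees
    the tiles exactly cover the sheet with no overlap.
    """
    tiles = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0 = tx * width // tiles_x
            x1 = (tx + 1) * width // tiles_x
            y0 = ty * height // tiles_y
            y1 = (ty + 1) * height // tiles_y
            tiles.append((x0, y0, x1, y1))
    return tiles
```

Doubling the sheet size while doubling the node count keeps each tile's workload constant, which is the weak-scaling property the study measures.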
Evaluating the extent to which homeostatic plasticity learns to compute prediction errors in unstructured neuronal networks
The brain is believed to operate in part by making predictions about sensory stimuli and encoding deviations from these predictions in the activity of “prediction error neurons.” This principle defines the widely influential theory of predictive coding. The precise circuitry and plasticity mechanisms through which animals learn to compute and update their predictions are unknown. Homeostatic inhibitory synaptic plasticity is a promising mechanism for training neuronal networks to perform predictive coding. Homeostatic plasticity causes neurons to maintain a steady, baseline firing rate in response to inputs that closely match the inputs on which a network was trained, but firing rates can deviate away from this baseline in response to stimuli that are mismatched from training. We combine computer simulations and mathematical analysis to systematically test the extent to which randomly connected, unstructured networks compute prediction errors after training with homeostatic inhibitory synaptic plasticity. We find that homeostatic plasticity alone is sufficient for computing prediction errors for trivial time-constant stimuli, but not for more realistic time-varying stimuli. We use a mean-field theory of plastic networks to explain our findings and characterize the assumptions under which they apply.
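A common concrete form of homeostatic inhibitory synaptic plasticity is the spike-timing rule of Vogels et al. (2011), sketched below with illustrative constants (not this paper's parameters). An inhibitory synapse is potentiated when pre- and postsynaptic spikes coincide and depressed on lone presynaptic spikes, with a bias α that fixes the postsynaptic target rate ρ₀.

```python
ETA = 1e-3                  # learning rate (illustrative)
TAU = 20.0                  # spike-trace time constant, ms
RHO0 = 0.005                # target postsynaptic rate, spikes/ms (5 Hz)
ALPHA = 2.0 * RHO0 * TAU    # depression bias that sets the target rate

def inhibitory_update(w, x_pre, x_post, pre_spiked, post_spiked):
    """One plasticity step for a single inhibitory synapse.

    x_pre and x_post are exponentially filtered spike traces (decaying
    with time constant TAU between spikes, incremented by 1 at a spike).
    """
    if pre_spiked:
        w += ETA * (x_post - ALPHA)   # potentiate if post was recently active, else depress
    if post_spiked:
        w += ETA * x_pre              # potentiate for recent presynaptic activity
    return max(w, 0.0)                # inhibitory weight magnitude stays non-negative
```

When the postsynaptic rate exceeds ρ₀ the potentiation term dominates, strengthening inhibition and pulling the rate back down; below ρ₀ depression dominates, which is the baseline-restoring behavior the abstract describes.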
A transformation from temporal to ensemble coding in a model of piriform cortex
Different coding strategies are used to represent odor information at various stages of the mammalian olfactory system. A temporal latency code represents odor identity in olfactory bulb (OB), but this temporal information is discarded in piriform cortex (PCx) where odor identity is instead encoded through ensemble membership. We developed a spiking PCx network model to understand how this transformation is implemented. In the model, the impact of OB inputs activated earliest after inhalation is amplified within PCx by diffuse recurrent collateral excitation, which then recruits strong, sustained feedback inhibition that suppresses the impact of later-responding glomeruli. We model increasing odor concentrations by decreasing glomerulus onset latencies while preserving their activation sequences. This produces a multiplexed cortical odor code in which activated ensembles are robust to concentration changes while concentration information is encoded through population synchrony. Our model demonstrates how PCx circuitry can implement multiplexed ensemble-identity/temporal-concentration odor coding.
Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data
The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results vary from insufficient model implementations and data analysis methods, deficiencies in workmanship (e.g., simulation planning, setup, and execution) to errors induced by hardware constraints (e.g., limitations in numerical precision). In order to build credibility, methods such as verification and validation have been developed, but they are not yet well-established in the field of neural network modeling and simulation, partly due to ambiguity concerning the terminology. In this manuscript, we propose a terminology for model verification and validation in the field of neural network modeling and simulation. We outline a rigorous workflow derived from model verification and validation methodologies for increasing model credibility when it is not possible to validate against experimental data. We compare a published minimal spiking network model capable of exhibiting the development of polychronous groups, to its reproduction on the SpiNNaker neuromorphic system, where we consider the dynamics of several selected network states. As a result, by following a formalized process, we show that numerical accuracy is critically important, and even small deviations in the dynamics of individual neurons are expressed in the dynamics at network level.
Biophysical parameters control signal transfer in spiking network
Information transmission and representation in both natural and artificial networks is dependent on connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and post-synaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on their model to explore how basic membrane properties and synaptic delays affect information transfer. The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three different parameter regimes which comprised either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime whose output units were optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error. Biophysical parameters showed a major impact on information transfer metrics. The classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall and particularly low reconstruction error were more dependent on higher firing rates in LIF units. 
In AdEx units, the firing rates were lower and less information was transferred, but interestingly the highest information transmission rates no longer overlapped with the highest firing rates. Our findings can be considered in light of the predictive coding theory of the cerebral cortex and may point to information transfer qualities as a phenomenological property of biological cells.
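Both unit types in this study's hidden layer build on basic leaky integrate-and-fire dynamics. A textbook-form LIF neuron (generic constants, not the study's biophysical parameterization) makes the dependence of firing rate on input drive explicit:

```python
def lif_spike_count(I, T=200.0, dt=0.1, tau=20.0,
                    v_rest=-70.0, v_th=-50.0, v_reset=-70.0):
    """Count spikes of a leaky integrate-and-fire neuron over T ms.

    The constant input I is expressed directly as mV of steady-state
    depolarization; forward-Euler integration with step dt (ms).
    """
    v = v_rest
    count = 0
    for _ in range(int(T / dt)):
        v += dt / tau * (-(v - v_rest) + I)   # leak toward rest plus drive
        if v >= v_th:                         # threshold crossing: spike and reset
            v = v_reset
            count += 1
    return count
```

Sub-threshold drive (steady state below v_th) yields no spikes, and the rate grows with supra-threshold drive; the AdEx model adds an exponential spike-initiation term and an adaptation current on top of this leak equation, which lowers firing rates as the abstract reports.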
Restoring Behavior via Inverse Neurocontroller in a Lesioned Cortical Spiking Model Driving a Virtual Arm
Neural stimulation can be used as a tool to elicit natural sensations or behaviors by modulating neural activity. This can be potentially used to mitigate the damage of brain lesions or neural disorders. However, in order to obtain the optimal stimulation sequences, it is necessary to develop neural control methods, for example by constructing an inverse model of the target system. For real brains, this can be very challenging, and often unfeasible, as it requires repeatedly stimulating the neural system to obtain enough probing data, and depends on an unwarranted assumption of stationarity. By contrast, detailed brain simulations may provide an alternative testbed for understanding the interactions between ongoing neural activity and external stimulation. Unlike real brains, the artificial system can be probed extensively and precisely, and detailed output information is readily available. Here we employed a spiking network model of sensorimotor cortex trained to drive a realistic virtual musculoskeletal arm to reach a target. The network was then perturbed, in order to simulate a lesion, by either silencing neurons or removing synaptic connections. All lesions led to significant behavioral impairments during the reaching task. The remaining cells were then systematically probed with a set of single- and multiple-cell stimulations, and results were used to build an inverse model of the neural system. The inverse model was constructed using a kernel adaptive filtering method, and was used to predict the neural stimulation pattern required to recover the pre-lesion neural activity. Applying the derived neurostimulation to the lesioned network improved the reaching behavior performance. This work proposes a novel neurocontrol method, and provides theoretical groundwork on the use of biomimetic brain models to develop and evaluate neurocontrollers that restore the function of damaged brain regions and the corresponding motor behaviors.
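The inverse model here was built with a kernel adaptive filtering method. The snippet below sketches kernel least-mean-squares (KLMS), the simplest member of that family; the authors' exact variant, kernel, and step size are not given in the abstract, so these choices are illustrative.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two input vectors."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

class KLMS:
    """Kernel least-mean-squares: an online nonlinear regressor that grows
    one kernel center per sample and sets its coefficient by stochastic
    gradient descent on the instantaneous squared error."""

    def __init__(self, eta=0.2):
        self.eta = eta
        self.centers = []   # stored input samples
        self.coeffs = []    # eta * prediction error at each center

    def predict(self, x):
        return sum(c * gaussian_kernel(xc, x)
                   for xc, c in zip(self.centers, self.coeffs))

    def update(self, x, y):
        error = y - self.predict(x)                 # instantaneous error
        self.centers.append(np.asarray(x, float))   # grow the dictionary
        self.coeffs.append(self.eta * error)
        return error
```

For the inverse-model use case, x would be a desired neural activity pattern and y the stimulation that produced it during probing; after training, `predict` maps a target (pre-lesion) pattern to a candidate stimulation. The growing dictionary is the known cost of plain KLMS, which sparsified variants address.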
Back-Propagation Learning in Deep Spike-By-Spike Networks
Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks which represent a compromise between non-spiking and spiking versions of generative models. What is missing, however, are algorithms for finding weight sets that would optimize the output performances of deep SbS networks with many layers. Here, a learning rule for feed-forward SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a Deep Convolutional SbS network for classifying handwritten digits achieves a classification performance of roughly 99.3% on the MNIST test data when the learning rule is applied together with an optimizer. Thereby it approaches the benchmark results of ANNs without extensive parameter optimization. We envision this learning rule for SbS networks to provide a new basis for research in neuroscience and for technical applications, especially when they become implemented on specialized computational hardware.
A neural network model for familiarity and context learning during honeybee foraging flights
How complex is the memory structure that honeybees use to navigate? Recently, an insect-inspired parsimonious spiking neural network model was proposed that enabled simulated ground-moving agents to follow learned routes. We adapted this model to flying insects and evaluated its route following performance in three different worlds with gradually decreasing object density. In addition, we propose an extension that enables the model to associate sensory input with a behavioral context, such as foraging or homing. The spiking neural network model makes use of a sparse stimulus representation in the mushroom body and reward-based synaptic plasticity at its output synapses. In our experiments, simulated bees were able to navigate correctly even when panoramic cues were missing. The context extension we propose enabled agents to successfully discriminate partly overlapping routes. The structure of the visual environment, however, crucially determines the success rate. We find that the model fails more often in visually rich environments due to the overlap of features represented by the Kenyon cell layer. Reducing the landmark density improves the agents' route following performance. In very sparse environments, we find that extended landmarks, such as roads or field edges, may help the agent stay on its route, but often act as strong distractors yielding poor route following performance. We conclude that the presented model is valid for simple route following tasks and may represent one component of insect navigation. Additional components might still be necessary for guidance and action selection while navigating along different memorized routes in complex natural environments.