Catalogue Search | MBRL
Explore the vast range of titles available.
38 result(s) for "Kunkel, Susanne"
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
by Jordan, Jakob; Igarashi, Jun; Kitayama, Itaru
in Brain research; Computational neuroscience; Computer programs
2018
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Journal Article
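The two-tier connection infrastructure described in the abstract above can be pictured with a small sketch. The Python toy below illustrates the general idea only; it is not the NEST implementation, and all names (TwoTierConnectivity, connect, targets_of) are assumptions made for exposition.

class TwoTierConnectivity:
    """Tier 1 maps only those source neuron IDs that actually have targets
    on this compute node; tier 2 holds the local target list per source."""

    def __init__(self):
        self.sources = {}  # sparse: sources without local targets cost nothing

    def connect(self, source_gid, target_thread, synapse_type, local_target):
        # Tier 2 entry: where on this node the spike must be delivered.
        self.sources.setdefault(source_gid, []).append(
            (target_thread, synapse_type, local_target))

    def targets_of(self, source_gid):
        # Spike delivery only ever touches sources known to this node.
        return self.sources.get(source_gid, [])

conn = TwoTierConnectivity()
conn.connect(source_gid=7, target_thread=0, synapse_type=1, local_target=3)
print(conn.targets_of(7))   # [(0, 1, 3)]
print(conn.targets_of(8))   # [] -- no memory was spent on source 8

The directed communication scheme complements this structure: a node sends spikes only to the nodes that host targets of the emitting neurons, rather than broadcasting to all nodes.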
Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models
by Kunkel, Susanne; Morrison, Abigail; Weidel, Philipp
in Bioinformatics; Firing pattern; Neural networks
2018
Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.
Journal Article
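The guideline the authors derive centers on making every model dependency explicit. As an illustration of that practice (invented for illustration, not an excerpt from the paper), a parameter specification might pin down units, seeds, initialization distributions, and analysis thresholds so that nothing remains hidden:

params = {
    "neuron": {
        "V_th": -55.0,        # spike threshold, mV (unit stated explicitly)
        "tau_m": 10.0,        # membrane time constant, ms
        "V_init": ("uniform", -70.0, -55.0),  # initialization distribution, mV
    },
    "simulation": {
        "dt": 0.1,            # integration timestep, ms (results may depend on it)
        "rng_seed": 12345,    # fixed seed: background noise realization matters
        "noise_model": "poisson_generator",
    },
    "analysis": {
        "group_threshold": 3,  # thresholding parameter of the analysis algorithm
    },
}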
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
2017
NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
Journal Article
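The dry-run idea can be sketched conceptually. The snippet below is a toy built on assumptions, not the actual NEST dry-run interface: a single process runs every simulation phase, with the communication step replaced by a stub that behaves as if many peers existed, so memory and runtime measurements approximate a real parallel run.

class DryRunComm:
    def __init__(self, num_procs):
        self.num_procs = num_procs        # size of the virtual environment

    def exchange(self, local_spikes):
        # A real run would perform an MPI exchange here; the dry run instead
        # fabricates incoming data of realistic volume (no network needed).
        return local_spikes * self.num_procs

comm = DryRunComm(num_procs=1024)         # pretend to be 1 of 1024 ranks
local_spikes = [17, 42, 99]               # spikes emitted on this "rank"
incoming = comm.exchange(local_spikes)
print(f"delivering {len(incoming)} spikes, as in a 1024-process run")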
Efficient Communication in Distributed Simulations of Spiking Neuronal Networks With Gap Junctions
by Helias, Moritz; Jordan, Jakob; Kunkel, Susanne
in Communication; computational neuroscience; electrical synapses
2020
Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. However, simulations that involve electrical interactions, also called gap junctions, in addition to chemical synapses scale poorly due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability we exploit this sparsity by integrating an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes. Using a reference implementation in the NEST simulator we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density distributed across thousands of compute nodes.
Journal Article
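A rough sketch of how sparsity can be exploited for continuous interactions, in the spirit of the abstract above; the function and data layout are hypothetical, not the authors' implementation.

from collections import defaultdict

def route_gap_junction_data(coupled_partners, local_potentials):
    """coupled_partners: local neuron id -> ranks hosting its gap-junction
    partners (typically very few, because gap junctions are sparse)."""
    outboxes = defaultdict(list)
    for neuron, ranks in coupled_partners.items():
        waveform = local_potentials[neuron]    # V_m samples for the interval
        for rank in ranks:
            outboxes[rank].append((neuron, waveform))
    return outboxes                            # one targeted message per rank

partners = {3: {5}, 9: {5, 12}}                # neuron 3 couples to rank 5, ...
potentials = {3: [-65.0, -64.8], 9: [-70.1, -70.0]}
for rank, payload in sorted(route_gap_junction_data(partners, potentials).items()):
    print(rank, payload)

Each outbox becomes one directed message, replacing the global collection of data on every compute node that made the previous scheme scale poorly.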
Closed-loop coupling of both physiological spindle model and spinal pathways for sensorimotor control of human center-out reaching
by Hammer, Maria; Chacon, Pablo Filipe Santana; Wochner, Isabell
in muscle spindle; neuromusculoskeletal model; proprioception
2025
New studies that consider different structures of the hierarchical sensorimotor control system are essential for a more holistic understanding of movement. Incorporating more biologically realistic proprioceptive and neuronal circuit models into muscle models can make neuromusculoskeletal systems better suited to investigating and elucidating motor control. Specifically, further studies that consider the closed loop between proprioception and the central nervous system may help resolve the still-open question of the importance of afferent feedback for sensorimotor learning and execution in the intact biological system. This study therefore investigates the processing of spindle afferent firing by a spiking neuronal network and its relevance for sensorimotor control. We integrated our previously published physiological model of the muscle spindle into a biological arm model, a musculoskeletal system able to reproduce biological motion within the demoa multi-body simulation framework. We coupled this musculoskeletal system to physiologically motivated spinal neuronal pathways, implemented on the basis of the literature in the NEST spiking neural network simulator, and intended to perform human center-out reaching arising from spinal synaptic learning. As a result, the spindle connections to the spinal neurons were strengthened for the more difficult targets (i.e., targets placed higher up) under perturbation, highlighting the importance of spindle proprioception for succeeding in more difficult scenarios. Furthermore, an additionally implemented, simpler spinal network that lacks the pathways with spindle proprioception performed worse on the task, failing to reach all the evaluated targets.
Journal Article
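The closed loop described above can be caricatured in a few lines. The toy below is purely illustrative, with all models and names being stand-ins rather than the authors' demoa/NEST setup: biomechanics, spindle afferents, and a rate-based stand-in for the spinal network exchange signals every control step.

def spindle_afferents(fiber_len, fiber_vel, k_len=50.0, k_vel=5.0):
    # Simplified Ia-like rate: grows with stretch and stretch velocity.
    return max(0.0, k_len * (fiber_len - 1.0) + k_vel * fiber_vel)

def rate_to_activation(rate, gain=0.01):
    # Crude stand-in for the spinal network mapping afferents to muscle drive.
    return min(1.0, max(0.0, gain * rate))

class ToyMuscle:
    def __init__(self):
        self.length, self.velocity = 1.1, 0.0    # normalized fiber state

    def step(self, activation, dt):
        self.velocity = -activation              # activation shortens the fiber
        self.length = max(0.8, self.length + self.velocity * dt)
        return self.length, self.velocity

muscle, activation = ToyMuscle(), 0.0
for _ in range(100):                             # 100 control steps of 10 ms
    length, velocity = muscle.step(activation, dt=0.01)
    ia_rate = spindle_afferents(length, velocity)   # proprioceptive feedback
    activation = rate_to_activation(ia_rate)        # closes the loop
print(f"final fiber length: {muscle.length:.3f}")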
Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring
by Wylie, Brian J N; Pronold, Jari; Jordan, Jakob
in Algorithms; Communication; Computer programs
2022
Generic simulation code for spiking neuronal networks spends the major part of time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. For finding those targets, the spikes need to be dispatched to a three-dimensional data structure with decisions on target thread and synapse type to be made on the way. With growing network size a compute node receives spikes from an increasing number of different source neurons until in the limit each synapse on the compute node has a unique source. Here we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm all threads search through all spikes to pick out the relevant ones. With increasing network size the fraction of hits remains invariant but the absolute number of rejections grows. An alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this every thread completes delivery solely of the section of spikes for its own neurons. The new algorithm halves the number of instructions in spike delivery which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
Journal Article
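The refactoring described in the abstract can be made concrete with a toy. The sketch below is illustrative only (the production code is C++ and multi-threaded): spikes are first grouped and sorted by target thread and synapse type, and afterwards each thread delivers only its own section.

from collections import defaultdict

def presort_spikes(spikes):
    """Phase 1: in the real algorithm each thread sorts an equal share of
    the spikes in parallel; here the combined result is built sequentially."""
    per_thread = defaultdict(list)
    for target_thread, synapse_type, target_neuron in spikes:
        per_thread[target_thread].append((synapse_type, target_neuron))
    for bucket in per_thread.values():
        bucket.sort()                    # group deliveries by synapse type
    return per_thread

def deliver(per_thread, thread_id):
    """Phase 2: after a single synchronization point, each thread touches
    only the spikes destined for its own neurons -- no scanning, no rejects."""
    return per_thread.get(thread_id, [])

spikes = [(0, 1, 42), (1, 0, 7), (0, 0, 3), (1, 1, 99)]
buckets = presort_spikes(spikes)
for t in range(2):
    print(f"thread {t} delivers {deliver(buckets, t)}")

In the original scheme every thread scanned all spikes and rejected the misses, so the absolute number of rejections grew with network size; the presort removes those rejections entirely.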
Spiking network simulation code for petascale computers
by Plesser, Hans E.; Fukai, Tomoki; Igarashi, Jun
in Brain research; computational neuroscience; Computers
2014
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
Journal Article
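The "double collapse" lends itself to a small illustration. The paper realizes this with C++ metaprogramming; the Python toy below only mirrors the idea of choosing a specialized, cheaper container when a source neuron has a single local synapse of a single type.

class HeterogeneousTargets:
    """General case: multiple synapse types, multiple targets per type."""
    def __init__(self, by_type):
        self.by_type = by_type            # synapse_type -> list of targets

class SingleTarget:
    """Collapsed case: one synapse of one type; no nesting, minimal memory."""
    __slots__ = ("synapse_type", "target")
    def __init__(self, synapse_type, target):
        self.synapse_type, self.target = synapse_type, target

def make_targets(by_type):
    # Pick the cheapest representation that fits, mirroring the compile-time
    # specialization the abstract attributes to metaprogramming techniques.
    if len(by_type) == 1:
        ((syn_type, targets),) = by_type.items()
        if len(targets) == 1:
            return SingleTarget(syn_type, targets[0])
    return HeterogeneousTargets(by_type)

print(type(make_targets({3: [17]})).__name__)             # SingleTarget
print(type(make_targets({1: [2, 5], 3: [17]})).__name__)  # HeterogeneousTargets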
A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations
by Pronold, Jari; Kurth, Anno Christopher; Patronis, Alexander
in Benchmarks; Computational neuroscience; Efficiency
2022
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
Journal Article
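To make the idea of unified benchmark data and metadata concrete, here is a hypothetical specification, invented for illustration and not beNNch's actual configuration format, of the kind of information such a modular workflow records to keep results comparable:

benchmark = {
    "model": "multi-area-model",          # scientifically relevant network
    "simulator": {"name": "NEST", "version": "3.3"},   # software revision
    "hardware": {"system": "hpc-cluster", "nodes": [1, 2, 4, 8, 16]},
    "measurement": {
        "metric": "time-to-solution",
        "phases": ["network-construction", "state-propagation"],
        "repetitions": 3,                 # control run-to-run variability
    },
    "metadata": {
        "compiler": "gcc 11.2",           # recorded so results stay comparable
        "mpi": "OpenMPI 4.1",
        "recorded_at": "commit+timestamp",
    },
}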
A General and Efficient Method for Incorporating Precise Spike Times in Globally Time-Driven Simulations
2010
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision.
Journal Article
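The key insight of the abstract translates directly into code. The toy below is illustrative (the paper treats specific integrate-and-fire models, and the interpolation here is the simplest linear variant): detect, after the fact, that the threshold was crossed during the last step, and locate the crossing in the recent past.

def retrospective_crossing(v_prev, v_curr, v_th, t_prev, dt):
    """Return the interpolated crossing time if V crossed v_th during the
    last step, else None. Linear interpolation between the grid points."""
    if v_prev < v_th <= v_curr:
        frac = (v_th - v_prev) / (v_curr - v_prev)
        return t_prev + frac * dt
    return None

# Example: V rises from -56 mV to -54 mV over a 0.1 ms step; threshold -55 mV.
t_spike = retrospective_crossing(-56.0, -54.0, -55.0, t_prev=10.0, dt=0.1)
print(f"spike time ~ {t_spike:.3f} ms")   # 10.050 ms, finer than the grid

Detecting a past crossing like this is far simpler than predicting a future one, which is why the time-driven scheme outperforms event-driven approaches once the per-spike costs dominate.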
Limits to the development of feed-forward structures in large recurrent neuronal networks
2011
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
Journal Article
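Weight-dependent STDP, central to the analysis above, is easy to state in code. The rule below is a common multiplicative form, in the spirit of the model class discussed but not necessarily the paper's exact rule; the sign of the net drift for a balanced spike pair flips with w, which is what gives the weight dynamics its fixed-point structure.

import math

def stdp_dw(w, dt, lam=0.01, alpha=1.05, mu=0.4, tau=20.0, w_max=1.0):
    """Weight change for a pre/post spike pair separated by dt (ms).
    dt > 0: pre before post (potentiation); dt < 0: depression."""
    if dt > 0:
        return lam * (1.0 - w / w_max) ** mu * math.exp(-dt / tau)
    return -lam * alpha * (w / w_max) ** mu * math.exp(dt / tau)

for w in (0.1, 0.5, 0.9):
    net = stdp_dw(w, 5.0) + stdp_dw(w, -5.0)    # balanced pair of spikes
    print(f"w={w:.1f}: net drift {net:+.5f}")   # drift reverses sign with w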