Catalogue Search | MBRL
22 result(s) for "Golosio, Bruno"
Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs
by Golosio, Bruno; Simula, Francesco; Tiddia, Gianmarco
in adaptive exponential integrate-and-fire neuron model; Biological activity; C plus plus
2021
Over the past decade there has been a growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly-parallel systems, GPU-accelerated solutions have the advantage of a relatively low cost and a great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current- or conductance-based synapses, different types of spike generators, tools for recording spikes, state variables and parameters, and it supports user-definable models. The numerical solution of the differential equations of the dynamics of the AdEx models is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we will show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3·10^8 connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
Journal Article
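As an illustration of the AdEx dynamics that this record says the library integrates with an adaptive-step Runge-Kutta scheme, the following is a minimal Python sketch using SciPy's adaptive RK45 solver. It is not the library's CUDA-C++ code; the parameter values and constant input current are assumptions chosen only for illustration.

```python
# Minimal sketch: the AdEx (adaptive exponential integrate-and-fire) equations
# integrated with an adaptive-step Runge-Kutta solver (SciPy RK45). Parameter
# values and the constant input current are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

C, g_L, E_L = 281.0, 30.0, -70.6   # capacitance (pF), leak conductance (nS), rest (mV)
V_T, Delta_T = -50.4, 2.0          # threshold and slope factor (mV)
a, tau_w, b = 4.0, 144.0, 80.5     # adaptation coupling (nS), time constant (ms), jump (pA)
V_peak, V_reset = 0.0, -70.6       # spike cutoff and reset (mV)
I_ext = 800.0                      # constant input current (pA), assumed

def adex(t, y):
    V, w = y
    exp_term = np.exp(min((V - V_T) / Delta_T, 10.0))  # clipped to avoid overflow
    dV = (-g_L * (V - E_L) + g_L * Delta_T * exp_term - w + I_ext) / C
    dw = (a * (V - E_L) - w) / tau_w
    return [dV, dw]

def spike(t, y):                   # event: membrane potential reaches the cutoff
    return y[0] - V_peak
spike.terminal, spike.direction = True, 1

t, y, spikes = 0.0, [E_L, 0.0], []
while t < 500.0:                   # 500 ms of model time
    sol = solve_ivp(adex, (t, 500.0), y, method="RK45", events=spike, max_step=1.0)
    t, y = sol.t[-1], [sol.y[0, -1], sol.y[1, -1]]
    if sol.status == 1:            # spike detected: reset V, increment adaptation
        spikes.append(t)
        y = [V_reset, y[1] + b]

print(f"{len(spikes)} spikes in 500 ms")
```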
Exploring Autism Spectrum Disorder: A Comparative Study of Traditional Classifiers and Deep Learning Classifiers to Analyze Functional Connectivity Measures from a Multicenter Dataset
by Golosio, Bruno; Retico, Alessandra; Oliva, Piernicola
in ABIDE; Autism; autism spectrum disorders
2024
The investigation of functional magnetic resonance imaging (fMRI) data with traditional machine learning (ML) and deep learning (DL) classifiers has been widely used to study autism spectrum disorders (ASDs). This condition is characterized by symptoms that affect the individual’s behavioral aspects and social relationships. Early diagnosis is crucial for intervention, but the complexity of ASD poses challenges for the development of effective treatments. This study compares traditional ML and DL classifiers in the analysis of tabular data, in particular, functional connectivity measures obtained from the time series of a public multicenter dataset, and evaluates whether the features that contribute most to the classification task vary depending on the classifier used. Specifically, Support Vector Machine (SVM) classifiers, with both linear and radial basis function (RBF) kernels, and Extreme Gradient Boosting (XGBoost) classifiers are compared against the TabNet classifier (a DL architecture customized for tabular data analysis) and a Multi Layer Perceptron (MLP). The findings suggest that DL classifiers may not be optimal for the type of data analyzed, as their performance trails behind that of standard classifiers. Among the latter, SVMs outperform the other classifiers with an AUC of around 75%, whereas the best performances of TabNet and MLP reach 65% and 71% at most, respectively. Furthermore, the analysis of the feature importance showed that the brain regions that contribute the most to the classification task are those primarily responsible for sensory and spatial perception, as well as attention modulation, which is known to be altered in ASDs.
Journal Article
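The classifier comparison described in the abstract above can be outlined with scikit-learn. The sketch below is a generic stand-in, not the study's pipeline: the data are synthetic placeholders for ABIDE connectivity features, the XGBoost and TabNet models are omitted, and the AUC values it prints bear no relation to the reported figures.

```python
# Generic sketch of the classifier comparison described above (not the study's code).
# X stands in for flattened functional-connectivity features, y for ASD/TD labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=200, n_informative=20,
                           random_state=0)   # synthetic stand-in data

models = {
    "SVM (linear)": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "SVM (RBF)":    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "MLP":          make_pipeline(StandardScaler(),
                                  MLPClassifier(hidden_layer_sizes=(64, 32),
                                                max_iter=1000, random_state=0)),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.2f}")
```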
Molecular simulations of SSTR2 dynamics and interaction with ligands
by Golosio, Bruno; Guccione, Camilla; Bosin, Andrea
in 631/114/2397; 631/535/1267; 639/638/309/2420
2023
The cyclic peptide hormone somatostatin regulates physiological processes involved in growth and metabolism, through its binding to G-protein coupled somatostatin receptors. The isoform 2 (SSTR2) is of particular relevance for the therapy of neuroendocrine tumours for which different analogues to somatostatin are currently in clinical use. We present an extensive and systematic computational study on the dynamics of SSTR2 in three different states: active agonist-bound, inactive antagonist-bound and apo inactive. We exploited the recent burst of SSTR2 experimental structures to perform μs-long multi-copy molecular dynamics simulations to sample conformational changes of the receptor and rationalize its binding to different ligands (the agonists somatostatin and octreotide, and the antagonist CYN154806). Our findings suggest that the apo form is more flexible compared to the holo ones, and confirm that the extracellular loop 2 closes upon the agonist octreotide but not upon the antagonist CYN154806. Based on interaction fingerprint analyses and free energy calculations, we found that all peptides similarly interact with residues buried into the binding pocket. Conversely, specific patterns of interactions are found with residues located in the external portion of the pocket, at the basis of the extracellular loops, particularly distinguishing the agonists from the antagonist. This study will help in the design of new somatostatin-based compounds for theranostics of neuroendocrine tumours.
Journal Article
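One way the apo-versus-holo flexibility comparison mentioned above could be quantified is per-residue C-alpha RMSF over an aligned trajectory. The sketch below uses MDAnalysis (≥2.0) with hypothetical topology and trajectory file names; the paper's interaction-fingerprint and free-energy analyses are not reproduced here.

```python
# Hedged sketch: quantify apo vs. holo receptor flexibility via per-residue
# C-alpha RMSF with MDAnalysis. File names are hypothetical placeholders.
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

def ca_rmsf(topology, trajectory):
    u = mda.Universe(topology, trajectory)
    # Align all frames on C-alpha atoms before computing fluctuations
    align.AlignTraj(u, u, select="protein and name CA", in_memory=True).run()
    calphas = u.select_atoms("protein and name CA")
    return rms.RMSF(calphas).run().results.rmsf   # one value per C-alpha (Å)

apo_rmsf  = ca_rmsf("sstr2_apo.pdb",  "sstr2_apo.xtc")    # hypothetical files
holo_rmsf = ca_rmsf("sstr2_holo.pdb", "sstr2_holo.xtc")
print("mean RMSF  apo: %.2f Å   holo: %.2f Å" % (apo_rmsf.mean(), holo_rmsf.mean()))
```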
Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep
by Golosio, Bruno; Tiddia, Gianmarco; Paolucci, Pier Stanislao
in Architecture; Biology and Life Sciences; Brain
2021
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously-created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable with biological measures, we demonstrate the model's capability of fast incremental learning from few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories.
Journal Article
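For readers unfamiliar with the soft winner-take-all circuit named in the abstract above, here is a generic rate-based sketch of the idea, not the paper's spiking implementation: excitatory populations compete through shared inhibition, so the most strongly driven population dominates without completely silencing the others. All constants are illustrative.

```python
# Generic rate-based soft winner-take-all (WTA) sketch: populations compete
# through global inhibitory feedback. Gain and inhibition values are assumptions.
import numpy as np

def soft_wta(inputs, inhibition=1.0, gain=2.0, steps=50, dt=0.1):
    r = np.zeros_like(inputs, dtype=float)        # population firing rates
    for _ in range(steps):
        shared_inh = inhibition * r.sum()         # shared inhibitory feedback
        drive = gain * inputs - shared_inh
        r += dt * (-r + np.maximum(drive, 0.0))   # rectified rate dynamics
    return r

print(soft_wta(np.array([1.0, 0.8, 0.2])))  # strongest input wins, softly
```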
A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language
by Golosio, Bruno; Masala, Giovanni Luca; Gamotina, Olesya
in Architectural engineering; Architecture; Brain
2015
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different word classes, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at about the level of a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.
Journal Article
Simulations of working memory spiking networks driven by short-term plasticity
by Golosio, Bruno; Tiddia, Gianmarco; Paolucci, Pier Stanislao
in Cognitive ability; Firing pattern; Memory
2022
Working Memory is a cognitive mechanism which enables temporary holding and manipulation of information in the human brain. This mechanism is mainly characterized by a neuronal persistent activity during which neuron populations are able to maintain an enhanced spiking activity after being triggered by a short external cue. In this work we implement, using the NEST simulator, a spiking neural network model in which the working memory activity is sustained by a mechanism of short-term synaptic facilitation related to calcium kinetics. The model, characterized by leaky integrate-and-fire neurons with exponential postsynaptic currents, is able to autonomously show an activity regime in which the memory information can be stored in a synaptic form as a result of synaptic facilitation, with spiking activity functional to facilitation maintenance. The network is able to simultaneously keep multiple memories by showing an alternated synchronous activity which preserves the synaptic facilitation within the neuron populations holding memory information. The results shown in this work confirm that a working memory mechanism can be sustained by synaptic facilitation.
Journal Article
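A minimal PyNEST sketch (NEST 3.x syntax) of the ingredients named in the abstract above is given below: leaky integrate-and-fire neurons with exponential postsynaptic currents, coupled by Tsodyks-Markram facilitating synapses and driven by a brief external cue. This is not the paper's code; population size, rates, and synaptic parameters are illustrative assumptions.

```python
# Hedged sketch (NEST 3.x PyNEST, not the paper's model): LIF neurons with
# exponential PSCs, recurrently coupled through facilitating synapses.
import nest

nest.ResetKernel()
pop = nest.Create("iaf_psc_exp", 100)

# Recurrent connections with short-term facilitation (Tsodyks-Markram)
nest.CopyModel("tsodyks_synapse", "facilitating",
               {"U": 0.2, "tau_fac": 1500.0, "tau_rec": 200.0, "weight": 30.0})
nest.Connect(pop, pop,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"synapse_model": "facilitating"})

# Brief external cue followed by weaker background drive
cue = nest.Create("poisson_generator", params={"rate": 2000.0, "stop": 200.0})
background = nest.Create("poisson_generator", params={"rate": 500.0})
nest.Connect(cue, pop, syn_spec={"weight": 50.0})
nest.Connect(background, pop, syn_spec={"weight": 20.0})

rec = nest.Create("spike_recorder")
nest.Connect(pop, rec)

nest.Simulate(1000.0)
print("spikes recorded:", nest.GetStatus(rec, "n_events")[0])
```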
Effect of data harmonization of multicentric dataset in ASD/TD classification
2023
Machine Learning (ML) is nowadays an essential tool in the analysis of Magnetic Resonance Imaging (MRI) data, in particular in the identification of brain correlates in neurological and neurodevelopmental disorders. ML requires datasets of appropriate size for training, which in neuroimaging are typically obtained by collecting data from multiple acquisition centers. However, analyzing large multicentric datasets can introduce bias due to differences between acquisition centers. ComBat harmonization is commonly used to address batch effects, but it can lead to data leakage when the entire dataset is used to estimate model parameters. In this study, structural and functional MRI data from the Autism Brain Imaging Data Exchange (ABIDE) collection were used to classify subjects with Autism Spectrum Disorders (ASD) against Typically Developing controls (TD). We compared the classical approach (external harmonization), in which harmonization is performed before the train/test split, with a harmonization calculated only on the train set (internal harmonization), and with the dataset with no harmonization. The results showed that harmonization using the whole dataset achieved higher discrimination performance, while non-harmonized data and harmonization using only the train set showed similar results, for both structural and connectivity features. We also showed that the higher performance of external harmonization is not due to the larger sample size available for estimating the model, and hence the improved performance with the entire dataset may be ascribed to data leakage. In order to prevent this leakage, it is recommended to define the harmonization model solely using the train set.
Journal Article
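The train-only fitting discipline recommended in the abstract above can be illustrated with scikit-learn, using StandardScaler as a simple stand-in for ComBat-style harmonization (the statistical model differs, but the leakage issue is the same). The data are synthetic placeholders.

```python
# Sketch of the leakage pattern described above: parameters of the harmonizing
# transform must be estimated on the train set only. StandardScaler stands in
# for ComBat; X and y are synthetic placeholders for MRI features and labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))           # stand-in for MRI-derived features
y = rng.integers(0, 2, size=300)         # stand-in for ASD/TD labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "External" harmonization (leaky): parameters estimated on the whole dataset
leaky = StandardScaler().fit(X)
clf_leaky = SVC().fit(leaky.transform(X_train), y_train)

# "Internal" harmonization (leakage-free): parameters estimated on the train set only
internal = StandardScaler().fit(X_train)
clf_ok = SVC().fit(internal.transform(X_train), y_train)

print("leaky   :", clf_leaky.score(leaky.transform(X_test), y_test))
print("internal:", clf_ok.score(internal.transform(X_test), y_test))
```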
Runtime Construction of Large-Scale Spiking Neuronal Network Models on GPU Devices
by Golosio, Bruno; Tiddia, Gianmarco; Paolucci, Pier Stanislao
in Algorithms; Analysis; C plus plus
2023
Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky-integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable to or shorter than those obtained with other state-of-the-art simulation technologies while still meeting the flexibility demands of explorative network modeling.
Journal Article
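The high-level connection rules referred to above can be sketched with the standard PyNEST interface (NEST 3.x); the exact NEST GPU calls may differ, and the population sizes, weights, and rule parameters below are illustrative assumptions.

```python
# Hedged sketch of common high-level connection rules, written with PyNEST
# (NEST 3.x) rather than NEST GPU itself; all parameter values are assumptions.
import nest

nest.ResetKernel()
exc = nest.Create("iaf_psc_exp", 800)
inh = nest.Create("iaf_psc_exp", 200)

# Each target neuron receives a fixed number of incoming connections
nest.Connect(exc, inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"weight": 20.0, "delay": 1.5})

# Each source-target pair is connected independently with probability p
nest.Connect(inh, exc,
             conn_spec={"rule": "pairwise_bernoulli", "p": 0.1},
             syn_spec={"weight": -80.0, "delay": 1.5})

# A fixed total number of connections, drawn at random
nest.Connect(exc, exc,
             conn_spec={"rule": "fixed_total_number", "N": 50000},
             syn_spec={"weight": 20.0, "delay": 1.5})

print("connections created:", nest.GetKernelStatus("num_connections"))
```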
Fast Simulation of a Multi-Area Spiking Network Model of Macaque Cortex on an MPI-GPU Cluster
by Golosio, Bruno; Tiddia, Gianmarco; Pronold, Jari
in Algorithms; Biological activity; Brain research
2022
Spiking neural network models are increasingly establishing themselves as an effective tool for simulating the dynamics of neuronal populations and for understanding the relationship between these dynamics and brain function. Furthermore, the continuous development of parallel computing technologies and the growing availability of computational resources are leading to an era of large-scale simulations capable of describing regions of the brain of ever larger dimensions at increasing detail. Recently, the possibility to use MPI-based parallel codes on GPU-equipped clusters to run such complex simulations has emerged, opening up novel paths to further speed-ups. NEST GPU is a GPU library written in CUDA-C/C++ for large-scale simulations of spiking neural networks, which was recently extended with a novel algorithm for remote spike communication through MPI on a GPU cluster. In this work we evaluate its performance on the simulation of a multi-area model of macaque vision-related cortex, made up of about 4 million neurons and 24 billion synapses and representing 32 mm^2 surface area of the macaque cortex. The outcome of the simulations is compared against that obtained using the well-known CPU-based spiking neural network simulator NEST on a high-performance computing cluster. The results show not only an optimal match with the NEST statistical measures of the neural activity in terms of three informative distributions, but also remarkable achievements in terms of simulation time per second of biological activity. Indeed, NEST GPU was able to simulate a second of biological time of the full-scale macaque cortex model in its metastable state 3.1 times faster than NEST using 32 compute nodes equipped with an NVIDIA V100 GPU each. Using the same configuration, the ground state of the full-scale macaque cortex model was simulated 2.4 times faster than NEST.
Journal Article
Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
by Golosio, Bruno; Paolucci, Pier Stanislao; Capone, Cristiano
in 631/378/116/1925; 631/378/116/2396; 631/378/1385/519
2019
The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model which is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent-plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated to instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
Journal Article
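As a reference for the plasticity mechanism discussed in the abstract above, a generic pair-based STDP update is sketched below; the amplitudes and time constants are illustrative assumptions rather than the model's values.

```python
# Generic pair-based STDP rule: pre-before-post spike pairs potentiate, the
# reverse order depresses. Amplitudes and time constants are assumptions.
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair; delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:                                   # pre before post: potentiation
        return a_plus * math.exp(-delta_t / tau_plus)
    return -a_minus * math.exp(delta_t / tau_minus)    # post before pre: depression

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"delta_t = {dt:+.0f} ms -> dw = {stdp_dw(dt):+.4f}")
```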