23,041 result(s) for "neuronal networks dynamics"
Dynamic Neural Network Modelling of Soil Moisture Content for Predictive Irrigation Scheduling
Sustainable freshwater management is underpinned by technologies which improve the efficiency of agricultural irrigation systems. Irrigation scheduling has the potential to incorporate real-time feedback from soil moisture and climatic sensors. However, for robust closed-loop decision support, models of the soil moisture dynamics are essential in order to predict crop water needs while adapting to external perturbations and disturbances. This paper presents a Dynamic Neural Network approach for modelling temporal soil moisture fluxes. The models are trained to generate a one-day-ahead prediction of the volumetric soil moisture content based on past soil moisture, precipitation, and climatic measurements. Using field data from three sites, an R² value above 0.94 was obtained during model evaluation at all sites. The models were also able to generate robust soil moisture predictions for independent sites which were not used in training. The application of the Dynamic Neural Network models in a predictive irrigation scheduling system was demonstrated using AQUACROP simulations of the potato-growing season. The predictive scheduling system was evaluated against a rule-based system that applies irrigation based on predefined thresholds. Results indicate that the predictive system achieves water savings of between 20 and 46% while realising a yield and water-use efficiency similar to those of the rule-based system.
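The one-day-ahead framing described in the abstract can be illustrated with a short sketch. This is not the authors' model: the synthetic moisture series, the lag length, and the linear least-squares fit (standing in for the trained network) are all illustrative assumptions.

```python
import numpy as np

def make_lagged_dataset(moisture, rain, lags=3):
    """Frame one-day-ahead prediction as supervised learning: inputs are
    the previous `lags` days of moisture and rainfall, the target is the
    next day's volumetric moisture content."""
    X, y = [], []
    for t in range(lags, len(moisture)):
        X.append(np.concatenate([moisture[t - lags:t], rain[t - lags:t]]))
        y.append(moisture[t])
    return np.array(X), np.array(y)

# Synthetic field data: moisture decays daily and jumps after rainfall.
rng = np.random.default_rng(0)
rain = rng.binomial(1, 0.3, size=200) * rng.uniform(0, 5, size=200)
moisture = np.zeros(200)
moisture[0] = 30.0
for t in range(1, 200):
    moisture[t] = 0.95 * moisture[t - 1] + 2.0 * rain[t - 1]

X, y = make_lagged_dataset(moisture, rain)
A = np.c_[X, np.ones(len(X))]           # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w                            # one-day-ahead predictions
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 3))
```

Because the toy dynamics are exactly linear in the lagged inputs, the fit recovers them almost perfectly; real soil data would need the nonlinear network the paper describes.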
Nonnegative matrix factorization for analyzing state dependent neuronal network dynamics in calcium recordings
Calcium imaging allows recording from hundreds of neurons in vivo with the ability to resolve single-cell activity. Evaluating and analyzing neuronal responses, while considering all dimensions of the data set to draw specific conclusions, is extremely difficult. Often, descriptive statistics are used to analyze these data. These analyses, however, remove variance by averaging the responses of single neurons across recording sessions, or across combinations of neurons, to create single quantitative metrics, losing the temporal dynamics of neuronal activity and the neurons' responses relative to each other. Dimensionality Reduction (DR) methods serve as a good foundation for these analyses because they reduce the dimensions of the data into components while still maintaining the variance. Nonnegative Matrix Factorization (NMF) is an especially promising DR method for analyzing activity recorded with calcium imaging because of its mathematical constraints, which include positivity and linearity. We adapt NMF for our analyses and compare its performance to alternative dimensionality reduction methods on both artificial and in vivo data. We find that NMF is well suited for analyzing calcium imaging recordings, accurately capturing the underlying dynamics of the data and outperforming alternative methods in common use.
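The positivity constraint that makes NMF attractive here can be seen in a minimal sketch. This uses the standard Lee-Seung multiplicative updates on synthetic data, not the paper's adapted method or calcium recordings:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Factor a nonnegative matrix V (neurons x time) into W (spatial
    modules) and H (temporal activations) using Lee-Seung multiplicative
    updates, which preserve nonnegativity at every step."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Synthetic "recording" built from two ground-truth assemblies.
rng = np.random.default_rng(1)
V = np.abs(rng.random((20, 2))) @ np.abs(rng.random((2, 100)))
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(float(err), 4))
```

On exact rank-2 data the relative reconstruction error drops close to zero, and the columns of W can be read directly as (nonnegative) spatial footprints.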
Neuronal circuits overcome imbalance in excitation and inhibition by adjusting connection numbers
The interplay between excitation and inhibition is crucial for neuronal circuitry in the brain. Inhibitory cell fractions in the neocortex and hippocampus are typically maintained at 15 to 30%, which is assumed to be important for stable dynamics. We have systematically studied the role of precisely controlled excitatory/inhibitory (E/I) cellular ratios on network activity using mouse hippocampal cultures. Surprisingly, networks with varying E/I ratios maintain stable bursting dynamics. Interburst intervals remain constant for most ratios, except at the extremes of 0 to 10% and 90 to 100% inhibitory cells. Single-cell recordings and modeling suggest that networks adapt to chronic alterations of E/I composition by balancing E/I connectivity. Gradual blockade of inhibition substantiates the agreement between the model and experiment and defines its limits. Combining measurements of population and single-cell activity with theoretical modeling, we provide a clearer picture of how E/I balance is preserved, and where it fails, in living neuronal networks.
Utilising activity patterns of a complex biophysical network model to optimise intra-striatal deep brain stimulation
A large-scale biophysical network model for the isolated striatal body is developed to optimise potential intrastriatal deep brain stimulation applied to, e.g. obsessive-compulsive disorder. The model is based on modified Hodgkin–Huxley equations with small-world connectivity, while the spatial information about the positions of the neurons is taken from a detailed human atlas. The model produces neuronal spatiotemporal activity patterns segregating healthy from pathological conditions. Three biomarkers were used for the optimisation of stimulation protocols regarding stimulation frequency, amplitude and localisation: the mean activity of the entire network, the frequency spectrum of the entire network (rhythmicity) and a combination of the above two. By minimising the deviation of the aforementioned biomarkers from the normal state, we compute the optimal deep brain stimulation parameters, regarding position, amplitude and frequency. Our results suggest that in the DBS optimisation process, there is a clear trade-off between frequency synchronisation and overall network activity, which has also been observed during in vivo studies.
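The small-world connectivity mentioned in this abstract can be sketched with a Watts-Strogatz construction. This is an illustrative assumption; the paper's atlas-based connectivity and modified Hodgkin–Huxley dynamics are far more detailed:

```python
import numpy as np

def watts_strogatz(n, k, p, seed=0):
    """Small-world adjacency matrix: a ring lattice where each node links
    to its k nearest neighbours, with each edge rewired to a random
    target with probability p (high clustering, short path lengths)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p:               # rewire this edge
                candidates = [m for m in range(n) if m != i and not A[i][m]]
                j = rng.choice(candidates)
            A[i][j] = A[j][i] = 1
    return A

A = watts_strogatz(n=60, k=6, p=0.1)
print(A.sum() // 2)   # number of undirected edges
```

Such a matrix would then serve as the coupling pattern between the model's neurons, with the rewiring probability p interpolating between a regular lattice and a random graph.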
Linear Response of General Observables in Spiking Neuronal Network Models
We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of a weak-amplitude, time-dependent external stimulus on spatio-temporal spike correlations, from the spontaneous statistics (without stimulus), in a general context where the memory in spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to the collective effect of the stimulus, intrinsic neuronal dynamics, and network connectivity on spike train statistics. We illustrate our results with numerical simulations performed on a discrete-time integrate-and-fire model.
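A minimal discrete-time integrate-and-fire network, in the spirit of the model class named in the abstract, looks as follows; the leak, threshold, noise, and coupling values are illustrative, not taken from the paper:

```python
import numpy as np

def simulate(W, T=1000, leak=0.6, theta=1.0, sigma=0.4, seed=0):
    """Discrete-time integrate-and-fire network: potentials leak, integrate
    recurrent input from the previous step's spikes plus Gaussian noise,
    and reset to zero after crossing the firing threshold theta."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    V = np.zeros(n)
    spikes = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        drive = W @ spikes[t - 1] + sigma * rng.standard_normal(n)
        V = leak * V * (1 - spikes[t - 1]) + drive   # reset where spiked
        spikes[t] = V >= theta
    return spikes

rng = np.random.default_rng(1)
n = 50
W = 0.1 * rng.standard_normal((n, n))   # weak random coupling
spikes = simulate(W)
rates = spikes.mean(axis=0)
print(round(float(rates.mean()), 3))
```

The spike raster produced this way is exactly the kind of spatio-temporal spike statistics whose response to a small added stimulus the linear response relation predicts.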
Functional connectivity in in vitro neuronal assemblies
Complex network topologies represent the necessary substrate to support complex brain functions. In this work, we review in vitro neuronal networks coupled to Micro-Electrode Arrays (MEAs) as a biological substrate. Networks of dissociated neurons developing in vitro and coupled to MEAs represent a valid experimental model for studying the mechanisms governing the formation, organization and conservation of neuronal cell assemblies. In this review, we present some examples of the use of statistical measures such as clustering coefficients and small-world indices to infer the topological rules underlying the dynamics exhibited by homogeneous and engineered neuronal networks.
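A clustering coefficient of the kind used to characterise such networks can be computed directly from an adjacency matrix; the graph below is a toy example, not MEA data:

```python
def clustering_coefficient(A):
    """Average local clustering coefficient: for each node, the fraction
    of pairs of its neighbours that are themselves connected."""
    n = len(A)
    total = 0.0
    for i in range(n):
        nbrs = [j for j in range(n) if A[i][j]]
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        links = sum(A[u][v] for u in nbrs for v in nbrs if u < v)
        total += 2 * links / (k * (k - 1))
    return total / n

# A triangle with a pendant node: nodes 0-1-2 fully connected, 3 hangs off 0.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
print(clustering_coefficient(A))
```

In practice the adjacency matrix would first be inferred from MEA spike trains (e.g. by thresholding pairwise correlations), and the resulting coefficient compared against a randomised null model to obtain a small-world index.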
Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neuroscience. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
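The Maximum Entropy principle mentioned here leads, under pairwise constraints, to a Gibbs distribution over spike patterns. A small sketch with illustrative parameters (not fitted to any data set):

```python
import numpy as np
from itertools import product

def gibbs_over_patterns(h, J):
    """Pairwise maximum-entropy (Gibbs) distribution over binary spike
    patterns s in {0,1}^n: P(s) proportional to exp(h.s + s^T J s),
    with J strictly upper-triangular so each pair is counted once."""
    patterns = np.array(list(product([0, 1], repeat=len(h))))
    energy = patterns @ h + np.sum((patterns @ J) * patterns, axis=1)
    p = np.exp(energy)
    return patterns, p / p.sum()

# Three neurons; a positive pairwise coupling makes neurons 0 and 1 co-fire.
h = np.array([-1.0, -1.0, -1.0])
J = np.zeros((3, 3))
J[0, 1] = 2.0
patterns, p = gibbs_over_patterns(h, J)
rates = p @ patterns        # the model's mean firing rate per neuron
print(np.round(rates, 3))
```

Fitting h and J so that these model rates and pairwise correlations match the measured ones is precisely the inference step the formalism makes rigorous; the exhaustive enumeration above is only feasible for small n.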
Wearable Sensor-Based Human Activity Recognition: Performance and Interpretability of Dynamic Neural Networks
Human Activity Recognition (HAR) using wearable sensor data is increasingly important in healthcare, rehabilitation, and smart monitoring. This study systematically compared three dynamic neural network architectures, the Finite Impulse Response Neural Network (FIRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), to examine their suitability and specificity for HAR tasks. A controlled experimental setup was applied, training 16,500 models across different delay lengths and hidden neuron counts. The investigation focused on classification accuracy, computational cost, and model interpretability. LSTM achieved the highest classification accuracy (98.76%), followed by GRU (97.33%) and FIRNN (95.74%), with FIRNN offering the lowest computational complexity. To improve model transparency, Layer-wise Relevance Propagation (LRP) was applied to both input and hidden layers. The results showed that gyroscope Y-axis data was consistently the most informative, while accelerometer Y-axis data was the least informative. LRP analysis also revealed that GRU distributed relevance more broadly across hidden units, while FIRNN relied more heavily on a small subset. These findings highlight the trade-offs between performance, complexity, and interpretability, and provide practical guidance for applying explainable neural networks to wearable sensor-based HAR.
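Of the three architectures compared, the GRU's gating can be shown in a few lines. This untrained forward pass is a sketch of the standard GRU equations, not the study's trained models; the sizes and random weights are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell forward pass showing the update/reset gating
    that distinguishes a GRU from a plain recurrent unit."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_hid)
        self.Wz = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # update gate
        self.Wr = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # reset gate
        self.Wh = rng.uniform(-s, s, (n_hid, n_in + n_hid))  # candidate

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                     # how much to update
        r = sigmoid(self.Wr @ xh)                     # how much past to keep
        cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * cand                 # interpolate states

# Feed a short random "sensor" sequence (3 channels) through the cell.
cell = GRUCell(n_in=3, n_hid=8)
h = np.zeros(8)
for x in np.random.default_rng(1).standard_normal((10, 3)):
    h = cell.step(x, h)
print(h.shape)
```

For HAR, the final hidden state (or the sequence of states) would feed a softmax classifier over activity labels; an LSTM adds a separate cell state and output gate on top of this scheme, while a FIRNN replaces the gating with fixed-length tapped delay lines.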
EAN: Event Adaptive Network for Enhanced Action Recognition
Efficiently modeling spatial–temporal information in videos is crucial for action recognition. To achieve this goal, state-of-the-art methods typically employ the convolution operator and dense interaction modules such as non-local blocks. However, these methods cannot accurately fit the diverse events in videos. On the one hand, the adopted convolutions have fixed scales and thus struggle with events of various scales. On the other hand, the dense interaction modeling paradigm achieves only sub-optimal performance, as action-irrelevant parts bring additional noise into the final prediction. In this paper, we propose a unified action recognition framework to investigate the dynamic nature of video content with the following designs. First, when extracting local cues, we generate spatial–temporal kernels of dynamic scale to adaptively fit the diverse events. Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects via a Transformer, which yields a sparse paradigm. We call the proposed framework the Event Adaptive Network (EAN) because both key designs are adaptive to the input video content. To exploit short-term motions within local segments, we propose a novel and efficient Latent Motion Code module, further improving the performance of the framework. Extensive experiments on several large-scale video datasets, e.g., Something-Something V1 & V2, Kinetics, and Diving48, verify that our models achieve state-of-the-art or competitive performance at low FLOPs. Code is available at: https://github.com/tianyuan168326/EAN-Pytorch.
Architectural richness in deep reservoir computing
Reservoir computing (RC) is a popular class of recurrent neural networks (RNNs) with untrained dynamics. Recently, advances in deep RC architectures have shown great impact in time-series applications, offering a convenient trade-off between predictive performance and required training complexity. In this paper, we go deeper into the analysis of untrained RNNs by studying the quality of the recurrent dynamics developed by the layers of deep RC neural networks. We do so by assessing the richness of the neural representations at the different levels of the architecture, using measures originating from the fields of dynamical systems, numerical analysis and information theory. Our experiments, on both synthetic and real-world datasets, show that depth, as an architectural factor of RNN design, has a natural effect on the quality of RNN dynamics (even without learning of the internal connections). The interplay between depth and the values of the RC scaling hyper-parameters, especially the scaling of inter-layer connections, is crucial for designing rich untrained recurrent neural systems.
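The core RC idea (untrained recurrent dynamics plus a trained linear readout) can be sketched as a single-layer echo state network; the reservoir size, spectral radius, and sine-wave task here are illustrative assumptions, not the paper's deep architecture:

```python
import numpy as np

def reservoir_states(u, n_res=100, rho=0.9, seed=0):
    """Run input sequence u through an untrained reservoir: fixed random
    recurrent weights rescaled to spectral radius rho, tanh units."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    Win = rng.uniform(-0.5, 0.5, n_res)
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + Win * u_t)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained: predict the next sample of a sine wave.
t = np.arange(300)
u = np.sin(0.1 * t)
X = reservoir_states(u[:-1])
y = u[1:]
w, *_ = np.linalg.lstsq(X[50:], y[50:], rcond=None)   # discard washout
mse = float(np.mean((X[50:] @ w - y[50:]) ** 2))
print(round(mse, 6))
```

A deep RC system stacks several such reservoirs, feeding each layer's states to the next; the paper's point is that this stacking, together with the inter-layer scaling, changes the richness of the states collected in X even though no recurrent weight is ever trained.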