664 result(s) for "Bayesian model reduction"
Bayesian model reduction and empirical Bayes for group (DCM) studies
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.
• We describe a novel scheme for inverting non-linear models (e.g. DCMs) within subjects and linear models at the group level
• We demonstrate this scheme is more robust to violations of the (commonly used) Laplace assumption than the standard approach
• We validate the approach using a simulated mismatch negativity study of schizophrenia
• We demonstrate the application of this scheme to classification and prediction of group membership
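For readers who want the mechanics, Bayesian model reduction scores a reduced model analytically from the posterior and prior of a full model, and under Gaussian (Laplace) assumptions the change in free energy has a closed form. The following is a minimal numpy sketch of that identity, not the SPM reference implementation (spm_log_evidence.m); the means, covariances, and the pruned parameter are invented for illustration.

```python
import numpy as np

def bmr_delta_f(mu, Sigma, eta, C, eta_r, C_r):
    """Change in log evidence (free energy) when the full prior N(eta, C)
    is replaced by a reduced prior N(eta_r, C_r), given the full-model
    posterior N(mu, Sigma). Standard Gaussian BMR identity."""
    Q, P, P_r = np.linalg.inv(Sigma), np.linalg.inv(C), np.linalg.inv(C_r)
    Q_r = Q + P_r - P                                   # reduced posterior precision
    mu_r = np.linalg.solve(Q_r, Q @ mu + P_r @ eta_r - P @ eta)
    logdet = lambda A: np.linalg.slogdet(A)[1]
    return 0.5 * (logdet(Q) + logdet(P_r) - logdet(P) - logdet(Q_r)
                  + mu_r @ Q_r @ mu_r - mu @ Q @ mu
                  - eta_r @ P_r @ eta_r + eta @ P @ eta)

# Example: prune the second of two parameters by shrinking its prior variance.
mu, Sigma = np.array([0.8, 0.1]), np.diag([0.05, 0.05])   # full posterior
eta, C = np.zeros(2), np.eye(2)                           # full prior
eta_r, C_r = np.zeros(2), np.diag([1.0, 1e-6])            # reduced prior
print(bmr_delta_f(mu, Sigma, eta, C, eta_r, C_r))         # positive: pruning favoured
```

A positive value means the data support the simpler model; repeating this over many reduced priors is what lets large model spaces be scored in seconds rather than refitted one model at a time.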
An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case
Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning – and specifically state-space expansion and reduction – within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) ‘slots’ that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning – associated with these slots – can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model’s ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of ‘one-shot’ generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.
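In discrete-state active inference models, learning is usually expressed as accumulating Dirichlet concentration counts, and Bayesian model reduction again has a closed form, here as a ratio of multivariate beta functions. The sketch below illustrates how a hypothetical concept 'slot' could be reset in favour of a simpler model with higher evidence; the counts and the reduced prior are invented for illustration, and this is not the paper's code.

```python
import numpy as np
from scipy.special import gammaln

def log_beta(a):
    """Log multivariate beta function of a Dirichlet parameter vector."""
    return gammaln(a).sum() - gammaln(a.sum())

def dirichlet_bmr(q_a, p_a, p_a_r):
    """Change in log evidence when the full Dirichlet prior p_a is replaced
    by the reduced prior p_a_r, given posterior counts q_a."""
    return (log_beta(q_a + p_a_r - p_a) + log_beta(p_a)
            - log_beta(q_a) - log_beta(p_a_r))

p_a   = np.array([1.0, 1.0, 1.0])   # flat prior over three outcomes for one slot
q_a   = np.array([9.0, 1.5, 1.5])   # counts accumulated during learning
p_a_r = np.array([8.0, 1.0, 1.0])   # reduced prior committing to outcome 1
print(dirichlet_bmr(q_a, p_a, p_a_r))   # positive: accept the simpler model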
Adiabatic dynamic causal modelling
This technical note introduces adiabatic dynamic causal modelling, a method for inferring slow changes in biophysical parameters that control fluctuations of fast neuronal states. The application domain we have in mind is inferring slow changes in variables (e.g., extracellular ion concentrations or synaptic efficacy) that underlie phase transitions in brain activity (e.g., paroxysmal seizure activity). The scheme is efficient and yet retains a biophysical interpretation, in virtue of being based on established neural mass models that are equipped with a slow dynamic on the parameters (such as synaptic rate constants or effective connectivity). In brief, we use an adiabatic approximation to summarise fast fluctuations in hidden neuronal states (and their expression in sensors) in terms of their second order statistics; namely, their complex cross spectra. This allows one to specify and compare models of slowly changing parameters (using Bayesian model reduction) that generate a sequence of empirical cross spectra of electrophysiological recordings. Crucially, we use the slow fluctuations in the spectral power of neuronal activity as empirical priors on changes in synaptic parameters. This introduces a circular causality, in which synaptic parameters underwrite fast neuronal activity that, in turn, induces activity-dependent plasticity in synaptic parameters. In this foundational paper, we describe the underlying model, establish its face validity using simulations and provide an illustrative application to a chemoconvulsant animal model of seizure activity.
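The pivotal data feature here is the complex cross-spectrum, computed over successive windows so that its slow drift can be modelled. As a deliberately simple stand-in for that summary step, the sketch below estimates windowed cross-spectra of two synthetic channels with scipy.signal.csd; the sampling rate, window length, and frequency band are arbitrary assumptions, and this is not the adiabatic DCM scheme itself.

```python
import numpy as np
from scipy.signal import csd

fs = 256                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)              # one minute of two-channel data
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
y = np.roll(x, 5) + 0.5 * rng.normal(size=t.size)   # delayed, noisy copy

# Summarise fast fluctuations window-by-window as complex cross-spectra;
# slow changes across this sequence are what the adiabatic scheme models.
win = 10 * fs                             # 10-second windows
for k in range(0, t.size - win + 1, win):
    f, Pxy = csd(x[k:k + win], y[k:k + win], fs=fs, nperseg=fs)
    band = (f >= 8) & (f <= 12)
    print(f"window {k // win}: mean alpha-band |CSD| = {np.abs(Pxy[band]).mean():.3f}")
```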
Structure learning in coupled dynamical systems and dynamic causal modelling
Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of network architectures—and implicit coupling functions—in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling. We focus on a relatively new approach that is proving remarkably useful, namely Bayesian model reduction, which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems. This article is part of the theme issue ‘Coupling functions: dynamical interaction mechanisms in the physical, biological and social sciences’.
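Because Bayesian model reduction scores a reduced model without refitting, comparing network architectures amounts to scoring binary masks over the coupling parameters. The sketch below reuses the bmr_delta_f helper from the Gaussian BMR sketch further up (it is not self-contained on its own) and switches excluded connections off with a near-zero prior variance; all numbers are invented.

```python
import numpy as np
from itertools import product

# Assumes bmr_delta_f from the Gaussian BMR sketch above is in scope.
mu = np.array([0.9, 0.02, -0.6])      # posterior means of three couplings
Sigma = np.diag([0.04, 0.04, 0.04])   # posterior covariance (full model)
eta, C = np.zeros(3), np.eye(3)       # full shrinkage priors

scores = {}
for mask in product([0, 1], repeat=3):            # 1 = connection present
    C_r = np.diag([1.0 if m else 1e-8 for m in mask])
    scores[mask] = bmr_delta_f(mu, Sigma, eta, C, np.zeros(3), C_r)

best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # (1, 0, 1): the weak middle coupling is pruned
```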
Neurochemistry-enriched dynamic causal models of magnetoencephalography, using magnetic resonance spectroscopy
• We developed a neurochemistry-enriched dynamic causal model of resting-state MEG data using MRS measurements of neurotransmitter concentrations.
• Neurotransmitter levels place prior constraints on synaptic connectivity in the biologically informed model of MEG data.
• Comparing the evidence of group DCMs using a Bayesian model selection procedure reveals the synaptic connections that are likely to be informed by MRS data.
We present a hierarchical empirical Bayesian framework for testing hypotheses about neurotransmitter concentrations as empirical priors for synaptic physiology, using ultra-high field magnetic resonance spectroscopy (7T-MRS) and magnetoencephalography (MEG) data. A first-level dynamic causal model of cortical microcircuits is used to infer the connectivity parameters of a generative model of each individual's neurophysiological observations. At the second level, individuals' 7T-MRS estimates of regional neurotransmitter concentration supply empirical priors on synaptic connectivity. We compare the group-wise evidence for alternative empirical priors, defined by monotonic functions of spectroscopic estimates, on subsets of synaptic connections. For efficiency and reproducibility, we used Bayesian model reduction (BMR), parametric empirical Bayes and variational Bayesian inversion. In particular, we used Bayesian model reduction to compare the evidence for alternative models of how spectroscopic neurotransmitter measures inform estimates of synaptic connectivity. This identifies the subset of synaptic connections that are influenced by individual differences in neurotransmitter levels, as measured by 7T-MRS. We demonstrate the method using resting-state MEG (i.e., task-free recording) and 7T-MRS data from healthy adults. Our results confirm the hypotheses that GABA concentration influences local recurrent inhibitory intrinsic connectivity in deep and superficial cortical layers, while glutamate influences the excitatory connections between superficial and deep layers and the connections from superficial to inhibitory interneurons. Using within-subject split-sampling of the MEG dataset (i.e., validation by means of a held-out dataset), we show that model comparison for hypothesis testing can be highly reliable. The method is suitable for applications with magnetoencephalography or electroencephalography, and is well suited to revealing the mechanisms of neurological and psychiatric disorders, including responses to psychopharmacological interventions.
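The second level of such a hierarchical model is, in essence, a between-subject design matrix over first-level connectivity estimates. The toy below shows only that design logic, with ordinary least squares on simulated numbers; the actual framework uses parametric empirical Bayes (PEB) with proper priors and precisions, and every value here is made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                     # subjects
gaba = rng.normal(0.0, 1.0, n)             # standardised MRS GABA estimates
# Simulated first-level estimates of one inhibitory connection per subject:
theta = -0.5 + 0.3 * gaba + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), gaba])    # group mean + GABA regressor
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)
print(f"group mean = {beta[0]:.2f}, GABA effect = {beta[1]:.2f}")
```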
Dynamics of Oddball Sound Processing: Trial-by-Trial Modeling of ECoG Signals
Recent computational models of perception conceptualize auditory oddball responses as signatures of a (Bayesian) learning process, in line with the influential view of the mismatch negativity (MMN) as a prediction error signal. Novel MMN experimental paradigms have put an emphasis on the neurophysiological effects of manipulating regularity and predictability in sound sequences. This raises the question of the contextual adaptation of the learning process itself, which on the computational side speaks to the mechanisms of gain-modulated (or precision-weighted) prediction error. In this study using electrocorticographic (ECoG) signals, we manipulated the predictability of oddball sound sequences with two objectives. The first was to uncover the computational process underlying trial-by-trial variations of the cortical responses: these fluctuations, generally ignored by approaches based on averaged evoked responses, should reflect the learning involved. We used a Generalized Linear Model (GLM) and Bayesian Model Reduction (BMR) to assess the respective contributions of experimental manipulations and learning mechanisms under probabilistic assumptions. The second was to validate and expand on previous findings, obtained with simultaneous EEG-MEG recordings, regarding the effect of changes in predictability. Our trial-by-trial analysis revealed only a few stimulus-responsive sensors, but the measured effects appear to be consistent across subjects in both time and space. In time, they occur at the typical latency of the MMN (between 100 and 250 ms post stimulus). In space, we found a dissociation between time-independent effects in more anterior temporal locations and time-dependent (learning) effects in more posterior locations. However, we could not observe any clear and reliable effect of our predictability manipulation on the above learning process. Overall, these findings clearly demonstrate the potential of trial-by-trial modeling to unravel perceptual learning processes and their neurophysiological counterparts.
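The core move is to regress single-trial response amplitudes on quantities derived from a learning model and compare that regressor against a static alternative. The sketch below does this with a Bernoulli ideal observer and a BIC comparison on simulated data; it is a schematic stand-in for the paper's GLM-plus-BMR pipeline, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
seq = (rng.random(400) < 0.15).astype(int)    # oddball sequence (1 = deviant)

# Ideal observer: running posterior-mean deviant probability under a flat prior.
k = np.cumsum(seq) - seq                      # deviants seen before each trial
n = np.arange(seq.size)                       # trials seen before each trial
p = (k + 1) / (n + 2)
surprise = -np.log(np.where(seq == 1, p, 1 - p))

amp = 2.0 * surprise + rng.normal(0, 1, seq.size)   # simulated single-trial amplitude

def bic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return y.size * np.log(rss / y.size) + X.shape[1] * np.log(y.size)

X_static = np.column_stack([np.ones(seq.size), seq])       # deviant vs standard only
X_learn = np.column_stack([np.ones(seq.size), surprise])   # trial-by-trial learning
print("BIC static:", bic(X_static, amp), " BIC learning:", bic(X_learn, amp))
```

A lower BIC for the learning regressor indicates that trial-by-trial fluctuations carry information beyond the average deviant response.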
A new causal centrality measure reveals the prominent role of subcortical structures in the causal architecture of the extended default mode network
Network representation has been an incredibly useful concept for understanding the behavior of complex systems in social sciences, biology, neuroscience, and beyond. Network science is mathematically founded on graph theory, where nodal importance is gauged using measures of centrality. Notably, recent work suggests that the topological centrality of a node should not be over-interpreted as its dynamical or causal importance in the network. Hence, identifying the influential nodes in dynamic causal models (DCM) remains an open question. This paper introduces causal centrality for DCM, a dynamics-sensitive and causally-founded centrality measure based on the notion of intervention in graphical models. Operationally, this measure simplifies to an identifiable expression using Bayesian model reduction. As a proof of concept, the average DCM of the extended default mode network (eDMN) was computed in 74 healthy subjects. Next, causal centralities of different regions were computed for this causal graph, and compared against several graph-theoretical centralities. The results showed that the subcortical structures of the eDMN were more causally central than the cortical regions, even though the graph-theoretical centralities unanimously favored the latter. Importantly, model comparison revealed that only the pattern of causal centrality was causally relevant. These results are consistent with the crucial role of the subcortical structures in the neuromodulatory systems of the brain, and highlight their contribution to the organization of large-scale networks. Potential applications of causal centrality—to study causal models of other neurotypical and pathological functional networks—are discussed, and some future lines of research are outlined.
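Causal centrality itself requires a fitted DCM, so it is not reproduced here, but the graph-theoretic baselines it is compared against are standard and easy to compute. A small networkx illustration on a hypothetical effective-connectivity graph (node names and weights are invented; note that betweenness treats weights as distances):

```python
import networkx as nx

# Hypothetical weighted, directed effective-connectivity graph.
edges = [("PCC", "mPFC", 0.6), ("mPFC", "PCC", 0.4),
         ("Thal", "PCC", 0.9), ("Thal", "mPFC", 0.8),
         ("BF", "Thal", 0.7), ("PCC", "AG", 0.5)]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

print("degree     :", nx.degree_centrality(G))
print("betweenness:", nx.betweenness_centrality(G, weight="weight"))
```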
Empirical Bayes for Group (DCM) Studies: A Reproducibility Study
This technical note addresses some key reproducibility issues in the dynamic causal modelling of group studies of event-related potentials. Specifically, we address the reproducibility of Bayesian model comparison (and inferences about model parameters) from three important perspectives, namely: (i) reproducibility with independent data (obtained by averaging over odd and even trials); (ii) reproducibility over formally distinct models (namely, classic ERP and canonical microcircuit or CMC models); and (iii) reproducibility over inversion schemes (inversion of the grand average versus estimation of group effects using empirical Bayes). Our hope was to illustrate the degree of reproducibility one can expect from DCM when analysing different data, under different models, and with different analyses.
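The first perspective, independent data from odd and even trials, is simple to reproduce in any analysis stack. A minimal sketch of the split-half step on simulated trials (the data and the correlation criterion are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = rng.normal(0, 1, (200, 128))      # trials x time, simulated epochs

erp_odd = trials[1::2].mean(axis=0)        # average over odd-numbered trials
erp_even = trials[0::2].mean(axis=0)       # average over even-numbered trials
r = np.corrcoef(erp_odd, erp_even)[0, 1]   # split-half reproducibility
print(f"odd/even ERP correlation: r = {r:.2f}")
```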
Dimension reduction and alleviation of confounding for spatial generalized linear mixed models
Non-Gaussian spatial data are very common in many disciplines. For instance, count data are common in disease mapping, and binary data are common in ecology. When fitting spatial regressions for such data, one needs to account for dependence to ensure reliable inference for the regression coefficients. The spatial generalized linear mixed model offers a very popular and flexible approach to modelling such data, but this model suffers from two major shortcomings: variance inflation due to spatial confounding and high-dimensional spatial random effects that make fully Bayesian inference for such models computationally challenging. We propose a new parameterization of the spatial generalized linear mixed model that alleviates spatial confounding and speeds computation by greatly reducing the dimension of the spatial random effects. We illustrate the application of our approach to simulated binary, count and Gaussian spatial data sets, and to a large infant mortality data set.
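A well-known construction along these lines (associated with Hughes and Haran's reparameterization) restricts the spatial random effects to a low-dimensional basis built from the Moran operator on the orthogonal complement of the fixed effects, addressing both shortcomings at once. Below is a minimal numpy sketch of that basis construction; the graph, dimensions, and tolerance are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 100, 10                                 # sites, reduced basis size
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # fixed-effect design
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric site adjacency

# Moran operator restricted to the orthogonal complement of the fixed effects.
P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
M_op = P_perp @ A @ P_perp
eigval, eigvec = np.linalg.eigh(M_op)
M = eigvec[:, np.argsort(eigval)[::-1][:q]]    # q leading eigenvectors

# Reduced model: g(E[y]) = X beta + M delta, with dim(delta) = q << n,
# and M orthogonal to X by construction (no spatial confounding of beta).
print(M.shape, np.allclose(X.T @ M, 0, atol=1e-8))
```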
Forecasting Groundwater Levels using a Hybrid of Support Vector Regression and Particle Swarm Optimization
Forecasting the groundwater level is crucial to managing water resources supply sustainably. In this study, a simulation–optimization hybrid model was developed to forecast groundwater levels in aquifers. The model uses the PSO (Particle Swarm Optimization) algorithm to optimize SVR (Support Vector Regression) parameters to predict groundwater levels. The groundwater level of the Zanjan aquifer in Iran was forecasted and compared to the results of Bayesian and SVR models. In the first approach, the aquifer's hydrograph was extracted using the Thiessen method, and then the time series of the hydrograph was used in training and testing the model. In the second approach, the time series data from each well were trained and tested separately. In other words, for 35 observation wells, 35 predictions were made. The aquifer's hydrograph was then evaluated using the forecasted groundwater levels in the wells. The results showed that the SVR-PSO hybrid model performed better than the other models in terms of Root Mean Square Error (RMSE) and coefficient of determination (R²) in both approaches. In the first approach, the SVR-PSO hybrid model forecasted the groundwater level in the next month with a training RMSE of 0.118 m and a testing RMSE of 0.221 m. In the second approach, using the SVR-PSO hybrid model, the RMSE was reduced in 88.57% of the wells compared to the other models, and more reliable results were achieved. Based on this performance, the SVR-PSO hybrid model can be used as a tool for decision support and the management of similar aquifers.
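The hybrid's structure is generic: SVR supplies the forecaster and PSO searches its hyperparameter space against a validation error. The sketch below wires scikit-learn's SVR to a small hand-rolled PSO over (C, epsilon, gamma) on a synthetic monthly series; the series, bounds, and swarm settings are invented, and this is not the study's code or data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
months = np.arange(240)                    # 20 years of synthetic monthly levels
level = (30 - 0.01 * months + np.sin(2 * np.pi * months / 12)
         + 0.2 * rng.normal(size=months.size))

lags = 12                                  # last 12 months predict the next one
X = np.column_stack([level[i:i - lags] for i in range(lags)])
y = level[lags:]
X_tr, y_tr, X_te, y_te = X[:180], y[:180], X[180:], y[180:]

def rmse(params):
    C, eps, gamma = params
    model = SVR(C=C, epsilon=eps, gamma=gamma).fit(X_tr, y_tr)
    return np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))

# Minimal particle swarm over (C, epsilon, gamma).
lo, hi = np.array([0.1, 1e-3, 1e-3]), np.array([100.0, 0.5, 1.0])
pos = rng.uniform(lo, hi, (15, 3))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rmse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(20):
    r1, r2 = rng.random((15, 3)), rng.random((15, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([rmse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("best (C, epsilon, gamma):", gbest, "validation RMSE:", pbest_f.min())
```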