MBRL Search Results
744 result(s) for "Surrogate data"
Assessing chance in neuro-vascular interactions
• Surrogate-based framework calibrates inference in autocorrelated neurovascular data.
• Conventional multiple-comparison correction fails with autocorrelated signals.
• Delta-band LFP and a minority of neurons show genuine correlations with brain oxygen.
• Aggregated multi-unit activity fails to predict PO₂ fluctuations under rigorous testing.
• Synchronization of small neuronal subpopulations entrains neurovascular coupling.

Interpreting correlations between neuronal activity and hemodynamic signals is complicated by their inherently strong autocorrelation. Standard parametric tests underestimate false positives, creating the appearance of widespread neurovascular coupling during rest. Here we present a surrogate-based statistical framework designed to calibrate inference in autocorrelated physiological signals. Using simultaneous recordings of cortical oxygen tension (PO₂), single-unit firing, and local field potentials (LFP) in awake rabbits, we applied amplitude-adjusted Fourier surrogates to generate null distributions that preserve temporal structure but remove cross-dependence. This workflow embeds lag optimization, controls for multiple comparisons across windows and units, and scales to population-level inference. Applying the method to 43 experiments, we found that PO₂ correlations with delta-band LFP and a minority of single neurons exceeded chance levels, while correlations with other LFP bands were not significant under surrogate testing. Aggregated activity such as multi-unit signals failed to predict PO₂, but small synchronized subpopulations produced robust associations, highlighting the role of limited synchronization rather than global activity. These findings refine resting-state neurovascular coupling: broad apparent correlations reduce to selective and reproducible effects once calibrated testing is applied.
More broadly, the framework demonstrates how surrogate-based inference prevents misinterpretation of autocorrelated data and offers a generalizable approach for electrophysiology, neuroimaging, and other time-series domains where genuine interactions must be distinguished from random associations.
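The amplitude-adjusted Fourier transform (AAFT) surrogate at the heart of this framework can be sketched in a few lines of NumPy. This is a minimal illustration of the general AAFT recipe, not the authors' code; the function name and seeding are mine.

```python
import numpy as np

def aaft_surrogate(x, seed=None):
    """Amplitude-adjusted Fourier transform (AAFT) surrogate of a 1-D series.

    Preserves the amplitude distribution and (approximately) the power
    spectrum of x while destroying nonlinear and cross-dependence.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    ranks = np.argsort(np.argsort(x))
    # 1) Rank-remap x onto a sorted Gaussian series.
    gauss = np.sort(rng.standard_normal(n))
    y = gauss[ranks]
    # 2) Randomize the Fourier phases of the Gaussianized series.
    fy = np.fft.rfft(y)
    phases = rng.uniform(0.0, 2.0 * np.pi, fy.size)
    phases[0] = 0.0              # keep the mean component
    if n % 2 == 0:
        phases[-1] = 0.0         # keep the Nyquist bin real
    y_pr = np.fft.irfft(np.abs(fy) * np.exp(1j * phases), n)
    # 3) Rank-remap back onto the original amplitude values.
    return np.sort(x)[np.argsort(np.argsort(y_pr))]
```

In the workflow the abstract describes, many such surrogates form a null distribution for the observed neural–PO₂ correlation; an effect counts as genuine only if it exceeds, say, the 95th percentile of the surrogate correlations.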
Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI?
During the last several years, the focus of research on resting-state functional magnetic resonance imaging (fMRI) has shifted from the analysis of functional connectivity averaged over the duration of scanning sessions to the analysis of changes of functional connectivity within sessions. Although several studies have reported the presence of dynamic functional connectivity (dFC), statistical assessment of the results is not always carried out in a sound way and, in some studies, is even omitted. In this study, we explain why appropriate statistical tests are needed to detect dFC, we describe how they can be carried out and how to assess the performance of dFC measures, and we illustrate the methodology using spontaneous blood-oxygen level-dependent (BOLD) fMRI recordings of macaque monkeys under general anesthesia and in human subjects under resting-state conditions. We mainly focus on sliding-window correlations since these are most widely used in assessing dFC, but also consider a recently proposed non-linear measure. The simulations and methodology, however, are general and can be applied to any measure. The results are twofold. First, through simulations, we show that in typical resting-state sessions of 10 min, it is almost impossible to detect dFC using sliding-window correlations. This prediction is validated by both the macaque and the human data: in none of the individual recording sessions was evidence for dFC found. Second, detection power can be considerably increased by session- or subject-averaging of the measures. In doing so, we found that most of the functional connections are in fact dynamic. With this study, we hope to raise awareness of the statistical pitfalls in the assessment of dFC and how they can be avoided by using appropriate statistical methods.
• Not widely recognized is the need for proper statistical testing to assess dynamic functional connectivity in resting-state fMRI.
• This study describes how to test and how not to test for dynamic functional connectivity.
• Large-scale dynamic functional networks are found in macaque monkeys under anesthesia.
• Dynamic functional networks comprise both cortical and sub-cortical regions.
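The sliding-window correlation measure at the center of this study is simple to compute; the study's point is that its window-to-window variability must be compared against a static-connectivity null before being read as dynamics. A minimal sketch in plain NumPy (naming is mine):

```python
import numpy as np

def sliding_window_corr(x, y, win, step=1):
    """Pearson correlation between x and y in sliding windows of length `win`.

    Returns one correlation per window start. A dFC test must ask whether
    the spread of these values exceeds what a static-FC null (e.g. surrogate
    data with the same static correlation) would produce.
    """
    starts = range(0, len(x) - win + 1, step)
    return np.array([np.corrcoef(x[s:s + win], y[s:s + win])[0, 1]
                     for s in starts])
```

For the short (roughly 10 min) sessions discussed above, the surrogate spread is typically as large as the observed spread, which is why single-session detection fails.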
Interpreting temporal fluctuations in resting-state functional connectivity MRI
Resting-state functional connectivity is a powerful tool for studying human functional brain networks. Temporal fluctuations in functional connectivity, i.e., dynamic functional connectivity (dFC), are thought to reflect dynamic changes in brain organization and non-stationary switching of discrete brain states. However, recent studies have suggested that dFC might be attributed to sampling variability of static FC. Despite this controversy, a detailed exposition of stationarity and statistical testing of dFC is lacking in the literature. This article offers an in-depth exploration of these statistical issues at a level appealing to both neuroscientists and statisticians. We first review the statistical notion of stationarity, emphasizing its reliance on ensemble statistics. In contrast, all FC measures depend on sample statistics. An important consequence is that the space of stationary signals is much broader than expected, e.g., encompassing hidden Markov models (HMM) widely used to extract discrete brain states. In other words, stationarity does not imply the absence of brain states. We then expound the assumptions underlying the statistical testing of dFC. It turns out that the two popular frameworks, phase randomization (PR) and autoregressive randomization (ARR), generate stationary, linear, Gaussian null data. Therefore, statistical rejection can be due to non-stationarity, nonlinearity and/or non-Gaussianity. For example, the null hypothesis can be rejected for the stationary HMM due to nonlinearity and non-Gaussianity. Finally, we show that a common form of ARR (bivariate ARR) is susceptible to false positives compared with PR and an adapted version of ARR (multivariate ARR). Application of PR and multivariate ARR to Human Connectome Project data suggests that the stationary, linear, Gaussian null hypothesis cannot be rejected for most participants. However, failure to reject the null hypothesis does not imply that static FC can fully explain dFC.
We find that first-order AR models explain temporal FC fluctuations significantly better than static FC models. Since first-order AR models encode both static FC and one-lag FC, this suggests the presence of dynamical information beyond static FC. Furthermore, even in subjects where the null hypothesis was rejected, AR models explain temporal FC fluctuations significantly better than a popular HMM, suggesting the lack of discrete states (as measured by resting-state fMRI). Overall, our results suggest that AR models are not only useful as a means for generating null data, but may be a powerful tool for exploring the dynamical properties of resting-state fMRI. Finally, we discuss how apparent contradictions in the growing dFC literature might be reconciled.

• Space of stationary models is bigger than expected; it includes the hidden Markov model (HMM).
• Phase and autoregressive randomizations test for stationarity, linearity, and Gaussianity.
• Resting-state fMRI is mostly stationary, linear, and Gaussian.
• First-order autoregressive (AR) models encode static and one-lag FC.
• First-order AR models explain sliding-window correlations very well, better than HMM.
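The phase-randomization (PR) null framework discussed above can be sketched directly: adding one shared random phase sequence to every channel preserves each power spectrum and every cross-spectrum (hence static FC) while enforcing a stationary, linear, Gaussian null. A minimal NumPy illustration, not the authors' implementation:

```python
import numpy as np

def pr_surrogate(data, seed=None):
    """Multivariate phase-randomization surrogate.

    data: array of shape (time, channels). The same random phases are
    added to all channels, so channel means, power spectra, and all
    cross-spectra (and therefore static FC) are preserved exactly,
    while any non-stationary, nonlinear, or non-Gaussian structure
    is destroyed.
    """
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    f = np.fft.rfft(data, axis=0)
    phases = rng.uniform(0.0, 2.0 * np.pi, f.shape[0])
    phases[0] = 0.0              # leave the DC bin (channel means) untouched
    if n % 2 == 0:
        phases[-1] = 0.0         # keep the Nyquist bin real
    return np.fft.irfft(f * np.exp(1j * phases)[:, None], n, axis=0)
```

Sliding-window statistics computed on many such surrogates give the null distribution against which an observed dFC statistic is judged.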
Bridging High‐Fidelity Simulations and Physics‐Based Learning using a Surrogate Model for Soft Robot Control
Soft robotics holds immense promise for applications requiring adaptability and compliant interactions. However, the lack of sufficiently fast and accurate simulation environments for soft robots has hindered progress, particularly in linking with reinforcement learning (RL) applications. Traditional finite element method (FEM) models provide precise insights into soft robot dynamics but are computationally intensive and impractical for accelerated simulation. This work introduces a novel framework that integrates high‐fidelity FEM simulations with computationally efficient physics‐based simulations through a surrogate model tailored for RL. The surrogate model, trained on real‐world and FEM‐generated datasets, captures complex dynamics while maintaining efficiency. Sim-to-real experiments validate the framework on trajectory-tracking and force-control tasks with high accuracy. These results demonstrate the framework's ability to bridge the simulation gap, enabling its application to advanced tasks, such as manipulation and interaction in unstructured environments. A surrogate‐model‐based framework is proposed for combining high‐fidelity finite element method and efficient physics simulations to enable fast, accurate soft robot simulation for reinforcement learning, validated through sim‐to‐real experiments.
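To illustrate only the surrogate-model idea (the paper's model is trained on FEM and real-world data, and its architecture is not reproduced here), a dynamics surrogate can be as simple as a ridge regression from (state, action) to next state, cheap enough to stand in for an expensive simulator inside an RL rollout. Everything below, including the class name, is a hypothetical sketch:

```python
import numpy as np

class DynamicsSurrogate:
    """Linear ridge-regression surrogate of (state, action) -> next state.

    Trained on transitions collected from a slow, high-fidelity simulator,
    it can then replace that simulator for fast rollouts.
    """

    def __init__(self, lam=1e-3):
        self.lam = lam   # ridge regularization strength
        self.W = None

    def fit(self, states, actions, next_states):
        # Design matrix with a bias column; solve the ridge normal equations.
        X = np.column_stack([states, actions, np.ones(len(states))])
        A = X.T @ X + self.lam * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ next_states)
        return self

    def step(self, state, action):
        # One surrogate simulation step.
        x = np.r_[state, action, 1.0]
        return x @ self.W
```

Real soft-robot dynamics are nonlinear, so the paper's learned surrogate is necessarily richer; the point here is only the interface: fit on simulator transitions, then `step` inside the RL loop.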
A deep learning approach using natural language processing and time-series forecasting towards enhanced food safety
In various application domains and sectors, data collected from the respective industries are complemented with open data, adding value to the overall analysis and decision-making process. Open data include weather data, transportation information, stock and investment product prices, or even health-related data. One application domain that could harness this added value is the food industry, and more specifically decisions related to food recalls. The collected data can be analyzed in real time with Artificial Intelligence techniques to obtain insights about potentially unsafe goods and products. These insights drive decision making, such as identifying which goods are more likely to be harmful in the near future, and subsequently optimizing the food supply chain. The latter reflects the overall food recall process monitoring and is enhanced through a data-driven forecasting approach. This provides actionable insights for enhancing food safety across the food supply chain, given that goods and products can become unsafe for many reasons, such as mislabeled allergens, contamination, etc. To address this challenge, this paper introduces a deep learning approach that leverages Natural Language Processing and Time-series Forecasting techniques to monitor and analyze the risk associated with each food product category and the corresponding potential recalls. Furthermore, we propose a technique that exploits reinforcement learning to utilize historical recall announcements of food products for predicting their future recalls, thus giving food companies insight into upcoming trends in food recalls that can lead to timely recalls. We also evaluate and demonstrate the effectiveness and added value of the proposed approaches through a real-world scenario that yields promising results.
While several techniques and models have been analyzed and applied to the challenge of food recall prediction, the use of analogous/surrogate data has also been studied and evaluated as a route to more accurate outcomes.
Network inference from short, noisy, low time-resolution, partial measurements
Network link inference from measured time series data of the behavior of dynamically interacting network nodes is an important problem with wide-ranging applications, e.g., estimating synaptic connectivity among neurons from measurements of their calcium fluorescence. Network inference methods typically begin by using the measured time series to assign to any given ordered pair of nodes a numerical score reflecting the likelihood of a directed link between those two nodes. In typical cases, the measured time series data may be subject to limitations, including limited duration, low sampling rate, observational noise, and partial nodal state measurement. However, it is unknown how the performance of link inference techniques on such datasets depends on these experimental limitations of data acquisition. Here, we utilize both synthetic data generated from coupled chaotic systems as well as experimental data obtained from Caenorhabditis elegans neural activity to systematically assess the influence of data limitations on the character of scores reflecting the likelihood of a directed link between a given node pair. We do this for three network inference techniques: Granger causality, transfer entropy, and a machine learning-based method. Furthermore, we assess the ability of appropriate surrogate data to determine statistical confidence levels associated with the results of link-inference techniques.
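A minimal version of one of the three scored techniques, Granger causality, together with circular-shift surrogates for the kind of statistical confidence levels the abstract mentions, might look like this (illustrative NumPy, my own naming, not the authors' pipeline):

```python
import numpy as np

def granger_score(x, y, lag=2):
    """Score for 'y Granger-causes x': log ratio of the residual variance
    of an AR model on x's own past vs. a model that also sees y's past."""
    n = len(x)
    T = range(lag, n)
    own = np.array([x[t - lag:t] for t in T])                   # x's past
    full = np.array([np.r_[x[t - lag:t], y[t - lag:t]] for t in T])
    target = x[lag:]

    def resid_var(Z):
        Z = np.column_stack([np.ones(len(Z)), Z])
        beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
        return np.var(target - Z @ beta)

    return np.log(resid_var(own) / resid_var(full))

def shift_surrogate_scores(x, y, lag=2, n_surr=100, seed=None):
    """Null scores from circular shifts of y: each shift preserves y's
    autocorrelation but destroys its temporal alignment with x."""
    rng = np.random.default_rng(seed)
    shifts = rng.integers(lag + 1, len(y) - lag - 1, n_surr)
    return np.array([granger_score(x, np.roll(y, s), lag) for s in shifts])
```

A link score is then deemed significant only if it exceeds a high quantile of the surrogate scores, which is the role surrogate data play in the assessment described above.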
Data-driven causal analysis of observational biological time series
Complex systems are challenging to understand, especially when they defy manipulative experiments for practical or ethical reasons. Several fields have developed parallel approaches to infer causal relations from observational time series. Yet, these methods are easy to misunderstand and often controversial. Here, we provide an accessible and critical review of three statistical causal discovery approaches (pairwise correlation, Granger causality, and state space reconstruction), using examples inspired by ecological processes. For each approach, we ask what it tests for, what causal statement it might imply, and when it could lead us astray. We devise new ways of visualizing key concepts, describe some novel pathologies of existing methods, and point out how so-called ‘model-free’ causality tests are not assumption-free. We hope that our synthesis will facilitate thoughtful application of methods, promote communication across different fields, and encourage explicit statements of assumptions. A video walkthrough is available (Video 1 or https://youtu.be/AlV0ttQrjK8 ).
Interpreting null models of resting-state functional MRI dynamics: not throwing the model out with the hypothesis
Null models are useful for assessing whether a dataset exhibits a non-trivial property of interest. These models have recently gained interest in the neuroimaging community as means to explore dynamic properties of functional Magnetic Resonance Imaging (fMRI) time series. Interpretation of null-model testing in this context may not be straightforward because (i) null hypotheses associated to different null models are sometimes unclear and (ii) fMRI metrics might be ‘trivial’, i.e. preserved under the null hypothesis, and still be useful in neuroimaging applications. In this commentary, we review several commonly used null models of fMRI time series and discuss the interpretation of the corresponding tests. We argue that, while null-model testing allows for a better characterization of the statistical properties of fMRI time series and associated metrics, it should not be considered as a mandatory validation step to assess their relevance in representing brain functional dynamics.
Tuning Minimum-Norm regularization parameters for optimal MEG connectivity estimation
The accurate characterization of cortical functional connectivity from Magnetoencephalography (MEG) data remains a challenging problem due to the subjective nature of the analysis, which requires several decisions at each step of the analysis pipeline, such as the choice of a source estimation algorithm, a connectivity metric and a cortical parcellation, to name but a few. Recent studies have emphasized the importance of selecting the regularization parameter in minimum norm estimates with caution, as variations in its value can result in significant differences in connectivity estimates. In particular, the amount of regularization that is optimal for MEG source estimation can actually be suboptimal for coherence-based MEG connectivity analysis. In this study, we expand upon previous work by examining a broader range of commonly used connectivity metrics, including the imaginary part of coherence, corrected imaginary part of Phase Locking Value, and weighted Phase Lag Index, within a larger and more realistic simulation scenario. Our results show that the best estimate of connectivity is achieved using a regularization parameter that is 1 or 2 orders of magnitude smaller than the one that yields the best source estimation. This remarkable difference may imply that previous work assessing source-space connectivity using minimum-norm may have benefited from using less regularization, as this may have helped reduce false positives. Importantly, we provide the code for MEG data simulation and analysis, offering the research community a valuable open-source tool for informed selection of the regularization parameter when using minimum-norm for source-space connectivity analyses.

• Regularization parameter within MNE affects connectivity estimation.
• The optimal parameter is computed for a large number of synthetic datasets.
• In connectivity studies, less regularization should be employed with various metrics.
• Code and data are available open source.
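For concreteness, the minimum-norm estimate and the regularization parameter being tuned can be written down directly. This toy sketch (random leadfield, my own naming, identity noise covariance) is not the study's open-source code:

```python
import numpy as np

def mne_inverse(leadfield, meas, lam):
    """Minimum-norm source estimate: J = L^T (L L^T + lam * I)^(-1) M.

    leadfield: (sensors, sources) gain matrix L.
    meas:      (sensors, time) measurements M.
    lam:       the regularization parameter tuned in the study.
    """
    n_sens = leadfield.shape[0]
    gram = leadfield @ leadfield.T
    return leadfield.T @ np.linalg.solve(gram + lam * np.eye(n_sens), meas)

# Scanning lam over orders of magnitude is the study's key manipulation:
# the value that minimizes source-reconstruction error can be 1-2 orders
# of magnitude larger than the one that minimizes connectivity error.
lams = 10.0 ** np.arange(-6.0, 1.0)
```

Connectivity metrics (imaginary coherence, wPLI, etc.) are then computed on the estimated source time courses `J` for each `lam`, and the two error curves are compared.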
Group Surrogate Data Generating Models and similarity quantification of multivariate time-series: A resting-state fMRI study
• We developed a Group Surrogate Data Generating Model (GSDGM) to generate the group centroid of multivariate time-series.
• We developed a similarity quantification method called the Multivariate Time-series Ensemble Similarity Score (MTESS).
• GSDGM and MTESS can be used for fingerprint analysis in human rs-fMRI data and distinguish outlier sessions.
• We provide a GSDGM and MTESS Toolbox that can be freely downloaded from https://github.com/takuto-okuno-riken/mtess.

Advancements in non-invasive brain analysis through novel approaches such as big data analytics and in silico simulation are essential for explaining brain function and associated pathologies. In this study, we extend the vector auto-regressive surrogate technique from a single multivariate time-series to group data using a novel Group Surrogate Data Generating Model (GSDGM). This methodology allowed us to generate biologically plausible human brain dynamics representative of a large human resting-state (rs-fMRI) dataset obtained from the Human Connectome Project. Simultaneously, we defined a novel similarity measure, termed the Multivariate Time-series Ensemble Similarity Score (MTESS). MTESS showed high accuracy and f-measure in subject identification, and it can directly compare the similarity between two multivariate time-series. We used MTESS to analyze both human and marmoset rs-fMRI data. Our results showed similarity differences between cortical and subcortical regions. We also conducted MTESS and state transition analysis between single and group surrogate techniques, and confirmed that a group surrogate approach can generate plausible group centroid multivariate time-series. Finally, we used GSDGM and MTESS for the fingerprint analysis of human rs-fMRI data, successfully distinguishing normal and outlier sessions. These new techniques will be useful for clinical applications and in silico simulation.
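The single-series vector-autoregressive surrogate that GSDGM generalizes to group data can be sketched as a VAR(1) fit driven by resampled residuals. This is a minimal illustration under the assumption of roughly zero-mean data, not the GSDGM toolbox:

```python
import numpy as np

def var1_surrogate(ts, seed=None):
    """VAR(1) surrogate of a multivariate series ts of shape (time, channels).

    Fits Y ~ X @ A by least squares (assumes approximately zero-mean data),
    then regenerates the series by driving the fitted model with randomly
    resampled residuals, preserving the linear lag-1 dependence structure.
    """
    rng = np.random.default_rng(seed)
    X, Y = ts[:-1], ts[1:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # lag-1 coefficient matrix
    resid = Y - X @ A
    out = np.empty_like(ts)
    out[0] = ts[0]
    draws = rng.integers(0, len(resid), len(ts) - 1)
    for t in range(1, len(ts)):
        out[t] = out[t - 1] @ A + resid[draws[t - 1]]
    return out
```

GSDGM's extension fits a single generative model to many subjects' series so that surrogates represent the group centroid rather than any one subject; MTESS then scores how similar a given series is to that ensemble.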