6,155 results for "Neurosciences Statistical methods."
Data-driven computational neuroscience : machine learning and statistical models
"Data-driven computational neuroscience facilitates the transformation of data into insights into the structure and functions of the brain. This introduction for researchers and graduate students is the first in-depth, comprehensive treatment of statistical and machine learning methods for neuroscience. The methods are demonstrated through case studies of real problems to empower readers to build their own solutions. The book covers a wide variety of methods, including supervised classification with non-probabilistic models (nearest-neighbors, classification trees, rule induction, artificial neural networks and support vector machines) and probabilistic models (discriminant analysis, logistic regression and Bayesian network classifiers), meta-classifiers, multi-dimensional classifiers and feature subset selection methods. Other parts of the book are devoted to association discovery with probabilistic graphical models (Bayesian networks and Markov networks) and spatial statistics with point processes (complete spatial randomness and cluster, regular and Gibbs processes). Cellular, structural, functional, medical and behavioral neuroscience levels are considered" -- Provided by publisher.
Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process
Stochastic leaky integrate-and-fire models are popular due to their simplicity and statistical tractability. They have been widely applied to gain understanding of the underlying mechanisms for spike timing in neurons, and have served as building blocks for more elaborate models. The Ornstein–Uhlenbeck process is especially popular for describing the stochastic fluctuations in the membrane potential of a neuron, but other models, such as the square-root model or models with a non-linear drift, are also sometimes applied. Data described by such models have to be stationary, and thus these simple models can only be applied over short time windows. However, experimental data show varying time constants, state dependent noise, a graded firing threshold and time-inhomogeneous input. In the present study we build a jump diffusion model that incorporates these features, and introduce a firing mechanism with a state dependent intensity. In addition, we suggest statistical methods to estimate all unknown quantities and apply these to analyze turtle motoneuron membrane potentials. Finally, simulated and real data are compared and discussed. We find that a square-root diffusion describes the data much better than an Ornstein–Uhlenbeck process with constant diffusion coefficient. Further, the membrane time constant decreases with increasing depolarization, as expected from the increase in synaptic conductance. The network activity, which the neuron is exposed to, can be reasonably estimated to be a threshold version of the nerve output from the network. Moreover, the spiking characteristics are well described by a Poisson spike train with an intensity depending exponentially on the membrane potential.
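The abstract contrasts the Ornstein–Uhlenbeck process with richer diffusion models. As a point of reference, an OU membrane potential can be simulated with the Euler–Maruyama scheme in a few lines; all parameter values below are illustrative and not taken from the paper:

```python
import numpy as np

# Euler-Maruyama simulation of a membrane potential modeled as an
# Ornstein-Uhlenbeck process: dV = (-(V - V_rest)/tau + mu) dt + sigma dW.
# All parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
dt, T = 1e-4, 1.0             # time step and duration (s)
tau, v_rest = 0.02, -65.0     # membrane time constant (s), resting potential (mV)
mu, sigma = 200.0, 30.0       # constant input (mV/s), noise amplitude (mV/sqrt(s))
n = int(T / dt)
v = np.empty(n)
v[0] = v_rest
for i in range(1, n):
    drift = (-(v[i - 1] - v_rest) / tau + mu) * dt
    v[i] = v[i - 1] + drift + sigma * np.sqrt(dt) * rng.standard_normal()

# The stationary mean of this process is v_rest + mu * tau = -61 mV
print(v.mean())
```

A square-root diffusion, as preferred by the paper, would replace the constant `sigma` with a state dependent diffusion coefficient.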
A study of problems encountered in Granger causality analysis from a neuroscience perspective
Granger causality methods were developed to analyze the flow of information between time series. These methods have become more widely applied in neuroscience. Frequency-domain causality measures, such as those of Geweke, as well as multivariate methods, have particular appeal in neuroscience due to the prevalence of oscillatory phenomena and highly multivariate experimental recordings. Despite its widespread application in many fields, there are ongoing concerns regarding the applicability of Granger causality methods in neuroscience. When are these methods appropriate? How reliably do they recover the system structure underlying the observed data? What do frequency-domain causality measures tell us about the functional properties of oscillatory neural systems? In this paper, we analyze fundamental properties of Granger–Geweke (GG) causality, both computational and conceptual. Specifically, we show that (i) GG causality estimates can be either severely biased or of high variance, both leading to spurious results; (ii) even if estimated correctly, GG causality estimates alone are not interpretable without examining the component behaviors of the system model; and (iii) GG causality ignores critical components of a system’s dynamics. Based on this analysis, we find that the notion of causality quantified is incompatible with the objectives of many neuroscience investigations, leading to highly counterintuitive and potentially misleading results. Through the analysis of these problems, we provide important conceptual clarification of GG causality, with implications for other related causality approaches and for the role of causality analyses in neuroscience as a whole.
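The Granger idea being critiqued can be made concrete with a toy time-domain version: regress a target series on its own past with and without the source's past, and compare residual variances. This simplified VAR(1) sketch is illustrative only; it is not the Granger–Geweke estimator analyzed in the paper:

```python
import numpy as np

# Toy illustration: y depends on past x, while x evolves on its own,
# so x should "Granger-cause" y but not vice versa.
rng = np.random.default_rng(1)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def granger_stat(target, source, lag=1):
    """Log ratio of restricted vs. full one-step residual variances."""
    tgt, tgt_lag, src_lag = target[lag:], target[:-lag], source[:-lag]
    a, *_ = np.linalg.lstsq(tgt_lag[:, None], tgt, rcond=None)
    var_restricted = np.var(tgt - tgt_lag * a[0])     # own past only
    Z = np.column_stack([tgt_lag, src_lag])
    b, *_ = np.linalg.lstsq(Z, tgt, rcond=None)
    var_full = np.var(tgt - Z @ b)                    # own past + source past
    return np.log(var_restricted / var_full)

print(granger_stat(y, x), granger_stat(x, y))  # x -> y large, y -> x near zero
```

Even in this clean setting, the paper's point stands: the statistic alone says nothing about the component dynamics that produced it.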
Finding the needle in a high-dimensional haystack: Canonical correlation analysis for neuroscientists
The 21st century marks the emergence of “big data” with a rapid increase in the availability of datasets with multiple measurements. In neuroscience, brain-imaging datasets are more commonly accompanied by dozens or hundreds of phenotypic subject descriptors on the behavioral, neural, and genomic level. The complexity of such “big data” repositories offers new opportunities and poses new challenges for systems neuroscience. Canonical correlation analysis (CCA) is a prototypical family of methods that is useful in identifying the links between variable sets from different modalities. Importantly, CCA is well suited to describing relationships across multiple sets of data, such as in recently available big biomedical datasets. Our primer discusses the rationale, promises, and pitfalls of CCA. •Introduction to the features of canonical correlation analysis and its applications in combining two or more domains of data, such as behavioural and neuroimaging measures.•The utility of different variations of CCA, with their pros and cons.•Tips on applying CCA to rich phenotype datasets such as UK Biobank and HCP.
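For readers new to CCA, the core computation can be sketched in a few lines: whiten each variable set, then take the SVD of the cross-covariance; the singular values are the canonical correlations. The two synthetic "modalities" below, linked by a shared latent factor, are purely illustrative:

```python
import numpy as np

# A shared latent factor z links two simulated variable sets, standing in
# for, e.g., imaging and behavioural measures. Loadings are illustrative.
rng = np.random.default_rng(2)
n = 5000
z = rng.standard_normal(n)                              # shared latent signal
X = np.outer(z, [1.0, -0.5, 0.3]) + rng.standard_normal((n, 3))
Y = np.outer(z, [0.8, 0.6]) + rng.standard_normal((n, 2))

def first_canonical_corr(X, Y):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    def inv_sqrt(C):                                    # C^(-1/2) for symmetric C
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T
    Cxx, Cyy = X.T @ X / len(X), Y.T @ Y / len(Y)
    Cxy = X.T @ Y / len(X)
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(K, compute_uv=False)[0]        # top canonical correlation

print(first_canonical_corr(X, Y))
```

With hundreds of phenotype descriptors and modest sample sizes, this same computation overfits badly, which is one of the pitfalls the primer discusses.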
Circular analysis in systems neuroscience: the dangers of double dipping
This perspective illustrates some of the problems involved in analyzing the complex data yielded by systems neuroscience techniques, such as brain imaging and electrophysiology. Specifically, when test statistics are not independent of the selection criteria, common analyses can produce spurious results. The authors suggest ways to avoid such errors. A neuroscientific experiment typically generates a large amount of data, of which only a small fraction is analyzed in detail and presented in a publication. However, selection among noisy measurements can render circular an otherwise appropriate analysis and invalidate results. Here we argue that systems neuroscience needs to adjust some widespread practices to avoid the circularity that can arise from selection. In particular, 'double dipping', the use of the same dataset for selection and selective analysis, will give distorted descriptive statistics and invalid statistical inference whenever the results statistics are not inherently independent of the selection criteria under the null hypothesis. To demonstrate the problem, we apply widely used analyses to noise data known to not contain the experimental effects in question. Spurious effects can appear in the context of both univariate activation analysis and multivariate pattern-information analysis. We suggest a policy for avoiding circularity.
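The double-dipping problem is easy to reproduce on pure noise: selecting units by their apparent effect and then estimating that effect in the same data yields a spuriously large value, while estimating on independent data does not. A minimal simulation with illustrative sizes:

```python
import numpy as np

# Pure-noise "experiment": 1000 voxels x 20 subjects with no true effect.
rng = np.random.default_rng(3)
data = rng.standard_normal((1000, 20))

# Circular: select and estimate on the same data (double dipping)
voxel_means = data.mean(axis=1)
selected = voxel_means > voxel_means.std()
biased_effect = data[selected].mean()          # spuriously large

# Non-circular: select on one half of the subjects, estimate on the other
half1, half2 = data[:, :10], data[:, 10:]
sel = half1.mean(axis=1) > half1.mean(axis=1).std()
unbiased_effect = half2[sel].mean()            # near zero, as it should be
print(biased_effect, unbiased_effect)
```

The split-half estimate illustrates the paper's suggested remedy: keep selection and selective analysis statistically independent.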
Cross-validation failure: Small sample sizes lead to large error bars
Predictive models ground many state-of-the-art developments in statistical brain image analysis: decoding, MVPA, searchlight, or extraction of biomarkers. The principled approach to establish their validity and usefulness is cross-validation, testing prediction on unseen data. Here, I would like to raise awareness of the error bars of cross-validation, which are often underestimated. Simple experiments show that the sample sizes of many neuroimaging studies inherently lead to large error bars, e.g. ±10% for 100 samples. The standard error across folds strongly underestimates them. These large error bars compromise the reliability of conclusions drawn with predictive models, such as biomarkers or methods developments where, unlike with cognitive neuroimaging MVPA approaches, more samples cannot be acquired by repeating the experiment across many subjects. Solutions to increase sample size must be investigated, tackling possible increases in heterogeneity of the data.
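The ±10% figure follows from treating a cross-validated accuracy as a binomial proportion: its sampling error is set by the total number of test samples, not by the spread across folds. A back-of-envelope calculation with illustrative numbers:

```python
import math

# Half-width of a 95% normal-approximation confidence interval for an
# accuracy estimated from n test samples. Accuracy value is illustrative.
def accuracy_error_bar(acc, n):
    return 1.96 * math.sqrt(acc * (1 - acc) / n)

print(round(accuracy_error_bar(0.70, 100), 3))   # roughly 0.09, i.e. about +/-10%
print(round(accuracy_error_bar(0.70, 1000), 3))  # shrinks to roughly 0.03
```

This is why the standard error across folds, which only reflects fold-to-fold variability, understates the true uncertainty.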
A solution to dependency: using multilevel analysis to accommodate nested data
The authors examine papers in high profile journals and find that while collection of multiple observations from a single research object is common practice, such nested data are often analyzed using inappropriate statistical techniques. The authors show that this results in increased Type I error rates, and propose multilevel modelling to address this issue. In neuroscience, experimental designs in which multiple observations are collected from a single research object (for example, multiple neurons from one animal) are common: 53% of 314 reviewed papers from five renowned journals included this type of data. These so-called 'nested designs' yield data that cannot be considered to be independent, and so violate the independence assumption of conventional statistical methods such as the t test. Ignoring this dependency results in a probability of incorrectly concluding that an effect is statistically significant that is far higher (up to 80%) than the nominal α level (usually set at 5%). We discuss the factors affecting the Type I error rate and the statistical power in nested data, methods that accommodate dependency between observations and ways to determine the optimal study design when data are nested. Notably, optimization of experimental designs nearly always concerns collection of more truly independent observations, rather than more observations from one research object.
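The inflation the authors describe can be demonstrated with a small simulation: cells nested in animals share a random animal effect, and there is no true population-level effect. The design sizes, effect scales, and critical values below are illustrative, and aggregating to per-animal means stands in for the full multilevel model:

```python
import numpy as np

# 10 cells from each of 10 animals; no true effect. Treating all 100
# cells as independent inflates the false-positive rate far above 5%;
# a t test on per-animal means does not.
rng = np.random.default_rng(4)
n_sims, n_animals, n_cells = 2000, 10, 10
naive_hits, cluster_hits = 0, 0
for _ in range(n_sims):
    animal_effect = rng.standard_normal((n_animals, 1))   # shared within animal
    cells = animal_effect + rng.standard_normal((n_animals, n_cells))
    flat = cells.ravel()
    t_naive = flat.mean() / (flat.std(ddof=1) / np.sqrt(flat.size))
    naive_hits += abs(t_naive) > 1.98        # approx. two-sided t crit., df = 99
    means = cells.mean(axis=1)
    t_cluster = means.mean() / (means.std(ddof=1) / np.sqrt(n_animals))
    cluster_hits += abs(t_cluster) > 2.26    # two-sided t crit., df = 9
print(naive_hits / n_sims, cluster_hits / n_sims)
```

With this intraclass correlation the naive test rejects far more often than 5%, in line with the inflated rates the paper reports.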
Wiener–Granger Causality: A well established methodology
For decades, the main ways to study the effect of one part of the nervous system upon another have been either to stimulate or lesion the first part and investigate the outcome in the second. This article describes a fundamentally different approach to identifying causal connectivity in neuroscience: a focus on the predictability of ongoing activity in one part from that in another. This approach was made possible by a new method that comes from the pioneering work of Wiener (1956) and Granger (1969). The Wiener–Granger method, unlike stimulation and ablation, does not require direct intervention in the nervous system. Rather, it relies on the estimation of causal statistical influences between simultaneously recorded neural time series data, either in the absence of identifiable behavioral events or in the context of task performance. Causality in the Wiener–Granger sense is based on the statistical predictability of one time series that derives from knowledge of one or more others. This article defines Wiener–Granger Causality, discusses its merits and limitations in neuroscience, and outlines recent developments in its implementation.
Denoising of diffusion MRI using random matrix theory
We introduce and evaluate a post-processing technique for fast denoising of diffusion-weighted MR images. By exploiting the intrinsic redundancy in diffusion MRI using universal properties of the eigenspectrum of random covariance matrices, we remove noise-only principal components, thereby enabling signal-to-noise ratio enhancements. This yields parameter maps of improved quality for visual, quantitative, and statistical interpretation. By studying statistics of residuals, we demonstrate that the technique suppresses local signal fluctuations that solely originate from thermal noise rather than from other sources such as anatomical detail. Furthermore, we achieve improved precision in the estimation of diffusion parameters and fiber orientations in the human brain without compromising the accuracy and spatial resolution. •Denoising enhances the image quality for improved visual, quantitative, and statistical interpretation.•Random matrix theory enables data-driven threshold for PCA denoising.•The Marchenko-Pastur distribution is a universal signature of noise.•The technique suppresses signal fluctuations that solely originate in thermal noise.•Precision of diffusion parameter estimators increases without lowering accuracy.
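The core of the technique, discarding principal components whose eigenvalues fall below the Marchenko–Pastur upper edge for pure noise, can be sketched as follows. Matrix sizes and the (here, known) noise level are illustrative; the actual method estimates the noise level from the eigenspectrum itself:

```python
import numpy as np

# Rank-1 "signal" buried in unit-variance noise across 500 voxels and
# 60 diffusion directions; all sizes and scales are illustrative.
rng = np.random.default_rng(5)
n_voxels, n_dirs, sigma = 500, 60, 1.0
signal = 0.5 * np.outer(rng.standard_normal(n_voxels), rng.standard_normal(n_dirs))
data = signal + sigma * rng.standard_normal((n_voxels, n_dirs))

X = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(X.T @ X / n_voxels)
# Marchenko-Pastur upper edge for a pure-noise eigenspectrum
mp_edge = sigma ** 2 * (1 + np.sqrt(n_dirs / n_voxels)) ** 2
keep = eigvals > mp_edge                      # signal-carrying components
P = eigvecs[:, keep] @ eigvecs[:, keep].T     # projector onto kept subspace
denoised = data.mean(axis=0) + X @ P
print(keep.sum())                             # components retained as signal
```

Because the noise-only eigenvalues sit below the MP edge, the projection discards them while preserving the signal subspace, which is what yields the SNR gain described in the abstract.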
Faster permutation inference in brain imaging
Permutation tests are increasingly being used as a reliable method for inference in neuroimaging analysis. However, they are computationally intensive. For small, non-imaging datasets, recomputing a model thousands of times is seldom a problem, but for large, complex models this can be prohibitively slow, even with the availability of inexpensive computing power. Here we exploit properties of statistics used with the general linear model (GLM) and their distributions to obtain accelerations irrespective of generic software or hardware improvements. We compare the following approaches: (i) performing a small number of permutations; (ii) estimating the p-value as a parameter of a negative binomial distribution; (iii) fitting a generalised Pareto distribution to the tail of the permutation distribution; (iv) computing p-values based on the expected moments of the permutation distribution, approximated from a gamma distribution; (v) direct fitting of a gamma distribution to the empirical permutation distribution; and (vi) permuting a reduced number of voxels, with completion of the remainder using low rank matrix theory. Using synthetic data we assessed the different methods in terms of their error rates, power, agreement with a reference result, and the risk of taking a different decision regarding the rejection of the null hypotheses (known as the resampling risk). We also conducted a re-analysis of a voxel-based morphometry study as a real-data example. All methods yielded exact error rates. Likewise, power was similar across methods. Resampling risk was higher for methods (i), (iii) and (v). For comparable resampling risks, the method in which no permutations are done (iv) was the absolute fastest. All methods produced visually similar maps for the real data, with stronger effects being detected in the family-wise error rate corrected maps by (iii) and (v), and generally similar to the results seen in the reference set. 
Overall, for uncorrected p-values, method (iv) was found to be the best as long as symmetric errors can be assumed. In all other settings, including for family-wise error corrected p-values, we recommend the tail approximation (iii). The methods considered are freely available in the tool PALM — Permutation Analysis of Linear Models. •Permutation methods can be accelerated through additional statistical approaches.•Six approaches are described and assessed.•Methods can be 100 times faster than in the non-accelerated case.•Recommendations are provided for various common scenarios.
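Approach (ii) can be illustrated with a sequential stopping rule in the spirit of Besag and Clifford: rather than running a fixed large number of permutations, permute only until a preset number of exceedances r has been observed. The data, test statistic, and choice of r below are illustrative, not PALM's implementation:

```python
import numpy as np

# Two groups with no true difference; one-sided mean-difference statistic.
rng = np.random.default_rng(6)
group_a = rng.standard_normal(30)
group_b = rng.standard_normal(30)
observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])

r, exceed, m, cap = 20, 0, 0, 100_000
while exceed < r and m < cap:
    rng.shuffle(pooled)
    m += 1
    if pooled[:30].mean() - pooled[30:].mean() >= observed:
        exceed += 1
# Early stop at the r-th exceedance gives p-hat = r / m; if the cap is
# hit first, fall back to the usual conservative (exceed + 1) / (m + 1).
p_hat = r / m if exceed == r else (exceed + 1) / (m + 1)
print(m, p_hat)
```

The payoff is that large p-values, the common case, are resolved after only a few dozen permutations, while small p-values automatically receive more.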