Catalogue Search | MBRL
61,826 result(s) for "Physical sciences Statistical methods."
Quantitative methods of data analysis for the physical sciences and engineering
"This book provides a thorough and comprehensive coverage of most of the new and important quantitative methods of data analysis for graduate students and practitioners. In recent years, data analysis methods have exploded alongside advanced computing power, and it is critical to understand such methods to get the most out of data, and to extract signal from noise. The book excels in explaining difficult concepts through simple explanations and detailed explanatory illustrations. Most unique is the focus on confidence limits for power spectra and their proper interpretation, something rare or completely missing in other books. Likewise, there is a thorough discussion of how to assess uncertainty via use of Expectancy, and the easy to apply and understand Bootstrap method. The book is written so that descriptions of each method are as self-contained as possible" -- Provided by publisher.
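As a rough illustration of the bootstrap method this abstract highlights, here is a minimal sketch of a percentile bootstrap confidence interval for a sample mean; the data are synthetic and NumPy is assumed available (this is not code from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # synthetic sample

# Resample with replacement many times and recompute the statistic each time
n_boot = 10_000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile confidence interval for the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The same resampling loop works for statistics without a closed-form error formula, which is what makes the method easy to apply.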
Statistical mechanics of membranes and surfaces
by Nelson, David R; Piran, Tsvi; Weinberg, Steven
in Biophysics; Interfaces (Physical sciences); Membranes (Technology)
2004
This invaluable book explores the delicate interplay between geometry and statistical mechanics in materials such as microemulsions, wetting and growth interfaces, bulk lyotropic liquid crystals, chalcogenide glasses and sheet polymers, using tools from the fields of polymer physics, differential geometry, field theory and critical phenomena. Several chapters have been updated relative to the classic 1989 edition. Moreover, there are now three entirely new chapters — on effects of anisotropy and heterogeneity, on fixed connectivity membranes and on triangulated surface models of fluctuating membranes.
Statistical Data Analysis for the Physical Sciences
2013
Data analysis lies at the heart of every experimental science. Providing a modern introduction to statistics, this book is ideal for undergraduates in physics. It introduces the necessary tools required to analyse data from experiments across a range of areas, making it a valuable resource for students. In addition to covering the basic topics, the book also takes in advanced and modern subjects, such as neural networks, decision trees, fitting techniques and issues concerning limit or interval setting. Worked examples and case studies illustrate the techniques presented, and end-of-chapter exercises help test the reader's understanding of the material.
SARTools: A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data
by Brillet-Guéguen, Loraine; Dillies, Marie-Agnès; Varet, Hugo
in Binomial distribution; Bioinformatics; Biology
2016
Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps: normalization, dispersion estimation and testing for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not interchangeable across the packages without adequate transformations, of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and to prevent errors arising from misuse of the proposed methods.
SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor, with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR respectively). By tuning a small number of parameters and executing one of the R scripts, users gain access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypothesis checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used.
SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process, it meets the requirements of reproducible research.
Journal Article
Generalized Network Psychometrics: Combining Network and Latent Variable Models
by Epskamp, Sacha; Rhemtulla, Mijke; Borsboom, Denny
in Algorithms; Assessment; Behavioral Science and Psychology
2017
We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of structural equation modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework latent network modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance–covariance structure of indicators is modeled as a network. We term this generalization residual network modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: a stepwise search algorithm for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset.
Journal Article
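Psychometric networks of the kind this abstract describes are commonly estimated from the precision matrix (inverse covariance), whose off-diagonal entries give partial correlations. The sketch below uses synthetic data and plain NumPy, not the authors' lvnet package, to show how a partial correlation separates a direct dependency from an indirect one:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 4
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]  # direct dependency between variables 0 and 1
X[:, 2] += 0.8 * X[:, 1]  # and between 1 and 2; 0 and 2 are only indirectly linked

# Marginal correlations mix direct and indirect effects
marginal = np.corrcoef(X, rowvar=False)

# Partial correlations from the precision matrix (inverse covariance):
# pcor_ij = -K_ij / sqrt(K_ii * K_jj)
K = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(K))
pcor = -K / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)
```

Variables 0 and 2 are marginally correlated through the chain 0-1-2, but their partial correlation is near zero: in network terms, there is no edge between them.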
Intraclass correlation – A discussion and demonstration of basic features
by Liljequist, David; Skavberg Roaldsen, Kirsti; Elfving, Britt
in Analysis of variance; Bias; Computer simulation
2019
A re-analysis of intraclass correlation (ICC) theory is presented together with Monte Carlo simulations of ICC probability distributions. A partly revised and simplified theory of the single-score ICC is obtained, together with an alternative and simple recipe for its use in reliability studies. Our main practical conclusion is that in the analysis of a reliability study it is neither necessary nor convenient to start from an initial choice of a specified statistical model. Rather, one may impartially use all three single-score ICC formulas. A near equality of the three ICC values indicates the absence of bias (systematic error), in which case the classical (one-way random) ICC may be used. A consistency ICC larger than the absolute agreement ICC indicates the presence of non-negligible bias; if so, the classical ICC is invalid and misleading. An F-test may be used to confirm whether biases are present. From the resulting model (without or with bias), variances and confidence intervals may then be calculated. In the presence of bias, both the absolute agreement ICC and the consistency ICC should be reported, since they give different and complementary information about the reliability of the method. A clinical example with data from the literature is given.
Journal Article
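The classical one-way random single-score ICC the abstract refers to can be computed directly from ANOVA mean squares. A minimal sketch with made-up ratings (rows are subjects, columns are raters; the data are illustrative, not from the article):

```python
import numpy as np

# Toy reliability data: 5 subjects each rated by 3 raters
ratings = np.array([
    [9.0, 8.0, 9.0],
    [7.0, 6.0, 7.0],
    [5.0, 5.0, 6.0],
    [8.0, 7.0, 8.0],
    [3.0, 4.0, 3.0],
])
n, k = ratings.shape

grand = ratings.mean()
subj_means = ratings.mean(axis=1)

# One-way ANOVA mean squares: between-subject and within-subject
ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))

# Classical one-way random single-score ICC, often written ICC(1)
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Here between-subject variance dominates, so the ICC is high; raters mostly agree on the ordering and level of the subjects.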
On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations
2016
Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation well enough to identify migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals.
In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific homeland should not be attempted.
Journal Article
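The two outlier rules the abstract singles out, the IQR fence and the median absolute deviation from the median (MAD), are straightforward to apply. A sketch with hypothetical δ18O values (not data from the study), where two individuals differ markedly from the rest of the burial population:

```python
import numpy as np

# Hypothetical tooth-enamel δ18O values (‰); the last two entries are planted outliers
d18o = np.array([26.1, 26.4, 25.9, 26.8, 26.2, 26.5, 25.7, 26.3, 28.9, 23.8])

# IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = np.percentile(d18o, [25, 75])
iqr = q3 - q1
iqr_outliers = (d18o < q1 - 1.5 * iqr) | (d18o > q3 + 1.5 * iqr)

# MAD rule: flag values more than 3 scaled MADs from the median
med = np.median(d18o)
mad = 1.4826 * np.median(np.abs(d18o - med))  # 1.4826 scales MAD to ~sigma for normal data
mad_outliers = np.abs(d18o - med) > 3 * mad
```

Both rules use robust location and spread estimates, so a migrant's extreme value does not inflate the spread used to detect it, unlike mean-and-standard-deviation rules.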
A simple method to assess and report thematic saturation in qualitative research
2020
Data saturation is the most commonly employed concept for estimating sample sizes in qualitative research. Over the past 20 years, scholars using both empirical research and mathematical/statistical models have made significant contributions to the question: How many qualitative interviews are enough? This body of work has advanced the evidence base for sample size estimation in qualitative inquiry during the design phase of a study, prior to data collection, but it does not provide qualitative researchers with a simple and reliable way to determine the adequacy of sample sizes during and/or after data collection. Using the principle of saturation as a foundation, we describe and validate a simple-to-apply method for assessing and reporting on saturation in the context of inductive thematic analyses. Following a review of the empirical research on data saturation and sample size estimation in qualitative research, we propose an alternative way to evaluate saturation that overcomes the shortcomings and challenges associated with existing methods identified in our review. Our approach includes three primary elements in its calculation and assessment: Base Size, Run Length, and New Information Threshold. We additionally propose a more flexible approach to reporting saturation. To validate our method, we use a bootstrapping technique on three existing thematically coded qualitative datasets generated from in-depth interviews. Results from this analysis indicate the method we propose to assess and report on saturation is feasible and congruent with findings from earlier studies.
Journal Article
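One plausible reading of the Base Size / Run Length / New Information Threshold calculation the abstract outlines is sketched below; the theme sets, parameter values and the exact ratio used are illustrative assumptions, not the authors' specification:

```python
# Hypothetical theme codes identified in each successive interview
interviews = [
    {"cost", "access"}, {"cost", "trust"}, {"access", "stigma"},
    {"trust"}, {"cost", "family"}, {"stigma"}, {"trust", "cost"},
    {"access"}, {"family"},
]

base_size = 4      # interviews used to establish the baseline theme set
run_length = 2     # window of subsequent interviews evaluated together
threshold = 0.05   # declare saturation when <=5% of themes in a run are new

themes_seen = set().union(*interviews[:base_size])
saturated_at = None
for start in range(base_size, len(interviews) - run_length + 1, run_length):
    run = set().union(*interviews[start:start + run_length])
    new = run - themes_seen
    # New-information ratio: share of themes in this run not seen before
    if len(new) / len(themes_seen | run) <= threshold:
        saturated_at = start + run_length
        break
    themes_seen |= run
```

With these toy data, the first run after the base adds one new theme ("family"), but the second run adds none, so saturation is declared after the eighth interview.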
Paintings predict the distribution of species, or the challenge of selecting environmental predictors and evaluation statistics
by Secondi, Jean; Besnard, Aurélien G.; Fourcade, Yoan
in Biodiversity and Ecology; biogeography; Computation
2018
Aim: Species distribution modelling, a family of statistical methods that predict species distributions from a set of occurrences and environmental predictors, is now routinely applied in many macroecological studies. However, the reliability of the evaluation metrics usually employed to validate these models remains questioned. Moreover, the emergence of online databases of environmental variables with global coverage, especially climatic ones, has favoured the use of the same set of standard predictors. Unfortunately, the selection of variables is too rarely based on a careful examination of the species' ecology. In this context, our aim was to highlight the importance of selecting ad hoc variables in species distribution models, and to assess the ability of classical evaluation statistics to identify models with no biological realism. Innovation: First, we reviewed current practices in the field of species distribution modelling in terms of variable selection and model evaluation. Then, we computed distribution models for 509 European species using pseudo-predictors derived from paintings, or using a real set of climatic and topographic predictors. We calculated model performance based on the area under the receiver operating characteristic curve (AUC) and the true skill statistic (TSS), partitioning occurrences into training and test data with different levels of spatial independence. Most models computed from pseudo-predictors were classified as good, and some were even better evaluated than models computed using real environmental variables. However, on average they were better discriminated when the partitioning of occurrences allowed testing for model transferability. Main conclusions: These findings confirm the crucial importance of variable selection and the inability of current evaluation metrics to assess the biological significance of distribution models.
We recommend that researchers carefully select variables according to the species' ecology and evaluate models only according to their capacity to transfer to distant areas. Nevertheless, model evaluation statistics must still be interpreted with great caution.
Journal Article
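The two evaluation statistics named in the abstract, AUC and TSS, can be computed from a set of presence/absence labels and model suitability scores. A minimal sketch with hypothetical data (not the study's models):

```python
import numpy as np

# Hypothetical presence (1) / absence (0) labels and model suitability scores
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.75, 0.4, 0.6, 0.35, 0.3, 0.2, 0.15, 0.1])

# AUC via the rank (Mann-Whitney) formulation: probability that a random
# presence scores higher than a random absence (ties count half)
pos, neg = scores[y == 1], scores[y == 0]
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
auc = wins / (len(pos) * len(neg))

# TSS = sensitivity + specificity - 1 at a fixed classification threshold
thr = 0.5
pred = scores >= thr
sens = (pred & (y == 1)).sum() / (y == 1).sum()
spec = (~pred & (y == 0)).sum() / (y == 0).sum()
tss = sens + spec - 1
```

Note that both statistics measure only discrimination between the training presences and absences; as the article argues, a model can score well on them while having no biological realism.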