Catalogue Search | MBRL
10,489 result(s) for "Biostatistics methods."
Monitoring the health of populations by tracking disease outbreaks : saving humanity from the next plague
"Today the citizens of developed countries have never experienced a large-scale disease outbreak. One reason is the success of the public health community, including epidemiologists and biostatisticians, in tracking and identifying disease outbreaks. Monitoring the Health of Populations by Tracking Disease Outbreaks: Saving Humanity from the Next Plague is the story of the application of statistics to disease detection and tracking. The work of public health officials often critically depends on the use of statistical methods to help discern whether an outbreak may be occurring and, if there is sufficient evidence of an outbreak, to locate and track it" -- Provided by publisher.
Bayesian biostatistics
by Lawson, Andrew (Andrew B.); Lesaffre, Emmanuel
in Bayes Theorem; Bayesian statistical decision theory; Biometry
2012
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets.
Through examples, exercises and a combination of introductory and more advanced chapters, this book provides an invaluable understanding of the complex world of biomedical statistics illustrated via a diverse range of applications taken from epidemiology, exploratory clinical studies, health promotion studies, image analysis and clinical trials.
Key Features:
* Provides an authoritative account of Bayesian methodology, from its most basic elements to its practical implementation, with an emphasis on healthcare techniques.
* Contains introductory explanations of Bayesian principles common to all areas of application.
* Presents clear and concise examples in biostatistics applications such as clinical trials, longitudinal studies, bioassay, survival, image analysis and bioinformatics.
* Illustrated throughout with examples using software including WinBUGS, OpenBUGS, SAS and various dedicated R programs.
* Highlights the differences between the Bayesian and classical approaches.
* Supported by an accompanying website hosting free software and case study guides.
Bayesian Biostatistics offers the reader a smooth introduction to Bayesian statistical methods, with chapters that gradually increase in complexity. Master's students in biostatistics, applied statisticians and all researchers with a good background in classical statistics who have an interest in Bayesian methods will find this book useful.
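As a minimal, self-contained sketch of the Bayesian paradigm the abstract describes (not taken from the book; the function names and trial numbers are illustrative), a conjugate beta-binomial update turns a Beta prior on a clinical response rate into a Beta posterior after observing data:

```python
def beta_binomial_posterior(successes, trials, alpha_prior=1.0, beta_prior=1.0):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior.

    Returns the (alpha, beta) parameters of the posterior distribution.
    """
    alpha_post = alpha_prior + successes
    beta_post = beta_prior + (trials - successes)
    return alpha_post, beta_post

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# 12 responders out of 40 patients, uniform Beta(1, 1) prior:
a, b = beta_binomial_posterior(12, 40)
print(posterior_mean(a, b))  # 13/42 ≈ 0.3095
```

The posterior mean sits between the prior mean (0.5) and the observed rate (0.3), shrinking more toward the prior when data are scarce, which is one concrete sense in which the Bayesian paradigm "matches scientific discovery" as evidence accumulates.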
Statistical methodology for the evaluation of vaccine efficacy in a phase III multi-centre trial of the RTS,S/AS01 malaria vaccine in African children
by Mmbando, Bruno; Lievens, Marc; Williamson, John
in Africa; African American children; Biomedical and Life Sciences
2011
Background
There has been much debate about the appropriate statistical methodology for the evaluation of malaria field studies and the challenges in interpreting data arising from these trials.
Methods
The present paper describes, for a pivotal phase III efficacy trial of the RTS,S/AS01 malaria vaccine, the methods of the statistical analysis and the rationale for their selection. The methods used to estimate efficacy of the primary course of vaccination, and of a booster dose, in preventing clinical episodes of uncomplicated and severe malaria, and to determine the duration of protection, are described. The interpretation of various measures of efficacy in terms of the potential public health impact of the vaccine is discussed.
Conclusions
The methodology selected to analyse the clinical trial must be scientifically sound, acceptable to regulatory authorities and meaningful to those responsible for malaria control and public health policy.
Trial registration: ClinicalTrials.gov NCT00866619
Journal Article
A review of spline function procedures in R
by Abrahamowicz, Michal; Perperoglou, Aris; Sauerbrei, Willi
in Algorithms; Biostatistics - methods; Blogs
2019
Background
With progress on both the theoretical and the computational fronts, the use of spline modelling has become an established tool in statistical regression analysis. An important issue in spline modelling is the availability of user-friendly, well-documented software packages. Following the idea of the STRengthening Analytical Thinking for Observational Studies initiative to provide users with guidance documents on the application of statistical methods in observational research, the aim of this article is to provide an overview of the most widely used spline-based techniques and their implementation in R.
Methods
In this work, we focus on the R Language for Statistical Computing, which has become hugely popular statistical software. We identified a set of packages that include functions for spline modelling within a regression framework. Using simulated and real data, we provide an introduction to spline modelling and an overview of the most popular spline functions.
Results
We present a series of simple scenarios of univariate data, where different basis functions are used to identify the correct functional form of an independent variable. Even with simple data, using routines from different packages can lead to different results.
Conclusions
This work illustrates the challenges an analyst faces when working with data. Most differences can be attributed to the choice of hyper-parameters rather than to the basis used. In fact, an experienced user will know how to obtain a reasonable outcome regardless of the type of spline used. However, many analysts do not have sufficient knowledge to use these powerful tools adequately and will need more guidance.
Journal Article
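To fix ideas about what such spline routines build internally, here is a minimal sketch (the function names are hypothetical, and this is the truncated-power basis; R packages such as those the review compares more often use B-splines, differing mainly in basis and hyper-parameters like knot number and placement):

```python
def truncated_power_basis(x, knots, degree=3):
    """Spline basis evaluated at a point x: polynomial terms 1, x, ..., x^degree,
    plus one truncated term max(x - k, 0)^degree per interior knot."""
    return [x ** d for d in range(degree + 1)] + \
           [max(x - k, 0.0) ** degree for k in knots]

def spline_value(coef, x, knots, degree=3):
    """Evaluate the spline with the given basis coefficients at x."""
    basis = truncated_power_basis(x, knots, degree)
    return sum(c * b for c, b in zip(coef, basis))
```

Fitting the coefficients by least squares on such a design matrix yields a regression spline; the truncated terms guarantee the fitted curve is smooth (continuous up to degree-1 derivatives) across each knot, while the choice of knots and any penalty are exactly the hyper-parameters the review identifies as the main source of disagreement between packages.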
False discovery rate control is a recommended alternative to Bonferroni-type adjustments in health studies
by Glickman, Mark E.; Rao, Sowmya R.; Schultz, Mark R.
in Analysis. Health state; Biological and medical sciences; Biomedical Research - methods
2014
Procedures for controlling the false positive rate when performing many hypothesis tests are commonplace in health and medical studies. Such procedures, most notably the Bonferroni adjustment, suffer from the problem that error rate control cannot be localized to individual tests, and that these procedures do not distinguish between exploratory and/or data-driven testing vs. hypothesis-driven testing. Instead, procedures derived from limiting false discovery rates may be a more appealing method to control error rates in multiple tests.
Controlling the false positive rate can lead to philosophical inconsistencies that can negatively impact the practice of reporting statistically significant findings. We demonstrate that the false discovery rate approach can overcome these inconsistencies and illustrate its benefit through an application to two recent health studies.
The false discovery rate approach is more powerful than methods like the Bonferroni procedure that control false positive rates. Controlling the false discovery rate in a study that arguably consisted of scientifically driven hypotheses found nearly as many significant results as without any adjustment, whereas the Bonferroni procedure found no significant results.
Although still unfamiliar to many health researchers, the use of false discovery rate control in the context of multiple testing can provide a solid basis for drawing conclusions about statistical significance.
Journal Article
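The standard procedure for controlling the false discovery rate is the Benjamini-Hochberg step-up method; the sketch below is a generic implementation of that procedure (not the authors' code), shown alongside the contrast the abstract draws with Bonferroni:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg FDR control at level q.

    Sort p-values ascending, find the largest rank i with p_(i) <= (i/m) * q,
    and reject every hypothesis up to that rank. Returns the rejected indices.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            cutoff = rank
    return sorted(order[:cutoff])

# Four tests: BH rejects three, while Bonferroni (p <= 0.05/4 = 0.0125)
# would reject only the first -- mirroring the power gap the paper reports.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))  # [0, 1, 2]
```

The per-rank threshold (i/m)·q relaxes as more small p-values accumulate, which is why FDR control retains most of the unadjusted discoveries in hypothesis-driven studies where Bonferroni's single threshold q/m finds none.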
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
2017
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
Journal Article
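The abstract's GA-BPNN pipeline is not reproduced here, but the genetic-algorithm component it relies on (selecting a low-dimensional feature subset by evolving bitstring masks) can be sketched in miniature. Everything below is a toy stand-in under stated assumptions: the function names are hypothetical, the fitness is a synthetic stand-in for classifier accuracy, and the real method additionally optimizes BPNN weights and biases:

```python
import random

def evolve(fitness, n_bits, pop_size=30, generations=60,
           crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm over bitstring feature masks:
    tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                nxt.append([bit ^ (rng.random() < mutation_rate) for bit in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy fitness: reward masks matching a target that keeps 4 of 12 "features".
target = [1] * 4 + [0] * 8
best = evolve(lambda m: sum(a == b for a, b in zip(m, target)), 12)
```

In the paper's setting, the fitness would instead be the validation accuracy of a BPNN trained on the masked wavelet-packet features, so the GA simultaneously prunes roughly half the dimensions and tunes the network.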
A phylogenetic transform enhances analysis of compositional microbiota data
by Mukherjee, Sayan; David, Lawrence A; Silverman, Justin D
in Abundance; Bioinformatics; Biology
2017
Surveys of microbial communities (microbiota), typically measured as relative abundance of species, have illustrated the importance of these communities in human health and disease. Yet, statistical artifacts commonly plague the analysis of relative abundance data. Here, we introduce the PhILR transform, which incorporates microbial evolutionary models with the isometric log-ratio transform to allow off-the-shelf statistical tools to be safely applied to microbiota surveys. We demonstrate that analyses of community-level structure can be applied to PhILR transformed data with performance on benchmarks rivaling or surpassing standard tools. Additionally, by decomposing distance in the PhILR transformed space, we identified neighboring clades that may have adapted to distinct human body sites. Decomposing variance revealed that covariation of bacterial clades within human body sites increases with phylogenetic relatedness. Together, these findings illustrate how the PhILR transform combines statistical and phylogenetic models to overcome compositional data challenges and enable evolutionary insights relevant to microbial communities.
Journal Article
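The log-ratio idea underlying PhILR can be illustrated with its simpler relative, the centered log-ratio (CLR) transform; this is not the PhILR code (PhILR builds isometric log-ratio coordinates from a phylogenetic tree), just a minimal sketch of why log-ratios sidestep the compositional constraint:

```python
import math

def clr(composition, pseudocount=1e-6):
    """Centered log-ratio transform: log of each part over the geometric mean.

    Relative abundances live on a simplex (they sum to one), which induces
    spurious negative correlations; CLR maps them into an unconstrained
    space where off-the-shelf statistical tools behave sensibly.
    """
    parts = [p + pseudocount for p in composition]  # guard against zeros
    log_parts = [math.log(p) for p in parts]
    mean_log = sum(log_parts) / len(log_parts)
    return [lp - mean_log for lp in log_parts]
```

CLR coordinates sum to zero by construction; PhILR's contribution is choosing orthonormal log-ratio coordinates aligned with the clades of the phylogeny, so that each coordinate contrasts evolutionarily meaningful groups of taxa.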
Predicting the synergy of multiple stress effects
2016
Toxicants and other, non-chemical, environmental stressors contribute to the global biodiversity crisis. Examples include the loss of bees and the reduction of aquatic biodiversity. Although non-compliance with regulations might be contributing, the widespread existence of these impacts suggests that, for example, the current approach to pesticide risk assessment fails to protect biodiversity when multiple stressors concurrently affect organisms. To quantify such multiple stress effects, we analysed all applicable aquatic studies and found that the presence of environmental stressors increases individual sensitivity to toxicants (pesticides, trace metals) by a factor of up to 100. To predict this dependence, we developed the “Stress Addition Model” (SAM). The SAM assumes that each individual has a general stress capacity towards all types of specific stress that should not be exhausted. Experimental stress levels are transferred into general stress levels of the SAM using stress-related mortality as a common link. These general stress levels of independent stressors are additive, with the sum determining the total stress exerted on a population. With this approach, we provide a tool that quantitatively predicts the highly synergistic direct effects of independent stressor combinations.
Journal Article
On the importance of statistics in molecular simulations for thermodynamics, kinetics and simulation box size
2020
Computational simulations, akin to wet-lab experimentation, are subject to statistical fluctuations. Assessing the magnitude of these fluctuations, that is, assigning uncertainties to the computed results, is of critical importance for drawing statistically reliable conclusions. Here, we use the simulation box size as an independent variable to demonstrate how crucial it is to gather sufficient data before drawing any conclusions about potential thermodynamic and kinetic effects. In various systems, ranging from solvation free energies to protein conformational transition rates, we showcase how the proposed simulation box size effect disappears with increased sampling. This indicates that, if at all, the simulation box size only minimally affects both the thermodynamics and the kinetics of the type of biomolecular systems presented in this work.
Journal Article
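A standard way to assign the uncertainties the abstract calls for is the bootstrap; the sketch below is a generic implementation (the abstract does not specify the authors' exact procedure, and correlated time-series data would additionally need block resampling):

```python
import random
import statistics

def bootstrap_se(samples, stat=statistics.mean, n_boot=2000, seed=0):
    """Bootstrap standard error of a statistic.

    Resample the data with replacement n_boot times, recompute the
    statistic on each resample, and return the spread of those replicates.
    """
    rng = random.Random(seed)
    n = len(samples)
    reps = [stat([samples[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.stdev(reps)
```

Because the standard error of a mean shrinks roughly as 1/sqrt(n), doubling the sampling repeatedly drives the uncertainty band down, which is exactly the mechanism by which an apparent box-size effect can dissolve once enough data are gathered.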
Genotype imputation for genome-wide association studies
2010
Key Points
* We review the statistical methods available for carrying out genotype imputation and compare their properties and performance.
* We also review the downstream uses of imputation, including boosting the power of genome-wide association studies, fine-mapping and allowing comparisons between studies.
* Several factors influence imputation accuracy, such as reference panel and study sample combination, sample size, genotyping chip and allele frequency.
* Both Bayesian and frequentist methods can be used to impute SNP genotypes to test for association.
* We review and compare the information metrics that are commonly used when carrying out quality control of imputed genotype data.
Genotype imputation is an important tool for genome-wide association studies as it increases power, aids in fine-mapping of associations and facilitates meta-analyses. This Review provides a guide to and comparison of imputation methods and discusses association testing using imputed data.
In the past few years genome-wide association (GWA) studies have uncovered a large number of convincingly replicated associations for many complex human diseases. Genotype imputation has been used widely in the analysis of GWA studies to boost power, fine-map associations and facilitate the combination of results across studies using meta-analysis. This Review describes the details of several different statistical methods for imputing genotypes, illustrates and discusses the factors that influence imputation performance, and reviews methods that can be used to assess imputation performance and test association at imputed SNPs.
Journal Article
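To make the idea of imputation concrete, here is a deliberately naive sketch under stated assumptions: the function name is hypothetical, genotypes are coded as alternate-allele counts (0/1/2) with None for missing, and the fill rule is just the expected dosage from reference-panel allele frequencies. The methods this Review compares instead model shared haplotype stretches between study and reference samples, typically with hidden Markov models:

```python
def impute_by_frequency(study, reference):
    """Naive baseline imputation: replace each missing genotype (None) at
    SNP j with the expected dosage 2 * p_j, where p_j is the alternate-allele
    frequency among reference-panel individuals at that SNP."""
    n_snps = len(reference[0])
    freq = [sum(ind[j] for ind in reference) / (2 * len(reference))
            for j in range(n_snps)]
    return [[g if g is not None else 2 * freq[j]
             for j, g in enumerate(ind)] for ind in study]

# Two study individuals, two SNPs, one missing call each:
reference = [[0, 2], [1, 2], [1, 2]]
study = [[None, 2], [0, None]]
imputed = impute_by_frequency(study, reference)
```

This frequency-only baseline ignores linkage disequilibrium entirely, which is precisely the information the haplotype-based methods exploit to reach usable accuracy; the gap between the two is why imputation quality metrics and reference-panel choice matter so much in practice.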