1,347 result(s) for "Biometry -- Methodology"
Bayesian biostatistics
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly because an increasing number of practitioners value the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introductory and more advanced chapters, this book provides an invaluable understanding of the complex world of biomedical statistics, illustrated via a diverse range of applications taken from epidemiology, exploratory clinical studies, health promotion studies, image analysis and clinical trials. Key Features:
* Provides an authoritative account of Bayesian methodology, from its most basic elements to its practical implementation, with an emphasis on healthcare techniques.
* Contains introductory explanations of Bayesian principles common to all areas of application.
* Presents clear and concise examples in biostatistics applications such as clinical trials, longitudinal studies, bioassay, survival, image analysis and bioinformatics.
* Illustrated throughout with examples using software including WinBUGS, OpenBUGS, SAS and various dedicated R programs.
* Highlights the differences between the Bayesian and classical approaches.
* Supported by an accompanying website hosting free software and case study guides.
Bayesian Biostatistics introduces the reader smoothly to Bayesian statistical methods with chapters that gradually increase in level of complexity. Master's students in biostatistics, applied statisticians and all researchers with a good background in classical statistics who have an interest in Bayesian methods will find this book useful.
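The Bayesian updating at the heart of the paradigm this blurb describes can be shown with a minimal conjugate sketch. The numbers, prior, and function below are hypothetical illustrations, not taken from the book:

```python
# Minimal Bayesian update (hypothetical numbers): a Beta(1, 1) prior on a
# response probability, updated with binomial data from a small trial.

def beta_binomial_posterior(successes, failures, a_prior=1.0, b_prior=1.0):
    """Return Beta posterior parameters after observing binomial data."""
    return a_prior + successes, b_prior + failures

# Suppose 14 responders out of 20 patients in an exploratory clinical study.
a_post, b_post = beta_binomial_posterior(successes=14, failures=6)
posterior_mean = a_post / (a_post + b_post)  # (1 + 14) / (2 + 20) = 15/22
print(a_post, b_post, round(posterior_mean, 3))
```

In practice such updates are rarely done by hand; this is the conjugate special case that software like WinBUGS or OpenBUGS generalizes by simulation.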
Biostatistics : a computing approach
The emergence of high-speed computing has facilitated the development of many exciting statistical and mathematical methods in the last 25 years, broadening the landscape of available tools in statistical investigations of complex data. Biostatistics: A Computing Approach focuses on visualization and computational approaches associated with both modern and classical techniques. Furthermore, it promotes computing as a tool for performing both analyses and simulations that can facilitate such understanding. As a practical matter, programs in R and SAS are presented throughout the text. In addition to these programs, appendices describing the basic use of SAS and R are provided. Teaching by example, this book emphasizes the importance of simulation and numerical exploration in a modern-day statistical investigation. A few statistical methods that can be implemented with simple calculations are also worked into the text to build insight about how the methods really work. Suitable for students who have an interest in the application of statistical methods but do not necessarily intend to become statisticians, this book has been developed from Introduction to Biostatistics II, which the author taught for more than a decade at the University of Pittsburgh.
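The kind of simulation-based exploration this book advocates can be sketched briefly. The example below (illustrative, not from the book) estimates by Monte Carlo how often the usual 95% normal-theory confidence interval for a mean actually covers the true mean:

```python
# Monte Carlo check of confidence-interval coverage (illustrative sketch).
import numpy as np

rng = np.random.default_rng(seed=1)
n, n_sim, true_mean = 30, 2000, 5.0
covered = 0
for _ in range(n_sim):
    x = rng.normal(loc=true_mean, scale=2.0, size=n)
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)  # z-based interval
    if abs(x.mean() - true_mean) <= half_width:
        covered += 1
print(covered / n_sim)  # should be close to, and slightly below, 0.95
```

The slight undercoverage comes from using 1.96 rather than the t critical value at n = 30, exactly the kind of numerical insight simulation makes visible.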
Recent advances in biostatistics
This unique volume provides self-contained accounts of some recent trends in biostatistics methodology and their applications. It includes state-of-the-art reviews and original contributions. The articles included in this volume are based on a careful selection of peer-reviewed papers, authored by eminent experts in the field, representing a well-balanced mix of researchers from academia, the R&D sectors of government, and the pharmaceutical industry.
Quantitative methods for health research
A practical introduction to epidemiology, biostatistics, and research methodology for the whole health care community. This comprehensive text, which has been extensively revised with new material and additional topics, takes a practical slant to introduce health professionals and students to epidemiology, biostatistics, and research methodology. It draws examples from a wide range of topics, covering all of the main contemporary health research methods, including survival analysis, Cox regression, and systematic reviews and meta-analysis, with explanations that go beyond introductory concepts. This second edition of Quantitative Methods for Health Research: A Practical Interactive Guide to Epidemiology and Statistics also helps develop critical skills that will prepare students to move on to more advanced and specialized methods. A clear distinction is made between knowledge and concepts that all students should ensure they understand, and those that can be pursued further by those who wish to do so. Self-assessment exercises throughout the text help students explore and reflect on their understanding. A program of practical exercises in SPSS (using a prepared data set) helps to consolidate the theory and develop skills and confidence in data handling, analysis, and interpretation.
Highlights of the book include:
* Combining epidemiology and biostatistics to demonstrate the relevance and strength of statistical methods
* Emphasis on the interpretation of statistics, using examples from a variety of public health and health care situations to stress relevance and application
* Use of concepts related to examples of published research to show the application of methods and the balance between ideals and the realities of research in practice
* Integration of practical data analysis exercises to develop skills and confidence
* Supplementation by a student companion website that provides guidance on data handling in SPSS and the study data sets referred to in the text
Quantitative Methods for Health Research, Second Edition is a practical learning resource for students, practitioners and researchers in public health, health care and related disciplines, providing both a course book and a useful introductory reference.
SVM-RFE: selection and visualization of the most relevant features through non-linear kernels
Background Support vector machines (SVM) are a powerful tool to analyze data with a number of predictors approximately equal to or larger than the number of observations. However, application of SVM to biomedical data was originally limited because SVM was not designed to evaluate the importance of predictor variables. Creating predictor models based on only the most relevant variables is essential in biomedical research. Currently, substantial work has been done to allow assessment of variable importance in SVM models, but this work has focused on SVM implemented with linear kernels. The power of SVM as a prediction model is associated with the flexibility generated by use of non-linear kernels. Moreover, SVM has been extended to model survival outcomes. This paper extends the Recursive Feature Elimination (RFE) algorithm by proposing three approaches to rank variables based on non-linear SVM and SVM for survival analysis. Results The proposed algorithms allow visualization of each of the RFE iterations, and hence identification of the most relevant predictors of the response variable. Using simulation studies based on time-to-event outcomes and three real datasets, we evaluate the three methods, based on pseudo-samples and kernel principal component analysis, and compare them with the original SVM-RFE algorithm for non-linear kernels. The three algorithms we proposed generally performed better than the gold-standard RFE for non-linear kernels when comparing the truly most relevant variables with the variable ranks produced by each algorithm in simulation studies. Generally, RFE-pseudo-samples outperformed the other three methods, even when variables were assumed to be correlated in all tested scenarios. Conclusions The proposed approaches can be implemented with accuracy to select variables and assess the direction and strength of associations in analysis of biomedical data using SVM for categorical or time-to-event responses.
Variable selection, and interpretation of the direction and strength of associations between predictors and outcomes, can be carried out accurately with the proposed approaches, particularly the RFE-pseudo-samples approach, when analyzing biomedical data. These approaches perform better than the classical RFE of Guyon in scenarios realistic for the structure of biomedical data.
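The classical linear-kernel RFE of Guyon that this paper extends can be sketched compactly: fit a linear model, drop the feature with the smallest squared weight, and repeat until all features are ranked. A numpy-only illustration, with least-squares weights standing in for linear-SVM weights purely to keep the sketch self-contained:

```python
# Guyon-style Recursive Feature Elimination, illustrative sketch
# (least-squares weights substitute for linear-SVM weights).
import numpy as np

def rfe_rank(X, y):
    """Rank features by recursive elimination: repeatedly fit a linear model
    on the remaining features and drop the one with the smallest squared
    weight. Returns feature indices, most relevant first."""
    remaining = list(range(X.shape[1]))
    eliminated = []  # filled from least to most relevant
    while len(remaining) > 1:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        worst = remaining[int(np.argmin(w ** 2))]
        eliminated.append(worst)
        remaining.remove(worst)
    eliminated.append(remaining[0])
    return eliminated[::-1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(scale=0.1, size=200)
print(rfe_rank(X, y))  # features 2 and 4 should rank first and second
```

The paper's contribution is precisely that this weight-based criterion has no direct analogue for non-linear kernels, motivating the pseudo-sample and kernel-PCA rankings.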
Causal mediation analysis with multiple mediators
In diverse fields of empirical research—including many in the biological sciences—attempts are made to decompose the effect of an exposure on an outcome into its effects via a number of different pathways. For example, we may wish to separate the effect of heavy alcohol consumption on systolic blood pressure (SBP) into effects via body mass index (BMI), via gamma-glutamyl transpeptidase (GGT), and via other pathways. Much progress has been made, mainly due to contributions from the field of causal inference, in understanding the precise nature of statistical estimands that capture such intuitive effects, the assumptions under which they can be identified, and statistical methods for doing so. These contributions have focused almost entirely on settings with a single mediator, or a set of mediators considered en bloc; in many applications, however, researchers attempt a much more ambitious decomposition into numerous path-specific effects through many mediators. In this article, we give counterfactual definitions of such path-specific estimands in settings with multiple mediators, when earlier mediators may affect later ones, showing that there are many ways in which decomposition can be done. We discuss the strong assumptions under which the effects are identified, suggesting a sensitivity analysis approach when a particular subset of the assumptions cannot be justified. These ideas are illustrated using data on alcohol consumption, SBP, BMI, and GGT from the Izhevsk Family Study. We aim to bridge the gap from "single mediator theory" to "multiple mediator practice," highlighting the ambitious nature of this endeavor and giving practical suggestions on how to proceed.
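In the linear special case, the path-specific decomposition this article generalizes reduces to products of regression coefficients. A numpy sketch under that simplifying assumption, with hypothetical coefficients (the article's counterfactual framework requires no linearity and handles far harder identification issues):

```python
# Path-specific effects with two ordered mediators, linear special case.
# All coefficients are hypothetical; variable names echo the article's example.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
A = rng.normal(size=n)                               # exposure (e.g. alcohol)
M1 = 0.8 * A + rng.normal(size=n)                    # first mediator (e.g. BMI)
M2 = 0.3 * A + 0.5 * M1 + rng.normal(size=n)         # second mediator (e.g. GGT)
Y = 0.4 * A + 0.6 * M1 + 0.7 * M2 + rng.normal(size=n)  # outcome (e.g. SBP)

# In the linear case each path-specific effect is a product of coefficients:
paths = {
    "A->Y (direct)": 0.4,
    "A->M1->Y":      0.8 * 0.6,
    "A->M2->Y":      0.3 * 0.7,
    "A->M1->M2->Y":  0.8 * 0.5 * 0.7,  # earlier mediator affecting a later one
}
total_decomposed = sum(paths.values())  # 0.4 + 0.48 + 0.21 + 0.28 = 1.37

# The total effect is the slope of the marginal regression of Y on A.
slope = np.cov(A, Y)[0, 1] / np.var(A)
print(round(total_decomposed, 2), round(slope, 2))
```

Note the A->M1->M2->Y term: it exists only because an earlier mediator affects a later one, which is exactly the structure that makes the general (non-linear, counterfactual) decomposition non-trivial.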
On Estimation in Relative Survival
Estimation of relative survival has become the first and most basic step when reporting cancer survival statistics. Standard estimators are in routine use by all cancer registries. However, it has recently been noted that these estimators do not provide information on cancer mortality that is independent of the general population mortality of each country. Thus they are not suitable for comparison between countries. Furthermore, the commonly used interpretation of the relative survival curve is vague and misleading. The present article attempts to remedy these basic problems. The population quantities of the traditional estimators are carefully described and their interpretation discussed. We then propose a new estimator of net survival probability that enables the desired comparability between countries. The new estimator requires no modeling and is accompanied by a straightforward variance estimate. The methods are demonstrated on real as well as simulated data.
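The traditional relative-survival idea the article starts from is a ratio: observed survival in the cancer cohort divided by the survival expected from general-population life tables. A toy numpy sketch of that classical ratio (cohort, hazards, and population table are all invented; the article's proposed net-survival estimator is more refined than this):

```python
# Classical relative-survival ratio on toy data (illustrative only).
import numpy as np

def km_at(times, events, t_star):
    """Kaplan-Meier survival probability at time t_star."""
    s = 1.0
    for t in np.unique(times[(events == 1) & (times <= t_star)]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk
    return s

rng = np.random.default_rng(7)
# Toy cohort: total hazard 0.10/yr = population 0.02/yr + excess 0.08/yr.
times = rng.exponential(scale=1.0 / 0.10, size=2000)
events = np.ones(2000, dtype=int)            # no censoring, for simplicity

observed_5y = km_at(times, events, 5.0)      # close to exp(-0.10 * 5)
expected_5y = np.exp(-0.02 * 5.0)            # from a hypothetical life table
relative_5y = observed_5y / expected_5y      # close to exp(-0.08 * 5) ~ 0.67
print(round(relative_5y, 2))
```

The article's point is that estimators built on this kind of ratio still depend on each country's population mortality, which is what the proposed net-survival estimator removes.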
Quantifying Publication Bias in Meta-Analysis
Publication bias is a serious problem in systematic reviews and meta-analyses, which can affect the validity and generalization of conclusions. Currently, approaches to dealing with publication bias can be divided into two classes: selection models and funnel-plot-based methods. Selection models use weight functions to adjust the overall effect size estimate and are usually employed as sensitivity analyses to assess the potential impact of publication bias. Funnel-plot-based methods include visual examination of a funnel plot, regression and rank tests, and the nonparametric trim and fill method. Although these approaches have been widely used in applications, measures for quantifying publication bias are seldom studied in the literature. Such measures can be used as a characteristic of a meta-analysis; also, they permit comparisons of publication biases between different meta-analyses. Egger's regression intercept may be considered as a candidate measure, but it lacks an intuitive interpretation. This article introduces a new measure, the skewness of the standardized deviates, to quantify publication bias. This measure describes the asymmetry of the collected studies' distribution. In addition, a new test for publication bias is derived based on the skewness. Large sample properties of the new measure are studied, and its performance is illustrated using simulations and three case studies.
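A rough numpy sketch of the skewness-of-standardized-deviates idea, under simplifying assumptions that are mine, not necessarily the article's (fixed-effect pooling, and estimation of the pooled mean ignored when standardizing; the data are invented):

```python
# Skewness of standardized deviates as a publication-bias signal (simplified).
import numpy as np

def publication_bias_skewness(y, se):
    """Sample skewness of the studies' standardized deviates."""
    w = 1.0 / se ** 2
    mu_hat = np.sum(w * y) / np.sum(w)   # fixed-effect pooled estimate
    z = (y - mu_hat) / se                # standardized deviates
    zc = z - z.mean()
    return np.mean(zc ** 3) / np.mean(zc ** 2) ** 1.5

# Hypothetical meta-analysis: only the small studies show large effects,
# the classic funnel-plot asymmetry associated with publication bias.
y = np.array([0.10, 0.12, 0.15, 0.35, 0.50, 0.65])
se = np.array([0.05, 0.06, 0.08, 0.20, 0.28, 0.35])
print(round(publication_bias_skewness(y, se), 2))
```

A symmetric funnel gives skewness near zero; the asymmetric toy data above give a clearly nonzero value, which is the asymmetry the measure is designed to quantify.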
Dynamic Predictions and Prospective Accuracy in Joint Models for Longitudinal and Time‐to‐Event Data
In longitudinal studies it is often of interest to investigate how a marker that is repeatedly measured in time is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time‐to‐event data. In this article, we consider this modeling framework and focus particularly on the assessment of the predictive ability of the longitudinal marker for the time‐to‐event outcome. In particular, we start by presenting how survival probabilities can be estimated for future subjects based on their available longitudinal measurements and a fitted joint model. We then derive accuracy measures under the joint modeling framework and assess how well the marker is capable of discriminating between subjects who experience the event within a medically meaningful time frame and subjects who do not. We illustrate our proposals on a real data set on human immunodeficiency virus infected patients for which we are interested in predicting the time‐to‐death using their longitudinal CD4 cell count measurements.
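The dynamic-prediction idea can be reduced to its simplest building block: given a survival function S(t), a subject still event-free at time t survives to a later horizon u with probability S(u)/S(t). The sketch below uses a toy constant-hazard model with an invented marker-to-hazard mapping; the actual joint model updates this from a fitted longitudinal submodel:

```python
# Conditional survival S(u)/S(t), the core of dynamic prediction (toy model).
import math

def conditional_survival(hazard, t, u):
    """P(T > u | T > t) under a constant-hazard (exponential) model."""
    S = lambda s: math.exp(-hazard * s)
    return S(u) / S(t)

# Illustrative mapping, not fitted: a lower CD4-like marker means higher hazard.
hazard_low_marker = 0.30
hazard_high_marker = 0.10
p_low = conditional_survival(hazard_low_marker, t=2.0, u=5.0)
p_high = conditional_survival(hazard_high_marker, t=2.0, u=5.0)
print(round(p_low, 3), round(p_high, 3))  # exp(-0.3*3) = 0.407, exp(-0.1*3) = 0.741
```

In the joint-model setting these probabilities are recomputed each time a new marker measurement arrives, which is what makes the predictions "dynamic".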