25,036 results for "Computer Simulation - statistics"
Learning and decision-making from rank data
The ubiquitous challenge of learning and decision-making from rank data arises in situations where intelligent systems collect preference and behavior data from humans, learn from the data, and then use the data to help humans make efficient, effective, and timely decisions. Often, such data are represented by rankings. This book surveys some recent progress toward addressing the challenge from the considerations of statistics, computation, and socio-economics. We will cover classical statistical models for rank data, including random utility models, distance-based models, and mixture models. We will discuss and compare classical and state-of-the-art algorithms, such as algorithms based on Minorize-Majorization (MM), Expectation-Maximization (EM), Generalized Method-of-Moments (GMM), rank breaking, and tensor decomposition. We will also introduce principled Bayesian preference elicitation frameworks for collecting rank data. Finally, we will examine socio-economic aspects of statistically desirable decision-making mechanisms, such as Bayesian estimators. This book can be useful in three ways: (1) for theoreticians in statistics and machine learning to better understand the considerations and caveats of learning from rank data, compared to learning from other types of data, especially cardinal data; (2) for practitioners to apply algorithms covered by the book for sampling, learning, and aggregation; and (3) as a textbook for graduate students or advanced undergraduate students to learn about the field. This book requires that the reader has basic knowledge in probability, statistics, and algorithms. Knowledge in social choice would also help but is not required.
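As a concrete illustration of the random utility models the book covers, here is a minimal sketch (not taken from the book) that fits a Plackett-Luce model to toy rankings by maximum likelihood; the item count, the rankings, and the use of SciPy's general-purpose optimizer rather than an MM or EM update are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def plackett_luce_nll(log_w, rankings):
    """Negative log-likelihood of full rankings under a Plackett-Luce model.

    log_w: log-strength per item; rankings: sequences of item indices,
    listed from most to least preferred.
    """
    nll = 0.0
    for r in rankings:
        w = log_w[np.asarray(r)]
        for j in range(len(r) - 1):
            # probability that item r[j] is chosen first among the remaining items r[j:]
            nll -= w[j] - np.logaddexp.reduce(w[j:])
    return nll

# Toy data: 3 items; item 0 tends to be preferred.
rankings = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (0, 1, 2)]
res = minimize(plackett_luce_nll, x0=np.zeros(3), args=(rankings,))
strengths = np.exp(res.x - np.max(res.x))   # identified only up to a common scale
print(strengths)
```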
Explicit inclusion of treatment in prognostic modeling was recommended in observational and randomized settings
To compare different methods to handle treatment when developing a prognostic model that aims to produce accurate probabilities of the outcome of individuals if left untreated. Simulations were performed based on two normally distributed predictors, a binary outcome, and a binary treatment, mimicking a randomized trial or an observational study. Comparison was made between simply ignoring treatment (SIT), restricting the analytical data set to untreated individuals (AUT), inverse probability weighting (IPW), and explicit modeling of treatment (MT). Methods were compared in terms of predictive performance of the model and the proportion of incorrect treatment decisions. Omitting a genuine predictor of the outcome from the prognostic model decreased model performance, in both an observational study and a randomized trial. In randomized trials, the proportion of incorrect treatment decisions was smaller when applying AUT or MT, compared to SIT and IPW. In observational studies, MT was superior to all other methods regarding the proportion of incorrect treatment decisions. If a prognostic model aims to produce correct probabilities of the outcome in the absence of treatment, ignoring treatments that affect that outcome can lead to suboptimal model performance and incorrect treatment decisions. Explicit modeling of treatment is recommended.
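A minimal sketch of the contrast between two of the compared strategies, assuming a simple data-generating mechanism (two normal predictors, a randomized treatment that lowers risk, and a logistic outcome model); the coefficients and sample size are illustrative, and this is not the authors' simulation code. SIT ignores treatment, while MT includes it and then predicts risk as if untreated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 2))
t = rng.integers(0, 2, size=n)                       # randomized treatment
lp_untreated = -1.0 + 0.8 * X[:, 0] + 0.6 * X[:, 1]  # true linear predictor without treatment
p = 1 / (1 + np.exp(-(lp_untreated - 1.2 * t)))      # treatment lowers risk
y = rng.binomial(1, p)
p_true_untreated = 1 / (1 + np.exp(-lp_untreated))

# SIT: simply ignore treatment
sit = LogisticRegression().fit(X, y)
pred_sit = sit.predict_proba(X)[:, 1]

# MT: model treatment explicitly, then predict risk as if untreated (t = 0)
mt = LogisticRegression().fit(np.column_stack([X, t]), y)
pred_mt = mt.predict_proba(np.column_stack([X, np.zeros(n)]))[:, 1]

for name, pred in [("SIT", pred_sit), ("MT", pred_mt)]:
    print(name, "MSE vs true untreated risk:", np.mean((pred - p_true_untreated) ** 2))
```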
Power Analysis and Sample Size Planning in ANCOVA Designs
The analysis of covariance (ANCOVA) has notably proven to be an effective tool in a broad range of scientific applications. Despite the well-documented literature about its principal uses and statistical properties, the corresponding power analysis for the general linear hypothesis tests of treatment differences remains a less discussed issue. The frequently recommended procedure is a direct application of the ANOVA formula in combination with reduced degrees of freedom and a correlation-adjusted variance. This article aims to explicate the conceptual problems and practical limitations of the common method. An exact approach is proposed for power and sample size calculations in ANCOVA with random assignment and multinormal covariates. Both theoretical examination and numerical simulation are presented to justify the advantages of the suggested technique over the current formula. The improved solution is illustrated with an example regarding the comparative effectiveness of interventions. In order to facilitate the application of the described power and sample size calculations, accompanying computer programs are also presented.
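For reference, a minimal sketch of the frequently recommended approximate procedure that the article critiques: the ANOVA noncentral-F power formula with a correlation-adjusted variance and covariate-reduced error degrees of freedom. The effect size, correlation, and group sizes are illustrative, and the article's exact approach is not reproduced here.

```python
from scipy import stats

def ancova_power_approx(f, n_per_group, k_groups, rho, n_cov=1, alpha=0.05):
    """Approximate ANCOVA power: ANOVA formula with a correlation-adjusted
    variance and covariate-reduced error degrees of freedom."""
    n_total = n_per_group * k_groups
    f2_adj = f**2 / (1.0 - rho**2)          # error variance shrinks by (1 - rho^2)
    df1 = k_groups - 1
    df2 = n_total - k_groups - n_cov        # one error df lost per covariate
    ncp = f2_adj * n_total                  # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)

# e.g. 3 groups, 30 per group, medium effect f = 0.25, covariate-outcome correlation 0.5
print(ancova_power_approx(f=0.25, n_per_group=30, k_groups=3, rho=0.5))
```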
Estimation of the Optimal Surrogate Based on a Randomized Trial
A common scientific problem is to determine a surrogate outcome for a long-term outcome so that future randomized studies can restrict themselves to only collecting the surrogate outcome. We consider the setting that we observe independent and identically distributed observations of a random variable consisting of baseline covariates, a treatment, a vector of candidate surrogate outcomes at an intermediate time point, and the final outcome of interest at a final time point. We assume the treatment is randomized, conditional on the baseline covariates. The goal is to use these data to learn a most-promising surrogate for use in future trials for inference about a mean contrast treatment effect on the final outcome. We define an optimal surrogate for the current study as the function of the data generating distribution collected by the intermediate time point that satisfies the Prentice definition of a valid surrogate endpoint and that optimally predicts the final outcome: this optimal surrogate is an unknown parameter. We show that this optimal surrogate is a conditional mean and present super-learner and targeted super-learner based estimators, whose predicted outcomes are used as the surrogate in applications. We demonstrate a number of desirable properties of this optimal surrogate and its estimators, and study the methodology in simulations and an application to dengue vaccine efficacy trials.
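A minimal sketch of estimating such a conditional mean with a stacked (super-learner-style) regression, assuming simulated baseline covariates W, a randomized treatment A, candidate surrogates S, and final outcome Y; the variable names, learners, and data-generating model are illustrative assumptions, and this is not the targeted estimator from the paper.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, RidgeCV

rng = np.random.default_rng(1)
n = 2_000
W = rng.normal(size=(n, 3))                  # baseline covariates
A = rng.integers(0, 2, size=n)               # randomized treatment
S = 0.5 * A[:, None] + W[:, :2] + rng.normal(scale=0.5, size=(n, 2))  # candidate surrogates
Y = 1.5 * S[:, 0] - 0.5 * S[:, 1] + W[:, 2] + rng.normal(size=n)      # final outcome

X = np.column_stack([W, A, S])               # all data available by the intermediate time point
surrogate_fit = StackingRegressor(
    estimators=[("lm", LinearRegression()),
                ("rf", RandomForestRegressor(n_estimators=200, random_state=0))],
    final_estimator=RidgeCV(),
)
surrogate_fit.fit(X, Y)

# The fitted conditional mean, evaluated on intermediate-time data, plays the
# role of the estimated optimal surrogate in a future trial.
s_hat = surrogate_fit.predict(X)
print(np.corrcoef(s_hat, Y)[0, 1])
```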
Probability and Statistics
Probability and Statistics with Integrated Software Routines is a calculus-based treatment of probability, concurrent with and integrated with statistics, through interactive, tailored software applications designed to enhance understanding of the phenomena of probability and statistics. The software programs make the book unique. The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc., all wrapped up in the scientific method. See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698). The book incorporates more than 1,000 engaging problems with answers, includes more than 300 solved examples, and uses varied problem-solving methods.
A novel algorithm-driven hybrid simulation learning method to improve acquisition of endotracheal intubation skills: a randomized controlled study
Background Simulation-based training is a clinical skill learning method that can replicate real-life situations in an interactive manner. In our study, we compared a novel hybrid learning method with conventional simulation learning in the teaching of endotracheal intubation. Methods One hundred medical students and residents were randomly divided into two groups and were taught endotracheal intubation. The first group of subjects (control group) studied in the conventional way via lectures and classic simulation-based training sessions. The second group (experimental group) used the hybrid learning method, where the teaching process consisted of distance learning and small group peer-to-peer simulation training sessions with remote supervision by the instructors. After the teaching process, endotracheal intubation (ETI) procedures were performed on real patients under the supervision of an anesthesiologist in an operating theater. Each step of the procedure was evaluated by a standardized assessment form (checklist) for both groups. Results Thirty-four subjects constituted the control group and 43 were in the experimental group. The hybrid group (88%) showed significantly better ETI performance in the operating theater compared with the control group (52%). Further, all hybrid group subjects (100%) followed the correct sequence of actions, while in the control group only 32% followed proper sequencing. Conclusions We conclude that our novel algorithm-driven hybrid simulation learning method improves acquisition of endotracheal intubation skills, with a high degree of acceptability and satisfaction by the learners, as compared with classic simulation-based training.
A simulation-based pilot study of crisis checklists in the emergency department
Checklists can improve adherence to standardized procedures and minimize human error. We aimed to test if implementation of a checklist was feasible and effective in enhancing patient care in an emergency department handling internal medicine cases. We developed four critical event checklists and confronted volunteer teams with a series of four simulated emergency scenarios. In two scenarios, the teams were provided access to the crisis checklists in a randomized cross-over design. Simulated patient outcome plus statement of the underlying diagnosis defined the primary endpoint and adherence to key processes such as time to commence CPR represented the secondary endpoints. A questionnaire was used to capture participants’ perception of clinical relevance and manageability of the checklists. Six teams of four volunteers completed a total of 24 crisis sequences. The primary endpoint was reached in 8 out of 12 sequences with and in 2 out of 12 sequences without a checklist (Odds ratio, 10; CI 1.11, 123.43; p = 0.03607, Fisher’s exact test). Adherence to critical steps was significantly higher in all scenarios for which a checklist was available (performance score of 56.3% without checklist, 81.9% with checklist, p = 0.00284, linear regression model). All participants rated the checklist as useful and 22 of 24 participants would use the checklist in real life. Checklist use had no influence on CPR quality. The use of context-specific checklists showed a statistically significant influence on team performance and simulated patient outcome and contributed to adherence to standard clinical practices in emergency situations.
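The reported primary-endpoint comparison (8 of 12 sequences with a checklist vs. 2 of 12 without) can be checked with Fisher's exact test; the 2x2 table layout below is inferred from the abstract.

```python
from scipy.stats import fisher_exact

#                  endpoint reached, not reached
table = [[8, 4],   # with checklist    (8 of 12 sequences)
         [2, 10]]  # without checklist (2 of 12 sequences)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)   # ~10.0 and ~0.036, matching the reported result
```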
An Integrated Paediatric Population PK/PD Analysis of dDAVP: How do PK Differences Translate to Clinical Outcomes?
Introduction The bioequivalence of two formulations of desmopressin (dDAVP), a vasopressin analogue prescribed for nocturnal enuresis treatment in children, has been previously confirmed in adults but not in children. In this study, we aimed to study the pharmacokinetics (PK) and pharmacodynamics (PD) of these two formulations, in both fasted and fed children, including patients younger than 6 years of age. Methods Previously published data from one PK study and one PK/PD study in children aged between 6 and 16 years were combined with a new PK/PD study in children aged between 6 months and 8 years, and analysed using population PK/PD modelling. Simulations were performed to further explore the relative bioavailability of both formulations and evaluate current dosing strategies. Results The complex absorption behaviour of the lyophilizate was modelled using a double input, linked to a one-compartment model with linear elimination and an indirect response model linking dDAVP concentration to produced urine volume and osmolality. The final model described the observed data well and elucidated the complexity of bioequivalence and therapeutic equivalence of the two formulations. Simulations showed that the current dosing regimen using a fixed dose of lyophilizate 120 μg is not adequate for children, assuming children to be in the fed state when taking dDAVP. A new age- and weight-based dosing regimen was suggested and was shown to lead to improved, better-tailored effects. Conclusions Bioequivalence and therapeutic equivalence data of two formulations of the same drug in adults cannot be readily extrapolated to children. This study shows the importance of well-designed paediatric clinical trials and how they can be analysed using mixed-effects modelling to make clinically relevant inferences. A follow-up clinical trial testing the proposed dDAVP dosing regimen should be performed. Clinical Trial Registration This trial has been registered at www.clinicaltrials.gov (identifier NCT02584231; EudraCT 2014-005200-13).
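A generic sketch of the model class described, assuming a single first-order absorption input into a one-compartment model with linear elimination and an indirect-response model in which drug concentration inhibits urine production; all parameter values are illustrative, and the published double-input model and its estimates are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (not the published estimates)
ka, ke, V = 1.5, 0.7, 30.0   # absorption rate (1/h), elimination rate (1/h), volume (L)
kin, kout = 60.0, 1.0        # zero-order urine production (mL/h) and first-order turnover (1/h)
imax, ic50 = 0.9, 0.02       # maximal inhibition and concentration of half-maximal effect (ng/mL)
dose = 0.6                   # hypothetical absorbed dDAVP amount (µg), illustrative only

def rhs(t, y):
    a_gut, a_central, urine_rate = y
    conc = a_central / V                        # µg/L == ng/mL
    inhibition = imax * conc / (ic50 + conc)    # indirect response: dDAVP suppresses production
    return [-ka * a_gut,
            ka * a_gut - ke * a_central,
            kin * (1.0 - inhibition) - kout * urine_rate]

sol = solve_ivp(rhs, (0.0, 12.0), [dose, 0.0, kin / kout], dense_output=True)
t = np.linspace(0.0, 12.0, 49)
conc, urine_rate = sol.sol(t)[1] / V, sol.sol(t)[2]  # concentration-time and urine-production profiles
```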
Simulations to Predict Clinical Trial Outcome of Bevacizumab Plus Chemotherapy vs. Chemotherapy Alone in Patients With First‐Line Gastric Cancer and Elevated Plasma VEGF‐A
To simulate clinical trials to assess overall survival (OS) benefit of bevacizumab in combination with chemotherapy in selected patients with gastric cancer (GC), a modeling framework linking OS with tumor growth inhibition (TGI) metrics and baseline patient characteristics was developed. Various TGI metrics were estimated using TGI models and data from two phase III studies comparing bevacizumab plus chemotherapy vs. chemotherapy as first‐line therapy in 976 GC patients. Time‐to‐tumor‐growth (TTG) was the best TGI metric to predict OS. TTG, Eastern Cooperative Oncology Group (ECOG) score, albumin level, and Asian ethnicity were significant covariates in the final OS model. The model correctly predicted a decreased hazard ratio favorable to bevacizumab in patients with high baseline plasma VEGF‐A above the median of 113.4 ng/L. Based on trial simulations, in trials enrolling patients with elevated baseline plasma VEGF‐A (500 patients per arm), the expected hazard ratio was 0.82 (95% prediction interval: 0.70–0.95), independent of ethnicity.
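A heavily simplified sketch of the trial-simulation idea, assuming a gamma-distributed time-to-tumor-growth (TTG) that is longer on the experimental arm, an exponential overall-survival model whose hazard decreases with TTG, and a crude events-per-person-time hazard ratio; none of the parameter values or the resulting hazard ratio come from the published TGI-OS model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_arm, follow_up = 500, 36.0   # patients per arm, months of follow-up (illustrative)

def simulate_arm(ttg_mean):
    # gamma-distributed time-to-tumor-growth; longer TTG -> lower OS hazard
    ttg = rng.gamma(shape=4.0, scale=ttg_mean / 4.0, size=n_per_arm)
    hazard = 0.05 * (ttg / 4.0) ** -0.4
    times = rng.exponential(1.0 / hazard)      # exponential OS times given each patient's hazard
    observed = np.minimum(times, follow_up)    # administrative censoring at end of follow-up
    return (times <= follow_up).sum(), observed.sum()

events_ctrl, time_ctrl = simulate_arm(ttg_mean=4.0)   # chemotherapy alone
events_bev, time_bev = simulate_arm(ttg_mean=6.0)     # assumed delay in tumor growth
hr = (events_bev / time_bev) / (events_ctrl / time_ctrl)  # crude exponential-model hazard ratio
print(round(hr, 2))
```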
External validation of clinical prediction models: simulation-based sample size calculations were more reliable than rules-of-thumb
• After a clinical prediction model is developed, it is usually necessary to undertake an external validation study that examines the model's performance in new data from the same or a different population. External validation studies should have an appropriate sample size, in order to estimate model performance measures precisely for calibration, discrimination, and clinical utility.
• Rules-of-thumb suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise, and not specific to the model or validation setting.
• Our work shows that precision of performance estimates is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Furthermore, sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration.
• Our new proposal uses a simulation-based sample size calculation, which accounts for the LP distribution and (mis)calibration in the validation sample, and calculates the sample size (and events) required conditional on these factors.
• The approach requires the researcher to specify the desired precision for each performance measure of interest (calibration, discrimination, net benefit, etc.), the model's anticipated LP distribution in the validation population, and whether or not the model is well calibrated. Guidance for how to specify these values is given, and R and Stata code is provided.

Sample size “rules-of-thumb” for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise, and not specific to the model or validation setting. We investigate factors affecting precision of model performance estimates upon external validation, and propose a more tailored sample size approach. Simulation of logistic regression prediction models to investigate factors associated with precision of performance estimates. Then, explanation and illustration of a simulation-based approach to calculate the minimum sample size required to precisely estimate a model's calibration, discrimination, and clinical utility. Precision is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration. The simulation-based calculation accounts for the LP distribution and (mis)calibration in the validation sample. Application identifies 2,430 required participants (531 events) for external validation of a deep vein thrombosis diagnostic model. Where researchers can anticipate the distribution of the model's LP (e.g., based on the development sample or a pilot study), a simulation-based approach for calculating sample size for external validation offers more flexibility and reliability than rules-of-thumb.
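A minimal sketch of the simulation-based idea (not the authors' provided R/Stata code), assuming a normal linear-predictor distribution and a specified calibration intercept and slope: simulate validation samples of a candidate size, summarise the precision of the calibration slope and the spread of the c-statistic, and increase n until the desired precision is reached.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def precision_at_n(n, lp_mean=-2.0, lp_sd=1.0, cal_intercept=0.0, cal_slope=1.0, n_sim=200):
    """Simulate validation samples of size n and summarise precision of
    the calibration slope (CI width) and spread of the c-statistic."""
    slope_ci_widths, c_stats = [], []
    for _ in range(n_sim):
        lp = rng.normal(lp_mean, lp_sd, size=n)                  # assumed LP distribution
        p = 1 / (1 + np.exp(-(cal_intercept + cal_slope * lp)))  # assumed (mis)calibration
        y = rng.binomial(1, p)
        fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)       # calibration intercept + slope
        lo, hi = fit.conf_int()[1]                               # 95% CI for the slope
        slope_ci_widths.append(hi - lo)
        c_stats.append(roc_auc_score(y, lp))
    return np.mean(slope_ci_widths), np.std(c_stats)

for n in (200, 500, 1000, 2500):
    width, c_sd = precision_at_n(n)
    print(n, round(width, 3), round(c_sd, 3))   # grow n until precision targets are met
```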