Search Results

Filters: Discipline, Is Peer Reviewed, Item Type, Subject, Year (From/To), More Filters (Source, Language)
251,940 results for "statistical model"
Handbook of biosurveillance
Provides a coherent and comprehensive account of the theory and practice of real-time human disease outbreak detection, explicitly recognizing the revolution in practices of infection control and public health surveillance.
• Reviews the current mathematical, statistical, and computer science systems for early detection of disease outbreaks
• Provides extensive coverage of existing surveillance data
• Discusses experimental methods for data measurement and evaluation
• Addresses engineering and practical implementation of effective early detection systems
• Includes real case studies
Predicting spatial and temporal variability in crop yields: an inter-comparison of machine learning, regression and process-based models
Previous assessments of crop yield response to climate change have mainly relied on either process-based models or statistical models, with a focus on predicting changes in average yields, whereas there is growing interest in yield variability and extremes. In this study, we simulate US maize yield using process-based models, a traditional regression model, and a machine-learning algorithm, and, importantly, identify the weaknesses and strengths of each method in simulating the average, variability, and extremes of maize yield across the country. We show that both the regression and machine-learning models reproduce the observed pattern of yield averages well, while large biases are found for the process-based crop models even when fed with harmonized parameters. As for the probability distribution of yields, machine learning shows the best skill, followed by the regression model and the process-based models. For the country as a whole, machine learning can explain 93% of observed yield variability, followed by the regression model (51%) and the process-based models (42%). Based on the improved capability of the machine-learning algorithm, we estimate that US maize yield will decrease by 13.5% under the 2 °C global warming scenario (by the ∼2050s). Yields less than or equal to the 10th percentile of the yield distribution for the baseline period are predicted to occur in 19% and 25% of years under the 1.5 °C (by the ∼2040s) and 2 °C global warming scenarios, respectively, with potentially significant implications for food supply, prices, and trade. The machine-learning and regression methods are computationally much more efficient than process-based models, making it feasible to carry out probabilistic risk analysis of climate impacts on crop production for a wide range of future scenarios.
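The model comparison described in this abstract can be illustrated with a minimal sketch: fit a linear regression and a machine-learning model to the same data and compare the share of yield variability each explains. The data and predictors below are synthetic stand-ins, not the study's dataset, and RandomForestRegressor is merely one plausible stand-in for the machine-learning method.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 4))                  # stand-ins for climate predictors
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.2 * rng.normal(size=n)  # nonlinear yield response

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("regression", LinearRegression()),
                        ("machine learning", RandomForestRegressor(random_state=0))]:
        model.fit(X_tr, y_tr)
        print(f"{name}: share of yield variability explained (R^2) = "
              f"{r2_score(y_te, model.predict(X_te)):.2f}")

The flexible model captures the nonlinear response that a linear fit misses, which is the same mechanism the study invokes for the gap between the 93% and 51% figures.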
Contribution of vaccination to improved survival and health: modelling 50 years of the Expanded Programme on Immunization
WHO, as requested by its member states, launched the Expanded Programme on Immunization (EPI) in 1974 to make life-saving vaccines available to all globally. To mark the 50-year anniversary of EPI, we sought to quantify the public health impact of vaccination globally since the programme's inception. In this modelling study, we used a suite of mathematical and statistical models to estimate the global and regional public health impact of 50 years of vaccination against 14 pathogens in EPI. For the modelled pathogens, we considered coverage of all routine and supplementary vaccines delivered since 1974 and estimated the mortality and morbidity averted for each age cohort relative to a hypothetical scenario of no historical vaccination. We then used these modelled outcomes to estimate the contribution of vaccination to the globally declining infant and child mortality rates over this period. Since 1974, vaccination has averted 154 million deaths, including 146 million among children younger than 5 years, of whom 101 million were infants younger than 1 year. For every death averted, 66 years of full health were gained on average, translating to 10·2 billion years of full health gained. We estimate that vaccination has accounted for 40% of the observed decline in global infant mortality, and for 52% in the African region. In 2024, a child younger than 10 years is 40% more likely to survive to their next birthday relative to a hypothetical scenario of no historical vaccination. Increased survival probability is observed even well into late adulthood. Since 1974, substantial gains in childhood survival have occurred in every global region. We estimate that EPI has provided the single greatest contribution to improved infant survival over the past 50 years. In the context of strengthening primary health care, our results show that equitable universal access to immunisation remains crucial to sustain health gains and continue to save future lives from preventable infectious mortality. WHO.
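The headline figures quoted above are internally consistent, as a quick back-of-envelope check shows; this reconstructs only the stated arithmetic, not the underlying models.

    deaths_averted = 154e6        # deaths averted since 1974
    years_per_death = 66          # average years of full health gained per death averted
    print(f"{deaths_averted * years_per_death / 1e9:.1f} billion years of full health gained")
    # -> 10.2, matching the figure reported in the abstract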
Global burden of bacterial antimicrobial resistance in 2019: a systematic analysis
Antimicrobial resistance (AMR) poses a major threat to human health around the world. Previous publications have estimated the effect of AMR on incidence, deaths, hospital length of stay, and health-care costs for specific pathogen–drug combinations in select locations. To our knowledge, this study presents the most comprehensive estimates of AMR burden to date. We estimated deaths and disability-adjusted life-years (DALYs) attributable to and associated with bacterial AMR for 23 pathogens and 88 pathogen–drug combinations in 204 countries and territories in 2019. We obtained data from systematic literature reviews, hospital systems, surveillance systems, and other sources, covering 471 million individual records or isolates and 7585 study-location-years. We used predictive statistical modelling to produce estimates of AMR burden for all locations, including for locations with no data. Our approach can be divided into five broad components: number of deaths where infection played a role, proportion of infectious deaths attributable to a given infectious syndrome, proportion of infectious syndrome deaths attributable to a given pathogen, the percentage of a given pathogen resistant to an antibiotic of interest, and the excess risk of death or duration of an infection associated with this resistance. Using these components, we estimated disease burden based on two counterfactuals: deaths attributable to AMR (based on an alternative scenario in which all drug-resistant infections were replaced by drug-susceptible infections), and deaths associated with AMR (based on an alternative scenario in which all drug-resistant infections were replaced by no infection). We generated 95% uncertainty intervals (UIs) for final estimates as the 25th and 975th ordered values across 1000 posterior draws, and models were cross-validated for out-of-sample predictive validity. We present final estimates aggregated to the global and regional level. On the basis of our predictive statistical models, there were an estimated 4·95 million (3·62–6·57) deaths associated with bacterial AMR in 2019, including 1·27 million (95% UI 0·911–1·71) deaths attributable to bacterial AMR. At the regional level, we estimated the all-age death rate attributable to resistance to be highest in western sub-Saharan Africa, at 27·3 deaths per 100 000 (20·9–35·3), and lowest in Australasia, at 6·5 deaths (4·3–9·4) per 100 000. Lower respiratory infections accounted for more than 1·5 million deaths associated with resistance in 2019, making it the most burdensome infectious syndrome. The six leading pathogens for deaths associated with resistance (Escherichia coli, followed by Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa) were responsible for 929 000 (660 000–1 270 000) deaths attributable to AMR and 3·57 million (2·62–4·78) deaths associated with AMR in 2019. One pathogen–drug combination, meticillin-resistant S aureus, caused more than 100 000 deaths attributable to AMR in 2019, while six more each caused 50 000–100 000 deaths: multidrug-resistant excluding extensively drug-resistant tuberculosis, third-generation cephalosporin-resistant E coli, carbapenem-resistant A baumannii, fluoroquinolone-resistant E coli, carbapenem-resistant K pneumoniae, and third-generation cephalosporin-resistant K pneumoniae. To our knowledge, this study provides the first comprehensive assessment of the global burden of AMR, as well as an evaluation of the availability of data. 
AMR is a leading cause of death around the world, with the highest burdens in low-resource settings. Understanding the burden of AMR and the leading pathogen–drug combinations contributing to it is crucial to making informed and location-specific policy decisions, particularly about infection prevention and control programmes, access to essential antibiotics, and research and development of new vaccines and antibiotics. There are serious data gaps in many low-income settings, emphasising the need to expand microbiology laboratory capacity and data collection systems to improve our understanding of this important human health threat. Bill & Melinda Gates Foundation, Wellcome Trust, and Department of Health and Social Care using UK aid funding managed by the Fleming Fund.
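The uncertainty-interval convention stated in this abstract (95% UIs taken as the 25th and 975th ordered values across 1000 posterior draws) is simple to sketch. The draws below are hypothetical numbers chosen only to echo the reported central estimate of 1·27 million attributable deaths.

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical posterior draws for deaths attributable to AMR (millions)
    draws = np.sort(rng.lognormal(mean=0.24, sigma=0.16, size=1000))
    lower, upper = draws[24], draws[974]   # 25th and 975th ordered values (0-indexed)
    print(f"median {np.median(draws):.2f} million, 95% UI {lower:.2f}-{upper:.2f}")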
Comparing Different Statistical Models and Multiple Testing Corrections for Association Mapping in Soybean and Maize
Association mapping (AM) is a powerful tool for fine mapping complex trait variation down to nucleotide sequences by exploiting historical recombination events. A major problem in AM is controlling false positives that can arise from population structure and family relatedness. False positives are often controlled by incorporating covariates for structure and kinship in mixed linear models (MLM). These MLM-based methods are single-locus models and can introduce false negatives due to overfitting of the model. In this study, eight different statistical models, ranging from single-locus to multilocus, were compared for AM for three traits differing in heritability in two crop species: soybean (Glycine max L.) and maize (Zea mays L.). Soybean and maize were chosen, in part, due to their highly differentiated rates of linkage disequilibrium (LD) decay, which can influence false positive and false negative rates. The fixed and random model circulating probability unification (FarmCPU) performed better than other models based on an analysis of Q-Q plots and on the identification of the known number of quantitative trait loci (QTLs) in a simulated data set. These results indicate that FarmCPU controls both false positives and false negatives. Six qualitative traits in soybean with known published genomic positions were also used to compare these models, and the results indicated that FarmCPU consistently identified a single highly significant SNP closest to these known published genes. Multiple-comparison adjustments (Bonferroni, false discovery rate, and positive false discovery rate) were compared for these models using a simulated trait having 60% heritability and 20 QTLs. Multiple-comparison adjustments were overly conservative for MLM, CMLM, ECMLM, and MLMM and did not find any significant markers; in contrast, the ANOVA, GLM, and SUPER models found an excessive number of markers, far more than 20 QTLs. The FarmCPU model, using the less conservative methods (false discovery rate and positive false discovery rate), identified 10 QTLs, which was closer to the simulated number of QTLs than the number found by other models.
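The multiple-comparison adjustments compared in this abstract can be sketched on toy p-values; statsmodels implements Bonferroni and Benjamini-Hochberg FDR corrections, though the association models themselves are not reproduced here and the marker counts are invented.

    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    # 10,000 markers: 20 simulated QTLs with tiny p-values, the rest null
    pvals = np.concatenate([rng.uniform(0, 1e-7, size=20),
                            rng.uniform(size=9_980)])

    for method in ("bonferroni", "fdr_bh"):   # family-wise vs false-discovery-rate control
        reject, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
        print(f"{method}: {reject.sum()} markers declared significant")

Bonferroni controls the chance of any false positive and so is the more conservative of the two, which mirrors the abstract's finding that the strictest adjustments can miss true QTLs.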
Comparing and combining process-based crop models and statistical models with some implications for climate change
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
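One simple way to realize the "combined model" idea in this abstract is post-model calibration: regress observed yields on the process-model prediction plus the statistical covariate it omits. The sketch below uses synthetic data and an invented extreme-heat variable purely for illustration; it is not the paper's specification.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    n = 500
    heat = rng.uniform(0, 10, size=n)                      # hypothetical extreme-heat exposure
    yield_obs = 10 - 0.4 * heat + rng.normal(scale=0.5, size=n)
    ssm = 10 - 0.1 * heat + rng.normal(scale=0.5, size=n)  # process model understates heat damage

    X_proc = ssm.reshape(-1, 1)
    X_comb = np.column_stack([ssm, heat])                  # process output + omitted covariate
    r2_proc = LinearRegression().fit(X_proc, yield_obs).score(X_proc, yield_obs)
    r2_comb = LinearRegression().fit(X_comb, yield_obs).score(X_comb, yield_obs)
    print(f"R^2 process model alone: {r2_proc:.2f}")
    print(f"R^2 combined model:      {r2_comb:.2f}")

The combined fit improves precisely because the statistical term picks up the extreme-heat effect the process model does not account for.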
Statistical learning and selective inference
We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.

Significance: Most statistical analyses involve some kind of “selection”—searching through the data for the strongest associations. Measuring the strength of the resulting associations is a challenging task, because one must account for the effects of the selection. There are some new tools in selective inference for this task, and we illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
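A toy simulation makes the cherry-picking problem concrete: among many pure-noise predictors, the single most correlated one can look highly significant if its p-value is computed as though it had been chosen in advance. This sketch is illustrative only and does not implement the paper's selective-inference machinery.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, m = 100, 200
    y = rng.normal(size=n)
    X = rng.normal(size=(n, m))          # every predictor is pure noise

    pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(m)])
    best = int(np.argmin(pvals))
    print(f"cherry-picked predictor {best}: naive p = {pvals[best]:.4f}")
    # The naive p-value ignores the search over 200 candidates; a higher bar
    # (e.g. Bonferroni: multiply by 200) or selective inference is needed.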
Longitudinal study of fingerprint recognition
Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject’s age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.
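A multilevel analysis of this kind can be sketched with statsmodels' mixed-effects models: a random intercept per subject, with fixed effects for time interval and image quality. The data frame, effect sizes, and column names below are synthetic and illustrative, not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n_subj, n_rec = 200, 5
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_rec),
        "interval_y": rng.uniform(0, 12, n_subj * n_rec),  # years between the two prints
        "quality": rng.uniform(0, 1, n_subj * n_rec),      # image quality score
    })
    subj_eff = np.repeat(rng.normal(scale=2.0, size=n_subj), n_rec)
    df["score"] = (50 - 0.8 * df["interval_y"] + 5 * df["quality"]
                   + subj_eff + rng.normal(size=len(df)))

    # Multilevel model: random intercept per subject, fixed slopes for covariates
    fit = smf.mixedlm("score ~ interval_y + quality", df, groups=df["subject"]).fit()
    print(fit.params)

The negative fitted slope on the time-interval term corresponds to the abstract's finding that genuine match scores decrease as the interval between prints grows.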
On the Reliability of N-Mixture Models for Count Data
N-mixture models describe count data replicated in time and across sites in terms of abundance N and detectability p. They are popular because they allow inference about N while controlling for factors that influence p, without the need for marking animals. Using a capture-recapture perspective, we show that the loss of information that results from not marking animals is critical, making reliable statistical modeling of N and p problematic using just count data. One cannot reliably fit a model in which the detection probabilities are distinct among repeat visits, as this model is overspecified. This makes uncontrolled variation in p problematic. By counterexample, we show that even if p is constant after adjusting for covariate effects (the “constant p” assumption), scientifically plausible alternative models in which N (or its expectation) is non-identifiable, or does not even exist as a parameter, lead to data that are practically indistinguishable from data generated under an N-mixture model. This is particularly the case for sparse data, as is commonly seen in applications. We conclude that under the constant p assumption, reliable inference is only possible for relative abundance in the absence of questionable and/or untestable assumptions, or with better-quality data than seen in typical applications. Relative abundance models for counts can be readily fitted using Poisson regression in standard software such as R and are sufficiently flexible to allow controlling for p through the use of covariates while simultaneously modeling variation in relative abundance. If users require estimates of absolute abundance, they should collect auxiliary data that help with estimation of p.
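The identifiability problem the abstract describes follows from the N-mixture likelihood itself, in which the latent abundance N is marginalized out. A minimal sketch for one site, truncating the Poisson prior at a large bound and using made-up numbers:

    import numpy as np
    from scipy import stats

    def site_likelihood(counts, lam, p, n_max=300):
        """Marginal likelihood of repeated counts y_1..y_J at one site:
        sum over N of Poisson(N; lam) * prod_j Binomial(y_j; N, p)."""
        n_vals = np.arange(max(counts), n_max + 1)
        like = stats.poisson.pmf(n_vals, lam)         # prior on latent abundance N
        for y in counts:                              # independent repeat visits
            like = like * stats.binom.pmf(y, n_vals, p)
        return float(like.sum())

    # With sparse counts, (lam, p) pairs sharing the same expected count
    # lam * p = 2 give very similar likelihoods: N and p are hard to separate.
    counts = [2, 1, 3]
    print(site_likelihood(counts, lam=4, p=0.5))
    print(site_likelihood(counts, lam=8, p=0.25))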
Association between mobility patterns and COVID-19 transmission in the USA: a mathematical modelling study
Within 4 months of COVID-19 first being reported in the USA, it spread to every state and to more than 90% of all counties. During this period, the US COVID-19 response was highly decentralised, with stay-at-home directives issued by state and local officials, subject to varying levels of enforcement. The absence of a centralised policy and timeline combined with the complex dynamics of human mobility and the variable intensity of local outbreaks makes assessing the effect of large-scale social distancing on COVID-19 transmission in the USA a challenge. We used daily mobility data derived from aggregated and anonymised cell (mobile) phone data, provided by Teralytics (Zürich, Switzerland) from Jan 1 to April 20, 2020, to capture real-time trends in movement patterns for each US county, and used these data to generate a social distancing metric. We used epidemiological data to compute the COVID-19 growth rate ratio for a given county on a given day. Using these metrics, we evaluated how social distancing, measured by the relative change in mobility, affected the rate of new infections in the 25 counties in the USA with the highest number of confirmed cases on April 16, 2020, by fitting a statistical model for each county. Our analysis revealed that mobility patterns are strongly correlated with decreased COVID-19 case growth rates for the most affected counties in the USA, with Pearson correlation coefficients above 0·7 for 20 of the 25 counties evaluated. Additionally, the effect of changes in mobility patterns, which dropped by 35–63% relative to the normal conditions, on COVID-19 transmission are not likely to be perceptible for 9–12 days, and potentially up to 3 weeks, which is consistent with the incubation time of severe acute respiratory syndrome coronavirus 2 plus additional time for reporting. We also show evidence that behavioural changes were already underway in many US counties days to weeks before state-level or local-level stay-at-home policies were implemented, implying that individuals anticipated public health directives where social distancing was adopted, despite a mixed political message. This study strongly supports a role of social distancing as an effective way to mitigate COVID-19 transmission in the USA. Until a COVID-19 vaccine is widely available, social distancing will remain one of the primary measures to combat disease spread, and these findings should serve to support more timely policy making around social distancing in the USA in the future. None.
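The lag analysis described in this abstract amounts to correlating a county's mobility change with its case growth rate shifted by a candidate delay. The sketch below uses synthetic series (the study's mobility feed is proprietary) just to show the mechanics; the shapes and the 10-day response are invented for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    days, true_lag = 110, 10
    t = np.arange(days)
    mobility = 1 - 0.5 / (1 + np.exp(-(t - 60) / 5))     # mobility drops ~60 days in

    growth = np.empty(days)                              # growth responds true_lag days later
    growth[true_lag:] = 1.3 * mobility[:days - true_lag]
    growth[:true_lag] = 1.3 * mobility[0]
    growth += rng.normal(scale=0.02, size=days)

    for lag in (0, 5, 10, 15):
        r, _ = stats.pearsonr(mobility[:days - lag], growth[lag:])
        print(f"lag {lag:2d} days: Pearson r = {r:.2f}")

The correlation peaks near the true response delay, which is the logic behind the study's conclusion that mobility effects on transmission become perceptible only after 9-12 days.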