17,625 result(s) for "statistical interpretations"
Multiple imputation and its application
A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms, and its application to increasingly complex data structures. Multiple Imputation and its Application:
• Discusses the issues raised by the analysis of partially observed data, and the assumptions on which analyses rest.
• Presents a practical guide to the issues to consider when analysing incomplete data from both observational studies and randomized trials.
• Provides a detailed discussion of the practical use of MI with real-world examples drawn from medical and social statistics.
• Explores handling non-linear relationships and interactions with multiple imputation, survival analysis, multilevel multiple imputation, sensitivity analysis via multiple imputation, using non-response weights with multiple imputation, and doubly robust multiple imputation.
Multiple Imputation and its Application is aimed at quantitative researchers and students in the medical and social sciences, clarifying the issues raised by the analysis of incomplete data, outlining the rationale for MI, and describing how to consider and address the issues that arise in its application.
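The blurb above describes multiple imputation only in outline. As a rough illustration of the workflow, not anything prescribed by the book, the following Python sketch creates several imputed copies of a toy data set with scikit-learn, fits the same regression to each, and pools the results with Rubin's rules; all variable names and numbers are invented.

```python
# Illustrative sketch only: multiple imputation with pooling by Rubin's rules.
# Uses scikit-learn's IterativeImputer and statsmodels; the toy data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
x[rng.random(n) < 0.3] = np.nan           # make roughly 30% of x missing at random
data = pd.DataFrame({"x": x, "y": y})

m = 20                                    # number of imputed data sets
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = pd.DataFrame(imputer.fit_transform(data), columns=data.columns)
    fit = sm.OLS(completed["y"], sm.add_constant(completed["x"])).fit()
    estimates.append(fit.params["x"])
    variances.append(fit.bse["x"] ** 2)

# Rubin's rules: total variance = within-imputation + (1 + 1/m) * between-imputation
q_bar = np.mean(estimates)
u_bar = np.mean(variances)
b = np.var(estimates, ddof=1)
t = u_bar + (1 + 1 / m) * b
print(f"pooled slope = {q_bar:.3f}, pooled SE = {np.sqrt(t):.3f}")
```

The pooled variance combines the average within-imputation variance with the between-imputation variance, which is what keeps MI's standard errors honest about the uncertainty introduced by the missing data.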
Global, regional, and national burden of suicide mortality 1990 to 2016: systematic analysis for the Global Burden of Disease Study 2016
Objectives: To use the estimates from the Global Burden of Disease Study 2016 to describe patterns of suicide mortality globally, regionally, and for 195 countries and territories by age, sex, and Socio-demographic index, and to describe temporal trends between 1990 and 2016. Design: Systematic analysis. Main outcome measures: Crude and age standardised rates of suicide mortality and years of life lost were compared across regions and countries, and by age, sex, and Socio-demographic index (a composite measure of fertility, income, and education). Results: The total number of deaths from suicide increased by 6.7% (95% uncertainty interval 0.4% to 15.6%) globally over the 27 year study period to 817 000 (762 000 to 884 000) deaths in 2016. However, the age standardised mortality rate for suicide decreased by 32.7% (27.2% to 36.6%) worldwide between 1990 and 2016, similar to the decline in the global age standardised mortality rate of 30.6%. Suicide was the leading cause of age standardised years of life lost in the Global Burden of Disease region of high income Asia Pacific and was among the top 10 leading causes in eastern Europe, central Europe, western Europe, central Asia, Australasia, southern Latin America, and high income North America. Rates for men were higher than for women across regions, countries, and age groups, except for the 15 to 19 age group. There was variation in the female to male ratio, with higher ratios at lower levels of Socio-demographic index. Women experienced greater decreases in mortality rates (49.0%, 95% uncertainty interval 42.6% to 54.6%) than men (23.8%, 15.6% to 32.7%). Conclusions: Age standardised mortality rates for suicide have greatly reduced since 1990, but suicide remains an important contributor to mortality worldwide. Suicide mortality was variable across locations, between sexes, and between age groups. Suicide prevention strategies can be targeted towards vulnerable populations if they are informed by variations in mortality rates.
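Since the abstract repeatedly contrasts crude with age standardised rates, here is a minimal sketch of direct age standardisation; the rates and weights are invented and are not GBD figures.

```python
# Toy sketch of direct age standardisation (numbers are invented, not GBD data).
# ASR = sum over age groups of (age-specific rate * standard-population weight).
age_specific_rates = [2.0, 8.0, 15.0, 30.0]    # deaths per 100,000 in each age band
standard_weights = [0.40, 0.30, 0.20, 0.10]    # reference population shares (sum to 1)

asr = sum(r * w for r, w in zip(age_specific_rates, standard_weights))
print(f"age-standardised rate = {asr:.1f} per 100,000")
```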
Handbook of biosurveillance
Provides a coherent and comprehensive account of the theory and practice of real-time human disease outbreak detection, explicitly recognizing the revolution in practices of infection control and public health surveillance. Reviews the current mathematical, statistical, and computer science systems for early detection of disease outbreaks.
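The handbook surveys statistical systems for early outbreak detection in general terms. As one hedged illustration of the basic idea, and not a method taken from the book, the sketch below flags a day whose case count exceeds a simple historical threshold.

```python
# Minimal illustration of one common aberration-detection idea (not taken from the
# handbook): flag a day whose count exceeds the recent mean by 3 standard deviations.
import numpy as np

counts = np.array([12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 14, 12, 30])  # invented daily case counts
baseline = counts[:-1]                     # history used to set the threshold
threshold = baseline.mean() + 3 * baseline.std(ddof=1)
if counts[-1] > threshold:
    print(f"alert: {counts[-1]} cases exceeds threshold {threshold:.1f}")
```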
Cochran's Q test was useful to assess heterogeneity in likelihood ratios in studies of diagnostic accuracy
Empirical evaluations have demonstrated that diagnostic accuracy frequently shows significant heterogeneity between subgroups of patients within a study. We propose to use Cochran's Q test to assess heterogeneity in diagnostic likelihood ratios (LRs). We reanalyzed published data of six articles that showed within-study heterogeneity in diagnostic accuracy. We used the Q test to assess heterogeneity in LRs and compared the results of the Q test with those obtained using another method for stratified analysis of LRs, based on subgroup confidence intervals. We also studied the behavior of the Q test using hypothetical data. The Q test detected significant heterogeneity in LRs in all six example data sets. The Q test detected significant heterogeneity in LRs more frequently than the confidence interval approach (38% vs. 20%). When applied to hypothetical data, the Q test would be able to detect relatively small variations in LRs, of about a twofold increase, in a study including 300 participants. Reanalysis of published data using the Q test can be easily performed to assess heterogeneity in diagnostic LRs between subgroups of patients, potentially providing important information to clinicians who base their decisions on published LRs.
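For readers who want to see the mechanics, the following sketch shows how Cochran's Q can be computed from subgroup log likelihood ratios and their standard errors; the numbers are invented and do not come from the reanalysed articles.

```python
# Sketch of Cochran's Q applied to subgroup log likelihood ratios (invented numbers,
# not data from the article). Under homogeneity, Q follows a chi-squared distribution
# with k - 1 degrees of freedom.
import numpy as np
from scipy.stats import chi2

log_lr = np.array([np.log(4.0), np.log(7.5), np.log(2.2)])  # subgroup LR+ estimates
se = np.array([0.25, 0.30, 0.20])                           # standard errors of log LR+

w = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(w * log_lr) / np.sum(w)          # weighted mean log LR
q = np.sum(w * (log_lr - pooled) ** 2)           # Cochran's Q statistic
p = chi2.sf(q, df=len(log_lr) - 1)
print(f"Q = {q:.2f}, p = {p:.4f}")
```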
Introduction to biostatistical applications in health research with Microsoft Office Excel and R
The second edition of Introduction to Biostatistical Applications in Health Research delivers a thorough examination of the basic techniques and most commonly used statistical methods in health research.
Two Faced Janus of Quantum Nonlocality
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus-faced. One face is an apparent nonlocality of the Lüders projection and the other is Bell nonlocality (the wrong conclusion that the violation of Bell-type inequalities implies the existence of mysterious instantaneous influences between distant physical systems). According to the Lüders projection postulate, a quantum measurement performed on one of two distant entangled physical systems modifies their compound quantum state instantaneously. Therefore, if the quantum state is considered to be an attribute of the individual physical system and if one assumes that experimental outcomes are produced in a perfectly random way, one quickly arrives at a contradiction. This is a primary source of speculations about spooky action at a distance. Bell nonlocality as defined above has been explained and rejected by several authors; thus, we concentrate in this paper on the apparent nonlocality of the Lüders projection. As already pointed out by Einstein, the quantum paradoxes disappear if one adopts the purely statistical interpretation of quantum mechanics (QM). In the statistical interpretation of QM, if probabilities are considered to be objective properties of random experiments, we show that the Lüders projection corresponds to the passage from joint probabilities describing the whole set of data to marginal conditional probabilities describing particular subsets of data. If one adopts a subjective interpretation of probabilities, such as QBism, then the Lüders projection corresponds to standard Bayesian updating of the probabilities. The latter represent degrees of belief of local agents about outcomes of individual measurements which are placed, or which will be placed, at distant locations. In both approaches, the probability transformation does not happen in physical space, but only in information space. Thus, all speculations about spooky interactions or spooky predictions at a distance are simply misleading. Coming back to Bell nonlocality, we recall that in a recent paper we demonstrated, using exclusively the quantum formalism, that CHSH inequalities may be violated for some quantum states only because of the incompatibility of quantum observables and Bohr’s complementarity. Finally, we explain that our criticism of quantum nonlocality is in the spirit of the Hertz-Boltzmann methodology of scientific theories.
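The abstract's central claim is that the Lüders update is ordinary conditioning on a joint probability distribution rather than a physical action at a distance. The toy Python sketch below makes that concrete for perfectly anti-correlated outcomes (as for a singlet state measured along a common axis): conditioning changes the distant marginal instantly, but only in the information space.

```python
# Toy illustration of the point that the "collapse" can be read as ordinary
# conditioning on a joint distribution (probabilities chosen to mimic perfect
# anti-correlation, as for a singlet state measured along the same axis).
joint = {("+", "+"): 0.0, ("+", "-"): 0.5,
         ("-", "+"): 0.5, ("-", "-"): 0.0}      # P(Alice outcome, Bob outcome)

# Bob's unconditional (marginal) probability of "+": 50/50 before any information.
p_bob_plus = sum(p for (a, b), p in joint.items() if b == "+")

# After learning Alice got "+", condition the joint distribution on that event.
p_alice_plus = sum(p for (a, b), p in joint.items() if a == "+")
p_bob_plus_given_alice_plus = joint[("+", "+")] / p_alice_plus

print(p_bob_plus)                   # 0.5  before conditioning
print(p_bob_plus_given_alice_plus)  # 0.0  after conditioning: an informational update
```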
Approaches for reporting and interpreting statistically nonsignificant findings in evidence syntheses: a systematic review
To systematically review approaches for reporting and interpreting statistically nonsignificant findings with clinical relevance in evidence synthesis and to assess their methodological quality and the extent of their empirical validation. We searched Ovid MEDLINE ALL, Scopus, PsycInfo, Library of Guidance for Health Scientists, and MathSciNet for published studies in English from January 1, 2000, to January 30, 2025, for (1) best practices in guidance documents for evidence synthesis when interpreting clinically relevant nonsignificant findings, (2) statistical methods to support the interpretation, and (3) reporting practices. To identify relevant reporting guidelines, we also searched the Enhancing the QUAlity and Transparency Of health Research Network. The quality assessment applied the Mixed Methods Appraisal Tool, Appraisal tool for Cross-Sectional Studies, and checklists for expert opinion and systematic reviews from the Joanna Briggs Institute. At least two reviewers independently conducted all procedures, and a large language model facilitated data extraction and quality appraisal. Of the 5332 records, 37 were eligible for inclusion. Of these, 15 were editorials or opinion pieces, nine addressed methods, eight were cross-sectional or mixed-methods studies, four were journal guidance documents, and one was a systematic review. Twenty-seven records met the quality criteria of the appraisal tool relevant to their study design or publication type, while 10 records, comprising one systematic review, two editorials or opinion pieces, and seven cross-sectional studies, did not. Relevant methodological approaches to evidence synthesis included utilization of uncertainty intervals and their integration with various statistical measures (15 of 37, 41%), Bayes factors (six of 37, 16%), likelihood ratios (three of 37, 8%), effect conversion measures (two of 37, 5%), equivalence testing (two of 37, 5%), modified Fisher's test (one of 37, 3%), and reverse fragility index (one of 37, 3%). Reporting practices included problematic “null acceptance” language (14 of 37, 38%), with some records discouraging the inappropriate claim of no effect based on nonsignificant findings (nine of 37, 24%). None of the proposed methods were empirically tested with interest holders. Although various approaches have been proposed to improve the presentation and interpretation of statistically nonsignificant findings, a widely accepted consensus has not emerged, as these approaches have yet to be systematically tested for their practicality and validity. This review provides a comprehensive overview of available methodological approaches spanning both the frequentist and Bayesian statistical frameworks and identifies critical gaps in empirical validation of some approaches, namely the lack of thresholds to guide the interpretation of results. These findings highlight the need for systematic testing of proposed methods with interest holders and the development of evidence-based guidance to support appropriate interpretation of nonsignificant results in evidence synthesis. This review looked at how best to report results that are not statistically significant, because some of these findings can still be important to inform clinical care or health policy. We searched databases for studies published between 2000 and 2025. Out of more than 5000 records, 37 studies were relevant. These studies showed that there is no single best way to report nonsignificant findings.
• Statistically nonsignificant findings are inaccurately interpreted and reported.
• Mapping diverse methods reveals complexity in interpreting nonsignificant findings.
• No guidelines exist for interpretation of meaningful but nonsignificant findings.
• Methods outlined herein may complement interpreting nonsignificant findings.
• Empirical validation of methods to interpret nonsignificant findings is warranted.
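Equivalence testing is one of the approaches the review catalogues for interpreting nonsignificant findings. The sketch below shows a generic two one-sided tests (TOST) procedure on a normal approximation; the estimate, standard error, and margins are invented, and the review does not endorse these particular values.

```python
# Minimal sketch of equivalence testing via two one-sided tests (TOST) on a normal
# approximation; the estimate, standard error, and margins below are invented.
from scipy.stats import norm

estimate, se = 0.05, 0.10        # observed effect and its standard error
lower, upper = -0.30, 0.30       # pre-specified equivalence margins

p_lower = norm.sf((estimate - lower) / se)   # H0: true effect <= lower margin
p_upper = norm.cdf((estimate - upper) / se)  # H0: true effect >= upper margin
p_tost = max(p_lower, p_upper)
print(f"TOST p-value = {p_tost:.4f}")        # a small p supports equivalence
```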
Understanding effect size: an international online survey among psychiatrists, psychologists, physicians from other medical specialities, dentists and other health professionals
Background and objective: Various ways exist to display the effectiveness of medical treatment options. This study examined various psychiatric, medical and allied professionals’ understanding and perceived usefulness of eight effect size indices for presenting both dichotomous and continuous outcome data. Methods: We surveyed 1316 participants from 13 countries using an online questionnaire. We presented hypothetical treatment effects of interventions versus placebo concerning chronic pain using eight different effect size measures. For each index, the participants had to judge the magnitude of the shown effect, to indicate how certain they felt about their own answer and how useful they found the given effect size index. Findings: Overall, 762 (57.9%) participants fully completed the questionnaire. In terms of understanding, the best results emerged when both the control event rate (CER) and the experimental event rate (EER) were presented. The difference in minimal importance difference units (MID unit) was understood worst. Respondents also found CER and EER to be the most useful presentation approach, while they rated MID unit as the least useful. Confidence in the risk ratio ranked high, even though it was rather poorly understood. Conclusions and clinical implications: For dichotomous outcomes, presenting the effects in terms of the CER and EER could lead to the most correct interpretation. Relative measures including the risk ratio must be supplemented with absolute measures such as the CER and EER. Effects on continuous outcomes were better understood through standardised mean differences than mean differences. These can also be supplemented by dichotomised CER and EER.
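To make the indices in this abstract concrete, the sketch below derives the CER, EER, risk ratio, absolute risk reduction, and number needed to treat from an invented 2×2 outcome table; it mirrors the kind of presentation the survey tested rather than any data from the study.

```python
# Sketch of the effect measures discussed: control/experimental event rates and the
# quantities derived from them. The counts below are invented for illustration.
events_control, n_control = 40, 100              # events / total in the placebo arm
events_experimental, n_experimental = 25, 100    # events / total in the treatment arm

cer = events_control / n_control                 # control event rate
eer = events_experimental / n_experimental       # experimental event rate
risk_ratio = eer / cer
arr = cer - eer                                  # absolute risk reduction
nnt = 1 / arr                                    # number needed to treat

print(f"CER = {cer:.2f}, EER = {eer:.2f}, RR = {risk_ratio:.2f}, "
      f"ARR = {arr:.2f}, NNT = {nnt:.1f}")
```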
Data-driven healthcare
Data is revolutionizing the healthcare industry. With more data available than ever before, applying the right analytics can spur growth. Benefits extend to patients, providers, and board members, and the technology can make centralized patient management a reality. Despite the potential for growth, many in the industry and in government question the value of data in health care, wondering whether it is worth the investment. This book tackles the issue and shows why business intelligence (BI) is not only worth it, but necessary for industry advancement. Madsen challenges the notion that data has little value in healthcare, and shows how BI can ease regulatory reporting pressures and streamline the entire system as it evolves. She illustrates how a data-driven organization is created, and how it can transform the industry.
Introduction to Statistical Analysis of Laboratory Data
Introduction to Statistical Analysis of Laboratory Data presents a detailed discussion of important statistical concepts and methods of data presentation and analysis. The book:
• Provides detailed discussions of statistical applications, including a comprehensive package of statistical tools specific to the laboratory experiment process.
• Introduces terminology used in many applications, such as the interpretation of assay design and validation, as well as "fit for purpose" procedures, with real-world examples.
• Includes a rigorous review of statistical quality control procedures in laboratory methodologies and influences on capabilities.
• Presents methodologies used in areas such as method comparison procedures, limit and bias detection, outlier analysis, and detecting sources of variation.
• Introduces analysis of robustness and ruggedness, including multivariate influences on response, to account for controllable and uncontrollable laboratory conditions.
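Outlier analysis is one of the topics listed above. As a hedged illustration, and not the book's specific procedure, the sketch below applies Grubbs' test for a single outlier to a set of invented replicate measurements.

```python
# Sketch of one standard outlier-screening technique (Grubbs' test for a single
# outlier); the measurements are invented and this is not the book's own procedure.
import numpy as np
from scipy.stats import t

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 12.9])  # replicate lab measurements
n = len(x)
g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)        # Grubbs' statistic

alpha = 0.05
t_crit = t.ppf(1 - alpha / (2 * n), n - 2)
g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
print(f"G = {g:.2f}, critical value = {g_crit:.2f}, outlier = {g > g_crit}")
```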