Search Results

23,118 results for "Data interpretations"
Global, regional, and national burden of suicide mortality 1990 to 2016: systematic analysis for the Global Burden of Disease Study 2016
Objectives: To use the estimates from the Global Burden of Disease Study 2016 to describe patterns of suicide mortality globally, regionally, and for 195 countries and territories by age, sex, and Socio-demographic Index, and to describe temporal trends between 1990 and 2016. Design: Systematic analysis. Main outcome measures: Crude and age-standardised rates of suicide mortality and of years of life lost were compared across regions and countries, and by age, sex, and Socio-demographic Index (a composite measure of fertility, income, and education). Results: The total number of deaths from suicide increased by 6.7% (95% uncertainty interval 0.4% to 15.6%) globally over the 27-year study period, to 817 000 (762 000 to 884 000) deaths in 2016. However, the age-standardised mortality rate for suicide decreased by 32.7% (27.2% to 36.6%) worldwide between 1990 and 2016, similar to the 30.6% decline in the global age-standardised mortality rate from all causes. Suicide was the leading cause of age-standardised years of life lost in the Global Burden of Disease region of high-income Asia Pacific and was among the top 10 leading causes in eastern Europe, central Europe, western Europe, central Asia, Australasia, southern Latin America, and high-income North America. Rates for men were higher than for women across regions, countries, and age groups, except for the 15 to 19 age group. The female-to-male ratio varied, with higher ratios at lower levels of Socio-demographic Index. Women experienced greater decreases in mortality rates (49.0%, 95% uncertainty interval 42.6% to 54.6%) than men (23.8%, 15.6% to 32.7%). Conclusions: Age-standardised mortality rates for suicide have fallen substantially since 1990, but suicide remains an important contributor to mortality worldwide. Suicide mortality varied across locations, between sexes, and between age groups. Suicide prevention strategies can be targeted towards vulnerable populations if they are informed by these variations in mortality rates.
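The rates above are age-standardised: age-specific rates are weighted by a fixed standard population so that locations with different age structures can be compared. A minimal sketch of direct standardisation, with entirely illustrative counts and weights (the study itself uses the GBD standard population):

```python
import numpy as np

# Entirely illustrative counts: deaths and person-years in five age bands
# for one location, plus standard-population weights (stand-ins for the
# GBD/WHO standard population).
deaths       = np.array([120.0, 540.0, 610.0, 430.0, 260.0])
person_years = np.array([2.0e6, 3.5e6, 3.0e6, 1.8e6, 0.7e6])
std_weights  = np.array([0.30, 0.28, 0.22, 0.13, 0.07])   # sums to 1

age_specific = deaths / person_years                 # rate per person-year
asr = np.sum(age_specific * std_weights) * 100_000   # per 100,000
print(f"age-standardised rate: {asr:.1f} per 100,000")
```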
Blinded interpretation of study results can feasibly and effectively diminish interpretation bias
Controversial and misleading interpretation of data from randomized trials is common, yet how to avoid misleading interpretation has received little attention. Herein, we describe two applications of an approach that involves blinded interpretation of the results by study investigators. The approach involves developing two interpretations of the results on the basis of a blinded review of the primary outcome data, in which the groups are labelled only as treatment A and treatment B. One interpretation assumes that A is the experimental intervention and the other assumes that A is the control. After agreeing that there will be no further changes, the investigators record their decisions and sign the resulting document. The randomization code is then broken, the correct interpretation chosen, and the manuscript finalized. Review of the document by an external authority before finalization can provide another safeguard against interpretation bias. We found the blinded preparation of a summary of data interpretation described in this article practical, efficient, and useful. Blinded data interpretation may decrease the frequency of misleading data interpretation, and its widespread adoption would be greatly facilitated were it added to the minimum set of recommendations outlining proper conduct of randomized controlled trials (e.g., the Consolidated Standards of Reporting Trials (CONSORT) statement).
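To make the workflow concrete, here is a toy sketch (not the authors' code): the arms are masked as A and B, an interpretation is committed for each possible unmasking, and only then is the randomization key revealed. All numbers are invented.

```python
import random
import statistics

# Toy workflow: outcome data are shown to analysts only as groups "A"
# and "B"; one interpretation is committed for each possible unmasking
# before the randomization key is broken. All numbers are invented.
outcomes = {"drug": [5.1, 4.8, 5.5, 4.9], "placebo": [6.2, 6.0, 5.8, 6.4]}

key = dict(zip("AB", random.sample(sorted(outcomes), 2)))  # sealed key
blinded = {label: outcomes[arm] for label, arm in key.items()}
mean_a = statistics.mean(blinded["A"])

# Two pre-committed, signed interpretations: one per possible unmasking.
interpretations = {
    "drug":    f"If A (mean {mean_a:.2f}) is the drug arm, then ...",
    "placebo": f"If A (mean {mean_a:.2f}) is the placebo arm, then ...",
}

# Only after both are recorded is the key revealed and one chosen.
print("Unblinded: A =", key["A"])
print("Chosen interpretation:", interpretations[key["A"]])
```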
Multiple imputation and its application
A practical guide to analysing partially observed data. Collecting, analysing, and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms, and its application to increasingly complex data structures. Multiple Imputation and its Application:
* Discusses the issues raised by the analysis of partially observed data, and the assumptions on which analyses rest.
* Presents a practical guide to the issues to consider when analysing incomplete data from both observational studies and randomized trials.
* Provides a detailed discussion of the practical use of MI with real-world examples drawn from medical and social statistics.
* Explores handling non-linear relationships and interactions with multiple imputation, survival analysis, multilevel multiple imputation, sensitivity analysis via multiple imputation, using non-response weights with multiple imputation, and doubly robust multiple imputation.
The book is aimed at quantitative researchers and students in the medical and social sciences, clarifying the issues raised by the analysis of incomplete data, outlining the rationale for MI, and describing how to consider and address the issues that arise in its application.
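As a concrete illustration of the MI workflow the book develops, here is a minimal sketch (not the book's own code) using scikit-learn's IterativeImputer to draw multiple stochastic imputations and Rubin's rules to pool the analysis-model estimates; the data are simulated:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Simulated data: 30% of x is missing at random; the true slope is 2.
rng = np.random.default_rng(0)
n, m = 500, 20                                # sample size, number of imputations
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
x_obs = np.where(rng.random(n) < 0.3, np.nan, x)
data = np.column_stack([x_obs, y])

slopes, ses = [], []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imp.fit_transform(data)       # one stochastic imputation
    fit = sm.OLS(completed[:, 1], sm.add_constant(completed[:, 0])).fit()
    slopes.append(fit.params[1])
    ses.append(fit.bse[1])

# Rubin's rules: combine within- and between-imputation variance.
qbar = np.mean(slopes)
W = np.mean(np.square(ses))
B = np.var(slopes, ddof=1)
T = W + (1 + 1 / m) * B
print(f"pooled slope = {qbar:.3f} (se {np.sqrt(T):.3f})")
```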
Cochran's Q test was useful to assess heterogeneity in likelihood ratios in studies of diagnostic accuracy
Empirical evaluations have demonstrated that diagnostic accuracy frequently shows significant heterogeneity between subgroups of patients within a study. We propose to use Cochran's Q test to assess heterogeneity in diagnostic likelihood ratios (LRs). We reanalyzed published data of six articles that showed within-study heterogeneity in diagnostic accuracy. We used the Q test to assess heterogeneity in LRs and compared the results of the Q test with those obtained using another method for stratified analysis of LRs, based on subgroup confidence intervals. We also studied the behavior of the Q test using hypothetical data. The Q test detected significant heterogeneity in LRs in all six example data sets. The Q test detected significant heterogeneity in LRs more frequently than the confidence interval approach (38% vs. 20%). When applied to hypothetical data, the Q test would be able to detect relatively small variations in LRs, of about a twofold increase, in a study including 300 participants. Reanalysis of published data using the Q test can be easily performed to assess heterogeneity in diagnostic LRs between subgroups of patients, potentially providing important information to clinicians who base their decisions on published LRs.
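A minimal sketch of the proposed test, assuming per-subgroup 2x2 diagnostic counts (the counts below are invented): the Q statistic is the inverse-variance weighted sum of squared deviations of the subgroup log likelihood ratios from their pooled value, referred to a chi-squared distribution with k-1 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

# Invented 2x2 counts (tp, fn, fp, tn) for three patient subgroups.
subgroups = [
    (40, 10, 20, 130),
    (25, 25, 10, 140),
    (60,  5, 30, 105),
]

log_lr, var = [], []
for tp, fn, fp, tn in subgroups:
    lr_pos = (tp / (tp + fn)) / (fp / (fp + tn))       # positive likelihood ratio
    log_lr.append(np.log(lr_pos))
    # Large-sample variance of ln(LR+) from the 2x2 counts
    var.append(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))

log_lr = np.asarray(log_lr)
w = 1 / np.asarray(var)                                # inverse-variance weights
pooled = np.sum(w * log_lr) / np.sum(w)
Q = np.sum(w * (log_lr - pooled) ** 2)                 # Cochran's Q
p = chi2.sf(Q, df=len(subgroups) - 1)
print(f"Q = {Q:.2f}, p = {p:.4f}")
```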
Challenges and Opportunities with Causal Discovery Algorithms: Application to Alzheimer’s Pathophysiology
Causal Structure Discovery (CSD) is the problem of identifying causal relationships from large quantities of data through computational methods. Given the limited ability of traditional association-based computational methods to discover causal relationships, CSD methodologies are gaining popularity. The goals of the study were to systematically examine (i) whether CSD methods can discover the known causal relationships from observational clinical data and (ii) how to offer guidance for accurately discovering known causal relationships. We used Alzheimer's disease (AD), a complex progressive disease, as a model because the well-established evidence provides a "gold-standard" causal graph for evaluation. We evaluated two CSD methods, Fast Causal Inference (FCI) and Fast Greedy Equivalence Search (FGES), in their ability to discover this structure from data collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI). We used structural equation models (which are not designed for CSD) as a control. We applied these methods under three scenarios defined by increasing amounts of background knowledge provided to the methods, and evaluated them by comparing the resulting causal relationships with the "gold-standard" graph constructed from the literature. The dedicated CSD methods discovered graphs that nearly coincided with the gold standard. For best results, CSD algorithms should be used with longitudinal data, providing as much prior knowledge as possible.
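FCI and FGES are full algorithms, but the constraint-based intuition behind FCI can be shown in a few lines: in a simulated causal chain, conditional independence tests recover the structure. This toy sketch (not FCI itself) uses Fisher's z test on partial correlations; coefficients are arbitrary.

```python
import numpy as np
from scipy import stats

# Simulated causal chain X -> Y -> Z (coefficients are arbitrary).
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z_ = 0.8 * y + rng.normal(size=n)

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for a (partial) correlation r with k conditioning variables."""
    stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    return 2 * stats.norm.sf(abs(stat))

r_xz = np.corrcoef(x, z_)[0, 1]
# Partial correlation of X and Z given Y, via residuals of linear fits on Y
rx = x - np.polyval(np.polyfit(y, x, 1), y)
rz = z_ - np.polyval(np.polyfit(y, z_, 1), y)
r_xz_given_y = np.corrcoef(rx, rz)[0, 1]

print(f"p(X indep Z)     = {fisher_z_pvalue(r_xz, n, 0):.2g}")          # tiny: dependent
print(f"p(X indep Z | Y) = {fisher_z_pvalue(r_xz_given_y, n, 1):.2g}")  # large: independent
```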
Six Persistent Research Misconceptions
Scientific knowledge changes rapidly, but the concepts and methods of the conduct of research change more slowly. To stimulate discussion of outmoded thinking regarding the conduct of research, I list six misconceptions about research that persist long after their flaws have become apparent. The misconceptions are:
1) There is a hierarchy of study designs; randomized trials provide the greatest validity, followed by cohort studies, with case–control studies being least reliable.
2) An essential element for valid generalization is that the study subjects constitute a representative sample of a target population.
3) If a term that denotes the product of two factors in a regression model is not statistically significant, then there is no biologic interaction between those factors.
4) When categorizing a continuous variable, a reasonable scheme for choosing category cut-points is to use percentile-defined boundaries, such as quartiles or quintiles of the distribution.
5) One should always report P values or confidence intervals that have been adjusted for multiple comparisons.
6) Significance testing is useful and important for the interpretation of data.
These misconceptions have been perpetuated in journals, classrooms, and textbooks. They persist because they represent intellectual shortcuts that avoid more thoughtful approaches to research problems. I hope that calling attention to these misconceptions will spark the debates needed to shelve these outmoded ideas for good.
Handbook of biosurveillance
Provides a coherent and comprehensive account of the theory and practice of real-time human disease outbreak detection, explicitly recognizing the revolution in practices of infection control and public health surveillance. The handbook:
* Reviews the current mathematical, statistical, and computer science systems for early detection of disease outbreaks
* Provides extensive coverage of existing surveillance data
* Discusses experimental methods for data measurement and evaluation
* Addresses engineering and practical implementation of effective early detection systems
* Includes real case studies
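As a flavour of the statistical detection methods such a handbook surveys, here is a hedged sketch of a one-sided CUSUM detector on simulated daily syndromic counts (the reference value k and threshold h are illustrative choices, not recommendations from the book):

```python
import numpy as np

# Simulated daily syndromic counts: Poisson baseline with an outbreak
# injected on day 45. k (reference value) and h (threshold) are the
# usual CUSUM tuning knobs; these values are illustrative.
rng = np.random.default_rng(2)
counts = rng.poisson(20, size=60).astype(float)
counts[45:] += 8

mu, sigma = counts[:30].mean(), counts[:30].std(ddof=1)  # baseline estimate
k, h = 0.5, 4.0
s = 0.0
for day, count in enumerate(counts):
    s = max(0.0, s + (count - mu) / sigma - k)           # one-sided CUSUM
    if s > h:
        print(f"alarm on day {day} (CUSUM = {s:.1f})")
        break
else:
    print("no alarm")
```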
Ion mobility–mass spectrometry analysis of large protein complexes
Here we describe a detailed protocol for both data collection and interpretation with respect to ion mobility–mass spectrometry analysis of large protein assemblies. Ion mobility is a technique that can separate gaseous ions based on their size and shape. Specifically, within this protocol, we cover general approaches to data interpretation, methods of predicting whether specific model structures for a given protein assembly can be separated by ion mobility, and generalized strategies for data normalization and modeling. The protocol also covers basic instrument settings and best practices for both observation and detection of large noncovalent protein complexes by ion mobility–mass spectrometry.
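Ion mobility relates to collision cross section through the Mason-Schamp equation, which underlies the modeling step described above. A sketch, with an entirely hypothetical ion (the charge, mass, and mobility below are made up for illustration):

```python
import numpy as np
from scipy import constants as c

N0 = 2.6867811e25  # Loschmidt constant: gas number density (m^-3) at 0 C, 1 atm

def ccs_mason_schamp(k0, z, m_ion_da, m_gas_da=28.0134, temp=298.0):
    """Collision cross section (m^2) from reduced mobility k0 (m^2 V^-1 s^-1)."""
    m_ion = m_ion_da * c.atomic_mass
    m_gas = m_gas_da * c.atomic_mass          # default buffer gas: N2
    mu = m_ion * m_gas / (m_ion + m_gas)      # reduced mass, kg
    return (3 * z * c.e / (16 * N0)) * np.sqrt(2 * np.pi / (mu * c.k * temp)) / k0

# Hypothetical 800 kDa complex with z = 24 and K0 = 0.2 cm^2 V^-1 s^-1
ccs = ccs_mason_schamp(k0=0.2e-4, z=24, m_ion_da=8.0e5)
print(f"CCS ~ {ccs * 1e20:,.0f} A^2")
```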
Fold change and p-value cutoffs significantly alter microarray interpretations
Background: Context matters for gene expression, and preprocessing matters just as much when turning microarray data into transcriptomic conclusions. Microarray data suffer from several normalization and significance problems. Arbitrary fold-change (FC) cut-offs of >2 and significance p-values of <0.02 restrict attention to the genes that vary most wildly relative to other genes, raising the question of whether the biology or the statistical cutoff drives the interpretation. In this paper, we reanalyzed a zebrafish (D. rerio) microarray data set using GeneSpring and different differential gene expression cut-offs, and found that the data interpretation was drastically different. Furthermore, despite advances in microarray technology, arrays capture only a portion of known genes, leaving large voids in the genes assayed; leptin, a pleiotropic hormone directly related to hypoxia-induced angiogenesis, is one example. Results: The data strongly suggest that more differentially expressed genes are up-regulated than down-regulated, with many genes indicating signalling conserved with previously known functions. Recapitulated data from Marques et al. (2008) were similar but surprisingly different in places, with some genes showing unexpected signalling, which may reflect the tissue studied (heart) or a transient response. Conclusions: Our analyses suggest that, depending on the chosen statistical or fold-change cut-off, microarray analysis can provide essentially more than one answer, making data interpretation more of an art than a science, with follow-up gene expression studies a must. Furthermore, gene chip annotation and development need to keep pace not only with newly sequenced genomes but also with novel genes that are crucial to the overall interpretation of the chip.
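The paper's central point is easy to reproduce on simulated data: the same expression matrix yields very different lists of "significant" genes under different FC and p-value cutoffs. A sketch (no real zebrafish data are involved):

```python
import numpy as np
from scipy import stats

# Simulated log2 expression for 5,000 genes x 4 replicates per group;
# the first 300 genes get a true shift. No real zebrafish data are used.
rng = np.random.default_rng(3)
genes, reps = 5000, 4
ctrl = rng.normal(8.0, 1.0, size=(genes, reps))
trt = ctrl + rng.normal(0.0, 0.8, size=(genes, reps))
trt[:300] += rng.uniform(0.5, 2.5, size=(300, 1))

log2fc = trt.mean(axis=1) - ctrl.mean(axis=1)
pvals = stats.ttest_ind(trt, ctrl, axis=1).pvalue

# Same data, three cutoff choices, three different gene lists.
for fc_cut, p_cut in [(1.0, 0.02), (1.0, 0.05), (0.585, 0.02)]:
    n_deg = int(np.sum((np.abs(log2fc) > fc_cut) & (pvals < p_cut)))
    print(f"|log2FC| > {fc_cut} ({2**fc_cut:.1f}-fold), p < {p_cut}: {n_deg} genes")
```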
Topological data analysis for discovery in preclinical spinal cord injury and traumatic brain injury
Data-driven discovery in complex neurological disorders has the potential to extract meaningful syndromic knowledge from large, heterogeneous data sets and to enhance the prospects for precision medicine. Here we describe the application of topological data analysis (TDA) for data-driven discovery in preclinical traumatic brain injury (TBI) and spinal cord injury (SCI) data sets mined from the Visualized Syndromic Information and Outcomes for Neurotrauma-SCI (VISION-SCI) repository. Through direct visualization of inter-related histopathological, functional, and health outcomes, TDA detected novel patterns across the syndromic network, uncovering interactions between SCI and co-occurring TBI, as well as detrimental drug effects in unpublished multicentre preclinical drug trial data in SCI. TDA also revealed that perioperative hypertension predicted long-term recovery better than any tested drug after thoracic SCI in rats. TDA-based data-driven discovery has great potential application as decision support for basic research and for clinical problems such as outcome assessment, neurocritical care, treatment planning, and rapid precision diagnosis.
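The analysis above is a Mapper-style use of TDA. Here is a minimal sketch using the open-source kmapper package on synthetic data (an assumption: the study itself used a proprietary TDA platform, and nothing below comes from the VISION-SCI repository), with DBSCAN as the clusterer:

```python
import numpy as np
import kmapper as km
from sklearn.cluster import DBSCAN

# Synthetic stand-in for a syndromic outcome matrix (300 subjects x 10
# measures); nothing here comes from the VISION-SCI repository.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))

mapper = km.KeplerMapper(verbose=0)
lens = mapper.fit_transform(X, projection=[0, 1])        # 2-D lens: first two columns
graph = mapper.map(
    lens, X,
    cover=km.Cover(n_cubes=10, perc_overlap=0.3),
    clusterer=DBSCAN(eps=3.0, min_samples=3),
)
print(len(graph["nodes"]), "nodes in the Mapper graph")
mapper.visualize(graph, path_html="syndromic_mapper.html")
```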