Search Results
Available filters: Discipline; Is Peer Reviewed; Series Title; Reading Level; Year (From/To); More Filters (Content Type, Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location).
22,448 results for "Data Interpretation"
Data mining in biomedical imaging, signaling, and systems
\"Data mining has rapidly emerged as an enabling, robust, and scalable technique to analyze data for novel patterns, trends, anomalies, structures, and features that can be employed for a variety of biomedical and clinical domains. Approaching the techniques and challenges of image mining from a multidisciplinary perspective, this book presents data mining techniques, methodologies, algorithms, and strategies to analyze biomedical signals and images. Written by experts, the text addresses data mining paradigms for the development of biomedical systems. It also includes special coverage of knowledge discovery in mammograms and emphasizes both the diagnostic and therapeutic fields of eye imaging\"--Provided by publisher.
Global, regional, and national burden of suicide mortality 1990 to 2016: systematic analysis for the Global Burden of Disease Study 2016
Abstract
Objectives: To use the estimates from the Global Burden of Disease Study 2016 to describe patterns of suicide mortality globally, regionally, and for 195 countries and territories by age, sex, and Socio-demographic Index, and to describe temporal trends between 1990 and 2016.
Design: Systematic analysis.
Main outcome measures: Crude and age standardised rates of suicide mortality and years of life lost were compared across regions and countries, and by age, sex, and Socio-demographic Index (a composite measure of fertility, income, and education).
Results: The total number of deaths from suicide increased by 6.7% (95% uncertainty interval 0.4% to 15.6%) globally over the 27 year study period, to 817 000 (762 000 to 884 000) deaths in 2016. However, the age standardised mortality rate for suicide decreased by 32.7% (27.2% to 36.6%) worldwide between 1990 and 2016, similar to the decline in the global age standardised mortality rate of 30.6%. Suicide was the leading cause of age standardised years of life lost in the Global Burden of Disease region of high income Asia Pacific and was among the top 10 leading causes in eastern Europe, central Europe, western Europe, central Asia, Australasia, southern Latin America, and high income North America. Rates for men were higher than for women across regions, countries, and age groups, except for the 15 to 19 age group. There was variation in the female to male ratio, with higher ratios at lower levels of Socio-demographic Index. Women experienced greater decreases in mortality rates (49.0%, 95% uncertainty interval 42.6% to 54.6%) than men (23.8%, 15.6% to 32.7%).
Conclusions: Age standardised mortality rates for suicide have greatly reduced since 1990, but suicide remains an important contributor to mortality worldwide. Suicide mortality was variable across locations, between sexes, and between age groups. Suicide prevention strategies can be targeted towards vulnerable populations if they are informed by variations in mortality rates.
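The age standardised rates above come from direct standardisation: each age-specific rate is weighted by a fixed standard population's share of that age band, so comparisons across countries and years are not distorted by differing age structures. A minimal sketch of the calculation, with hypothetical rates and weights rather than figures from the study:

```python
# Direct age standardisation: weight each age-specific rate by a standard
# population's age distribution. Rates and weights below are hypothetical,
# for illustration only (not values from the GBD study).
age_specific_rates = [0.5, 4.0, 12.0, 25.0]    # deaths per 100,000 by age band
standard_weights   = [0.35, 0.40, 0.18, 0.07]  # standard population shares (sum to 1)

asr = sum(r * w for r, w in zip(age_specific_rates, standard_weights))
print(f"Age-standardised rate: {asr:.1f} per 100,000")
```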
Cochran's Q test was useful to assess heterogeneity in likelihood ratios in studies of diagnostic accuracy
Empirical evaluations have demonstrated that diagnostic accuracy frequently shows significant heterogeneity between subgroups of patients within a study. We propose to use Cochran's Q test to assess heterogeneity in diagnostic likelihood ratios (LRs). We reanalyzed published data of six articles that showed within-study heterogeneity in diagnostic accuracy. We used the Q test to assess heterogeneity in LRs and compared the results of the Q test with those obtained using another method for stratified analysis of LRs, based on subgroup confidence intervals. We also studied the behavior of the Q test using hypothetical data. The Q test detected significant heterogeneity in LRs in all six example data sets. The Q test detected significant heterogeneity in LRs more frequently than the confidence interval approach (38% vs. 20%). When applied to hypothetical data, the Q test would be able to detect relatively small variations in LRs, of about a twofold increase, in a study including 300 participants. Reanalysis of published data using the Q test can be easily performed to assess heterogeneity in diagnostic LRs between subgroups of patients, potentially providing important information to clinicians who base their decisions on published LRs.
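The Q test described here is the standard inverse-variance heterogeneity statistic, applied to log likelihood ratios estimated in patient subgroups. A minimal sketch with hypothetical subgroup LRs and standard errors (not data from the reanalysed articles):

```python
# Cochran's Q for heterogeneity of log likelihood ratios across subgroups.
# Subgroup positive LRs and standard errors below are hypothetical.
import numpy as np
from scipy.stats import chi2

log_lr = np.log(np.array([3.2, 5.8, 2.1]))   # subgroup positive LRs, on the log scale
se     = np.array([0.25, 0.30, 0.20])        # standard errors of log(LR)

w = 1.0 / se**2                              # inverse-variance weights
pooled = np.sum(w * log_lr) / np.sum(w)      # weighted mean log LR
Q = np.sum(w * (log_lr - pooled)**2)         # Cochran's Q statistic
p = chi2.sf(Q, df=len(log_lr) - 1)           # compared with chi-squared, k-1 df
print(f"Q = {Q:.2f}, p = {p:.3f}")
```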
Challenges and Opportunities with Causal Discovery Algorithms: Application to Alzheimer’s Pathophysiology
Causal Structure Discovery (CSD) is the problem of identifying causal relationships from large quantities of data through computational methods. Given the limited ability of traditional association-based computational methods to discover causal relationships, CSD methodologies are gaining popularity. The goals of the study were (i) to systematically examine whether CSD methods can discover known causal relationships from observational clinical data and (ii) to offer guidance for accurately discovering known causal relationships. We used Alzheimer’s disease (AD), a complex progressive disease, as a model because the well-established evidence provides a “gold-standard” causal graph for evaluation. We evaluated two CSD methods, Fast Causal Inference (FCI) and Fast Greedy Equivalence Search (FGES), in their ability to discover this structure from data collected by the Alzheimer’s Disease Neuroimaging Initiative (ADNI). We used structural equation models (which are not designed for CSD) as a control. We applied these methods under three scenarios defined by increasing amounts of background knowledge provided to the methods. The methods were evaluated by comparing the resulting causal relationships with the “gold standard” graph constructed from the literature. Dedicated CSD methods managed to discover graphs that nearly coincided with the gold standard. For best results, CSD algorithms should be used with longitudinal data, providing as much prior knowledge as possible.
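FCI and FGES themselves are implemented in dedicated packages such as Tetrad. As a rough, self-contained illustration of the constraint-based idea underlying FCI (drop an edge whenever a conditional independence test passes), the toy skeleton search below uses Fisher-z tests on partial correlations over synthetic data; it omits FCI's orientation rules and latent-variable handling and is not the algorithm evaluated in the study.

```python
# Toy constraint-based skeleton search (PC-style), for illustration only.
import numpy as np
from itertools import combinations
from scipy.stats import norm

def fisher_z_pvalue(data, i, j, cond):
    """Fisher-z test of the partial correlation between columns i and j given cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    return 2.0 * (1.0 - norm.cdf(stat))

def skeleton(data, alpha=0.01, max_cond=1):
    """Drop an edge between two variables when a conditional independence test passes."""
    p = data.shape[1]
    edges = {frozenset(pair) for pair in combinations(range(p), 2)}
    for size in range(max_cond + 1):
        for i, j in combinations(range(p), 2):
            if frozenset((i, j)) not in edges:
                continue
            others = [k for k in range(p) if k not in (i, j)]
            for cond in combinations(others, size):
                if fisher_z_pvalue(data, i, j, cond) > alpha:
                    edges.discard(frozenset((i, j)))
                    break
    return edges

# Synthetic chain X -> Y -> Z: the X-Z edge should disappear once Y is conditioned on.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + rng.normal(size=5000)
z = 0.8 * y + rng.normal(size=5000)
print(skeleton(np.column_stack([x, y, z])))
```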
Multiple imputation and its application
A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms, and its application to increasingly complex data structures. Multiple Imputation and its Application:
* Discusses the issues raised by the analysis of partially observed data, and the assumptions on which analyses rest.
* Presents a practical guide to the issues to consider when analysing incomplete data from both observational studies and randomized trials.
* Provides a detailed discussion of the practical use of MI with real-world examples drawn from medical and social statistics.
* Explores handling non-linear relationships and interactions with multiple imputation, survival analysis, multilevel multiple imputation, sensitivity analysis via multiple imputation, using non-response weights with multiple imputation and doubly robust multiple imputation.
Multiple Imputation and its Application is aimed at quantitative researchers and students in the medical and social sciences, with the aim of clarifying the issues raised by the analysis of incomplete data, outlining the rationale for MI, and describing how to consider and address the issues that arise in its application.
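The book is software-agnostic; as a minimal sketch of the MI workflow it describes (generate several completed data sets, analyse each, and pool with Rubin's rules), the example below uses scikit-learn's IterativeImputer as a stand-in imputation engine on synthetic data.

```python
# Multiple imputation sketch: draw several completed data sets with
# sample_posterior=True, estimate the quantity of interest in each, and
# pool with Rubin's rules. The data and the choice of imputation engine
# are illustrative, not taken from the book.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)
data = np.column_stack([x, y])
data[rng.random(200) < 0.3, 1] = np.nan        # make ~30% of y missing

m = 20                                         # number of imputations
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imputer.fit_transform(data)
    estimates.append(completed[:, 1].mean())                       # quantity of interest: mean of y
    variances.append(completed[:, 1].var(ddof=1) / len(completed)) # its within-imputation variance

# Rubin's rules: total variance = within + (1 + 1/m) * between
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / m) * between
print(f"Pooled mean of y: {q_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```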
Six Persistent Research Misconceptions
Abstract: Scientific knowledge changes rapidly, but the concepts and methods of the conduct of research change more slowly. To stimulate discussion of outmoded thinking regarding the conduct of research, I list six misconceptions about research that persist long after their flaws have become apparent. The misconceptions are:
1) There is a hierarchy of study designs; randomized trials provide the greatest validity, followed by cohort studies, with case–control studies being least reliable.
2) An essential element for valid generalization is that the study subjects constitute a representative sample of a target population.
3) If a term that denotes the product of two factors in a regression model is not statistically significant, then there is no biologic interaction between those factors.
4) When categorizing a continuous variable, a reasonable scheme for choosing category cut-points is to use percentile-defined boundaries, such as quartiles or quintiles of the distribution.
5) One should always report P values or confidence intervals that have been adjusted for multiple comparisons.
6) Significance testing is useful and important for the interpretation of data.
These misconceptions have been perpetuated in journals, classrooms and textbooks. They persist because they represent intellectual shortcuts that avoid more thoughtful approaches to research problems. I hope that calling attention to these misconceptions will spark the debates needed to shelve these outmoded ideas for good.
Ion mobility–mass spectrometry analysis of large protein complexes
Here we describe a detailed protocol for both data collection and interpretation with respect to ion mobility–mass spectrometry analysis of large protein assemblies. Ion mobility is a technique that can separate gaseous ions based on their size and shape. Specifically, within this protocol, we cover general approaches to data interpretation, methods of predicting whether specific model structures for a given protein assembly can be separated by ion mobility, and generalized strategies for data normalization and modeling. The protocol also covers basic instrument settings and best practices for both observation and detection of large noncovalent protein complexes by ion mobility–mass spectrometry.
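The protocol's own calibration procedures are instrument-specific. As a rough illustration of how a drift-tube mobility measurement relates to a collision cross-section (the quantity used to compare candidate model structures), the Mason-Schamp relation converts a reduced mobility K0 into a CCS; all numerical inputs below are hypothetical.

```python
# Mason-Schamp relation: convert a reduced ion mobility K0 into a
# rotationally averaged collision cross-section (CCS). The mass, charge,
# K0 and temperature below are hypothetical, and travelling-wave
# instruments require empirical calibration rather than this direct
# drift-tube relation.
import math

e   = 1.602176634e-19      # elementary charge, C
kB  = 1.380649e-23         # Boltzmann constant, J/K
N0  = 2.6867811e25         # Loschmidt number density, m^-3
amu = 1.66053906660e-27    # atomic mass unit, kg

def ccs_mason_schamp(mass_da, charge, k0_cm2_per_vs, gas_mass_da=4.0026, temp_k=298.0):
    """CCS in square angstroms from reduced mobility via the Mason-Schamp equation."""
    mu = (mass_da * gas_mass_da) / (mass_da + gas_mass_da) * amu  # reduced mass, kg
    k0 = k0_cm2_per_vs * 1e-4                                     # cm^2/(V s) -> m^2/(V s)
    ccs_m2 = (3.0 / 16.0) * (charge * e / (N0 * k0)) * math.sqrt(2.0 * math.pi / (mu * kB * temp_k))
    return ccs_m2 * 1e20                                          # m^2 -> A^2

# Hypothetical protein-complex ion measured in helium drift gas.
print(f"CCS ~ {ccs_mason_schamp(mass_da=150000, charge=24, k0_cm2_per_vs=1.8):.0f} A^2")
```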
Handbook of biosurveillance
Provides a coherent and comprehensive account of the theory and practice of real-time human disease outbreak detection, explicitly recognizing the revolution in practices of infection control and public health surveillance.
* Reviews the current mathematical, statistical, and computer science systems for early detection of disease outbreaks
* Provides extensive coverage of existing surveillance data
* Discusses experimental methods for data measurement and evaluation
* Addresses engineering and practical implementation of effective early detection systems
* Includes real case studies
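The handbook surveys many detection systems; as a toy illustration of the kind of statistical aberration detector such systems build on (not an algorithm taken from the book), an exponentially weighted moving average (EWMA) control chart flags daily counts that drift above their baseline level. The counts below are synthetic.

```python
# Toy outbreak detector: EWMA control chart over daily case counts.
# Generic illustration of statistical aberration detection; counts are synthetic.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.poisson(lam=20, size=60)          # 60 days of baseline counts
outbreak = rng.poisson(lam=35, size=10)          # 10 days of elevated counts
counts = np.concatenate([baseline, outbreak])

lam, threshold_sd = 0.3, 3.0                     # smoothing weight and alarm limit (in SDs)
mean0, sd0 = baseline.mean(), baseline.std(ddof=1)
limit = mean0 + threshold_sd * sd0 * np.sqrt(lam / (2 - lam))  # asymptotic EWMA control limit

ewma = mean0
for day, c in enumerate(counts):
    ewma = lam * c + (1 - lam) * ewma
    if ewma > limit:
        print(f"Day {day}: EWMA {ewma:.1f} exceeds limit {limit:.1f} -> alarm")
```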