Search Results

25,623 results for "Validation study"
Prediction models need appropriate internal, internal–external, and external validation
[...] we may consider more direct tests for heterogeneity in predictor effects by place or time. [...] fully independent external validation with data not available at the time of prediction model development can be important (Fig. 2).
Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures
Purpose New patient-reported outcome (PRO) measures are regularly developed to assess various aspects of the patients' perspective on their disease and treatment. For these instruments to be useful in clinical research, they must undergo proper psychometric validation, including demonstration of cross-sectional and longitudinal measurement properties. This quantitative evaluation requires a study conducted on an appropriate sample size. The aim of this research was to list and describe practices in PRO and proxy-PRO primary psychometric validation studies, focusing primarily on the practices used to determine sample size. Methods A literature review of articles published in PubMed between January 2009 and September 2011 was conducted. A three-step selection process was applied, comprising a search strategy, an article selection strategy, and data extraction. Agreement between authors was assessed, and validation practices were described. Results Data were extracted from 114 relevant articles. Within these, a sample size determination method was rarely reported (9.6%, 11/114): either an arbitrary minimum sample size (n = 2) or a subject-to-item ratio (n = 4) was used, or the method was not explicitly stated (n = 5). Very few articles (4%, 5/114) compared their sample size a posteriori to a subject-to-item ratio. Content validity, construct validity, criterion validity, and internal consistency were the measurement properties most frequently assessed in the validation studies. Approximately 92% of the articles reported a subject-to-item ratio of at least 2, whereas 25% had a ratio of at least 20. About 90% of articles had a sample size of at least 100, whereas 7% had a sample size of at least 1000. Conclusions The sample size for psychometric validation studies is rarely justified a priori. This emphasizes the lack of clear, scientifically sound recommendations on this topic. Existing methods to determine the sample size needed to assess the various measurement properties of interest should be made more easily available.
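As a rough illustration of the subject-to-item ratio heuristic this review tabulates, the following sketch checks a planned study against the thresholds the abstract mentions (ratio ≥ 2 and ≥ 20, sample size ≥ 100 and ≥ 1000). The questionnaire size and sample size here are hypothetical, not data from the review:

```python
# Hypothetical illustration of the subject-to-item ratio heuristic.
# Thresholds mirror those tabulated in the review; the 25-item
# questionnaire and n = 250 are invented for the example.

def subject_to_item_ratio(n_subjects: int, n_items: int) -> float:
    """Return the subject-to-item ratio for a planned validation study."""
    return n_subjects / n_items

n_subjects, n_items = 250, 25          # e.g. a 25-item PRO measure
ratio = subject_to_item_ratio(n_subjects, n_items)

print(f"subject-to-item ratio: {ratio:.1f}")
print(f"meets ratio >= 2:  {ratio >= 2}")
print(f"meets ratio >= 20: {ratio >= 20}")
print(f"meets n >= 100:    {n_subjects >= 100}")
print(f"meets n >= 1000:   {n_subjects >= 1000}")
```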
Exploratory factor analysis in validation studies: Uses and recommendations
Exploratory factor analysis (EFA) is one of the most commonly used procedures in the social and behavioral sciences. However, it is also one of the most criticized, because researchers often apply it poorly. The main goal of this study is to examine the relationship between practices usually considered most appropriate and the actual decisions made by researchers. The use of exploratory factor analysis is examined in 117 papers published between 2011 and 2012 in the 3 Spanish psychological journals with the highest impact over the previous five years. Results show substantial rates of questionable decisions in conducting EFA, based on unjustified or mistaken choices of factor extraction, retention, and rotation methods. Overall, the current review supports a set of guidelines for improving how EFA is applied and reported.
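To make the extraction/retention/rotation decisions concrete, here is a minimal sketch of a defensible EFA workflow in Python, assuming the third-party factor_analyzer package (pip install factor-analyzer). The simulated data, the two-factor choice, and the minres/oblimin settings are illustrative, not recommendations from the paper itself:

```python
# Sketch of an EFA workflow with explicitly justified decisions,
# using the `factor_analyzer` package on simulated item data.
import numpy as np
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_bartlett_sphericity,
                             calculate_kmo)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))                   # two latent traits
loadings = np.abs(rng.normal(0.7, 0.1, size=(8, 2)))
loadings[:4, 1] = loadings[4:, 0] = 0                # simple structure
X = pd.DataFrame(latent @ loadings.T + rng.normal(scale=0.5, size=(300, 8)))

# 1. Check factorability before extracting anything.
chi2, p = calculate_bartlett_sphericity(X)
_, kmo_total = calculate_kmo(X)
print(f"Bartlett p = {p:.3g}, KMO = {kmo_total:.2f}")  # KMO > .6 is workable

# 2. State extraction and rotation choices explicitly (minres extraction,
#    oblique rotation) -- the kind of decision the review found often
#    unjustified in published studies.
fa = FactorAnalyzer(n_factors=2, method="minres", rotation="oblimin")
fa.fit(X)
print(pd.DataFrame(fa.loadings_).round(2))
```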
COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study
Background Content validity is the most important measurement property of a patient-reported outcome measure (PROM) and the most challenging to assess. Our aims were to: (1) develop standards for evaluating the quality of PROM development; (2) update the original COSMIN standards for assessing the quality of content validity studies of PROMs; (3) develop criteria for what constitutes good content validity of PROMs; and (4) develop a rating system for summarizing the evidence on a PROM's content validity and grading the quality of the evidence in systematic reviews of PROMs. Methods An online 4-round Delphi study was performed among 159 experts from 21 countries. Panelists rated the degree to which they agreed or disagreed with proposed standards, criteria, and rating issues on 5-point scales ('strongly disagree' to 'strongly agree'), and provided arguments for their ratings. Results Discussion focused on sample size requirements, recording and field notes, transcribing cognitive interviews, and data coding. After four rounds, the required 67% consensus was reached on all standards, criteria, and rating issues. After pilot testing, the steering committee made some final changes. Ten criteria for good content validity were defined regarding item relevance, appropriateness of response options and recall period, comprehensiveness, and comprehensibility of the PROM. Discussion The consensus-based COSMIN methodology for content validity is more detailed, standardized, and transparent than earlier published guidelines, including the previous COSMIN standards. This methodology can contribute to the selection and use of high-quality PROMs in research and clinical practice.
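A minimal sketch of the consensus check behind such a Delphi round is shown below. The 67% threshold comes from the abstract; treating ratings of 4-5 on the 5-point scale as agreement is one common operationalization assumed here, and the ratings themselves are made up:

```python
# Delphi consensus check on 5-point ratings
# (1 = strongly disagree ... 5 = strongly agree).
# Threshold from the abstract (67%); ratings are hypothetical.
from collections import Counter

ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4, 5, 3]   # one proposed standard
agree = sum(r >= 4 for r in ratings)             # 4-5 counted as agreement
consensus = agree / len(ratings)

print(f"agreement: {consensus:.0%}")
print("consensus reached" if consensus >= 0.67
      else "revise and re-rate in the next round")
print(Counter(ratings))                          # distribution for feedback
```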
A guide to systematic review and meta-analysis of prediction model performance
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.
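The quantitative step this guidance describes can be sketched as a random-effects meta-analysis of c-statistics, pooled on the logit scale (as methodological guidance for performance measures recommends) with the DerSimonian-Laird heterogeneity estimator. The five c-statistics and standard errors below are invented, not EuroSCORE results:

```python
# Random-effects meta-analysis of c-statistics from external validations,
# pooled on the logit scale with the DerSimonian-Laird estimator.
# The per-study values are invented for illustration.
import numpy as np

def logit(p):  return np.log(p / (1 - p))
def expit(x):  return 1 / (1 + np.exp(-x))

c    = np.array([0.72, 0.68, 0.75, 0.70, 0.66])   # per-study c-statistics
se_c = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # SEs on the c scale

y = logit(c)
v = (se_c / (c * (1 - c))) ** 2         # delta-method variance, logit scale

w = 1 / v                               # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (v + tau2)                   # random-effects weights
pooled = np.sum(w_re * y) / w_re.sum()
se = np.sqrt(1 / w_re.sum())

lo, hi = expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)
print(f"pooled c = {expit(pooled):.3f} "
      f"(95% CI {lo:.3f}-{hi:.3f}), tau^2 = {tau2:.4f}")
```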
External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges
Access to big datasets from e-health records and from individual participant data (IPD) meta-analysis is ushering in a new generation of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
A new framework to enhance the interpretation of external validation studies of clinical prediction models
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies, using independent data from samples that are “different but related” relative to the development sample. We developed a framework of methodological steps and statistical methods for analyzing, and enhancing the interpretation of, results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their case-mix differences. We subsequently assess the model's performance in the validation sample and interpret that performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis, using three validation samples with varying case mix. One external validation sample merely assessed the model's reproducibility, whereas the other two assessed its transportability. Performance was adequate in all validation samples, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings from external validation of prediction models.
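The performance-assessment step of such a framework typically includes two standard recalibration checks: the calibration slope (logistic regression of observed outcomes on the model's linear predictor; ideal value 1) and calibration-in-the-large (intercept estimated with the linear predictor as an offset; ideal value 0). The sketch below is a generic illustration on simulated data using statsmodels, not the authors' own code:

```python
# Generic external-validation calibration checks on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
lp = rng.normal(0.0, 1.0, 2000)                  # model's linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * lp))))  # miscalibrated truth

# Calibration slope: ideal value 1 (slope < 1 suggests overfitting).
slope_fit = sm.GLM(y, sm.add_constant(lp),
                   family=sm.families.Binomial()).fit()
print(f"calibration slope: {slope_fit.params[1]:.2f}")

# Calibration-in-the-large: ideal value 0 (the offset fixes the slope at 1).
citl_fit = sm.GLM(y, np.ones_like(lp),
                  family=sm.families.Binomial(), offset=lp).fit()
print(f"calibration-in-the-large: {citl_fit.params[0]:.2f}")
```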
The mHealth App Usability Questionnaire (MAUQ): Development and Validation Study
After a mobile health (mHealth) app is created, an important step is to evaluate its usability before it is released to the public. There are multiple ways of conducting a usability study, one of which is collecting target users' feedback with a usability questionnaire. Different groups have used different questionnaires for mHealth app usability evaluation; the most commonly used are the System Usability Scale (SUS) and the Post-Study System Usability Questionnaire (PSSUQ). However, the SUS and PSSUQ were not designed to evaluate the usability of mHealth apps. Self-written questionnaires are also commonly used to evaluate mHealth app usability, but they have not been validated. The goal of this project was to develop and validate a new mHealth app usability questionnaire. The mHealth App Usability Questionnaire (MAUQ) was designed by the research team based on a number of existing questionnaires used in previous mobile app usability studies, especially the well-validated ones. The MAUQ, SUS, and PSSUQ were then used to evaluate the usability of two mHealth apps: an interactive mHealth app and a standalone mHealth app. The reliability and validity of the new questionnaire were evaluated, and the correlation coefficients among the MAUQ, SUS, and PSSUQ were calculated. In this study, 128 participants provided responses to the questionnaire statements. Psychometric analysis indicated that the MAUQ has three subscales with high internal consistency reliability. The relevant subscales correlated well with the subscales of the PSSUQ, and the overall scale correlated strongly with both the PSSUQ and the SUS. Four versions of the MAUQ were created according to the type of app (interactive or standalone) and the target user (patient or provider). A website has been created to make it convenient for mHealth app developers to use this new questionnaire to assess the usability of their mHealth apps. The newly created mHealth app usability questionnaire, the MAUQ, has the reliability and validity required to assess mHealth app usability.
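The two core computations behind such a validation, internal consistency (Cronbach's alpha) and correlation between questionnaires' total scores, can be sketched as follows. The 7-point responses are simulated, not actual MAUQ/SUS data, and the 5-item subscale is hypothetical:

```python
# Reliability/validity computations of the kind reported in the study:
# Cronbach's alpha for a subscale and a Pearson correlation between two
# questionnaires' total scores. Responses are simulated 7-point ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
base = rng.normal(4, 1, (128, 1))                    # shared usability trait
mauq = np.clip(np.rint(base + rng.normal(0, 0.8, (128, 5))), 1, 7)
sus  = np.clip(np.rint(base + rng.normal(0, 1.0, (128, 10))), 1, 7)

print(f"alpha (5-item subscale): {cronbach_alpha(mauq):.2f}")
r = np.corrcoef(mauq.sum(axis=1), sus.sum(axis=1))[0, 1]
print(f"Pearson r between total scores: {r:.2f}")
```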
Prognosis in Moderate and Severe Traumatic Brain Injury: A Systematic Review of Contemporary Models and Validation Studies
Outcome prognostication in traumatic brain injury (TBI) is important but challenging due to the heterogeneity of the disease. The aim of this systematic review is to present the current state of the art in prognostic models for outcome after moderate and severe TBI, and the evidence on their validity. We searched for studies published between 2006 and 2018 that reported on the development, validation, or extension of prognostic models for functional outcome after TBI with Glasgow Coma Scale (GCS) score ≤12. Studies with patients aged ≥14 years that evaluated a multivariable prognostic model based on admission characteristics were included. Model discrimination was expressed as the area under the receiver operating characteristic curve (AUC), and model calibration as the calibration slope and intercept. We included 58 studies describing 67 different prognostic models, comprising the development of 42 models, 149 external validations of 31 models, and 12 model extensions. The most common predictors were GCS (motor) score (n = 55), age (n = 54), and pupillary reactivity (n = 48). Model discrimination varied substantially between studies. The International Mission for Prognosis and Analysis of Clinical Trials (IMPACT) and Corticosteroid Randomisation After Significant Head injury (CRASH) models were developed on the largest cohorts (8509 and 10,008 patients, respectively) and were most often externally validated (n = 91), yielding AUCs of 0.65–0.90 and 0.66–1.00, respectively. Model calibration was reported as a calibration intercept and slope for seven models in 53 validations and was highly variable. In conclusion, the discriminatory validity of the IMPACT and CRASH prognostic models is supported across a range of settings. The variation in calibration, reflecting heterogeneity in the reliability of predictions, motivates continuous validation and updating if clinical implementation is pursued.
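The AUC reported across these validations is equivalent, via its Mann-Whitney formulation, to the probability that a randomly chosen patient with the outcome receives a higher predicted risk than one without. A self-contained sketch on invented risks and outcomes:

```python
# AUC / c-statistic via the Mann-Whitney formulation, with ties counted
# half. Predicted risks and outcomes are invented for illustration.
import numpy as np

def auc_mann_whitney(y: np.ndarray, risk: np.ndarray) -> float:
    pos, neg = risk[y == 1], risk[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # all pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.3, 500)                       # unfavourable outcome
risk = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)

print(f"AUC: {auc_mann_whitney(y, risk):.3f}")      # 0.5 = chance, 1 = perfect
```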
Psychometric evaluation of the English version of the Düsseldorf Orthorexie Scale (DOS) and the prevalence of orthorexia nervosa among a U.S. student sample
Purpose Recently, the concept of orthorexia nervosa (ON) as a potential new variant of disordered eating behavior has gained popularity. However, published prevalence rates appear questionable given the lack of validity of the available questionnaires. The Düsseldorf Orthorexie Scale (DOS) is a validated questionnaire for measuring orthorexic behavior that was previously available only in German. Methods The DOS was translated into English using a back-translation process. Cronbach's alpha was used to establish internal consistency, and an intra-class correlation coefficient was calculated to assess reliability. The Eating Habits Questionnaire (EHQ) was used to test construct validity, and the Eating Disorders Inventory was used to test discriminant validity. Principal and confirmatory factor analyses were carried out to test the factor structure. The sample consisted of 384 university students in the U.S. Results The English (E)-DOS and the EHQ were highly correlated (r = 0.76, p < .001), indicating very good construct validity. Cronbach's alpha reached 0.88, indicating very good internal consistency. Confirmatory factor analyses revealed a poorly fitting one-factor model, but the standardized coefficients for the 10 items were good, ranging between 0.52 and 0.82. According to the E-DOS, 8.0% of the students exceeded the preliminary cut-off score, while an additional 12.4% would be considered at risk of developing ON. Conclusions The E-DOS appears to be a valid, reliable measure for assessing ON. The results revealed higher prevalence rates of orthorexic behavior among U.S. students than among German students; cultural differences could play a role. Level of evidence Descriptive study, Level V.
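Prevalence figures like the 8.0% and 12.4% above come from classifying total scale scores against cut-offs. The abstract does not give the DOS cut-off values, so both thresholds in this sketch are hypothetical, as are the simulated scores:

```python
# Cut-off-based prevalence from scale scores. The DOS cut-offs are not
# given in the abstract, so THRESHOLD_ON and THRESHOLD_RISK are
# hypothetical, as are the simulated total scores.
import numpy as np

THRESHOLD_ON, THRESHOLD_RISK = 30, 25     # hypothetical cut-offs

rng = np.random.default_rng(5)
scores = rng.normal(20, 5, 384).round()   # simulated total DOS scores

p_on = np.mean(scores >= THRESHOLD_ON)
p_risk = np.mean((scores >= THRESHOLD_RISK) & (scores < THRESHOLD_ON))

print(f"above cut-off (ON): {p_on:.1%}")
print(f"at-risk band:       {p_risk:.1%}")
```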