Catalogue Search | MBRL
26,403 result(s) for "Validation studies"
Validating psychological constructs : historical, philosophical, and practical dimensions
"This book critically examines the historical and philosophical foundations of construct validity theory (CVT), and how these have informed, and continue to inform and constrain, the conceptualization of validity and its application in research. CVT has had an immense impact on how researchers in the behavioural sciences conceptualize and approach their subject matter. Yet there is equivocation regarding the foundations of the CVT framework, as well as ambiguities concerning the nature of the 'constructs' that are its raison d'être. The book is organized in terms of three major parts that speak, respectively, to the historical, philosophical, and pragmatic dimensions of CVT. The primary objective is to provide researchers and students with a critical lens through which a deeper understanding may be gained of both the utility and limitations of CVT and the validation practices to which it has given rise." -- Back cover.
Prediction models need appropriate internal, internal–external, and external validation
2016
[...]we may consider more direct tests for heterogeneity in predictor effects by place or time. [...]fully independent external validation with data not available at the time of prediction model development can be important (Fig. 2).
Journal Article
Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures
by Regnault, Antoine; Moret, Leïla; Hardouin, Jean-Benoit
in Clinical medicine; Cross-Sectional Studies; Design
2014
Purpose
New patient reported outcome (PRO) measures are regularly developed to assess various aspects of the patients’ perspective on their disease and treatment. For these instruments to be useful in clinical research, they must undergo a proper psychometric validation, including demonstration of cross-sectional and longitudinal measurement properties. This quantitative evaluation requires a study to be conducted on an appropriate sample size. The aim of this research was to list and describe practices in PRO and proxy PRO primary psychometric validation studies, focusing primarily on the practices used to determine sample size.
Methods
A literature review of articles published in PubMed between January 2009 and September 2011 was conducted in three steps: a search strategy, an article selection strategy, and data extraction. Agreement between authors was assessed, and validation practices were described.
Results
Data were extracted from 114 relevant articles. Among these, sample size determination was rarely reported (9.6%, 11/114), and when it was, it took the form of an arbitrary minimum sample size (n = 2) or a subject-to-item ratio (n = 4), or the method was not explicitly stated (n = 5). Very few articles (4%, 5/114) compared their sample size a posteriori to a subject-to-item ratio. Content validity, construct validity, criterion validity, and internal consistency were the measurement properties most frequently assessed in the validation studies.
Approximately 92% of the articles reported a subject to item ratio greater than or equal to 2, whereas 25% had a ratio greater than or equal to 20. About 90% of articles had a sample size greater than or equal to 100, whereas 7% had a sample size greater than or equal to 1000.
Conclusions
The sample size of psychometric validation studies is rarely justified a priori. This underlines the lack of clear, scientifically sound recommendations on this topic. Existing methods to determine the sample size needed to assess the various measurement properties of interest should be made more easily available.
Journal Article
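The subject-to-item ratio discussed in the abstract above is straightforward to compute. The sketch below uses a hypothetical 25-item instrument and an illustrative target ratio of 10; neither value is a recommendation from the article.

```python
def subject_to_item_ratio(n_subjects: int, n_items: int) -> float:
    """Subject-to-item ratio, a common rule-of-thumb sample-size check."""
    return n_subjects / n_items

def min_subjects(n_items: int, ratio: float = 10.0) -> int:
    """Smallest sample meeting a target subject-to-item ratio."""
    return int(n_items * ratio)

# Hypothetical 25-item PRO instrument validated on 200 respondents
print(subject_to_item_ratio(200, 25))  # 8.0
print(min_subjects(25, ratio=10))      # 250
```

As the review notes, such ratios are at best heuristics; they do not replace an a priori justification tied to the measurement properties being assessed.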
Exploratory factor analysis in validation studies: Uses and recommendations
by Olea, Julio; Abad, Francisco; Izquierdo, Isabel
in Factor Analysis, Statistical; Guidelines as Topic; Validation studies
2014
The Exploratory Factor Analysis (EFA) procedure is one of the most commonly used in the social and behavioral sciences. However, it is also one of the most criticized, owing to how poorly researchers often apply and report it. The main goal is to examine the relationship between the practices usually considered most appropriate and the actual decisions made by researchers.
The use of exploratory factor analysis is examined in 117 papers published between 2011 and 2012 in 3 Spanish psychological journals with the highest impact within the previous five years.
Results show significant rates of questionable decisions in conducting EFA, based on unjustified or mistaken decisions regarding the method of extraction, retention, and rotation of factors.
Overall, the current review provides support for some improvement guidelines regarding how to apply and report an EFA.
Journal Article
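One retention rule whose unjustified mechanical use is among the practices such reviews criticize is the Kaiser eigenvalue-greater-than-one criterion. A minimal NumPy-only sketch on simulated data (the loadings and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses: two latent factors driving six observed items
latent = rng.normal(size=(300, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
X = latent @ loadings.T + 0.3 * rng.normal(size=(300, 6))

# Kaiser criterion: retain factors whose correlation-matrix
# eigenvalues exceed 1
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
n_retained = int((eigvals > 1.0).sum())
print(n_retained)  # → 2
```

Here the rule happens to recover the true number of factors, but on real data a retention decision should be justified rather than taken from any single automatic threshold.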
Don't be misled: 3 misconceptions about external validation of clinical prediction models
by Dunias, Zoë S.; de Hond, Anne; Kant, Ilse
in Artificial intelligence; Clinical algorithm; Clinical prediction model
2024
Clinical prediction models provide risks of health outcomes that can inform patients and support medical decisions. However, most models never make it to actual implementation in practice. A commonly heard reason for this lack of implementation is that prediction models are often not externally validated. While we generally encourage external validation, we argue that an external validation is often neither sufficient nor required as an essential step before implementation. As such, any available external validation should not be perceived as a license for model implementation. We clarify this argument by discussing 3 common misconceptions about external validation. We argue that there is not one type of recommended validation design, not always a necessity for external validation, and sometimes a need for multiple external validations. The insights from this paper can help readers to consider, design, interpret, and appreciate external validation studies.
Journal Article
COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study
2018
Background
Content validity is the most important measurement property of a patient-reported outcome measure (PROM) and the most challenging to assess. Our aims were to: (1) develop standards for evaluating the quality of PROM development; (2) update the original COSMIN standards for assessing the quality of content validity studies of PROMs; (3) develop criteria for what constitutes good content validity of PROMs; and (4) develop a rating system for summarizing the evidence on a PROM's content validity and grading the quality of the evidence in systematic reviews of PROMs.
Methods
An online 4-round Delphi study was performed among 159 experts from 21 countries. Panelists rated the degree to which they (dis)agreed with proposed standards, criteria, and rating issues on 5-point rating scales ('strongly disagree' to 'strongly agree'), and provided arguments for their ratings.
Results
Discussion focused on sample size requirements, recording and field notes, transcribing cognitive interviews, and data coding. After four rounds, the required 67% consensus was reached on all standards, criteria, and rating issues. After pilot testing, the steering committee made some final changes. Ten criteria for good content validity were defined regarding item relevance, appropriateness of response options and recall period, comprehensiveness, and comprehensibility of the PROM.
Discussion
The consensus-based COSMIN methodology for content validity is more detailed, standardized, and transparent than earlier published guidelines, including the previous COSMIN standards. This methodology can contribute to the selection and use of high-quality PROMs in research and clinical practice.
Journal Article
A guide to systematic review and meta-analysis of prediction model performance
by Reitsma, Johannes B; Debray, Thomas P A; Moons, Karel G M
in Case studies; Content analysis; Coronary artery
2017
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.
Journal Article
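The quantitative summary step described above can be sketched numerically. Below is an illustrative DerSimonian-Laird random-effects pooling of C-statistics on the logit scale, a common choice when meta-analysing discrimination; the five studies and their standard errors are invented, not the EuroSCORE data from the article.

```python
import numpy as np

def random_effects_logit_c(c_stats, ses):
    """DerSimonian-Laird random-effects pooling of validation
    C-statistics on the logit scale."""
    y = np.log(c_stats / (1 - c_stats))          # logit transform
    se_y = ses / (c_stats * (1 - c_stats))       # delta-method SE on logit scale
    w = 1 / se_y**2
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (se_y**2 + tau2)                # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-y_pooled))           # back-transform

# Hypothetical C-statistics from five external validation studies
c = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
se = np.array([0.02, 0.03, 0.025, 0.02, 0.03])
pooled = random_effects_logit_c(c, se)
print(round(pooled, 3))
```

Pooling on the logit scale keeps the summary estimate inside (0, 1) and tends to make the sampling distribution more symmetric than pooling raw C-statistics.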
External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges
by Altman, Doug G; Debray, Thomas P A; Collins, Gary S
in Calibration; Clinical medicine; Consortia
2016
Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is signalling a new era of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
Journal Article
A new framework to enhance the interpretation of external validation studies of clinical prediction models
by Nieboer, Daan; Debray, Thomas P.A.; Steyerberg, Ewout W.
in Case mix; Data Interpretation, Statistical; Epidemiology
2015
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies, using independent data from samples that are "different but related" to the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models.
We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting.
We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings.
The proposed framework enhances the interpretation of findings at external validation of prediction models.
Journal Article
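One way to quantify the relatedness of development and validation samples is a "membership model": a classifier that tries to tell the two samples apart, whose C-statistic indexes the case-mix difference. The sketch below uses simulated, deliberately shifted predictor data; the shift of 0.7 and the interpretation thresholds are illustrative assumptions, not values from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical predictor data: validation sample shifted in case mix
X_dev = rng.normal(loc=0.0, size=(500, 3))
X_val = rng.normal(loc=0.7, size=(500, 3))

# Membership model: predict which sample an individual belongs to.
# A C-statistic near 0.5 suggests a reproducibility setting (similar
# case mix); values nearer 1 suggest a transportability setting.
X = np.vstack([X_dev, X_val])
member = np.r_[np.zeros(500), np.ones(500)]
m = LogisticRegression().fit(X, member)
c_stat = roc_auc_score(member, m.predict_proba(X)[:, 1])
print(round(c_stat, 2))
```

The membership C-statistic only describes where the validation sample sits on the reproducibility-to-transportability scale; the model's predictive performance in the validation sample is still assessed separately.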
The mHealth App Usability Questionnaire (MAUQ): Development and Validation Study
2019
After a mobile health (mHealth) app is created, an important step is to evaluate its usability before it is released to the public. There are multiple ways of conducting a usability study, one of which is collecting target users' feedback with a usability questionnaire. Different groups have used different questionnaires for mHealth app usability evaluation: the most commonly used are the System Usability Scale (SUS) and the Post-Study System Usability Questionnaire (PSSUQ). However, the SUS and PSSUQ were not designed to evaluate the usability of mHealth apps. Self-written questionnaires are also commonly used to evaluate mHealth app usability, but they have not been validated.
The goal of this project was to develop and validate a new mHealth app usability questionnaire.
An mHealth app usability questionnaire (MAUQ) was designed by the research team based on a number of existing questionnaires used in previous mobile app usability studies, especially the well-validated questionnaires. MAUQ, SUS, and PSSUQ were then used to evaluate the usability of two mHealth apps: an interactive mHealth app and a standalone mHealth app. The reliability and validity of the new questionnaire were evaluated. The correlation coefficients among MAUQ, SUS, and PSSUQ were calculated.
In this study, 128 study participants provided responses to the questionnaire statements. Psychometric analysis indicated that the MAUQ has three subscales and their internal consistency reliability is high. The relevant subscales correlated well with the subscales of the PSSUQ. The overall scale also strongly correlated with the PSSUQ and SUS. Four versions of the MAUQ were created in relation to the type of app (interactive or standalone) and target user of the app (patient or provider). A website has been created to make it convenient for mHealth app developers to use this new questionnaire in order to assess the usability of their mHealth apps.
The newly created mHealth app usability questionnaire, the MAUQ, has the reliability and validity required to assess mHealth app usability.
Journal Article
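Internal-consistency reliability of questionnaire subscales like those of the MAUQ is typically quantified with Cronbach's alpha (the abstract does not name the exact index, so this is an assumption). A self-contained sketch with made-up Likert responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix,
    the usual internal-consistency index for questionnaire subscales."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses from five respondents to three items
responses = [[5, 4, 5],
             [3, 3, 4],
             [6, 5, 6],
             [2, 3, 2],
             [7, 6, 7]]
print(round(cronbach_alpha(responses), 2))  # → 0.97
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate threshold depends on the scale's intended use.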