Search Results

2,635 results for "Validation Studies as Topic"
Validating psychological constructs: historical, philosophical, and practical dimensions
\"This book critically examines the historical and philosophical foundations of construct validity theory (CVT), and how these have and continue to inform and constrain the conceptualization of validity and its application in research. CVT has had an immense impact on how researchers in the behavioural sciences conceptualize and approach their subject matter. Yet, there is equivocation regarding the foundations of the CVT framework as well as ambiguities concerning the nature of the 'constructs' that are its raison d'etre. The book is organized in terms of three major parts that speak, respectively, to the historical, philosophical, and pragmatic dimensions of CVT. The primary objective is to provide researchers and students with a critical lens through which a deeper understanding may be gained of both the utility and limitations of CVT and the validation practices to which it has given rise.\"-- Back cover.
Prediction models need appropriate internal, internal–external, and external validation
[...] we may consider more direct tests for heterogeneity in predictor effects by place or time. [...] fully independent external validation with data not available at the time of prediction model development can be important (Fig. 2).
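For readers new to the terminology, the sketch below illustrates what an internal-external validation loop can look like: the model is refit with each cluster (e.g. hospital, region, or time period) held out in turn and evaluated on the omitted cluster. This is a generic Python illustration, not code from the article; the logistic model, the AUC metric, and all variable names are assumptions.

```python
# Illustrative internal-external cross-validation (not the article's code).
# Assumes X, y are NumPy arrays and `cluster` labels each row with its
# site/region/period; each held-out cluster needs both outcome classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def internal_external_cv(X, y, cluster):
    """Refit on all-but-one cluster, test on the held-out cluster."""
    aucs = {}
    for c in np.unique(cluster):
        train, test = cluster != c, cluster == c
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs[c] = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    return aucs  # the spread of AUCs hints at heterogeneity across clusters
```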
External validation of clinical prediction models using big datasets from e-health records or IPD meta-analysis: opportunities and challenges
Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is heralding a new wave of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
A guide to systematic review and meta-analysis of prediction model performance
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.
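As a concrete illustration of the pooling step, the sketch below meta-analyses study-level C-statistics on the logit scale with a DerSimonian-Laird random-effects model, one common approach in this literature. It is a minimal sketch under assumed inputs (each validation study's C-statistic and standard error), not the article's own procedure.

```python
# Hedged sketch: random-effects pooling of C-statistics (DerSimonian-Laird).
import numpy as np

def pool_c_statistics(c, se_c):
    """Pool study C-statistics on the logit scale; return pooled C and tau^2."""
    c, se_c = np.asarray(c, float), np.asarray(se_c, float)
    theta = np.log(c / (1 - c))              # logit-transformed C-statistics
    se = se_c / (c * (1 - c))                # delta-method SE on logit scale
    w = 1.0 / se**2                          # inverse-variance (fixed) weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fe) ** 2)  # Cochran's Q heterogeneity
    k = len(c)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    theta_re = np.sum(w_re * theta) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-theta_re)), tau2  # back-transformed pooled C

# Example: pool_c_statistics([0.75, 0.68, 0.80], [0.02, 0.03, 0.025])
```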
Exploratory factor analysis in validation studies: Uses and recommendations
Exploratory Factor Analysis (EFA) is one of the most commonly used procedures in the social and behavioral sciences. However, it is also one of the most criticized, owing to how poorly researchers often apply it. The main goal of this study is to examine the relationship between practices usually considered more appropriate and the actual decisions made by researchers. The use of exploratory factor analysis is examined in 117 papers published between 2011 and 2012 in the 3 Spanish psychological journals with the highest impact over the previous five years. Results show significant rates of questionable decisions in conducting EFA, based on unjustified or mistaken choices regarding the method of extraction, retention, and rotation of factors. Overall, the current review provides support for improvement guidelines on how to apply and report an EFA.
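One of the retention decisions criticized in such reviews is reliance on the eigenvalue-greater-than-1 default. The sketch below shows Horn's parallel analysis, a commonly recommended alternative for choosing the number of factors; it is an illustrative implementation, not code from the reviewed papers.

```python
# Illustrative Horn's parallel analysis for factor retention.
import numpy as np

def parallel_analysis(data, n_sims=1000, seed=0):
    """Count factors whose eigenvalue beats the 95th pct of random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))       # random data, same shape
        sims[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(sims, 95, axis=0)
    n_retain = 0
    for observed, simulated in zip(obs, threshold):
        if observed > simulated:
            n_retain += 1  # retain while observed eigenvalues keep winning
        else:
            break
    return n_retain
```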
Defining Feasibility and Pilot Studies in Preparation for Randomised Controlled Trials: Development of a Conceptual Framework
We describe a framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial. To develop the framework, we undertook a Delphi survey; ran an open meeting at a trial methodology conference; conducted a review of definitions outside the health research context; consulted experts at an international consensus meeting; and reviewed 27 empirical pilot or feasibility studies. We initially adopted mutually exclusive definitions of pilot and feasibility studies. However, some Delphi survey respondents and the majority of open meeting attendees disagreed with the idea of mutually exclusive definitions. Their viewpoint was supported by definitions outside the health research context, the use of the terms 'pilot' and 'feasibility' in the literature, and participants at the international consensus meeting. In our framework, pilot studies are a subset of feasibility studies, rather than the two being mutually exclusive. A feasibility study asks whether something can be done, should we proceed with it, and if so, how. A pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale. We suggest that to facilitate their identification, these studies should be clearly identified using the terms 'feasibility' or 'pilot' as appropriate. This should include feasibility studies that are largely qualitative; we found these difficult to identify in electronic searches because researchers rarely used the term 'feasibility' in the title or abstract of such studies. Investigators should also report appropriate objectives and methods related to feasibility; and give clear confirmation that their study is in preparation for a future randomised controlled trial designed to assess the effect of an intervention.
Validation of Suitable Housekeeping Genes for the Normalization of mRNA Expression for Studying Tumor Acidosis
Similar to other types of cancer, acidification of the tumor microenvironment is an important feature of osteosarcoma and a major source of cellular stress that triggers cancer aggressiveness, drug resistance, and progression. Among the different effects of low extracellular pH on tumor cells, we have recently found that short-term exposure to acidosis strongly affects gene expression. This alteration might also occur for the most commonly used housekeeping genes (HKG), thereby causing erroneous interpretation of RT-qPCR data. On this basis, using osteosarcoma cells cultured at different pH values, we aimed to identify the ideal HKG to be considered in studies on tumor-associated acidosis. We verified the stability of 15 commonly used HKG through five algorithms (NormFinder, geNorm, BestKeeper, ΔCT, coefficient of variation) and found that no single universal HKG is suitable, since at least four HKG are necessary for proper normalization. Furthermore, according to the acceptable range of values, YWHAZ, GAPDH, GUSB, and 18S rRNA were the most stable reference genes across the different pH values. Our results will be helpful for future investigations focusing on the effect of an altered microenvironment on cancer behavior, particularly on the effectiveness of anticancer therapies under acidic conditions.
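Two of the five stability measures named in the abstract are straightforward to compute. The sketch below implements the coefficient of variation and the comparative ΔCT method for a genes-by-samples matrix of Cq values; it is an illustrative rendering of those published measures, with assumed input shapes, not the authors' code.

```python
# Illustrative reference-gene stability measures (CV and comparative delta-Ct).
import numpy as np
from itertools import combinations

def cv_per_gene(cq):
    """Coefficient of variation of each gene's Cq values across samples."""
    return cq.std(axis=1, ddof=1) / cq.mean(axis=1)

def delta_ct_stability(cq):
    """Comparative delta-Ct: mean SD of pairwise Cq differences per gene."""
    n_genes = cq.shape[0]
    sd = np.zeros((n_genes, n_genes))
    for i, j in combinations(range(n_genes), 2):
        s = np.std(cq[i] - cq[j], ddof=1)   # SD of delta-Ct across samples
        sd[i, j] = sd[j, i] = s
    return sd.sum(axis=1) / (n_genes - 1)   # lower mean SD = more stable gene
```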
Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures
Purpose: New patient-reported outcome (PRO) measures are regularly developed to assess various aspects of patients' perspectives on their disease and treatment. For these instruments to be useful in clinical research, they must undergo proper psychometric validation, including demonstration of cross-sectional and longitudinal measurement properties. This quantitative evaluation requires a study conducted on an appropriate sample size. The aim of this research was to list and describe practices in PRO and proxy-PRO primary psychometric validation studies, focusing primarily on the practices used to determine sample size. Methods: A literature review of articles published in PubMed between January 2009 and September 2011 was conducted. The review procedure comprised a search strategy, an article selection strategy, and data extraction. Agreement between authors was assessed, and validation practices were described. Results: Data were extracted from 114 relevant articles. Within these, sample size determination was rare (9.6%, 11/114) and was based on either an arbitrary minimum sample size (n = 2), a subject-to-item ratio (n = 4), or a method that was not explicitly stated (n = 5). Very few articles (4%, 5/114) compared their sample size a posteriori to a subject-to-item ratio. Content validity, construct validity, criterion validity, and internal consistency were the measurement properties most frequently assessed in the validation studies. Approximately 92% of the articles reported a subject-to-item ratio greater than or equal to 2, whereas 25% had a ratio greater than or equal to 20. About 90% of articles had a sample size greater than or equal to 100, whereas 7% had a sample size greater than or equal to 1000. Conclusions: The sample size for psychometric validation studies is rarely justified a priori. This emphasizes the lack of clear, scientifically sound recommendations on this topic. Existing methods to determine the sample size needed to assess the various measurement properties of interest should be made more easily available.
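To make the subject-to-item ratios above concrete, the toy sketch below computes the sample size implied by several common rules of thumb for a scale of a given length; the ratios are conventions from the wider literature, not recommendations from this article.

```python
# Toy sketch: sample sizes implied by subject-to-item rules of thumb.
def required_n(n_items, ratios=(2, 5, 10, 20)):
    """Sample size implied by each subject-to-item ratio."""
    return {f"{r}:1": r * n_items for r in ratios}

# Example: required_n(30) -> {'2:1': 60, '5:1': 150, '10:1': 300, '20:1': 600}
```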
COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study
Background Content validity is the most important measurement property of a patient-reported outcome measure (PROM) and the most challenging to assess. Our aims were to: (1) develop standards for evaluating the quality of PROM development; (2) update the original COSMIN standards for assessing the quality of content validity studies of PROMs; (3) develop criteria for what constitutes good content validity of PROMs, and (4) develop a rating system for summarizing the evidence on a PROM's content validity and grading the quality of the evidence in systematic reviews of PROMs. Methods An online 4-round Delphi study was performed among 159 experts from 21 countries. Panelists rated the degree to which they (dis)agreed to proposed standards, criteria, and rating issues on 5-point rating scales ('strongly disagree' to 'strongly agree'), and provided arguments for their ratings. Results Discussion focused on sample size requirements, recording and field notes, transcribing cognitive interviews, and data coding. After four rounds, the required 67% consensus was reached on all standards, criteria, and rating issues. After pilot-testing, the steering committee made some final changes. Ten criteria for good content validity were defined regarding item relevance, appropriateness of response options and recall period, comprehensiveness, and comprehensibility of the PROM. Discussion The consensus-based COSMIN methodology for content validity is more detailed, standardized, and transparent than earlier published guidelines, including the previous COSMIN standards. This methodology can contribute to the selection and use of high-quality PROMs in research and clinical practice.
A new framework to enhance the interpretation of external validation studies of clinical prediction models
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from samples that are "different but related" to the development sample. We developed a framework of methodological steps and statistical methods for analyzing, and enhancing the interpretation of, results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their case-mix differences. We subsequently assess the model's performance in the validation sample and interpret that performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis, using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at the external validation of prediction models.
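One practical way to render the relatedness step is a "membership model": a model that tries to distinguish development records from validation records, where a c-statistic near 0.5 suggests similar case mix (a reproducibility setting) and higher values suggest transportability. The sketch below is a minimal Python rendering of that idea under assumed inputs, not the authors' implementation.

```python
# Hedged sketch of a membership model for case-mix relatedness.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def membership_c_statistic(X_dev, X_val):
    """C-statistic of a model separating development from validation rows."""
    X = np.vstack([X_dev, X_val])
    member = np.r_[np.zeros(len(X_dev)), np.ones(len(X_val))]
    model = LogisticRegression(max_iter=1000).fit(X, member)
    # In-sample estimate; a cross-validated AUC would be less optimistic.
    return roc_auc_score(member, model.predict_proba(X)[:, 1])
```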