Catalogue Search | MBRL
6,198 result(s) for "Performance validity"
Cross-Validating the Atypical Response Scale of the TSI-2 in a Sample of Motor Vehicle Collision Survivors
by Erdodi, Laszlo; Giromini, Luciano; Watson, Mark
in Accuracy, Behavioral Science and Psychology, Classification
2023
This study was designed to evaluate the utility of the Atypical Responses (ATR) scale of the Trauma Symptom Inventory – Second Edition (TSI-2) as a symptom validity test (SVT) in a medicolegal sample. Archival data were collected from a consecutive case sequence of 99 patients referred for neuropsychological evaluation following a motor vehicle collision. The ATR's classification accuracy was computed against criterion measures consisting of composite indices based on SVTs and performance validity tests (PVTs). An ATR cutoff of ≥ 9 emerged as the optimal cutoff, producing a good combination of sensitivity (.35–.53) and specificity (.92–.95) to the criterion SVT, correctly classifying 71–79% of the sample. Predictably, classification accuracy was lower against PVTs as criterion measures (.26–.37 sensitivity at .90–.93 specificity, correctly classifying 66–69% of the sample). The originally proposed ATR cutoff (≥ 15) was prohibitively conservative, resulting in a 90–95% false negative rate. In contrast, although the more liberal alternative (≥ 8) fell short of the specificity standard (.89), it was associated with notably higher sensitivity (.43–.68) and the highest overall classification accuracy (71–82% of the sample). Non-credible symptom report was a stronger confound on the posttraumatic stress scale of the TSI-2 than on that of the Personality Assessment Inventory. The ATR demonstrated its clinical utility in identifying non-credible symptom report (and, to a lesser extent, invalid performance) in a medicolegal setting, with ≥ 9 emerging as the optimal cutoff. It also demonstrated its potential to serve as a quick (potentially stand-alone) screener for the overall credibility of neuropsychological deficits. More research is needed in patients with different clinical characteristics assessed in different settings to establish the generalizability of the findings.
Journal Article
Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates
by
Peters, Maarten J. V.
,
Dandachi-FitzGerald, Brechje
,
Roor, Jeroen J.
in
Biomedical and Life Sciences
,
Biomedicine
,
Humans
2024
Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and utilization of stand-alone and well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. Pooled base rate of PVT failure for all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I² = 91%; τ² = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research is necessary with more detailed recruitment procedures and sample descriptions to further improve the accuracy of the base rate of PVT failure in clinical practice.
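The clinically applied statistics this abstract mentions follow directly from Bayes' theorem. The sketch below is illustrative only: it uses the 16% pooled base rate reported above with a hypothetical PVT at .62 sensitivity and .88 specificity; the function name is mine, not from any of the cited papers.

```python
# How a PVT's sensitivity/specificity combine with the base rate of
# invalid performance to yield predictive values (Bayes' theorem).

def predictive_values(base_rate: float, sensitivity: float, specificity: float):
    """Return (PPV, NPV) for a test applied at a given base rate."""
    tp = sensitivity * base_rate              # true-positive mass
    fp = (1 - specificity) * (1 - base_rate)  # false-positive mass
    fn = (1 - sensitivity) * base_rate        # false-negative mass
    tn = specificity * (1 - base_rate)        # true-negative mass
    ppv = tp / (tp + fp)  # P(invalid | PVT failure)
    npv = tn / (tn + fn)  # P(valid | PVT pass)
    return ppv, npv

# Illustrative values: 16% pooled base rate (this meta-analysis),
# hypothetical PVT with .62 sensitivity and .88 specificity.
ppv, npv = predictive_values(0.16, 0.62, 0.88)
print(round(ppv, 3), round(npv, 3))  # 0.496 0.924
```

At that base rate roughly half of PVT failures would reflect valid performance, which is the abstract's point about why context-specific base rates matter.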
Journal Article
A UK Perspective on Pain and Atypical Performance - When the Maths doesn’t Add up!
2022
This presentation provides an overview of factors that can cause symptom exaggeration and/or fabrication in chronic pain. It will explore how symptom and performance validity tests can be applied to chronic pain in the context of a malingering framework and the problems of implementing this in the UK through a case example.
Journal Article
A Lowlands Perspective on Exaggeration and Feigned Symptoms
2022
Some patients present symptoms in an exaggerated manner [1,2]. This behavior can be assessed with specialized tests: symptom validity tests (SVTs) to measure overreporting of symptoms, and performance validity tests (PVTs) to measure underperformance on cognitive tests. But what does it mean when patients fail on multiple SVTs and/or PVTs? Does it reflect malingering, i.e., grossly exaggerating or feigning symptoms to gain an external benefit? Could it be seen as a plea for help in some cases? Or could pain, fatigue, or cognitive impairment be underlying reasons for the validity test failures? In this presentation some credible and non-credible explanations for failing validity tests will be discussed, and a tentative framework that might aid in conceptualizing poor symptom validity will be presented.
References:
[1] Dandachi-FitzGerald, B., Merckelbach, H., Bošković, I., & Jelicic, M. (2020). Do you know people who feign? Proxy respondents about feigned symptoms. Psychological Injury and Law, 13, 225–234.
[2] Merckelbach, H., Dandachi-FitzGerald, B., van Helvoort, D., Jelicic, M., & Otgaar, H. (2019). When patients overreport symptoms: More than just malingering. Current Directions in Psychological Science, 28, 321–326.
Journal Article
The Relationship Between Cognitive Functioning and Symptoms of Depression, Anxiety, and Post-Traumatic Stress Disorder in Adults with a Traumatic Brain Injury: a Meta-Analysis
2022
A thorough understanding of the relationship between cognitive test performance and symptoms of depression, anxiety, or post-traumatic stress disorder (PTSD) in people with traumatic brain injury (TBI) is important given the high prevalence of these emotional symptoms following injury. It is also important to understand whether these relationships are affected by TBI severity and by the validity of test performance and symptom report. This meta-analysis was conducted to investigate whether these symptoms are associated with cognitive test performance alterations in adults with a TBI. It was prospectively registered on the PROSPERO International Prospective Register of Systematic Reviews website (registration number: CRD42018089194). The electronic databases Medline, PsycINFO, and CINAHL were searched for journal articles published up until May 2020. In total, 61 studies were included, which enabled calculation of pooled effect sizes for the cognitive domains of immediate memory (verbal and visual), recent memory (verbal and visual), attention, executive function, processing speed, and language. Depression had a small, negative relationship with most cognitive domains. These relationships remained, for the most part, when samples with mild TBI (mTBI)-only were analysed separately, but not for samples with more severe TBI (sTBI)-only. A similar pattern of results was found in the anxiety analysis. PTSD had a small, negative relationship with verbal memory in samples with mTBI-only. No data were available for the PTSD analysis with sTBI samples. Moderator analyses indicated that the relationships between emotional symptoms and cognitive test performance may be impacted to some degree by exclusion of participants with atypical performance on performance validity tests (PVTs) or symptom validity tests (SVTs); however, there were small study numbers and changes in effect size were not statistically significant.
These findings are useful in synthesising what is currently known about the relationship between cognitive test performance and emotional symptoms in adults with TBI, demonstrating significant, albeit small, relationships between emotional symptoms and cognitive test performance in multiple domains, in non-military samples. Some of these relationships appeared to be mildly impacted by controlling for performance validity or symptom validity; however, this was based on the relatively few studies using validity tests. More research including PVTs and SVTs whilst examining the relationship between emotional symptoms and cognitive outcomes is needed.
Journal Article
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study
by
Resch, Zachary J
,
Soble, Jason R
,
Bernstein, Matthew T
in
Accuracy
,
Dementia disorders
,
Population studies
2021
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints is the limited consensus on the most accurate index within the VSVT and the most appropriate cut-scores within each VSVT validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs; .73–.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits due to concerns for increased risk of false positives.
Journal Article
Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues
2023
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the “transparent reporting of multivariate prediction models for individual prognosis or diagnosis” (TRIPOD) in the malingering literature.
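The review's first recommendation, reporting a full set of classification accuracy statistics with 95% confidence intervals, can be sketched as follows. The counts and the Wilson-interval helper below are illustrative assumptions of mine, not data or code from the review.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def confusion_stats(tp: int, fp: int, fn: int, tn: int):
    """Core confusion-table statistics with 95% CIs on the proportions."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": (sens, wilson_ci(tp, tp + fn)),
        "specificity": (spec, wilson_ci(tn, tn + fp)),
        "LR+": sens / (1 - spec),  # positive likelihood ratio
        "LR-": (1 - sens) / spec,  # negative likelihood ratio
    }

# Hypothetical counts from a single validation study
stats = confusion_stats(tp=31, fp=8, fn=19, tn=92)
```

Reporting the intervals alongside the point estimates makes the uncertainty in small validation samples visible, which is the substance of recommendation (a).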
Journal Article
Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II—Methodological Issues
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication, to assess risk of bias. Findings include that analog studies are commonly confused with forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that every extant TOMM validation study had a high risk of bias.
Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.
Journal Article
Reliability and validity of the Chronojump open-source jump mat system
by Jimenez-Olmedo, Jose; Penichet-Tomas, Alfonso; Pueo, Basilio
in instrument validity performance flight time swc vertical jump lower limb, Original Paper
2020
Vertical jump performance is a commonly used test to measure lower-limb muscle power that is carried out with several types of equipment. The aim of this study was to validate an open-source jump mat (Chronojump Boscosystems) against a proprietary jump mat (Globus Ergo Tester). Sixty-three active sportsmen (age 23.3 ± 2.4 years) completed 8 maximal-effort countermovement jumps (CMJ). The heights of the 504 CMJ were measured from the two jump mats simultaneously. Reliability was examined with intra-class correlation coefficients (ICC), paired samples t-tests, coefficient of variation (CV) and Cronbach's α. Bivariate Pearson's correlation coefficient (r) was used to examine validity. Effects were evaluated using non-clinical magnitude-based inference. There was almost perfect agreement between instruments (ICC = 0.999−1.000, most likely positive 100/0/0). Paired t-test showed a mean difference of 0.03 ± 0.21 cm (90% CI −0.04 to −0.01) between
Journal Article
Malingering-Related Assessments in Psychological Injury: Performance Validity Tests (PVTs), Symptom Validity Tests (SVTs), and Invalid Response Set
by Erdodi, Laszlo; Giromini, Luciano; Young, Gerald
in Accuracy, Behavioral Science and Psychology, Classification
2025
The field of psychological injury and law is marked by the use of psychometrically sound validity tests that use empirically derived cut scores to determine the credibility of cognitive deficits and psychological symptoms in forensic and related disability assessments (FDRA). Performance validity tests (PVTs) are used in neuropsychological/cognitive assessments to determine the extent to which test scores reflect true ability levels. Symptom validity tests (SVTs) are designed to evaluate the credibility of self-reported behaviors, emotions, and thoughts by monitoring the rate of endorsement of rare, absurd, impossible, and improbable symptoms. The authors argue for a 30% rule as a tentative multivariate threshold for invalid presentation (with provisos): failure on about one third of the PVTs/SVTs administered should be required before deeming the overall profile non-credible, to control for the threat of inflated false positive error as the number of instruments used increases. Typically, workers in the field use a multivariate threshold of ≥ 2 PVT failures in FDRA to deem an entire profile invalid, without considering the number of tests administered. The proposed 30% rule accommodates this face validity question. It is tentatively proposed as a starting point for future research and, with sufficient empirical support, a general guideline for FDRA.
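As described in the abstract, the 30% rule reduces to a simple ceiling computation over the number of tests administered. The sketch below is my own reading of the rule as stated; the function name is illustrative and not the authors' code.

```python
import math

def failures_required(tests_administered: int, proportion: float = 0.30) -> int:
    """Minimum number of PVT/SVT failures before deeming a profile
    non-credible under the proposed 30% rule (about one third of the
    tests administered)."""
    return math.ceil(proportion * tests_administered)

# With small batteries the rule roughly matches the common >= 2 failure
# threshold; with large batteries it scales up instead of staying fixed.
for n in (5, 7, 10, 15):
    print(n, failures_required(n))  # 5→2, 7→3, 10→3, 15→5
```

This shows why the authors frame the rule as a false-positive control: under a fixed ≥ 2 threshold, administering more tests only raises the chance of chance failures, whereas the proportional threshold keeps pace with battery size.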
Journal Article