Catalogue Search | MBRL
385,678 result(s) for "Diagnostic test"
Delirium detection in older acute medical inpatients: a multicentre prospective comparative diagnostic test accuracy study of the 4AT and the confusion assessment method
by Goodacre, Steve; Stephen, Jacqueline; Weir, Christopher J.
in Acute Disease; Aged; Aged, 80 and over
2019
Background
Delirium affects > 15% of hospitalised patients but is grossly underdetected, contributing to poor care. The 4 ‘A’s Test (4AT, www.the4AT.com) is a short delirium assessment tool designed for routine use without special training. The primary objective was to assess the accuracy of the 4AT for delirium detection. The secondary objective was to compare the 4AT with another commonly used delirium assessment tool, the Confusion Assessment Method (CAM).
Methods
This was a prospective diagnostic test accuracy study set in emergency departments or acute medical wards involving acute medical patients aged ≥ 70. All those without acutely life-threatening illness or coma were eligible. Patients (1) underwent reference standard delirium assessment based on DSM-IV criteria and (2) were randomised to either the index test (4AT, scores 0–12; prespecified score of > 3 considered positive) or the comparator (CAM; scored positive or negative), in a random order, using computer-generated pseudo-random numbers, stratified by study site, with block allocation. Reference standard and 4AT or CAM assessments were performed by pairs of independent raters blinded to the results of the other assessment.
Results
Eight hundred and forty-three individuals were randomised: 21 withdrew, 3 were lost to contact, 32 had an indeterminate diagnosis, 2 had missing outcomes, and 785 were included in the analysis. Mean age was 81.4 (SD 6.4) years. By reference standard assessment, 12.1% (95/785) had delirium; 14.3% (56/392) were positive by 4AT and 4.7% (18/384) by CAM. The 4AT had an area under the receiver operating characteristic curve of 0.90 (95% CI 0.84–0.96). The 4AT had a sensitivity of 76% (95% CI 61–87%) and a specificity of 94% (95% CI 92–97%). The CAM had a sensitivity of 40% (95% CI 26–57%) and a specificity of 100% (95% CI 98–100%).
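For context on how figures like these are derived: sensitivity, specificity, and their Wilson score confidence intervals follow from 2 × 2 confusion-matrix counts. A minimal sketch in Python (the counts below are hypothetical illustrations, not taken from the study):

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 45 cases of which 34 were detected; 347 non-cases
# of which 21 were falsely flagged positive.
sens, spec = sens_spec(tp=34, fn=11, tn=326, fp=21)
lo, hi = wilson_ci(34, 45)  # interval around the sensitivity estimate
```

Note that the Wilson interval, unlike the naive Wald interval, stays inside [0, 1] and behaves sensibly for small samples, which is why diagnostic accuracy studies commonly report it.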
Conclusions
The 4AT is a short, pragmatic tool that can help improve detection of delirium in routine clinical care.
Trial registration
International Standard Randomised Controlled Trial Number (ISRCTN) 53388093. Date applied 30/05/2014; date assigned 02/06/2014.
Journal Article
Comparison of four commercial, automated antigen tests to detect SARS-CoV-2 variants of concern
by Blum, Helmut; Späth, Patricia; Kaderali, Lars
in Amino acids; Antigens; Antigens, Viral - analysis
2021
A versatile portfolio of diagnostic tests is essential for the containment of the severe acute respiratory syndrome coronavirus type 2 (SARS-CoV-2) pandemic. Besides nucleic acid-based test systems and point-of-care (POCT) antigen (Ag) tests, quantitative, laboratory-based nucleocapsid Ag tests for SARS-CoV-2 have recently been launched. Here, we evaluated four commercial Ag tests on automated platforms and one POCT to detect SARS-CoV-2. We evaluated PCR-positive (n = 107) and PCR-negative (n = 303) respiratory swabs from asymptomatic and symptomatic patients at the end of the second pandemic wave in Germany (February–March 2021), as well as clinical isolates EU1 (B.1.117), variant of concern (VOC) Alpha (B.1.1.7) or Beta (B.1.351), which had been expanded in a biosafety level 3 laboratory. The specificities of automated SARS-CoV-2 Ag tests ranged between 97.0 and 99.7% (Lumipulse G SARS-CoV-2 Ag (Fujirebio): 97.03%; Elecsys SARS-CoV-2 Ag (Roche Diagnostics): 97.69%; LIAISON® SARS-CoV-2 Ag (Diasorin) and SARS-CoV-2 Ag ELISA (Euroimmun): 99.67%). In this study cohort of hospitalized patients, the clinical sensitivities of the tests were low, ranging from 17.76 to 52.34%, and analytical sensitivities ranged from 420,000 to 25,000,000 Geq/ml. In comparison, the detection limit of the Roche Rapid Ag Test (RAT) was 9,300,000 Geq/ml, detecting 23.58% of respiratory samples. Receiver operating characteristic (ROC) and Youden’s index analyses were performed to further characterize the assays’ overall performance and determine optimal assay cutoffs for sensitivity and specificity. VOCs carrying up to four amino acid mutations in nucleocapsid were detected by all five assays with characteristics comparable to non-VOCs. In summary, automated, quantitative SARS-CoV-2 Ag tests show variable performance and are not necessarily superior to a standard POCT. The efficacy of any alternative testing strategies to complement nucleic acid-based assays must be carefully evaluated by independent laboratories prior to widespread implementation.
Journal Article
Visualizing and diagnosing spillover within randomized concurrent controlled trials through the application of diagnostic test assessment methods
2024
Background
Spillover of effect, whether positive or negative, from intervention to control group patients invalidates the stable unit treatment value assumption (SUTVA). SUTVA is critical to valid causal inference from randomized concurrent controlled trials (RCCTs). Spillover of infection prevention is an important population-level effect mediating herd immunity. This herd effect, being additional to any individual-level effect, is subsumed within the overall effect size (ES) estimate derived by contrast-based techniques from RCCTs. This herd effect would manifest only as increased dispersion among the control group infection incidence rates above background.
Methods and results
The objective here is to explore aspects of spillover and how these might be visualized and diagnosed. I use, for illustration, data from 190 RCCTs abstracted in 13 Cochrane reviews of various antimicrobial versus non-antimicrobial based interventions to prevent pneumonia in ICU patients. Spillover has long been postulated in this context. Arm-based techniques enable three approaches to identifying increased dispersion, not available from contrast-based techniques, which enable the diagnosis of spillover within antimicrobial versus non-antimicrobial based infection prevention RCCTs. These three approaches are benchmarking the pneumonia incidence rates against a clinically relevant range, comparing the dispersion in pneumonia incidence among the control versus the intervention groups, and visualizing the incidence dispersion within summary receiver operating characteristic (SROC) plots. By these criteria, there are harmful spillover effects on concurrent control group patients.
Conclusions
Arm-based versus contrast-based techniques lead to contrary inferences from the aggregated RCCTs of antimicrobial based interventions despite similar summary ES estimates. Moreover, the inferred relationship between underlying control group risk and ES is ‘flipped’.
Journal Article
Review of Rapid Diagnostic Tests Used by Antimicrobial Stewardship Programs
by Bauer, Karri A.; Forrest, Graeme N.; Goff, Debra A.
in Anti-Infective Agents - therapeutic use; Antibiotics; Antimicrobial agents
2014
Rapid microbiologic tests provide opportunities for antimicrobial stewardship programs to improve antimicrobial use and clinical and economic outcomes. Standard techniques for identification of organisms require at least 48–72 hours for final results, compared with rapid diagnostic tests that provide final organism identification within hours of growth. Rapid microbiologic tests are considered "game changers" and represent a significant advancement in the management of infectious diseases. This review focuses on currently available rapid diagnostic tests and the impact of rapid testing, in combination with antimicrobial stewardship, on patient outcomes.
Journal Article
A guide to aid the selection of diagnostic tests
by Page, Anne-Laure; Kosack, Cara S; Klatser, Paul R
in Accuracy; Clinical medicine; Clinical outcomes
2017
In recent years, a wide range of diagnostic tests has become available for use in resource-constrained settings. Accordingly, a huge number of guidelines, performance evaluations and implementation reports have been produced. However, this wealth of information is unstructured and of uneven quality, which has made it difficult for end-users, such as clinics, laboratories and health ministries, to determine which test would be best for improving clinical care and patient outcomes in a specific context. This paper outlines a six-step guide to the selection and implementation of in vitro diagnostic tests based on Médecins Sans Frontières' practical experience: (i) define the test's purpose; (ii) review the market; (iii) ascertain regulatory approval; (iv) determine the test's diagnostic accuracy under ideal conditions; (v) determine the test's diagnostic accuracy in clinical practice; and (vi) monitor the test's performance in routine use. Gaps in the information needed to complete these six steps and gaps in regulatory systems are highlighted. Finally, ways of improving the quality of diagnostic tests are suggested, such as establishing a model list of essential diagnostics, establishing a repository of information on the design of diagnostic studies and improving quality control and postmarketing surveillance.
Journal Article
Macimorelin as a Diagnostic Test for Adult GH Deficiency
by Bolanowski, Marek; Yuen, Kevin C J; Strasburger, Christian J
in Agreements; Comparative analysis; Diagnosis
2018
Purpose
The diagnosis of adult GH deficiency (AGHD) is challenging and often requires confirmation with a GH stimulation test (GHST). The insulin tolerance test (ITT) is considered the reference standard GHST but is labor intensive, can cause severe hypoglycemia, and is contraindicated for certain patients. Macimorelin, an orally active GH secretagogue, could be used to diagnose AGHD by measuring stimulated GH levels after an oral dose.
Materials and Methods
The present multicenter, open-label, randomized, two-way crossover trial was designed to validate the efficacy and safety of single-dose oral macimorelin for AGHD diagnosis compared with the ITT. Subjects with high (n = 38), intermediate (n = 37), and low (n = 39) likelihood for AGHD and healthy, matched controls (n = 25) were included in the efficacy analysis.
Results
After the first test, 99% of macimorelin tests and 82% of ITTs were evaluable. Using GH cutoff levels of 2.8 ng/mL for macimorelin and 5.1 ng/mL for ITTs, the negative agreement was 95.38% (95% CI, 87% to 99%), the positive agreement was 74.32% (95% CI, 63% to 84%), sensitivity was 87%, and specificity was 96%. On retesting, the reproducibility was 97% for macimorelin (n = 33). In post hoc analyses, a GH cutoff of 5.1 ng/mL for both tests resulted in 94% (95% CI, 85% to 98%) negative agreement, 82% (95% CI, 72% to 90%) positive agreement, 92% sensitivity, and 96% specificity. No serious adverse events were reported for macimorelin.
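The positive and negative agreement figures reported above are computed from the paired 2 × 2 cross-tabulation of the index test against the reference test. A minimal sketch in Python (the paired results below are illustrative, not trial data):

```python
def percent_agreement(pairs):
    """Positive/negative percent agreement of an index test with a reference.

    pairs: list of (index_positive, reference_positive) booleans.
    PPA = fraction of reference-positives the index test also calls positive;
    NPA = fraction of reference-negatives the index test also calls negative.
    """
    both_pos = sum(1 for idx, ref in pairs if idx and ref)
    both_neg = sum(1 for idx, ref in pairs if not idx and not ref)
    ref_pos = sum(1 for idx, ref in pairs if ref)
    ref_neg = len(pairs) - ref_pos
    return both_pos / ref_pos, both_neg / ref_neg

# Illustrative: 10 paired test results (index, reference)
results = ([(True, True)] * 6 + [(False, True)] * 2
           + [(False, False)] * 1 + [(True, False)] * 1)
ppa, npa = percent_agreement(results)
```

Agreement statistics are used here rather than sensitivity/specificity because the comparator (the ITT) is itself an imperfect reference rather than a gold standard.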
Conclusions
Oral macimorelin is a simple, well-tolerated, reproducible, and safe diagnostic test for AGHD with accuracy comparable to that of the ITT. A GH cutoff of 5.1 ng/mL for the macimorelin test provides an excellent balance between sensitivity and specificity.
The present multicenter, open-label, randomized, two-way crossover trial of macimorelin vs the ITT showed that macimorelin is a simple, well-tolerated, reproducible, and safe diagnostic test for AGHD.
Journal Article
Eating Disorder Screening: a Systematic Review and Meta-analysis of Diagnostic Test Characteristics of the SCOFF
2020
Background
Eating disorders affect upwards of 30 million people worldwide and often go undertreated and underdiagnosed. The purpose of this systematic review and meta-analysis was to evaluate the diagnostic accuracy of the Sick, Control, One, Fat and Food (SCOFF) questionnaire for DSM-5 eating disorders in the general population.
Method
The Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines were followed. A PubMed search was conducted among peer-reviewed articles. Information regarding validation of the SCOFF was required for inclusion. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool.
Results
The final analysis included 25 studies. The validity of the SCOFF was high across samples, with a pooled sensitivity of 0.86 (95% CI, 0.78–0.91) and specificity of 0.83 (95% CI, 0.77–0.88). Subgroup analyses were conducted to examine the impact of methodology, study quality, and clinical characteristics on diagnostic accuracy. Studies with the highest sensitivity tended to be case-control studies of young women with anorexia nervosa (AN) and bulimia nervosa (BN). Studies that included more men, included those diagnosed with binge eating disorder, and recruited from large community samples tended to have lower sensitivity. Few studies reported on BMI and race/ethnicity; thus, subgroups for these factors could not be examined. No studies used reference standards that assessed all DSM-5 eating disorders.
Conclusion
This meta-analysis of 25 validation studies demonstrates that the SCOFF is a simple and useful screening tool for young women at risk for AN and BN. However, there is not enough evidence to support using the SCOFF to screen for the range of DSM-5 eating disorders in primary care and community-based settings. Further examination of the validity of the SCOFF, or development of a new screening tool or multiple tools to screen for the range of DSM-5 eating disorders in heterogeneous populations, is warranted.
Trial Registration
This study is registered online with PROSPERO (CRD42018089906).
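Pooled sensitivity and specificity estimates like those above combine per-study proportions weighted by their precision. A minimal fixed-effect sketch on the logit scale in Python (purely illustrative; published meta-analyses of diagnostic accuracy, including this one, typically use random-effects or bivariate models instead):

```python
import math

def pool_logit(props):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    props: list of (events, total) per study, with 0 < events < total.
    Returns the pooled proportion back-transformed from the logit scale.
    """
    num = den = 0.0
    for k, n in props:
        p = k / n
        logit = math.log(p / (1 - p))
        weight = n * p * (1 - p)  # inverse of the logit's approximate variance
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))

# Illustrative: two studies with the same underlying sensitivity
pooled = pool_logit([(8, 10), (80, 100)])
```

Working on the logit scale keeps the pooled estimate inside (0, 1); a random-effects version would additionally estimate between-study variance to widen the interval when studies disagree.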
Journal Article
Variation in sensitivity and specificity of diverse diagnostic tests across health-care settings: a meta-epidemiological study
by Vijfschagt, Natasja D.; van den Bruel, Ann; Leeflang, Mariska M.G.
in Accuracy; Bias; Biomarkers
2025
Diagnostic test accuracy (DTA) may vary among health-care settings, which among other reasons may be due to referral from primary to secondary care. The true magnitude and direction of any difference is not certain. We analyzed the results of meta-analyses of DTA to compare sensitivity and specificity between patients in nonreferred and referred care settings.
We systematically searched EBSCOhost MEDLINE for systematic reviews that included at least ten original studies of the same diagnostic test, with at least three studies each performed in nonreferred and referred care. Random-effects models, with setting as a binary covariate, were used to calculate pooled sensitivity and specificity estimates per test. Sensitivity analyses were conducted limiting the analyses to studies from countries with gatekeeping systems only.
In total, nine systematic reviews evaluating thirteen diagnostic tests were included. For signs and symptoms (seven tests), the differences in sensitivity and specificity ranged from +0.03 to +0.30 and from −0.12 to +0.03, respectively; for biomarkers (four tests) differences in sensitivity ranged from −0.11 to +0.21 and specificity from −0.01 to −0.19. Differences in sensitivity and specificity for one questionnaire test were +0.1 and −0.07 respectively and for one imaging test were −0.22 and −0.07. Sensitivity analyses limited to countries with gatekeeping health care systems produced similar results.
Sensitivity and specificity vary in both direction and magnitude between nonreferred and referred settings, depending on the test and target condition, with no universal patterns governing performance differences.
Doctors use diagnostic tests to help assess the likelihood that a patient has a certain condition. However, the accuracy of these tests may vary depending on where they are used, such as in primary care (where patients first seek help) or in specialist care (after being referred by a doctor). We wanted to find out how much test accuracy changes between these settings. To do this, we analyzed previous studies that reviewed the accuracy of different diagnostic tests. We compared how well these tests worked in patients who had not yet been referred to a specialist versus those who had. Our analysis included results from thirteen different diagnostic tests, covering symptoms, biomarkers (such as blood tests), a questionnaire, and an imaging test. We found that test accuracy varied depending on the type of test and the condition being diagnosed. Some tests had higher sensitivity (correctly identifying patients with the disease) or specificity (correctly identifying healthy individuals) in primary care, while in specialist care the same test could perform better, worse, or similarly. There was no clear pattern that applied to all tests. This suggests that researchers should consider how test accuracy may differ across health-care settings when conducting and interpreting diagnostic test accuracy studies.
• Sensitivity and specificity vary both in direction and magnitude between settings.
• Differences do not follow a specific pattern; they vary across tests and conditions.
• Differences in sensitivity were larger than those in specificity.
• Consider the setting in diagnostic accuracy interpretation and research design.
Journal Article
A reason to celebrate in a time of uncertainty. Response to: Linblade et al. “Assessing the accuracy of the recording and reporting of malaria rapid diagnostic test results in four African countries—methods and key results”
2025
Linblade and colleagues should be commended for their recent publication “Assessing the accuracy of the recording and reporting of malaria rapid diagnostic test results in four African countries: methods and key results.” The authors missed an opportunity to assess those findings in the context of the past and recent history of malaria diagnosis and treatment. If viewed from such an historical perspective, there is reason to celebrate these findings and what they demonstrate about the success of country and global efforts to scale up diagnostic testing for malaria over the last 15 years.
Journal Article