Catalogue Search | MBRL
107 result(s) for "Saini, Sameer D."
Machine learning models to predict disease progression among veterans with hepatitis C virus
2019
Machine learning (ML) algorithms provide effective ways to build prediction models using longitudinal information, given their capacity to incorporate numerous predictor variables without compromising the accuracy of the risk prediction. Clinical risk prediction in chronic hepatitis C virus (CHC) infection can be challenging due to the non-linear nature of disease progression. We developed and compared two ML algorithms to predict cirrhosis development in a large CHC-infected cohort using longitudinal data.
We used national Veterans Health Administration (VHA) data to identify CHC patients in care between 2000 and 2016. The primary outcome was cirrhosis development, ascertained by two consecutive aspartate aminotransferase (AST)-to-platelet ratio indexes (APRIs) > 2 after time zero, given the infrequency of liver biopsy in clinical practice and that APRI is a validated non-invasive biomarker of fibrosis in CHC. We excluded those with an initial APRI > 2 or a pre-existing diagnosis of cirrhosis, hepatocellular carcinoma, or hepatic decompensation. Enrollment was defined as the date of the first APRI. Time zero was defined as 2 years after enrollment. Cross-sectional (CS) models used predictors at or closest before time zero as a comparison. Longitudinal models used CS predictors plus longitudinal summary variables (maximum, minimum, maximum of slope, minimum of slope, and total variation) between enrollment and time zero. Covariates included demographics, labs, and body mass index. Model performance was evaluated using concordance and area under the receiver operating characteristic curve (AuROC).

A total of 72,683 individuals with CHC were analyzed; the cohort had a mean age of 52.8 years and was 96.8% male and 53% white. There were 11,616 individuals (16%) who met the primary outcome over a mean follow-up of 7 years. We found superior predictive performance for the longitudinal Cox model compared to the CS Cox model (concordance 0.764 vs 0.746), and for the longitudinal boosted-survival-tree model compared to the linear Cox model (concordance 0.774 vs 0.764). The accuracy of the longitudinal models at 1, 3, and 5 years after time zero was also superior to that of the CS model, based on AuROC.
Boosted-survival-tree based models using longitudinal information are statistically superior to cross-sectional or linear models for predicting development of cirrhosis in CHC, though all four models were highly accurate. Similar statistical methods could be applied to predict outcomes in other non-linear chronic disease states.
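The longitudinal summary variables described in this abstract (maximum, minimum, maximum and minimum of slope, and total variation of a lab series between enrollment and time zero) are straightforward to compute. A minimal Python sketch, with function names of my own choosing; the APRI helper uses the standard published formula, (AST / upper limit of normal) × 100 / platelet count (10⁹/L):

```python
def apri(ast, ast_uln, platelets):
    """AST-to-platelet ratio index, per the standard formula:
    (AST / upper limit of normal) * 100 / platelet count (10^9/L)."""
    return (ast / ast_uln) * 100 / platelets

def longitudinal_summaries(times, values):
    """Summary features of one lab series between enrollment and time zero:
    max, min, max/min of pairwise slopes, and total variation."""
    slopes = [(v2 - v1) / (t2 - t1)
              for (t1, v1), (t2, v2) in zip(zip(times, values),
                                            zip(times[1:], values[1:]))]
    return {
        "max": max(values),
        "min": min(values),
        "max_slope": max(slopes),
        "min_slope": min(slopes),
        "total_variation": sum(abs(v2 - v1)
                               for v1, v2 in zip(values, values[1:])),
    }
```

These per-patient summaries would then sit alongside the cross-sectional predictors as model inputs; the actual study's feature pipeline is not published here, so this is only an illustration of the stated definitions.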
Journal Article
Effect of Flexible Sigmoidoscopy-Based Screening on Incidence and Mortality of Colorectal Cancer: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
by Schoenfeld, Philip S.; Deshpande, Amar; Elmunzer, B. Joseph
in Clinical trials; Colorectal cancer; Colorectal Neoplasms - diagnosis
2012
Randomized controlled trials (RCTs) have yielded varying estimates of the benefit of flexible sigmoidoscopy (FS) screening for colorectal cancer (CRC). Our objective was to more precisely estimate the effect of FS-based screening on the incidence and mortality of CRC by performing a meta-analysis of published RCTs.
Medline and Embase databases were searched for eligible articles published between 1966 and 28 May 2012. After screening 3,319 citations and 29 potentially relevant articles, two reviewers identified five RCTs evaluating the effect of FS screening on the incidence and mortality of CRC. The reviewers independently extracted relevant data; discrepancies were resolved by consensus. The quality of included studies was assessed using criteria set out by the Evidence-Based Gastroenterology Steering Group. Random effects meta-analysis was performed. The five RCTs meeting eligibility criteria were determined to be of high methodologic quality and enrolled 416,159 total subjects. Four European studies compared FS to no screening, and one study from the United States compared FS to usual care. By intention-to-treat analysis, FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (relative risk [RR] 0.82, 95% CI 0.73-0.91, p<0.001, number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59-0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65-0.80, p<0.001, NNS = 850). The efficacy estimate, the amount of benefit for those who actually adhered to the recommended treatment, suggested that FS screening reduced CRC incidence by 32% (p<0.001) and CRC-related mortality by 50% (p<0.001). Limitations of this meta-analysis include heterogeneity in the design of the included trials, the absence of studies from Africa, Asia, or South America, and the lack of studies comparing FS with colonoscopy or stool-based testing.
This meta-analysis of randomized controlled trials demonstrates that FS-based screening significantly reduces the incidence and mortality of colorectal cancer in average-risk patients.
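The relationship between relative risk and number needed to screen used in this abstract is simple arithmetic: the absolute risk reduction is the control-group event rate times (1 − RR), and NNS is its reciprocal. A sketch of that calculation; the function name and the example event rate below are illustrative, not taken from the trial data:

```python
def number_needed_to_screen(control_event_rate, relative_risk):
    """NNS = 1 / ARR, where ARR = control event rate * (1 - RR)."""
    absolute_risk_reduction = control_event_rate * (1.0 - relative_risk)
    return 1.0 / absolute_risk_reduction
```

For instance, with a hypothetical control event rate of 2% and the reported RR of 0.82, this gives an NNS of roughly 278.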
Journal Article
Patients' Perceptions of Proton Pump Inhibitor Risks and Attempts at Discontinuation: A National Survey
by De Vries, Raymond; Kurlander, Jacob E.; Richardson, Caroline R.
in Adult; Attitude to Health; Deprescriptions
2019
Little is known about how reports on the adverse effects of proton pump inhibitors (PPIs) impact patients' perceptions of these drugs and medication use. We sought to determine patients' level of concern about PPI adverse effects and its association with attempts to discontinue these drugs.
This study was an online survey of US adults who use PPIs for gastroesophageal reflux disease. Topics included awareness of and concern about PPI adverse effects, prior discussion with providers, and attempts to stop PPIs because of concern about adverse effects. For the primary analysis, we used logistic regression to identify associations between having attempted to stop PPIs and concern about PPI-related adverse effects, a provider's recommendation to stop, risk of upper gastrointestinal bleeding (UGIB), age, and gender.
Among 755 patient participants, mean age was 49 years (s.d. 16), 71% were women, and 24% were at high risk of UGIB. Twenty percent of patients were able to write in ≥1 reported adverse effect, and 46% endorsed awareness of ≥1 adverse effect when presented with a list, most commonly chronic kidney disease (17%). Thirty-three percent of patients were slightly concerned, 32% somewhat concerned, and 14% extremely concerned about adverse effects. Twenty-four percent of patients had discussed PPI risks and benefits with a provider, and 9% had been recommended to stop. Thirty-nine percent had attempted to stop their PPI, most (83%) without a provider recommendation. Factors associated with an attempt at stopping PPI included: (i) provider recommendation to stop (odds ratio [OR] 3.26 [1.82-5.83]); (ii) concern about adverse effects (OR 5.13 [2.77-9.51] for slightly, 12.0 [6.51-22.2] for somewhat, and 19.4 [9.75-38.7] for extremely concerned); and (iii) female gender (OR 1.64 [1.12-2.39]). Patients at high risk of UGIB were as likely to have attempted to stop as others (OR 0.98 [0.66-1.44]).
Concern about PPIs is common and strongly associated with attempts at discontinuation, even without a provider's recommendation. Notably, individuals at high risk of UGIB, who benefit from PPIs, were equally likely to have tried stopping PPIs as others. Providers should proactively discuss the risks and benefits of PPIs with their patients, who may otherwise make unwise decisions about PPI management on their own.
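Odds ratios with confidence intervals, as reported in this abstract, are conventionally obtained by exponentiating a logistic regression coefficient and the bounds of its Wald interval. A minimal sketch of that arithmetic (the function and inputs are illustrative, not the study's analysis code):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic regression coefficient and its
    Wald 95% interval bounds to get an odds ratio with CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

Note that because exp() is monotone, the CI is asymmetric around the odds ratio even though the interval on the log-odds scale is symmetric around the coefficient.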
Journal Article
How Efficacious Are Patient Education Interventions to Improve Bowel Preparation for Colonoscopy? A Systematic Review
by Schoenfeld, Philip S.; Waljee, Akbar K.; Menees, Stacy B.
in Analysis; Colon; Colonic Diseases - diagnosis
2016
Bowel preparation is inadequate in a large proportion of colonoscopies, leading to multiple clinical and economic harms. While most patients receive some form of education before colonoscopy, there is no consensus on the best approach.
This systematic review aimed to evaluate the efficacy of patient education interventions to improve bowel preparation.
We searched the Cochrane Database, CINAHL, EMBASE, Ovid, and Web of Science. Inclusion criteria were: (1) a patient education intervention; (2) a primary aim of improving bowel preparation; (3) a validated bowel preparation scale; (4) a prospective design; (5) a concurrent control group; and, (6) adult participants. Study validity was assessed using a modified Downs and Black scale.
1,080 abstracts were screened. Seven full text studies met inclusion criteria, including 2,660 patients. These studies evaluated multiple delivery platforms, including paper-based interventions (three studies), videos (two studies), re-education telephone calls the day before colonoscopy (one study), and in-person education by physicians (one study). Bowel preparation significantly improved with the intervention in all but one study. All but one study were done in a single center. Validity scores ranged from 13 to 24 (maximum 27). Four of five abstracts and research letters that met inclusion criteria also showed improvements in bowel preparation. Statistical and clinical heterogeneity precluded meta-analysis.
Compared to usual care, patient education interventions appear efficacious in improving the quality of bowel preparation. However, because of the small scale of the studies and individualized nature of the interventions, results of these studies may not be generalizable to other settings. Healthcare practices should consider systematically evaluating their current bowel preparation education methods before undertaking new interventions.
Journal Article
A cooling off period: decline in the use of hot biopsy forceps technique in colonoscopy in the U.S. Medicare population 2000–2019
by Read, Andrew J.; Waljee, Akbar K.; Kurlander, Jacob E.
in Biopsy; Biopsy - instrumentation; Biopsy - methods
2025
Background
The use of hot biopsy forceps (with electrocautery) is no longer routinely recommended given increased complications compared to cold biopsy forceps (without electrocautery). It is unknown how often the technique is currently used in the United States (U.S.) or how its usage has changed over time.
Aim
To characterize the use of hot biopsy forceps by U.S. Medicare providers over time, identify provider characteristics of those who more commonly perform this technique, and determine if there are regional differences in use of this technique within the U.S.
Methods
We performed a retrospective cross-sectional study using U.S. Medicare summary data from 2000 to 2019 to analyze the frequency of cold and hot biopsies. We used detailed provider and state summary files to characterize providers’ demographics, including geographic region, to identify regional variation in use of these techniques, and identify factors associated with use of hot biopsy forceps from 2012 to 2019.
Results
The hot biopsy forceps technique peaked in 2003 (412,165/year) and declined to 108,232/year in 2019, while the cold biopsy forceps technique increased from 482,862/year in 2000 to 1,533,558/year in 2019. Use of hot biopsy forceps was more common among non-gastroenterologists and in rural practice settings. In addition, there was up to a 50-fold difference in utilization of these techniques between states (on a population-normalized basis), with the highest rate of use in the southeastern U.S.
Conclusion
Variation in the use of hot biopsy forceps by region and provider suggests a potential area for quality improvement given the comparative advantages of the cold biopsy forceps technique. De-implementation of an existing endoscopic practice may require different approaches than implementation of a new practice.
Journal Article
Prediction of Gastrointestinal Tract Cancers Using Longitudinal Electronic Health Record Data
2023
Background
Luminal gastrointestinal (GI) tract cancers, including esophageal, gastric, small bowel, colorectal, and anal cancers, are often diagnosed at late stages. These tumors can cause gradual GI bleeding, which may be unrecognized but detectable by subtle laboratory changes. Our aim was to develop models to predict luminal GI tract cancers from laboratory studies and patient characteristics using logistic regression and random forest machine learning methods.
Methods
The study was a single-center, retrospective cohort at an academic medical center of patients with at least two complete blood counts (CBCs), with enrollment between 2004 and 2013 and follow-up until 2018. The primary outcome was the diagnosis of GI tract cancer. Prediction models were developed using multivariable single-timepoint logistic regression, longitudinal logistic regression, and random forest machine learning.
Results
The cohort included 148,158 individuals, with 1025 GI tract cancers. For 3-year prediction of GI tract cancers, the longitudinal random forest model performed best, with an area under the receiver operating characteristic curve (AuROC) of 0.750 (95% CI 0.729–0.771) and a Brier score of 0.116, compared to the longitudinal logistic regression model, with an AuROC of 0.735 (95% CI 0.713–0.757) and a Brier score of 0.205.
Conclusions
Prediction models incorporating longitudinal features of the CBC outperformed the single-timepoint logistic regression models at 3 years, with a trend toward improved accuracy of prediction using a random forest machine learning model compared to a longitudinal logistic regression model.
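The Brier score reported alongside AuROC in this abstract is the mean squared difference between predicted probabilities and observed binary outcomes (lower is better). A minimal sketch, with illustrative inputs:

```python
def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and binary
    outcomes; 0 is perfect, and a constant p=0.5 guess scores 0.25."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)
```

Unlike AuROC, which measures only ranking, the Brier score also penalizes miscalibrated probabilities, which is why the two models above can have similar AuROCs but quite different Brier scores.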
Journal Article
Measures Used to Assess the Impact of Interventions to Reduce Low-Value Care: a Systematic Review
by Maratt, Jennifer K; Bhatia, R Sacha; Klamerus, Mandi L
in Antibiotics; Clinical trials; Decision making
2019
Importance
Studies of interventions to reduce low-value care are increasingly common. However, little is known about how the effects of such interventions are measured.
Objective
To characterize measures used to assess interventions to reduce low-value care.
Evidence Review
We searched PubMed and Web of Science to identify studies published between 2010 and 2016 that examined the effects of interventions to reduce low-value care. We also searched ClinicalTrials.gov to identify ongoing studies. We extracted data on characteristics of studies, interventions, and measures. We then developed a framework to classify measures into the following categories: utilization (e.g., number of tests ordered), outcome (e.g., mortality), appropriateness (e.g., overuse of antibiotics), patient-reported (e.g., satisfaction), provider-reported (e.g., satisfaction), patient-provider interaction (e.g., informed decision-making elements), value, and cost. We also determined whether each measure was designed to assess unintended consequences.
Findings
A total of 1805 studies were identified, of which 101 published and 16 ongoing studies were included. Of published studies (N = 101), 68% included at least one measure of utilization, 41% of an outcome, 52% of appropriateness, 36% of cost, 8% patient-reported, and 3% provider-reported. Funded studies were more likely to use patient-reported measures (17% vs 0%). Of ongoing studies (registered trials) (N = 16), 69% included at least one measure of utilization, 75% of an outcome, 50% of appropriateness, 19% of cost, 50% patient-reported, 13% provider-reported, and 6% patient-provider interaction. Of published studies, 34% included at least one measure of an unintended consequence, as compared to 63% of ongoing studies.
Conclusions and Relevance
Most published studies focused on reductions in utilization rather than on clinically meaningful measures (e.g., improvements in appropriateness, patient-reported outcomes) or unintended consequences. Investigators should systematically incorporate more clinically meaningful measures into their study designs, and sponsors should develop standardized guidance for the evaluation of interventions to reduce low-value care.
Journal Article
Primary care physicians are under-testing for celiac disease in patients with iron deficiency anemia: Results of a national survey
2017
Iron deficiency anemia (IDA) is a common extra-intestinal manifestation of celiac disease (CD). Little is known about the frequency with which primary care physicians (PCPs) test for CD in patients with IDA. We aimed to describe how PCPs approach testing for CD in asymptomatic patients with IDA.
We electronically distributed a survey to PCPs who are members of the American College of Physicians. Respondents were asked whether they would test for CD (serologic testing, refer for esophagogastroduodenoscopy [EGD], or refer to GI) in hypothetical patients with new IDA, including: (1) a young Caucasian man, (2) a premenopausal Caucasian woman, (3) an elderly Caucasian man, and (4) a young African American man. These scenarios were chosen to assess for differences in testing for CD based on age, gender, and race. Multivariable logistic regression was used to identify independent predictors of testing.
Testing for CD varied significantly according to patient characteristics, with young Caucasian men being the most frequently tested (61% of respondents reported they would perform serologic testing in this subgroup; p<0.001). Contrary to guideline recommendations, 80% of respondents reported they would definitely or probably start a patient with positive serologies for CD on a gluten-free diet prior to confirmatory upper endoscopy.
PCPs are under-testing for CD in patients with IDA, regardless of age, gender, race, or post-menopausal status. The majority of PCPs surveyed reported they do not strictly adhere to established guidelines regarding a confirmatory duodenal biopsy in a patient with positive serology for CD.
Journal Article
Age Disparities in the Use of Steroid-sparing Therapy for Inflammatory Bowel Disease
by Wiitala, Wyndy L.; Govani, Shail M.; Sussman, Jeremy B.
in Adrenal Cortex Hormones - adverse effects; Adrenal Cortex Hormones - therapeutic use; Adult
2016
Corticosteroids are effective rescue therapies for patients with inflammatory bowel disease (IBD) but have significant side effects, which may be amplified in the growing population of elderly patients with IBD. We aimed to compare the use of steroids and steroid-sparing therapies (immunomodulators and biologics) and rates of complications among elderly (≥65) and younger patients in a national cohort of veterans with IBD.
Methods
We used national Veterans Health Administration data to conduct a retrospective study of veterans with IBD between 2002 and 2010. Medications and the incidence of complications were obtained from the Veterans Health Administration Decision Support Systems. Multivariate logistic regression accounting for facility-level clustering was used to identify predictors of use of steroid-sparing medications.
Results
We identified 30,456 veterans with IBD. Of these, 94% were men, 40% were older than 65, and 32% were given steroids. Elderly veterans were less likely to receive steroids (23.8% versus 38.3%, P < 0.001) and less likely to be prescribed steroid-sparing medications (25.5% versus 46.9%, P < 0.001). In multivariate analysis controlling for sex, age <65 (odds ratio, 2.19; 95% CI, 1.54–3.11) and gastroenterology care (odds ratio, 8.42; 95% CI, 6.18–11.47) were associated with initiation of steroid-sparing medications. After starting steroids, fracture rates increased in the elderly patients with IBD, whereas increases in venous thromboembolism and infections after starting steroids affected both age groups.
Conclusions
Elderly veterans are less likely to receive steroids and steroid-sparing medications than younger veterans; elderly patients exposed to steroids were more likely to have fractures than the younger population.
Journal Article
Diffusion of an innovation: growth in video capsule endoscopy in the U.S. Medicare population from 2003 to 2019
2022
Background
Video capsule endoscopy (VCE), approved by the U.S. Food and Drug Administration (FDA) in 2001, represented a disruptive technology that transformed evaluation of the small intestine. Adoption of this technology over time and current use within the U.S. clinical population has not been well described.
Methods
We aimed to assess the growth of capsule endoscopy within the U.S. Medicare provider population (in absolute terms and on a population-adjusted basis), characterize the providers performing VCE, and describe potential regional differences in use. Medicare summary data from 2003 to 2019 were used to retrospectively analyze capsule endoscopy use in a multiple cross-sectional design. In addition, detailed provider summary files from 2012 to 2018 were used to characterize provider demographics.
Results
VCE use grew rapidly from 2003 to 2008, followed by a plateau from 2008 to 2019. There was significant variation in use of VCE between states, with up to a 10-fold difference (14.6 to 156.1 per 100,000 enrollees in 2018). During this period, VCE use on a population-adjusted basis declined, reflecting saturation of growth.
Conclusions
Growth of VCE use over time follows an S-shaped diffusion of innovation curve demonstrating a successful diffusion of innovation within gastroenterology. The lack of additional growth since 2008 suggests that current levels of use are well matched to overall population need within the constraints of reimbursement. Future studies should examine whether this lack of growth has implications for access and healthcare inequities.
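The state-level comparisons in this abstract rest on two simple calculations: a population-normalized rate per 100,000 enrollees, and the fold variation between the highest and lowest state rates. A sketch (function names are illustrative):

```python
def rate_per_100k(count, enrollees):
    """Population-normalized utilization rate per 100,000 enrollees."""
    return count / enrollees * 100_000

def fold_variation(rates):
    """Ratio of the highest to the lowest state rate."""
    return max(rates) / min(rates)
```

With the 2018 extremes quoted above (14.6 and 156.1 per 100,000 enrollees), fold_variation gives roughly 10.7, consistent with the reported "up to 10-fold" variation between states.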
Journal Article