Catalogue Search | MBRL
50 result(s) for "Schulz, Wade L"
Enhancing the prediction of acute kidney injury risk after percutaneous coronary intervention using machine learning techniques: A retrospective cohort study
by Huang, Chenxi; Li, Shu-Xia; Wang, Yongfei
in Acute kidney failure; Acute Kidney Injury - diagnosis; Acute Kidney Injury - etiology
2018
The current acute kidney injury (AKI) risk prediction model for patients undergoing percutaneous coronary intervention (PCI) from the American College of Cardiology (ACC) National Cardiovascular Data Registry (NCDR) employed regression techniques. This study aimed to evaluate whether models using machine learning techniques could significantly improve AKI risk prediction after PCI.
We used the same cohort and candidate variables used to develop the current NCDR CathPCI Registry AKI model, including 947,091 patients who underwent PCI procedures between June 1, 2009, and June 30, 2011. The mean age of these patients was 64.8 years, 32.8% were women, and there were 69,826 (7.4%) AKI events. We replicated the current AKI model as the baseline model and compared it with a series of new models. Temporal validation was performed using data from 970,869 patients undergoing PCIs between July 1, 2016, and March 31, 2017, with a mean age of 65.7 years; 31.9% were women, and 72,954 (7.5%) had AKI events.

Each model was derived by implementing one of two strategies for preprocessing candidate variables (preselecting and transforming candidate variables or using all candidate variables in their original forms), one of three variable-selection methods (stepwise backward selection, lasso regularization, or permutation-based selection), and one of two methods to model the relationship between variables and outcome (logistic regression or gradient descent boosting). The cohort was divided into training (70%) and test (30%) sets using 100 different random splits, and the performance of the models was evaluated internally in the test sets.

The best model, according to the internal evaluation, was derived by using all available candidate variables in their original form, permutation-based variable selection, and gradient descent boosting. Compared with the baseline model, which uses 11 variables, the best model used 13 variables and achieved a significantly better area under the receiver operating characteristic curve (AUC) of 0.752 (95% confidence interval [CI] 0.749-0.754) versus 0.711 (95% CI 0.708-0.714), a significantly better Brier score of 0.0617 (95% CI 0.0615-0.0618) versus 0.0636 (95% CI 0.0634-0.0638), and a better calibration slope of observed versus predicted rate of 1.008 (95% CI 0.988-1.028) versus 1.036 (95% CI 1.015-1.056). The best model also had a significantly wider predictive range (25.3% versus 21.6%, p < 0.001) and was more accurate in stratifying AKI risk for patients. Evaluated on a more contemporary CathPCI cohort (July 1, 2015-March 31, 2017), the best model consistently achieved significantly better performance than the baseline model in AUC (0.785 versus 0.753), Brier score (0.0610 versus 0.0627), calibration slope (1.003 versus 1.062), and predictive range (29.4% versus 26.2%). The current study does not address implementation for risk calculation at the point of care, and potential challenges include the availability and accessibility of the predictors.
Machine learning techniques and data-driven approaches resulted in improved prediction of AKI risk after PCI. The results support the potential of these techniques for improving risk prediction models and identification of patients who may benefit from risk-mitigation strategies.
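As a rough illustration of the winning strategy this abstract describes (permutation-based variable selection followed by gradient boosting, evaluated with AUC and Brier score over repeated 70/30 splits), here is a minimal scikit-learn sketch. The function name, the NumPy-array data layout, and the use of scikit-learn's gradient boosting and permutation importance are assumptions standing in for the authors' actual implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

def evaluate_strategy(X, y, n_splits=100, n_keep=13):
    """Repeated 70/30 evaluation of permutation-selected gradient boosting."""
    aucs, briers = [], []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.30, stratify=y, random_state=seed)
        # Fit on all candidate variables, then keep the n_keep variables with
        # the highest permutation importance (a stand-in for the paper's
        # permutation-based variable selection).
        gbm = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
        imp = permutation_importance(gbm, X_tr, y_tr, n_repeats=5,
                                     scoring="roc_auc", random_state=seed)
        keep = np.argsort(imp.importances_mean)[::-1][:n_keep]
        # Refit on the selected variables only and score the held-out 30%.
        gbm = GradientBoostingClassifier(random_state=seed).fit(X_tr[:, keep], y_tr)
        p = gbm.predict_proba(X_te[:, keep])[:, 1]
        aucs.append(roc_auc_score(y_te, p))
        briers.append(brier_score_loss(y_te, p))
    return float(np.mean(aucs)), float(np.mean(briers))
```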
Journal Article
Automated multilabel diagnosis on electrocardiographic images and signals
by Khera, Rohan; Schulz, Wade L.; Brandt, Cynthia A.
in 692/4019; 692/700/139/1449; Artificial Intelligence
2022
The application of artificial intelligence (AI) for automated diagnosis of electrocardiograms (ECGs) can improve care in remote settings but is limited by the reliance on infrequently available signal-based data. We report the development of a multilabel automated diagnosis model for electrocardiographic images, more suitable for broader use. A total of 2,228,236 12-lead ECG signals from 811 municipalities in Brazil are transformed to ECG images in varying lead conformations to train a convolutional neural network (CNN) identifying 6 physician-defined clinical labels spanning rhythm and conduction disorders, and a hidden label for gender. The image-based model performs well on a distinct test set validated by at least two cardiologists (average AUROC 0.99, AUPRC 0.86), an external validation set of 21,785 ECGs from Germany (average AUROC 0.97, AUPRC 0.73), and printed ECGs, with performance superior to signal-based models, and Grad-CAM analysis shows that it learns clinically relevant cues.
The application of artificial intelligence for automated diagnosis of electrocardiograms can improve care in remote settings but is limited by the reliance on infrequently available signal-based data. Here, the authors report the development of a multi-label automated diagnosis model for electrocardiographic images.
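For readers curious how such a model is typically set up, the following is a minimal, hypothetical PyTorch sketch of a multilabel ECG-image classifier: a CNN backbone with one sigmoid output per label, trained with binary cross-entropy. The ResNet-18 backbone, the label names, and the image size are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Six rhythm/conduction labels (assumed names, for illustration only).
LABELS = ["1dAVb", "RBBB", "LBBB", "SB", "AF", "ST"]

class ECGImageClassifier(nn.Module):
    def __init__(self, n_labels=len(LABELS)):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # any CNN backbone would do
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_labels)

    def forward(self, x):            # x: (batch, 3, H, W) rendered ECG images
        return self.backbone(x)      # one raw logit per label

model = ECGImageClassifier()
criterion = nn.BCEWithLogitsLoss()   # independent sigmoids -> multilabel output
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of images.
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 2, (8, len(LABELS))).float()  # multi-hot labels
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```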
Journal Article
Assessment of the integrity of real-time electronic health record data used in clinical research
by Coppi, Andreas; Schulz, Wade L.; Liu, Jessica
in Benchmarking; Benchmarks; Biomedical Research
2026
Near real-time electronic health record (EHR) data offers significant potential for secondary use in research, operations, and clinical care, yet challenges remain in ensuring data quality and stability. While prior studies have assessed retrospective EHR datasets, few have systematically examined the integrity of real-time data for research readiness.
We developed an automated benchmarking pipeline to evaluate the stability and completeness of real-time EHR data from the Yale New Haven Health clinical data warehouse, transformed into the OMOP common data model. Twenty-nine weekly snapshots of the EHR collected from July to November 2024 and twenty-two daily snapshots collected from April to May 2025 were analyzed. Benchmarks focused on (1) clinical actions such as patient additions, deletions, and merges; (2) changes in demographic variables (date of birth, gender, race, ethnicity); and (3) stability of discharge information (time and status). A synthetic dataset derived from MIMIC-III was used to validate the benchmarking code prior to large-scale analyses.
Benchmarking revealed frequent updates due to clinical actions and demographic corrections across consecutive snapshots. Demographic changes were most frequently related to race and ethnicity, highlighting potential workflow and data entry inconsistencies. Discharge time and status values demonstrated instability for several days post-encounter, typically reaching a stable state within 4-7 days. These findings indicate that while near real-time EHR data provide valuable insights, the timing of data stabilization is critical for accurate secondary use.
This study demonstrates the feasibility of automated benchmarking to assess the integrity of real-time EHR data and identify when such data become analysis ready. Our findings highlight key challenges for secondary use of dynamic clinical data and provide an automated framework that can be applied across health systems to support high-quality research, surveillance, and clinical trial readiness.
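As a rough sketch of the kind of benchmark described here, comparing consecutive OMOP-format snapshots for patient additions, deletions, and demographic changes, consider the following pandas fragment. The file names are hypothetical; the OMOP person-table columns used are standard, but the study's actual pipeline is not shown in this record.

```python
import pandas as pd

def compare_snapshots(prev: pd.DataFrame, curr: pd.DataFrame) -> dict:
    """Benchmark two consecutive EHR snapshots keyed by person_id."""
    prev_ids, curr_ids = set(prev["person_id"]), set(curr["person_id"])
    added = curr_ids - prev_ids            # patients newly present
    deleted = prev_ids - curr_ids          # patients removed (deletes/merges)

    # For patients present in both snapshots, count demographic fields that changed.
    both = prev.merge(curr, on="person_id", suffixes=("_prev", "_curr"))
    changes = {}
    for col in ["birth_datetime", "gender_concept_id",
                "race_concept_id", "ethnicity_concept_id"]:
        changes[col] = int((both[f"{col}_prev"] != both[f"{col}_curr"]).sum())

    return {"added": len(added), "deleted": len(deleted),
            "demographic_changes": changes}

# Usage over a series of weekly snapshot files (hypothetical paths):
# snapshots = [pd.read_parquet(f"person_week{i}.parquet") for i in range(29)]
# reports = [compare_snapshots(a, b) for a, b in zip(snapshots, snapshots[1:])]
```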
Journal Article
Evidence of leaky protection following COVID-19 vaccination and SARS-CoV-2 infection in an incarcerated population
by Thomas, Russell; Cummings, Derek A. T.; Ko, Albert I.
in 631/326/596/4130; 692/308/174; 692/699/255/2514
2023
Whether SARS-CoV-2 infection and COVID-19 vaccines confer exposure-dependent (“leaky”) protection against infection remains unknown. We examined the effect of prior infection, vaccination, and hybrid immunity on infection risk among residents of Connecticut correctional facilities during periods of predominant Omicron and Delta transmission. Residents with cell, cellblock, and no documented exposure to SARS-CoV-2-infected residents were matched by facility and date. During the Omicron period, prior infection, vaccination, and hybrid immunity reduced the infection risk of residents without a documented exposure (hazard ratio [HR]: 0.36 [0.25–0.54]; 0.57 [0.42–0.78]; 0.24 [0.15–0.39]; respectively) and with cellblock exposures (0.61 [0.49–0.75]; 0.69 [0.58–0.83]; 0.41 [0.31–0.55]; respectively) but not with cell exposures (0.89 [0.58–1.35]; 0.96 [0.64–1.46]; 0.80 [0.46–1.39]; respectively). Associations were similar during the Delta period and when analyses were restricted to tested residents. Although associations may not have been thoroughly adjusted due to dataset limitations, the findings suggest that the protection conferred by prior infection and vaccination may be leaky, highlighting the potential benefits of pairing vaccination with non-pharmaceutical interventions in crowded settings.
Measuring an individual’s level of exposure to COVID-19 is challenging, and it is therefore unclear whether high exposure may impact immunity. Here, the authors investigate this question using data from correctional facilities in Connecticut, USA, by comparing infection rates among people who shared a cell or a cellblock with an infected resident and among people with no known exposure.
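As an illustration of the hazard-ratio estimation underlying results like these, here is a toy Cox proportional hazards sketch using the lifelines library on simulated data. It is not the study's matched analysis, which additionally matched residents by facility and date; all variable names and the data-generating process are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
prior_infection = rng.integers(0, 2, n)
vaccinated = rng.integers(0, 2, n)

# Toy data: protected residents take longer, on average, to become infected.
time_to_infection = rng.exponential(20, n) * (1 + prior_infection + 0.5 * vaccinated)
infected = (time_to_infection < 14).astype(int)  # event within follow-up window
duration = np.minimum(time_to_infection, 14)     # administrative censoring at day 14

df = pd.DataFrame({"duration": duration, "infected": infected,
                   "prior_infection": prior_infection, "vaccinated": vaccinated})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="infected")
# exp(coef) is the hazard ratio (HR); HR < 1 indicates reduced infection risk.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```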
Journal Article
Association between primary or booster COVID-19 mRNA vaccination and Omicron lineage BA.1 SARS-CoV-2 infection in people with a prior SARS-CoV-2 infection: A test-negative case–control analysis
by Cummings, Derek A. T.; Ko, Albert I.; Robertson, Alexander J.
in Adult; Biology and life sciences; Case-Control Studies
2022
The benefit of primary and booster vaccination in people who experienced a prior Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection remains unclear. The objective of this study was to estimate the effectiveness of primary (two-dose series) and booster (third dose) mRNA vaccination against Omicron (lineage BA.1) infection among people with a prior documented infection.
We conducted a test-negative case-control study of reverse transcription PCRs (RT-PCRs) analyzed with the TaqPath (Thermo Fisher Scientific) assay and recorded in the Yale New Haven Health system from November 1, 2021, to April 30, 2022. Overall, 11,307 cases (positive TaqPath-analyzed RT-PCRs with S-gene target failure [SGTF]) and 130,041 controls (negative TaqPath-analyzed RT-PCRs) were included (median age: cases: 35 years, controls: 39 years). Among cases and controls, 5.9% and 8.1% had a documented prior infection (positive SARS-CoV-2 test record ≥90 days prior to the included test), respectively. We estimated the effectiveness of primary and booster vaccination against SGTF-defined Omicron (lineage BA.1) variant infection using a logistic regression adjusted for date of test, age, sex, race/ethnicity, insurance, comorbidities, social vulnerability index, municipality, and healthcare utilization. The effectiveness of primary vaccination 14 to 149 days after the second dose was 41.0% (95% confidence interval [CI]: 14.1% to 59.4%, p = 0.006) and 27.1% (95% CI: 18.7% to 34.6%, p < 0.001) for people with and without a documented prior infection, respectively. The effectiveness of booster vaccination (≥14 days after booster dose) was 47.1% (95% CI: 22.4% to 63.9%, p = 0.001) and 54.1% (95% CI: 49.2% to 58.4%, p < 0.001) in people with and without a documented prior infection, respectively. To test whether booster vaccination reduced the risk of infection beyond that of the primary series, we compared the odds of infection among boosted (≥14 days after booster dose) and booster-eligible people (≥150 days after second dose). The odds ratio (OR) comparing boosted and booster-eligible people with a documented prior infection was 0.79 (95% CI: 0.54 to 1.16, p = 0.222), whereas the OR comparing boosted and booster-eligible people without a documented prior infection was 0.54 (95% CI: 0.49 to 0.59, p < 0.001). This study's limitations include the risk of residual confounding, the use of data from a single health system, and the reliance on TaqPath-analyzed RT-PCR results.
In this study, we observed that primary vaccination provided significant but limited protection against Omicron (lineage BA.1) infection among people with and without a documented prior infection. While booster vaccination was associated with additional protection against Omicron BA.1 infection in people without a documented prior infection, it was not found to be associated with additional protection among people with a documented prior infection. These findings support primary vaccination in people regardless of documented prior infection status but suggest that infection history may impact the relative benefit of booster doses.
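To make the test-negative logic concrete, here is a hypothetical sketch of deriving vaccine effectiveness from an adjusted odds ratio with statsmodels. The file name, column names, and reduced covariate set are assumptions; the study adjusted for many more factors than shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns, one row per RT-PCR test:
#   case (1 = SGTF-positive, 0 = negative), boosted (0/1), age, sex, prior_infection
df = pd.read_csv("tests.csv")  # hypothetical input file

model = smf.logit("case ~ boosted + age + C(sex) + prior_infection", data=df).fit()
or_boosted = np.exp(model.params["boosted"])      # adjusted odds ratio
lo, hi = np.exp(model.conf_int().loc["boosted"])  # 95% CI for the OR

# Vaccine effectiveness: VE = (1 - OR) * 100; note the CI bounds swap when inverted.
print(f"VE = {(1 - or_boosted) * 100:.1f}% "
      f"(95% CI: {(1 - hi) * 100:.1f}% to {(1 - lo) * 100:.1f}%)")
```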
Journal Article
Temporal relationship of computed and structured diagnoses in electronic health record data
2021
Background
The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for biomedical research, quality assessments, and quality improvement compared to other data sources, such as administrative claims. In this study, we sought to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM).
Methods
We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. We used EHR data for encounters from January 1, 2012 through February 10, 2019 from an academic health system. Diagnoses for HTN, HLD, and DM were computed for patients with at least two observations above threshold separated by at least 30 days, where the thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 6.5%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list.
Results
We found that 39.8% of those with HTN, 21.6% with HLD, and 5.2% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 166 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR.
Conclusions
We found a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.
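The computed-diagnosis rule in the Methods (at least two above-threshold observations separated by at least 30 days) translates naturally into a small pandas check. The following sketch uses assumed column names and toy values, not the study's code.

```python
import pandas as pd

def computed_diagnosis(obs: pd.DataFrame, value_col: str, threshold: float) -> pd.Series:
    """Per patient: are there >=2 above-threshold values >=30 days apart?"""
    above = obs[obs[value_col] >= threshold]
    # If the earliest and latest qualifying observations span >=30 days, some
    # pair of qualifying observations must be >=30 days apart.
    span = above.groupby("patient_id")["obs_date"].agg(["min", "max"])
    return (span["max"] - span["min"]) >= pd.Timedelta(days=30)

# Example: hemoglobin A1c >= 6.5% implies a computed diabetes mellitus diagnosis.
labs = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "obs_date": pd.to_datetime(["2018-01-05", "2018-03-20", "2018-02-01"]),
    "hba1c": [6.8, 7.1, 6.9],
})
print(computed_diagnosis(labs, "hba1c", 6.5))  # patient 1: True, patient 2: False
```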
Journal Article
Artificial Intelligence and Mapping a New Direction in Laboratory Medicine: A Review
by Herman, Daniel S; Durant, Thomas J S; Rhoads, Daniel D
in Artificial Intelligence; Best practice; Communications systems
2021
Modern artificial intelligence (AI) and machine learning (ML) methods are now capable of completing tasks with performance characteristics comparable to those of expert human operators. As a result, many areas of healthcare are incorporating these technologies, including in vitro diagnostics and, more broadly, laboratory medicine. However, few literature reviews have addressed the landscape, likely future, and challenges of applying AI/ML in laboratory medicine.
In this review, we begin with a brief introduction to AI and its subfield of ML. The ensuing sections describe ML systems that are currently in clinical laboratory practice or are being proposed for such use in recent literature, ML systems that use laboratory data outside the clinical laboratory, challenges to the adoption of ML, and future opportunities for ML in laboratory medicine.
AI and ML have influenced, and will continue to influence, the practice and scope of laboratory medicine dramatically. This has been made possible by advancements in modern computing and the widespread digitization of health information. These technologies are being rapidly developed and described, but their implementation has so far been modest in comparison. To spur the implementation of reliable and sophisticated ML-based technologies, we need to further establish best practices and improve our information system and communication infrastructure. The participation of the clinical laboratory community is essential to ensure that laboratory data are sufficiently available and incorporated conscientiously into robust, safe, and clinically effective ML-supported clinical diagnostics.
Journal Article
HOS2 and HDA1 Encode Histone Deacetylases with Opposing Roles in Candida albicans Morphogenesis
by Zacchi, Lucia F.; Schulz, Wade L.; Davis, Dana A.
in Antifungal agents; Baking yeast; Candida albicans
2010
Epigenetic mechanisms regulate the expression of virulence traits in diverse pathogens, including protozoa and fungi. In the human fungal pathogen Candida albicans, virulence traits such as antifungal resistance, white-opaque switching, and adhesion to lung cells are regulated by histone deacetylases (HDACs). However, the role of HDACs in regulating the yeast-hyphal morphogenetic transition, a critical virulence attribute of C. albicans, remains poorly explored. In this study, we sought to determine the relevance of additional HDACs to C. albicans morphogenesis. We generated mutants in the HDACs HOS1, HOS2, RPD31, and HDA1 and determined their ability to filament in response to different environmental stimuli. We found that while HOS1 and RPD31 play little or no role in morphogenesis, the HDACs HOS2 and HDA1 have opposing roles in the regulation of hyphal formation. Our results demonstrate an important role for HDACs in the regulation of yeast-hyphal transitions in the human pathogen C. albicans.
Journal Article
Agile analytics to support rapid knowledge pipelines
by Krumholz, Harlan M.; Kvedar, Joseph C.; Schulz, Wade L.
in 692/700/228; Biomedicine; Biotechnology
2020
Windows of opportunity can open during a global health threat and transform how we solve problems and generate knowledge in medicine. In an industry that can take years to change, the COVID-19 pandemic has led to massive international data collection and research efforts that would have seemed impossible before. The resistance to adopting new technology, data sources, and analytical methods has begun to yield to the urgency of the moment. Access to information has transformed the literature, with numerous publications that leverage digital health data, such as that from the electronic health record, to generate the evidence needed to better understand SARS-CoV-2 infection, treatment, and outcomes [1,2].
Journal Article
Capacity assessment for EHR-based medical device post-market surveillance for synthetic mid-urethral slings among women with stress urinary incontinence: a NEST consortium study
by Reynolds, W Stuart; Shah, Nilay D; Winter, Robert
in Active Surveillance; Device Surveillance; Electronic health records
2025
Objectives: To evaluate the feasibility of using electronic health record (EHR) data to conduct adverse event surveillance among women who received mid-urethral slings (MUS) to treat stress urinary incontinence (SUI) in five health systems.
Design: Retrospective observational study using EHR data from 2010 through 2021. Women with a history of MUS were identified using common data models; a common analytic code was executed at each site. A manual chart review was conducted in a per-site random patient subset to establish a reference standard. Automated text processing (Text Processed Integrated [TPI]) was developed and evaluated at each site to determine the surgical approach and synthetic mesh implantation. Patients were characterized, and surgical outcomes were ascertained over the subsequent 730 days.
Setting: Five large tertiary care academic medical centers.
Participants: Across five health systems, 9,906 eligible patients (mean age 57–60 per site) were identified.
Main outcome measures: Determination of surgical approach, synthetic mesh implantation, and assessment of the duration of surveillance for mortality and reoperation rates following MUS implantation.
Results: In the TPI cohort analysis, 3,331 patients were identified. Surgical approach per site was retropubic (42% to 77%), transobturator (6% to 44%), single incision (0% to 24%), and adjustable sling (0% to <4%). Concordance rates between TPI and chart review were 71%–90% per site for surgical approach and 28%–85% for synthetic mesh implantation. Patient follow-up observation rates for mortality and reoperation ranged from 22% to 36% at 90 days, 15% to 30% at 365 days, and 8% to 19% at 730 days.
Conclusion: Using EHR data alone, identification of medical devices and surgical approaches was feasible among women with MUS surgery for SUI, but long-term follow-up ascertainment rates were low. Medical device surveillance using EHR data should be evaluated in the context of the clinical use case, as applicability may vary.
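The record does not detail the TPI text-processing step, so purely as a toy illustration of rule-based classification of surgical approach from operative notes, a sketch could look as follows. The keyword patterns and the sample note are entirely assumed, not the study's rules.

```python
import re

# Assumed keyword patterns per surgical approach (illustrative only).
APPROACH_PATTERNS = {
    "retropubic": re.compile(r"\bretropubic\b|\bTVT\b", re.IGNORECASE),
    "transobturator": re.compile(r"\btrans-?obturator\b|\bTOT\b", re.IGNORECASE),
    "single_incision": re.compile(r"\bsingle[- ]incision\b|\bmini[- ]sling\b",
                                  re.IGNORECASE),
}

def classify_approach(note: str) -> list:
    """Return every surgical approach whose keywords appear in an operative note."""
    return [name for name, pattern in APPROACH_PATTERNS.items()
            if pattern.search(note)]

print(classify_approach("Retropubic mid-urethral sling (TVT) placed without complication."))
# -> ['retropubic']
```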
Journal Article