Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
597 result(s) for "Lee, Su-In"
Alice in Wonderland (أليس في بلاد العجائب)
by
Carroll, Lewis, 1832-1898 (author)
,
Carroll, Lewis, 1832-1898. Alice in wonderland
,
Puk, Sun Jun (illustrator)
in
Alice (Fictional character) -- Juvenile fiction
,
English children's stories -- 19th century -- Translations into Arabic
,
English children's literature -- 19th century -- Translations into Arabic
2011
The story begins in the garden of the house, where Alice sits with her elder sister, who is telling her about the world from a history book. But Alice is not listening, because the book has no pictures or conversations, and she drifts into a world of her own in which flowers sing and birds dance. A rabbit appears out of nowhere, in a hurry and carrying a pocket watch, and her curiosity drives her to follow him to see why he is in such a rush. She chases him until she reaches a burrow too small for her to enter, until the rabbit points her to a little bottle that makes whoever drinks from it shrink and become very small, which lets her get inside. She falls into the dark burrow and walks along the dark passage until she reaches a forest, where twin brothers appear and each begins telling her stories; Alice ignores them and escapes on foot through the great forest. Then she hears the sounds of singing and dancing and follows them until she arrives at the Mad Hatter's house, where a tea party is under way.
Explaining a series of models by propagating Shapley values
2022
Local feature attribution methods are increasingly used to explain complex machine learning models. However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series of models where each model is owned by a separate institution. The latter is particularly important because it often arises in finance where explanations are mandated. Here, we present Generalized DeepSHAP (G-DeepSHAP), a tractable method to propagate local feature attributions through complex series of models based on a connection to the Shapley value. We evaluate G-DeepSHAP across biological, health, and financial datasets to show that it provides equally salient explanations an order of magnitude faster than existing model-agnostic attribution techniques and demonstrate its use in an important distributed series of models setting.
Series of machine learning models arise in tasks across biology, medicine, and finance, but they are difficult to explain with current feature attribution techniques. The authors introduce a tractable method to compute local feature attributions for a series of machine learning models, inspired by connections to the Shapley value.
Journal Article
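As a rough, editorial illustration of the attribution-propagation idea described in this abstract (not the authors' G-DeepSHAP implementation), the sketch below chains exact Shapley values through a two-stage series of linear models, redistributing each stage's credit back to the raw inputs; all data and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # synthetic raw features
W1 = rng.normal(size=(5, 3))         # stage 1 (feature extractor): z = x @ W1
w2 = rng.normal(size=3)              # stage 2 (scorer): y = z @ w2
baseline = X.mean(axis=0)            # reference input for the attributions

def linear_shap(weights, x, ref):
    """Exact Shapley values of a linear model f(x) = x @ weights."""
    return weights * (x - ref)

x = X[0]
z, z_ref = x @ W1, baseline @ W1
phi_stage2 = linear_shap(w2, z, z_ref)         # credit assigned to intermediate features

# Propagate stage-2 credit to raw inputs: each input inherits credit in
# proportion to its share of the change it caused in each intermediate z_j.
contrib = W1 * (x - baseline)[:, None]         # input i's contribution to each z_j
share = contrib / contrib.sum(axis=0)          # fraction of z_j's change owed to input i
phi_inputs = (share * phi_stage2).sum(axis=1)  # chained attribution per raw input

# Local accuracy: the chained attributions sum to f(x) - f(baseline).
assert np.isclose(phi_inputs.sum(), x @ W1 @ w2 - baseline @ W1 @ w2)
```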
AI for radiographic COVID-19 detection selects shortcuts over signal
by
DeGrave, Alex J.
,
Lee, Su-In
,
Janizek, Joseph D.
2021
Artificial intelligence (AI) researchers and radiologists have recently reported AI systems that accurately detect COVID-19 in chest radiographs. However, the robustness of these systems remains unclear. Using state-of-the-art techniques in explainable AI, we demonstrate that recent deep learning systems to detect COVID-19 from chest radiographs rely on confounding factors rather than medical pathology, creating an alarming situation in which the systems appear accurate, but fail when tested in new hospitals. We observe that the approach to obtain training data for these AI systems introduces a nearly ideal scenario for AI to learn these spurious ‘shortcuts’. Because this approach to data collection has also been used to obtain training data for the detection of COVID-19 in computed tomography scans and for medical imaging tasks related to other diseases, our study reveals a far-reaching problem in medical-imaging AI. In addition, we show that evaluation of a model on external data is insufficient to ensure AI systems rely on medically relevant pathology, because the undesired ‘shortcuts’ learned by AI systems may not impair performance in new hospitals. These findings demonstrate that explainable AI should be seen as a prerequisite to clinical deployment of machine-learning healthcare models.
The urgency of the developing COVID-19 epidemic has led to a large number of novel diagnostic approaches, many of which use machine learning. DeGrave and colleagues use explainable AI techniques to analyse a selection of these approaches and find that the methods frequently learn to identify features unrelated to the actual disease.
Journal Article
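To make the 'shortcut' finding concrete, here is a minimal, hypothetical sketch of one way to probe for confounders (the authors' pipeline uses saliency maps and generative counterfactuals, not this exact test): blank out patches of a radiograph and watch how the model's score moves. `model_score` and `radiograph` are assumed placeholders.

```python
import numpy as np

def occlusion_sensitivity(predict, image, patch=32, fill=0.0):
    """Drop in `predict`'s scalar score when each patch of a 2-D image is blanked."""
    h, w = image.shape
    base = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = fill
            heatmap[i // patch, j // patch] = base - predict(masked)
    return heatmap

# Hypothetical usage, assuming `model_score` maps a radiograph to a COVID-19
# probability: large sensitivity at image borders or laterality markers,
# rather than over the lung fields, points to learned shortcuts.
# heat = occlusion_sensitivity(model_score, radiograph)
```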
A machine learning approach to integrate big data for precision medicine in acute myeloid leukemia
2018
Cancers that appear pathologically similar often respond differently to the same drug regimens. Methods to better match patients to drugs are in high demand. We demonstrate a promising approach to identify robust molecular markers for targeted treatment of acute myeloid leukemia (AML) by introducing: (1) data from 30 AML patients, including genome-wide gene expression profiles and in vitro sensitivity to 160 chemotherapy drugs, and (2) a computational method to identify reliable gene expression markers for drug sensitivity by incorporating multi-omic prior information relevant to each gene's potential to drive cancer. We show that our method outperforms several state-of-the-art approaches in identifying molecular markers replicated in validation data and predicting drug sensitivity accurately. Finally, we identify SMARCA4 as a marker and driver of sensitivity to the topoisomerase II inhibitors mitoxantrone and etoposide in AML, by showing that cell lines transduced to have high SMARCA4 expression show dramatically increased sensitivity to these agents.
Identification of markers of drug response is essential for precision therapy. Here the authors introduce an algorithm that uses prior information about each gene’s importance in AML to identify the most predictive gene-drug associations from transcriptome and drug response data from 30 AML samples.
Journal Article
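A minimal sketch of the prior-weighting idea this summary describes, not the authors' algorithm: correlate each gene's expression with drug response, then shrink the correlations by a per-gene prior score so genes with multi-omic support rank higher. The `prior` vector is an assumed input.

```python
import numpy as np

def prior_weighted_markers(expr, sensitivity, prior, top_k=10):
    """expr: samples x genes; sensitivity: per-sample drug response;
    prior: per-gene relevance score in [0, 1] from multi-omic evidence."""
    expr_c = expr - expr.mean(axis=0)
    resp_c = sensitivity - sensitivity.mean()
    corr = expr_c.T @ resp_c / (
        np.linalg.norm(expr_c, axis=0) * np.linalg.norm(resp_c) + 1e-12)
    score = np.abs(corr) * prior       # shrink genes with weak prior support
    return np.argsort(score)[::-1][:top_k]

# Hypothetical usage with random data standing in for the 30-patient cohort:
rng = np.random.default_rng(0)
genes = prior_weighted_markers(rng.normal(size=(30, 500)),
                               rng.normal(size=30),
                               rng.random(500))
```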
Explainable machine-learning predictions for the prevention of hypoxaemia during surgery
2018
Although anaesthesiologists strive to avoid hypoxaemia during surgery, reliably predicting future intraoperative hypoxaemia is not possible at present. Here, we report the development and testing of a machine-learning-based system that predicts the risk of hypoxaemia and provides explanations of the risk factors in real time during general anaesthesia. The system, which was trained on minute-by-minute data from the electronic medical records of over 50,000 surgeries, improved the performance of anaesthesiologists by providing interpretable hypoxaemia risks and contributing factors. The explanations for the predictions are broadly consistent with the literature and with prior knowledge from anaesthesiologists. Our results suggest that if anaesthesiologists currently anticipate 15% of hypoxaemia events, with the assistance of this system they could anticipate 30%, a large portion of which may benefit from early intervention because they are associated with modifiable factors. The system can help improve the clinical understanding of hypoxaemia risk during anaesthesia care by providing general insights into the exact changes in risk induced by certain characteristics of the patient or procedure.
An alert system based on machine learning and trained on surgical data from electronic medical records helps anaesthesiologists prevent hypoxaemia during surgery by providing interpretable real-time predictions.
Journal Article
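The following sketch (synthetic data, assumed feature names; not the published system) shows the general pattern the abstract describes: a gradient-boosted classifier predicts near-term hypoxaemia risk from per-minute features, and SHAP attributions break each prediction into the factors pushing risk up or down.

```python
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
features = ["SpO2", "FiO2", "tidal_volume", "heart_rate", "BMI"]  # assumed names
X = rng.normal(size=(5000, len(features)))
y = (X[:, 0] < -1).astype(int)        # toy label: low SpO2 precedes an event

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)

minute = X[:1]                        # the latest minute of monitoring data
risk = model.predict_proba(minute)[0, 1]
for name, phi in zip(features, explainer.shap_values(minute)[0]):
    print(f"{name:>12}: {phi:+.3f}")  # factors pushing risk up (+) or down (-)
print(f"predicted hypoxaemia risk: {risk:.2f}")
```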
Predictive and robust gene selection for spatial transcriptomics
2023
A prominent trend in single-cell transcriptomics is providing spatial context alongside a characterization of each cell’s molecular state. This typically requires targeting an a priori selection of genes, often covering less than 1% of the genome, and a key question is how to optimally determine the small gene panel. We address this challenge by introducing a flexible deep learning framework, PERSIST, to identify informative gene targets for spatial transcriptomics studies by leveraging reference scRNA-seq data. Using datasets spanning different brain regions, species, and scRNA-seq technologies, we show that PERSIST reliably identifies panels that provide more accurate prediction of the genome-wide expression profile, thereby capturing more information with fewer genes. PERSIST can be adapted to specific biological goals, and we demonstrate that PERSIST’s binarization of gene expression levels enables models trained on scRNA-seq data to generalize to spatial transcriptomics data, despite the complex shift between these technologies.
Gene selection for spatial transcriptomics is currently not optimal. Here the authors report PERSIST, a flexible deep learning framework that uses existing scRNA-seq data to identify gene targets for spatial transcriptomics; they show that this captures more information with fewer genes.
Journal Article
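As a simplified stand-in for the PERSIST framework (which uses a deep learning selection layer rather than the greedy ridge-regression loop below), this sketch shows the core recipe: binarize reference scRNA-seq expression and grow a small panel whose binarized values best reconstruct the genome-wide profile. All data here are random.

```python
import numpy as np
from sklearn.linear_model import Ridge

def greedy_panel(expr, panel_size):
    """expr: cells x genes. Grow a panel whose binarized (detected / not
    detected) expression best reconstructs the full expression matrix."""
    binary = (expr > 0).astype(float)  # cf. PERSIST's binarization of counts
    panel = []
    for _ in range(panel_size):
        best_gene, best_err = None, np.inf
        for g in range(expr.shape[1]):
            if g in panel:
                continue
            cols = binary[:, panel + [g]]
            pred = Ridge(alpha=1.0).fit(cols, expr).predict(cols)
            err = ((pred - expr) ** 2).mean()
            if err < best_err:
                best_gene, best_err = g, err
        panel.append(best_gene)
    return panel

# Toy usage on random counts (a real run would use reference scRNA-seq data):
panel = greedy_panel(np.log1p(np.random.default_rng(0).poisson(0.5, (200, 100))), 8)
```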
PAUSE: principled feature attribution for unsupervised gene expression analysis
by
Lee, Ting-I
,
Janizek, Joseph D.
,
Spiro, Anna
in
Alzheimer's disease
,
Animal Genetics and Genomics
,
architecture
2023
As interest in using unsupervised deep learning models to analyze gene expression data has grown, an increasing number of methods have been developed to make these models more interpretable. These methods can be separated into two groups: post hoc analyses of black box models through feature attribution methods and approaches to build inherently interpretable models through biologically-constrained architectures. We argue that these approaches are not mutually exclusive, but can in fact be usefully combined. We propose PAUSE (https://github.com/suinleelab/PAUSE), an unsupervised pathway attribution method that identifies major sources of transcriptomic variation when combined with biologically-constrained neural network models.
Journal Article
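A minimal sketch of the biologically-constrained architecture this abstract refers to, with a made-up pathway membership mask: each latent unit of an autoencoder is wired only to the genes of one pathway, so its activation reads directly as a pathway score. This illustrates the general idea, not the PAUSE code at the repository above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathwayAutoencoder(nn.Module):
    def __init__(self, mask):          # mask: pathways x genes, 0/1 membership
        super().__init__()
        self.mask = mask
        self.enc = nn.Linear(mask.shape[1], mask.shape[0])
        self.dec = nn.Linear(mask.shape[0], mask.shape[1])

    def forward(self, x):
        # Masking the encoder weights wires each latent unit to one pathway.
        z = F.linear(x, self.enc.weight * self.mask, self.enc.bias)
        return self.dec(z), z          # reconstruction and pathway activations

mask = (torch.rand(50, 2000) < 0.05).float()          # made-up membership matrix
model = PathwayAutoencoder(mask)
recon, pathway_scores = model(torch.randn(8, 2000))   # 8 cells x 50 pathways
```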
Massively parallel functional dissection of mammalian enhancers in vivo
2012
Two groups describe approaches for synthesizing and assaying the function of thousands of variants of mammalian DNA regulatory elements. Melnikov et al. use their results to engineer short optimized regulatory elements in human cells, whereas Patwardhan et al. study enhancers hundreds of bases long in mice.
The functional consequences of genetic variation in mammalian regulatory elements are poorly understood. We report the in vivo dissection of three mammalian enhancers at single-nucleotide resolution through a massively parallel reporter assay. For each enhancer, we synthesized a library of >100,000 mutant haplotypes with 2–3% divergence from the wild-type sequence. Each haplotype was linked to a unique sequence tag embedded within a transcriptional cassette. We introduced each enhancer library into mouse liver and measured the relative activities of individual haplotypes en masse by sequencing the transcribed tags. Linear regression analysis yielded highly reproducible estimates of the effect of every possible single-nucleotide change on enhancer activity. The functional consequence of most mutations was modest, with ∼22% affecting activity by >1.2-fold and ∼3% by >2-fold. Several, but not all, positions with higher effects showed evidence for purifying selection, or co-localized with known liver-associated transcription factor binding sites, demonstrating the value of empirical high-resolution functional analysis.
Journal Article
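The regression step in this abstract can be illustrated with a few lines of synthetic data: encode each haplotype as a binary vector of its single-nucleotide changes and regress measured log activity on that matrix, so each coefficient estimates one mutation's effect. The sizes and noise levels below are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_haplotypes, n_mutations = 10000, 300        # invented sizes
mutations = (rng.random((n_haplotypes, n_mutations)) < 0.02).astype(float)
true_effect = rng.normal(scale=0.1, size=n_mutations)   # log2 fold-change per change
log_activity = mutations @ true_effect + rng.normal(scale=0.05, size=n_haplotypes)

fit = LinearRegression().fit(mutations, log_activity)
effects = fit.coef_                            # estimated per-mutation effect
print("fraction of changes altering activity >1.2-fold:",
      (np.abs(effects) > np.log2(1.2)).mean())
```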
Interpretable machine learning prediction of all-cause mortality
2022
Background
Unlike the linear models traditionally used to study all-cause mortality, complex machine learning models can capture non-linear interrelations and provide opportunities to identify unexplored risk factors. Explainable artificial intelligence can improve prediction accuracy over linear models and reveal valuable insights into outcomes such as mortality. This paper comprehensively analyzes all-cause mortality by explaining complex machine learning models.
Methods
We propose the IMPACT framework, which uses XAI techniques to explain a state-of-the-art tree ensemble mortality prediction model. We apply IMPACT to understand all-cause mortality for 1-, 3-, 5-, and 10-year follow-up times within the NHANES dataset, which contains 47,261 samples and 151 features.
Results
We show that IMPACT models achieve higher accuracy than linear models and neural networks. Using IMPACT, we identify several overlooked risk factors and interaction effects. Furthermore, we identify relationships between laboratory features and mortality that may suggest adjusting established reference intervals. Finally, we develop highly accurate, efficient and interpretable mortality risk scores that can be used by medical professionals and individuals without medical expertise. We ensure generalizability by performing temporal validation of the mortality risk scores and external validation of important findings with the UK Biobank dataset.
Conclusions
IMPACT’s unique strength is the explainable prediction, which provides insights into the complex, non-linear relationships between mortality and features, while maintaining high accuracy. Our explainable risk scores could help individuals improve self-awareness of their health status and help clinicians identify patients with high risk. IMPACT takes a consequential step towards bringing contemporary developments in XAI to epidemiology.
Qiu et al. present a new approach, IMPACT, that uses explainable artificial intelligence to analyze all-cause mortality. IMPACT provides individualized mortality risk scores and insight into them, while maintaining high model accuracy and the expressive power to capture complex, non-linear relationships between mortality and individuals’ features.
Journal Article
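A minimal sketch (synthetic stand-in for NHANES, with assumed feature names; not the IMPACT code) of the framework's core recipe as described above: fit a tree-ensemble mortality model, then rank risk factors globally by mean absolute SHAP value across individuals.

```python
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "serum_albumin", "smoking", "bmi"]  # assumed names
X = rng.normal(size=(2000, len(features)))
y = (0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

importance = np.abs(shap_values).mean(axis=0)  # global per-feature contribution
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:>14}: {score:.3f}")
```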