Catalogue Search | MBRL
90 results for "Jiao, Zhicheng"
Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning
2021
• We combined a dual-VAE structure with a GAN to build the D-Vae/Gan framework.
• GAN-based inter-modality knowledge distillation was introduced for feature learning.
• The model training process was divided into cascade stages with a three-stage strategy.
• Reconstructions on four fMRI datasets were objectively and subjectively identifiable.
Reconstructing a perceived stimulus (image) solely from human brain activity measured with functional Magnetic Resonance Imaging (fMRI) is a significant task in brain decoding. However, the inconsistent distribution and representation between fMRI signals and visual images create a great 'domain gap'. Moreover, the limited fMRI data instances generally suffer from low signal-to-noise ratio (SNR), extremely high dimensionality, and limited spatial resolution. Existing methods are often affected by these issues, so a satisfactory reconstruction remains an open problem. In this paper, we show that it is possible to obtain a promising solution by learning visually-guided latent cognitive representations from the fMRI signals and inversely decoding them to the image stimuli. The resulting framework is called Dual-Variational Autoencoder/Generative Adversarial Network (D-Vae/Gan), which combines the advantages of adversarial representation learning with knowledge distillation. In addition, we introduce a novel three-stage learning strategy which enables the (cognitive) encoder to gradually distill useful knowledge from the paired (visual) encoder during the learning process. Extensive experimental results on both artificial and natural images demonstrate that our method achieves surprisingly good results and outperforms the available alternatives.
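The latent "cognitive representation" described in this abstract is built on variational autoencoders. As a rough, hypothetical illustration (not the authors' code; all dimensions are toy values), a minimal NumPy sketch of the VAE reparameterization trick and KL regularizer such an encoder relies on:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) per sample, averaged over the batch; always >= 0."""
    return np.mean(-0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1))

# Toy latent codes for a batch of 8 encoded fMRI feature vectors
mu = rng.standard_normal((8, 16))
log_var = rng.standard_normal((8, 16))
z = reparameterize(mu, log_var)
print(z.shape)  # (8, 16)
```

In the D-Vae/Gan framework such latent codes would additionally be shaped by a GAN discriminator and, per the three-stage strategy, distilled from the paired visual encoder.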
Journal Article
Automated machine learning for differentiation of hepatocellular carcinoma from intrahepatic cholangiocarcinoma on multiphasic MRI
by Zou, Beiji, Thomasian, Nicole M., Bai, Harrison X.
in Artificial Intelligence
2022
With modern management of primary liver cancer shifting towards non-invasive diagnostics, accurate tumor classification on medical imaging is increasingly critical for disease surveillance and appropriate targeting of therapy. Recent advancements in machine learning raise the possibility of automated tools that can accelerate workflow, enhance performance, and increase the accessibility of artificial intelligence to clinical researchers. We explore the use of an automated Tree-Based Optimization Tool that leverages a genetic programming algorithm to differentiate the two common primary liver cancers on multiphasic MRI. Manual and automated analyses were performed to select an optimal machine learning model, with an accuracy of 73–75% (95% CI 0.59–0.85), sensitivity of 70–75% (95% CI 0.48–0.89), and specificity of 71–79% (95% CI 0.52–0.90) on manual optimization, and an accuracy of 73–75% (95% CI 0.59–0.85), sensitivity of 65–75% (95% CI 0.43–0.89), and specificity of 75–79% (95% CI 0.56–0.90) for automated machine learning. We found that automated machine learning performance was similar to that of manual optimization, and it could classify hepatocellular carcinoma and intrahepatic cholangiocarcinoma with a sensitivity and specificity comparable to those of radiologists. However, automated machine learning performance was poor on a subset of scans that met LI-RADS criteria for LR-M. Exploration of additional feature selection and classifier methods with automated machine learning to improve performance on LR-M cases, as well as prospective validation in the clinical setting, is needed prior to implementation.
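The abstract's automated tool performs model and hyperparameter search. As a hedged stand-in (the genetic-programming tool itself is a separate package and is not reproduced here), this sketch shows the same idea with scikit-learn's GridSearchCV on synthetic data standing in for MRI-derived features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for features extracted from multiphasic MRI
X, y = make_classification(n_samples=200, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Automated search over a small model/hyperparameter space,
# scored by cross-validation on the training set
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5,
)
search.fit(X_train, y_train)
acc = accuracy_score(y_test, search.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

A genetic-programming optimizer explores a much larger space (including preprocessing and classifier choices), but the workflow — search on training folds, evaluate once on a held-out set — is the same.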
Journal Article
LAZY Gene Family in Plant Gravitropism
2021
Adapting to the omnipresent gravitational field was a fundamental basis driving the flourishing of terrestrial plants on the Earth. Plants have evolved a remarkable capability that not only allows them to live and develop within the Earth's gravity field, but also enables them to use the gravity vector to guide the growth of roots and shoots, in a process known as gravitropism. Triggered by gravistimulation, plant gravitropism is a highly complex, multistep process that requires many organelles and players to function in an intricate, coordinated way. Although this process has been studied for several hundred years, much remains unclear, particularly the early events that trigger the relocation of the auxin efflux carrier PIN-FORMED (PIN) proteins, which presumably leads to the asymmetrical redistribution of auxin. In the past decade, the LAZY gene family has been identified as a crucial player that ensures the proper redistribution of auxin and a normal tropic response in both roots and shoots upon gravistimulation. LAZY proteins appear to participate in the early steps of gravity signaling, as the mutation of LAZY genes consistently leads to altered auxin redistribution in multiple plant species. The identification and characterization of the LAZY gene family have significantly advanced our understanding of plant gravitropism and opened new frontiers of investigation into the molecular details of the early events of gravitropism. Here we review current knowledge of the LAZY gene family and the mechanisms modulated by LAZY proteins to control both root and shoot gravitropism. We also discuss the evolutionary significance and conservation of the LAZY gene family in plants.
Journal Article
Attention-based multimodal deep learning for interpretable and generalizable prediction of pathological complete response in breast cancer
2025
Background
Accurate prediction of pathological complete response (pCR) to neoadjuvant chemotherapy has significant clinical utility in the management of breast cancer treatment. Although multimodal deep learning models have shown promise for predicting pCR from medical imaging and other clinical data, their adoption has been limited due to challenges with interpretability and generalizability across institutions.
Methods
We developed a multimodal deep learning model combining post contrast-enhanced whole-breast MRI at pre- and post-treatment timepoints with non-imaging clinical features. The model integrates 3D convolutional neural networks and self-attention to capture spatial and cross-modal interactions. We utilized two public multi-institutional datasets to perform internal and external validation of the model. For model training and validation, we used data from the I-SPY 2 trial (N = 660). For external validation, we used the I-SPY 1 dataset (N = 114).
Results
Of the 660 patients in I-SPY 2, 217 patients achieved pCR (32.88%). Of the 114 patients in I-SPY 1, 29 achieved pCR (25.44%). The attention-based multimodal model yielded the best predictive performance with an AUC of 0.73 ± 0.04 on the internal data and an AUC of 0.71 ± 0.02 on the external dataset. The MRI-only model (internal AUC = 0.68 ± 0.03, external AUC = 0.70 ± 0.04) and the non-MRI clinical features-only model (internal AUC = 0.66 ± 0.08, external AUC = 0.71 ± 0.03) trailed in performance, indicating the combination of both modalities is most effective.
Conclusion
We present a robust and interpretable deep learning framework for pCR prediction in breast cancer patients undergoing neoadjuvant chemotherapy. By combining imaging and clinical data with attention-based fusion, the model achieves strong predictive performance and generalizes across institutions.
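The fusion mechanism described above uses self-attention over imaging and clinical inputs; the exact architecture is not given in this record, so the following is only an illustrative NumPy sketch of scaled dot-product cross-modal attention, with all shapes and names assumed for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_tokens, clin_tokens):
    """Scaled dot-product attention: imaging tokens (queries) attend to
    clinical-feature tokens (keys/values), fusing the two modalities."""
    d = img_tokens.shape[-1]
    scores = img_tokens @ clin_tokens.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ clin_tokens, weights

rng = np.random.default_rng(1)
img = rng.standard_normal((4, 8))    # e.g. 4 pooled MRI region embeddings
clin = rng.standard_normal((6, 8))   # e.g. 6 embedded clinical features
fused, w = cross_modal_attention(img, clin)
print(fused.shape)  # (4, 8)
```

The attention weights `w` are also what makes such a model inspectable: each row shows how strongly an imaging region drew on each clinical feature.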
Journal Article
Genome-wide study of C2H2 zinc finger gene family in Medicago truncatula
2020
Background
C2H2 zinc finger proteins (C2H2 ZFPs) play vital roles in shaping many aspects of plant growth and adaptation to the environment. Plant genomes harbor hundreds of C2H2 ZFPs, which compose one of the largest and most important transcription factor families in higher plants. Although the C2H2 ZFP gene family has been reported in several plant species, it has not been described in the model leguminous species Medicago truncatula.
Results
In this study, we identified 218 C2H2-type ZFPs with 337 individual C2H2 motifs in M. truncatula. We showed that a high rate of local gene duplication has significantly contributed to the expansion of the C2H2 gene family in M. truncatula. The identified ZFPs exhibit high variation in motif arrangement and expression pattern, suggesting that the short C2H2 zinc finger motif has been adopted as a scaffold by numerous transcription factors with different functions to recognize cis-elements. By analyzing public expression datasets and quantitative RT-PCR (qRT-PCR), we identified several C2H2 ZFPs that are specifically expressed in certain tissues, such as the nodule, seed, and flower.
Conclusion
Our genome-wide work revealed an expanded C2H2 ZFP gene family in the important legume M. truncatula, and provides new insights into the diversification and expansion of C2H2 ZFPs in higher plants.
Journal Article
Artificial intelligence for prediction of COVID-19 progression using CT imaging and clinical data
2022
Objectives
Early recognition of coronavirus disease 2019 (COVID-19) severity can guide patient management. However, it is challenging to predict when COVID-19 patients will progress to critical illness. This study aimed to develop an artificial intelligence system to predict future deterioration to critical illness in COVID-19 patients.
Methods
An artificial intelligence (AI) system in a time-to-event analysis framework was developed to integrate chest CT and clinical data for risk prediction of future deterioration to critical illness in patients with COVID-19.
Results
A multi-institutional international cohort of 1,051 patients with RT-PCR-confirmed COVID-19 and chest CT was included in this study. Of them, 282 patients developed critical illness, which was defined as requiring ICU admission and/or mechanical ventilation and/or death during their hospital stay. The AI system achieved a C-index of 0.80 for predicting individual COVID-19 patients' progression to critical illness. The AI system successfully stratified the patients into high-risk and low-risk groups with distinct progression risks (p < 0.0001).
Conclusions
Using CT imaging and clinical data, the AI system successfully predicted time to critical illness for individual patients and identified patients with high risk. AI has the potential to accurately triage patients and facilitate personalized treatment.
Key Point
• The AI system can predict time to critical illness for patients with COVID-19 using CT imaging and clinical data.
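The C-index reported above measures how well predicted risk scores rank patients by their observed time-to-event under censoring. A minimal sketch of Harrell's concordance index in its O(n²) pairwise form (the data are toy values, not the study cohort):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when the
    earlier time belongs to an observed event; it is concordant when the
    higher predicted risk has the shorter time. Risk ties count 0.5."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: days to deterioration; event 1 = deteriorated, 0 = censored
time  = [3, 5, 7, 10, 14]
event = [1, 1, 0, 1, 0]
risk  = [0.9, 0.8, 0.4, 0.6, 0.1]  # higher score = predicted higher risk
print(c_index(risk, time, event))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the system's 0.80 sits well above chance.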
Journal Article
Radiomics-based machine learning analysis and characterization of breast lesions with multiparametric diffusion-weighted MR
2021
Background
This study aimed to evaluate the utility of radiomics-based machine learning analysis with multiparametric DWI and to compare the diagnostic performance of radiomics features and mean diffusion metrics in the characterization of breast lesions.
Methods
This retrospective study included 542 lesions from February 2018 to November 2018. One hundred radiomics features were computed from mono-exponential (ME), biexponential (BE), stretched-exponential (SE), and diffusion-kurtosis imaging (DKI) models. Radiomics-based analysis was performed by comparing four classifiers: random forest (RF), principal component analysis (PCA), L1 regularization (L1R), and support vector machine (SVM). These four classifiers were trained on a training set of 271 patients via ten-fold cross-validation and tested on an independent testing set of 271 patients. The diagnostic performance of the mean diffusion metrics of ME (mADC_all-b, mADC_0–1000), BE (mD, mD*, mf), SE (mDDC, mα), and DKI (mK, mD) was also calculated for comparison. The area under the receiver operating characteristic curve (AUC) was used to compare diagnostic performance.
Results
RF attained higher AUCs than L1R, PCA, and SVM. The AUCs of the radiomics features for the differential diagnosis of breast lesions ranged from 0.80 (BE_D*) to 0.85 (BE_D). The AUCs of the mean diffusion metrics ranged from 0.54 (BE_mf) to 0.79 (ME_mADC_0–1000). The AUCs of the radiomics features were significantly higher than those of all mean diffusion metrics (all P < 0.001) for the differentiation of benign and malignant breast lesions. Of the radiomics features computed, the most important sequence was BE_D (AUC: 0.85), and the most important feature was the FO-10 percentile (feature importance: 0.04).
Conclusions
Radiomics-based analysis of multiparametric DWI by RF enables better differentiation of benign and malignant breast lesions than the mean diffusion metrics.
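The study's central comparison — a cross-validated classifier over many radiomics features versus a single mean metric used directly as a diagnostic score — can be sketched with scikit-learn. This is a hedged illustration on synthetic data; the features are random stand-ins, not DWI metrics:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in: 100 'radiomics features' per lesion, benign vs malignant
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)

# Multi-feature radiomics model: RF scored with ten-fold cross-validation
probs = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=10, method="predict_proba")[:, 1]
auc_rf = roc_auc_score(y, probs)

# Single 'mean diffusion metric' baseline: one feature used as the score
auc_mean = roc_auc_score(y, X[:, 0])
auc_mean = max(auc_mean, 1.0 - auc_mean)  # orient the scalar metric

print(f"RF radiomics AUC: {auc_rf:.2f}, single-metric AUC: {auc_mean:.2f}")
```

Using out-of-fold predictions for the AUC, as here, mirrors the study's ten-fold cross-validation and avoids scoring the classifier on data it was fit to.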
Journal Article
Rapid identification of mutations caused by fast neutron bombardment in Medicago truncatula
2021
Background
Fast neutron bombardment (FNB) is a very effective approach for mutagenesis and has been widely used to generate mutant libraries in many plant species. The main type of mutation in FNB mutants is the deletion of DNA fragments ranging from a few base pairs to several hundred kilobases, usually leading to null mutations of genes. Despite its efficiency in mutagenesis, identifying the mutation sites is still challenging in many species. The traditional strategy of positional cloning is very effective in identifying mutations but time-consuming. With the availability of genome sequences, the array-based comparative genomic hybridization (CGH) method was developed to detect mutation sites by comparing the signal intensities of probes between wild-type and mutant plants. Though the CGH method is effective in detecting copy number variations (CNVs), the resolution and coverage of CGH probes are not adequate to identify mutations other than CNVs.
Results
We report a new strategy and pipeline to sensitively identify the mutation sites of FNB mutants by combining deep-coverage whole-genome sequencing (WGS), polymorphism calling, and customized filtering in Medicago truncatula. Initially, we performed bulked sequencing for an FNB white nodule (wn) mutant and its wild-type-like plants derived from a backcross population. Following polymorphism calling and filtering, and validation by manual checks and Sanger sequencing, we identified SymCRK as the causative gene of the white nodule mutant. We also sequenced an individual FNB mutant, yellow leaves 1 (yl1), and a wild-type plant. We identified ETHYLENE-DEPENDENT GRAVITROPISM-DEFICIENT AND YELLOW-GREEN 1 (EGY1) as the candidate gene for the M. truncatula yl1 mutant.
Conclusion
Our results demonstrated that the method reported here is rather robust in identifying the mutation sites for FNB mutants.
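The core WGS signal behind detecting an FNB deletion is a run of near-zero read coverage in the mutant where the wild type has normal depth. A simplified, hypothetical sketch of that scan (real pipelines also weigh paired-end and polymorphism evidence):

```python
import numpy as np

def find_deletions(coverage, min_len=3, max_cov=0):
    """Scan a per-base read-coverage array for runs of near-zero coverage,
    the signature of a homozygous deletion. Returns half-open (start, end)
    intervals of at least min_len bases."""
    low = np.asarray(coverage) <= max_cov
    deletions, start = [], None
    for i, flag in enumerate(low):
        if flag and start is None:
            start = i                      # run of low coverage begins
        elif not flag and start is not None:
            if i - start >= min_len:
                deletions.append((start, i))
            start = None
    if start is not None and len(low) - start >= min_len:
        deletions.append((start, len(low)))  # run extends to the end
    return deletions

# Toy mutant coverage: a 5-bp region with zero reads suggests a deletion
cov = [12, 11, 13, 0, 0, 0, 0, 0, 10, 12, 11]
print(find_deletions(cov))  # [(3, 8)]
```

In practice `min_len` and `max_cov` would be tuned to sequencing depth and mapping noise, and candidate intervals would be confirmed against the wild-type sample, as the pipeline above does with manual checks and Sanger sequencing.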
Journal Article
An automated COVID-19 triage pipeline using artificial intelligence based on chest radiographs and clinical data
by Eweje, Feyisope, Liao, Wei-Hua, Gong, Ji Sheng
in Artificial intelligence
2022
While COVID-19 diagnosis and prognosis artificial intelligence models exist, very few can be implemented for practical use given their high risk of bias. We aimed to develop a diagnosis model that addresses notable shortcomings of prior studies, integrating it into a fully automated triage pipeline that examines chest radiographs for the presence, severity, and progression of COVID-19 pneumonia. Scans were collected using the DICOM Image Analysis and Archive, a system that communicates with a hospital’s image repository. The authors collected over 6,500 non-public chest X-rays comprising diverse COVID-19 severities, along with radiology reports and RT-PCR data. The authors provisioned one internally held-out and two external test sets to assess model generalizability and compare performance to traditional radiologist interpretation. The pipeline was evaluated on a prospective cohort of 80 radiographs, reporting a 95% diagnostic accuracy. The study mitigates bias in AI model development and demonstrates the value of an end-to-end COVID-19 triage platform.
Journal Article
Machine Learning-Based Prediction of COVID-19 Severity and Progression to Critical Illness Using CT Imaging and Clinical Data
by
Wang, Robin
,
Liao, Wei-hua
,
Zhang, Paul J.
in
Artificial intelligence
,
Coronaviruses
,
COVID-19
2021
To develop a machine learning (ML) pipeline based on radiomics to predict Coronavirus Disease 2019 (COVID-19) severity and the future deterioration to critical illness using CT and clinical variables.
Clinical data were collected from 981 patients from a multi-institutional international cohort with real-time polymerase chain reaction-confirmed COVID-19. Radiomics features were extracted from chest CT scans of the patients. The cohort was randomly divided into training, validation, and test sets in a 7:1:2 ratio. An ML pipeline, consisting of a model to predict severity and a time-to-event model to predict progression to critical illness, was trained on radiomics features and clinical variables. The receiver operating characteristic area under the curve (ROC-AUC), concordance index (C-index), and time-dependent ROC-AUC were calculated to determine model performance, which was compared with consensus CT severity scores obtained by visual interpretation by radiologists.
Among 981 patients with confirmed COVID-19, 274 patients developed critical illness. Radiomics features and clinical variables resulted in the best performance for the prediction of disease severity, with a highest test ROC-AUC of 0.76 compared with 0.70 for visual CT severity score and clinical variables (0.76 vs. 0.70, p = 0.023). The progression prediction model achieved a test C-index of 0.868 when based on the combination of CT radiomics and clinical variables, compared with 0.767 when based on CT radiomics features alone (p < 0.001), 0.847 when based on clinical variables alone (p = 0.110), and 0.860 when based on the combination of visual CT severity scores and clinical variables (p = 0.549). Furthermore, the model based on the combination of CT radiomics and clinical variables achieved time-dependent ROC-AUCs of 0.897, 0.933, and 0.927 for the prediction of progression risk at 3, 5, and 7 days, respectively.
CT radiomics features combined with clinical variables were predictive of COVID-19 severity and progression to critical illness with fairly high accuracy.
Journal Article