162 result(s) for "Lee, Honglak"
Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging
Molecular classification has transformed the management of brain tumors by enabling more accurate prognostication and personalized treatment. However, timely molecular diagnostic testing for patients with brain tumors is limited, complicating surgical and adjuvant treatment and obstructing clinical trial enrollment. In this study, we developed DeepGlioma, a rapid (<90 seconds), artificial-intelligence-based diagnostic screening system to streamline the molecular diagnosis of diffuse gliomas. DeepGlioma is trained using a multimodal dataset that includes stimulated Raman histology (SRH), a rapid, label-free, non-consumptive optical imaging method, and large-scale public genomic data. In a prospective, multicenter, international testing cohort of patients with diffuse glioma (n = 153) who underwent real-time SRH imaging, we demonstrate that DeepGlioma can predict the molecular alterations used by the World Health Organization to define the adult-type diffuse glioma taxonomy (IDH mutation, 1p19q co-deletion and ATRX mutation), achieving a mean molecular classification accuracy of 93.3 ± 1.6%. Our results demonstrate how artificial intelligence and optical histology can provide a rapid and scalable adjunct to wet lab methods for the molecular screening of patients with diffuse glioma. DeepGlioma, a multimodal deep learning approach for intraoperative diagnostic screening of diffuse glioma, trained on stimulated Raman histology and large-scale public genomic data, can predict molecular alterations for diffuse glioma diagnosis with high accuracy.
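As a rough illustration of the kind of multi-label prediction DeepGlioma performs (one probability per alteration: IDH, 1p19q, ATRX), here is a minimal PyTorch sketch. The encoder output size, head design, and training loss are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical multi-label head: maps an SRH image embedding to one
# probability per molecular alteration (IDH, 1p19q, ATRX).
class MolecularHead(nn.Module):
    def __init__(self, feat_dim: int = 512, n_alterations: int = 3):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_alterations)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # Independent sigmoids: a tumor can carry several alterations at once.
        return torch.sigmoid(self.classifier(image_features))

head = MolecularHead()
features = torch.randn(4, 512)   # stand-in for the SRH image encoder output
probs = head(features)           # shape (4, 3), one column per alteration
loss = nn.BCELoss()(probs, torch.randint(0, 2, (4, 3)).float())
```

Independent per-label sigmoids, rather than a single softmax, reflect the multi-label nature of the task: the three alterations are not mutually exclusive.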
Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders
Deep neural networks are increasingly used in medical imaging for tasks such as pathological classification, but they face challenges due to the scarcity of high-quality, expert-labeled training data. Recent efforts have utilized pre-trained contrastive image-text models like CLIP, adapting them for medical use by fine-tuning the model with chest X-ray images and corresponding reports for zero-shot pathology classification, thus eliminating the need for pathology-specific annotations. However, most studies continue to use the same contrastive learning objectives as in the general domain, overlooking the multi-labeled nature of medical image-report pairs. In this paper, we propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling. We aim to improve the performance of zero-shot pathology classification without relying on external knowledge. Our method can be applied to any pre-trained contrastive image-text encoder and easily transferred to out-of-domain datasets without further training, as it does not use external data. Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models, with an average macro AUROC increase of 4.3%. Additionally, our method outperforms the state-of-the-art and marginally surpasses board-certified radiologists in zero-shot classification for the five competition pathologies in the CheXpert dataset.
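The zero-shot protocol this abstract builds on can be summarized in a few lines: embed the X-ray and two textual prompts per pathology (positive and negative), then classify by cosine similarity. The sketch below uses random projections as stand-ins for real encoders; the prompt wording and embedding dimensions are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

# Stand-ins for a pre-trained image-text model (e.g., a CLIP variant);
# real encoders would replace these random projections.
torch.manual_seed(0)
image_encoder = torch.nn.Linear(1024, 256)   # flattened X-ray -> embedding
text_embeddings = {                          # would come from the text encoder
    "findings suggesting pneumonia": F.normalize(torch.randn(256), dim=-1),
    "no pneumonia": F.normalize(torch.randn(256), dim=-1),
}

def zero_shot_probability(xray: torch.Tensor, pathology: str) -> float:
    img = F.normalize(image_encoder(xray.flatten()), dim=-1)
    pos = text_embeddings[f"findings suggesting {pathology}"]
    neg = text_embeddings[f"no {pathology}"]
    # Softmax over the two cosine similarities yields a zero-shot score.
    logits = torch.stack([img @ pos, img @ neg])
    return torch.softmax(logits, dim=0)[0].item()

print(zero_shot_probability(torch.randn(32, 32), "pneumonia"))
```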
Attention-based solubility prediction of polysulfide and electrolyte analysis for lithium–sulfur batteries
During the continuous charge and discharge of lithium-sulfur batteries, one of the next-generation battery chemistries, polysulfides are generated in the battery's electrolyte and degrade its power and capacity by participating in the electrochemical process. The amount of polysulfide in the electrolyte can be estimated from the change in the Gibbs free energy of the electrolyte, ΔmixG, in the presence of polysulfide. However, obtaining ΔmixG for the diverse mixtures of components in the electrolyte is a complex and expensive task, and it is a bottleneck in electrolyte optimization. In this work, we present a machine-learning approach for predicting ΔmixG of electrolytes. The proposed architecture utilizes (1) an attention-based model (Attentive FP), a contrastive learning model (MolCLR), or Morgan fingerprints to represent chemical components, and (2) transformers to account for the interactions between chemicals in the electrolyte. This architecture was not only capable of predicting electrolyte properties, including those of chemicals not used during training, but also provided insights into chemical interactions within electrolytes. It revealed that a chemical's interactions with other chemicals relate to its logP and molecular weight.
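A minimal sketch of the second stage described above: per-component embeddings (projected fingerprints or learned molecular representations) are fed to a transformer encoder whose pooled output regresses ΔmixG. The dimensions, depth, and mean-pooling are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ElectrolyteRegressor(nn.Module):
    """Toy ΔmixG regressor: transformer over a set of component embeddings."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(emb_dim, 1)

    def forward(self, components: torch.Tensor) -> torch.Tensor:
        # components: (batch, n_components, emb_dim), e.g. projected
        # Morgan fingerprints or Attentive FP / MolCLR embeddings.
        h = self.encoder(components)
        return self.out(h.mean(dim=1)).squeeze(-1)  # mean-pool over the set

model = ElectrolyteRegressor()
pred = model(torch.randn(8, 5, 128))  # 8 electrolytes, 5 components each
```

Self-attention over the component set is what lets the model capture pairwise chemical interactions, the property the abstract's interpretability analysis examines.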
Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks
Intraoperative diagnosis is essential for providing safe and effective care during cancer surgery [1]. The existing workflow for intraoperative diagnosis, based on hematoxylin and eosin staining of processed tissue, is time, resource and labor intensive [2,3]. Moreover, interpretation of intraoperative histologic images is dependent on a contracting, unevenly distributed pathology workforce [4]. In the present study, we report a parallel workflow that combines stimulated Raman histology (SRH) [5–7], a label-free optical imaging method, and deep convolutional neural networks (CNNs) to predict diagnosis at the bedside in near real-time in an automated fashion. Specifically, our CNNs, trained on over 2.5 million SRH images, predict brain tumor diagnosis in the operating room in under 150 s, an order of magnitude faster than conventional techniques (for example, 20–30 min) [2]. In a multicenter, prospective clinical trial (n = 278), we demonstrated that CNN-based diagnosis of SRH images was noninferior to pathologist-based interpretation of conventional histologic images (overall accuracy, 94.6% versus 93.9%). Our CNNs learned a hierarchy of recognizable histologic feature representations to classify the major histopathologic classes of brain tumors. In addition, we implemented a semantic segmentation method to identify tumor-infiltrated diagnostic regions within SRH images. These results demonstrate how intraoperative cancer diagnosis can be streamlined, creating a complementary pathway for tissue diagnosis that is independent of a traditional pathology laboratory. A prospective, multicenter, case–control clinical trial evaluates the potential of artificial intelligence for providing accurate bedside diagnosis of patients with brain tumors.
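One way to picture the bedside inference loop: the SRH mosaic is tiled into patches, the CNN scores each patch, and patch probabilities are combined into a specimen-level call. Averaging patch softmaxes, as below, is an assumed aggregation rule for illustration, not the trial's exact pipeline.

```python
import torch
import torch.nn as nn

def specimen_diagnosis(patches: torch.Tensor, cnn: nn.Module,
                       classes: list[str]) -> str:
    """Average patch-level class probabilities into one specimen call.

    patches: (n_patches, 3, H, W) tiles cut from the SRH mosaic.
    Note: mean-pooling patch softmaxes is an assumed aggregation rule.
    """
    with torch.no_grad():
        probs = torch.softmax(cnn(patches), dim=1)  # (n_patches, n_classes)
    return classes[int(probs.mean(dim=0).argmax())]

# Toy stand-in for the trained network (the real model is a deep CNN).
cnn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3))
print(specimen_diagnosis(torch.randn(10, 3, 64, 64), cnn,
                         ["glioma", "meningioma", "nondiagnostic"]))
```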
AI-based molecular classification of diffuse gliomas using rapid, label-free optical imaging
Molecular classification has transformed the management of brain tumors by enabling more accurate prognostication and personalized treatment. Access to timely molecular diagnostic testing for brain tumor patients is limited [1–3], complicating surgical and adjuvant treatment and obstructing clinical trial enrollment [4]. We developed a rapid (<90 seconds), AI-based diagnostic screening system that can provide molecular classification of diffuse gliomas and report its use in a prospective, multicenter, international testing cohort of diffuse glioma patients (N = 153). By combining stimulated Raman histology (SRH), a rapid, label-free, non-consumptive, optical imaging method [5–7], and deep learning-based image classification, we are able to predict the molecular features used by the World Health Organization (WHO) to define the adult-type diffuse glioma taxonomy [8]. We developed a transformer-based multimodal training strategy that uses a pretrained SRH image feature encoder and a large-scale, genetic embedding model to achieve optimal molecular classification performance. Using this system, called DeepGlioma, we were able to achieve an average molecular genetic classification accuracy of 93.2% and identify the correct diffuse glioma molecular subgroup with 91.5% accuracy. Our results represent how artificial intelligence and optical histology can be used to provide a rapid and scalable alternative to wet lab methods for the molecular diagnosis of brain tumor patients.
High-throughput identification of transcription start sites, conserved promoter motifs and predicted regulons
Using 62 probe-level datasets obtained with a custom-designed Caulobacter crescentus microarray chip, we identify transcriptional start sites of 769 genes, 53 of which are transcribed from multiple start sites. Transcriptional start sites are identified by analyzing probe signal cross-correlation matrices created from probe pairs tiled every 5 bp upstream of the genes. Signals from probes binding the same message are correlated. The contribution of each promoter for genes transcribed from multiple promoters is identified. Knowing the transcription start site enables targeted searching for regulatory-protein binding motifs in the promoter regions of genes with similar expression patterns. We identified 27 motifs, 17 of which share no similarity to the characterized motifs of other C. crescentus transcriptional regulators. Using these motifs, we predict coregulated genes. We verified novel promoter motifs that regulate stress-response genes, including those responding to uranium challenge, a stress-response sigma factor and a stress-response noncoding RNA.
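The core signal-processing step, probe-pair correlation across arrays, is easy to sketch: probes downstream of the true start site co-vary across the 62 datasets because they bind the same message, while probes upstream of it report only noise. Below is a hedged numpy illustration; the correlation threshold and the change-point rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def tss_from_probe_correlation(signals: np.ndarray, spacing: int = 5,
                               threshold: float = 0.7) -> int:
    """Estimate a transcription start site from tiled-probe signals.

    signals: (n_probes, n_arrays) intensities for probes tiled every
    `spacing` bp, ordered 5' -> 3'. Probes on the same transcript
    correlate across arrays; we return the offset (bp upstream of the
    3'-most probe) where correlation with that probe first exceeds
    `threshold` (an illustrative cutoff).
    """
    corr = np.corrcoef(signals)          # probe-by-probe correlation matrix
    on_message = corr[-1] >= threshold   # correlated with the 3'-most probe
    first = int(np.argmax(on_message))   # 5'-most probe still on the message
    return (signals.shape[0] - 1 - first) * spacing

rng = np.random.default_rng(0)
transcript = rng.normal(size=62)                  # shared expression signal
sig = np.vstack([rng.normal(size=(4, 62)) * 0.5,  # upstream probes: noise only
                 transcript + rng.normal(size=(8, 62)) * 0.2])
print(tss_from_probe_correlation(sig))            # 35 bp in this toy example
```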
Unsupervised Feature Learning Via Sparse Hierarchical Representations
Machine learning has proved a powerful tool for artificial intelligence and data mining problems. However, its success has usually relied on having a good feature representation of the data, and having a poor representation can severely limit the performance of learning algorithms. These feature representations are often hand-designed, require significant amounts of domain knowledge and human labor, and do not generalize well to new domains. To address these issues, I will present machine learning algorithms that can automatically learn good feature representations from unlabeled data in various domains, such as images, audio, text, and robotic sensors. Specifically, I will first describe how efficient sparse coding algorithms --- which represent each input example using a small number of basis vectors --- can be used to learn good low-level representations from unlabeled data. I also show that this gives feature representations that yield improved performance in many machine learning tasks. In addition, building on the deep learning framework, I will present two new algorithms, sparse deep belief networks and convolutional deep belief networks, for building more complex, hierarchical representations, in which more complex features are automatically learned as a composition of simpler ones. When applied to images, this method automatically learns features that correspond to objects and decompositions of objects into object-parts. These features often lead to performance competitive with or better than highly hand-engineered computer vision algorithms in object recognition and segmentation tasks. Further, the same algorithm can be used to learn feature representations from audio data. In particular, the learned features yield improved performance over state-of-the-art methods in several speech recognition tasks.
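Sparse coding as described here infers, for each input x, a code s that minimizes 0.5*||x - D*s||^2 + lam*||s||_1 over a learned basis D. A minimal numpy sketch of that inference step via ISTA (iterative soft-thresholding) follows; the step size and iteration count are illustrative choices, and the efficient algorithms the abstract refers to converge faster than this baseline.

```python
import numpy as np

def sparse_code(x: np.ndarray, D: np.ndarray, lam: float = 0.1,
                n_iter: int = 200) -> np.ndarray:
    """ISTA for min_s 0.5 * ||x - D s||^2 + lam * ||s||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)                  # gradient of the quadratic
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return s

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                     # overcomplete basis
D /= np.linalg.norm(D, axis=0)                     # unit-norm basis vectors
x = D[:, :3] @ np.array([1.0, -0.5, 0.8])          # truly sparse signal
s = sparse_code(x, D)
print(np.count_nonzero(s), "active basis vectors out of", D.shape[1])
```

The L1 penalty is what drives most code entries to exactly zero, so each input is explained by a small number of basis vectors, the property the abstract exploits for feature learning.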
View Selection for 3D Captioning via Diffusion Ranking
Scalable annotation approaches are crucial for constructing extensive 3D-text datasets, facilitating a broader range of applications. However, existing methods sometimes lead to the generation of hallucinated captions, compromising caption quality. This paper explores the issue of hallucination in 3D object captioning, with a focus on the Cap3D method, which renders 3D objects into 2D views for captioning using pre-trained models. We pinpoint a major challenge: certain rendered views of 3D objects are atypical, deviating from the training data of standard image captioning models and causing hallucinations. To tackle this, we present DiffuRank, a method that leverages a pre-trained text-to-3D model to assess the alignment between 3D objects and their 2D rendered views, where views with high alignment closely represent the object's characteristics. By ranking all rendered views and feeding the top-ranked ones into GPT4-Vision, we enhance the accuracy and detail of captions, enabling the correction of 200k captions in the Cap3D dataset and extending it to 1 million captions across the Objaverse and Objaverse-XL datasets. Additionally, we showcase the adaptability of DiffuRank by applying it to pre-trained text-to-image models for a Visual Question Answering task, where it outperforms the CLIP model.
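The ranking step itself is simple once an alignment scorer exists: score every rendered view against the object, sort, and caption only the best views. The scorer below is a hypothetical placeholder standing in for the paper's diffusion-based alignment estimate.

```python
import torch

def rank_views(views: list[torch.Tensor],
               score_alignment,              # hypothetical scorer: view -> float
               top_k: int = 6) -> list[torch.Tensor]:
    """Return the top_k rendered views by object-view alignment score."""
    scored = sorted(views, key=score_alignment, reverse=True)
    return scored[:top_k]

# Toy scorer standing in for the text-to-3D diffusion alignment estimate;
# the real DiffuRank score comes from a pre-trained diffusion model.
toy_score = lambda view: float(view.mean())
views = [torch.rand(3, 224, 224) for _ in range(28)]
best = rank_views(views, toy_score)
print(len(best), "views selected for captioning")
```

Filtering to high-alignment views before captioning is what keeps atypical renders, the main hallucination trigger identified above, out of the captioning model's input.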