117 result(s) for "Hamarneh, Ghassan"
Predicting the clinical management of skin lesions using deep learning
Automated machine learning approaches to skin lesion diagnosis from images are approaching dermatologist-level performance. However, current machine learning approaches that suggest management decisions rely on predicting the underlying skin condition to infer a management decision, without considering the variability of management decisions that may exist within a single condition. We present the first work to explore image-based prediction of clinical management decisions directly, without explicitly predicting the diagnosis. In particular, we use clinical and dermoscopic images of skin lesions along with patient metadata from the Interactive Atlas of Dermoscopy dataset (1011 cases; 20 disease labels; 3 management decisions) and demonstrate that predicting management labels directly is more accurate than predicting the diagnosis and then inferring the management decision (13.73 ± 3.93% and 6.59 ± 2.86% improvements in overall accuracy and AUROC, respectively), statistically significant at p < 0.001. Directly predicting management decisions also considerably reduces the over-excision rate as compared to management decisions inferred from diagnosis predictions (24.56% fewer cases wrongly predicted to be excised). Furthermore, we show that training a model to also simultaneously predict the seven-point criteria and the diagnosis of skin lesions yields an even higher accuracy of management predictions (improvements of 4.68 ± 1.89% and 2.24 ± 2.04% in overall accuracy and AUROC, respectively). Finally, we demonstrate our model’s generalizability by evaluating on the publicly available MClass-D dataset and show that our model agrees with the clinical management recommendations of 157 dermatologists as much as they agree amongst each other.
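As a minimal illustration of the baseline this abstract argues against, the diagnose-then-infer pipeline amounts to a fixed lookup from the predicted condition to a management decision. The labels below are hypothetical, not the dataset's actual classes:

```python
# Hypothetical sketch of the diagnose-then-infer baseline: every case
# predicted as the same condition receives the same management decision,
# collapsing the within-diagnosis variability that direct prediction keeps.
# Diagnosis and management labels here are illustrative, not the paper's.
DIAGNOSIS_TO_MANAGEMENT = {
    "nevus": "no_action",
    "melanoma": "excise",
    "basal_cell_carcinoma": "excise",
    "seborrheic_keratosis": "clinical_followup",
}

def infer_management(predicted_diagnosis: str) -> str:
    """Map a predicted diagnosis to a management decision via a fixed table."""
    return DIAGNOSIS_TO_MANAGEMENT[predicted_diagnosis]
```

Direct prediction replaces this lookup with a classifier trained on management labels themselves, which is what the abstract reports as more accurate.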
BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment
We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural network with the same number of model parameters on phantoms with both focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.
• First deep convolutional neural network architecture designed for connectomes.
• Novel convolutional layers for leveraging topological locality in brain networks.
• Prediction of neurodevelopmental outcomes in preterm infants.
• Visualization of brain connections learned to be important for prediction.
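The edge-to-edge filter idea can be sketched in a few lines: for a brain network given as an adjacency matrix, each output edge (i, j) aggregates all edges incident to node i (row i) or node j (column j), a cross-shaped filter rather than a spatially local patch. The weight parameterization below is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def edge_to_edge(A: np.ndarray, row_w: np.ndarray, col_w: np.ndarray) -> np.ndarray:
    """One edge-to-edge filter over an n x n adjacency matrix A.

    Output edge (i, j) = sum_k row_w[k] * A[i, k] + sum_k col_w[k] * A[k, j],
    i.e. a weighted sum over the cross formed by row i and column j.
    row_w/col_w play the role of learnable weights (illustrative shapes).
    """
    n = A.shape[0]
    # Row term: for output (i, j), sum over row i; identical for every j.
    row_term = (A * row_w).sum(axis=1, keepdims=True)      # shape (n, 1)
    # Column term: for output (i, j), sum over column j; identical for every i.
    col_term = (A.T * col_w).sum(axis=1, keepdims=True).T  # shape (1, n)
    return np.broadcast_to(row_term, (n, n)) + np.broadcast_to(col_term, (n, n))
```

Stacking such filters, then edge-to-node (summing a filtered row into a node value) and node-to-graph reductions, yields the graph-aware analogue of a conventional CNN pipeline.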
A fast and fully-automated deep-learning approach for accurate hemorrhage segmentation and volume quantification in non-contrast whole-head CT
This project aimed to develop and evaluate a fast and fully-automated deep-learning method applying convolutional neural networks with deep supervision (CNN-DS) for accurate hematoma segmentation and volume quantification in computed tomography (CT) scans. Non-contrast whole-head CT scans of 55 patients with hemorrhagic stroke were used. Individual scans were standardized to 64 axial slices of 128 × 128 voxels. Each voxel was annotated independently by experienced raters, generating a binary label of hematoma versus normal brain tissue based on majority voting. The dataset was split randomly into training (n = 45) and testing (n = 10) subsets. A CNN-DS model was built applying the training data and examined using the testing data. Performance of the CNN-DS solution was compared with three previously established methods. The CNN-DS achieved a Dice coefficient score of 0.84 ± 0.06 and recall of 0.83 ± 0.07, higher than patch-wise U-Net (< 0.76). The CNN-DS’s average running time of 0.74 ± 0.07 s was faster than PItcHPERFeCT (> 1412 s) and slice-based U-Net (> 12 s). Comparable interrater agreement rates were observed for the “method–human” and “human–human” comparisons (Cohen’s kappa coefficients > 0.82). The fully automated CNN-DS approach demonstrated expert-level accuracy in fast segmentation and quantification of hematoma, substantially improving over previous methods. Further research is warranted to test the CNN-DS solution as a software tool in clinical settings for effective stroke management.
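The reported evaluation rests on two standard quantities, the Dice coefficient and the segmented hematoma volume, which can be sketched as follows (the voxel size is a caller-supplied assumption, not stated per-voxel in the abstract):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def hematoma_volume_ml(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Volume of a binary hematoma segmentation in millilitres,
    given the per-voxel volume in cubic millimetres."""
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0
```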
SPECHT: Self-tuning Plausibility based object detection Enables quantification of Conflict in Heterogeneous multi-scale microscopy
Identification of small objects in fluorescence microscopy is a non-trivial task burdened by parameter-sensitive algorithms, for which there is a clear need for an approach that adapts dynamically to changing imaging conditions. Here, we introduce an adaptive object detection method that, given a microscopy image and an image-level label, uses kurtosis-based matching of the distribution of the image differential to express operator intent in terms of recall or precision. We show how a theoretical upper bound of the statistical distance in feature space enables application of belief theory to obtain statistical support for each detected object, capturing those aspects of the image that support the label, and to what extent. We validate our method on two datasets: distinguishing sub-diffraction-limit caveolae and scaffolds by stimulated emission depletion (STED) super-resolution microscopy; and detecting amyloid-β deposits in confocal microscopy retinal cross-sections of neuropathologically confirmed Alzheimer’s disease donor tissue. Our results are consistent with biological ground truth and with previous subcellular object classification results, and add insight into more nuanced class transition dynamics. We illustrate the novel application of belief theory to object detection in heterogeneous microscopy datasets and the quantification of conflict of evidence in a joint belief function. By applying our method successfully to diffraction-limited confocal imaging of tissue sections and super-resolution microscopy of subcellular structures, we demonstrate multi-scale applicability.
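A rough sketch of the kurtosis-of-the-image-differential ingredient; a simple finite-difference gradient magnitude stands in here for whatever differential operator the method actually uses, and the kurtosis estimator is the plain moment form:

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Excess kurtosis of a sample: E[(x - mu)^4] / sigma^4 - 3.
    Heavy-tailed distributions (rare bright spots on a dim background,
    as in sparse object detection) give large positive values."""
    x = np.asarray(x, dtype=float).ravel()
    mu = x.mean()
    sigma2 = x.var()
    return float(((x - mu) ** 4).mean() / sigma2 ** 2 - 3.0)

def image_differential(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude image, a simple stand-in for the method's
    'image differential' whose distribution is matched by kurtosis."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

Matching the kurtosis of this distribution across images is one way a detector could self-tune its threshold as imaging conditions change, which is the adaptivity the abstract describes.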
Investigating the Quality of DermaMNIST and Fitzpatrick17k Dermatological Image Datasets
The remarkable progress of deep learning in dermatological tasks has brought us closer to achieving diagnostic accuracies comparable to those of human experts. However, while large datasets play a crucial role in the development of reliable deep neural network models, the quality of data therein and their correct usage are of paramount importance. Several factors can impact data quality, such as the presence of duplicates, data leakage across train-test partitions, mislabeled images, and the absence of a well-defined test partition. In this paper, we conduct meticulous analyses of three popular dermatological image datasets (DermaMNIST, its source HAM10000, and Fitzpatrick17k), uncover these data quality issues, measure their effects on the benchmark results, and propose corrections to the datasets. By making our analysis pipeline and the accompanying code publicly available, we both ensure the reproducibility of our analysis and aim to encourage similar explorations, facilitating the identification and addressing of potential data quality issues in other large datasets.
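One of the audited issues, exact-duplicate leakage across a train/test split, can be checked with a simple hash-based scan. This byte-level check is a minimal stand-in: resized or re-encoded near-duplicates, which such audits also care about, evade exact hashing and require perceptual matching instead:

```python
import hashlib

def find_cross_partition_duplicates(train_images, test_images):
    """Return indices of test images that are exact byte-level duplicates
    of some training image, i.e. leakage across the train/test split.

    Images are passed as raw bytes. An exact SHA-256 hash catches only
    bitwise-identical copies; a full audit would add perceptual hashing
    to catch resized/re-encoded near-duplicates.
    """
    train_hashes = {hashlib.sha256(img).hexdigest() for img in train_images}
    return [i for i, img in enumerate(test_images)
            if hashlib.sha256(img).hexdigest() in train_hashes]
```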
Single molecule network analysis identifies structural changes to caveolae and scaffolds due to mutation of the caveolin-1 scaffolding domain
Caveolin-1 (CAV1), the caveolae coat protein, also associates with non-caveolar scaffold domains. Single molecule localization microscopy (SMLM) network analysis distinguishes caveolae and three scaffold domains: hemispherical S2 scaffolds and smaller S1B and S1A scaffolds. The caveolin scaffolding domain (CSD) is a highly conserved hydrophobic region that mediates interaction of CAV1 with multiple effector molecules. F92A/V94A mutation disrupts CSD function; however, the structural impact of CSD mutation on caveolae or scaffolds remains unknown. Here, SMLM network analysis quantitatively shows that expression of the CAV1 CSD F92A/V94A mutant in CRISPR/Cas CAV1 knockout MDA-MB-231 breast cancer cells reduces the size and volume and enhances the elongation of caveolae and scaffold domains, with more pronounced effects on S2 and S1B scaffolds. Convex hull analysis of the outer surface of the CAV1 point clouds confirms the size reduction of CSD mutant CAV1 blobs and shows that CSD mutation reduces volume variation amongst S2 and S1B CAV1 blobs at increasing shrink values, which may reflect retraction of the CAV1 N-terminus towards the membrane, potentially preventing accessibility of the CSD. Detection of point mutation-induced changes to CAV1 domains highlights the utility of SMLM network analysis for mesoscale structural analysis of oligomers in their native environment.
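The convex-hull analysis of CAV1 point clouds can be sketched with standard tools; `shrink` below is an illustrative guess at what scanning "shrink values" might look like (scaling points toward the centroid), not the paper's implementation:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_metrics(points: np.ndarray) -> dict:
    """Convex-hull size descriptors for a 3D SMLM point cloud (blob):
    hull volume and surface area, via Qhull."""
    hull = ConvexHull(points)
    return {"volume": float(hull.volume), "area": float(hull.area)}

def shrink(points: np.ndarray, factor: float) -> np.ndarray:
    """Scale a point cloud toward its centroid by `factor` (0 < factor <= 1).
    Illustrative stand-in for evaluating hulls at increasing shrink values."""
    c = points.mean(axis=0)
    return c + factor * (points - c)
```

Comparing `hull_metrics` across blob populations (e.g. wild-type vs. CSD mutant) and across shrink factors is the kind of comparison the abstract reports.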
Super-resolution modularity analysis shows polyhedral caveolin-1 oligomers combine to form scaffolds and caveolae
Caveolin-1 (Cav1), the coat protein for caveolae, also forms non-caveolar Cav1 scaffolds. Single molecule Cav1 super-resolution microscopy analysis previously identified caveolae and three distinct scaffold domains: smaller S1A and S1B scaffolds and larger hemispherical S2 scaffolds. Application here of network modularity analysis of SMLM data for endogenous Cav1 labeling in HeLa cells shows that small scaffolds combine to form larger scaffolds and caveolae. We find modules within Cav1 blobs by maximizing the intra-connectivity between Cav1 molecules within a module and minimizing the inter-connectivity between Cav1 molecules across modules, which is achieved via spectral decomposition of the localization adjacency matrix. Features of modules are then matched with intact blobs to find the similarity between the module-blob pairs of group centers. Our results show that smaller S1A and S1B scaffolds are made up of small polygons, that S1B scaffolds correspond to S1A scaffold dimers and that caveolae and hemispherical S2 scaffolds are complex, modular structures formed from S1B and S1A scaffolds, respectively. Polyhedral interactions of Cav1 oligomers therefore lead progressively to the formation of larger and more complex scaffold domains and the biogenesis of caveolae.
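Maximizing intra-module connectivity via spectral decomposition of an adjacency matrix is, in its simplest form, Newman's leading-eigenvector modularity method. A two-way version, offered as a simplified stand-in for the paper's full modularity pipeline, looks like:

```python
import numpy as np

def spectral_bisect(A: np.ndarray) -> np.ndarray:
    """Two-way modularity split of an undirected graph (Newman's
    leading-eigenvector method): form the modularity matrix
    B = A - k k^T / (2m), take the eigenvector of the largest
    eigenvalue, and assign each node to a module by its sign.
    """
    k = A.sum(axis=1)               # node degrees
    two_m = k.sum()                 # 2 * number of edges
    B = A - np.outer(k, k) / two_m  # modularity matrix
    vals, vecs = np.linalg.eigh(B)  # symmetric eigendecomposition
    leading = vecs[:, np.argmax(vals)]
    return (leading >= 0).astype(int)
```

Applied recursively, such splits recover the modules whose features are then matched against intact blobs, as described above.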
Super Resolution Network Analysis Defines the Molecular Architecture of Caveolae and Caveolin-1 Scaffolds
Quantitative approaches to analyze the large data sets generated by single molecule localization super-resolution microscopy (SMLM) are limited. We developed a computational pipeline and applied it to analyzing 3D point clouds of SMLM localizations (event lists) of the caveolar coat protein, caveolin-1 (Cav1), in prostate cancer cells differentially expressing CAVIN1 (also known as PTRF), which is also required for caveolae formation. High degree (strongly-interacting) points were removed by an iterative blink merging algorithm and Cav1 network properties were compared with randomly generated networks to retain a sub-network of geometric structures (or blobs). Machine-learning based classification extracted 28 quantitative features describing the size, shape, topology and network characteristics of ∼80,000 blobs. Unsupervised clustering identified small S1A scaffolds corresponding to SDS-resistant Cav1 oligomers, larger, as-yet-undescribed hemispherical S2 scaffolds and, only in CAVIN1-expressing cells, spherical, hollow caveolae. Multi-threshold modularity analysis suggests that S1A scaffolds interact to form larger scaffolds and that S1A dimers group together, in the presence of CAVIN1, to form the caveolae coat.
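The blink-merging step, collapsing repeat localizations ("blinks") of a single fluorophore into one point, can be approximated by a greedy single-pass merge; the radius and the single-pass strategy are illustrative assumptions, not the paper's iterative algorithm:

```python
import numpy as np

def merge_blinks(points: np.ndarray, radius: float) -> np.ndarray:
    """Greedy merge of SMLM localizations closer than `radius`,
    replacing each group of repeat blinks with its centroid.
    A simplified, single-pass stand-in for iterative blink merging.
    """
    remaining = list(range(len(points)))
    merged = []
    while remaining:
        i = remaining.pop(0)
        group = [i]
        keep = []
        # Collect all still-unmerged points within `radius` of point i.
        for j in remaining:
            if np.linalg.norm(points[i] - points[j]) <= radius:
                group.append(j)
            else:
                keep.append(j)
        remaining = keep
        merged.append(points[group].mean(axis=0))
    return np.array(merged)
```

After merging, per-blob feature vectors (size, shape, topology, network descriptors) feed the unsupervised clustering that separates scaffolds from caveolae.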
Comparative Analysis of SPLICS and MCS-DETECT for Detecting Mitochondria-ER Contact Sites (MERCs)
Detection of mitochondria-ER contacts (MERCs) from diffraction limited confocal images commonly uses fluorescence colocalization analysis of mitochondria and endoplasmic reticulum (ER) as well as split fluorescent probes, such as the split-GFP-based contact site sensor (SPLICS). However, inter-organelle distances (∼10–60 nm) for MERCs are lower than the 200–250 nm diffraction limited resolution obtained by standard confocal microscopy. Super-resolution microscopy with 3D volume analysis provides a two-fold resolution improvement (∼120 nm XY; 250 nm Z), which remains unable to resolve MERCs. MCS-DETECT, a membrane contact site (MCS) detection algorithm, faithfully detects elongated ribosome-studded riboMERCs when applied to 3D STED super-resolution image volumes. Here, we expressed the SPLICSL reporter in HeLa cells co-transfected with the ER reporter RFP-KDEL and labeled fixed cells with antibodies to RFP and the mitochondrial protein TOM20. MCS-DETECT analysis of 3D STED volumes was compared to contacts determined by co-occurrence colocalization analysis of mitochondria and ER or the SPLICSL probe. Percent mitochondria coverage by MCS-DETECT derived contacts was significantly smaller than those obtained for colocalization analysis or SPLICSL, and more closely matched contact site metrics obtained by 3D electron microscopy. Further, STED analysis localized a subset of the SPLICSL label to mitochondria, with some SPLICSL puncta observed to be completely enveloped by mitochondria in 3D views. These data suggest that MCS-DETECT reports on a limited set of MERCs that more closely corresponds to those observed by EM.
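The co-occurrence colocalization baseline that MCS-DETECT is compared against can be sketched as a Manders-style coefficient. The binary-mask form below is a simplification: real analyses threshold the raw intensity channels first, and the paper's exact coverage metric may differ:

```python
import numpy as np

def manders_m1(mito: np.ndarray, er: np.ndarray) -> float:
    """Manders-style co-occurrence coefficient M1: fraction of the
    mitochondrial mask that overlaps the (above-threshold) ER mask.
    Masks are binary here; intensity-weighted variants also exist."""
    mito, er = mito.astype(bool), er.astype(bool)
    total = mito.sum()
    return float(np.logical_and(mito, er).sum() / total) if total else 0.0
```

Comparing this per-cell fraction against the mitochondrial coverage reported by a contact-detection algorithm is the kind of head-to-head the abstract describes.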