22,849 results for "confocal microscopy"
Machine Learning Based Prediction of Squamous Cell Carcinoma in Ex Vivo Confocal Laser Scanning Microscopy
Image classification with convolutional neural networks (CNN) offers unprecedented opportunities for medical imaging. Regulatory agencies in the USA and Europe have already cleared numerous deep learning/machine learning based medical devices and algorithms. While the field of radiology is at the forefront of the artificial intelligence (AI) revolution, conventional pathology, which commonly relies on examination of tissue samples on a glass slide, is falling behind in leveraging this technology. On the other hand, ex vivo confocal laser scanning microscopy (ex vivo CLSM), owing to its digital workflow features, has a high potential to benefit from integrating AI tools into the assessment and decision-making process. The aim of this work was to explore a preliminary application of CNNs to digitally stained ex vivo CLSM images of cutaneous squamous cell carcinoma (cSCC) for automated detection of tumor tissue. Thirty-four freshly excised tissue samples were prospectively collected and examined immediately after resection. After the histologically confirmed ex vivo CLSM diagnosis, the tumor tissue was annotated by experts for segmentation in order to train a MobileNet CNN. The model was then trained and evaluated using cross-validation. The overall sensitivity and specificity of the deep neural network for detecting cSCC and tumor-free areas on ex vivo CLSM slides compared to expert evaluation were 0.76 and 0.91, respectively. The area under the ROC curve was 0.90 and the area under the precision-recall curve was 0.85. The results demonstrate the high potential of deep learning models to detect cSCC regions on digitally stained ex vivo CLSM slides and to distinguish them from tumor-free skin.
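The sensitivity and specificity reported in this abstract follow directly from a confusion matrix. A minimal sketch of the two metrics, using illustrative counts (not the study's actual per-slide data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only, chosen to reproduce the quoted 0.76 / 0.91:
sens, spec = sensitivity_specificity(tp=76, fn=24, tn=91, fp=9)
print(sens, spec)  # 0.76 0.91
```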
Multiview confocal super-resolution microscopy
Confocal microscopy remains a major workhorse in biomedical optical microscopy owing to its reliability and flexibility in imaging various samples, but suffers from substantial point spread function anisotropy, diffraction-limited resolution, depth-dependent degradation in scattering samples and volumetric bleaching. Here we address these problems, enhancing confocal microscopy performance from the sub-micrometre to millimetre spatial scale and the millisecond to hour temporal scale, improving both lateral and axial resolution more than twofold while simultaneously reducing phototoxicity. We achieve these gains using an integrated, four-pronged approach: (1) developing compact line scanners that enable sensitive, rapid, diffraction-limited imaging over large areas; (2) combining line-scanning with multiview imaging, developing reconstruction algorithms that improve resolution isotropy and recover signal otherwise lost to scattering; (3) adapting techniques from structured illumination microscopy, achieving super-resolution imaging in densely labelled, thick samples; (4) synergizing deep learning with these advances, further improving imaging speed, resolution and duration. We demonstrate these capabilities on more than 20 distinct fixed and live samples, including protein distributions in single cells; nuclei and developing neurons in Caenorhabditis elegans embryos, larvae and adults; myoblasts in imaginal disks of Drosophila wings; and mouse renal, oesophageal, cardiac and brain tissues. A combination of multiview imaging, structured illumination, reconstruction algorithms and deep-learning predictions realizes spatial- and temporal-resolution improvements in fluorescence microscopy to produce super-resolution images from diffraction-limited input images.
Confocal Microscopy for Diagnosis and Management of Cutaneous Malignancies: Clinical Impacts and Innovation
Cutaneous malignancies are common malignancies worldwide, with rising incidence. Most skin cancers, including melanoma, can be cured if diagnosed correctly at an early stage. Thus, millions of biopsies are performed annually, posing a major economic burden. Non-invasive skin imaging techniques can aid in early diagnosis and save unnecessary benign biopsies. In this review article, we will discuss in vivo and ex vivo confocal microscopy (CM) techniques that are currently being utilized in dermatology clinics for skin cancer diagnosis. We will discuss their current applications and clinical impact. Additionally, we will provide a comprehensive review of the advances in the field of CM, including multi-modal approaches, the integration of fluorescent targeted dyes, and the role of artificial intelligence for improved diagnosis and management.
Line-Field Confocal Optical Coherence Tomography: A New Tool for the Differentiation between Nevi and Melanomas?
The clinical differentiation between a nevus and a melanoma remains challenging in some cases. Line-field confocal optical coherence tomography (LC-OCT) is a new tool that aims to change that. The aim of the study was to evaluate LC-OCT for the discrimination between nevi and melanomas. A total of 84 melanocytic lesions were examined with LC-OCT and 36 were also imaged with reflectance confocal microscopy (RCM). The observers recorded the diagnoses and the presence or absence of the 18 most common imaging parameters for melanocytic lesions, nevi, and melanomas in the LC-OCT images. Their confidence in diagnosis and the image quality of LC-OCT and RCM were evaluated. The most useful criteria and the sensitivity and specificity of LC-OCT vs. RCM vs. histology for differentiating a (dysplastic) nevus from a melanoma were analyzed. Good image quality correlated with better diagnostic performance (Spearman correlation: 0.4). LC-OCT had 93% sensitivity and 100% specificity compared to RCM (93% sensitivity, 95% specificity) for diagnosing a melanoma (vs. all types of nevi). No difference in performance between RCM and LC-OCT was observed (McNemar’s p value = 1). Both devices falsely diagnosed dysplastic nevi as non-dysplastic (43% sensitivity for dysplastic nevus diagnosis). The most significant criteria for diagnosing a melanoma with LC-OCT were irregular honeycombed patterns (92% occurrence rate; odds ratio (OR) 31.7), the presence of pagetoid spread (89% occurrence rate; OR 23.6) and the absence of dermal nests (23% occurrence rate; OR 0.02). In conclusion, LC-OCT is useful for the discrimination between melanomas and nevi.
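The per-criterion odds ratios quoted in this abstract come from 2x2 contingency tables (criterion present/absent in melanomas vs. nevi). A minimal sketch of the arithmetic, with hypothetical counts used purely for illustration:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = criterion present in melanomas, b = absent in melanomas,
    c = criterion present in nevi,      d = absent in nevi.
    OR = (a * d) / (b * c)."""
    return (a * d) / (b * c)

# Hypothetical counts chosen only to show the computation:
print(round(odds_ratio(33, 3, 10, 38), 1))
```

An OR above 1 means the criterion favours melanoma; an OR near 0 (like the 0.02 for dermal nests above) means its presence argues for a nevus.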
An artificial intelligence-based deep learning algorithm for the diagnosis of diabetic neuropathy using corneal confocal microscopy: a development and validation study
Aims/hypothesis: Corneal confocal microscopy is a rapid, non-invasive ophthalmic imaging technique that identifies peripheral and central neurodegenerative disease. Quantification of corneal sub-basal nerve plexus morphology, however, requires either time-consuming manual annotation or a less-sensitive automated image analysis approach. We aimed to develop and validate an artificial intelligence-based deep learning algorithm for the quantification of nerve fibre properties relevant to the diagnosis of diabetic neuropathy and to compare it with a validated automated analysis program, ACCMetrics.
Methods: Our deep learning algorithm, which employs a convolutional neural network with data augmentation, was developed for the automated quantification of the corneal sub-basal nerve plexus for the diagnosis of diabetic neuropathy. The algorithm was trained using a high-end graphics processing unit on 1698 corneal confocal microscopy images; for external validation, it was further tested on 2137 images. The algorithm was developed to identify total nerve fibre length, branch points, tail points, number and length of nerve segments, and fractal numbers. Sensitivity analyses were undertaken to determine the AUC for ACCMetrics and our algorithm for the diagnosis of diabetic neuropathy.
Results: The intraclass correlation coefficients for our algorithm were superior to those for ACCMetrics for total corneal nerve fibre length (0.933 vs 0.825), mean length per segment (0.656 vs 0.325), number of branch points (0.891 vs 0.570), number of tail points (0.623 vs 0.257), number of nerve segments (0.878 vs 0.504) and fractals (0.927 vs 0.758). In addition, our proposed algorithm achieved an AUC of 0.83, specificity of 0.87 and sensitivity of 0.68 for the classification of participants without (n = 90) and with (n = 132) neuropathy (defined by the Toronto criteria).
Conclusions/interpretation: These results demonstrate that our deep learning algorithm provides rapid and excellent localisation performance for the quantification of corneal nerve biomarkers. This model has potential for adoption into clinical screening programmes for diabetic neuropathy.
Data availability: The publicly shared cornea nerve dataset (dataset 1) is available at http://bioimlab.dei.unipd.it/Corneal%20Nerve%20Tortuosity%20Data%20Set.htm and http://bioimlab.dei.unipd.it/Corneal%20Nerve%20Data%20Set.htm.
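The AUC figures quoted in abstracts like this one can be estimated without drawing an explicit ROC curve, via the rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen case with neuropathy scores higher than a randomly chosen case without. A small sketch with toy scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outranks a negative
    one (Mann-Whitney U estimate); ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores for illustration only:
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.89
```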
Tutorial: guidance for quantitative confocal microscopy
When used appropriately, a confocal fluorescence microscope is an excellent tool for making quantitative measurements in cells and tissues. The confocal microscope’s ability to block out-of-focus light and thereby perform optical sectioning through a specimen allows the researcher to quantify fluorescence with very high spatial precision. However, generating meaningful data using confocal microscopy requires careful planning and a thorough understanding of the technique. In this tutorial, the researcher is guided through all aspects of acquiring quantitative confocal microscopy images, including optimizing sample preparation for fixed and live cells, choosing the most suitable microscope for a given application and configuring the microscope parameters. Suggestions are offered for planning unbiased and rigorous confocal microscope experiments. Common pitfalls such as photobleaching and cross-talk are addressed, as well as several troubling instrumentation problems that may prevent the acquisition of quantitative data. Finally, guidelines for analyzing and presenting confocal images in a way that maintains the quantitative nature of the data are presented, and statistical analysis is discussed. A visual summary of this tutorial is available as a poster ( https://doi.org/10.1038/s41596-020-0307-7 ). This tutorial and the accompanying poster provide a guide for performing quantitative fluorescence imaging using confocal microscopy, with advice and troubleshooting information from sample preparation and microscope setup to data analysis and statistics.
Corneal confocal microscopy is a rapid reproducible ophthalmic technique for quantifying corneal nerve abnormalities
To assess the effect of applying a protocol for image selection and the number of images required for adequate quantification of corneal nerve pathology using in vivo corneal confocal microscopy (IVCCM). IVCCM was performed in 35 participants by a single examiner. For each participant, 4 observers used a standardized protocol to select 6 central corneal nerve images to assess inter-observer variability. Furthermore, images were selected by a single observer on two occasions to assess intra-observer variability, and the effect of sample size was assessed by comparing 6 with 12 images. Corneal nerve fiber density (CNFD), branch density (CNBD) and length (CNFL) were quantified using fully automated software. The data were compared using the intraclass correlation coefficient (ICC) and Bland-Altman agreement plots for all experiments. The ICC values for CNFD, CNBD and CNFL were 0.93 (P<0.0001), 0.96 (P<0.0001) and 0.95 (P<0.0001) for inter-observer variability and 0.95 (P<0.0001), 0.97 (P<0.001) and 0.97 (P<0.0001) for intra-observer variability. For sample size variability, ICC values were 0.94 (P<0.0001), 0.95 (P<0.0001), and 0.96 (P<0.0001) for CNFD, CNBD and CNFL. Bland-Altman plots showed excellent agreement for all parameters. This study shows that implementing a standardized protocol to select IVCCM images results in high intra- and inter-observer reproducibility for all corneal nerve parameters and that 6 images are adequate for analysis. IVCCM could therefore be deployed in large multicenter clinical trials with confidence.
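The Bland-Altman agreement analysis used in this study reports the mean inter-observer difference (bias) together with 95% limits of agreement (mean difference ± 1.96 SD of the differences). A minimal sketch with hypothetical CNFL measurements (the values below are invented for illustration only):

```python
from statistics import mean, stdev

def bland_altman_limits(x, y):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

# Hypothetical CNFL readings (mm/mm^2) from two observers:
obs1 = [14.2, 15.1, 13.8, 16.0, 14.9]
obs2 = [14.0, 15.3, 13.9, 15.8, 14.7]
lower, bias, upper = bland_altman_limits(obs1, obs2)
print(lower, bias, upper)
```

In the full plot, each subject's difference is drawn against the pair mean; "excellent agreement" means the differences scatter tightly around a bias near zero.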
Multiplexed 3D super-resolution imaging of whole cells using spinning disk confocal microscopy and DNA-PAINT
Single-molecule localization microscopy (SMLM) can visualize biological targets on the nanoscale, but complex hardware is required to perform SMLM in thick samples. Here, we combine 3D DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) with spinning disk confocal (SDC) hardware to overcome this limitation. We assay our achievable resolution with two- and three-dimensional DNA origami structures and demonstrate the general applicability by imaging a large variety of cellular targets including proteins, DNA and RNA deep in cells. We achieve multiplexed 3D super-resolution imaging at sample depths up to ~10 µm with up to 20 nm planar and 80 nm axial resolution, now enabling DNA-based super-resolution microscopy in whole cells using standard instrumentation. Existing methods for nanoscale visualization of biological targets in thick samples require complex hardware. Here, the authors combine the standard spinning disk confocal (SDC) microscopy with DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) to image proteins, DNA and RNA deep in cells.
Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning
We demonstrate that a deep neural network can be trained to virtually refocus a two-dimensional fluorescence image onto user-defined three-dimensional (3D) surfaces within the sample. Using this method, termed Deep-Z, we imaged the neuronal activity of a Caenorhabditis elegans worm in 3D using a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field by 20-fold without any axial scanning, additional hardware or a trade-off of imaging resolution and speed. Furthermore, we demonstrate that this approach can correct for sample drift, tilt and other aberrations, all digitally performed after the acquisition of a single fluorescence image. This framework also cross-connects different imaging modalities to each other, enabling 3D refocusing of a single wide-field fluorescence image to match confocal microscopy images acquired at different sample planes. Deep-Z has the potential to improve volumetric imaging speed while reducing challenges relating to sample drift, aberration and defocusing that are associated with standard 3D fluorescence microscopy.
Cargo sorting zones in the trans-Golgi network visualized by super-resolution confocal live imaging microscopy in plants
The trans-Golgi network (TGN) has been known as a key platform for sorting and transporting proteins to their final destinations in post-Golgi membrane trafficking. However, how the TGN sorts proteins with different destinies still remains elusive. Here, we examined 3D localization and 4D dynamics of TGN-localized proteins of Arabidopsis thaliana that are involved in either secretory or vacuolar trafficking from the TGN, by a multicolor high-speed and high-resolution spinning-disk confocal microscopy approach that we developed. We demonstrate that TGN-localized proteins exhibit spatially and temporally distinct distributions. VAMP721 (R-SNARE), AP-1 (adaptor protein complex 1), and clathrin, which are involved in secretory trafficking, compose an exclusive subregion, whereas VAMP727 (R-SNARE) and AP-4, involved in vacuolar trafficking, compose another subregion on the same TGN. Based on these findings, we propose that the single TGN has at least two subregions, or “zones”, responsible for distinct cargo sorting: the secretory-trafficking zone and the vacuolar-trafficking zone. The trans-Golgi network (TGN) serves as a platform to sort and transport proteins to their final destinations. Here the authors show that the TGN of Arabidopsis consists of spatially and temporally distinct subregions and propose that these zones may sort cargo to different destinations.