Search Results

113,261 results for "COMPUTERS / Image Processing."
Amygdalar nuclei and hippocampal subfields on MRI: Test-retest reliability of automated volumetry across different MRI sites and vendors
The amygdala and the hippocampus are two limbic structures that play a critical role in cognition and behavior; however, manual segmentation of these structures and of their smaller nuclei/subfields in multicenter datasets is time consuming and difficult due to the low contrast of standard MRI. Here, we assessed the reliability of the automated segmentation of amygdalar nuclei and hippocampal subfields across sites and vendors using FreeSurfer in two independent cohorts of older and younger healthy adults. Sixty-five healthy older (cohort 1) and 68 younger subjects (cohort 2), from the PharmaCog and CoRR consortia, underwent repeated 3D-T1 MRI (interval 1–90 days). Segmentation was performed using FreeSurfer v6.0. Reliability was assessed using the volume reproducibility error (ε) and the spatial overlap coefficient (DICE) between test and retest sessions. Significant MRI site and vendor effects (p < .05) were found in a few subfields/nuclei for ε, while extensive effects were found for the DICE score of most subfields/nuclei. Reliability was strongly influenced by volume: ε correlated negatively and DICE positively with structure volume (absolute Spearman’s r > 0.43, p < 1.39E-36). In particular, volumes larger than 200 mm³ (for amygdalar nuclei) and 300 mm³ (for hippocampal subfields, except the molecular layer) had the best test-retest reproducibility (ε < 5% and DICE > 0.80). Our results support the use of volumetric measures of larger amygdalar nuclei and hippocampal subfields in multisite MRI studies. These measures could be useful for disease tracking and assessment of efficacy in drug trials.
• Differences in MRI site/vendor had a limited effect on volume reproducibility.
• Differences in MRI site/vendor had an extensive effect on spatial accuracy.
• Reliability is good for larger amygdalar and hippocampal structures.
• Automated volumetry is reliable in multicenter MRI studies.
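
Both reliability metrics are straightforward to compute from paired test/retest masks. A minimal sketch using standard definitions follows (numpy arrays as hypothetical inputs; the paper's exact formulation of ε, e.g. the denominator choice, may differ):

```python
import numpy as np

def volume_reproducibility_error(mask_test, mask_retest, voxel_volume_mm3=1.0):
    """Percent test-retest volume difference; one common definition of ε."""
    v1 = mask_test.sum() * voxel_volume_mm3
    v2 = mask_retest.sum() * voxel_volume_mm3
    return 100.0 * abs(v1 - v2) / ((v1 + v2) / 2.0)

def dice(mask_test, mask_retest):
    """Spatial overlap (DICE) between two binary segmentation masks."""
    intersection = np.logical_and(mask_test, mask_retest).sum()
    return 2.0 * intersection / (mask_test.sum() + mask_retest.sum())
```
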
A population-based phenome-wide association study of cardiac and aortic structure and function
Differences in cardiac and aortic structure and function are associated with cardiovascular diseases and a wide range of other types of disease. Here we analyzed cardiovascular magnetic resonance images from a population-based study, the UK Biobank, using an automated machine-learning-based analysis pipeline. We report a comprehensive range of structural and functional phenotypes for the heart and aorta across 26,893 participants, and explore how these phenotypes vary according to sex, age and major cardiovascular risk factors. We extended this analysis with a phenome-wide association study, in which we tested a wide range of the participants’ non-imaging phenotypes for association with the imaging phenotypes. We further explored the associations of imaging phenotypes with early-life factors, mental health and cognitive function using both observational analysis and Mendelian randomization. Our study illustrates how population-based cardiac and aortic imaging phenotypes can be used to better define cardiovascular disease risks as well as heart–brain health interactions, highlighting new opportunities for studying disease mechanisms and developing image-based biomarkers.
Using magnetic resonance images of the heart and aorta from 26,893 individuals in the UK Biobank, a phenome-wide association study associates cardiovascular imaging phenotypes with a wide range of demographic, lifestyle and clinical features.
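
A phenome-wide association scan of this kind reduces, at its core, to testing many imaging/non-imaging pairs with multiple-testing control. The sketch below uses plain Pearson correlation with Benjamini-Hochberg FDR; the study itself used covariate-adjusted models and Mendelian randomization, so treat this as an illustration only:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def phewas_scan(imaging, nonimaging, alpha=0.05):
    """Correlate every imaging phenotype (columns of `imaging`) with every
    non-imaging phenotype (columns of `nonimaging`) across subjects (rows),
    then control the false discovery rate over all tests."""
    rs, pvals = [], []
    for i in range(imaging.shape[1]):
        for j in range(nonimaging.shape[1]):
            r, p = stats.pearsonr(imaging[:, i], nonimaging[:, j])
            rs.append(r)
            pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return np.array(rs), p_adj, reject
```
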
What’s new and what’s next in diffusion MRI preprocessing
• This review covers diffusion MRI artifacts and preprocessing steps.
• Notable developments and new advances since the HCP are summarized.
• Practical considerations and future developments are discussed.
Diffusion MRI (dMRI) provides invaluable information for the study of tissue microstructure and brain connectivity, but suffers from a range of imaging artifacts that greatly challenge the analysis of results and their interpretability if not appropriately accounted for. This review will cover dMRI artifacts and preprocessing steps, some of which have not typically been considered in existing pipelines or reviews, or have only gained attention in recent years: brain/skull extraction, B-matrix incompatibilities with respect to the imaging data, signal drift, Gibbs ringing, noise distribution bias, denoising, between- and within-volume motion, eddy currents, outliers, susceptibility distortions, EPI Nyquist ghosts, gradient deviations, B1 bias fields, and spatial normalization. The focus will be on “what’s new” since the notable advances prior to and brought by the Human Connectome Project (HCP), as presented in the preceding issue on “Mapping the Connectome” in 2013. In addition to the development of novel strategies for dMRI preprocessing, exciting progress has been made in the availability of open-source tools and reproducible pipelines, databases and simulation tools for the evaluation of preprocessing steps, and automated quality control frameworks, amongst others. Finally, this review will discuss practical considerations and our view on “what’s next” in dMRI preprocessing.
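
Of the steps listed, signal drift correction is among the simplest to illustrate. The sketch below follows the common approach of fitting a low-order polynomial to the b=0 intensities over acquisition time and rescaling (e.g., Vos et al., 2017); exact implementations vary:

```python
import numpy as np

def correct_signal_drift(dwi, bvals, degree=2):
    """Global signal drift correction: model the temporal drift of the mean
    b=0 intensity with a polynomial and rescale every volume accordingly.
    dwi: 4D array (x, y, z, volume); bvals: per-volume b-values."""
    b0_idx = np.flatnonzero(np.asarray(bvals) == 0)   # b=0 volume indices
    b0_means = [dwi[..., i].mean() for i in b0_idx]   # mean intensity per b=0
    coeffs = np.polyfit(b0_idx, b0_means, degree)     # drift model over time
    drift = np.polyval(coeffs, np.arange(dwi.shape[-1]))
    return dwi * (drift[0] / drift)                   # normalize to first volume
```
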
Validation of a digital pathology system including remote review during the COVID-19 pandemic
Remote digital pathology allows healthcare systems to maintain pathology operations during public health emergencies. Existing Clinical Laboratory Improvement Amendments (CLIA) regulations require pathologists to electronically verify patient reports from a certified facility. During the pandemic of coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, this requirement potentially exposes pathologists, their colleagues, and household members to the risk of becoming infected. Relaxation of government enforcement of this regulation allows pathologists to review and report pathology specimens from a remote, non-CLIA-certified facility. The availability of digital pathology systems can facilitate remote microscopic diagnosis, although formal comprehensive (case-based) validation of remote digital diagnosis has not been reported. All glass slides representing routine clinical signout workload in surgical pathology subspecialties at Memorial Sloan Kettering Cancer Center were scanned on an Aperio GT450 at ×40 equivalent resolution (0.26 µm/pixel). Twelve pathologists from nine surgical pathology subspecialties remotely reviewed and reported complete pathology cases using a digital pathology system from a non-CLIA-certified facility through a secure connection. Whole slide images were integrated into the laboratory information system and launched in a custom, vendor-agnostic whole slide image viewer. Remote signouts utilized consumer-grade computers and monitors (monitor size, 13.3–42 in.; resolution, 1280 × 800–3840 × 2160 pixels) connecting to an institutional clinical workstation via a secure virtual private network. Pathologists subsequently reviewed all corresponding glass slides using a light microscope within the CLIA-certified department. Intraobserver concordance metrics included reporting elements of top-line diagnosis, margin status, lymphovascular and/or perineural invasion, pathology stage, and ancillary testing. The median whole slide image file size was 1.3 GB; scan time per slide averaged 90 s; and scanned tissue area averaged 612 mm². Signout sessions included a total of 108 cases, comprising 254 individual parts and 1196 slides. Major diagnostic equivalency was 100% between digital and glass slide diagnoses, and overall concordance was 98.8% (251/254). This study reports validation of primary diagnostic review and reporting of complete pathology cases from a remote site during a public health emergency. Our experience shows high (100%) intraobserver digital-to-glass-slide major diagnostic concordance when reporting from a remote site. This randomized, prospective study successfully validated remote use of a digital pathology system, including operational feasibility supporting remote review and reporting of pathology specimens, and evaluation of remote access performance and usability for remote signout.
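
The headline concordance figures are simple proportions over reported case parts. A minimal sketch, assuming each part's digital and glass-slide reporting elements are reduced to comparable records:

```python
def concordance_rate(digital_reports, glass_reports):
    """Fraction of case parts whose digital and glass-slide reports agree.
    Inputs are hypothetical per-part records (e.g., tuples of diagnosis,
    margin status, stage); the study scored several elements separately."""
    pairs = list(zip(digital_reports, glass_reports))
    return sum(d == g for d, g in pairs) / len(pairs)

# e.g., 251 concordant parts out of 254 -> 0.988 (the reported 98.8%)
```
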
Deep learning extended depth-of-field microscope for fast and slide-free histology
Microscopic evaluation of resected tissue plays a central role in the surgical management of cancer. Because optical microscopes have a limited depth-of-field (DOF), resected tissue is either frozen or preserved with chemical fixatives, sliced into thin sections placed on microscope slides, stained, and imaged to determine whether surgical margins are free of tumor cells—a costly and time- and labor-intensive procedure. Here, we introduce a deep-learning extended DOF (DeepDOF) microscope to quickly image large areas of freshly resected tissue to provide histologic-quality images of surgical margins without physical sectioning. The DeepDOF microscope consists of a conventional fluorescence microscope with the simple addition of an inexpensive (less than $10) phase mask inserted in the pupil plane to encode the light field and enhance the depth-invariance of the point-spread function. When used with a jointly optimized image-reconstruction algorithm, diffraction-limited optical performance to resolve subcellular features can be maintained while significantly extending the DOF (200 μm). Data from resected oral surgical specimens show that the DeepDOF microscope can consistently visualize nuclear morphology and other important diagnostic features across highly irregular resected tissue surfaces without serial refocusing. With the capability to quickly scan intact samples with subcellular detail, the DeepDOF microscope can improve tissue sampling during intraoperative tumor-margin assessment, while offering an affordable tool to provide histological information from resected tissue specimens in resource-limited settings.
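
The reconstruction step can be approximated, in spirit, by classical deconvolution with the (approximately depth-invariant) point-spread function produced by the phase mask. The sketch below uses a Wiener filter as a simplified, hypothetical stand-in for DeepDOF's jointly learned reconstruction network:

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_power=1e-3):
    """Recover a sharp image from a phase-mask-encoded capture via Wiener
    deconvolution, given an estimate of the depth-invariant PSF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)   # PSF -> OTF
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + noise_power)  # regularized inverse
    return np.real(np.fft.ifft2(np.fft.fft2(image) * wiener))
```
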
BEaST: Brain extraction based on nonlocal segmentation technique
Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the difference in intensity characteristics due to imaging sequences make the development of a general-purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) designed to produce consistent and accurate brain extraction. This method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained when performing leave-one-out cross-validation, selecting only 20 priors from the library. Validation using the online Segmentation Validation Engine resulted in a top-ranking position with a mean Dice coefficient of 0.9781±0.0047. Robustness of BEaST is demonstrated on all baseline ADNI data, resulting in a very low failure rate. The segmentation accuracy of the method is better than that of two widely used publicly available methods and recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach, while being 40 times faster and requiring a much smaller library of priors.
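
The core of the nonlocal approach is patch-based label fusion: each voxel is labeled by a similarity-weighted vote over patches drawn from the prior library. A minimal single-voxel sketch (ignoring BEaST's multi-resolution framework and library preselection):

```python
import numpy as np

def nonlocal_vote(patch, library_patches, library_labels, h=0.1):
    """Label one voxel from its surrounding intensity patch.
    patch: flattened patch around the voxel; library_patches: (n, patch_len)
    patches from the priors; library_labels: (n,) 0/1 brain labels at the
    corresponding voxels; h: smoothing bandwidth."""
    d2 = np.sum((library_patches - patch) ** 2, axis=1)       # patch distances
    w = np.exp(-d2 / (h ** 2))                                # similarity weights
    return float(np.dot(w, library_labels) / w.sum() > 0.5)   # weighted vote
```
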
Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images
The human cerebellum plays an essential role in motor control, is involved in cognitive function (e.g., attention, working memory, and language), and helps to regulate emotional responses. Quantitative in-vivo assessment of the cerebellum is important in the study of several neurological diseases including cerebellar ataxia, autism, and schizophrenia. Different structural subdivisions of the cerebellum have been shown to correlate with differing pathologies. To further understand these pathologies, it is helpful to automatically parcellate the cerebellum at the highest fidelity possible. In this paper, we coordinated with colleagues around the world to evaluate automated cerebellum parcellation algorithms on two clinical cohorts, showing that the cerebellum can be parcellated to a high accuracy by newer methods. We characterize these various methods at four hierarchical levels: coarse (i.e., whole cerebellum and gross structures), lobe, subdivisions of the vermis, and the lobules. Due to the number of labels, the hierarchy of labels, the number of algorithms, and the two cohorts, we restricted our analyses to the Dice measure of overlap. Under these conditions, machine-learning-based methods provide a collection of strategies that are efficient and deliver parcellations of a high standard across both cohorts, surpassing previous work in the area. In conjunction with a rank-sum computation, we identified an overall winning method.
• First paper to evaluate the state-of-the-art in cerebellum parcellation.
• Presents results on both adult and pediatric cohorts.
• Adult cohort contains healthy controls and patients with either symptoms of cerebellar dysfunction or SCA6.
• Pediatric cohort contains healthy controls and patients with ADHD or autism.
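
Under this evaluation scheme, each method yields one Dice score per label and case, and the overall winner falls out of a rank computation across those scores. A minimal sketch of both pieces (array shapes are assumptions):

```python
import numpy as np

def dice_per_label(seg, truth, labels):
    """Dice overlap for each parcellation label in two labeled volumes."""
    scores = {}
    for lab in labels:
        a, b = seg == lab, truth == lab
        scores[lab] = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    return scores

def rank_methods(dice_table):
    """Order methods by summed ranks of their Dice scores.
    dice_table: (n_methods, n_cases) array; returns method indices, best
    first. A sketch of the rank-sum idea, not the paper's exact procedure."""
    ranks = dice_table.argsort(axis=0).argsort(axis=0)  # 0 = worst per case
    return np.argsort(ranks.sum(axis=1))[::-1]
```
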
Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy
The prediction of anatomical structures within the surgical field by artificial intelligence (AI) is expected to support surgeons’ experience and cognitive skills. We aimed to develop a deep-learning model to automatically segment loose connective tissue fibers (LCTFs) that define a safe dissection plane. The annotation was performed on video frames capturing a robot-assisted gastrectomy performed by trained surgeons. A deep-learning model based on U-net was developed to output segmentation results. Twenty randomly sampled frames were provided to evaluate model performance, by comparing Recall and F1/Dice scores against a ground truth and through a two-item questionnaire on sensitivity and misrecognition completed by 20 surgeons. The model produced high Recall scores (mean 0.606, maximum 0.861). Mean F1/Dice scores reached 0.549 (range 0.335–0.691), showing acceptable spatial overlap of the objects. Surgeon evaluators gave a mean sensitivity score of 3.52 (with 88.0% assigning the highest score of 4; range 2.45–3.95). The mean misrecognition score was low, at 0.14 (range 0–0.7), indicating very few acknowledged over-detection failures. Thus, AI can be trained to predict fine, difficult-to-discern anatomical structures at a level convincing to expert surgeons. This technology may help reduce adverse events by determining safe dissection planes.
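
For binary masks, the reported Recall and F1/Dice scores have standard pixelwise definitions, sketched below (per-frame averaging and any exclusions follow the paper, not this snippet):

```python
import numpy as np

def recall_and_f1(pred, truth):
    """Pixelwise Recall and F1/Dice between predicted and ground-truth
    LCTF masks (boolean arrays of the same shape)."""
    tp = np.logical_and(pred, truth).sum()       # true-positive pixels
    recall = tp / truth.sum()
    f1 = 2.0 * tp / (pred.sum() + truth.sum())   # equals Dice for binary masks
    return recall, f1
```
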
3DeeCellTracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images
Despite recent improvements in microscope technologies, segmenting and tracking cells in three-dimensional time-lapse images (3D + T images) to extract their dynamic positions and activities remains a considerable bottleneck in the field. We developed a deep learning-based software pipeline, 3DeeCellTracker, by integrating multiple existing and new techniques including deep learning for tracking. With only one volume of training data, one initial correction, and a few parameter changes, 3DeeCellTracker successfully segmented and tracked ~100 cells in the brains of both semi-immobilized and ‘straightened’ freely moving worms, in a naturally beating zebrafish heart, and ~1000 cells in a 3D cultured tumor spheroid. While these datasets were imaged with highly divergent optical systems, our method tracked 90–100% of the cells in most cases, which is comparable or superior to previous results. These results suggest that 3DeeCellTracker could pave the way for revealing dynamic cell activities in image datasets that have been difficult to analyze.
Microscopes have been used to decrypt the tiny details of life since the 17th century. Now, the advent of 3D microscopy allows scientists to build up detailed pictures of living cells and tissues. In that effort, automation is becoming increasingly important so that scientists can analyze the resulting images and understand how bodies grow, heal and respond to changes such as drug therapies. In particular, algorithms can help to spot cells in the picture (called cell segmentation), and then to follow these cells over time across multiple images (known as cell tracking). However, performing these analyses on 3D images over a given period has been quite challenging. In addition, the algorithms that have already been created are often not user-friendly, and they can only be applied to a specific dataset gathered through a particular scientific method. As a response, Wen et al. developed a new program called 3DeeCellTracker, which runs on a desktop computer and uses a type of artificial intelligence known as deep learning to produce consistent results. Crucially, 3DeeCellTracker can be used to analyze various types of images taken using different types of cutting-edge microscope systems. And indeed, the algorithm was then harnessed to track the activity of nerve cells in moving microscopic worms, of beating heart cells in a young small fish, and of cancer cells grown in the lab. This versatile tool can now be used across biology, medical research and drug development to help monitor cell activities.
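
Tracking reduces to linking each segmented cell in one volume to its counterpart in the next. As a much-simplified, hypothetical stand-in for 3DeeCellTracker's deep-learning tracking, the sketch below links 3D centroids between consecutive volumes by solving the assignment problem on pairwise distances:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_cells(prev_centroids, curr_centroids):
    """Match cells across consecutive volumes by minimizing total
    centroid-to-centroid distance (Hungarian algorithm).
    Inputs: (n, 3) and (m, 3) arrays of 3D centroids."""
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))  # pairs of (index at t, index at t+1)
```
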
Enhanced Tooth Region Detection Using Pretrained Deep Learning Models
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, a patient’s panoramic radiographic or cone beam computed tomography (CBCT) images are used in implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep-learning-based model that detects the position of missing teeth in a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the dataset was randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without applying the segmentation technique. For the normal-teeth class, the precision of the proposed pretrained DL models was above 0.90. Moreover, the experimental results showed the superiority of DenseNet169, with a precision of 0.98. The other models, MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet, obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and classification of missing tooth regions with an accuracy of 89%. As a result, the use of this model may represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
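
The study's setup is standard transfer learning: a pretrained backbone with a new classification head. A minimal sketch with Keras, using DenseNet169 (the best performer above); input size, class count, and training details are assumptions, not the paper's exact configuration:

```python
import tensorflow as tf

# Pretrained DenseNet169 backbone, frozen for initial training.
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

# New head: global pooling + softmax over the two classes
# (normal teeth vs. missing-tooth region).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=[tf.keras.metrics.Precision(name="precision")])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # 70/20/10 split
```
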