22,721 result(s) for "Image Processing, Computer-Assisted - methods"
Convolutional neural networks for medical image processing applications
"With the development of technology, living standards rise and people's expectations increase. This is felt especially strongly in the medical field. The use of medical devices to protect human health is rapidly increasing, and it is very important to quickly evaluate the images obtained from these medical imaging devices. For this purpose, artificial intelligence (AI) methods are used. While hand-crafted methods were preferred in the past, more advanced methods are preferred today, and CNN architectures are among the most effective AI methods. This book contains applications of CNN methods to medical problems. The content of the book, in which different CNN methods are applied to various medical image processing problems, is quite extensive. Readers will be able to comprehensively analyze the effects of the CNN methods presented in the book on medical applications"-- Provided by publisher.
Amygdalar nuclei and hippocampal subfields on MRI: Test-retest reliability of automated volumetry across different MRI sites and vendors
The amygdala and the hippocampus are two limbic structures that play a critical role in cognition and behavior; however, manual segmentation of these structures and of their smaller nuclei/subfields in multicenter datasets is time-consuming and difficult due to the low contrast of standard MRI. Here, we assessed the reliability of the automated segmentation of amygdalar nuclei and hippocampal subfields across sites and vendors using FreeSurfer in two independent cohorts of older and younger healthy adults. Sixty-five healthy older (cohort 1) and 68 younger subjects (cohort 2), from the PharmaCog and CoRR consortia, underwent repeated 3D-T1 MRI (interval 1–90 days). Segmentation was performed using FreeSurfer v6.0. Reliability was assessed using the volume reproducibility error (ε) and the spatial overlap coefficient (DICE) between the test and retest sessions. Significant MRI site and vendor effects (p < .05) were found in a few subfields/nuclei for ε, while extensive effects were found for the DICE score of most subfields/nuclei. Reliability was strongly influenced by volume, as ε correlated negatively and DICE correlated positively with the volume of structures (absolute value of Spearman's r correlations > 0.43, p < 1.39E-36). In particular, volumes larger than 200 mm³ (for amygdalar nuclei) and 300 mm³ (for hippocampal subfields, except for the molecular layer) had the best test-retest reproducibility (ε < 5% and DICE > 0.80). Our results support the use of volumetric measures of larger amygdalar nuclei and hippocampal subfields in multisite MRI studies. These measures could be useful for disease tracking and assessment of efficacy in drug trials.
• Differences in MRI site/vendor had a limited effect on volume reproducibility.
• Differences in MRI site/vendor had an extensive effect on spatial accuracy.
• Reliability is good for larger amygdalar and hippocampal structures.
• Automated volumetry is reliable in multicenter MRI studies.
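The two reliability metrics above can be sketched in a few lines. This is an illustrative implementation on toy binary masks, assuming ε is the absolute test–retest volume difference as a percentage of the mean volume (the study's exact formula may differ):

```python
import numpy as np

def reproducibility_error(vol_test, vol_retest):
    """Percent volume reproducibility error (epsilon): absolute
    test-retest difference relative to the mean of the two volumes."""
    return 100.0 * abs(vol_test - vol_retest) / ((vol_test + vol_retest) / 2.0)

def dice(mask_a, mask_b):
    """Spatial overlap (DICE) between two binary segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy example: two equal-volume segmentations, one shifted by a column
test = np.zeros((10, 10), bool); test[2:8, 2:8] = True       # 36 voxels
retest = np.zeros((10, 10), bool); retest[2:8, 3:9] = True   # 36 voxels
print(reproducibility_error(test.sum(), retest.sum()))  # 0.0 (same volume)
print(round(dice(test, retest), 3))                     # 0.833
```

Note that the two metrics capture different failures: a shifted but equal-sized segmentation has perfect volume reproducibility yet imperfect spatial overlap, which matches the abstract's finding that site/vendor effects were more extensive for DICE than for ε.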
A population-based phenome-wide association study of cardiac and aortic structure and function
Differences in cardiac and aortic structure and function are associated with cardiovascular diseases and a wide range of other types of disease. Here we analyzed cardiovascular magnetic resonance images from a population-based study, the UK Biobank, using an automated machine-learning-based analysis pipeline. We report a comprehensive range of structural and functional phenotypes for the heart and aorta across 26,893 participants, and explore how these phenotypes vary according to sex, age and major cardiovascular risk factors. We extended this analysis with a phenome-wide association study, in which we tested for correlations of a wide range of non-imaging phenotypes of the participants with imaging phenotypes. We further explored the associations of imaging phenotypes with early-life factors, mental health and cognitive function using both observational analysis and Mendelian randomization. Our study illustrates how population-based cardiac and aortic imaging phenotypes can be used to better define cardiovascular disease risks as well as heart–brain health interactions, highlighting new opportunities for studying disease mechanisms and developing image-based biomarkers. Using magnetic resonance images of the heart and aorta from 26,893 individuals in the UK Biobank, a phenome-wide association study associates cardiovascular imaging phenotypes with a wide range of demographic, lifestyle and clinical features.
Data mining in biomedical imaging, signaling, and systems
"Data mining has rapidly emerged as an enabling, robust, and scalable technique to analyze data for novel patterns, trends, anomalies, structures, and features that can be employed for a variety of biomedical and clinical domains. Approaching the techniques and challenges of image mining from a multidisciplinary perspective, this book presents data mining techniques, methodologies, algorithms, and strategies to analyze biomedical signals and images. Written by experts, the text addresses data mining paradigms for the development of biomedical systems. It also includes special coverage of knowledge discovery in mammograms and emphasizes both the diagnostic and therapeutic fields of eye imaging"--Provided by publisher.
Fiji: an open-source platform for biological-image analysis
Presented is an overview of the image-analysis software platform Fiji, a distribution of ImageJ that updates the underlying ImageJ architecture and adds modern software design elements to expand the capabilities of the platform and facilitate collaboration between biologists and computer scientists. Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Validation of a digital pathology system including remote review during the COVID-19 pandemic
Remote digital pathology allows healthcare systems to maintain pathology operations during public health emergencies. Existing Clinical Laboratory Improvement Amendments (CLIA) regulations require pathologists to electronically verify patient reports from a certified facility. During the pandemic of COVID-19, the disease caused by the SARS-CoV-2 virus, this requirement potentially exposes pathologists, their colleagues, and household members to the risk of becoming infected. Relaxation of government enforcement of this regulation allows pathologists to review and report pathology specimens from a remote, non-CLIA-certified facility. The availability of digital pathology systems can facilitate remote microscopic diagnosis, although formal comprehensive (case-based) validation of remote digital diagnosis has not been reported. All glass slides representing routine clinical signout workload in surgical pathology subspecialties at Memorial Sloan Kettering Cancer Center were scanned on an Aperio GT450 at ×40 equivalent resolution (0.26 µm/pixel). Twelve pathologists from nine surgical pathology subspecialties remotely reviewed and reported complete pathology cases using a digital pathology system from a non-CLIA-certified facility through a secure connection. Whole slide images were integrated into the laboratory information system and launched in a custom, vendor-agnostic whole slide image viewer. Remote signouts utilized consumer-grade computers and monitors (monitor size, 13.3–42 in.; resolution, 1280 × 800–3840 × 2160 pixels) connecting to an institutional clinical workstation via secure virtual private network. Pathologists subsequently reviewed all corresponding glass slides using a light microscope within the CLIA-certified department. Intraobserver concordance metrics included reporting elements of top-line diagnosis, margin status, lymphovascular and/or perineural invasion, pathology stage, and ancillary testing.
The median whole slide image file size was 1.3 GB; scan time per slide averaged 90 s; and scanned tissue area averaged 612 mm². Signout sessions included a total of 108 cases, comprising 254 individual parts and 1196 slides. Major diagnostic equivalency between digital and glass slide diagnoses was 100%, and overall concordance was 98.8% (251/254). This study reports validation of primary diagnostic review and reporting of complete pathology cases from a remote site during a public health emergency. Our experience shows high (100%) intraobserver digital-to-glass-slide major diagnostic concordance when reporting from a remote site. This randomized, prospective study successfully validated remote use of a digital pathology system, demonstrating the operational feasibility of remote review and reporting of pathology specimens and evaluating remote access performance and usability for remote signout.
Multi-level block permutation
Under weak and reasonable assumptions, mainly that data are exchangeable under the null hypothesis, permutation tests can provide exact control of false positives and allow the use of various non-standard statistics. There are, however, various common examples in which global exchangeability can be violated, including paired tests, tests that involve repeated measurements, and tests in which subjects are relatives (members of pedigrees): any dataset with known dependence among observations. In these cases, some permutations, if performed, would create data that would not possess the original dependence structure and thus should not be used to construct the reference (null) distribution. To allow permutation inference in such cases, we test the null hypothesis using only a subset of all otherwise possible permutations, i.e., using only the rearrangements of the data that respect exchangeability, thus retaining the original joint distribution unaltered. In a previous study, we defined exchangeability for blocks of data, as opposed to each datum individually, allowing permutations to happen within blocks, or the blocks as a whole to be permuted. Here we extend that notion to allow blocks to be nested, in a hierarchical, multi-level definition. We do not explicitly model the degree of dependence between observations, only the lack of independence; the dependence is implicitly accounted for by the hierarchy and by the permutation scheme. The strategy is compatible with heteroscedasticity and variance groups, and can be used with permutations, sign flippings, or both combined. We evaluate the method for various dependence structures, apply it to real data from the Human Connectome Project (HCP) as an example application, show that false positives can be avoided in such cases, and provide a software implementation of the proposed approach.
• The presence of structured, non-independent data affects simple permutation testing.
• Modelling the full dependence is obviated through the definition of variance groups (minimal assumptions).
• Implementation is based on shuffling branches of a tree-like (hierarchical) structure.
• Validity is demonstrated with simulations and exemplified with data from the HCP.
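As a minimal illustration of the two shuffling modes described above (rearranging observations within blocks versus permuting whole blocks as intact units), the sketch below uses a hypothetical single-level scheme of sibling pairs; it is not the authors' actual implementation:

```python
import numpy as np

def permute_within_blocks(data, blocks, rng):
    """Shuffle observations only inside each exchangeability block,
    preserving the between-block structure."""
    out = np.array(data, copy=True)
    for b in np.unique(blocks):
        idx = np.flatnonzero(blocks == b)
        out[idx] = out[rng.permutation(idx)]
    return out

def permute_whole_blocks(data, blocks, rng):
    """Permute equally sized blocks as intact units (e.g. whole sibships),
    keeping each block's internal ordering."""
    data, blocks = np.asarray(data), np.asarray(blocks)
    uniq = np.unique(blocks)
    order = rng.permutation(len(uniq))
    idx = np.concatenate([np.flatnonzero(blocks == uniq[i]) for i in order])
    return data[idx]

# Six observations grouped into three blocks of two (e.g. sibling pairs)
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
blocks = np.array([0, 0, 1, 1, 2, 2])
rng = np.random.default_rng(0)
print(permute_within_blocks(data, blocks, rng))
print(permute_whole_blocks(data, blocks, rng))
```

Valid rearrangements for a real multi-level design would combine both modes according to the block hierarchy; this sketch only shows each mode in isolation.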
Enhanced Tooth Region Detection Using Pretrained Deep Learning Models
The rapid development of artificial intelligence (AI) has led to the emergence of many new technologies in the healthcare industry. In dentistry, the patient's panoramic radiographic or cone beam computed tomography (CBCT) images are used in implant placement planning to find the correct implant position and eliminate surgical risks. This study aims to develop a deep learning-based model that detects the position of missing teeth on a dataset segmented from CBCT images. Five hundred CBCT images were included in this study. After preprocessing, the datasets were randomized and divided into 70% training, 20% validation, and 10% test data. A total of six pretrained convolutional neural network (CNN) models were used in this study: AlexNet, VGG16, VGG19, ResNet50, DenseNet169, and MobileNetV3. In addition, the proposed models were tested with and without applying the segmentation technique. For the normal-teeth class, the precision of the proposed pretrained DL models was above 0.90. Moreover, the experimental results showed the superiority of DenseNet169, with a precision of 0.98; the other models, MobileNetV3, VGG19, ResNet50, VGG16, and AlexNet, obtained precisions of 0.95, 0.94, 0.94, 0.93, and 0.92, respectively. The DenseNet169 model performed well at the different stages of CBCT-based detection and classification, with a segmentation accuracy of 93.3% and classification of missing tooth regions with an accuracy of 89%. As a result, this model may represent a promising time-saving tool for dental implantologists and a significant step toward automated dental implant planning.
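The randomized 70/20/10 split described above can be sketched as follows; the filenames are hypothetical and the study's actual preprocessing pipeline is not reproduced here:

```python
import random

def split_dataset(items, train=0.7, val=0.2, seed=42):
    """Randomize and partition items into train/validation/test sets
    (70/20/10 by default), mirroring the protocol described above."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train, n_val = int(n * train), int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 500 CBCT images, as in the study (hypothetical filenames)
images = [f"cbct_{i:03d}.png" for i in range(500)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 350 100 50
```

Fixing the seed makes the partition reproducible across runs, which matters when comparing six pretrained backbones on the same data.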
Coronary CT angiography: image quality, diagnostic accuracy, and potential for radiation dose reduction using a novel iterative image reconstruction technique—comparison with traditional filtered back projection
Objectives To compare the image noise, image quality, and diagnostic accuracy of coronary CT angiography (cCTA) using a novel iterative reconstruction algorithm versus traditional filtered back projection (FBP), and to estimate the potential for radiation dose savings. Methods Sixty-five consecutive patients (48 men; 59.3 ± 7.7 years) prospectively underwent cCTA and coronary catheter angiography (CCA). Full radiation dose data, using all projections, were reconstructed with FBP. To simulate image acquisition at half the radiation dose, 50% of the projections were discarded from the raw data. The resulting half-dose data were reconstructed with sinogram-affirmed iterative reconstruction (SAFIRE). Full-dose FBP and half-dose iterative reconstructions were compared with regard to image noise and image quality, and their respective accuracy for stenosis detection was compared against CCA. Results Compared with full-dose FBP, half-dose iterative reconstructions showed significantly (p = 0.001–0.025) lower image noise and slightly higher image quality. Iterative reconstruction improved the accuracy of stenosis detection compared with FBP (per-patient: accuracy 96.9% vs. 93.8%, sensitivity 100% vs. 100%, specificity 94.6% vs. 89.2%, NPV 100% vs. 100%, PPV 93.3% vs. 87.5%). Conclusions Iterative reconstruction significantly reduces image noise without loss of diagnostic information and holds the potential for substantial radiation dose reduction in cCTA.
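The per-patient figures reported above follow from standard confusion-matrix definitions. The sketch below uses counts inferred to match the reported half-dose iterative-reconstruction percentages over 65 patients; this is an illustrative reconstruction, not data taken from the paper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard per-patient diagnostic accuracy metrics from
    confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Counts inferred from the reported percentages: 28 true positives,
# 2 false positives, 35 true negatives, 0 false negatives (65 patients)
m = diagnostic_metrics(tp=28, fp=2, tn=35, fn=0)
print({k: round(100 * v, 1) for k, v in m.items()})
# {'accuracy': 96.9, 'sensitivity': 100.0, 'specificity': 94.6, 'ppv': 93.3, 'npv': 100.0}
```

With zero false negatives, sensitivity and NPV are both exactly 100%, which is why those two metrics are identical between the FBP and iterative arms in the abstract.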