48,381 results for "image segmentation"
Towards a guideline for evaluation metrics in medical image segmentation
In the last decade, research on artificial intelligence has seen rapid growth with deep learning models, especially in the field of medical image segmentation. Various studies have demonstrated that these models have powerful prediction capabilities and achieve results comparable to those of clinicians. However, recent studies revealed that evaluation in image segmentation studies often lacks reliable model performance assessment and exhibits statistical bias caused by incorrect metric implementation or usage. This work therefore provides an overview and interpretation guide for the following metrics for medical image segmentation evaluation, in binary as well as multi-class problems: Dice similarity coefficient, Jaccard index, sensitivity, specificity, Rand index, ROC curves, Cohen's kappa, and Hausdorff distance. Furthermore, common issues such as class imbalance and statistical as well as interpretation biases in evaluation are discussed. In summary, we propose a guideline for standardized medical image segmentation evaluation to improve evaluation quality, reproducibility, and comparability in the research field.
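As a quick illustration of two of the overlap metrics this guideline covers, here is a minimal NumPy sketch of the Dice similarity coefficient and the Jaccard index for binary masks; the `eps` smoothing term is a common implementation convention (to avoid division by zero on empty masks), not part of the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

Note that the two metrics are monotonically related (J = D / (2 − D)), which is one reason the guideline discusses reporting them together with care.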
Deep semantic segmentation of natural and medical images: a review
The semantic image segmentation task consists of classifying each pixel of an image into a class. This task is part of scene understanding, i.e., explaining the global context of an image. In the medical image analysis domain, image segmentation can be used for image-guided interventions, radiotherapy, or improved radiological diagnostics. In this review, we categorize the leading deep learning-based medical and non-medical image segmentation solutions into six main groups: deep architectural, data synthesis-based, loss function-based, sequenced models, weakly supervised, and multi-task methods, and provide a comprehensive review of the contributions in each group. For each group, we further analyze the individual variants, discuss the limitations of current approaches, and present potential future research directions for semantic image segmentation.
A survey on recent trends in deep learning for nucleus segmentation from histopathology images
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting nuclei is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting many researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017–2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and emerging research areas.
DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation
Deep learning architectures based on convolutional neural networks have achieved outstanding success in the field of computer vision. U-Net in particular has made a great breakthrough in biomedical image segmentation and has been widely applied in a wide range of practical scenarios. However, the identical design of every downsampling layer in the encoder and the simply stacked convolutions do not allow U-Net to extract sufficient feature information from different depths. The increasing complexity of medical images brings new challenges to existing methods. In this paper, we propose a deeper and more compact split-attention U-shaped network that efficiently utilises low-level and high-level semantic information based on two components: primary feature conservation and a compact split-attention block. We evaluate the proposed model on the CVC-ClinicDB, 2018 Data Science Bowl, ISIC-2018, SegPC-2021, and BraTS-2021 datasets. Our proposed model outperforms other state-of-the-art methods in terms of mean intersection over union and Dice coefficient, and it demonstrates excellent segmentation performance on challenging images. The code for our work and more technical details can be found at https://github.com/xq141839/DCSAU-Net.
  • Low-level features are collected using optimised depth-wise separable convolution.
  • A lightweight multi-scale split-attention block is used for deep feature extraction.
  • Notable performance improvements on complex images are achieved with a compact model.
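The highlights above mention optimised depth-wise separable convolution for collecting low-level features. As a rough illustration of the general factorization (a per-channel spatial convolution followed by a 1×1 point-wise convolution), here is a naive NumPy sketch; this is not the paper's actual implementation, which lives in the linked repository:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Naive depth-wise separable convolution with 'same' zero padding.

    x:          input feature map, shape (C_in, H, W)
    dw_kernels: one spatial kernel per input channel, shape (C_in, k, k)
    pw_weights: 1x1 point-wise mixing weights, shape (C_out, C_in)
    """
    c_in, h, w = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Depth-wise step: each channel is filtered independently.
    dw = np.zeros_like(x, dtype=float)
    for c in range(c_in):
        for i in range(h):
            for j in range(w):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * dw_kernels[c])
    # Point-wise step: a 1x1 convolution mixes information across channels.
    return np.tensordot(pw_weights, dw, axes=([1], [0]))
```

Compared with a standard convolution, this factorization cuts the parameter count from C_out·C_in·k² to C_in·k² + C_out·C_in, which is why it suits a compact model.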
Ultra-High Resolution Image Segmentation via Locality-Aware Context Fusion and Alternating Local Enhancement
Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its realistic applications. In this paper, we rework the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is jointly and complementarily utilized to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced from the contexts and is thus able to refine the locality-aware features into more precise results. In comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks and verify the effectiveness of the proposed modules. Our released code is available at: https://github.com/liqiokkk/FCtL.
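The partition-and-merge pipeline this abstract builds on can be sketched in a few lines of NumPy. Here `segment_fn` stands in for any local segmentation model, and the locality-aware context fusion and enhancement modules are deliberately omitted — this only shows the basic patch plumbing:

```python
import numpy as np

def segment_by_patches(image, patch_size, segment_fn):
    """Partition an image into regular patches, segment each locally,
    and merge the local masks back into a full-resolution mask."""
    h, w = image.shape[:2]
    # Zero-pad so both dimensions divide evenly into patches.
    pad_h, pad_w = (-h) % patch_size, (-w) % patch_size
    padded = np.pad(image, ((0, pad_h), (0, pad_w)))
    mask = np.zeros(padded.shape[:2], dtype=np.int64)
    for y in range(0, padded.shape[0], patch_size):
        for x in range(0, padded.shape[1], patch_size):
            tile = padded[y:y + patch_size, x:x + patch_size]
            mask[y:y + patch_size, x:x + patch_size] = segment_fn(tile)
    return mask[:h, :w]  # crop the padding away
```

The weakness the paper targets is visible here: each call to `segment_fn` sees only its own tile, with no surrounding context, which is what the context fusion module is designed to supply.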
Modality specific U-Net variants for biomedical image segmentation: a survey
With the advent of advances in deep learning approaches such as deep convolutional neural networks, residual neural networks, and adversarial networks, U-Net architectures are the most widely utilized in biomedical image segmentation for automating the identification and detection of target regions or sub-regions. In recent studies, U-Net based approaches have shown state-of-the-art performance in computer-aided diagnosis systems for the early diagnosis and treatment of diseases such as brain tumors, lung cancer, Alzheimer's disease, and breast cancer, across various modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of U-Net variants through (1) inter-modality and (2) intra-modality categorization, to establish better insights into the associated challenges and solutions. This article also highlights the contribution of U-Net based frameworks during the ongoing pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the disease known as COVID-19. Finally, the strengths and similarities of these U-Net variants are analysed, along with the challenges involved in biomedical image segmentation, to uncover promising future research directions in this area.
Enhancing Agricultural Image Segmentation with an Agricultural Segment Anything Model Adapter
The Segment Anything Model (SAM) is a versatile image segmentation model that enables zero-shot segmentation of various objects in any image using prompts, including bounding boxes, points, text, and more. However, studies have shown that SAM performs poorly on agricultural tasks such as crop disease segmentation and pest segmentation. To address this issue, the agricultural SAM adapter (ASA) is proposed, which incorporates agricultural domain expertise into the segmentation model through a simple but effective adapter technique. By leveraging the distinctive characteristics of agricultural image segmentation and suitable user prompts, the model enables zero-shot segmentation, providing a new approach for zero-shot image segmentation in the agricultural domain. Comprehensive experiments are conducted to assess the efficacy of the ASA compared to the default SAM. The results show that the proposed model achieves significant improvements on all 12 agricultural segmentation tasks. Notably, the average Dice score improved by 41.48% on two coffee-leaf-disease segmentation tasks.
Literature review: efficient deep neural networks techniques for medical image analysis
Significant evolution in deep learning took place around 2010, when software developers started using graphics processing units for general-purpose applications. From that point, deep neural networks (DNNs) made progressive advances across applications ranging from natural language processing to hyperspectral image processing. The convolutional neural network (CNN) attracts the most interest, as it is considered one of the most powerful ways to learn useful representations of images and other structured data. The revolution of DNNs in medical imaging (MI) came in 2012, after deep CNNs achieved breakthrough results on ImageNet, a free database of more than 14 million labeled images. This state-of-the-art work presents a comprehensive study of recent DNN research directions applied to MI analysis. Clinical and pathological analysis through a selection of the most-cited works is introduced. It is shown how DNNs are able to tackle medical problems: classification, detection, localization, segmentation, and automatic diagnosis. The datasets comprise a range of imaging technologies: X-ray, MRI, CT, ultrasound, PET, fluorescein angiography, and even photographic images. This work surveys different DNN patterns and focuses on the CNN, which offers an outstanding share of solutions compared to other DNN structures. The CNN emphasizes image features and has well-known architectures. On the other hand, limitations concerning DNN training and execution time are explained. Problems related to data augmentation and image annotation are analyzed across a number of high-standard publications. Finally, a comparative study of existing software frameworks supporting DNNs and future research directions in the area is presented. From all the presented works it can be deduced that the use of DNNs in healthcare is still in its early stages; there are strong initiatives in academia and industry to pursue healthcare projects based on DNNs.
X-Net: a dual encoding–decoding method in medical image segmentation
Medical image segmentation provides a priori guidance for clinical diagnosis and treatment. In the past ten years, a large body of experimental evidence has proved the great success of deep convolutional neural networks in various medical image segmentation tasks. However, convolutional networks tend to focus too much on local image details while ignoring long-range dependencies. The Transformer structure can encode long-range dependencies in an image and learn high-dimensional image information through the self-attention mechanism, but it currently depends on large-scale datasets to realize its full performance, which limits its application to medical images with limited dataset sizes. In this paper, the characteristics of CNNs and Transformers are integrated to propose a dual encoding–decoding structure, the X-shaped network (X-Net). It can serve as a good alternative to traditional purely convolutional medical image segmentation networks. In the encoding phase, local and global features are simultaneously extracted by two types of encoders, convolutional downsampling and Transformer, and then merged through skip connections. In the decoding phase, a variational auto-encoder branch is added to reconstruct the input image itself in order to mitigate the impact of insufficient data. Comparative experiments on three medical image datasets show that X-Net realizes an organic combination of Transformers and CNNs.
Few-shot medical image segmentation using a global correlation network with discriminative embedding
Despite impressive developments in deep convolutional neural networks for medical imaging, the paradigm of supervised learning requires numerous annotations during training to avoid overfitting. In clinical settings, massive semantic annotations are difficult to acquire because biomedical expert knowledge is required, and it is common for only a few annotated classes to be available. In this study, we propose a new approach to few-shot medical image segmentation that enables a segmentation model to generalize quickly to an unseen class with few training images. We construct a few-shot image segmentation mechanism using a deep convolutional network trained episodically. Motivated by the spatial consistency and regularity of medical images, we develop an efficient global correlation module to model the correlation between a support and query image and incorporate it into the deep network. We enhance the discrimination ability of the deep embedding scheme to encourage clustering of feature domains belonging to the same class while keeping feature domains of different organs far apart. We experiment using anatomical abdomen images from both CT and MRI modalities.
  • We propose an efficient global correlation module to capture the correspondence between the support and query image pair.
  • We enhance feature learning through discriminative embedding for intra-class compactness and inter-class separability.
  • We develop a modified episodic training scheme to adapt to the discriminative embedding.
  • The proposed model achieves promising results in few-shot abdominal organ segmentation.
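The global correlation idea described above — relating every spatial location of the support image's features to every location of the query's — can be illustrated as a cosine-similarity map over flattened feature grids. This is a hypothetical simplification for intuition, not the paper's exact module:

```python
import numpy as np

def global_correlation(support_feat, query_feat, eps=1e-8):
    """Cosine similarity between all spatial locations of two feature maps.

    support_feat, query_feat: feature maps of shape (C, H, W).
    Returns an (H*W, H*W) map whose entry (i, j) is the similarity between
    support location i and query location j.
    """
    c = support_feat.shape[0]
    s = support_feat.reshape(c, -1)  # (C, HW): one column per location
    q = query_feat.reshape(c, -1)
    # L2-normalize each location's feature vector, then take dot products.
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + eps)
    return s.T @ q
```

For identical support and query features, the diagonal of this map is (up to `eps`) all ones, reflecting the spatial consistency the module exploits.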