Catalogue Search | MBRL
19 result(s) for "3D-Unet"
Segmentation of Lung Nodules Using Improved 3D-UNet Neural Network
2020
Lung cancer has one of the highest morbidity and mortality rates in the world. Lung nodules are an early indicator of lung cancer, so accurate detection and image segmentation of lung nodules is of great significance for early diagnosis. This paper proposes a CT (computed tomography) image lung nodule segmentation method based on 3D-UNet and Res2Net, establishing a new convolutional neural network called 3D-Res2UNet. 3D-Res2UNet has a symmetrical hierarchical connection network with strong multi-scale feature extraction capabilities: it enables the network to express multi-scale features at a finer granularity while increasing the receptive field of each layer. This structure addresses the difficulties of training deep networks, making them less prone to vanishing and exploding gradients, which improves detection and segmentation accuracy. The U-shaped network preserves the size of the feature map while effectively recovering lost features. The method was tested on the LUNA16 public dataset, where the Dice coefficient reached 95.30% and the recall reached 99.1%, indicating that this method performs well in lung nodule image segmentation.
Journal Article
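The two metrics reported on LUNA16 above, the Dice coefficient and recall, can be illustrated with a toy computation on flat binary masks (this is an illustrative sketch, not the paper's code):

```python
# Toy illustration of Dice coefficient and recall for binary segmentation
# masks; the example masks below are made up.

def dice_and_recall(pred, truth):
    """Compute Dice coefficient and recall for flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fn = sum((not p) and t for p, t in zip(pred, truth))    # missed voxels
    dice = 2 * tp / (sum(pred) + sum(truth))
    recall = tp / (tp + fn)
    return dice, recall

pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 0, 1, 1]
dice, recall = dice_and_recall(pred, truth)
print(dice, recall)  # → 0.75 0.75
```

A recall of 99.1% in the abstract means almost no true nodule voxels are missed, while Dice additionally penalizes false positives.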
3D-UNet-LSTM: A Deep Learning-Based Radar Echo Extrapolation Model for Convective Nowcasting
2023
Radar echo extrapolation is a commonly used approach for convective nowcasting. The evolution of convective systems over a very short term can be foreseen according to the extrapolated reflectivity images. Recently, deep neural networks have been widely applied to radar echo extrapolation and have achieved better forecasting performance than traditional approaches. However, it is difficult for existing methods to combine predictive flexibility with the ability to capture temporal dependencies at the same time. To leverage the advantages of the previous networks while avoiding the mentioned limitations, a 3D-UNet-LSTM model, which has an extractor-forecaster architecture, is proposed in this paper. The extractor adopts 3D-UNet to extract comprehensive spatiotemporal features from the input radar images. In the forecaster, a newly designed Seq2Seq network exploits the extracted features and uses different convolutional long short-term memory (ConvLSTM) layers to iteratively generate hidden states for different future timestamps. Finally, the hidden states are transformed into predicted radar images through a convolutional layer. We conduct 0–1 h convective nowcasting experiments on the public MeteoNet dataset. Quantitative evaluations demonstrate the effectiveness of the 3D-UNet extractor, the newly designed forecaster, and their combination. In addition, case studies qualitatively demonstrate that the proposed model has a better spatiotemporal modeling ability for the complex nonlinear processes of convective echoes.
Journal Article
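The extractor-forecaster architecture described above can be sketched schematically: an extractor summarises the input sequence into a context, and the forecaster iteratively updates a hidden state to emit one prediction per future timestamp. The stand-in functions below are assumptions for illustration, not the paper's 3D-UNet or ConvLSTM layers:

```python
# Minimal schematic of the extractor-forecaster idea (toy dynamics only).

def extractor(frames):
    # Stand-in for 3D-UNet feature extraction: average the input frames.
    return sum(frames) / len(frames)

def forecaster(context, horizon, decay=0.9):
    # Stand-in for the ConvLSTM Seq2Seq loop: each step updates the hidden
    # state from the previous one and emits a predicted "frame".
    hidden, outputs = context, []
    for _ in range(horizon):
        hidden = decay * hidden      # recurrent update (toy dynamics)
        outputs.append(hidden)       # predicted echo intensity
    return outputs

past = [10.0, 12.0, 14.0]            # observed radar intensities
preds = forecaster(extractor(past), horizon=6)
print(len(preds))  # → 6, one prediction per future timestamp
```

The real model replaces the scalar update with ConvLSTM layers over feature maps and a final convolution that maps hidden states back to radar images.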
Deep Learning-Based Segmentation of 3D Volumetric Image and Microstructural Analysis
2023
As a fundamental but difficult topic in computer vision, 3D object segmentation has applications in medical image analysis, autonomous vehicles, robotics, virtual reality, lithium battery image analysis, and more. In the past, 3D segmentation was performed using hand-crafted features and design techniques, but these could not generalize to vast amounts of data or reach acceptable accuracy. Deep learning techniques have lately emerged as the preferred method for 3D segmentation tasks as a result of their extraordinary performance in 2D computer vision. Our proposed method uses a CNN-based architecture called 3D UNET, inspired by the well-known 2D UNET, to segment volumetric image data. To observe internal changes in composite materials, for instance in a lithium battery image, it is necessary to track the flow of different materials and analyze their internal properties. In this paper, a combination of 3D UNET and VGG19 is used to conduct multiclass segmentation of publicly available sandstone datasets, analyzing their microstructures based on four different objects in the volumetric samples. Our image sample contains a total of 448 2D images, which are aggregated into one 3D volume for analysis. The solution segments each object in the volume data and further analyzes each object to find its average size, area percentage, total area, etc. The open-source image processing package IMAGEJ is used for further analysis of individual particles. In this study, it was demonstrated that convolutional neural networks can be trained to recognize sandstone microstructure traits with an accuracy of 96.78% and an IoU of 91.12%. To our knowledge, many prior works have applied 3D UNET for segmentation, but very few extend it to show the details of particles in the sample. The proposed solution offers computational insight for real-time implementation and is found to be superior to current state-of-the-art methods. The result is relevant for building similar models for the microstructural analysis of volumetric data.
Journal Article
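The aggregation step described above, stacking 2D slices into one 3D volume and then reporting per-class area percentages, can be sketched with NumPy. The shapes and label values below are illustrative assumptions, not the paper's sandstone data:

```python
import numpy as np

# Hypothetical sketch: stack 448 2D slices into a 3D volume, then compute
# each class's volume percentage from a toy 4-class label volume.

slices = [np.zeros((64, 64), dtype=np.uint8) for _ in range(448)]
volume = np.stack(slices, axis=0)        # shape: (depth, height, width)

# Toy 4-class label volume, as in the four-object sandstone segmentation.
labels = np.random.default_rng(0).integers(0, 4, size=volume.shape)
pcts = [100.0 * np.count_nonzero(labels == c) / labels.size
        for c in range(4)]
print(volume.shape, [round(p, 2) for p in pcts])
```

The per-class percentages necessarily sum to 100, which is a useful sanity check before the finer per-particle analysis done in IMAGEJ.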
Multi-Scale Convolutional Attention and Structural Re-Parameterized Residual-Based 3D U-Net for Liver and Liver Tumor Segmentation from CT
2025
Accurate segmentation of the liver and liver tumors is crucial for clinical diagnosis and treatment. However, the task poses significant challenges due to the complex morphology of tumors, indistinct features of small targets, and the similarity in grayscale values between the liver and surrounding organs. To address these issues, this paper proposes an enhanced 3D UNet architecture, named ELANRes-MSCA-UNet. By incorporating a structural re-parameterized residual module (ELANRes) and a multi-scale convolutional attention module (MSCA), the network significantly improves feature extraction and boundary optimization, particularly excelling in segmenting small targets. Additionally, a two-stage strategy is employed, where the liver region is segmented first, followed by the fine-grained segmentation of tumors, effectively reducing false positive rates. Experiments conducted on the LiTS2017 dataset demonstrate that the ELANRes-MSCA-UNet achieved Dice scores of 97.2% and 72.9% for liver and tumor segmentation tasks, respectively, significantly outperforming other state-of-the-art methods. These results validate the accuracy and robustness of the proposed method in medical image segmentation and highlight its potential for clinical applications.
Journal Article
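The two-stage strategy described above, segmenting the liver first and then the tumors, can be illustrated with a toy mask intersection (an illustrative sketch, not the authors' pipeline): restricting tumor predictions to the predicted liver region suppresses false positives outside the organ.

```python
import numpy as np

# Toy two-stage segmentation: keep only tumor voxels inside the liver mask.

liver_mask = np.zeros((8, 8), dtype=bool)
liver_mask[2:6, 2:6] = True              # stage 1: coarse liver region

tumor_raw = np.zeros((8, 8), dtype=bool)
tumor_raw[3, 3] = True                   # true tumor inside the liver
tumor_raw[0, 7] = True                   # spurious detection outside it

tumor_final = tumor_raw & liver_mask     # stage 2: restrict to the liver
print(int(tumor_raw.sum()), int(tumor_final.sum()))  # → 2 1
```

The spurious detection outside the liver is removed, which is exactly the false-positive reduction the abstract attributes to the two-stage design.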
Computerized Characterization of Spinal Structures on MRI and Clinical Significance of 3D Reconstruction of Lumbosacral Intervertebral Foramen
2022
Segmentation of spinal structures is important in medical imaging analysis, as it helps surgeons plan a preoperative trajectory for the transforaminal approach. However, manual segmentation of spinal structures is time-consuming, and studies have not explored automatic segmentation of spinal structures at the L5/S1 level.
This study sought to develop a new method based on a deep learning algorithm for automatic segmentation of spinal structures. The resulting algorithm may be used to rapidly generate a precise 3D lumbosacral intervertebral foramen model to assist physicians in planning an ideal trajectory in L5/S1 lumbar transforaminal radiofrequency ablation (LTRFA).
This was an observational study for developing a new technique on spinal structures segmentation.
The study was carried out at the department of radiology and spine surgery at our hospital.
A total of 100 L5/S1 level data samples from 100 study patients were used in this study. Masks of vertebral bone structures (VBSs) and intervertebral discs (IVDs) for all data samples were segmented manually by a skilled surgeon and served as the "ground truth." After data preprocessing, a 3D-UNet model based on deep learning was used for automated segmentation of lumbar spine structures on L5/S1 level magnetic resonance imaging (MRI). Segmentation performance and morphometric measurements were used to compare 3D lumbosacral intervertebral foramen (LIVF) reconstructions generated by either manual or automatic segmentation.
The 3D-UNet model showed high performance in automatic segmentation of lumbar spinal structures (VBSs and IVDs). The corresponding mean Dice similarity coefficient (DSC) of 5-fold cross-validation scores for L5 vertebrae, IVDs, S1 vertebrae, and all L5/S1 level spinal structures were 93.46 ± 2.93%, 90.39 ± 6.22%, 93.32 ± 1.51%, and 92.39 ± 2.82%, respectively. Notably, the analysis showed no associated difference in morphometric measurements between the manual and automatic segmentation at the L5/S1 level.
Semantic segmentation of multiple spinal structures (such as VBSs, IVDs, blood vessels, muscles, and ligaments) was not simultaneously integrated into the deep-learning method in this study. In addition, large clinical experiments are needed to evaluate the clinical efficacy of the model.
The 3D-UNet model developed in this study can effectively and simultaneously segment VBSs and IVDs at the L5/S1 level from MR images, thereby enabling rapid and accurate 3D reconstruction of LIVF models. The method segments VBSs and IVDs on MR images with near-human-expert performance; therefore, it is reliable for reconstructing the LIVF for L5/S1 LTRFA.
Journal Article
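The "mean ± std" 5-fold cross-validation Dice scores reported above can be summarised as follows; the per-fold values below are fabricated for illustration only (they merely average to the reported 93.46% for L5 vertebrae):

```python
# Sketch of summarising k-fold Dice scores as mean ± sample standard
# deviation; the fold values are hypothetical.
from statistics import mean, stdev

fold_dsc = [93.1, 94.2, 92.7, 93.9, 93.4]   # hypothetical per-fold Dice (%)
mu, sd = mean(fold_dsc), stdev(fold_dsc)
print(f"{mu:.2f} ± {sd:.2f}")  # → 93.46 ± 0.60
```

Note that `statistics.stdev` is the sample standard deviation (n − 1 denominator), the usual choice when the folds are treated as a sample of possible splits.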
Yaru3DFPN: a lightweight modified 3D UNet with feature pyramid network and combine thresholding for brain tumor segmentation
by Suciati, Nanik; Akbar, Agus Subhan; Fatichah, Chastine
in Artificial Intelligence; Brain; Brain cancer
2024
Gliomas are the most common and aggressive form of all brain tumors, with a median survival of less than two years, especially for patients with the highest-grade gliomas. Accurate and reproducible brain tumor segmentation is essential for an effective treatment plan and diagnosis to reduce the risk of further spread. Automated brain tumor segmentation is challenging because tumors vary in shape, size, and position from one patient to another. Several deep learning architectures have been created to handle automatic segmentation with good performance on 3D MRI images. However, these architectures are generally large and require high hardware specifications and a large amount of memory and storage. This paper proposes a lightweight modified 3D UNet architecture with an outstanding performance level, called Yaru3DFPN. The architecture is built on the UNet. Its blocks are ResNet blocks, modified to use pre-activation strategies and GroupNormalization in place of batch normalization. In the expanding section, features are arranged into pyramid features. The final output is thresholded using the combine thresholding method. This architecture is light and fast. The proposal was tested on BraTS datasets with the highest Dice performance of 80.90%, 86.27%, and 92.02% for the ET, TC, and WT areas, respectively. These results outperformed all comparison architectures, showing promise for clinical application.
Journal Article
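A common BraTS-style post-processing step in the spirit of the thresholding mentioned above can be sketched as follows: if a predicted enhancing-tumor (ET) region is smaller than a volume threshold, relabel it as tumor core (TC). The exact "combine thresholding" rule in the paper may differ; the labels and threshold here are assumptions:

```python
import numpy as np

# Hedged sketch of volume-threshold post-processing for BraTS-style labels.
ET, TC = 3, 1
pred = np.zeros((4, 4), dtype=np.int64)
pred[1, 1] = ET                          # a 1-voxel ET prediction
pred[2, 2] = TC

min_et_voxels = 5                        # assumed threshold
if np.count_nonzero(pred == ET) < min_et_voxels:
    pred[pred == ET] = TC                # fold tiny ET into TC
print(np.count_nonzero(pred == ET), np.count_nonzero(pred == TC))  # → 0 2
```

Such rules exist because the BraTS ET score is very sensitive to tiny false-positive enhancing regions in cases with little or no enhancement.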
Improved microvascular imaging with optical coherence tomography using 3D neural networks and a channel attention mechanism
by Rashidi, Mohammad; Kalenkov, Georgy; Mclaughlin, Robert A.
in 3D Unet
2024
Skin microvasculature is vital for human cardiovascular health and thermoregulation, but its imaging and analysis presents significant challenges. Statistical methods such as speckle decorrelation in optical coherence tomography angiography (OCTA) often require multiple co-located B-scans, leading to lengthy acquisitions prone to motion artefacts. Deep learning has shown promise in enhancing accuracy and reducing measurement time by leveraging local information. However, both statistical and deep learning methods typically focus solely on processing individual 2D B-scans, neglecting contextual information from neighbouring B-scans. This limitation compromises spatial context and disregards the 3D features within tissue, potentially affecting OCTA image accuracy. In this study, we propose a novel approach utilising 3D convolutional neural networks (CNNs) to address this limitation. By considering the 3D spatial context, these 3D CNNs mitigate information loss, preserving fine details and boundaries in OCTA images. Our method reduces the required number of B-scans while enhancing accuracy, thereby increasing clinical applicability. This advancement holds promise for improving clinical practices and understanding skin microvascular dynamics crucial for cardiovascular health and thermoregulation.
Journal Article
Automatic lymph node segmentation using deep parallel squeeze & excitation and attention Unet
2024
Automatic segmentation and lymph node (LN) detection are critical for cancer staging. In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging are used to detect abnormal LNs. Yet this remains a difficult task due to the low contrast between LNs and surrounding soft tissues and the variation in nodal size and shape. We designed a location-guided 3D dual network for LN segmentation. A localization module generates Gaussian masks centered on LNs within selected regions of interest (ROIs). Our segmentation model incorporates squeeze & excitation (SE) and attention gate (AG) modules into a conventional 3D UNet architecture to boost useful-feature utilization and segmentation accuracy. Lastly, we provide a simple boundary refinement module to polish the results. We assessed the location-guided LN segmentation network on a clinical dataset of head and neck cancer, where it outperformed a comparable architecture without the Gaussian mask.
Journal Article
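The Gaussian masks described above can be sketched as a 3D Gaussian centred on a lymph-node centroid inside the ROI, peaking at the node centre and decaying with distance. The shape and sigma are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of a localization module's Gaussian prior mask.
def gaussian_mask(shape, center, sigma=2.0):
    """3D (or n-D) Gaussian bump centred at `center`, peak value 1.0."""
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.exp(-d2 / (2.0 * sigma ** 2))

mask = gaussian_mask((9, 9, 9), center=(4, 4, 4))
print(mask.shape, float(mask[4, 4, 4]))  # → (9, 9, 9) 1.0
```

Multiplying (or concatenating) such a mask with the image features focuses the segmentation network on the region around the detected node.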
Assessment of volumetric changes after regenerative endodontic procedures using semiautomated and 3D U-NET automated CBCT segmentation: a retrospective cohort study
2025
Background
This retrospective cohort study evaluated volumetric changes in dental pulp and root structure of necrotic teeth after regenerative endodontic procedures (REPs) compared to contralateral counterparts (CON) using semiautomated segmentation and a 3D UNet model.
Methods
Data from 23 teeth with REPs and their CON were analyzed. Semiautomated segmentation was performed using ITK-SNAP and 3D Slicer CMF on scans taken before and 12 months post-treatment. Measurements included root length, volume changes (dentinal wall, intracanal calcification, and pulp) below the Biodentine plug. Automated pulpal segmentation utilized a 3D UNet model trained on 400 samples from the ToothFairy dataset, achieving a Dice Similarity Index of 0.76, Intersection over Union of 0.61, and Hausdorff distance of 2.15.
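Of the three metrics reported above, the Hausdorff distance is the least familiar: it is the largest distance from any point of one surface to the nearest point of the other, taken symmetrically. A pure-Python toy on two small point sets (not the study's pipeline):

```python
# Toy symmetric Hausdorff distance between two 2D point sets.

def hausdorff(a, b):
    def directed(src, dst):
        return max(min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                       for q in dst)
                   for p in src)
    return max(directed(a, b), directed(b, a))

surface_a = [(0, 0), (0, 1), (0, 2)]
surface_b = [(1, 0), (1, 1), (3, 2)]
print(hausdorff(surface_a, surface_b))  # → 3.0
```

Unlike Dice and IoU, which average over the whole mask, the Hausdorff distance is driven by the single worst boundary error, so a value of 2.15 bounds the largest segmentation-boundary deviation.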
Results
REP teeth showed no significant differences in root volume or dentinal changes compared to CON. REP significantly reduced pulp volume in mature teeth (−4.86 mm³ vs. −1.34 mm³, p = 0.05), with lesser changes in immature teeth (−3.20 mm³ vs. −6.44 mm³, p = 0.12). Significant root length differences (p = 0.04) were observed between mature and immature teeth with REPs. The 3D UNet model and the semiautomated method for volumetric pulp assessment showed excellent agreement (ICC = 0.92, p < 0.001). Bland–Altman plots indicated good agreement between the two methods in measuring pulpal volumetric changes.
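The Bland–Altman agreement check mentioned above reduces to computing the mean difference (bias) between paired measurements and the 95% limits of agreement, bias ± 1.96 × SD of the differences. The paired volume changes below are fabricated for illustration only:

```python
# Sketch of Bland–Altman bias and 95% limits of agreement (toy data).
from statistics import mean, stdev

semi = [-4.1, -3.6, -5.0, -2.2, -4.8]    # semiautomated pulp volume change
auto = [-4.0, -3.9, -4.7, -2.5, -4.6]    # 3D UNet pulp volume change

diffs = [s - a for s, a in zip(semi, auto)]
bias = mean(diffs)
loa = 1.96 * stdev(diffs)
print(round(bias, 3), round(bias - loa, 3), round(bias + loa, 3))
```

Good agreement means the bias is near zero and the limits of agreement are narrow relative to the effect sizes being measured.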
Conclusions
REPs yield findings comparable to CON in root volume and dentinal wall changes, while significantly reducing pulp volume in mature teeth compared to CON. Distinct patterns of intracanal calcification and root length changes between mature and immature teeth were identified. A deep learning model can expedite post-REP pulp volume evaluations.
Clinical implications
Understanding volumetric changes in dental pulp and root structure is vital for assessing REP success, facilitating clinical decision-making, and improving outcomes in regenerative endodontics.
Journal Article
Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network
by Raith, Stefan; Pankert, Tobias; Peters, Florian
in Accuracy; Artificial neural networks; Automation
2023
Purpose
For computer-aided planning of facial bony surgery, the creation of high-resolution 3D-models of the bones by segmenting volume imaging data is a labor-intensive step, especially as metal dental inlays or implants cause severe artifacts that reduce the quality of the computer-tomographic imaging data. This study provides a method to segment accurate, artifact-free 3D surface models of mandibles from CT data using convolutional neural networks.
Methods
The presented approach cascades two independently trained 3D-U-Nets to perform accurate segmentations of the mandible bone from full resolution CT images. The networks are trained in different settings using three different loss functions and a data augmentation pipeline. Training and evaluation datasets consist of manually segmented CT images from 307 dentate and edentulous individuals, partly with heavy imaging artifacts. The accuracy of the models is measured using overlap-based, surface-based and anatomical-curvature-based metrics.
Results
Our approach produces high-resolution segmentations of the mandibles, coping with severe imaging artifacts in the CT imaging data. The use of the two-stepped approach yields highly significant improvements to the prediction accuracies. The best models achieve a Dice coefficient of 94.824% and an average surface distance of 0.31 mm on our test dataset.
Conclusion
The use of two cascaded U-Nets allows high-resolution predictions for small regions of interest in the imaging data. The proposed method is fast and allows user-independent image segmentation, producing objective and repeatable results that can be used in automated surgical planning procedures.
Journal Article
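The two-stepped idea above, where a first-stage network's coarse mask defines a region of interest that the second stage segments at full resolution, can be sketched as a bounding-box crop. The shapes are illustrative assumptions, not the study's data:

```python
import numpy as np

# Sketch of cascaded segmentation: crop the full CT volume to the bounding
# box of a coarse first-stage mask, then feed the crop to the second stage.

def bounding_box(mask):
    """Return slices covering the tight bounding box of a boolean mask."""
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))

ct = np.zeros((64, 64, 64), dtype=np.float32)
coarse = np.zeros(ct.shape, dtype=bool)
coarse[10:30, 20:50, 15:40] = True       # toy stage-1 mandible prediction

crop = ct[bounding_box(coarse)]          # stage-2 input at full resolution
print(crop.shape)  # → (20, 30, 25)
```

Because the second network sees only the cropped region, it can work at the CT's native resolution without exceeding memory limits, which is what enables the high-resolution surface models described above.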