Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
92 result(s) for "Mandible segmentation"
Automated localization of mandibular landmarks in the construction of mandibular median sagittal plane
2024
Objective
To use deep learning to segment the mandible and identify three-dimensional (3D) anatomical landmarks from cone-beam computed tomography (CBCT) images, and to compare and analyze the planes constructed from the mandibular midline landmarks in order to find the best mandibular midsagittal plane (MMSP).
Methods
A total of 400 participants were randomly divided into a training group (n = 360) and a validation group (n = 40). Normal individuals were used as the test group (n = 50). The PointRend deep learning mechanism segmented the mandible from CBCT images, and 27 anatomic landmarks were accurately identified via PoseNet. 3D coordinates of 5 central landmarks and 2 pairs of side landmarks were obtained for the test group. All 35 combinations of 3 midline landmarks were screened using the template mapping technique, and the asymmetry index (AI) was calculated for each of the 35 mirror planes. With the template mapping plane as the reference plane, the four planes with the smallest AIs were compared through distance, volume difference, and similarity index to find the plane with the fewest errors.
Results
The mandible was segmented automatically in 10 ± 1.5 s with a Dice similarity coefficient of 0.98. The mean localization error for the 27 landmarks was 1.04 ± 0.28 mm. The MMSP should use the plane formed by B (supramentale), Gn (gnathion), and F (mandibular foramen). The average AI grade was 1.6 (min–max: 0.59–3.61). There was no significant difference in distance or volume (P > 0.05); however, the similarity index differed significantly (P < 0.01).
Conclusion
Deep learning can automatically segment the mandible, identify anatomic landmarks, and meet clinical demands in people without mandibular deformities. The most accurate MMSP was the B-Gn-F plane.
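The Dice similarity coefficient reported in these results can be computed directly from two binary masks. A minimal NumPy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Two 3x3 masks with 4 voxels each, 2 of them shared: Dice = 2*2 / (4+4) = 0.5
a = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
b = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
print(dice_coefficient(a, b))  # 0.5
```

The same formula applies unchanged to 3D CBCT label volumes, since the sums run over all voxels regardless of dimensionality.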
Journal Article
Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network
by Raith, Stefan; Pankert, Tobias; Peters, Florian
in Accuracy; Artificial neural networks; Automation
2023
Purpose
For computer-aided planning of facial bony surgery, creating high-resolution 3D models of the bones by segmenting volume imaging data is a labor-intensive step, especially as metal dental inlays or implants cause severe artifacts that reduce the quality of the computed tomography (CT) imaging data. This study provides a method to segment accurate, artifact-free 3D surface models of mandibles from CT data using convolutional neural networks.
Methods
The presented approach cascades two independently trained 3D-U-Nets to perform accurate segmentations of the mandible bone from full resolution CT images. The networks are trained in different settings using three different loss functions and a data augmentation pipeline. Training and evaluation datasets consist of manually segmented CT images from 307 dentate and edentulous individuals, partly with heavy imaging artifacts. The accuracy of the models is measured using overlap-based, surface-based and anatomical-curvature-based metrics.
Results
Our approach produces high-resolution segmentations of the mandibles, coping with severe imaging artifacts in the CT imaging data. The use of the two-stepped approach yields highly significant improvements to the prediction accuracies. The best models achieve a Dice coefficient of 94.824% and an average surface distance of 0.31 mm on our test dataset.
Conclusion
The use of two cascaded U-Nets allows high-resolution predictions for small regions of interest in the imaging data. The proposed method is fast and allows user-independent image segmentation, producing objective and repeatable results that can be used in automated surgical planning procedures.
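A two-stepped cascade of the kind described above first localizes the structure on a downsampled volume, then segments at full resolution inside the cropped region of interest. A minimal NumPy sketch of that inference flow, in which the two network calls are stand-ins (simple thresholding functions) and the crop margin is an assumed parameter, not a detail from the paper:

```python
import numpy as np

def coarse_to_fine_segment(volume, coarse_net, fine_net, downsample=4, margin=8):
    """Two-stage inference: locate the target on a downsampled volume,
    then segment at full resolution inside the cropped ROI."""
    # Stage 1: coarse mask on a low-resolution copy (nearest-neighbour subsampling).
    low = volume[::downsample, ::downsample, ::downsample]
    coarse = coarse_net(low) > 0.5
    if not coarse.any():
        return np.zeros(volume.shape, dtype=bool)
    # Bounding box of the coarse prediction, scaled back up with a safety margin.
    idx = np.argwhere(coarse)
    lo = np.maximum(idx.min(0) * downsample - margin, 0)
    hi = np.minimum((idx.max(0) + 1) * downsample + margin, volume.shape)
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    # Stage 2: fine segmentation only inside the ROI.
    out = np.zeros(volume.shape, dtype=bool)
    out[sl] = fine_net(volume[sl]) > 0.5
    return out

# Toy "networks": plain thresholds; a bright cube stands in for the mandible.
vol = np.zeros((64, 64, 64)); vol[20:40, 24:44, 16:36] = 1.0
mask = coarse_to_fine_segment(vol, lambda v: v, lambda v: v)
print(mask.sum() == (vol > 0.5).sum())  # True: the ROI contains the full object
```

The benefit sketched here matches the paper's motivation: the expensive full-resolution model only ever sees the small cropped region, not the whole scan.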
Journal Article
Automatic Segmentation of Mandible from Conventional Methods to Deep Learning—A Review
by Guo, Jiapan; van der Wel, Hylke; van Ooijen, Peter M. A.
in Automation; Cancer therapies; Computed tomography
2021
Medical imaging techniques, such as (cone-beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize the mandible volume and to quantitatively evaluate particular mandible properties. However, mandible segmentation remains challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as tooth fillings or metal implants, which easily lead to strong noise and artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the fully and semi-automatic mandible segmentation methods published in scientific articles. The review provides clinicians and researchers in this field with a clear description of these scientific advancements, to help develop novel automatic methods for clinical applications.
Journal Article
Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
by Guo, Jiapan; van Ooijen, Peter M. A.; Qiu, Bingjiang
in Algorithms; Computed tomography; Connectivity
2021
Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance the condyles and coronoids, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN to segment a single slice in the CT scan. The approach can perform 3D mandible segmentation on sequential data of varying lengths and does not incur a large computational cost. RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The final accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to the state-of-the-art approaches on the PDDCA dataset.
The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than those of the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
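The surface-distance metrics quoted above (ASD and 95HD) are computed from the boundary voxels of the two masks. A brute-force 2D NumPy sketch, workable for small masks (real pipelines use distance transforms and account for voxel spacing in mm, which this toy omits):

```python
import numpy as np

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of boundary pixels: foreground pixels with a background 4-neighbour."""
    pad = np.pad(mask, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(mask & ~interior)

def asd_and_95hd(a: np.ndarray, b: np.ndarray):
    """Average symmetric surface distance and 95% Hausdorff distance (pixel units)."""
    pa, pb = surface_points(a), surface_points(b)
    # All pairwise Euclidean distances between the two boundary point sets.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Symmetric: nearest-surface distance in both directions, pooled.
    both = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(both.mean()), float(np.percentile(both, 95))

# Two squares offset by one pixel: every boundary point lies within sqrt(2) px
# of the other boundary, so both metrics stay below 1.5 px.
a = np.zeros((32, 32), dtype=bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), dtype=bool); b[9:21, 9:21] = True
asd, hd95 = asd_and_95hd(a, b)
print(asd < 1.5 and hd95 < 1.5)  # True
```

Taking the 95th percentile instead of the maximum is what makes 95HD robust to a few outlier boundary voxels, which is why it is preferred over the plain Hausdorff distance in segmentation benchmarks such as PDDCA.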
Journal Article
Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model
by Guo, Jiapan; Hendrik Glas, Haye; van der Wel, Hylke
in Automation; Computed tomography; Deep learning
2021
Accurate mandible segmentation is important in maxillofacial surgery to guide clinical diagnosis and treatment and to develop appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images acquired in the presence of metal parts, such as those used in oral and maxillofacial surgery (OMFS), often suffer from metal artifacts such as weak and blurred boundaries, caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates overall anatomical knowledge of the mandible. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, and recurrent connections that maintain the continuous structure of the mandible. The effectiveness of the proposed network is substantiated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that the proposed SASeg can easily improve prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, the proposed SASeg achieves better segmentation performance.
Journal Article
Mandible Segmentation of Dental CBCT Scans Affected by Metal Artifacts Using Coarse-to-Fine Learning Model
by Guo, Jiapan; van der Wel, Hylke; van Ooijen, Peter M. A.
in Computed tomography; Curricula; Datasets
2021
Because of its low radiation dose and short scanning duration, cone-beam computed tomography (CBCT) is widely used in maxillofacial surgery and orthodontic treatment planning, and accurate segmentation of the mandible from CBCT scans is an important step in building a personalized 3D digital mandible model. CBCT images, however, exhibit lower contrast and higher levels of noise and artifacts than conventional computed tomography (CT) because of the extremely low radiation, which makes automatic mandible segmentation from CBCT data challenging. In this work, we propose a novel coarse-to-fine segmentation framework based on a 3D convolutional neural network and a recurrent SegUnet for mandible segmentation in CBCT scans. Specifically, the segmentation is decomposed into two stages: localization of the mandible-like region by rough segmentation, followed by accurate segmentation of the mandible details. The method was evaluated on a dental CBCT dataset; in addition, we compared the proposed method with state-of-the-art methods on two CT datasets. The experiments indicate that, across these three datasets, the proposed algorithm provides more accurate and robust segmentation results for different imaging techniques than the state-of-the-art models.
Journal Article
Deep Learning Method for Mandibular Canal Segmentation in Dental Cone Beam Computed Tomography Volumes
by Sahlsten, Jaakko; Jaskari, Joel; Hietanen, Ari
2020
Accurate localisation of the mandibular canals in the lower jaw is important in dental implantology, where the implant position and dimensions are currently determined manually from 3D CT images by medical experts to avoid damaging the mandibular nerve inside the canal. Here we present a deep learning system for automatic localisation of the mandibular canals: a fully convolutional neural network is applied to segmentation on a clinically diverse dataset of 637 cone beam CT volumes in which the mandibular canals were coarsely annotated by radiologists, while a dataset of 15 volumes with accurate voxel-level mandibular canal annotations is used for model evaluation. We show that our deep learning model, trained on the coarsely annotated volumes, localises the mandibular canals of the voxel-level annotated set highly accurately, with a mean curve distance of 0.56 mm and an average symmetric surface distance of 0.45 mm. These highly accurate results suggest that deep learning integrated into the dental implantology workflow could significantly reduce the manual labour of mandibular canal annotation.
Journal Article
Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action
by Chen, Xiaojun; Schwenzer-Zimmerer, Katja; Mischak, Irene
in Algorithms; Analysis; Augmented reality
2018
Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, limited personnel resources, or license-based costs, segmentation is often outsourced from clinical centers to third parties and industry. The aim of this trial was therefore to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice.
In this retrospective, randomized, controlled trial, the accuracy and agreement of the open-source segmentation algorithm GrowCut were assessed by comparison with a manually generated ground truth of the same anatomy, using 10 lower-jaw CT datasets from clinical routine. Assessment parameters were segmentation time, volume, voxel count, Dice score and Hausdorff distance.
Overall, semi-automatic GrowCut segmentation took about one minute. Mean Dice scores above 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth. Differences between the assessment parameters were not statistically significant at the p < 0.05 level, and correlation coefficients were close to one (r > 0.94) for all comparisons between the two groups.
Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Owing to its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons with other segmentation approaches and with larger datasets are areas for future work.
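GrowCut itself is a seeded cellular automaton: each labeled pixel "attacks" its neighbours with a strength attenuated by the local intensity difference, so labels flood uniform regions but stall at edges. A minimal 2D NumPy sketch of that idea; the toy image, seed placement and iteration count are illustrative, not details from the study:

```python
import numpy as np

def growcut(image, seeds, n_iter=40):
    """Minimal GrowCut: seeds hold 0 = unlabeled, 1 = object, 2 = background."""
    labels = seeds.astype(int).copy()
    strength = (seeds > 0).astype(float)   # seed pixels start at full strength
    g_max = float(image.max() - image.min()) or 1.0
    h, w = image.shape
    for _ in range(n_iter):
        new_labels, new_strength = labels.copy(), strength.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != 0:
                        # Attack strength decays with the intensity difference.
                        g = 1.0 - abs(float(image[ny, nx]) - float(image[y, x])) / g_max
                        if g * strength[ny, nx] > new_strength[y, x]:
                            new_strength[y, x] = g * strength[ny, nx]
                            new_labels[y, x] = labels[ny, nx]
        labels, strength = new_labels, new_strength
    return labels

# Bright square on a dark background, one seed per class:
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
seeds = np.zeros((16, 16), dtype=int); seeds[8, 8] = 1; seeds[0, 0] = 2
out = growcut(img, seeds)
print((out[4:12, 4:12] == 1).all())  # True: the object label fills the square
```

Because the attack strength falls to zero across a full-range intensity jump, neither label crosses the square's boundary, which is the mechanism that makes the semi-automatic approach stop at anatomical edges from only a few user seeds.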
Journal Article
Automatic mandibular canal detection using a deep convolutional neural network
by Kwak, Gloria Hyunjung; Park, Hae Ryoun; Cho, Bong-Hae
2020
The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In line with increasing demands in the healthcare industry, techniques for automatic prediction and detection are being widely researched. In dentistry in particular, automated mandibular canal detection has become highly desirable for various reasons. The position of the inferior alveolar nerve (IAN), one of the major structures in the mandible, is crucial to prevent nerve injury during surgical procedures. However, automatic segmentation from cone beam computed tomography (CBCT) poses certain difficulties, such as the complex appearance of the human skull, the limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet and 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrated a higher global accuracy of 0.82 than naïve U-Net variants. The 2D SegNet showed the second-highest global accuracy of 0.96, and the 3D U-Net showed the best global accuracy of 0.99. An automated canal detection system based on deep learning will contribute significantly to efficient treatment planning and to reducing patient discomfort. This study is a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
Journal Article
Automatic jawbone structure segmentation on dental CBCT images via deep learning
by Liu, Lichao; Wang, Ge; Tian, Yuan
in Cancellous bone; Cancellous Bone - diagnostic imaging; Computed tomography
2024
Objectives
This study developed and evaluated a two-stage deep learning-based system for automatic segmentation of mandibular cortical bone, mandibular cancellous bone, maxillary cortical bone and maxillary cancellous bone on cone beam computed tomography (CBCT) images.
Materials and methods
A dataset containing 155 CBCT scans acquired with different parameters was obtained. A two-stage deep learning-based system was developed for automatically segmenting jawbone structures. The Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) were used to assess the segmentation performance of the system by comparing the automatic segmentation results with the ground truth. The impact of dental and quality abnormalities on segmentation performance was analysed, and a comparison of automatic segmentation (AS) with manually refined segmentation (MRS) was reported.
Results
The system achieved promising segmentation performance, with average DSC values of 93.69%, 96.83%, 86.14% and 95.57% and average ASSD values of 0.13 mm, 0.16 mm, 0.29 mm and 0.41 mm for the mandibular cortical bone, mandibular cancellous bone, maxillary cortical bone and maxillary cancellous bone, respectively. Quality abnormalities had a negative impact on segmentation performance. The performance metrics (DSCs > 98.8% and ASSDs < 0.1 mm) indicated high overlap between the AS and MRS.
Conclusion
The proposed system offers an accurate and time-efficient method for segmenting jawbone structures on CBCT images.
Clinical relevance
Automatically segmenting jawbone structures is essential in most digital dental workflows. The proposed system has considerable potential for application in digital clinical workflows to assist dentists in making more accurate diagnoses and developing patient-specific treatment plans.
Journal Article