54 result(s) for "Ariji, Yoshiko"
Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography
Objectives: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for detecting vertical root fracture (VRF) on panoramic radiography.
Methods: Three hundred panoramic images containing a total of 330 VRF teeth with clearly visible fracture lines were selected from our hospital imaging database. Confirmation of VRF lines was performed by two radiologists and one endodontist. Eighty percent (240 images) of the 300 images were assigned to a training set and 20% (60 images) to a test set. A CNN-based deep learning model for the detection of VRFs was built using DetectNet with DIGITS version 5.0. To avoid test data selection bias and increase reliability, fivefold cross-validation was performed. Diagnostic performance was evaluated using recall, precision, and F measure.
Results: Of the 330 VRFs, 267 were detected. Twenty teeth without fractures were falsely detected. Recall was 0.75, precision 0.93, and F measure 0.83.
Conclusions: The CNN learning model has shown promise as a tool to detect VRFs on panoramic images and to function as a CAD tool.
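As a quick illustration of how the reported F measure relates to recall and precision, the value can be recomputed as their harmonic mean (a generic sketch, not code from the study):

```python
def f_measure(recall: float, precision: float) -> float:
    """F measure (F1 score): the harmonic mean of recall and precision."""
    return 2 * recall * precision / (recall + precision)

# Reported values from the abstract: recall 0.75, precision 0.93
print(round(f_measure(0.75, 0.93), 2))  # 0.83, matching the reported F measure
```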
Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography
Objectives: To apply a deep-learning system for the diagnosis of maxillary sinusitis on panoramic radiography, and to clarify its diagnostic performance.
Methods: Training data for 400 healthy and 400 inflamed maxillary sinuses were expanded to 6000 samples in each category by data augmentation. Image patches were input into a deep-learning system, the learning process was repeated for 200 epochs, and a learning model was created. Newly prepared testing image patches from 60 healthy and 60 inflamed sinuses were input into the learning model, and the diagnostic performance was calculated. Receiver-operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were obtained. The results were compared with those of two experienced radiologists and two dental residents.
Results: The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was high, with accuracy of 87.5%, sensitivity of 86.7%, specificity of 88.3%, and AUC of 0.875. These values showed no significant differences compared with those of the radiologists and were higher than those of the dental residents.
Conclusions: The diagnostic performance of the deep-learning system for maxillary sinusitis on panoramic radiographs was sufficiently high. Results from the deep-learning system are expected to provide diagnostic support for inexperienced dentists.
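The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. A minimal sketch, using counts inferred from the stated percentages on the 60 + 60 test sinuses (an assumption for illustration, not data from the paper):

```python
def binary_metrics(tp: int, fn: int, tn: int, fp: int):
    """Standard diagnostic metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts consistent with the abstract: 52/60 inflamed and 53/60 healthy correct
sens, spec, acc = binary_metrics(tp=52, fn=8, tn=53, fp=7)
print(f"{sens:.1%} {spec:.1%} {acc:.1%}")  # 86.7% 88.3% 87.5%
```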
Pulp regeneration by transplantation of dental pulp stem cells in pulpitis: a pilot clinical study
Background: Experiments have previously demonstrated the therapeutic potential of mobilized dental pulp stem cells (MDPSCs) for complete pulp regeneration. The aim of the present pilot clinical study was to assess the safety, potential efficacy, and feasibility of autologous transplantation of MDPSCs in pulpectomized teeth.
Methods: Five patients with irreversible pulpitis were enrolled and monitored for up to 24 weeks following MDPSC transplantation. The MDPSCs were isolated from discarded teeth and expanded based on good manufacturing practice (GMP). The quality of the MDPSCs at passage 9 or 10 was ascertained by karyotype analyses. The MDPSCs were transplanted with granulocyte colony-stimulating factor (G-CSF) in atelocollagen into pulpectomized teeth.
Results: The clinical and laboratory evaluations demonstrated no adverse events or toxicity. The electric pulp test (EPT) of the pulp at 4 weeks demonstrated a robust positive response. The signal intensity of magnetic resonance imaging (MRI) of the regenerated tissue in the root canal after 24 weeks was similar to that of normal dental pulp in the untreated control. Finally, cone beam computed tomography demonstrated functional dentin formation in three of the five patients.
Conclusions: In this pilot clinical study, human MDPSCs were safe and efficacious for complete pulp regeneration in humans.
Detection and classification of unilateral cleft alveolus with and without cleft palate on panoramic radiographs using a deep learning system
Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process; to verify its performance in comparison with human observers; and to clarify characteristic appearances likely related to that performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models on DetectNet. Models 1 and 2 were developed from the data without and with normal subjects, respectively, to detect CAs and classify them as with or without CP. Model 2 reduced the false positive rate (1/30) compared with Model 1 (12/30). The overall accuracy of Model 2 was higher than that of Model 1 and of the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs, and may be useful for assisting human observers.
A preliminary deep learning study on automatic segmentation of contrast-enhanced bolus in videofluorography of swallowing
Although videofluorography (VFG) is an effective tool for evaluating swallowing function, its accurate evaluation requires considerable time and effort. This study aimed to create a deep learning model for automated bolus segmentation on VFG images of patients with healthy swallowing and dysphagia, and to assess the performance of the method. VFG images of 72 swallows from 12 patients were continuously converted into 15 static images per second. In total, 3910 images were arbitrarily assigned to the training, validation, test 1, and test 2 datasets. In the training and validation datasets, images with the bolus areas colored were prepared along with the original images. Using a U-Net neural network, a trained model was created after 500 epochs of training. The test datasets were applied to the trained model, and the performance of automatic segmentation (Jaccard index, Sørensen–Dice coefficient, and sensitivity) was calculated. All performance values for segmentation of the test 1 and test 2 datasets were high, exceeding 0.9. Using a deep learning segmentation method, we automatically segmented the bolus areas on VFG images with high performance. The model also allowed assessment of aspiration and laryngeal invasion.
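The Jaccard index and Sørensen–Dice coefficient cited above are both overlap ratios between a predicted and a ground-truth binary mask. A minimal sketch on toy flattened masks (illustrative only, not code from the study):

```python
def jaccard_and_dice(pred: list[int], truth: list[int]):
    """Overlap metrics for two flat binary masks of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))   # |A ∩ B|
    union = sum(p | t for p, t in zip(pred, truth))   # |A ∪ B|
    jaccard = inter / union
    dice = 2 * inter / (sum(pred) + sum(truth))       # 2|A ∩ B| / (|A| + |B|)
    return jaccard, dice

# Toy 2x2 masks flattened row-wise: exactly one overlapping pixel
j, d = jaccard_and_dice([1, 1, 0, 0], [1, 0, 1, 0])
print(j, d)  # 0.333..., 0.5; note that dice == 2*j / (1 + j)
```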
CT evaluation of extranodal extension of cervical lymph node metastases in patients with oral squamous cell carcinoma using deep learning classification
Objective: To clarify the diagnostic performance of CT for extranodal extension of cervical lymph node metastases using deep learning classification.
Methods: Seven hundred and three CT images (178 with and 525 without extranodal extension) from 51 patients with cervical lymph node metastases from oral squamous cell carcinoma were enrolled in this study. CT images were cropped to an arbitrary size to include the lymph nodes and surrounding tissues. All images were automatically divided into two datasets, assigning 80% as the training dataset and 20% as the testing dataset. The automated selection was repeated five times. Each training dataset was imported into the deep learning training system DIGITS. Five learning models were created after 300 epochs of the learning process using the neural network AlexNet. Each testing dataset was applied to its corresponding learning model, and the resulting five performances were averaged as the estimated diagnostic performance. One radiologist measured the minor axis and three radiologists evaluated central necrosis and irregular borders of each lymph node, and their diagnostic performances were obtained.
Results: The deep learning accuracy for extranodal extension was 84.0%. The radiologists' accuracies based on minor axis ≥ 11 mm, central necrosis, and irregular borders were 55.7%, 51.1%, and 62.6%, respectively.
Conclusions: The deep learning diagnostic performance for extranodal extension was significantly higher than that of the radiologists. This method is expected to improve diagnostic accuracy in further studies with larger numbers of patients.
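The repeated random 80/20 split described above (five independent holdout splits whose per-split performances are averaged) can be sketched as follows; the function name and seed are illustrative assumptions, not from the paper:

```python
import random

def repeated_holdout_splits(n_items: int, train_frac: float = 0.8,
                            repeats: int = 5, seed: int = 42):
    """Return `repeats` independent random (train, test) index splits."""
    rng = random.Random(seed)
    splits = []
    for _ in range(repeats):
        idx = list(range(n_items))
        rng.shuffle(idx)                     # fresh random order each repeat
        cut = int(n_items * train_frac)      # 80% boundary
        splits.append((idx[:cut], idx[cut:]))
    return splits

# 703 CT images -> five 80/20 splits; per-split accuracies would be averaged
splits = repeated_holdout_splits(703)
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 562 141
```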
Limitations of panoramic radiographs in predicting mandibular wisdom tooth extraction and the potential of deep learning models to overcome them
Surgeons routinely interpret preoperative radiographic images to estimate the shape and position of a tooth before extraction. In this study, we aimed to predict the difficulty of lower wisdom tooth extraction using only panoramic radiographs. Difficulty was evaluated using the modified Parant score. Two oral surgeons (a specialist and a clinical resident) predicted the difficulty level of the test data. This study also aimed to evaluate the performance of a deep learning model in predicting the necessity of tooth separation or bone removal during wisdom tooth extraction. Two convolutional neural networks (AlexNet and VGG-16) were created and trained using panoramic X-ray images. Both surgeons interpreted the same images and classified them into three groups. The accuracies were 54.4% for both surgeons, 57.7% for AlexNet, and 54.4% for VGG-16. These results indicate that accurately predicting the difficulty of wisdom tooth extraction from panoramic radiographs alone is challenging. However, AlexNet and VGG-16 had sensitivities of more than 90% for crown and root separation. The predictive ability of our proposed model is equivalent to that of an oral surgery specialist, and a recall value > 90% makes it suitable for screening in clinical settings.
Reliability of diagnostic imaging for degenerative diseases with osseous changes in the temporomandibular joint with special emphasis on subchondral cyst
Objectives: The present study aimed to clarify the reliabilities of four characteristic appearances, subchondral cyst, erosion, generalized sclerosis, and osteophyte, for evaluation of degenerative diseases with osseous changes in the temporomandibular joint (TMJ) using panoramic TMJ projection imaging and computed tomography (CT), and to investigate the imaging features of these modalities for subchondral cyst with reference to its magnetic resonance imaging (MRI) features.
Methods: The reliabilities (κ values) of panoramic TMJ projection and CT images were determined by three radiologists for each characteristic appearance of TMJ osseous changes in 146 condyles. The features of cyst-like areas on CT images with agreement among the three radiologists were investigated for size, location, and continuity with the joint space together with MRI signal intensity and surrounding edema-like lesions.
Results: Panoramic TMJ projection images showed moderate and substantial agreements for erosion and osteophyte evaluations, respectively, while CT images showed substantial agreements for subchondral cyst, erosion, and osteophyte evaluations. Cyst-like areas on CT images were predominantly located in the central parts, and 69 of 86 (80.2%) areas showed no communication with the joint space. Cyst-like areas with diameters exceeding 2 mm showed high or moderate MRI signal intensities. Edema-like lesions were observed in 10 of 28 (29.4%) condyles.
Conclusions: The reliabilities of panoramic TMJ projection and CT images were clarified for each characteristic appearance. The results support the bone contusion theory for the formation of subchondral cysts in the TMJ. A possible improvement in reliability is suggested relative to MRI findings.
Preliminary Study on the Diagnostic Performance of a Deep Learning System for Submandibular Gland Inflammation Using Ultrasonography Images
This study was performed to evaluate the diagnostic performance of deep learning systems using ultrasonography (USG) images of the submandibular glands (SMGs) in three different conditions: obstructive sialoadenitis, Sjögren's syndrome (SjS), and normal glands. Fifty USG images with a confirmed diagnosis of obstructive sialoadenitis, 50 USG images with a confirmed diagnosis of SjS, and 50 USG images with no SMG abnormalities were included in the study. The training group comprised 40 obstructive sialoadenitis images, 40 SjS images, and 40 control images, and the test group comprised 10 obstructive sialoadenitis images, 10 SjS images, and 10 control images for deep learning analysis. The performance of the deep learning system was calculated and compared with that of two experienced radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis group, SjS group, and control group was 55.0%, 83.0%, and 73.0%, respectively, and the total accuracy was 70.3%. The sensitivity of the two radiologists in the same groups was 64.0%, 72.0%, and 86.0%, respectively, and the total accuracy was 74.0%. This study revealed that, on USG images of the SMGs, the deep learning system was more sensitive than the experienced radiologists in diagnosing SjS.
Magnetic resonance imaging in endodontics: a literature review
Objectives: Magnetic resonance imaging (MRI) has recently been used for the evaluation of dental pulp anatomy, vitality, and regeneration. This study reviewed the recent use of MRI in the endodontic field.
Methods: Literature published from January 2000 to March 2017 was searched in PubMed using the following Medical Subject Heading (MeSH) terms: (1) MRI and (dental pulp anatomy or endodontic pulp); (2) MRI and dental pulp regeneration. Studies were narrowed down based on specific inclusion criteria and categorized as in vitro, in vivo, or dental pulp regeneration studies. The MRI sequences and imaging findings were summarized.
Results: In the in vitro studies on dental pulp anatomy, T1-weighted imaging with high resolution was frequently used to evaluate dental pulp morphology, demineralization depth, and tooth abnormalities. Other sequences, such as apparent diffusion coefficient mapping and sweep imaging with Fourier transformation, were used to evaluate pulpal fluid and decayed teeth, and short-T2 tissues (dentin and enamel), respectively. In the in vivo studies, pulp vitality and reperfusion were visible with fat-saturated T2-weighted imaging or contrast-enhanced T1-weighted imaging. In both the in vitro and in vivo studies, MRI could reveal pulp regeneration after stem cell therapy. Stem cells labeled with superparamagnetic iron oxide particles were also visible on MRI. Angiogenesis induced by stem cells could be confirmed on enhanced T1-weighted imaging.
Conclusion: MRI can be successfully used to visualize pulp morphology as well as pulp vitality and regeneration. The use of MRI in the endodontic field is likely to increase in the future.