Catalogue Search | MBRL
Explore the vast range of titles available.
14,606 result(s) for "deep convolutional neural network"
Automatic Diagnosis of Diabetic Retinopathy Stage Focusing Exclusively on Retinal Hemorrhage
by Toshihiko Nagasawa; Yuki Yoshizumi; Yoshihiro Tokuda
in Accuracy, Artificial Intelligence, Blood vessels
2022
Background and Objectives: The present study evaluated the detection of diabetic retinopathy (DR) with an automated fundus camera, focusing exclusively on retinal hemorrhage (RH), using a deep convolutional neural network, a machine-learning technology. Materials and Methods: This investigation was conducted as a prospective, observational study. The study included 89 fundus ophthalmoscopy images. Seventy images passed an image quality review and were graded as showing no apparent DR (n = 51), mild nonproliferative DR (NPDR; n = 16), moderate NPDR (n = 1), severe NPDR (n = 1), and proliferative DR (n = 1) by three retinal experts according to the International Clinical Diabetic Retinopathy Severity scale. The RH numbers and areas were automatically detected, and the results of two tests, detection of mild-or-worse NPDR and detection of moderate-or-worse NPDR, were examined. Results: Detection of mild-or-worse DR showed a sensitivity of 0.812 (95% confidence interval: 0.680–0.945), specificity of 0.888, and area under the curve (AUC) of 0.884, whereas detection of moderate-or-worse DR showed a sensitivity of 1.0, specificity of 1.0, and AUC of 1.0. Conclusions: Automated diagnosis using artificial intelligence focusing exclusively on RH could be used to diagnose DR requiring ophthalmologist intervention.
Journal Article
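As a hedged illustration of the kind of evaluation reported above, the sketch below computes sensitivity, specificity, and AUC for one binary grading task (mild-or-worse NPDR vs. no apparent DR) from a hemorrhage-derived score; the score array, labels, and operating threshold are hypothetical placeholders, not the study's data or code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical per-image data: a hemorrhage-derived score (e.g., detected RH count
# or area) and the expert-assigned label (1 = mild-or-worse NPDR, 0 = no apparent DR).
scores = np.array([0.0, 2.0, 0.5, 7.0, 0.0, 3.5, 1.0, 6.0])
labels = np.array([0,   1,   0,   1,   0,   1,   0,   1])

# AUC over the continuous score.
auc = roc_auc_score(labels, scores)

# Sensitivity/specificity at an assumed operating threshold of 1.0.
preds = (scores >= 1.0).astype(int)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"AUC={auc:.3f}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```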
Efficient detection of glaucoma using double tier deep convolutional neural network
by Pitchai, R.; Babu, Ch. Madhu; Prabaharan, G.
in Artificial neural networks, Computer Science, Eye diseases
2023
Glaucoma is an ocular disease in which increased intraocular pressure causes irreversible damage to the optic nerve, leading to blindness. Early detection and glaucoma screening can prevent loss of vision. A common way to diagnose the progression of glaucoma is examination of the dilated pupil of the eye by a specialist ophthalmologist, but this approach is difficult and time-consuming, so automation based on a double tier deep convolutional neural network can address the problem. This network is well suited to the task because it extracts hierarchical features from the image that allow it to discriminate between glaucomatous and non-glaucomatous diagnostic patterns. It is composed of different tiers, each containing hidden layers such as convolution, max pooling, and fully connected layers ahead of the output. The retinal images are processed in the hidden layers, and the results obtained are combined and classified as normal or glaucomatous. The efficiency of the double tier deep convolutional neural network has been compared with existing recognition methodologies. The outcomes show that the double tier deep convolutional neural network performs better than the other methodologies, with an accuracy of 92.64%, sensitivity of 92.18%, specificity of 91.20%, and precision of 90.76%.
Journal Article
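The following is a minimal Keras sketch of a two-tier convolutional classifier in the spirit of the double tier network described above; the layer widths, input resolution, and training settings are assumptions for illustration, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_double_tier_cnn(input_shape=(224, 224, 3)):
    """Two stacked convolution + max-pooling tiers feeding fully connected layers."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Tier 1: convolution + max pooling
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        # Tier 2: convolution + max pooling
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        # Fully connected head with a binary output (normal vs. glaucomatous)
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_double_tier_cnn()
model.summary()
```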
Optimized Deep Learning Model for Disease Prediction in Potato Leaves
by Shelke, Chetan J; Sharma, Nonita; Shrivastava, Virendra Kumar
in Artificial intelligence, Artificial neural networks, Climate change
2024
Food crops are important for nations and for human survival, and potatoes are one of the most widely consumed foods globally. However, several diseases hamper potato growth and production. Traditional methods for diagnosing disease in potato leaves are based on human observation and laboratory tests, which are cumbersome and time-consuming. New technologies such as artificial intelligence and deep learning can play a vital role in disease detection. This research proposes an optimized deep learning model to predict potato leaf diseases. The model is trained on a collection of potato leaf image datasets and is based on a deep convolutional neural network architecture; data augmentation, transfer learning, and hyper-parameter tuning are used to optimize the proposed model. Results indicate that the optimized deep convolutional neural network model achieves 99.22% prediction accuracy on the Potato Disease Leaf Dataset.
Journal Article
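Below is a hedged Keras sketch of the general recipe described in the abstract (data augmentation plus transfer learning with a tunable classification head); the EfficientNetB0 backbone, image size, and class count are illustrative assumptions, since the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3          # assumed classes, e.g., early blight, late blight, healthy
IMG_SIZE = (224, 224)

# On-the-fly data augmentation, as suggested in the abstract.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Transfer learning: a pretrained backbone with a new classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the backbone for the first training stage

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)               # dropout rate is a tunable hyper-parameter
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```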
Water chaotic fruit fly optimization-based deep convolutional neural network for image watermarking using wavelet transform
by D, Jayadevappa; Ingaleshwar, Subodh; Dharwadkar, Nagaraj V
in Algorithms, Artificial neural networks, Brain cancer
2023
Due to the rapid growth of multimedia over network technology, accessing digital media has become very easy; hence, protecting intellectual property has drawn increasing interest to image watermarking. Various image watermarking approaches have been developed, but they suffer from robustness and transparency issues. Therefore, an effective image watermarking method named the Water Chaotic Fruit Fly Optimization algorithm-based Deep Convolutional Neural Network (WCFOA-based Deep CNN) is developed for embedding a secret message in the cover media. The proposed WCFOA is developed by integrating Water Wave Optimization (WWO) with the Chaotic Fruit Fly Optimization Algorithm (CFOA). The propagation and refraction operators increase the diversity of the population and minimize premature convergence, while the breaking, propagation, and refraction operators together balance exploration and exploitation of the search space using the fitness measure. The embedding is then performed with the wavelet transform in the optimal region selected according to the evaluated fitness value. Several brain tumor images from the BRATS dataset, with tumors of different contrast and form, are used to assess the proposed method. The experimental analysis shows that the proposed WCFOA-based Deep CNN obtains better performance in terms of metrics such as the correlation coefficient and peak signal-to-noise ratio (PSNR), with values of 1 and 45.2157 in the noise-free scenario, 0.9918 and 45.0627 under impulse noise, 0.9918 and 47.001 under salt-and-pepper noise, and 0.990 and 46.985 under Gaussian noise, respectively.
Journal Article
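As a simplified, hedged sketch of wavelet-domain watermarking, the code below additively embeds a binary message in one DWT subband and measures PSNR; it omits the WCFOA optimization and Deep CNN components of the proposed method, and the subband, wavelet, and embedding strength are assumptions.

```python
import numpy as np
import pywt

def embed_watermark(cover, watermark_bits, alpha=8.0):
    """Additively embed a binary watermark into the HL subband of a one-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    flat = HL.flatten()
    # Map bits {0, 1} to {-1, +1} and add them with strength alpha.
    flat[:len(watermark_bits)] += alpha * (2 * np.asarray(watermark_bits) - 1)
    HL_marked = flat.reshape(HL.shape)
    return pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")

def psnr(original, distorted, peak=255.0):
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

cover = np.random.randint(0, 256, (256, 256))   # stand-in for a cover image
bits = np.random.randint(0, 2, 1024)            # hypothetical secret message
marked = embed_watermark(cover, bits)
print(f"PSNR of watermarked image: {psnr(cover, marked):.2f} dB")
```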
Improved Facial Expression Recognition Based on DWT Feature for Deep CNN
by Bendjillali, Ridha Ilyas; Taleb-Ahmed, Abdelmalik; Beladgham, Mohammed
in Accuracy, Algorithms, Artificial intelligence
2019
Facial expression recognition (FER) has become one of the most important fields of research in pattern recognition. In this paper, we propose a method for identifying people's facial expressions of emotion. Robust against illumination changes, the method combines four steps: the Viola–Jones face detection algorithm, facial image enhancement using the contrast limited adaptive histogram equalization (CLAHE) algorithm, the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). Viola–Jones is used to locate the face and facial parts; the facial image is enhanced using CLAHE; facial feature extraction is then performed with the DWT; and finally, the extracted features are used directly to train the CNN to classify the facial expressions. Our experimental work was performed on the CK+ database and the JAFFE face database, and the recognition rates obtained with this network were 96.46% and 98.43%, respectively.
Journal Article
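The preprocessing pipeline named in the abstract (Viola–Jones detection, CLAHE enhancement, DWT feature extraction ahead of a CNN) could look roughly like the following OpenCV/PyWavelets sketch; the cascade file, face size, and wavelet are assumptions, and the image path is hypothetical.

```python
import cv2
import pywt

def extract_dwt_features(image_path, face_size=(128, 128)):
    """Viola-Jones face detection -> CLAHE enhancement -> one-level DWT features."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # 1) Viola-Jones face detection with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], face_size)

    # 2) Contrast limited adaptive histogram equalization (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    face = clahe.apply(face)

    # 3) Discrete wavelet transform; the approximation subband serves as the feature map.
    LL, (LH, HL, HH) = pywt.dwt2(face.astype(float), "haar")
    return LL  # e.g., fed to the CNN as a (64, 64) input channel

features = extract_dwt_features("example_face.jpg")  # hypothetical image path
```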
Automated Estimation of Mammary Gland Content Ratio Using Regression Deep Convolutional Neural Network and the Effectiveness in Clinical Practice as Explainable Artificial Intelligence
by Kai, Chiharu; Nara, Miyako; Futamura, Hitoshi
in Accuracy, Artificial intelligence, Automation
2023
Recently, breast types were categorized into four types based on the Breast Imaging Reporting and Data System (BI-RADS) atlas, and evaluating them is vital in clinical practice. A Japanese guideline, called breast composition, was developed for the breast types based on BI-RADS. The guideline is characterized by a continuous value, the mammary gland content ratio, calculated to determine the breast composition, therefore allowing a more objective and visual evaluation. Although a discriminative deep convolutional neural network (DCNN) has conventionally been developed to classify the breast composition, it can produce errors of two steps or more. Hence, we propose an alternative regression DCNN based on the mammary gland content ratio. We used 1476 images evaluated by an expert physician. Our regression DCNN contained four convolution layers and three fully connected layers. Consequently, we obtained a high correlation of 0.93 (p < 0.01). Furthermore, to scrutinize the effectiveness of the regression DCNN, we categorized breast composition using the ratio estimated by the regression DCNN. The agreement rate is high at 84.8%, suggesting that the breast composition can be determined using the regression DCNN with high accuracy. Moreover, errors of two steps or more are unlikely, and the estimated results can be interpreted intuitively.
Journal Article
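A minimal Keras sketch of a regression DCNN with four convolution layers and three fully connected layers, as described above, predicting a continuous ratio in [0, 1]; the filter counts, pooling, input size, and loss are assumptions rather than the authors' design.

```python
from tensorflow.keras import layers, models

def build_regression_dcnn(input_shape=(256, 256, 1)):
    """Four convolution layers and three fully connected layers with a scalar output."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128):                # four convolution layers
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D(2))            # pooling is an added assumption
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))  # fully connected layer 1
    model.add(layers.Dense(64, activation="relu"))   # fully connected layer 2
    model.add(layers.Dense(1, activation="sigmoid")) # layer 3: ratio constrained to [0, 1]
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_regression_dcnn()
model.summary()
```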
Solar module defects classification using deep convolutional neural network
by Cahyaningtyas, Rizqia; Indarti, Dina; Bertalya, Bertalya
in Accuracy, Alternative energy, Artificial neural networks
2025
Solar modules are essential components of a solar power plant and are designed to withstand scorching heat, storms, strong winds, and other natural influences. However, continuous usage can cause defects in solar modules, preventing them from producing electrical energy optimally. This paper proposes the development of a deep learning-based system for identifying and classifying solar module surface defects in solar power plants. Module surface conditions are classified into five categories: clean, dirt, burn, crack, and snail track. The dataset consists of 8,370 images, including primary image data acquired directly from the mini solar power plant at the Renewable Energy Laboratory of PLN Institute of Technology and secondary image data obtained from public repositories. The limited number of images in each category was overcome using data augmentation techniques. The proposed classification model combines Deep Convolutional Neural Networks (DCNN) with transfer learning models (DenseNet201, MobileNetV2, and EfficientNetB0) to perform supervised image classification. Training and testing results on the three models demonstrate that the DCNN + DenseNet201 combination provides the best performance, with a classification accuracy of 97.85%, compared with 97.25% for DCNN + EfficientNetB0 and 94.98% for DCNN + MobileNetV2. This research shows that DCNN-based image classification reliably diagnoses solar module defects and supports the use of RGB images for surface defect classification. Applying the developed system to solar power plant maintenance management can help accelerate the identification of panel defects, determination of defect types, and panel maintenance or repair, while ensuring optimal power production.
Journal Article
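A hedged Keras sketch of comparing the three pretrained backbones named in the abstract (DenseNet201, MobileNetV2, EfficientNetB0) as frozen feature extractors with an identical head over the five surface-condition classes; the head layout, input size, and training settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["clean", "dirt", "burn", "crack", "snail track"]
BACKBONES = {
    "DenseNet201": tf.keras.applications.DenseNet201,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
}

def build_classifier(backbone_fn, input_shape=(224, 224, 3)):
    """Frozen pretrained backbone plus a small dense head over the five classes."""
    base = backbone_fn(include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False
    inputs = layers.Input(shape=input_shape)
    # In a real pipeline, each backbone's own preprocess_input would be applied here.
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(len(CLASSES), activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

models_by_name = {name: build_classifier(fn) for name, fn in BACKBONES.items()}
```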
Depth Map Super-Resolution Reconstruction Based on Multi-Channel Progressive Attention Fusion Network
2023
Depth maps captured by traditional consumer-grade depth cameras are often noisy and low-resolution. Especially when upsampling low-resolution depth maps with large upsampling factors, the resulting depth maps tend to suffer from vague edges. To address these issues, we propose a multi-channel progressive attention fusion network that utilizes a pyramid structure to progressively recover high-resolution depth maps. The inputs of the network are the low-resolution depth image and its corresponding color image. The color image is used as prior information in this network to fill in the missing high-frequency information of the depth image. Then, an attention-based multi-branch feature fusion module is employed to mitigate the texture replication issue caused by incorrect guidance from the color image and inconsistencies between the color image and the depth map. This module restores the HR depth map by effectively integrating the information from both inputs. Extensive experimental results demonstrate that our proposed method outperforms existing methods.
Journal Article
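The following PyTorch sketch illustrates the general idea of color-guided depth upsampling with channel attention; it is a simplified single fusion block under assumed channel sizes, not the paper's multi-channel progressive attention fusion network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedFusionBlock(nn.Module):
    """Fuse depth and color features with channel attention over their concatenation."""
    def __init__(self, channels=32):
        super().__init__()
        self.depth_conv = nn.Conv2d(1, channels, 3, padding=1)
        self.color_conv = nn.Conv2d(3, channels, 3, padding=1)
        # Channel attention: global pooling -> bottleneck MLP -> per-channel weights.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 2 * channels, 1), nn.Sigmoid(),
        )
        self.out_conv = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, depth_lr, color_hr):
        # Upsample the low-resolution depth to the color image's resolution.
        depth_up = F.interpolate(depth_lr, size=color_hr.shape[-2:],
                                 mode="bilinear", align_corners=False)
        feats = torch.cat([self.depth_conv(depth_up), self.color_conv(color_hr)], dim=1)
        feats = feats * self.attn(feats)          # re-weight channels
        return depth_up + self.out_conv(feats)    # residual refinement of the depth map

block = GuidedFusionBlock()
depth = torch.rand(1, 1, 64, 64)     # low-resolution depth map
color = torch.rand(1, 3, 256, 256)   # corresponding high-resolution color image
sr_depth = block(depth, color)       # output shape: (1, 1, 256, 256)
```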
Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model’s Classification Performance for Alzheimer’s Disease
by Chang-Soo Yun; Kyuseok Kim; Min-Hee Lee
in 18F-FDG PET, 18F-FDG PET; deep convolutional neural network; Alzheimer’s disease, Accuracy
2022
Alzheimer’s disease (AD) is the most common progressive neurodegenerative disease. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) is widely used to predict AD using a deep learning model. However, the effects of noise and blurring on 18F-FDG PET images were not considered. The performance of a classification model trained using raw, deblurred (by the fast total variation deblurring method), or denoised (by the median modified Wiener filter) 18F-FDG PET images without or with cropping around the limbic system area using a 3D deep convolutional neural network was investigated. The classification model trained using denoised whole-brain 18F-FDG PET images achieved classification performance (0.75/0.65/0.79/0.39 for sensitivity/specificity/F1-score/Matthews correlation coefficient (MCC), respectively) higher than that with raw and deblurred 18F-FDG PET images. The classification model trained using cropped raw 18F-FDG PET images achieved higher performance (0.78/0.63/0.81/0.40 for sensitivity/specificity/F1-score/MCC) than the whole-brain 18F-FDG PET images (0.72/0.32/0.71/0.10 for sensitivity/specificity/F1-score/MCC, respectively). The 18F-FDG PET image deblurring and cropping (0.89/0.67/0.88/0.57 for sensitivity/specificity/F1-score/MCC) procedures were the most helpful for improving performance. For this model, the right middle frontal, middle temporal, insula, and hippocampus areas were the most predictive of AD using the class activation map. Our findings demonstrate that 18F-FDG PET image preprocessing and cropping improves the explainability and potential clinical applicability of deep learning models.
Journal Article
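A small Keras sketch of a 3D convolutional classifier for volumetric PET input (AD vs. control), of the general kind the abstract describes; the input shape, layer widths, and metrics are assumptions, not the study's architecture, and the denoising, deblurring, and cropping steps are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3d_cnn(input_shape=(64, 64, 64, 1)):
    """A small 3D convolutional classifier for volumetric PET inputs (AD vs. control)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC()])
    return model

model = build_3d_cnn()
model.summary()
```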
A Novel Approach for Detection of Counterfeit Indian Currency Notes Using Deep Convolutional Neural Network
by Singal, Gaurav; Nethravathi, R.; Sirikonda, Shwetha
in Artificial neural networks, Counterfeit, Counterfeit detection
2020
In recent years, the Indian economy has shown rapid growth among the major economies. However, India is plagued by issues such as corruption and black money, and counterfeit currency notes are an additional major problem, despite the strong security features mandated by the RBI for printing genuine currency. Advances in color printing technology have helped local and foreign racketeers put large quantities of counterfeit Indian currency notes into the market. Although counterfeit money is printed with considerable accuracy, it can still be distinguished with some effort. In this paper, the proposed model efficiently detects counterfeit Indian currency notes by adopting a three-layered Deep Convolutional Neural Network (Deep ConvNet) and achieves an accuracy of 96.6%.
Journal Article
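A hedged PyTorch sketch of a three-convolution-layer classifier of the kind the abstract calls a three-layered Deep ConvNet, separating genuine from counterfeit notes; the channel counts, input resolution, and binary output head are assumptions.

```python
import torch
import torch.nn as nn

class ThreeLayerConvNet(nn.Module):
    """Three convolution + pooling blocks followed by a genuine/counterfeit head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 2),                      # genuine vs. counterfeit
        )

    def forward(self, x):                           # x: (N, 3, 224, 224)
        return self.classifier(self.features(x))

model = ThreeLayerConvNet()
logits = model(torch.rand(1, 3, 224, 224))          # sanity-check forward pass
```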