20 results for "Muthu Subash Kavitha"
Exploring the Capabilities of a Lightweight CNN Model in Accurately Identifying Renal Abnormalities: Cysts, Stones, and Tumors, Using LIME and SHAP
Kidney abnormality is one of the major concerns in modern society, affecting millions of people around the world. To diagnose different abnormalities in human kidneys, a narrow-beam x-ray imaging procedure, computed tomography, is used, which creates cross-sectional slices of the kidneys. Several deep-learning models have been successfully applied to computed tomography images for classification and segmentation purposes. However, it has been difficult for clinicians to interpret a model's specific decisions, which in effect creates a "black box" system. It has also been difficult to integrate complex deep-learning models into internet-of-medical-things devices due to demanding training parameters and memory-resource costs. To overcome these issues, this study proposed (1) a lightweight customized convolutional neural network to detect kidney cysts, stones, and tumors, and (2) explainable AI outputs, with Shapley values based on the Shapley additive explanations (SHAP) and predictive results based on local interpretable model-agnostic explanations (LIME), to illustrate the deep-learning model's decisions. The proposed CNN model performed better than other state-of-the-art methods and obtained an accuracy of 99.52 ± 0.84% under K = 10-fold stratified sampling. With improved results and better interpretive power, the proposed work provides clinicians with conclusive and understandable results.
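The accuracy above is reported as mean ± standard deviation over K = 10 stratified folds. As a minimal sketch of that evaluation protocol in plain Python (not the paper's code; the helper names are hypothetical), stratified fold assignment and the mean ± std aggregation look like this:

```python
import math
import random
from collections import defaultdict

def stratified_kfold_indices(labels, k=10, seed=0):
    """Assign each sample index to one of k folds so that every fold
    approximately preserves the overall class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)  # round-robin keeps folds balanced
    return folds

def mean_std(scores):
    """Mean and population standard deviation of per-fold scores,
    e.g. for reporting an accuracy like 99.52 +/- 0.84%."""
    m = sum(scores) / len(scores)
    var = sum((s - m) ** 2 for s in scores) / len(scores)
    return m, math.sqrt(var)
```

Round-robin assignment within each class keeps per-fold class proportions close to the overall distribution, which is the point of stratification for imbalanced medical datasets.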
A robust approach for industrial small-object detection using an improved faster regional convolutional neural network
With the increasing pace of the industrial sector, the need for smart environments is also increasing, and the quality of manufactured products always matters. Industrial environments are under strong pressure to keep reducing unplanned downtime, performance degradation, and safety risks, which calls for an efficient solution that detects and corrects potential faults as early as possible. Production systems in industrial environments run at high speed and generate products rapidly, which sometimes leads to faulty products; this problem therefore needs to be solved efficiently. Framing it as faulty small-object detection, this study proposed an improved faster regional convolutional neural network-based model to detect faults in product images. We introduced a novel data-augmentation method along with a bi-cubic interpolation-based feature amplification method. A center loss was also introduced into the loss function to reduce the inter-class similarity issue. The experimental results show that the proposed improved model achieved better classification accuracy for detecting small faulty objects and performs better than state-of-the-art methods.
Deep Neural Network Models for Colon Cancer Screening
Early detection of colorectal cancer can significantly facilitate clinicians' decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, which improve accuracy and reduce user dependence with limited datasets. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.
Deep vector-based convolutional neural network approach for automatic recognition of colonies of induced pluripotent stem cells
Pluripotent stem cells can potentially be used in clinical applications as a model for studying disease progress. This tracking of disease-causing events in cells requires constant assessment of the quality of stem cells. Existing approaches are inadequate for robust and automated differentiation of stem cell colonies. In this study, we developed a new vector-based convolutional neural network (V-CNN) model built on extracted features of the induced pluripotent stem cell (iPSC) colony for distinguishing colony characteristics. A transfer function from the feature vectors to a virtual image was generated at the front of the CNN in order to classify feature vectors of healthy and unhealthy colonies. The robustness of the proposed V-CNN model in distinguishing colonies was compared with that of a competitive support vector machine (SVM) classifier based on morphological, textural, and combined features. Additionally, five-fold cross-validation was used to investigate the performance of the V-CNN model. The precision, recall, and F-measure values of the V-CNN model were comparatively higher than those of the SVM classifier, with a range of 87-93%, indicating lower false-positive and false-negative rates. Furthermore, for determining the quality of colonies, the V-CNN model showed higher accuracy values based on morphological (95.5%), textural (91.0%), and combined (93.2%) features than those estimated with the SVM classifier (86.7%, 83.3%, and 83.4%, respectively). Similarly, the accuracy of the feature sets using five-fold cross-validation was above 90% for the V-CNN model, whereas that yielded by the SVM model was in the range of 75-77%. We thus concluded that the proposed V-CNN model outperforms the conventional SVM classifier, which strongly suggests that it is a reliable framework for robust colony classification of iPSCs. It can also serve as a cost-effective quality recognition tool during culture and other experimental procedures.
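The precision, recall, and F-measure comparison above follows the standard definitions from a confusion matrix. A minimal sketch (illustrative only, not tied to the paper's implementation):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F-measure from raw confusion-matrix counts.
    Fewer false positives raise precision; fewer false negatives raise recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, 90 true positives with 10 false positives and 10 false negatives give precision, recall, and F-measure of 0.9 each — the 87-93% band reported above corresponds to counts in this regime.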
Automated Detection and Classification of Oral Squamous Cell Carcinoma Using Deep Neural Networks
This work aims to classify normal and carcinogenic cells in the oral cavity using two different approaches with an eye towards achieving high accuracy. The first approach extracts local binary patterns and histogram-derived metrics from the dataset and feeds them to several machine-learning models. The second approach uses a combination of neural networks as a backbone feature extractor and a random forest for classification. The results show that information can be learnt effectively from limited training images using these approaches. Some approaches use deep learning algorithms to generate a bounding box that can locate the suspected lesion. Other approaches use handcrafted textural feature extraction techniques and feed the resultant feature vectors to a classification model. The proposed method extracts features from the images using pre-trained convolutional neural networks (CNNs) and trains a classification model using the resulting feature vectors. By using the extracted features from a pre-trained CNN model to train a random forest, the problem of requiring a large amount of data to train deep learning models is bypassed. The study selected a dataset consisting of 1224 images, which were divided into two sets with varying resolutions. The performance of the model is calculated based on accuracy, specificity, sensitivity, and the area under the curve (AUC). The proposed work produced a highest test accuracy of 96.94% and an AUC of 0.976 using 696 images at 400× magnification, and a highest test accuracy of 99.65% and an AUC of 0.9983 using only 528 images at 100× magnification.
Distributional Variations in the Quantitative Cortical and Trabecular Bone Radiographic Measurements of Mandible, between Male and Female Populations of Korea, and its Utilization
It is important to investigate the irregularities in aging-associated bone changes between men and women with respect to bone strength and osteoporosis. The purpose of this study was to characterize the changes and associations of mandibular cortical and trabecular bone measures in men and women based on age, and to evaluate cortical shape categories, in a large Korean population. Panoramic radiographs of 1047 subjects (603 women and 444 men) aged 15 to 90 years were used. Mandibular cortical width (MCW), mandibular cortical index (MCI), and fractal dimensions (FD) of the molar, premolar, and anterior regions of the mandibular trabecular bone were measured. Study subjects were grouped into six 10-year age groups. Local linear regression smoothing with bootstrap resampling for robust fitting of the data was used to estimate the relationship between the radiographic mandibular variables and the age groups as well as sex. The mean age of the women (49.56 ± 19.5 years) was significantly higher than that of the men (45.57 ± 19.6 years). The MCW of men and women (3.17 mm and 2.91 mm, respectively, p < 0.0001) was strongly associated with age and MCI. Trabecular measures also correlated with age in men (r > -0.140, p = 0.003), though not as strongly as in women (r > -0.210, p < 0.0001). In men aged over 55 years, only MCW was significantly associated with age (r = -0.412, p < 0.0001). Furthermore, comparison of the mandibular variables across age groups and MCI categories suggests that MCW is strongly associated with bone strength and osteoporosis in both men and women. The FD measures showed a relatively higher association with age among women than men, but not as strong as that of MCW.
Smart Diagnosis of Adenocarcinoma Using Convolution Neural Networks and Support Vector Machines
Adenocarcinoma is a type of cancer that develops in the glands present on the lining of the organs in the human body. Histopathological images, obtained as a result of biopsy, are considered the most definitive way of diagnosing cancer. The main objective of this work is to use deep learning techniques for the detection and classification of adenocarcinoma using histopathological images of lung and colon tissues with minimal preprocessing. Two approaches have been utilized. The first entails creating two CNN architectures: a CNN with a Softmax classifier (AdenoCanNet) and a CNN with an SVM classifier (AdenoCanSVM). The second corresponds to training some of the prominent existing architectures, such as VGG16, VGG19, LeNet, and ResNet50. The study aims at understanding the performance of the various architectures in diagnosing from histopathological images, with cases taken separately and taken together, on both the full dataset and a subset of it. The LC25000 dataset used consists of 25,000 histopathological images, containing both cancerous and normal images from the lung and colon regions of the human body. The accuracy metric was taken as the defining parameter for determining and comparing the performance of the various architectures undertaken during the study. A comparison between the several models used in the study is presented and discussed.
Extensive framework based on novel convolutional and variational autoencoder based on maximization of mutual information for anomaly detection
In the present study, we proposed a general framework based on a convolutional kernel and a variational autoencoder (CVAE) for anomaly detection on both complex image and vector datasets. The main idea is to maximize mutual information (MMI) by regularizing key information as follows: (1) the features between the original input and the representation of the latent space, (2) those between the first convolutional layer output and the last convolutional layer input, and (3) the original input and the output of the decoder used to train the model. The proposed CVAE is therefore optimized by combining the representations learned across these three objectives, targeting MMI on both local and global variables, with the original training objective based on the Kullback–Leibler divergence. This provides additional supervision power for detecting anomalies in image and vector data using convolutional and fully connected layers, respectively. To the best of our knowledge, a CVAE that combines the regularization of multiple discriminator spaces to detect anomalies is introduced here for the first time. To evaluate the reliability of the proposed CVAE-MMI, it was compared with a convolutional autoencoder-based model using the original objective function. Furthermore, the performance of our network was compared against state-of-the-art approaches in distinguishing anomalies in both image and vector datasets. The proposed structure outperformed the state of the art with high and stable area-under-the-curve values.
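The Kullback–Leibler term in the standard VAE objective referenced above has a closed form when the approximate posterior is a diagonal Gaussian and the prior is a standard normal. A hedged sketch of that regularizer (the standard VAE term, not the authors' CVAE-MMI code):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    with mean `mu` and log-variance `log_var` per latent dimension:
        0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)
    This is the regularization term of the standard VAE objective."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))
```

The term is zero exactly when the posterior matches the prior (mu = 0, sigma = 1), and grows as the latent distribution drifts away from it, which is what pulls the latent space toward the prior during training.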
A Foreground Prototype-Based One-Shot Segmentation of Brain Tumors
The potential for enhancing brain tumor segmentation with few-shot learning is enormous. While several deep neural networks (DNNs) show promising segmentation results, they all require a substantial amount of training data to yield appropriate results. Moreover, a prominent problem for most of these models is performing well on unseen classes. To overcome these challenges, we propose a one-shot learning model to segment brain tumors in brain magnetic resonance images (MRI) based on a single prototype similarity score. Using recently developed few-shot learning techniques, in which training and testing are carried out on support and query sets of images, we attempt to acquire a definitive tumor region by focusing on slices containing foreground classes, unlike other recent DNNs that employ the entire set of images. The model is trained iteratively: in each iteration, random slices containing foreground classes from randomly sampled data are selected as the query set, along with a different random slice from the same sample as the support set. To compare query images with class prototypes, we used a metric learning-based approach with non-parametric thresholds. We employed the multimodal Brain Tumor Image Segmentation (BraTS) 2021 dataset with 60 training images and 350 testing images. The effectiveness of the model is evaluated using the mean Dice score and mean IoU score. The experiments yielded a Dice score of 83.42, higher than other works in the literature. Additionally, the proposed one-shot segmentation model outperforms conventional methods in terms of computational time, memory usage, and the amount of data required.
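The mean Dice and IoU scores used for evaluation above are simple overlap ratios between predicted and ground-truth masks. A minimal sketch for binary masks given as flat 0/1 sequences (illustrative only):

```python
def dice_and_iou(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B|
    for two binary masks; both score 1.0 on a perfect match."""
    inter = sum(p and t for p, t in zip(pred, target))
    a, b = sum(pred), sum(target)
    union = a + b - inter
    dice = 2 * inter / (a + b) if (a + b) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Dice is always at least as large as IoU for the same pair of masks, which is worth remembering when comparing scores reported under the two metrics.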
Automated Bone Marrow Cell Classification for Haematological Disease Diagnosis Using Siamese Neural Network
The critical structure and nature of the different bone marrow cells that underpin the diagnosis of haematological ailments require high-grade classification, which is a very prolonged process prone to human error when performed manually, even by field experts. Therefore, the aim of this research is to automate the process of studying and accurately classifying the structure of bone marrow cells, which will help diagnose haematological ailments at a much faster and better rate. Various machine learning algorithms and models, such as CNN + SVM, CNN + XGBoost, and a Siamese network, were trained and tested on a dataset of 170,000 expert-annotated cell images from the bone marrow smears of 945 patients with haematological disorders. The metrics used for evaluation in this research are model accuracy and the precision and recall of all the different cell classes. On these performance metrics, CNN + SVM and CNN + XGBoost resulted in 32% and 28% accuracy, respectively, and these models were therefore discarded. The Siamese neural network resulted in 91% accuracy and 84% validation accuracy. Moreover, its weighted average recall values were 92% for training and 91% for validation. Hence, the final results are based on the Siamese neural network model, as it outperformed all the other algorithms used in this research.