64 result(s) for "Alam, Md. Golam Rabiul"
Vision transformer and explainable transfer learning models for auto detection of kidney cyst, stone and tumor from CT-radiography
Renal failure is a public health concern, and the global scarcity of nephrologists has motivated the development of AI-based systems to auto-diagnose kidney diseases. This research addresses the three major categories of renal disease: kidney stones, cysts, and tumors. A total of 12,446 whole-abdomen and urogram CT images were gathered and annotated in order to construct an AI-based kidney disease diagnostic system and to contribute to the AI community’s research scope, e.g., modeling a digital twin of renal function. The collected images underwent exploratory data analysis, which revealed that images from all classes share the same type of mean color distribution. Six machine learning models were then built: three based on state-of-the-art Vision Transformer variants (EANet, CCT, and the Swin Transformer), and three based on the well-known deep learning models ResNet, VGG16, and Inception v3, with adjusted final layers. While the VGG16 and CCT models performed admirably, the Swin Transformer outperformed all of them with an accuracy of 99.30 percent. A comparison of F1 score, precision, and recall confirms that the Swin Transformer outperforms all other models, and it is also the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 in monitoring the relevant anatomical abnormalities. We believe that the superior accuracy of our Swin Transformer-based model and the VGG16-based model can both be useful in diagnosing kidney tumors, cysts, and stones.
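The exploratory step mentioned in the abstract, comparing per-class mean color distributions, can be sketched roughly as follows. This is only an illustration: the random arrays stand in for the 12,446 CT images (not available in this listing), and the function and class names are hypothetical, not from the paper.

```python
import numpy as np

def mean_color_distribution(images):
    """Per-channel mean intensity over a batch of RGB images of shape (H, W, 3)."""
    batch = np.stack(images).astype(np.float64)
    return batch.mean(axis=(0, 1, 2))  # one mean per color channel

# Synthetic stand-ins for the four image classes discussed in the paper.
rng = np.random.default_rng(0)
classes = {name: [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
           for name in ("cyst", "stone", "tumor", "normal")}

# Similar per-class means would suggest color alone is not discriminative.
per_class_means = {name: mean_color_distribution(imgs)
                   for name, imgs in classes.items()}
```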
Shapley-Additive-Explanations-Based Factor Analysis for Dengue Severity Prediction using Machine Learning
Dengue is a viral disease that primarily affects tropical and subtropical regions and is especially prevalent in South-East Asia. This mosquito-borne disease sometimes triggers nationwide epidemics, which result in a large number of fatalities. Most severe cases involve the development of Dengue Haemorrhagic Fever (DHF), a large portion of which are detected among children under the age of ten, with severe conditions often progressing to a critical state known as Dengue Shock Syndrome (DSS). In this study, we analysed two separate datasets from two different countries, Vietnam and Bangladesh, which we refer to as VDengu and BDengue, respectively. Since the VDengu dataset was structured, supervised learning models were effective for predictive analysis; among them, the decision tree-based classifier XGBoost produced the best outcome. Furthermore, Shapley Additive Explanations (SHAP) was applied over the XGBoost model to assess the significance of individual attributes of the dataset. For the significant attributes, we used SHAP dependence plots to identify the range of each attribute associated with the number of DHF or DSS cases. In parallel, because the dataset from Bangladesh was unstructured, we applied an unsupervised learning technique, hierarchical clustering, to find clusters of vital blood components of the patients according to their complete blood count reports. The clusters were further analysed to find the attributes in the dataset that led to DSS or DHF.
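The unsupervised branch of this study, hierarchical clustering over complete-blood-count features, might look roughly like the sketch below. The data here are synthetic stand-ins (the BDengue dataset is not public in this listing), and the feature names are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical CBC-style features: platelet count, hematocrit (%), WBC count.
cbc = np.vstack([
    rng.normal([250, 40, 7], [30, 3, 1.5], size=(40, 3)),  # typical profile
    rng.normal([80, 52, 4], [15, 3, 1.0], size=(20, 3)),   # DHF/DSS-like profile
])

# Standardize so no single blood component dominates the distance metric,
# then apply Ward-linkage hierarchical clustering.
X = StandardScaler().fit_transform(cbc)
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
```

Each resulting cluster would then be inspected for the attributes (e.g., low platelets, high hematocrit) that characterize DHF/DSS-prone patients.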
Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression
The food security of more than half of the world’s population depends on rice production, making yield improvement one of the key objectives of precision agriculture. The traditional rice almanac used astronomical and climate factors to estimate yield response. This research, however, integrates meteorological, agro-chemical, and soil physiographic factors for yield response prediction. In addition, the impact of those factors on the production of three major rice ecotypes is studied, and a distinct set of influential factors is identified for the yield response of each ecotype. The machine learning algorithms Extreme Gradient Boosting (XGBoost) and Support Vector Regression (SVR) are used for predicting the yield response. SVR shows better results than XGBoost for predicting the yield of the Aus rice ecotype, whereas XGBoost performs better for forecasting the yields of the Aman and Boro rice ecotypes. Across the two algorithms, the root mean squared error (RMSE) for the three ecotypes ranges from 9.38% to 24.37%, and the R-squared values range from 89.74% to 99.13%. The explainability of the models is also demonstrated with the help of the explainable artificial intelligence (XAI) technique called Local Interpretable Model-Agnostic Explanations (LIME).
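A minimal sketch of the two-regressor comparison described above is given below. The features and yield targets are synthetic stand-ins for the meteorological, agro-chemical, and soil data, and scikit-learn's GradientBoostingRegressor substitutes for XGBoost so the example stays self-contained; nothing here reproduces the paper's actual dataset or scores.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Synthetic stand-in features: rainfall, temperature, fertilizer dose, soil pH.
X = rng.uniform([50, 20, 100, 5.5], [300, 35, 300, 7.5], size=(200, 4))
# A made-up yield response with a nonlinear pH term plus noise.
y = (0.01 * X[:, 0] + 0.2 * X[:, 1] + 0.005 * X[:, 2]
     - 0.5 * (X[:, 3] - 6.5) ** 2 + rng.normal(0, 0.2, 200))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {"SVR": SVR(C=10.0),
          "GBR": GradientBoostingRegressor(random_state=0)}

scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # Report RMSE and R-squared for each regressor, as in the study.
    scores[name] = (mean_squared_error(y_te, pred) ** 0.5, r2_score(y_te, pred))
```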
CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis
Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a spectrogram-image-based CNN-XGBoost fusion method for recognising three dimensions of emotion, namely arousal (calm or excited), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, in which EEG signals were collected under multiple stimuli along with self-evaluation ratings. In the proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert them into RGB images to obtain the spectrograms. We then train a two-dimensional Convolutional Neural Network (CNN) on the spectrogram images and retrieve features from a dense layer of the trained CNN. An Extreme Gradient Boosting (XGBoost) classifier is applied to the extracted CNN features to classify the signals along the arousal, valence and dominance dimensions of human emotion. We compare our results with feature fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, Discrete Cosine Transform, Poincaré analysis, Power Spectral Density, Hjorth parameters and some statistical features. Additionally, we used Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We formed feature vectors by feature-level fusion and applied Support Vector Machine (SVM) and XGBoost classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature fusion-based SVM and XGBoost methods, obtaining an accuracy of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
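The first step of this pipeline, turning an EEG trace into an STFT spectrogram, can be sketched with SciPy. The signal below is a synthetic two-tone stand-in for real EEG, and the 128 Hz sampling rate is an assumption for illustration, not a claim about the paper's exact preprocessing.

```python
import numpy as np
from scipy.signal import stft

fs = 128  # assumed sampling rate for this sketch
t = np.arange(0, 4, 1 / fs)
# Synthetic EEG-like signal: a 10 Hz alpha tone plus a weaker 25 Hz beta tone.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)

# Short-Time Fourier Transform; |Z| is the magnitude spectrogram that
# would be colormapped into an RGB image and fed to the 2-D CNN.
freqs, times, Z = stft(signal, fs=fs, nperseg=64)
spectrogram = np.abs(Z)
peak_freq = freqs[spectrogram.mean(axis=1).argmax()]
```

The dominant row of the spectrogram sits at the 10 Hz component, confirming the transform captures the signal's frequency content over time.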
Larger models yield better results? Streamlined severity classification of ADHD-related concerns using BERT-based knowledge distillation
This work focuses on the efficiency of the knowledge distillation approach in generating a lightweight yet powerful BERT-based model for natural language processing (NLP) applications. After model creation, we applied the resulting model, LastBERT, to a real-world task: classifying severity levels of Attention Deficit Hyperactivity Disorder (ADHD)-related concerns from social media text data. In LastBERT, a customized student BERT model, we significantly lowered the parameter count from the 110 million of BERT base to 29 million, resulting in a model approximately 73.64% smaller. On the General Language Understanding Evaluation (GLUE) benchmark, comprising paraphrase identification, sentiment analysis, and text classification, the student model maintained strong performance across many tasks despite this reduction. The model was also applied to a real-world ADHD dataset, achieving an accuracy of 85%, an F1 score of 85%, a precision of 85%, and a recall of 85%. Compared to DistilBERT (66 million parameters) and ClinicalBERT (110 million parameters), LastBERT demonstrated comparable performance, with DistilBERT slightly outperforming it at 87% and ClinicalBERT achieving 86% across the same metrics. These findings highlight the LastBERT model’s capacity to classify degrees of ADHD severity properly, offering a useful tool for mental health professionals to assess and comprehend material produced by users on social networking platforms. The study emphasizes the potential of knowledge distillation to produce effective models fit for use in resource-limited conditions, hence advancing NLP and mental health diagnosis. The considerable decrease in model size without appreciable performance loss also underlines the lower computational resources needed for training and deployment, facilitating broader applicability, especially with readily available computational tools such as Google Colab and Kaggle Notebooks. This study shows the accessibility and usefulness of advanced NLP methods in practical real-world applications.
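The core of a distillation setup like this, softened teacher probabilities guiding the student alongside the hard labels, can be illustrated with a NumPy sketch of the standard Hinton-style loss. The temperature, mixing weight, and logits below are arbitrary examples, not values from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * T^2 * KL(teacher || student) at temperature T
       + (1 - alpha) * cross-entropy against the hard labels."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    kl = (p_t * (np.log(p_t) - log_p_s)).sum(axis=-1).mean() * T ** 2
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * kl + (1 - alpha) * ce

# Toy batch: two examples, three classes.
student_logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
teacher_logits = np.array([[2.5, 0.2, -0.8], [0.0, 2.0, 0.1]])
labels = np.array([0, 1])
loss = distillation_loss(student_logits, teacher_logits, labels)
```

Both terms are non-negative, so minimizing the combined loss pulls the student toward the teacher's soft distribution while still fitting the ground-truth labels.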
Blockchain Based Smart-Grid Stackelberg Model for Electricity Trading and Price Forecasting Using Reinforcement Learning
A smart grid is an intelligent electricity network that allows efficient electricity distribution from the source to consumers through telecommunication technology. The legacy smart grid follows a centralized, oligopolistic marketplace for electricity trading. This research proposes a blockchain-based electricity marketplace for the smart grid environment, introducing a decentralized ledger into the electricity market to enable trust and traceability among the stakeholders. Electricity prices in the smart grid are dynamic in nature; therefore, price forecasting in smart grids is of paramount importance for service providers to ensure service level agreements and to maximize profit. This research introduces a Stackelberg model-based dynamic retail price forecasting method for electricity in a smart grid. The Stackelberg model considers two-stage pricing: from electricity producers to retailers, and from retailers to customers. To enable adaptive and dynamic price forecasting, reinforcement learning is used, which provides an optimal price forecasting strategy through an online learning process. The use of blockchain connects service providers and consumers in a more secure transaction environment and helps tackle the centralized system’s vulnerability by performing transactions through customers’ smart contracts. Thus, the integration of blockchain not only makes the smart grid system more secure, but price forecasting with reinforcement learning also makes it more optimized and scalable.
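The retailer's side of the two-stage pricing game can be caricatured as a learning problem over discrete retail tariffs. The sketch below uses a single-state (bandit-style) Q-learning update with an entirely made-up demand curve and price grid; it illustrates the online-learning idea only, not the paper's actual Stackelberg formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
prices = np.array([0.10, 0.15, 0.20, 0.25])  # candidate retail tariffs ($/kWh)
wholesale = 0.08                             # assumed producer-to-retailer price
Q = np.zeros(len(prices))                    # estimated profit per tariff
eps, lr = 0.2, 0.1

def demand(p):
    """Hypothetical downward-sloping consumer demand with noise (kWh)."""
    return max(0.0, 100.0 * (0.30 - p)) + rng.normal(0, 1)

for step in range(2000):
    # Epsilon-greedy choice of retail price.
    a = rng.integers(len(prices)) if rng.random() < eps else int(Q.argmax())
    profit = (prices[a] - wholesale) * demand(prices[a])  # retailer reward
    Q[a] += lr * (profit - Q[a])  # single-state Q-learning update

best_price = prices[Q.argmax()]
```

Under this toy demand curve the learner settles on the tariff closest to the profit-maximizing price (p* = (0.30 + 0.08) / 2 = 0.19).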
Optimizing network bandwidth slicing identification: NADAM-enhanced CNN and VAE data preprocessing for enhanced interpretability
Communication networks of the future will rely heavily on network slicing (NS), a technology that enables the creation of distinct virtual networks within a shared physical infrastructure. This capability is critical for meeting the diverse quality of service (QoS) requirements of various applications, from ultra-reliable low-latency communications to massive IoT deployments. To achieve efficient network slicing, intelligent algorithms are essential for optimizing network resources and ensuring QoS. Artificial Intelligence (AI) models, particularly deep learning techniques, have emerged as powerful tools for automating and enhancing network slicing processes. These models are increasingly applied in next-generation mobile and wireless networks, including 5G, IoT infrastructure, and software-defined networking (SDN), to allocate resources and manage network slices dynamically. In this paper, we propose an Interpretable Network Bandwidth Slicing Identification (INBSI) system that leverages a modified Convolutional Neural Network (CNN) architecture with Nesterov-accelerated Adaptive Moment Estimation (NADAM) optimization. Additionally, we use a Variational Autoencoder (VAE) to preprocess the initial data, with the reconstructed data serving as a data validity check. The proposed model outperforms the alternatives, reaching a peak accuracy of 84% in the system environment. By comparison, the k-nearest neighbors algorithm (KNN) achieved 76%, Random Forest 69%, BaggingClassifier 70%, and Gaussian Naive Bayes (GaussianNB) 55%; the accuracy of additional methods, including Decision Trees, AdaBoost, Deep Neural Forest (DNF), and Multilayer Perceptrons (MLPs), varies. We utilize two eXplainable Artificial Intelligence (XAI) approaches, Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), to provide insight into the impact of particular input characteristics on the network slicing process.
Our work highlights the potential of AI-driven solutions in network slicing, offering insights for operators to optimize resource allocation and enhance future network management.
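The NADAM optimizer named in this paper has a compact update rule (Adam with a Nesterov-style lookahead on the momentum term). The NumPy sketch below shows one common formulation of that rule on a toy quadratic; hyperparameters are generic defaults, not the paper's settings, and this is not the authors' implementation.

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=2e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One NADAM update: Adam moments plus a Nesterov correction."""
    m = b1 * m + (1 - b1) * grad          # first moment (momentum)
    v = b2 * v + (1 - b2) * grad ** 2     # second moment (scale)
    m_hat = m / (1 - b1 ** t)             # bias corrections
    v_hat = v / (1 - b2 ** t)
    # Nesterov lookahead: mix the corrected momentum with the raw gradient.
    update = (b1 * m_hat + (1 - b1) * grad / (1 - b1 ** t)) / (np.sqrt(v_hat) + eps)
    return theta - lr * update, m, v

# Minimize f(theta) = theta^2 starting from theta = 3.
theta, m, v = 3.0, 0.0, 0.0
for t in range(1, 3001):
    grad = 2 * theta
    theta, m, v = nadam_step(theta, grad, m, v, t)
```

After a few thousand steps the iterate sits near the minimum at zero, which is the behavior the optimizer is chosen for when training the CNN.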
Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment
Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised learning-based and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, by contrast, focuses on the cooperative fusion of supervised and Reinforcement Learning technologies for autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies the ambient environmental obstacles for untroubled maneuvering of the autonomous vehicle, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world, where it exhibits overall efficiency and effectiveness in maneuvering autonomous land vehicles.
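The "double" idea behind Double Deep Q-Learning, using one value estimate to pick the next action and the other to evaluate it, is easiest to see in tabular form. The sketch below applies it to a made-up five-cell corridor rather than a driving environment; it illustrates the update rule only, not the paper's Faster R-CNN fusion or network architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 5, 2  # toy corridor: action 1 moves right, action 0 left
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))
gamma, lr, eps = 0.9, 0.1, 0.2

def step(s, a):
    """Deterministic corridor dynamics; reward 1 for reaching the right end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps \
            else int((Q_a[s] + Q_b[s]).argmax())
        s2, r, done = step(s, a)
        # Double Q-learning: one table selects the action, the other evaluates it.
        if rng.random() < 0.5:
            a_star = int(Q_a[s2].argmax())
            Q_a[s, a] += lr * (r + gamma * Q_b[s2, a_star] * (not done) - Q_a[s, a])
        else:
            b_star = int(Q_b[s2].argmax())
            Q_b[s, a] += lr * (r + gamma * Q_a[s2, b_star] * (not done) - Q_b[s, a])
        s = s2

greedy = (Q_a + Q_b).argmax(axis=1)  # learned policy: move right toward the goal
```

Decoupling selection from evaluation curbs the maximization bias that plain Q-learning exhibits in noisy environments.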
LRFMV: An efficient customer segmentation model for superstores
The Recency, Frequency, and Monetary model, also known as the RFM model, is a popular and widely used business model for determining beneficial client segments and analyzing profit. It is also recommended and frequently used in superstores to identify customer segments and increase profit margins. Later, the Length, Recency, Frequency, and Monetary model, also known as the LRFM model, was introduced as an improved version of the RFM model to identify more relevant and exact consumer groups for profit maximization. Superstores carry a varying number of different products, yet in the RFM and LRFM models the relationship between profit and purchased quantity has never been investigated. Therefore, this paper proposes an efficient customer segmentation model, namely LRFMV (Length, Recency, Frequency, Monetary and Volume), and studies the profit-quantity relationship. A new dimension, V (volume), has been added to the existing LRFM model to capture a direct profit-quantity relationship in customer segmentation; it is derived by calculating the average number of products purchased by a frequent superstore client in a single day. The data obtained from feature extraction of the LRFMV model are then clustered using the conventional K-means, K-Medoids, and Mini Batch K-means methods. The results obtained from the three algorithms are compared, and the K-means algorithm is chosen for the superstore dataset of the proposed LRFMV model. All clusters created using these three algorithms are evaluated in the LRFMV model, and a close relationship between profit and volume is observed. A clear profit-quantity relationship of items has not yet been shown in any prior study on the RFM and LRFM models: grouping customers for profit maximization existed previously, but there was no clear and direct depiction of profit versus quantity of sold items.
This study applied unsupervised machine learning to investigate the patterns, trends, and correlations between volume and profit. The traits of all the clusters are analyzed using a Customer-Classification Matrix: LRFMV values above or below the overall average for each cluster are identified as its traits. The performance of the proposed LRFMV model is compared with the legacy RFM and LRFM customer segmentation models. The outcome shows that the LRFMV model creates precise customer segments with the same number of customers while maintaining a greater profit.
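The LRFMV feature construction and K-means step described above can be sketched as follows. All numbers are synthetic stand-ins for the superstore data, and the exact feature definitions are assumptions paraphrased from the abstract (V = items bought divided by purchase days), not the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_customers = 200
# Hypothetical LRFMV features per customer:
#   L: days between first and last purchase, R: days since last purchase,
#   F: number of purchase days, M: total spend, V: avg items per purchase day.
L = rng.uniform(30, 720, n_customers)
R = rng.uniform(1, 180, n_customers)
F = rng.integers(1, 60, n_customers).astype(float)
M = rng.uniform(20, 5000, n_customers)
items_total = rng.integers(1, 600, n_customers)
V = items_total / F  # the new "Volume" dimension

# Standardize so spend (in the thousands) does not dominate the distances.
X = StandardScaler().fit_transform(np.column_stack([L, R, F, M, V]))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```

Each cluster's mean LRFMV vector would then be compared against the overall averages to label its traits, as in the Customer-Classification Matrix.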
A Hybrid Approach to Attention Deficit Hyperactivity Disorder Detection Leveraging Transformer and XGBoost Models Using XSparseFormerNet
Early detection of neurodevelopmental disorders such as attention-deficit/hyperactivity disorder (ADHD) is crucial for improved outcomes and prompt intervention. However, traditional detection methods often suffer from challenges due to subjectivity and misinterpretation, lack of resources, and diagnostic biases that can lead to underdiagnosis or overdiagnosis. Early detection of these neurodevelopmental disorders not only helps individuals obtain proper care but can also improve their social, cognitive, and mental development. This study proposes an ensemble model, XSparseFormerNet, that leverages a custom encoder-decoder Transformer enhanced with various attention mechanisms alongside an XGBoost model to enhance diagnostic accuracy. The proposed model aims to increase the accuracy and efficiency of diagnosis by combining a custom transformer architecture with the gradient boosting algorithm XGBoost. Using a preprocessed EEG dataset and a customized ensemble model, XSparseFormerNet achieves 85% accuracy, outperforming traditional methods across evaluation metrics. Moreover, this research contributes to future development in the field by offering methodologies that can be useful for studies on disorder detection.
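The general "learned encoder feeding a boosted classifier" pattern behind this kind of hybrid can be sketched with off-the-shelf stand-ins: synthetic features replace the preprocessed EEG data, PCA stands in for the transformer encoder's compact embedding, and scikit-learn's gradient boosting substitutes for XGBoost. Nothing below reproduces XSparseFormerNet itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for preprocessed EEG feature vectors.
X, y = make_classification(n_samples=400, n_features=64,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA plays the encoder's role: compress raw features into an embedding.
pca = PCA(n_components=16, random_state=0).fit(X_tr)

# A boosted-tree classifier then makes the final diagnosis from the embedding.
clf = GradientBoostingClassifier(random_state=0).fit(pca.transform(X_tr), y_tr)
acc = accuracy_score(y_te, clf.predict(pca.transform(X_te)))
```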