8 result(s) for "Light gradient boosting classifier"
Random Oversampling-Based Diabetes Classification via Machine Learning Algorithms
Diabetes mellitus is considered one of the main causes of death worldwide. If diabetes is not diagnosed and treated early, it can cause several other health problems, such as kidney disease, nerve disease, vision problems, and brain issues. Early detection of diabetes reduces healthcare costs and minimizes the chance of serious complications. In this work, we propose an e-diagnostic model for diabetes classification via machine learning that can be executed on the Internet of Medical Things (IoMT). The study uses and analyses two benchmark datasets, the PIMA Indian Diabetes Dataset (PIDD) and the Behavioral Risk Factor Surveillance System (BRFSS) diabetes dataset, to classify diabetes. The proposed model consists of the random oversampling method to balance the classes, interquartile-range-based outlier detection to eliminate outlier data, and the Boruta algorithm for selecting the optimal features from the datasets. The proposed approach considers ML algorithms such as random forest, gradient boosting models, light gradient boosting classifiers, and decision trees, as they are widely used classification algorithms for diabetes prediction. We evaluated all four ML algorithms via performance indicators such as accuracy, F1 score, recall, precision, and AUC-ROC. Comparative analysis suggests that the random forest algorithm outperforms all the remaining classifiers, with the greatest accuracy of 92% on the BRFSS diabetes dataset and 94% on the PIDD dataset, roughly 3% higher than the accuracy reported in existing research. This research can assist diabetologists in developing accurate treatment regimens for diabetic patients.
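The preprocessing pipeline this abstract describes can be sketched roughly as follows. This is a minimal illustration on synthetic data, with a hand-rolled random oversampler and IQR filter and scikit-learn's random forest; the Boruta feature-selection step is omitted, and nothing here reproduces the authors' actual implementation or results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic imbalanced data standing in for a diabetes dataset.
X, y = make_classification(n_samples=600, n_features=8,
                           weights=[0.8, 0.2], random_state=0)

def random_oversample(X, y, rng):
    """Duplicate minority-class rows at random until classes are balanced."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        extra = rng.choice(c_idx, size=n_max - c_idx.size, replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def iqr_filter(X, y, k=1.5):
    """Drop rows with any feature outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    mask = np.all((X >= q1 - k * iqr) & (X <= q3 + k * iqr), axis=1)
    return X[mask], y[mask]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = iqr_filter(X_tr, y_tr)              # outlier removal (training data only)
X_tr, y_tr = random_oversample(X_tr, y_tr, rng)  # balance the classes

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

Note that oversampling and outlier removal are applied only to the training split, so the test set still reflects the original class distribution.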
Early Terrain Identification for Mobile Robots Using Inertial Measurement Sensors and Machine Learning Techniques
Due to rapid advancements in robotics technology, mobile robots are now utilized across various industries and applications. Understanding the terrain on which a robot operates can greatly aid its navigation and movement adjustments, ultimately minimizing potential hazards and ensuring seamless operation. This study aims to identify the specific terrain on which a mobile robot travels. Data were gathered using an inertial measurement unit (IMU) installed on the robot for experimental testing. The key contributions of this research are twofold: first, the implementation and evaluation of various machine learning techniques on the IMU sensor dataset, comparing their performance using metrics such as accuracy, precision, recall, and F1-score; second, after assessing the different techniques, the most effective one is chosen for the final system implementation. Following the experimental evaluation, it was determined that the light gradient boosting machine (LGBM) classifier outperformed the others. Consequently, LGBM was utilized for the proposed system's implementation, achieving 91% accuracy in surface classification. The experimental results highlight the efficiency and viability of the proposed system.
ICMFKC with optimize XGBoost classification for breast cancer image screening and detection
Cancer is among the most vicious diseases today, and its cure must be a central aim of scientific investigation. Early detection of cancer could assist in curing the disease completely. A cancerous tumor in the breast comprises a mass of cancer cells that develop in an abnormal, uncontrolled way. Breast mammogram images can be enhanced using digital image processing tools to assist physicians in detecting breast tumors at the initial stage. Many researchers have worked on the early detection and classification of mammogram images. The main aim of this research work is to develop a highly accurate automatic approach to analyze and segment the breast tumor and classify it as benign or malignant. In this research article, the mammography images for the experimental analysis are taken from both the public Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society (MIAS) database, along with an in-house clinical dataset from Metro scans and laboratories. The first stage of the proposed work is to remove noise from the input image and boost the contrast of the image's anomalous region using Anisotropic Diffusion with Median Filter (ADWMF). In the second phase, the denoised image is segmented, and the Improved Centroid-based Macqueen's K-Means Clustering (ICMFKC) method is adopted to identify the exact breast tumor position. From the segmented ROI image, GLCM features are extracted in the third phase. Finally, the benign and malignant breast cancer images are classified. The classification is carried out based on the features extracted from the ROIs using the Light Gradient Boosting Machine (LGBM) Classifier and the Optimized Extreme Gradient Boosting (OXGBoost) Classifier. The LGBM classifier obtains an accuracy of 85.4%, and the OXGBoost classifier obtains an accuracy of 96.6%. Hence, the proposed work is most helpful for radiologists in the diagnosis of breast cancer.
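GLCM (gray-level co-occurrence matrix) texture features, the third stage above, are straightforward to compute. The hand-rolled sketch below shows the idea for a single pixel offset with two of the standard Haralick features; real pipelines typically use a library routine such as scikit-image's graycomatrix, and the toy images here are random stand-ins for mammogram ROIs.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(img):
    """Contrast and homogeneity, two standard GLCM texture features."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    return contrast, homogeneity

rng = np.random.default_rng(2)
smooth = rng.integers(3, 5, (32, 32))  # low-variation texture (levels 3-4)
noisy = rng.integers(0, 8, (32, 32))   # high-variation texture (levels 0-7)
print(glcm_features(smooth), glcm_features(noisy))
```

Smooth textures yield low contrast and high homogeneity; rough textures the reverse, which is what makes these features useful inputs to a classifier.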
Gut Microbiome Signatures in Multiple Sclerosis: A Case-Control Study with Machine Learning and Global Data Integration
Background/Objectives: Gut dysbiosis has been implicated in multiple sclerosis (MS), but microbial signatures remain inconsistent across studies. Machine learning (ML) algorithms based on global microbiome data integration can reveal key disease-associated microbial biomarkers and new insights into MS pathogenesis. This study aimed to investigate gut microbial signatures associated with MS and to evaluate the potential of ML for diagnostic applications. Methods: Fecal samples from 29 relapsing–remitting MS patients during exacerbation and 27 healthy controls were analyzed using 16S rRNA gene sequencing. Differential abundance analysis was performed, and data were integrated with 29 published studies. Four ML models were developed to distinguish MS-associated microbiome profiles. Results: MS patients exhibited reduced levels of Eubacteriales (p = 0.037), Lachnospirales (p = 0.021), Oscillospiraceae (p = 0.013), Lachnospiraceae (p = 0.012), Parasutterella (p = 0.018), Faecalibacterium (p = 0.004), and higher abundance of Lachnospiraceae UCG-008 (p = 0.045) compared to healthy controls. The Light Gradient Boosting Machine classifier demonstrated the highest performance (accuracy: 0.88, AUC-ROC: 0.95) in distinguishing MS microbiome profiles from healthy controls. Conclusions: This study highlights specific microbiome dysbiosis in MS patients and supports the potential of ML for diagnostic applications. Further research is needed to elucidate the mechanistic role of these microbial alterations in MS progression and their therapeutic utility.
Stacked encoding and AutoML-based identification of lead–zinc small open pit active mines around Rampura Agucha in Rajasthan state, India
Accurately discerning lead–zinc open pit mining areas using traditional remote sensing methods is challenging due to spectral signature class mixing. However, machine learning (ML) algorithms have been implemented to classify satellite images, achieving better accuracy in discriminating complex landcover features. This study aims to characterise various ML models for detecting and classifying lead–zinc open pit mining areas amidst surrounding landcover features based on Sentinel 2 image analysis. Various associated band ratios and spectral indices were integrated with processed Sentinel 2 reflectance bands to enhance detection accuracy. Suitable bands highlighting lead and zinc mine areas were identified based on optimal index factor (OIF) analysis and various deep learning-based stacked encoders. Furthermore, 15 different ML classifiers were tested to identify optimised algorithms for accurately discriminating complex mining areas and associated landcover features. After detailed evaluation and comparison of their accuracies, the extra tree classifier (et) was the most effective, achieving an overall accuracy of 0.94 and a kappa coefficient of 0.93. The light gradient boosting machine classifier (lightgbm) and random forest classifier (rf) models also performed well, with overall accuracies of 0.937 and 0.936 and kappa coefficients of 0.925 and 0.925, respectively.
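The model-comparison loop this abstract describes can be sketched as follows. This illustrates only two of the fifteen classifiers (extra trees and random forest) on synthetic data standing in for the Sentinel 2 spectral features, scored with the same metrics the study reports (overall accuracy and Cohen's kappa).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic multi-class data standing in for per-pixel spectral features
# (bands, band ratios, spectral indices).
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=4, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

results = {}
for name, clf in [("et", ExtraTreesClassifier(random_state=3)),
                  ("rf", RandomForestClassifier(random_state=3))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    results[name] = (accuracy_score(y_te, pred),
                     cohen_kappa_score(y_te, pred))
    print(name, results[name])
```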
An Ensemble of Light Gradient Boosting Machine and Adaptive Boosting for Prediction of Type-2 Diabetes
Machine learning helps construct predictive models for clinical data analysis, stock price prediction, image recognition, financial modelling, disease prediction, and diagnostics. This paper proposes machine learning ensemble algorithms to forecast diabetes. The ensemble combines k-NN, Naive Bayes (Gaussian), Random Forest (RF), AdaBoost, and the recently designed Light Gradient Boosting Machine (LightGBM). The proposed ensembles inherit the detection ability of LightGBM to boost accuracy. Under fivefold cross-validation, the proposed ensemble models perform better than other recent models. The ensemble of k-NN, AdaBoost, and LightGBM jointly achieves 90.76% detection accuracy. Receiver operating characteristic (ROC) analysis shows that k-NN, RF, and LightGBM successfully address the class imbalance issue of the underlying dataset.
Predicting the availability of power line communication nodes using semi-supervised learning algorithms
Power Line Communication (PLC) facilitates the use of power cables to transmit data. The issue is that sending data to unavailable nodes wastes time. Machine learning solves this by predicting which node has optimal readings. The more the machine learning models learn, the more accurate they become, as the model is continuously updated with the node's availability status; for this reason, self-training algorithms have been used. A dataset of 2000 instances from a node of an implemented 500-node PLC network has been collected. Each instance consists of CINR (Carrier-to-Interference-plus-Noise Ratio), SNR (Signal-to-Noise Ratio), and RSSI (Received Signal Strength Indicator) as features, with a label indicating whether the node is up or down. The dataset has been split into 85% for training and 15% for testing, and 15% of the training data are unlabeled. A self-training classifier has been used to allow the Light Gradient Boosting Machine (LGBM) and Support Vector Machine (linear and non-linear kernels) to behave in a self-training manner, alongside the training of the label propagation and label spreading algorithms. Supervised learning algorithms (Random Forest and logistic regression) have been trained on the dataset for comparison. The best model is Label Spreading, which achieves an accuracy of 94.67%, an F1-score of 0.947, a precision of 0.946, and a recall of 0.947, with a training time of 0.018 s and memory consumption of 0.99 MB.
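The semi-supervised setup above maps directly onto scikit-learn's semi_supervised module, where -1 marks an unlabeled sample. The sketch below mirrors the paper's split (85/15, with 15% of training labels hidden) on synthetic three-feature data standing in for CINR/SNR/RSSI, using an SVM inside the self-training wrapper (LightGBM itself is not used here) alongside label spreading.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier, LabelSpreading
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)

# Synthetic stand-in for CINR/SNR/RSSI readings labelled up (1) / down (0).
X, y = make_classification(n_samples=2000, n_features=3, n_informative=3,
                           n_redundant=0, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15,
                                          random_state=5)

# Hide 15% of the training labels: -1 is scikit-learn's "unlabeled" marker.
y_semi = y_tr.copy()
y_semi[rng.random(y_semi.size) < 0.15] = -1

accs = {}
for name, clf in [("self-training SVM",
                   SelfTrainingClassifier(SVC(probability=True))),
                  ("label spreading", LabelSpreading())]:
    clf.fit(X_tr, y_semi)
    accs[name] = accuracy_score(y_te, clf.predict(X_te))
    print(name, round(accs[name], 3))
```

Self-training iteratively pseudo-labels the unlabeled rows it is confident about, while label spreading propagates labels through a similarity graph; both consume the same -1-marked label vector.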
Light Gradient Boosting Machine-Based Low–Slow–Small Target Detection Algorithm for Airborne Radar
For airborne radar, detecting low–slow–small (LSS) targets is a topical and challenging problem: the rapidly increasing number of non-cooperative flying LSS targets has become a widespread concern, and the low signal-to-clutter ratio (SCR) of LSS targets means they are particularly easily overwhelmed by clutter. In this paper, a novel light gradient boosting machine (LightGBM)-based LSS target detection algorithm for airborne radar is proposed. Based on the current real-time clutter environment of the range cell to be detected, the proposed method first designs a specific real-time space-time LSS target signal repository with special dimensions and structures. It then designs a fast-built real-time training feature dataset tailored to the LSS target and the current clutter, together with a series of unique data transformations, sample selection, data restructuring, feature extraction, and feature processing steps. Finally, it develops a machine learning-based LSS target detection classifier model on the designed training dataset, fully exploiting the advantages of the ensemble decision tree-based LightGBM. The pre-processed data in the range cell of interest are then classified by the proposed algorithm, which achieves LSS target detection by evaluating the output of the designed classifier. Compared with traditional classical target detection methods, the proposed algorithm provides markedly superior performance for LSS target detection, attaining the highest probability of detection for LSS targets under low SCR with an acceptable computational time. Simulation outcomes and detection results with experimental data validate the effectiveness and merits of the proposed algorithm.