20 results for "entropy-based data analysis"
Mining Complex Ecological Patterns in Protected Areas: An FP-Growth Approach to Conservation Rule Discovery
This study introduces a data-driven framework for enhancing the sustainable management of fish species in Romania’s Natura 2000 protected areas through ecosystem modeling and association rule mining (ARM). Drawing on seven years of ecological monitoring data for 13 fish species of ecological and socio-economic importance, we apply the FP-Growth algorithm to extract high-confidence co-occurrence patterns among 19 codified conservation measures. By encoding expert habitat assessments into binary transactions, the analysis revealed 44 robust association rules, highlighting interdependent management actions that collectively improve species resilience and habitat conditions. These results provide actionable insights for integrated, evidence-based conservation planning. The approach demonstrates the interpretability, scalability, and practical relevance of ARM in biodiversity management, offering a replicable method for supporting adaptive ecological decision making across complex protected area networks.
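The support and confidence criteria that FP-Growth accelerates can be sketched in a few lines. The toy miner below brute-forces pairwise rules over binary transactions instead of building an FP-tree, using hypothetical measure codes M1–M4 rather than the paper's 19 codified measures or its actual thresholds:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Brute-force pairwise association rules over binary transactions.

    Illustrates the support/confidence filtering that FP-Growth merely
    speeds up; the FP-tree only makes candidate counting efficient.
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    support = {}
    # support of every itemset up to size 2 (pairs become rules A => B)
    for k in (1, 2):
        for itemset in combinations(items, k):
            s = sum(1 for t in transactions if set(itemset) <= t) / n
            if s >= min_support:
                support[itemset] = s
    rules = []
    for pair, s in support.items():
        if len(pair) != 2:
            continue
        a, b = pair
        for ant, cons in ((a, b), (b, a)):
            if (ant,) in support:
                conf = s / support[(ant,)]
                if conf >= min_confidence:
                    rules.append((ant, cons, s, conf))
    return rules

# Hypothetical codified measures M1..M4 encoded as binary transactions
transactions = [
    {"M1", "M2", "M3"},
    {"M1", "M2"},
    {"M2", "M3"},
    {"M1", "M2", "M4"},
]
for ant, cons, s, conf in mine_rules(transactions):
    print(f"{ant} => {cons}  support={s:.2f} confidence={conf:.2f}")
```

A production analysis would use an FP-Growth implementation (e.g. `mlxtend.frequent_patterns.fpgrowth`) to handle longer itemsets efficiently; the filtering logic is the same.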
The hidden side of the entropy-based land-use mix index
This study clarifies previously unrecognized limitations of the entropy-based land-use mix index and suggests conditions under which the index is valid. The study provides evidence that the land-use mix index has an n-shaped relationship to dependent variables, a problem that previous studies have ignored. Specifically, it identifies a non-linear relationship between the land-use mix index and a common dependent variable of interest, pedestrian volume. Pedestrian volume is a common measure of the vitality of a district or city and a major goal of urban design and regeneration. Using mathematical analysis, simulation, and empirical analysis, this study found that the land-use mix index had an inconsistent quadratic relationship to pedestrian volume. It confirmed that an analytical model should include both the land-use mix index and its square when samples representative of entire cities are tested. Otherwise, in samples from predominantly residential areas, the land-use mix index relates positively to pedestrian volume, whereas in predominantly commercial areas the relationship is negative. Previous studies failed to observe this hidden side of the entropy-based land-use mix index in commercial areas because their focus was mainly on residential areas or residents. Future studies should clarify the logical and theoretical relationships between the index and the outcome variable of interest, review the characteristics of the data, and then implement appropriate statistical analyses with this hidden side in mind.
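The entropy-based land-use mix index discussed here is conventionally computed as the normalized Shannon entropy of the land-use shares, -Σ pᵢ ln pᵢ / ln k over k categories. A minimal sketch:

```python
from math import log

def land_use_mix(shares):
    """Entropy-based land-use mix index, normalized to [0, 1].

    shares: proportions of each land-use category (summing to 1).
    0 means a single use dominates; 1 means a perfectly even mix
    across all k categories.
    """
    k = len(shares)
    if k < 2:
        return 0.0
    h = -sum(p * log(p) for p in shares if p > 0)  # Shannon entropy
    return h / log(k)                              # normalize by max entropy

print(land_use_mix([0.5, 0.5]))        # even two-use mix -> 1.0
print(land_use_mix([1.0, 0.0]))        # single use -> 0.0
print(land_use_mix([0.7, 0.2, 0.1]))   # partial mix, strictly between 0 and 1
```

The index's blindness to *which* uses are mixed is part of the "hidden side" the paper examines: a 50/50 residential-commercial split and a 50/50 industrial-office split both score 1.0.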
Entropy-Based Uncertainty Quantification in Linear Consecutive k-out-of-n:G Systems via Cumulative Residual Tsallis Entropy
Quantifying uncertainty in complex systems is a central problem in reliability analysis and engineering applications. In this work, we develop an information-theoretic framework for analyzing linear consecutive k-out-of-n:G systems using the cumulative residual Tsallis entropy (CRTE). A general analytical expression for CRTE is derived, and its behavior is investigated under various stochastic ordering relations, providing insight into the reliability of systems governed by continuous lifetime distributions. To address challenges in large-scale settings or with nonstandard lifetimes, we establish analytical bounds that serve as practical tools for uncertainty quantification and reliability assessment. Beyond theoretical contributions, we propose a nonparametric CRTE-based test for dispersive ordering, establish its asymptotic distribution, and confirm its statistical properties through extensive Monte Carlo simulations. The methodology is further illustrated with real lifetime data, highlighting the interpretability and effectiveness of CRTE as a probabilistic entropy measure for reliability modeling. The results demonstrate that CRTE provides a versatile and computationally feasible approach for bounding analysis, characterization, and inference in systems where uncertainty plays a critical role, aligning with current advances in entropy-based uncertainty quantification.
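The abstract does not reproduce the CRTE definition it uses. Assuming the common form from the cumulative residual Tsallis entropy literature, CRTE_α(X) = (α-1)⁻¹ ∫₀^∞ (F̄(x) - F̄(x)^α) dx for a nonnegative lifetime X with survival function F̄, a numerical sketch can be sanity-checked against the exponential closed form 1/(αλ):

```python
from math import exp

def crte_numeric(sf, alpha, upper=50.0, steps=200_000):
    """Cumulative residual Tsallis entropy by trapezoidal integration.

    Assumes CRTE_a(X) = 1/(a-1) * integral of (sf(x) - sf(x)**a)
    over [0, upper), where sf is the survival function of X and a != 1.
    """
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        s = sf(x)
        f = s - s ** alpha
        total += f if 0 < i < steps else f / 2  # trapezoid endpoint weights
    return total * h / (alpha - 1)

# For sf(x) = exp(-lam * x), the integral evaluates to 1 / (a * lam)
lam, alpha = 1.5, 2.0
approx = crte_numeric(lambda x: exp(-lam * x), alpha)
print(approx, 1 / (alpha * lam))
```

For consecutive k-out-of-n:G systems the survival function of the system lifetime would replace the exponential here; deriving it is exactly where the paper's analytical bounds become useful.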
Retinal disease prediction through blood vessel segmentation and classification using ensemble-based deep learning approaches
Automatic detection of retinal diseases is challenging and has gained considerable attention in recent years. Visual impairments emerge in different forms, so an effective retina-screening system is required. Ophthalmologists normally use colour fundus images to diagnose abnormalities, and most prior experiments have focused on classifying retinal diseases from such images. However, fundus image quality deteriorates due to low contrast and illumination inhomogeneity, which reduces overall classification accuracy. Many existing approaches also suffer from slow convergence, overfitting, and increased classification error. Considering these issues, the proposed work analyses the performance of ensemble-based deep learning approaches for retinal disease prediction. The method comprises several stages: pre-processing, an adaptive Gaussian-kernel PDF-based matched filtering approach, post-processing for segmentation, and classification. In pre-processing, the RGB image is transformed into a grayscale retinal image through Principal Component Analysis; the contrast of the grayscale image is then improved using CLAHE, and the image is enhanced with a Toggle Contrast operator. In the adaptive Gaussian-kernel stage, a PDF is designed with a matched filter kernel to generate the matched filtered response (MFR) image. In post-processing, the MFR image undergoes entropy-based thresholding to extract a binary segmented retinal blood-vessel image, followed by length filtering for artifact removal and masking to produce the final segmented vessels. Classification of the segmented image is performed through three networks: EfficientNet-B0, VGG-16, and ResNet-152. The resulting feature vectors are fused through an ensemble approach, and eleven forms of retinal disease are classified through a softmax classifier. The proposed method achieves 99.71% accuracy, 98.63% precision, 98.25% recall, and a 99.22% F-measure.
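The entropy-based thresholding step can be illustrated with Kapur's classical maximum-entropy method; the abstract does not specify which entropy-thresholding variant the authors use, so this is a representative sketch on a hypothetical 8-level histogram:

```python
from math import log

def kapur_threshold(hist):
    """Kapur's maximum-entropy threshold for a grayscale histogram.

    hist: pixel counts per intensity level. Returns the level t that
    maximizes the summed Shannon entropies of the below-t (background)
    and at-or-above-t (foreground) intensity distributions.
    """
    total = sum(hist)
    p = [c / total for c in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(p)):
        w0 = sum(p[:t])        # background probability mass
        w1 = 1.0 - w0          # foreground probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(q / w0 * log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Hypothetical bimodal histogram: dark background, bright vessel pixels
hist = [40, 35, 5, 1, 1, 4, 30, 25]
t = kapur_threshold(hist)
print(t)  # pixels at or above level t would be labeled vessel
```

On a real MFR image the histogram would have 256 levels, but the selection rule is identical: the threshold lands in the low-count valley between the two modes.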
CAISC: A software to integrate copy number variations and single nucleotide mutations for genetic heterogeneity profiling and subclone detection by single-cell RNA sequencing
Background Although both copy number variations (CNVs) and single nucleotide variations (SNVs) detected by single-cell RNA sequencing (scRNA-seq) are used to study intratumor heterogeneity and detect clonal groups, no software has been available that integrates these two types of data from the same cells. Results We developed Clonal Architecture with Integration of SNV and CNV (CAISC), an R package for scRNA-seq data analysis that clusters single cells into distinct subclones by integrating CNV and SNV genotype matrices using an entropy-weighted approach. The performance of CAISC was tested on simulated data and four real datasets, which confirmed its high accuracy in subclonal identification and assignment, including subclones that cannot be identified using one type of data alone. Furthermore, integration of SNV and CNV data allowed accurate examination of expression changes between subclones, as demonstrated by results from trisomy 8 clones of the myelodysplastic syndromes (MDS) dataset. Conclusions CAISC is a powerful tool for integrating CNV and SNV data from scRNA-seq to identify clonal clusters with better accuracy than is obtained from a single type of data, and it allows users to interactively examine clonal assignments.
Coupling Machine and Deep Learning with Explainable Artificial Intelligence for Improving Prediction of Groundwater Quality and Decision-Making in Arid Region, Saudi Arabia
Recently, machine learning (ML) and deep learning (DL) models based on artificial intelligence (AI) have emerged as fast and reliable tools for predicting the water quality index (WQI) in various regions worldwide. In this study, we propose a novel stacking framework based on DL models for WQI prediction and compare it with a convolutional neural network (CNN) model. Additionally, we introduce explainable AI (XAI) through XGBoost-based SHAP (SHapley Additive exPlanations) values to extract insights that can enhance decision-making strategies in water management. Our findings demonstrate that the stacking model achieves the highest accuracy in WQI prediction (R²: 0.99, MAPE: 15.99%), outperforming the CNN model (R²: 0.90, MAPE: 58.97%). Although the CNN model shows a relatively high R² value, other statistical measures indicate that it is actually the worst-performing of the five models tested, a discrepancy that may be attributed to the limited training data available to it. The SHAP values and interaction plots reveal that elevated levels of total dissolved solids (TDS), zinc, and electrical conductivity (EC) are the primary drivers of poor water quality. These parameters exhibit a nonlinear relationship with the WQI, implying that even minor increases in their concentrations can significantly impact water quality. Overall, this study presents a comprehensive and integrated approach to water management, emphasizing the need for collaborative efforts among all stakeholders to mitigate pollution levels and uphold water quality. By leveraging AI and XAI, the proposed framework not only provides a powerful tool for accurate WQI prediction but also offers deep insight into the models, enabling informed decision-making in water management strategies.
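The finding that a model can pair a high R² with a poor MAPE is easy to reproduce: R² rewards tracking the overall variance, while MAPE penalizes relative error, which is dominated by the smallest observations. A small pure-Python check with hypothetical WQI values (not the study's data):

```python
def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Predictions track the trend (high R2), but the two smallest
# observations are off by 100% each, inflating MAPE.
y    = [2.0, 4.0, 50.0, 100.0, 200.0]
yhat = [4.0, 8.0, 52.0, 103.0, 205.0]
print(f"R2 = {r2(y, yhat):.3f}, MAPE = {mape(y, yhat):.1f}%")
```

This is why the study's conclusion rests on multiple statistical measures rather than R² alone.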
The Impact of the Intuitionistic Fuzzy Entropy-Based Weights on the Results of Subjective Quality of Life Measurement Using Intuitionistic Fuzzy Synthetic Measure
In this paper, an extended Intuitionistic Fuzzy Synthetic Measure (IFSM) with intuitionistic fuzzy (IF) entropy-based weights is presented. This method can be applied to ranking problems where assessments of the criteria are expressed as intuitionistic fuzzy values and information about the importance of the criteria is unknown. One example of such a problem is measuring the subjective quality of life in cities. We join the debate on the determination of weights for analyzing the quality-of-life problem using multi-criteria methods. To handle this problem, four different IF entropy-based weighting methods were applied. Their performance was compared and analyzed based on questionnaires from a survey concerning the quality of life in European cities. The studies show very similar weighting systems obtained by the different IF entropy-based approaches, resulting in almost the same city rankings acquired through IFSM using those weights. The differences in rankings obtained through IFSM (by only one position) concerned six of the cities included in the analysis. Our results support the assumption of equal importance of the criteria in measuring this complex phenomenon.
The art of misclassification: too many classes, not enough points
Classification is a ubiquitous and fundamental problem in artificial intelligence and machine learning, with extensive efforts dedicated to developing more powerful classifiers and larger datasets. However, the classification task is ultimately constrained by the intrinsic properties of datasets, independently of computational power or model complexity. In this work, we introduce a formal entropy-based measure of classifiability, which quantifies the inherent difficulty of a classification problem by assessing the uncertainty in class assignments given feature representations. This measure captures the degree of class overlap, aligns with human intuition, and serves as an upper bound on achievable classification performance. Our results establish a theoretical limit beyond which no classifier can improve classification accuracy on a given problem, regardless of architecture or amount of data. Our approach provides a principled framework for understanding when classification is inherently fallible and fundamentally ambiguous.
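The uncertainty in class assignments given features is the conditional entropy H(Y|X), and predicting the majority class in each feature cell yields the Bayes-optimal accuracy that no classifier can exceed. The paper's exact measure is not spelled out in the abstract, so the following is a generic sketch on a hypothetical discrete feature space:

```python
from math import log2

def classifiability(joint):
    """Given joint[x][y] = count of feature cell x with class y,
    return (H(Y|X) in bits, Bayes-optimal accuracy).

    H(Y|X) = 0 and accuracy = 1 only when classes never overlap
    in feature space; overlap raises the entropy and caps accuracy.
    """
    n = sum(sum(row) for row in joint)
    h, acc = 0.0, 0.0
    for row in joint:
        nx = sum(row)
        if nx == 0:
            continue
        px = nx / n
        for c in row:
            if c > 0:
                pyx = c / nx                  # P(y | x)
                h -= px * pyx * log2(pyx)     # contribution to H(Y|X)
        acc += max(row) / n  # Bayes rule: majority class per cell
    return h, acc

# Hypothetical 3-cell feature space, two classes; cell 3 is fully ambiguous
joint = [[40, 0], [0, 40], [10, 10]]
h, acc = classifiability(joint)
print(h, acc)  # the 20% of points in the ambiguous cell cost 10% accuracy
```

No architecture or data volume can push accuracy above the second value, which is the kind of dataset-intrinsic ceiling the paper formalizes.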
A holistic approach for understanding the status of water quality and causes of its deterioration in a drought-prone agricultural area of Southeastern India
This study investigates the groundwater quality in the Kadiri Basin, Ananthapuramu district of Andhra Pradesh, India. Groundwater samples from 77 locations were collected and tested for the concentration of various physicochemical parameters. The collected data were assimilated into a groundwater quality index to estimate groundwater quality (drinking and irrigation) using an information entropy-based weight determination approach (EWQI). The water quality maps obtained for the study area suggest a definite trend in groundwater contamination. Furthermore, the influence of different physicochemical parameters on groundwater quality was determined using machine learning techniques. The learning and prediction accuracies of four different techniques, namely artificial neural network (ANN), deep learning (DL), random forest (RF), and gradient boosting machine (GBM), were investigated. The performance of the ANN model (MAE = 11.23, RMSE = 21.22, MAPE = 7.48, and R² = 0.91) was found to be highly effective for the present dataset. The ANN model was then used to understand the relative influence of physicochemical parameters on groundwater quality. It was observed that the deterioration in groundwater quality in the study area was primarily due to excess turbidity and iron values; relatively higher concentrations of sulfate and nitrate also had a significant impact. The study has wider implications for modeling in similar drought-prone agricultural areas elsewhere for assessing groundwater quality.
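Information entropy-based weight determination of the kind used in EWQI typically follows the classical entropy weight method: parameters whose values vary more across samples carry more information and therefore receive larger weights. A sketch with hypothetical concentrations (not the study's 77-sample dataset):

```python
from math import log

def entropy_weights(matrix):
    """Classical Shannon entropy weight method for index construction.

    matrix[i][j]: positive value of sample i on parameter j. Returns
    weights w_j = (1 - e_j) / sum_k (1 - e_k), where e_j is the
    normalized entropy of parameter j's value distribution.
    """
    n, m = len(matrix), len(matrix[0])
    col_sums = [sum(row[j] for row in matrix) for j in range(m)]
    entropies = []
    for j in range(m):
        e = 0.0
        for row in matrix:
            p = row[j] / col_sums[j]
            if p > 0:
                e -= p * log(p)
        entropies.append(e / log(n))  # normalize so e_j is in [0, 1]
    total_div = sum(1 - e for e in entropies)
    return [(1 - e) / total_div for e in entropies]

# Hypothetical concentrations for 4 samples x 3 parameters; the third
# parameter varies most across samples, so it gets the largest weight
matrix = [
    [1.0, 10.0, 1.0],
    [1.1, 10.5, 5.0],
    [0.9, 10.2, 20.0],
    [1.0, 9.8, 60.0],
]
print([round(w, 3) for w in entropy_weights(matrix)])
```

The EWQI is then the weighted sum of the per-parameter quality ratings; this objectivity (weights come from the data, not expert judgment) is the method's main appeal.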
Sensitivity Analysis Based on E-TOPSIS Combined with MORIME-Based Multi-Objective Optimization for Sprayer Frame Design Optimization
This study establishes a sensitivity evaluation system based on the E-TOPSIS method and combines it with the MORIME algorithm for the design optimization of a sprayer frame. First, a three-dimensional model and a finite element analysis model of the frame were developed. The loading conditions of the frame were then analyzed, followed by static and modal analyses, and modal data of the frame were extracted. The experimental results prove the reliability of the established finite element model and the subsequent optimization results. Using the E-TOPSIS-based sensitivity evaluation system, the sensitivity of the frame components with respect to three different performance parameters was analyzed, enabling the scientific and rapid selection of 17 design variables and significantly reducing the optimization workload. The experimental design was then conducted using Latin hypercube and CCD sampling methods. Finally, the multi-objective lightweight design of the selected components was performed based on the MORIME algorithm. After optimization, the stress increased by 12.01% and 1.52% under the two operating conditions, while deformation increased by 0.647 mm and 0.607 mm, and the frame mass was reduced by 22.754 kg, a decrease of 12.8%. The experimental results demonstrate the effectiveness of the proposed method.
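E-TOPSIS pairs entropy-derived criterion weights with the standard TOPSIS ranking, which scores each alternative by its relative closeness to an ideal solution. The TOPSIS half can be sketched as follows, with hypothetical frame designs and weights rather than the paper's data:

```python
from math import sqrt

def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores in [0, 1] (higher = better alternative).

    matrix[i][j]: alternative i on criterion j; benefit[j] is True when
    larger values are preferred (e.g. natural frequency) and False when
    smaller are preferred (e.g. mass, stress).
    """
    m = len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    v = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness
    return scores

# Hypothetical designs scored on mass (kg, cost), peak stress (MPa, cost),
# and first natural frequency (Hz, benefit)
designs = [
    [178.0, 210.0, 24.0],   # baseline
    [155.0, 235.0, 23.5],   # lighter, slightly more stressed
    [150.0, 300.0, 18.0],   # lightest but structurally weak
]
scores = topsis(designs, [0.4, 0.4, 0.2], [False, False, True])
print([round(s, 3) for s in scores])
```

In E-TOPSIS the weight vector would come from the entropy method applied to the decision matrix itself rather than being fixed by hand, which is what makes the component-sensitivity ranking objective.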