Catalogue Search | MBRL
Explore the vast range of titles available.
4,860 result(s) for "Explainable machine learning"
Understanding Price-To-Rent Ratios Through Simulation-Based Distributions And Explainable Machine Learning
2025
Index-level price-to-rent (PTR) ratios are a widely used metric for analyzing housing markets, employed by both real estate practitioners and policymakers. This article seeks to improve the contextualization of observed PTR values by examining the interplay between these ratios and macroeconomic and housing-market developments in a non-linear framework. We analyze historical data on housing prices, rents and macroeconomic developments from 18 advanced economies, spanning from 1870 onwards, using Boosted Regression Trees and explainable machine learning techniques. As a precursor to this analysis, we present the empirical distribution of the price-to-rent ratio and the implied housing risk premia across all years and countries.
Journal Article
Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges
by
Popova, Yelena
,
Abdoldina, Farida
,
Mukhamediev, Ravil I.
in
AI challenges
,
Algorithms
,
Artificial intelligence
2022
Artificial intelligence (AI) is an evolving set of technologies used for solving a wide range of applied issues. The core of AI is machine learning (ML): a complex of algorithms and methods that address the problems of classification, clustering, and forecasting. The practical application of AI&ML holds promising prospects, and research in this area is therefore intensive. However, industrial applications of AI, and its more intensive use in society, are not yet widespread. The challenges of widespread AI application need to be considered from both the AI (internal problems) and the societal (external problems) perspectives. This consideration will identify the priority steps for the more intensive practical application of AI technologies and their introduction into industry and society. This article identifies and discusses the challenges of employing AI technologies in the economy and society of resource-based countries. A systematization of AI&ML technologies is carried out based on publications in these areas, allowing the organizational, personnel, social and technological limitations to be specified. The paper outlines directions of study in AI and ML that will allow some of these limitations to be overcome and the scope of AI&ML applications to be expanded.
Journal Article
Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics
2021
The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, including the poor use of limited, valuable resources, in medical diagnosis, financial decision-making, and other high-stakes domains. The issue of ML explanation has therefore experienced a surge in interest, from the research community through to application domains. While numerous explanation methods have been explored, evaluations are needed to quantify the quality of explanation methods: to determine whether, and to what extent, the offered explainability achieves its defined objective, and to compare the available explanation methods and suggest the best one for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability, and use these properties as objectives that evaluation metrics should achieve. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness or fidelity of explainability. The survey also showed that subjective measures, such as trust and confidence, have been embraced as the focal point of the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that it is not possible to define a single implementation of evaluation metrics that can be applied to all explanation methods.
Journal Article
Explainable artificial intelligence in information systems: A review of the status quo and future research directions
by
Sigler, Irina
,
Förster, Maximilian
,
Klier, Mathias
in
Accountability
,
Artificial intelligence
,
Breast cancer
2023
The quest to open black box artificial intelligence (AI) systems evolved into an emerging phenomenon of global interest for academia, business, and society and brought about the rise of the research field of explainable artificial intelligence (XAI). With its pluralistic view, information systems (IS) research is predestined to contribute to this emerging field; thus, it is not surprising that the number of publications on XAI has been rising significantly in IS research. This paper aims to provide a comprehensive overview of XAI research in IS in general and electronic markets in particular using a structured literature review. Based on a literature search resulting in 180 research papers, this work provides an overview of the most receptive outlets, the development of the academic discussion, and the most relevant underlying concepts and methodologies. Furthermore, eight research areas with varying maturity in electronic markets are carved out. Finally, directions for a research agenda of XAI in IS are presented.
Journal Article
Precision biochar yield forecasting employing random forest and XGBoost with Taylor diagram visualization
by
Kanti, Praveen Kumar
,
Vemanaboina, Harinadh
,
Kilari, Naveen
in
Biomass
,
Boosting Machine Learning Algorithms
,
Charcoal - chemistry
2025
Waste-to-energy conversion via pyrolysis has attracted increasing attention recently owing to its multiple uses. Among the products of this process, biochar stands out for its versatility, with its yield influenced by various factors. Extensive and labor-intensive experimental testing is sometimes necessary to properly grasp the output distribution from various feedstocks. Nonetheless, data-driven predictive models built on large-scale historical experiment records can provide insightful analysis of projected yields from a variety of biomass materials, hence overcoming the challenges of empirical modeling. As such, five modern machine learning approaches are employed in this study to develop biochar yield prediction models: Lasso regression, Tweedie regression, random forest, XGBoost, and gradient boosting regression. Of these five, XGBoost was superior, with a training mean squared error (MSE) of 1.17 and a test MSE of 2.94. The XGBoost-based biochar yield model shows excellent performance, with strong predictive accuracy: R² values of 0.9739 (training) and 0.8875 (test). The mean absolute percentage error was only 2.14% in the training phase and 3.8% in the testing phase. Precision prognostic technologies have broad effects on sectors including biomass logistics, conversion technologies, and the effective utilization of biomass as renewable energy. Leveraging SHAP, which is based on cooperative game theory, the study shows that while ash and moisture lower biochar yield, FPT, nitrogen, and carbon content significantly boost it. Minor variables such as heating rate and volatile matter have a secondary impact on production efficiency.
Journal Article
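The SHAP attributions cited above are grounded in cooperative game theory. The core idea can be sketched in plain Python by computing exact Shapley values for a toy yield model; the feature names and contribution sizes below are purely illustrative assumptions, not values from the study.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values for a set-valued model (a cooperative game)."""
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                # Weight of a coalition of size k in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to coalition S.
                total += w * (value_fn(set(S) | {f}) - value_fn(set(S)))
        phi[f] = total
    return phi

# Hypothetical "biochar yield" model: additive effects plus one interaction.
def toy_yield(present):
    base = 20.0
    if "carbon" in present:   base += 8.0
    if "nitrogen" in present: base += 3.0
    if "ash" in present:      base -= 5.0
    # Interaction: carbon helps less when ash is present.
    if "carbon" in present and "ash" in present:
        base -= 2.0
    return base

phi = shapley_values(["carbon", "nitrogen", "ash"], toy_yield)
```

The efficiency property guarantees that the attributions sum exactly to the difference between the full-feature prediction and the baseline; SHAP libraries approximate this same quantity for real models, where exact enumeration over coalitions is intractable.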
A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features
by
Jain, Rajan
,
Choi, Yoon Seong
,
Ghosh, Debashis
in
Algorithms
,
Brain cancer
,
Clinical decision making
2022
Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. Specifically, in the field of medical imaging analysis, the most widely used methods for explaining deep learning-based model predictions are saliency maps, which highlight important areas of an image; however, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values as tools to explain outcome predictions from complex prediction models built from medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow for developing and explaining a prediction model using MRI data from glioma patients to predict a genetic mutation.
Journal Article
Polycystic Ovary Syndrome Detection Machine Learning Model Based on Optimized Feature Selection and Explainable Artificial Intelligence
by
Alohali, Manal Abdullah
,
Saleh, Hager
,
Elmannai, Hela
in
Artificial intelligence
,
Diagnosis
,
ensemble learning
2023
Polycystic ovary syndrome (PCOS) has been classified as a severe health problem common among women globally. Early detection and treatment of PCOS reduce the possibility of long-term complications, such as increased chances of developing type 2 diabetes and gestational diabetes. Effective and early PCOS diagnosis will therefore help healthcare systems to reduce the disease’s problems and complications. Machine learning (ML) and ensemble learning have recently shown promising results in medical diagnostics. The main goal of our research is to provide model explanations to ensure efficiency, effectiveness, and trust in the developed model through local and global explanations. Feature selection methods are combined with different types of ML models (logistic regression (LR), random forest (RF), decision tree (DT), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), XGBoost, and the AdaBoost algorithm) to obtain the optimal feature subset and the best model. Stacking ML models that combine the best base ML models with a meta-learner are proposed to improve performance. Bayesian optimization is used to optimize the ML models, and combining SMOTE (Synthetic Minority Oversampling Technique) and ENN (Edited Nearest Neighbour) solves the class imbalance. The experiments were conducted on a benchmark PCOS dataset with two splitting ratios, 70:30 and 80:20. The results showed that the stacking ML model with RFE feature selection recorded the highest accuracy, at 100%, compared to the other models.
Journal Article
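The SMOTE-based balancing mentioned above rests on a simple idea: synthesize new minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbours. A minimal pure-Python sketch of that core step (not the reference implementation, and omitting the ENN cleaning stage) might look like:

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating toward a
    random one of the k nearest minority neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority-class neighbours of x (excluding x itself).
        neighbours = sorted((p for p in minority if p != x),
                            key=lambda p: sqdist(x, p))[:k]
        nb = rng.choice(neighbours)
        # New point at a random position on the segment between x and nb.
        t = rng.random()
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
new = smote_like(minority, 5, k=2, seed=42)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority data already occupies; the ENN step would then prune synthetic points that land too close to the majority class.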
A novel explainable machine learning approach for EEG-based brain-computer interface systems
by
Hussain, Amir
,
Mammone, Nadia
,
Ieracitano, Cosimo
in
Accuracy
,
Artificial Intelligence
,
Artificial neural networks
2022
Electroencephalographic (EEG) recordings can be of great help in decoding the preparation of open/close hand motions. To this end, cortical EEG source signals in the motor cortex (evaluated in the 1-s window preceding movement onset) are extracted by solving the inverse problem through beamforming. EEG source epochs are used as source-time map inputs to a custom deep convolutional neural network (CNN) that is trained to perform two binary classification tasks: pre-hand close (HC) versus resting state (RE), and pre-hand open (HO) versus RE. The developed deep CNN performs well (accuracy rates up to 89.65 ± 5.29% for HC versus RE and 90.50 ± 5.35% for HO versus RE), but the core of the present study was to explore the interpretability of the deep CNN to provide further insights into the activation mechanism of cortical sources during the preparation of hand sub-movements. Specifically, occlusion sensitivity analysis was carried out to investigate which cortical areas are most relevant in the classification procedure. Experimental results show a recurrent trend of spatial cortical activation across subjects. In particular, the central region (close to the longitudinal fissure) and the right temporal zone of the premotor cortex, together with the primary motor cortex, appear to be primarily involved. Such findings encourage an in-depth study of the cortical areas that seem to play a key role in hand open/close preparation.
Journal Article
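Occlusion sensitivity analysis, as used in the EEG study above, masks one region of the input at a time and records how much the model's score drops; large drops mark regions the model relies on. A minimal sketch with a toy scoring function standing in for the CNN (all names and values here are illustrative assumptions):

```python
def occlusion_sensitivity(grid, score_fn, patch=2):
    """Score drop when each patch of a 2D input is zeroed out."""
    h, w = len(grid), len(grid[0])
    base = score_fn(grid)
    heat = []
    for r in range(0, h, patch):
        row = []
        for c in range(0, w, patch):
            # Copy the input and zero out one patch.
            occluded = [list(g) for g in grid]
            for i in range(r, min(r + patch, h)):
                for j in range(c, min(c + patch, w)):
                    occluded[i][j] = 0.0
            # Drop in score attributable to this patch.
            row.append(base - score_fn(occluded))
        heat.append(row)
    return heat

# Toy "classifier" that responds strongly to the top-left region only.
def toy_score(g):
    return 3.0 * g[0][0] + 3.0 * g[0][1] + 0.1 * g[3][3]

grid = [[1.0] * 4 for _ in range(4)]
heat = occlusion_sensitivity(grid, toy_score, patch=2)
```

The resulting heat map peaks at the top-left patch, correctly flagging the region the toy model depends on; applied to source-time maps and a trained CNN, the same procedure highlights the cortical areas driving the classification.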
A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability
by
Livieris, Ioannis E.
,
Pintelas, Emmanuel
,
Pintelas, Panagiotis
in
Accuracy
,
Algorithms
,
Artificial intelligence
2020
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models that are able to make very accurate predictions and decisions on a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning is a significant issue, since in most real-world problems it is considered essential to understand and explain a model’s prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, although this is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance and medicine. Our results demonstrate the efficiency of the proposed model: it performs comparably to a Black-Box model and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model.
Journal Article
Beyond Predictive Learning Analytics Modelling and onto Explainable Artificial Intelligence with Prescriptive Analytics and ChatGPT
2024
A significant body of recent research in the field of Learning Analytics has focused on leveraging machine learning approaches for predicting at-risk students in order to initiate timely interventions and thereby elevate retention and completion rates. The overarching feature of the majority of these research studies has been the science of prediction only. The component of predictive analytics concerned with interpreting the internals of the models and explaining their predictions for individual cases to stakeholders has largely been neglected. Additionally, works that attempt to employ data-driven prescriptive analytics to automatically generate evidence-based remedial advice for at-risk learners are in their infancy. eXplainable AI is a recently emerged field providing cutting-edge tools which support transparent predictive analytics, as well as techniques for generating tailored advice for at-risk students. This study proposes a novel framework that unifies transparent machine learning with techniques for enabling prescriptive analytics, while integrating the latest advances in large language models for communicating the insights to learners. This work demonstrates a predictive modelling framework for identifying learners at risk of qualification non-completion, based on a real-world dataset comprising ∼7000 learners and their outcomes, covering 2018-2022. The study further demonstrates, in two case studies, how predictive modelling can be augmented with prescriptive analytics to generate human-readable prescriptive feedback for those who are at risk using ChatGPT.
Journal Article