Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
3,498 result(s) for "explainable machine learning"
Understanding Price-To-Rent Ratios Through Simulation-Based Distributions And Explainable Machine Learning
2025
Index-level price-to-rent (PTR) ratios are a widely used metric for analyzing housing markets, employed by both real estate practitioners and policymakers. This article seeks to improve the contextualization of observed PTR values by examining the interplay between these ratios and macroeconomic and housing-market developments in a non-linear framework. We analyze historical data on housing prices, rents and macroeconomic developments from 18 advanced economies, spanning from 1870, using Boosted Regression Trees and explainable machine learning techniques. As a precursor to this analysis, we also present the empirical distribution of the price-to-rent ratio and the implied housing risk premia across all years and countries.
Journal Article
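The abstract above pairs Boosted Regression Trees with explainable-machine-learning techniques to relate price-to-rent ratios to macroeconomic drivers. As a rough, hypothetical illustration of that kind of workflow (not the authors' code or data), the Python sketch below fits a gradient-boosted regressor to synthetic panel-style features and inspects it with model-agnostic permutation importance; the feature names rent_growth, interest_rate, income_growth, and credit_growth are invented placeholders.

# Illustrative sketch only: a gradient-boosted model on synthetic panel-style
# features, inspected with permutation importance. Feature names are
# hypothetical placeholders, not the variables used in the article.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
feature_names = ["rent_growth", "interest_rate", "income_growth", "credit_growth"]
# Non-linear synthetic target standing in for a price-to-rent ratio
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + 2 * X[:, 2] * X[:, 3] + rng.normal(scale=0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

# Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")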
Review of Artificial Intelligence and Machine Learning Technologies: Classification, Restrictions, Opportunities and Challenges
by
Popova, Yelena
,
Abdoldina, Farida
,
Mukhamediev, Ravil I.
in
AI challenges
,
Algorithms
,
Artificial intelligence
2022
Artificial intelligence (AI) is an evolving set of technologies used for solving a wide range of applied issues. The core of AI is machine learning (ML)—a complex of algorithms and methods that address the problems of classification, clustering, and forecasting. The practical application of AI&ML holds promising prospects, and research in this area is therefore intensive. However, industrial applications of AI and its more intensive use in society are not yet widespread. The challenges of widespread AI application need to be considered from both the AI (internal problems) and the societal (external problems) perspectives. Such consideration helps identify the priority steps for more intensive practical application of AI technologies and their introduction into industry and society. The article identifies and discusses the challenges of employing AI technologies in the economy and society of resource-based countries. A systematization of AI&ML technologies is carried out based on publications in these areas, allowing the organizational, personnel, social, and technological limitations to be specified. The paper then outlines directions of study in AI and ML that would help overcome some of these limitations and expand the scope of AI&ML applications.
Journal Article
Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics
2021
The most successful Machine Learning (ML) systems remain complex black boxes to end-users, and even experts are often unable to understand the rationale behind their decisions. The lack of transparency of such systems can have severe consequences, such as the poor use of limited and valuable resources, in medical diagnosis, financial decision-making, and other high-stakes domains. The issue of ML explanation has therefore experienced a surge in interest, from the research community through to application domains. While numerous explanation methods have been explored, evaluations are needed that quantify the quality of explanation methods, determine whether and to what extent the offered explainability achieves the defined objective, and compare available explanation methods so that the best explanation can be suggested for a specific task. This survey paper presents a comprehensive overview of methods proposed in the current literature for the evaluation of ML explanations. We identify properties of explainability from a review of definitions of explainability, and these properties are used as objectives that evaluation metrics should achieve. The survey found that quantitative metrics for both model-based and example-based explanations are primarily used to evaluate the parsimony/simplicity of interpretability, while quantitative metrics for attribution-based explanations are primarily used to evaluate the soundness/fidelity of explainability. The survey also showed that subjective measures, such as trust and confidence, have been embraced as the focal point for the human-centered evaluation of explainable systems. The paper concludes that the evaluation of ML explanations is a multidisciplinary research topic, and that it is not possible to define a single implementation of evaluation metrics that can be applied to all explanation methods.
Journal Article
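One family of quantitative metrics discussed in the survey above evaluates the soundness/fidelity of attribution-based explanations. The sketch below shows a generic deletion-style fidelity check on synthetic data; it is an assumed illustration of that idea, not any specific metric defined in the paper: the features an explanation ranks as most important are masked, and a faithful explanation should produce a steep drop in model accuracy.

# Minimal sketch of one fidelity-style metric (a deletion test): mask the
# features an explanation ranks as most important and measure how much the
# model's accuracy degrades. Generic illustration, not a metric from the survey.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# The "explanation" under evaluation: a global feature ranking.
ranking = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
order = np.argsort(-ranking.importances_mean)

baseline = model.score(X_te, y_te)
for k in (0, 3, 6, 10):
    X_masked = X_te.copy()
    X_masked[:, order[:k]] = X_tr[:, order[:k]].mean(axis=0)  # neutralize top-k features
    print(f"top-{k} features masked: accuracy {model.score(X_masked, y_te):.3f} (baseline {baseline:.3f})")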
Explainable artificial intelligence in information systems: A review of the status quo and future research directions
by
Sigler, Irina
,
Förster, Maximilian
,
Klier, Mathias
in
Accountability
,
Artificial intelligence
,
Breast cancer
2023
The quest to open black-box artificial intelligence (AI) systems has evolved into a phenomenon of global interest for academia, business, and society, and has brought about the rise of the research field of explainable artificial intelligence (XAI). With its pluralistic view, information systems (IS) research is well positioned to contribute to this emerging field; thus, it is not surprising that the number of publications on XAI has been rising significantly in IS research. This paper aims to provide a comprehensive overview of XAI research in IS in general, and in electronic markets in particular, using a structured literature review. Based on a literature search resulting in 180 research papers, this work provides an overview of the most receptive outlets, the development of the academic discussion, and the most relevant underlying concepts and methodologies. Furthermore, eight research areas with varying maturity in electronic markets are carved out. Finally, directions for a research agenda of XAI in IS are presented.
Journal Article
A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features
by
Jain, Rajan
,
Choi, Yoon Seong
,
Ghosh, Debashis
in
Algorithms
,
Brain cancer
,
Clinical decision making
2022
Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. Specifically, in the field of medical imaging analysis, the most commonly used methods for explaining deep-learning-based model predictions are saliency maps, which highlight important areas of an image. However, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values as tools to explain outcome predictions from complex prediction models built on medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow for developing and explaining a prediction model using MRI data from glioma patients to predict a genetic mutation.
Journal Article
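The pipeline above explains imaging-based predictions with Shapley values computed over radiomics features. As a minimal sketch of just that explanation step, assuming a tabular feature matrix has already been extracted (the feature names tumor_volume, entropy, sphericity, contrast, and homogeneity and the synthetic outcome are invented), the following uses the shap package's TreeExplainer on a gradient-boosted classifier; the actual pipeline and dashboard described in the article are not reproduced here.

# Rough illustration of the explanation step only: Shapley values for a tree
# model trained on tabular "radiomics-like" features. Feature names and data
# are hypothetical; the paper extracts real radiomics features from MRI first.
import numpy as np
import pandas as pd
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["tumor_volume", "entropy", "sphericity", "contrast", "homogeneity"]
X = pd.DataFrame(rng.normal(size=(500, 5)), columns=feature_names)
# Synthetic binary outcome standing in for, e.g., mutation status
y = (X["tumor_volume"] + 0.5 * X["entropy"] - X["sphericity"] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shapley values attribute each prediction to individual radiomics features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-patient explanation: contributions for the first case
print(dict(zip(feature_names, np.round(shap_values[0], 3))))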
A novel explainable machine learning approach for EEG-based brain-computer interface systems
by
Hussain, Amir
,
Mammone, Nadia
,
Ieracitano, Cosimo
in
Accuracy
,
Artificial Intelligence
,
Artificial neural networks
2022
Electroencephalographic (EEG) recordings can be of great help in decoding the preparation of hand open/close movements. To this end, cortical EEG source signals in the motor cortex (evaluated in the 1-s window preceding movement onset) are extracted by solving the inverse problem through beamforming. Epochs of EEG sources are used as source-time maps and fed into a custom deep convolutional neural network (CNN) trained to perform two two-way classification tasks: pre-hand close (HC) versus resting state (RE) and pre-hand open (HO) versus RE. The developed deep CNN performs well (accuracy rates up to 89.65 ± 5.29% for HC versus RE and 90.50 ± 5.35% for HO versus RE), but the core of the present study was to explore the interpretability of the deep CNN to provide further insights into the activation mechanism of cortical sources during the preparation of the hand's sub-movements. Specifically, occlusion sensitivity analysis was carried out to investigate which cortical areas are most relevant to the classification procedure. Experimental results show a recurrent trend of spatial cortical activation across subjects. In particular, the central region (close to the longitudinal fissure) and the right temporal zone of the premotor cortex, together with the primary motor cortex, appear to be primarily involved. Such findings encourage an in-depth study of the cortical areas that seem to play a key role in hand open/close preparation.
Journal Article
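The occlusion sensitivity analysis mentioned above masks parts of the input and measures how the classifier's confidence changes. The sketch below is a generic, model-agnostic version of that idea, assuming a 2D source-time map and any predict_fn that returns class probabilities; it is not the authors' implementation.

# Generic occlusion-sensitivity sketch: slide a masking patch over a 2D
# source-time map, zero it out, and record how much the classifier's
# confidence drops. Regions whose occlusion causes the largest drop are the
# most relevant to the decision.
import numpy as np

def occlusion_sensitivity(input_map, predict_fn, patch=8, stride=4, target_class=1):
    """input_map: 2D array (e.g. cortical sources x time samples).
    predict_fn: returns class probabilities for a batch of maps."""
    h, w = input_map.shape
    base_prob = predict_fn(input_map[None])[0, target_class]
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, r in enumerate(range(0, h - patch + 1, stride)):
        for j, c in enumerate(range(0, w - patch + 1, stride)):
            occluded = input_map.copy()
            occluded[r:r + patch, c:c + patch] = 0.0  # occlude one region
            prob = predict_fn(occluded[None])[0, target_class]
            heatmap[i, j] = base_prob - prob          # relevance = confidence drop
    return heatmap

# Usage with any trained model exposing predict_proba-style output, e.g.:
# heatmap = occlusion_sensitivity(eeg_source_map, cnn_predict_proba)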
Automated machine learning with interpretation: A systematic review of methodologies and applications in healthcare
by
Xie, Feng
,
Yuan, Han
,
Yu, Kunyu
in
Accuracy
,
Artificial intelligence
,
automated machine learning
2024
Machine learning (ML) has achieved substantial success in performing healthcare tasks, yet the configuration of every part of the ML pipeline relies heavily on technical knowledge. To help professionals with limited ML expertise to better use ML techniques, automated ML (AutoML) has emerged as a prospective solution. However, most models generated by AutoML are black boxes that are challenging to comprehend and deploy in healthcare settings. We conducted a systematic review to examine AutoML with interpretation systems for healthcare. We searched four databases (MEDLINE, EMBASE, Web of Science, and Scopus), complemented with seven prestigious ML conferences (AAAI, ACL, ICLR, ICML, IJCAI, KDD, and NeurIPS), for work reporting AutoML with interpretation for healthcare before September 1, 2023, and included 118 articles. First, we illustrated the AutoML techniques used in the included publications, including automated data preparation, automated feature engineering, and automated model development, accompanied by a real-world case study to demonstrate the advantages of AutoML over classic ML. Then, we summarized the interpretation methods: feature interaction and importance, data dimensionality reduction, intrinsically interpretable models, and knowledge distillation and rule extraction. Finally, we detailed how AutoML with interpretation has been used for six major data types: image, free text, tabular data, signal, genomic sequences, and multi-modality. To some extent, AutoML with interpretation provides effortless development and improves users' trust in ML in healthcare settings. In future studies, researchers should explore automated data preparation, seamless integration of automation and interpretation, compatibility with multi-modality, and utilization of foundation models.
Journal Article
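As a highly simplified stand-in for the "AutoML with interpretation" pattern reviewed above, the sketch below automates model selection over a small candidate space with scikit-learn's GridSearchCV and then explains the selected model with permutation importance. Real AutoML systems search far larger pipeline spaces; this only illustrates the two-step idea under those assumptions.

# Simplified stand-in for AutoML with interpretation: automated model selection
# over a few candidate pipelines, then explanation of the winner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression(max_iter=5000))])
search_space = [
    {"model": [LogisticRegression(max_iter=5000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier(random_state=0)], "model__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, search_space, cv=5).fit(X_tr, y_tr)  # "automated" model development

# Interpretation step applied to whichever model the search selected
imp = permutation_importance(search.best_estimator_, X_te, y_te, n_repeats=5, random_state=0)
print(search.best_params_, "test accuracy:", round(search.score(X_te, y_te), 3))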
A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability
by
Livieris, Ioannis E.
,
Pintelas, Emmanuel
,
Pintelas, Panagiotis
in
Accuracy
,
Algorithms
,
Artificial intelligence
2020
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models that are able to make very accurate predictions and decisions across a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning is a significant issue, since in most real-world problems it is considered essential to understand and explain the model's prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a grey-box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, which is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance, and medicine. Our results demonstrate the efficiency of the proposed model, which performs comparably to a black-box model and considerably outperforms single white-box models, while remaining as interpretable as a white-box model.
Journal Article
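The grey-box idea above pairs a black-box learner with a white-box one inside a self-training framework. The sketch below is one assumed way such a scheme could look, not the authors' exact algorithm: a random-forest teacher pseudo-labels the unlabeled data it is confident about, and a shallow decision-tree student is trained on the augmented set so that the final model stays inspectable.

# Assumed grey-box sketch: black-box teacher + self-training + white-box student.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, train_size=0.2, random_state=0)

teacher = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_lab, y_lab)

# Self-training step: keep only unlabeled samples the teacher is confident about
proba = teacher.predict_proba(X_unlab)
confident = proba.max(axis=1) >= 0.9
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, proba[confident].argmax(axis=1)])

# White-box student: a shallow tree trained on the augmented data stays interpretable
student = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_aug, y_aug)
print(export_text(student, feature_names=[f"f{i}" for i in range(10)]))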
Polycystic Ovary Syndrome Detection Machine Learning Model Based on Optimized Feature Selection and Explainable Artificial Intelligence
by
Alohali, Manal Abdullah
,
Saleh, Hager
,
Elmannai, Hela
in
Artificial intelligence
,
Diagnosis
,
ensemble learning
2023
Polycystic ovary syndrome (PCOS) has been classified as a severe health problem common among women globally. Early detection and treatment of PCOS reduce the possibility of long-term complications, such as an increased risk of developing type 2 diabetes and gestational diabetes. Therefore, effective and early PCOS diagnosis will help healthcare systems to reduce the disease's problems and complications. Machine learning (ML) and ensemble learning have recently shown promising results in medical diagnostics. The main goal of our research is to provide model explanations to ensure efficiency, effectiveness, and trust in the developed model through local and global explanations. Feature selection methods are combined with different types of ML models (logistic regression (LR), random forest (RF), decision tree (DT), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), XGBoost, and the AdaBoost algorithm) to obtain the optimal feature subset and the best model. Stacking ML models that combine the best base ML models with a meta-learner are proposed to improve performance. Bayesian optimization is used to optimize the ML models, and combining SMOTE (Synthetic Minority Oversampling Technique) and ENN (Edited Nearest Neighbour) addresses the class imbalance. Experiments were conducted on a benchmark PCOS dataset with two train-test split ratios, 70:30 and 80:20. The results showed that the stacking ML model with RFE feature selection recorded the highest accuracy, at 100%, compared to the other models.
Journal Article
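The abstract above combines SMOTE+ENN resampling, feature selection, and a stacking ensemble. The sketch below reconstructs that general recipe under assumptions, using synthetic data in place of the benchmark PCOS dataset and omitting the Bayesian optimization and explanation steps; it relies on imbalanced-learn's SMOTEENN, scikit-learn's RFE, and a StackingClassifier with a logistic-regression meta-learner.

# Illustrative reconstruction under assumptions, not the paper's exact setup:
# balance classes with SMOTE + ENN, select features with RFE, then stack a few
# base learners under a logistic-regression meta-learner.
from imblearn.combine import SMOTEENN  # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced data standing in for the benchmark PCOS dataset
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # fix class imbalance

selector = RFE(LogisticRegression(max_iter=5000), n_features_to_select=10).fit(X_bal, y_bal)
X_bal_sel, X_te_sel = selector.transform(X_bal), selector.transform(X_te)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("lr", LogisticRegression(max_iter=5000))],
    final_estimator=LogisticRegression(max_iter=5000),
)
stack.fit(X_bal_sel, y_bal)
print("hold-out accuracy:", round(stack.score(X_te_sel, y_te), 3))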
Is CSR a leading indicator of corporate restructuring performance? Evidence from explainable machine learning
by
He, Ling-Yang
,
Wang, Yuting
in
Corporate performance
,
Corporate social responsibility
,
Explainable machine learning
2026
Corporate social responsibility (CSR), associated with corporate reputation, has attracted considerable attention from both scholars and practitioners. However, empirical evidence concerning CSR's capacity to predict performance following corporate restructuring remains limited and inconclusive. This paper theoretically examines CSR's influence on post-restructuring performance through a reward-punishment framework and empirically assesses its predictive power using explainable machine learning. Our findings demonstrate that CSR serves as a significant predictor of post-restructuring performance. The analysis reveals a negative relationship, indicating that CSR functions as a restructuring punishment rather than a performance reward. Furthermore, both the predictive strength and the directional nature of this effect depend on the underlying motivations driving CSR engagement. These findings provide valuable insights for managers seeking to strategically align CSR initiatives with restructuring objectives and enhance the governance of restructuring processes.
Journal Article