Catalogue Search | MBRL
58 result(s) for "explainable recommendation"
Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation
by Zhang, Yongfeng; Azizi, Vahid; Chen, Xu
in Algorithms, Collaboration, collaborative filtering
2018
Providing model-generated explanations in recommender systems is important to the user experience. State-of-the-art recommendation algorithms—especially collaborative filtering (CF)-based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various forms of implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is considered for recommendation, the system can provide highly customized recommendations based on users' historical behaviors, and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge in using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and, based on the embedded knowledge base, a soft matching algorithm to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verify the superior recommendation performance and explainability of our approach compared with state-of-the-art baselines.
Journal Article
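The abstract above describes two steps: embedding the heterogeneous entities of a knowledge base into one latent space, and then soft-matching embedded entities to produce explanations for a recommended item. A purely illustrative sketch of that general idea, with random TransE-style embeddings and invented toy entity names rather than the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: a user, items, and attribute entities from the knowledge base.
entities = ["user_a", "item_1", "item_2", "brand_x", "word_cheap"]
relations = ["purchase", "mentions"]
dim = 8

E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(head, rel, tail):
    """TransE-style plausibility of a triple: smaller distance = more plausible."""
    return -np.linalg.norm(E[head] + R[rel] - E[tail])

# Recommendation: rank items by plausibility of (user, purchase, item).
items = ["item_1", "item_2"]
ranked = sorted(items, key=lambda i: score("user_a", "purchase", i), reverse=True)

# Soft matching for explanation: find the attribute entity that best completes
# (item, mentions, ?) in the embedded space and surface it as the reason.
best_item = ranked[0]
attrs = ["brand_x", "word_cheap"]
reason = max(attrs, key=lambda a: score(best_item, "mentions", a))
print(f"recommend {best_item} because it mentions '{reason}'")
```

In the actual framework the embeddings are learned from observed triples; the soft match works because explanation entities live in the same space as users and items.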
Explaining recommendation system using counterfactual textual explanations
by Momtazi, Saeedeh; Homayoonpour, MohammadMehdi; Ranjbar, Niloofar
in Artificial Intelligence, Computer Science, Control
2024
Currently, a significant amount of research in artificial intelligence aims to improve the explainability and interpretability of deep learning models. It has been found that if end-users understand why some output was produced, it is easier for them to trust the system. Recommender systems are one example of systems for which great efforts have been made to make the output more explainable. One method for producing more explainable output is counterfactual reasoning, which involves altering minimal features to generate a counterfactual item that changes the output of the system. This process allows the identification of input features that have a significant impact on the desired output, leading to effective explanations. In this paper, we present a method for generating counterfactual explanations for both tabular and textual features. We evaluated the performance of our proposed method on three real-world datasets and demonstrated a +5% improvement in finding effective features (based on model-based measures) compared to the baseline method.
Journal Article
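The counterfactual idea in the abstract above (alter the fewest features needed to flip the system's output, and report those features as the explanation) can be sketched with a hypothetical threshold recommender; the feature names and weights below are invented for illustration, not taken from the paper:

```python
# Hypothetical recommender: scores an item from binary input features.
weights = {"price_low": 1.5, "brand_match": 0.8, "genre_match": 2.0}

def recommend(features):
    """Recommend iff the weighted feature sum crosses a threshold."""
    return sum(weights[f] for f in features) > 2.0

def counterfactual(features):
    """Greedily drop features until the decision flips.
    The dropped features form the counterfactual explanation:
    'you were recommended this because of these features'."""
    kept = list(features)
    dropped = []
    # Remove the highest-impact feature first until the output flips.
    for f in sorted(kept, key=lambda f: weights[f], reverse=True):
        kept.remove(f)
        dropped.append(f)
        if not recommend(kept):
            return dropped
    return dropped

expl = counterfactual(["price_low", "brand_match", "genre_match"])
print("recommended because of:", expl)
```

Real methods search over perturbations of tabular and textual features rather than greedily deleting them, but the flip-the-output criterion is the same.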
Explainable mutual fund recommendation system developed based on knowledge graph embeddings
2022
Because deep learning models have been used successfully in various fields during recent years, many recommendation systems have been developed using deep learning techniques. However, although deep learning–based recommendation systems have achieved high recommendation performance, their lack of interpretability may reduce users’ trust and satisfaction. In this study, we aimed to predict and recommend the purchase of funds by customers in the next month while simultaneously providing relevant explanations. To achieve this goal, we employed a knowledge graph structure and deep learning techniques to embed features of customers and funds into a unified latent space. With the proposed structure, we learned some information that could not be learned using traditional deep learning models and obtained personalized recommendations and explanations simultaneously. Moreover, we obtained complex explanations by changing the training procedure of the model and developed a measure for rating the customized explanations according to their strength and uniqueness. Finally, we obtained some possible special recommendations based on the knowledge graph structure. By evaluating the data set of mutual fund transaction records, we verified the effectiveness of the developed model for providing precise recommendations. We also conducted some case studies of explanations to demonstrate the effectiveness of the developed model for providing usual explanations, complex explanations, and other special recommendations.
Journal Article
To Explain or Not To Explain: An Empirical Investigation of AI-based Recommendations on Social Media Platforms
by Haque, AKM Bahalul; Mikalef, Patrick; Islam, Najmul
in Artificial intelligence, Business and Management, Comprehension
2025
Artificial intelligence integration into social media recommendations has significant promise for enhancing user experience. Frequently, however, suggestions fail to align with users’ preferences and result in unfavorable encounters. Furthermore, the lack of transparency in the social media recommendation system gives rise to concerns regarding its impartiality, comprehensibility, and interpretability. This study explores social media content recommendation from the perspective of end users. To facilitate our analysis, we conducted an exploratory investigation involving users of Facebook, a widely used social networking platform. We asked participants about the comprehensibility and explainability of suggestions for social media content. Our analysis shows that users mostly want explanations when encountering unfamiliar content and wish to be informed about their data privacy and security. Furthermore, users favor concise, non-technical, categorical representations of explanations along with the facility of controlled information flow. We observed that explanations impact users’ perception of the social media platform’s transparency, trust, and understandability. In this work, we have outlined design implications related to explainability and presented a synthesized framework of how various explanation attributes impact user experience. In addition, we proposed another synthesized framework for end user inclusion in designing an explainable interactive user interface.
Journal Article
Explainable recommendation with fusion of aspect information
by Wu, Yi; Hou, Yunfeng; Yu, Philip S
in Decomposition, Quantitative analysis, Recommender systems
2019
Explainable recommendation has attracted increasing attention from researchers. Existing methods, however, often suffer from two defects. One is the lack of quantitative, fine-grained explanations of why a user chooses an item, which makes recommendations less convincing. The other is that fine-grained information such as item aspects is not effectively utilized for making recommendations. In this paper, we investigate the problem of making quantitatively explainable recommendations at the aspect level. It is a nontrivial task due to the challenges of quantitatively evaluating aspects and fusing aspect information into recommendation. To address these challenges, we propose an Aspect-based Matrix Factorization model (AMF), which improves the accuracy of rating prediction by collaboratively decomposing the rating matrix with auxiliary information extracted from aspects. To quantitatively evaluate aspects, we propose two metrics: User Aspect Preference (UAP) and Item Aspect Quality (IAQ), which quantify a user's preference for a specific aspect and the review sentiment of an item on an aspect, respectively. With UAP and IAQ, we can quantitatively explain why a user chooses an item. To incorporate this information, we assemble UAPs and IAQs into two matrices, the UAP Matrix (UAPM) and the IAQ Matrix (IAQM), and fuse them as constraints into the collaborative decomposition of the item rating matrix. Extensive experiments conducted on real datasets verify the recommendation performance and explanatory ability of our approach.
Journal Article
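The UAP/IAQ idea above lends itself to a small numeric sketch: a user-by-aspect preference matrix and an item-by-aspect quality matrix whose elementwise product yields a quantitative, per-aspect reason for a recommendation. The matrices below are random stand-ins (in AMF they are mined from reviews and constrain the rating-matrix decomposition), and the aspect names are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

n_users, n_items, k = 4, 5, 3
aspects = ["battery", "screen", "price"]

# Latent factors from the collaborative decomposition (learned in practice;
# random here purely to illustrate the shapes involved).
U = rng.random((n_users, k))
V = rng.random((n_items, k))

# UAP: how much each user cares about each aspect (from their reviews).
# IAQ: review sentiment of each item on each aspect.
UAP = rng.random((n_users, len(aspects)))
IAQ = rng.random((n_items, len(aspects)))

def predict(u, i, alpha=0.5):
    """Fuse the latent-factor score with aspect-level agreement."""
    return U[u] @ V[i] + alpha * (UAP[u] @ IAQ[i])

def explain(u, i):
    """The aspect with the largest UAP*IAQ product is the quantitative
    reason why user u would choose item i."""
    contrib = UAP[u] * IAQ[i]
    return aspects[int(np.argmax(contrib))], float(contrib.max())

aspect, strength = explain(0, 2)
print(f"user 0 <- item 2 mainly because of '{aspect}' ({strength:.2f})")
```

The explanation is quantitative because each aspect's contribution is a number, not just a label, which is the property the abstract emphasizes.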
Knowledge-aware attentional neural network for review-based movie recommendation with explanations
by Liu, Yun; Miyazaki, Jun
in Artificial Intelligence, Computational Biology/Bioinformatics, Computational Science and Engineering
2023
In this paper, we propose a knowledge-aware attentional neural network (KANN) for movie recommendation tasks that extracts knowledge entities from movie reviews and captures understandable interactions between users and movies at the knowledge level. In most recommendation systems, review information is already widely utilized to uncover the explicit preferences of users for items, especially in domains such as movie, music, and book recommendation, as reviews are full of knowledge entities relevant to the domain. When processing review information, current methods usually use word embeddings to represent reviews for modeling users and items. As a result, they may split the meaning of a phrase and thereby induce erroneous predictions. Moreover, most methods capture high-order interactions between users and items after obtaining latent low-dimensional representations, which means they cannot discover understandable interactions or provide knowledge-level explanations. By incorporating knowledge graph representation into movie recommendation tasks, the proposed KANN can not only capture the inner attention among user (movie) reviews but also compute the outer attention values between users and movies before generating the corresponding latent vector representations. These characteristics enable the explicit preferences of users for movies to be learned and understood. We test our model on two datasets (IMDb and Amazon) for the movie rating prediction and click-through rate prediction tasks, and show that it outperforms some of the existing state-of-the-art models and achieves outstanding prediction performance in cases with very few reviews. Furthermore, we demonstrate the high explainability of the proposed KANN by visualizing the interaction between users and movies through a case study. Our results and analyses highlight the relatively high effectiveness and reliability of KANN for movie recommendation tasks.
Journal Article
Meta-path guided graph attention network for explainable herb recommendation
2023
Traditional Chinese Medicine (TCM) has been widely adopted in clinical practice by people in East Asia for thousands of years. Nowadays, TCM still plays a critical role in Chinese society and receives increasing attention worldwide. Existing herb recommenders learn the complex relations between symptoms and herbs by mining TCM prescriptions. Given a set of symptoms, they provide a set of herbs and explanations drawn from TCM theory. However, the foundation of TCM is Yinyangism (i.e., the combination of the Five Phases theory with Yin-yang theory), which is very different from the philosophy of modern medicine. Recommending herbs solely from the perspective of TCM theory largely isolates TCM from modern medical treatment. As TCM and modern medicine share a common view at the molecular level, it is necessary to integrate the ancient practice of TCM with the standards of modern medicine. In this paper, we explore the underlying action mechanisms of herbs from both TCM and modern medicine, and propose a Meta-path guided Graph Attention Network (MGAT) to provide explainable herb recommendations. Technically, to translate TCM from an experience-based medicine to an evidence-based medicine system, we incorporate the pharmacology knowledge of modern Chinese medicine with the TCM knowledge. We design a meta-path guided information propagation scheme based on the extended knowledge graph, which combines information propagation and the decision process. This scheme adopts meta-paths (predefined relation sequences) to guide neighbor selection in the propagation process. Furthermore, an attention mechanism is utilized in aggregation to help distinguish the salience of different paths connecting a symptom with a herb. In this way, our model can distill long-range semantics along meta-paths and generate fine-grained explanations. We conduct extensive experiments on a public TCM dataset, demonstrating performance comparable to state-of-the-art herb recommendation models together with strong explainability.
Journal Article
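A meta-path, as defined in the abstract above, is a predefined relation sequence that restricts which neighbors information may propagate through, and the traversed path doubles as the explanation. A toy sketch of that traversal over a handful of invented triples (real MGAT learns attention over many such paths; nothing here is from the paper's dataset):

```python
# Toy KG: (head, relation, tail) triples mixing TCM and pharmacology facts.
triples = [
    ("cough", "treated_by", "licorice"),
    ("cough", "related_to", "inflammation"),
    ("inflammation", "inhibited_by", "glycyrrhizin"),
    ("glycyrrhizin", "found_in", "licorice"),
]

def follow(entity, relation):
    """All tails reachable from `entity` via `relation`."""
    return [t for h, r, t in triples if h == entity and r == relation]

def meta_path_reach(start, meta_path):
    """Propagate only along a predefined relation sequence (meta-path),
    recording each full path so it can double as an explanation."""
    paths = [[start]]
    for rel in meta_path:
        paths = [p + [rel, t] for p in paths for t in follow(p[-1], rel)]
    return paths

# The meta-path symptom -related_to-> mechanism -inhibited_by-> compound
# -found_in-> herb yields a pharmacology-grounded recommendation path.
for path in meta_path_reach("cough", ["related_to", "inhibited_by", "found_in"]):
    print(" -> ".join(path))
```

Each surviving path names the intermediate mechanism and compound, which is exactly the fine-grained, evidence-style explanation the paper is after.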
Knowledge-aware reasoning with self-supervised reinforcement learning for explainable recommendation in MOOCs
by Wu, Pengcheng; Zeng, Wenhua; Zhang, Wei
in Accuracy, Artificial Intelligence, Computational Biology/Bioinformatics
2024
Explainable recommendation is important but not yet explored in Massive Open Online Courses (MOOCs). Recently, knowledge graphs (KGs) have achieved great success in explainable recommendation. However, the e-learning scenario has some unique constraints, such as learners' knowledge structure and course prerequisite requirements, which cause existing KG-based recommendation methods to work poorly in MOOCs. To address these issues, we propose a novel explainable recommendation model, namely Knowledge-aware Reasoning with self-supervised Reinforcement Learning (KRRL). Specifically, to enhance the semantic representations and relations in the KG, a multi-level representation learning method enriches the perceptual information of semantic interactions. Afterward, a self-supervised reinforcement learning method effectively guides the path reasoning over the KG to match the unique constraints of the e-learning scenario. We evaluate the KRRL model on two real-world MOOCs datasets. The experimental results show that KRRL evidently outperforms state-of-the-art baselines in terms of recommendation accuracy and explainability.
Journal Article
Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System
2023
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
Journal Article
CAESAR: context-aware explanation based on supervised attention for service recommendations
2021
Explainable recommendations have drawn more attention from both academia and industry recently, because they can help users better understand recommendations (i.e., why some particular items are recommended), therefore improving the persuasiveness of the recommender system and users’ satisfaction. However, little work has been done to provide explanations from the angle of a user’s contextual situations (e.g., companion, season, and destination if the recommendation is a hotel). To fill this research gap, we propose a new context-aware recommendation algorithm based on supervised attention mechanism (CAESAR), which particularly matches latent features to explicit contextual features as mined from user-generated reviews for producing context-aware explanations. Experimental results on two large datasets in hotel and restaurant service domains demonstrate that our model improves recommendation performance against the state-of-the-art methods and furthermore is able to return feature-level explanations that can adapt to the target user’s current contexts.
Journal Article
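CAESAR's core move, as described in the abstract above, is matching latent item features against explicit contextual features (companion, season, destination) via attention, so the highest-weighted context becomes the explanation. A minimal sketch of that matching step with random vectors standing in for CAESAR's supervised-attention-trained parameters; the context names are taken from the abstract, everything else is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

contexts = ["companion", "season", "destination"]
d = 6

# In CAESAR these are learned with a supervised attention loss against
# contexts mined from reviews; random here just to show the matching step.
item_latent = rng.random(d)
context_vecs = rng.random((len(contexts), d))

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention: how strongly the item's latent representation matches each
# explicit contextual feature.
weights = softmax(context_vecs @ item_latent)

top = contexts[int(np.argmax(weights))]
print(f"explanation focuses on '{top}' (weights: {np.round(weights, 2)})")
```

The supervision is what makes the weights interpretable: without it, attention over latent dimensions need not align with any human-readable context.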