Asset Details
Multimodal recommender system based on multi-channel counterfactual learning networks
by Liang, Jindong; Fang, Hong; Sha, Leiyuxin
in Collaboration / Computer Communication Networks / Computer Graphics / Computer Science / Cotton / Cryptology / Data Storage Representation / Feature extraction / Inference / Learning / Multimedia Information Systems / Operating Systems / Preferences / Recommender systems / Regular Paper / Semantics
2024
Journal Article
Overview
Most multimodal recommender systems use the multimodal content of user-interacted items as supplemental information to capture user preferences from historical interactions, without considering user-uninteracted items. In contrast, multimodal recommender systems based on counterfactual learning from causal inference exploit the causal difference between the multimodal content of user-interacted and user-uninteracted items to purify the content related to user preferences. However, existing methods adopt a single unified multimodal channel that treats every modality equally, so they cannot distinguish users' tastes for different modalities or reflect differences in users' attention to and perception of each modality's content. To address this issue, this paper proposes a novel recommender system based on multi-channel counterfactual learning (MCCL) networks that captures fine-grained user preferences on different modalities. First, two independent channels, one for image content and one for text content, are established for modality-specific feature extraction. Then, leveraging the counterfactual theory of causal inference, features in each channel that are unrelated to user preferences are eliminated using the features of user-uninteracted items; preference-related features are enhanced, and multimodal user preferences are modeled at the content level, portraying each user's taste for the different modalities of items. Finally, semantic entities are extracted to model semantic-level multimodal user preferences, which are fused with historical interaction information and the content-level preferences for recommendation. Extensive experiments on three datasets show that our method improves NDCG by up to 4.17% over the strongest baseline.
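The core idea in the overview — per-modality channels whose features are purified by subtracting what the user did not interact with — can be illustrated with a minimal sketch. This is not the paper's MCCL implementation: the projection weights, the mean-difference "counterfactual" step, and all names here are simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality projection weights: one independent channel
# per modality (image, text), as in the multi-channel design above.
W_IMG = rng.standard_normal((16, 8)) / np.sqrt(16)
W_TXT = rng.standard_normal((32, 8)) / np.sqrt(32)

def purify(pos, neg, w, alpha=0.5):
    """Project a modality into its channel, then subtract the mean of
    user-uninteracted (negative) items to suppress content unrelated to
    preference -- a simple mean-difference stand-in for the paper's
    counterfactual purification step (an assumption, not the real method)."""
    return (pos @ w).mean(axis=0) - alpha * (neg @ w).mean(axis=0)

# Toy raw features (hypothetical data): 5 items per group.
img_pos = rng.standard_normal((5, 16))   # image features, interacted items
img_neg = rng.standard_normal((5, 16))   # image features, uninteracted items
txt_pos = rng.standard_normal((5, 32))   # text features, interacted items
txt_neg = rng.standard_normal((5, 32))   # text features, uninteracted items

# Content-level user preference: purify each channel separately, then fuse,
# so the two modalities keep distinct, per-channel taste signals.
user_pref = np.concatenate([
    purify(img_pos, img_neg, W_IMG),
    purify(txt_pos, txt_neg, W_TXT),
])

def score(item_img, item_txt):
    """Rank a candidate item by the similarity of its fused channel
    embedding to the purified user preference vector."""
    item = np.concatenate([item_img @ W_IMG, item_txt @ W_TXT])
    return float(user_pref @ item)
```

Keeping the channels separate until the final fusion is what lets the model express, say, a user who cares more about an item's images than its text; a single shared channel would average that distinction away.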
Publisher
Springer Berlin Heidelberg, Springer Nature B.V.