Asset Details
Visual contextual perception and user emotional feedback in visual communication design
by
Zhu, Jiayi
in
Analysis
/ Artificial intelligence
/ Audiences
/ Behavioral Science and Psychology
/ Classification
/ Clinical Psychology
/ Cognitive Psychology
/ Communication
/ Deep learning
/ Design
/ Designers
/ Dual attention mechanism
/ Emotions
/ Feedback (Psychology)
/ Holistic and local feature
/ Humans
/ Multilayer CNN
/ Neural networks
/ Neural Networks, Computer
/ Principles
/ Psychology
/ Psychology Research
/ Sentiment analysis
/ Social Media
/ Visual communication
/ Visual Perception
2025
Journal Article
Overview
Background
With the advent of the information era, visual communication design has become increasingly important in ever more prevalent network applications. Prevailing sentiment analysis approaches in visual communication design have two shortcomings: they rely predominantly on holistic image information while overlooking the localized regions that most strongly convey emotion, and they fail to semantically mine the features of diverse channels. This work addresses both deficiencies.
Methods
This paper introduces DA-MLCNN, a dual-attention, multilayer feature fusion method. First, a multilayer convolutional neural network (CNN) feature extraction architecture fuses holistic and local information, extracting both the high-level and low-level features of the image. A spatial attention mechanism then strengthens the low-level features, while a channel attention mechanism strengthens the high-level features. Finally, the attention-enhanced features are fused to yield semantically richer, more discriminative visual features for training sentiment classifiers.
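As a concrete illustration, the following PyTorch sketch shows one way a dual-attention, multilayer CNN with holistic/local feature fusion could be structured. The layer widths, the particular spatial and channel attention formulations, and the three-class output are illustrative assumptions, not the authors' exact DA-MLCNN configuration.

# Minimal sketch of a dual-attention multilayer CNN feature fusion model
# (DA-MLCNN-style). Sizes and attention formulations are assumptions.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Weights each spatial location of a low-level feature map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Pool across channels, then predict a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class ChannelAttention(nn.Module):
    """Weights each channel of a high-level feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w

class DualAttentionMLCNN(nn.Module):
    """Multilayer CNN: spatial attention on low-level (local) features,
    channel attention on high-level (holistic) features, then fusion."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.low = nn.Sequential(      # shallow layers: local detail
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.high = nn.Sequential(     # deeper layers: holistic semantics
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.spatial_attn = SpatialAttention()
        self.channel_attn = ChannelAttention(256)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64 + 256, num_classes)

    def forward(self, x):
        low = self.low(x)                      # low-level (local) features
        high = self.high(low)                  # high-level (holistic) features
        low_a = self.spatial_attn(low)         # spatial attention on low level
        high_a = self.channel_attn(high)       # channel attention on high level
        fused = torch.cat([self.pool(low_a).flatten(1),
                           self.pool(high_a).flatten(1)], dim=1)
        return self.classifier(fused)          # sentiment logits

if __name__ == "__main__":
    logits = DualAttentionMLCNN()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 3])

The key design point mirrored here is that the two attention mechanisms act on different depths of the same backbone, and the attended feature maps are pooled and concatenated before classification rather than classified separately.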
Results
The proposed method attains classification accuracies of 79.8% and 55.8% on the Twitter 2017 and Emotion ROI datasets, respectively. On the Emotion ROI dataset, it further achieves per-class accuracies of 89%, 94%, and 91% for sadness, surprise, and joy.
Conclusions
The results on binary and multi-class emotion image datasets show that the proposed approach learns more discriminative visual features and thereby advances visual sentiment analysis. This improved performance can, in turn, drive innovation in visual communication design, opening up broader possibilities for designers.
Publisher
BioMed Central, BioMed Central Ltd, Springer Nature B.V, BMC