Asset Details
Deep-Learning-Based Multimodal Emotion Classification for Music Videos
by Pandeya, Yagya Raj; Bhattarai, Bhuwan; Lee, Joonwhoan
in channel and filter separable convolution / Datasets / Emotions / end-to-end emotion classification / Information sources / Neural networks / unimodal and multimodal
2021
Journal Article
Overview
Music videos contain a great deal of visual and acoustic information. Each information source within a music video influences the emotions conveyed through the audio and video, suggesting that only a multimodal approach is capable of achieving efficient affective computing. This paper presents an affective computing system that relies on music, video, and facial expression cues, making it useful for emotion analysis. We applied audio–video information exchange and boosting methods to regularize the training process and reduced the computational cost by using a separable convolution strategy. In sum, our empirical findings are as follows: (1) multimodal representations efficiently capture all acoustic and visual emotional cues included in each music video, (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channel and spatiotemporal interactions, and (3) information-sharing methods incorporated into multimodal representations help guide individual information flow and boost overall performance. We tested our findings across several unimodal and multimodal networks against various evaluation metrics and visual analyzers. Our best classifier attained 74% accuracy, an F1-score of 0.73, and an area-under-the-curve score of 0.926.
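The convolution factorization mentioned in finding (2) can be illustrated with a short sketch. This is a minimal example assuming PyTorch; the class names, kernel sizes, and tensor shapes are illustrative assumptions, not the authors' published implementation. It only shows how replacing a standard 3D convolution with depthwise spatial and temporal filters followed by a 1x1x1 channel-mixing convolution shrinks the parameter count while keeping the output shape.

# Minimal sketch (assumed PyTorch; illustrative sizes, not the authors' code):
# factorize a standard 3D convolution into depthwise spatial + temporal filters
# and a pointwise (1x1x1) convolution that handles the channel mixing.
import torch
import torch.nn as nn

class Standard3DConv(nn.Module):
    """Plain 3D convolution: channel mixing and spatiotemporal filtering are coupled."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.conv(x)

class SeparableSpatioTemporalConv(nn.Module):
    """Factorized variant: depthwise spatial conv, depthwise temporal conv,
    then a 1x1x1 pointwise conv for the cross-channel interaction."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Depthwise spatial filtering: one (1, k, k) filter per input channel.
        self.spatial = nn.Conv3d(in_ch, in_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2), groups=in_ch)
        # Depthwise temporal filtering: one (k, 1, 1) filter per input channel.
        self.temporal = nn.Conv3d(in_ch, in_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0), groups=in_ch)
        # Pointwise convolution mixes channels and sets the output width.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.temporal(self.spatial(x)))

def count_params(module):
    return sum(p.numel() for p in module.parameters())

if __name__ == "__main__":
    x = torch.randn(2, 64, 8, 56, 56)  # (batch, channels, frames, height, width)
    full = Standard3DConv(64, 128)
    sep = SeparableSpatioTemporalConv(64, 128)
    print("standard 3D conv parameters:", count_params(full))    # roughly 221k
    print("separable conv parameters:  ", count_params(sep))     # roughly 9k
    print("output shapes match:", full(x).shape == sep(x).shape)  # True

The parameter and compute saving grows with the kernel volume and the channel count, which is the source of the reduced computational cost described in the overview above.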
Publisher
MDPI AG