Enhanced Hybrid Vision Transformer with Multi-Scale Feature Integration and Patch Dropping for Facial Expression Recognition

Journal Article (2024)

by Li, Xinyuan; Xiao, Zhiguo; Li, Nianfeng; Wang, Zhenyan; Huang, Yongyuan; Fan, Ziyao

Subjects: Accuracy / Algorithms / attention module / Automated Facial Recognition - methods / Datasets / Deep learning / Efficiency / Face / Facial Expression / facial expression recognition / Humans / Image Processing, Computer-Assisted - methods / lightweight network / Neural networks / Neural Networks, Computer / Pattern Recognition, Automated - methods / transformer
Overview
Convolutional neural networks (CNNs) have made significant progress in the field of facial expression recognition (FER). However, occlusion, lighting variations, and changes in head pose make FER in real-world environments highly challenging. At the same time, purely CNN-based methods rely heavily on local spatial features, lack global information, and struggle to balance computational complexity against recognition accuracy; as a result, they still fall short of addressing FER adequately. To address these issues, we propose a lightweight facial expression recognition method based on a hybrid vision transformer. This method captures multi-scale facial features through an improved attention module, achieving richer feature integration, enhancing the network's perception of key facial expression regions, and improving feature extraction capabilities. Additionally, to further enhance the model's performance, we have designed the patch dropping (PD) module. This module emulates the attention allocation mechanism of the human visual system for local features, guiding the network to focus on the most discriminative features, reducing the influence of irrelevant features, and directly lowering computational costs. Extensive experiments demonstrate that our approach outperforms comparable methods, achieving an accuracy of 86.51% on RAF-DB and nearly 70% on FER2013, with a model size of only 3.64 MB. These results demonstrate that our method provides a new perspective for the field of facial expression recognition.
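The abstract does not spell out how the patch dropping (PD) module is implemented; a common way to realize attention-guided patch dropping in vision transformers is to rank patch tokens by an attention score and keep only the top fraction. The sketch below illustrates that general idea with NumPy; the function name `patch_drop`, the `keep_ratio` parameter, and the scoring scheme are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def patch_drop(patch_features, attention_scores, keep_ratio=0.7):
    """Illustrative patch-dropping step (assumed design, not the paper's).

    patch_features:   (n_patches, dim) array of patch tokens
    attention_scores: (n_patches,) importance score per patch
    keep_ratio:       fraction of patches to retain
    Returns the retained features and their original indices.
    """
    n_patches = patch_features.shape[0]
    n_keep = max(1, int(n_patches * keep_ratio))
    # Indices of the n_keep highest-scoring patches
    keep_idx = np.argsort(attention_scores)[-n_keep:]
    # Preserve the original spatial ordering of the surviving patches
    keep_idx = np.sort(keep_idx)
    return patch_features[keep_idx], keep_idx

# Toy example: 8 patches with 4-dimensional features
feats = np.arange(32, dtype=float).reshape(8, 4)
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6])
kept, idx = patch_drop(feats, scores, keep_ratio=0.5)
# Half of the patches are dropped; the rest keep their original order
```

Dropping low-scoring tokens before the later transformer blocks reduces the sequence length those blocks process, which is how a mechanism like this can lower computational cost while steering the network toward discriminative regions.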
Publisher
MDPI AG