Catalogue Search | MBRL
Explore the vast range of titles available.
3,388 result(s) for "facial expression recognition"
Social signal processing
"Social Signal Processing is the first book to cover all aspects of the modeling, automated detection, analysis, and synthesis of nonverbal behavior in human-human and human-machine interactions. Authoritative surveys address conceptual foundations, machine analysis and synthesis of social signal processing, and applications. Foundational topics include affect perception and interpersonal coordination in communication; later chapters cover technologies for automatic detection and understanding such as computational paralinguistics and facial expression analysis and for the generation of artificial social signals such as social robots and artificial agents. The final section covers a broad spectrum of applications based on social signal processing in healthcare, deception detection, and digital cities, including detection of developmental diseases and analysis of small groups. Each chapter offers a basic introduction to its topic, accessible to students and other newcomers, and then outlines challenges and future perspectives for the benefit of experienced researchers and practitioners in the field" -- Provided by publisher.
Enhanced Sample Self-Revised Network for Cross-Dataset Facial Expression Recognition
2022
Cross-dataset facial expression recognition (FER) has recently received wide attention from researchers. Thanks to the emergence of large-scale facial expression datasets, cross-dataset FER has made great progress. Nevertheless, facial images in large-scale datasets suffering from low quality, subjective annotation, severe occlusion, and rare subject identities can lead to outlier samples in facial expression datasets. These outlier samples are usually far from the clustering center of the dataset in feature space, resulting in considerable differences in feature distribution that severely restrict the performance of most cross-dataset FER methods. To eliminate the influence of outlier samples on cross-dataset FER, we propose the enhanced sample self-revised network (ESSRN), with a novel outlier-handling mechanism that first seeks out these outlier samples and then suppresses them when dealing with cross-dataset FER. To evaluate the proposed ESSRN, we conduct extensive cross-dataset experiments across the RAF-DB, JAFFE, CK+, and FER2013 datasets. Experimental results demonstrate that the proposed outlier-handling mechanism effectively reduces the negative impact of outlier samples on cross-dataset FER, and that ESSRN outperforms classic deep unsupervised domain adaptation (UDA) methods and recent state-of-the-art cross-dataset FER results.
Journal Article
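The outlier-handling idea in the abstract above — samples far from the dataset's clustering center in feature space distort cross-dataset training — can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's ESSRN: the centroid-distance score, the hard zero/one weighting, the k = 2 cutoff, and the toy data are all assumptions.

```python
import numpy as np

def suppress_outliers(features, k=2.0):
    # Distance of every sample to the clustering center (here: the centroid).
    center = features.mean(axis=0)
    dists = np.linalg.norm(features - center, axis=1)
    # Zero-weight samples more than k standard deviations beyond the mean
    # distance; everything else keeps full weight.
    cutoff = dists.mean() + k * dists.std()
    return (dists <= cutoff).astype(float)

# Toy data: 50 tight inlier feature vectors plus one far-away outlier.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 8)), np.full((1, 8), 10.0)])
weights = suppress_outliers(feats)
```

The returned weights could then scale each sample's loss, so the outlier contributes nothing while the inliers train normally.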
LTVAL: Label Transfer Virtual Adversarial Learning framework for source-free facial expression recognition
by Guo, Zhe; Pan, Zhaojun; Liu, Shiya
in Clustering; Computer Communication Networks; Computer Science
2024
Previous research on cross-domain Facial Expression Recognition (FER) has mainly focused on metric learning or adversarial learning, which presupposes access to source-domain data to find domain-invariant information. In practical applications, however, the high privacy and sensitivity of face data often make it impossible to obtain source-domain data directly, so these methods cannot be applied effectively. To better apply cross-domain FER in real scenarios, this paper proposes a source-free FER method called Label Transfer Virtual Adversarial Learning (LTVAL), which does not need direct access to source-domain data. First, we train the target-domain model under an information-maximization constraint and obtain pseudo-labels for the target-domain data through deep clustering, achieving label transfer. Second, perturbations are added to the target-domain samples, and the perturbed and original samples are used together for virtual adversarial training with a local distributional smoothing constraint. Finally, a joint loss function is constructed to optimize the target-domain model. Using a source-domain model trained on RAF-DB, experiments with four public datasets (FER2013, JAFFE, CK+, and EXPW) as target domains show that our approach achieves much higher performance than state-of-the-art cross-domain FER methods that require access to source-domain data.
Journal Article
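The label-transfer step described above — pseudo-labels for target-domain data obtained through clustering — can be illustrated with plain k-means in numpy. This is a hypothetical simplification: the paper's deep clustering operates on learned features under an information-maximization constraint, all of which is omitted here.

```python
import numpy as np

def pseudo_labels(features, n_classes, n_iter=10):
    # Farthest-point initialisation keeps this sketch deterministic.
    centers = [features[0]]
    for _ in range(n_classes - 1):
        d = np.min([((features - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(features[np.argmax(d)])
    centers = np.array(centers)
    # Standard k-means alternation: assign to nearest center, recompute means.
    for _ in range(n_iter):
        labels = np.argmin(((features[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for c in range(n_classes):
            if (labels == c).any():
                centers[c] = features[labels == c].mean(axis=0)
    return labels

# Two well-separated "expression" clusters of target-domain features.
feats = np.concatenate([np.zeros((5, 2)), np.full((5, 2), 5.0)])
labels = pseudo_labels(feats, 2)
```

In the source-free setting these cluster assignments stand in for the labels the inaccessible source data would otherwise provide.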
Deep Neural Networks for Automatic Facial Expression Recognition
by Gatram Rama Mohan Babu; Ravichandran, Suban; Venkata Srinivasu Veesam
in Artificial neural networks; Classification; Communication
2022
Of all non-linguistic communication, facial expression is one of the most popular and is capable of communicating effectively with others. Facial expressions have applications in assorted arenas, including medicine and psychology, security, gaming, classroom communication, and commercial creativity. Owing to huge intra-class variation, it is still challenging to recognize emotions automatically from facial expressions, though it has been a vigorous area of research for decades. Conventional approaches rely on hand-crafted features such as the Scale-Invariant Feature Transform, Histogram of Oriented Gradients, and Local Binary Patterns, followed by a classifier applied to a dataset. As deep learning has proved an outstanding fit, various architectures have been applied for improved performance. The goal of this study is to create a deep learning model for automatic facial expression recognition (FER). The proposed model focuses on extracting the crucial features, thereby improving expression recognition accuracy, and beats the competition on the FER2013 dataset.
Journal Article
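Among the hand-crafted descriptors named in the abstract above, Local Binary Patterns are simple enough to sketch directly: each interior pixel is encoded as an 8-bit code by comparing its eight neighbours against its own intensity. A minimal 3x3 numpy version (no rotation invariance or uniform-pattern mapping):

```python
import numpy as np

def lbp(image):
    # Neighbour offsets, walked clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    center = image[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Set bit `bit` wherever the neighbour is at least as bright as the center.
    for bit, (dy, dx) in enumerate(offsets):
        neigh = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes
```

A histogram of these codes over face patches is the feature vector such conventional pipelines would feed to a classifier.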
AI-Based Visual Early Warning System
by Al-Tekreeti, Zeena; Madrigal Garcia, Maria Isabel; Rodrigues, Marcos A.
in Artificial intelligence; Artificial neural networks; automatic facial expression recognition (AFER)
2024
Facial expressions are a universally recognised means of conveying internal emotional states across diverse human cultural and ethnic groups. Recent advances in understanding people’s emotions expressed through verbal and non-verbal communication are particularly noteworthy in the clinical context for the assessment of patients’ health and well-being. Facial expression recognition (FER) plays a vital role in health care, providing insight into a patient’s feelings and allowing the assessment and monitoring of mental and physical health conditions. This paper shows that automatic machine learning methods can predict health deterioration accurately and robustly, independently of human subjective assessment. The aim of this work is to discover early signs of deteriorating health, in line with the principles of preventive care, improving health outcomes and human survival and promoting overall health and well-being. Methods are therefore developed to create a facial database mimicking the underlying muscular structure of the face, whose Action Unit motions can be transferred to human face images, thus displaying animated expressions of interest. An automatic system based on convolutional neural networks (CNN) and long short-term memory (LSTM) is then built to recognise patterns of facial expressions, with a focus on patients at risk of deterioration in hospital wards. This research presents state-of-the-art results on generating and modelling a synthetic database and on automated deterioration prediction through facial expressions, with 99.89% accuracy.
The main contributions to knowledge from this paper can be summarized as (1) the generation of visual datasets mimicking real-life samples of facial expressions indicating health deterioration, (2) improvement of the understanding and communication with patients at risk of deterioration through facial expression analysis, and (3) development of a state-of-the-art model to recognize such facial expressions using a ConvLSTM model.
Journal Article
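The temporal half of the CNN+LSTM pipeline described above can be sketched as a single LSTM step in numpy: per-frame CNN features flow through gated recurrence so the prediction can depend on how an expression evolves. The gate weights W, U, b and the frame feature x are hypothetical placeholders; the CNN feature extractor and the paper's actual architecture are omitted.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # Stacked gate pre-activations: input, forget, output, candidate.
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)   # new cell state
    h = sig(o) * np.tanh(c)                # new hidden state
    return h, c

# One step over a (hypothetical) 3-dimensional frame feature, zero weights.
d = 3
W, U, b = np.zeros((4 * d, d)), np.zeros((4 * d, d)), np.zeros(4 * d)
h, c = lstm_step(np.zeros(d), np.zeros(d), np.zeros(d), W, U, b)
```

Running this step over every frame of a video and classifying the final hidden state is the usual shape of such a deterioration-prediction pipeline.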
Recognition of facial expressions based on CNN features
by González-Lozoya, Sonia M; Benitez-Ruiz, Antonio; Escalante, Hugo Jair
in Algorithms; Artificial neural networks; Datasets
2020
Facial expressions are a natural way to communicate emotional states and intentions. In recent years, automatic facial expression recognition (FER) has been studied for its practical importance in many human-behavior analysis tasks such as interviews, autonomous driving, and medical treatment, among others. In this paper we propose a method for facial expression recognition based on features extracted with convolutional neural networks (CNN), taking advantage of a model pre-trained on similar tasks. Unlike other approaches, the proposed FER method learns from mixed instances taken from different databases with the goal of improving generalization, a major issue in machine learning. Experimental results show that the FER method is able to recognize the six universal expressions with an accuracy above 92% on five of the widely used databases. In addition, we have extended our method to deal with micro-expression recognition (MER). In this regard, we propose three strategies to create a temporally aggregated feature vector: mean, standard deviation, and early fusion. In this case, the best result is 78.80% accuracy. Furthermore, we present a prototype system that implements the two proposed methods for FER and MER as a tool for analyzing videos.
Journal Article
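The three temporal-aggregation strategies named above collapse a T x D matrix of per-frame CNN features into one clip-level vector. A minimal numpy sketch; note the "early fusion" branch here — concatenating the mean and standard-deviation vectors — is an assumption about what is meant, not taken from the paper.

```python
import numpy as np

def aggregate(frames, strategy="mean"):
    # frames: (T, D) array of per-frame feature vectors.
    if strategy == "mean":
        return frames.mean(axis=0)
    if strategy == "std":
        return frames.std(axis=0)
    if strategy == "early":
        # Assumed interpretation: fuse both statistics into one 2D-dim vector.
        return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])
    raise ValueError(f"unknown strategy: {strategy}")
```

The resulting fixed-length vector can be fed to any standard classifier regardless of clip length T.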
A survey on facial expression recognition in 3D video sequences
by Theoharis, Theoharis; Danelakis, Antonios; Pratikakis, Ioannis
in 3-D graphics; Analysis; Computer Communication Networks
2015
Facial expression recognition constitutes an active research area due to its various applications. This survey addresses methodologies for 3D mesh video facial expression recognition. Recognition is, in fact, a special case of intra-class retrieval. The approaches are analyzed and compared in detail, categorized primarily according to the 3D dynamic face analysis technique used. In addition, currently available datasets used for 3D video facial expression analysis are presented. Finally, future challenges that can be addressed to further improve the field of 3D video facial expression recognition are extensively discussed.
Journal Article
Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network
by Minaee, Shervin; Minaei, Mehdi; Abdolrashidi, Amirali
in Accuracy; attention mechanism; convolutional neural network
2021
Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
Journal Article
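The attentional idea above — letting the network weight important face regions more heavily — can be sketched generically: softmax a map of per-location scores and use it to pool a CNN feature map. This illustrates attentional pooling in numpy, not the paper's actual Deep-Emotion architecture; the scores would normally come from a learned sub-network.

```python
import numpy as np

def spatial_attention(feature_map, scores):
    # feature_map: (C, H, W) CNN activations; scores: (H, W) attention logits.
    # Softmax over all spatial locations (max-shifted for stability).
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Attention-weighted pooling: salient locations dominate the descriptor.
    return (feature_map * w[None]).sum(axis=(1, 2))   # -> (C,) vector
```

With uniform scores this reduces to ordinary average pooling; learned scores let regions like the eyes or mouth dominate for the emotions that depend on them.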
Mapping Forests: A Comprehensive Approach for Nonlinear Mapping Problems
by Jampour, Mahdi; Bischof, Horst; Moin, Mohammad-Shahram
in Applications of Mathematics; Computer Science; Computer vision
2018
A new and robust mapping approach, entitled mapping forests (MF), is proposed for computer vision applications based on regression transformations. Mapping forests rely on learning nonlinear mappings deduced from pairs of source and target training data, and improve the performance of mappings by enabling nonlinear transformations using forests. In contrast to previous approaches, MF provides automatically selected mappings, which are naturally nonlinear. MF can provide accurate nonlinear transformations to compensate for the limitations of linear mappings, or can generalize nonlinear mappings with constraint kernels. In our experiments, we demonstrate that the proposed MF approach is not only on a par with or better than linear mapping approaches and the state of the art, but is also very time efficient, which makes it an attractive choice for real-time applications. We evaluated the efficiency and performance of the MF approach on the BU3DFE and Multi-PIE datasets for multi-view facial expression recognition, and on the Set5, Set14 and SuperTex136 datasets for single-image super-resolution.
Journal Article
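The core ingredient of a tree-based regression mapping like the one described above can be illustrated with a single regression stump, the simplest such tree, fitted to source/target training pairs. The full method ensembles many deeper trees; this numpy sketch only shows why a tree captures a nonlinearity (here a step) that no linear map can.

```python
import numpy as np

def fit_stump(x, y):
    # Try every candidate threshold; keep the split with least squared error.
    best_err, best = np.inf, None
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        err = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if err < best_err:
            best_err, best = err, (t, left.mean(), right.mean())
    return best  # (threshold, left prediction, right prediction)

def predict(stump, x):
    t, lo, hi = stump
    return np.where(x < t, lo, hi)
```

Averaging many such trees over random subsets of the training pairs yields the forest-style nonlinear mapping the abstract refers to.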