Catalogue Search | MBRL
10,942 result(s) for "Facial Expression"
Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network
by Minaee, Shervin; Minaei, Mehdi; Abdolrashidi, Amirali
in Accuracy; attention mechanism; convolutional neural network
2021
Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face, and it achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that finds the facial regions important for detecting different emotions based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
Journal Article
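The spatial-attention idea described in the abstract above can be illustrated with a minimal NumPy sketch. This is not the authors' Deep-Emotion implementation; the 1x1 projection here uses a fixed random vector as a stand-in for a learned attention head, and the shapes are hypothetical.

```python
import numpy as np

def spatial_attention(features):
    """Weight each spatial location of a feature map by an attention score
    (here: a toy 1x1 channel projection followed by a spatial softmax),
    so that informative face regions dominate the pooled descriptor."""
    c, h, w = features.shape
    # Stand-in for a learned 1x1 conv: project channels to one score per location.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal(c) / np.sqrt(c)
    scores = np.tensordot(proj, features, axes=([0], [0]))  # shape (h, w)
    # Softmax over all spatial locations so the weights sum to 1.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted pooling: important regions contribute more.
    pooled = (features * weights).sum(axis=(1, 2))          # shape (c,)
    return weights, pooled

feat = np.random.default_rng(1).standard_normal((8, 6, 6))
w, desc = spatial_attention(feat)
```

A real attentional CNN would learn the projection jointly with the classifier; the sketch only shows how a spatial weighting map re-scales features before pooling.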
Make a face
by Alegria, Ricardo, Jr., author; Kuvarzina, Anya, illustrator
in Face Juvenile fiction.; Animals Juvenile fiction.; Facial expression Juvenile fiction.
2017
Make a Face is a fun, interactive, concept-driven picture book that shows how different facial expressions connect with different emotions by pairing them with corresponding animals who "come to life" as children make different faces on cue.
Macro- and Micro-Expressions Facial Datasets: A Survey
by Ghazouani, Haythem; Guerdelli, Hajer; Ferrari, Claudio
in Acquisitions & mergers; applications of facial expression datasets; Datasets
2022
Automatic facial expression recognition is essential for many potential applications. Thus, having a clear overview of the existing datasets that have been investigated within the framework of face expression recognition is of paramount importance in designing and evaluating effective solutions, notably for neural network-based training. In this survey, we provide a review of more than eighty facial expression datasets, taking into account both macro- and micro-expressions. The study focuses mostly on spontaneous and in-the-wild datasets, given the common research trend of considering contexts where expressions are shown spontaneously and in real settings. We also provide instances of potential applications of the investigated datasets, while highlighting their pros and cons. The proposed survey can help researchers better understand the characteristics of the existing datasets, thus facilitating the choice of the data that best suits the particular context of their application.
Journal Article
Read the face : face reading for success in your career, relationships, and health
\"Relearn the intuitive language of face reading From birth, face is our first language. We are born face readers-knowing to seek out human features and faces from the moment our eyes open. We all have the intuitive ability to read and interpret the feelings and expressions of those around us. In Read the Face, master face reader Eric Standop unlocks the power of this innate human ability, sharing his own journey to become a face reading master, along with stories that illustrate the power of this unique language. Using a combination of three different schools of face reading, along with a scientific accuracy to detect the most fleeting microexpressions, Standop is able to read personality, character, emotions, and even the state of a person's health-all from simply glancing at their face. The book is divided into sections focusing on specific ways that face reading can offer insight, such as Health, Love, Communication, Work and Success. The stories are accompanied by detailed black and white illustrations of faces, allowing readers to observe the same features that Standop interpreted. The final section of the book outlines the meanings of dozens of facial features and face shapes, so that readers can recognize their own innate intuitive powers and develop them. Read the Face is a guide to using the ancient art and science of face reading to go beyond the surface and create the boldest life possible\"-- Provided by publisher.
Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
by Akhmedov, Farkhod; Mukhiddinov, Mukhriddin; Cho, Jinsoo
in Accuracy; Artificial Intelligence; Blindness
2023
Current artificial intelligence systems for determining a person's emotions rely heavily on lip and mouth movement and on other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are typically classified incorrectly because of the dark region around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and feature analysis of the upper face with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. Initially, the lower part of the input facial image is covered by a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper features of the face. Secondly, we adopt a feature-extraction strategy based on facial landmark detection, using the features of the partially covered, masked face. Finally, the extracted features, the coordinates of the detected landmarks, and histograms of oriented gradients are incorporated into the classification procedure using a convolutional neural network. An experimental evaluation shows that the proposed method surpasses others by achieving an accuracy of 69.3% on the AffectNet dataset.
Journal Article
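The feature-fusion step described in the abstract above (landmark coordinates combined with oriented-gradient histograms over the unmasked upper face) can be sketched as follows. This is a simplified stand-in, not the paper's pipeline: the landmark points are hypothetical inputs, and a single global orientation histogram replaces a full blockwise HOG.

```python
import numpy as np

def upper_face_features(image, landmarks, n_bins=9):
    """Concatenate (hypothetical) eye/eyebrow landmark coordinates with a
    HOG-style orientation histogram computed over the unmasked upper half
    of the face, mimicking the feature-fusion idea in simplified form."""
    upper = image[: image.shape[0] // 2].astype(float)   # mask hides the lower half
    gy, gx = np.gradient(upper)                          # row and column derivatives
    mag = np.hypot(gx, gy)                               # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-8)                    # normalise the histogram
    return np.concatenate([landmarks.ravel(), hist])

img = np.random.default_rng(0).integers(0, 256, size=(48, 48))
pts = np.array([[12.0, 10.0], [12.0, 38.0], [20.0, 24.0]])  # e.g. eyes + nose bridge
vec = upper_face_features(img, pts)
```

The resulting vector (here 6 landmark values plus 9 histogram bins) is the kind of fused descriptor that would then be fed to a classifier.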
Self-Difference Convolutional Neural Network for Facial Expression Recognition
by Liu, Leyuan; Huo, Jiao; Jiang, Rubin
in difference-based method; Facial Expression; facial expression classification
2021
Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact, lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized "Self": an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance, with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA, respectively. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB ×6), which enables the SD-CNN to run on low-cost hardware.
Journal Article
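The self-difference idea in the abstract above (compare the test image's features against a synthesized version of each expression and pick the closest match) reduces, in toy form, to a nearest-neighbour decision in feature space. The sketch below uses random vectors as stand-ins for deep features and a plain Euclidean distance in place of the six learned DiffNets.

```python
import numpy as np

def classify_by_self_difference(test_feat, synth_feats):
    """Pick the expression whose synthesized 'self' image yields the
    smallest feature difference from the test image: a toy stand-in
    for the six DiffNets described in the paper."""
    diffs = np.linalg.norm(synth_feats - test_feat, axis=1)  # one distance per expression
    return int(np.argmin(diffs)), diffs

rng = np.random.default_rng(0)
synth = rng.standard_normal((6, 16))          # hypothetical features of 6 synthesized expressions
test = synth[2] + 0.01 * rng.standard_normal(16)  # test image close to expression index 2
label, dists = classify_by_self_difference(test, synth)
```

Because same-expression difference features cluster tightly, the smallest difference identifies the expression; the learned DiffNets replace this raw distance with a trained comparison.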
A face like glass
by Hardinge, Frances, author
in Emotions Juvenile fiction.; Facial expression Juvenile fiction.; Hallucinogenic drugs Juvenile fiction.
2017
When Neverfell, who has no memory, arrives in Caverna, her facial expressions make her very dangerous to the people who live with blank faces or pay dearly to learn to simulate emotions.
Emotional Expressions Reconsidered
by Pollak, Seth D.; Barrett, Lisa Feldman; Adolphs, Ralph
in Anger; Artificial intelligence; Classification
2019
It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. 
We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.
Journal Article