Catalogue Search | MBRL
Search Results
72 result(s) for "Zamansky, Anna"
Investigating the capabilities of large vision language models in dog emotion recognition
2025
Identifying emotional states in animals is a key challenge in behavioural science and a prerequisite for developing reliable welfare assessments, ethical frameworks, and robust human–animal communication models. Recently, large vision-language models (LVLMs) such as GPT-4o, Gemini, and LLaVA have shown promise in general image understanding tasks, and are beginning to be applied for emotion recognition in animals. In this study, we critically evaluated the ability of state-of-the-art LVLMs to classify emotional states in dogs using a zero-shot approach. We assessed model performance on two datasets: (1) the Dog Emotions (DE) dataset, consisting of web-sourced images with layperson-generated emotion labels, and (2) the Labrador Retriever cropped-face (LRc) dataset, which stems from a rigorously controlled experimental study where emotional states were systematically elicited in dogs and defined based on the experimental context in canine emotion research. Our results revealed that while LVLMs showed moderate classification accuracy on DE, performance was likely driven by superficial correlations, such as background context and breed morphology. When evaluated on LRc, where emotional states are experimentally induced and backgrounds are minimal, performance dropped to near-chance levels, indicating limited ability to generalise based on biologically relevant cues. Background manipulation experiments further confirmed that models relied heavily on contextual features. Prompt variation and system-level instructions slightly improved response rates but did not enhance classification accuracy. These findings highlight significant limitations in the current application of LVLMs to non-human species and raise ethical and epistemological concerns regarding potential anthropocentric biases embedded in their training data. We advocate for species-sensitive AI approaches grounded in validated behavioural science, emphasising the need for high-quality, preferably experimentally based multimodal datasets and more transparent validation. Our study underscores both the potential and the risks of using general-purpose AI to infer internal states in animals and calls for rigorous, interdisciplinary development of animal-centred computational approaches.
Journal Article
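The zero-shot protocol described in the abstract above amounts to prompting a vision-language model with a dog image and a fixed label set, then reading back a single label. A minimal sketch is given below, assuming the OpenAI Python client; the model name, prompt wording, and label set are illustrative assumptions, not the study's protocol.

```python
# Minimal zero-shot LVLM emotion-classification sketch (illustrative; the label
# set, prompt wording, and model name are assumptions, not the study's protocol).
import base64
from openai import OpenAI

LABELS = ["angry", "happy", "relaxed", "sad"]  # hypothetical label set

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_dog_emotion(image_path: str) -> str:
    """Ask a vision-language model for a single emotion label, zero-shot."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Classify the dog's emotional state. "
                         f"Answer with exactly one word from: {', '.join(LABELS)}."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=5,
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# print(classify_dog_emotion("labrador_face.jpg"))
```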
Automated recognition of pain in cats
by Finka, Lauren R.; Luna, Stelio P. L.; Mills, Daniel S.
in 631/601/18; 639/705/1042; 639/705/117
2022
Facial expressions in non-human animals are closely linked to their internal affective states, with the majority of empirical work focusing on facial shape changes associated with pain. However, existing tools for facial expression analysis are prone to human subjectivity and bias, and in many cases also require special expertise and training. This paper presents the first comparative study of two different paths towards automating pain recognition in facial images of domestic short-haired cats (n = 29), captured during ovariohysterectomy at different time points corresponding to varying intensities of pain. One approach is based on convolutional neural networks (ResNet50), while the other uses machine learning models based on geometric landmark analysis inspired by species-specific Facial Action Coding Systems (i.e. CatFACS). Both approaches reach comparable accuracies above 72%, indicating their potential usefulness as a basis for automating cat pain detection from images.
Journal Article
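The abstract above contrasts a ResNet50-based pipeline with landmark-based machine learning. A minimal sketch of the CNN branch follows, assuming PyTorch and torchvision; the frozen-backbone transfer-learning setup, batch size, and learning rate are illustrative assumptions rather than the paper's training recipe.

```python
# Sketch of the CNN branch: a ResNet50 backbone fine-tuned for binary
# pain / no-pain classification of cat face crops (hyperparameters illustrative).
import torch
import torch.nn as nn
from torchvision import models

def build_pain_classifier(num_classes: int = 2) -> nn.Module:
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in backbone.parameters():      # freeze the pretrained feature extractor
        p.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head
    return backbone

model = build_pain_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# one illustrative training step on a dummy batch of 224x224 face crops
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```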
Dog facial landmarks detection and its applications for facial analysis
by Martvel, George; Canori, Chiara; Bremhorst, Annika
in 631/601/18; 639/705/117; Animal emotion recognition
2025
Automated analysis of facial expressions is a crucial challenge in the emerging field of animal affective computing. One of the most promising approaches in this context is facial landmarks, which are well studied for humans and are now being adopted for many non-human species. The scarcity of high-quality, comprehensive datasets is a significant challenge in the field. This paper presents Dog Facial Landmarks in the Wild (DogFLW), a novel dataset containing 3732 images of dogs annotated with facial landmarks and bounding boxes. Our facial landmark scheme has 46 landmarks grounded in canine facial anatomy and the Dog Facial Action Coding System (DogFACS), and informed by existing cross-species landmarking methods. We additionally provide a benchmark for dog facial landmark detection and demonstrate two case studies for landmark detection models trained on the DogFLW. The first is a pipeline using landmarks for classifying emotions from dog facial expressions in video, and the second is the recognition of DogFACS facial action units (variables), which can enhance the DogFACS coding process by reducing the time needed for manual annotation. The DogFLW dataset aims to advance the field of animal affective computing by facilitating the development of more accurate, interpretable, and scalable tools for analysing facial expressions in dogs, with broader potential applications in behavioural science, veterinary practice, and animal-human interaction research.
Journal Article
Explainable automated recognition of emotional states from canine facial expressions: the case of positive anticipation and frustration
by Distelfeld, Tomer; Bremhorst, Annika; Shimshoni, Ilan
in 631/601/18; 639/705/117; Animal research
2022
In animal research, automation of affective state recognition has so far mainly addressed pain in a few species. Emotional states remain uncharted territory, especially in dogs, due to the complexity of their facial morphology and expressions. This study contributes to filling this gap in two respects. First, it is the first to address dog emotional states using a dataset obtained in a controlled experimental setting, including videos from (n = 29) Labrador Retrievers assumed to be in two experimentally induced emotional states: negative (frustration) and positive (anticipation). The dogs' facial expressions were measured using the Dog Facial Action Coding System (DogFACS). Two different approaches are compared in relation to our aim: (1) a DogFACS-based approach with a two-step pipeline consisting of (i) a DogFACS variable detector and (ii) a positive/negative state decision tree classifier; (2) an approach using deep learning techniques with no intermediate representation. The approaches reach accuracies above 71% and 89%, respectively, with the deep learning approach performing better. Secondly, this study is also the first to examine the explainability of AI models in the context of emotion in animals. The DogFACS-based approach provides decision trees, a mathematical representation that reflects previous findings by human experts relating certain facial expressions (DogFACS variables) to specific emotional states. The deep learning approach offers a different, visual form of explainability in the form of heatmaps reflecting the regions on which the network's attention focuses, which in some cases are clearly related to the nature of particular DogFACS variables. These heatmaps may hold the key to novel insights into the network's sensitivity to nuanced pixel patterns reflecting information invisible to the human eye.
Journal Article
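The DogFACS-based pipeline described above ends in a positive/negative decision tree classifier over detected DogFACS variables. A minimal sketch of that second stage follows, assuming scikit-learn; the variable list and the random toy data are hypothetical placeholders, not the study's annotations.

```python
# Sketch of the second stage of the DogFACS-based pipeline: a decision-tree
# classifier over binary DogFACS variable activations (variable names and the
# toy data are hypothetical placeholders, not the study's annotations).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

DOGFACS_VARS = ["AU101", "AU145", "EAD102", "EAD104", "AD19", "AD126"]

# rows = video clips, columns = whether each DogFACS variable was detected
X = np.random.randint(0, 2, size=(200, len(DOGFACS_VARS)))
y = np.random.randint(0, 2, size=200)  # 0 = frustration, 1 = positive anticipation

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
# the fitted tree is itself the explanation: a readable rule set over variables
print(export_text(clf, feature_names=DOGFACS_VARS))
```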
Computational investigation of the social function of domestic cat facial signals
by Martvel, George; Scott, Lauren; Florkiewicz, Brittany
in 631/601/18; 639/705/1046; Animal Communication
2024
There is growing interest in the facial signals of domestic cats. Domestication may have shifted feline social dynamics towards a greater emphasis on facial signals that promote affiliative bonding. Most studies have focused on cat facial signals during human interactions or in response to pain. Research on intraspecific facial communication in cats has predominantly examined non-affiliative social interactions. A recent study by Scott and Florkiewicz [1] demonstrated significant differences between cats' facial signals during affiliative and non-affiliative intraspecific interactions. This follow-up study applies computational approaches to make two main contributions. First, we develop a machine learning classifier for affiliative/non-affiliative interactions based on manual CatFACS codings and automatically detected facial landmarks, reaching accuracies above 77% with CatFACS codings and 68% with landmarks by integrating a temporal dimension. Secondly, we introduce novel measures of rapid facial mimicry based on CatFACS coding. Our analysis suggests that domestic cats exhibit more rapid facial mimicry in affiliative contexts than in non-affiliative ones, which is consistent with the proposed function of mimicry. Moreover, we found that ear movements (such as EAD103 and EAD104) are highly prone to rapid facial mimicry. Our research introduces new possibilities for analyzing cat facial signals and exploring shared moods with innovative AI-based approaches.
Journal Article
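The rapid facial mimicry measures mentioned above can be thought of as counting how often one cat reproduces its partner's action unit shortly after its onset. The sketch below illustrates one such measure under assumed conventions (a one-second latency window and a simple event format), which are not taken from the paper.

```python
# Sketch of a rapid-facial-mimicry measure over CatFACS codings: an onset of the
# same action unit in the partner within a short latency window counts as mimicry.
# The 1-second window and the event format are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AUEvent:
    cat: str      # which cat produced the action unit
    au: str       # CatFACS code, e.g. "EAD103"
    onset: float  # seconds from start of the interaction

def rapid_mimicry_rate(events: list[AUEvent], window: float = 1.0) -> float:
    """Fraction of action-unit onsets matched by the other cat producing
    the same AU within `window` seconds."""
    events = sorted(events, key=lambda e: e.onset)
    triggers, mimicked = 0, 0
    for i, e in enumerate(events):
        triggers += 1
        for f in events[i + 1:]:
            if f.onset - e.onset > window:
                break
            if f.cat != e.cat and f.au == e.au:
                mimicked += 1
                break
    return mimicked / triggers if triggers else 0.0

demo = [AUEvent("A", "EAD103", 2.0), AUEvent("B", "EAD103", 2.6),
        AUEvent("A", "AU145", 5.0), AUEvent("B", "EAD104", 5.4)]
print(rapid_mimicry_rate(demo))  # 0.25 in this toy example
```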
Exploring the dog–human relationship by combining fMRI, eye-tracking and behavioural measures
2020
Behavioural studies revealed that the dog–human relationship resembles the human mother–child bond, but the underlying mechanisms remain unclear. Here, we report the results of a multi-method approach combining fMRI (N = 17), eye-tracking (N = 15), and behavioural preference tests (N = 24) to explore the engagement of an attachment-like system in dogs seeing human faces. We presented morph videos of the caregiver, a familiar person, and a stranger showing either happy or angry facial expressions. Regardless of emotion, viewing the caregiver activated brain regions associated with emotion and attachment processing in humans. In contrast, the stranger elicited activation mainly in brain regions related to visual and motor processing, whereas the familiar person elicited relatively weak activations overall. While the majority of happy stimuli led to increased activation of the caudate nucleus associated with reward processing, angry stimuli led to activations in limbic regions. Both the eye-tracking and preference test data supported the superior role of the caregiver's face and were in line with the findings from the fMRI experiment. While preliminary, these findings indicate that cutting across different levels, from brain to behaviour, can provide novel and converging insights into the engagement of the putative attachment system when dogs interact with humans.
Journal Article
Environmental enrichments and data-driven welfare indicators for sheltered dogs using telemetric physiological measures and signal processing
by Travain, Tiziano; Valsecchi, Paola; Natoli, Eugenia
in 631/601/1737; 631/601/18; 639/705/1046
2024
Shelters are stressful environments for domestic dogs and are known to negatively impact their welfare. The introduction of outside stimuli for dogs in this environment can improve their welfare and living conditions. However, our current understanding of the influence of different stimuli on shelter dogs' welfare is limited, and the data are still insufficient to draw conclusions. In this study, we collected 28 days (four weeks) of telemetry data from eight male dogs housed long-term in an Italian shelter. During this period, three types of enrichment were introduced into the dogs' pens for one week each: entertaining objects, intraspecific social enrichment (the presence of female conspecifics), and interspecific social enrichment (the presence of a human). To quantify their impact, we introduce novel metrics as indicators of sheltered dogs' welfare based on telemetry data: the variation of heart rate, muscle activity, and body temperature from an average baseline day; quality of sleep; and the regularity (cyclicity) of the aforementioned parameters over the day-night cycle. Using these metrics, we show that while all three stimuli improve the dogs' welfare to a statistically significant degree, the variance between individual dogs is large. Moreover, our findings indicate that the presence of female conspecifics is the best of the three stimuli, improving both the quality of sleep and the parameters' cyclicity. Our results are consistent with previous research findings while providing novel data-driven welfare indicators that promote objectivity. Thus, this research provides useful guidelines for managing shelters and improving dogs' welfare.
Journal Article
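Two of the indicators described above, deviation from an average baseline day and day-night cyclicity, can be illustrated on a regularly sampled telemetry signal. The sketch below assumes a one-sample-per-minute heart-rate series and uses a 24-hour-lag autocorrelation as the cyclicity measure; both choices are illustrative, not the study's definitions.

```python
# Sketch of two data-driven welfare indicators computed from a regular
# heart-rate time series: deviation from an average baseline day and
# 24-hour cyclicity via autocorrelation (sampling rate is an assumption).
import numpy as np

SAMPLES_PER_DAY = 24 * 60  # assume one sample per minute

def baseline_deviation(week: np.ndarray, baseline_day: np.ndarray) -> float:
    """Mean absolute deviation of each day from the baseline day profile."""
    days = week.reshape(-1, SAMPLES_PER_DAY)
    return float(np.mean(np.abs(days - baseline_day)))

def daily_cyclicity(series: np.ndarray) -> float:
    """Autocorrelation of the signal at a 24-hour lag (1 = perfectly cyclic)."""
    lag = SAMPLES_PER_DAY
    x, y = series[:-lag], series[lag:]
    x, y = x - x.mean(), y - y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# toy example: a noisy sinusoidal heart-rate signal with a day-night rhythm
t = np.arange(7 * SAMPLES_PER_DAY)
hr = 70 + 10 * np.sin(2 * np.pi * t / SAMPLES_PER_DAY) + np.random.randn(t.size)
baseline = hr[:SAMPLES_PER_DAY]
print(baseline_deviation(hr, baseline), daily_cyclicity(hr))
```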
Automated recognition of emotional states of horses from facial expressions
by Rettig, Tidhar; Distelfeld, Tomer; Riccie-Bonot, Claire
in Accuracy; Affect (Psychology); Affective computing
2024
Animal affective computing is an emerging field which has so far mainly focused on pain, while other emotional states remain uncharted territory, especially in horses. This study is the first to develop AI models that automatically recognize horse emotional states from facial expressions using data collected in a controlled experiment. We explore two types of pipelines: a deep learning pipeline which takes video footage as input, and a machine learning pipeline which takes EquiFACS annotations as input. The former outperforms the latter, with 76% accuracy in separating four emotional states: baseline, positive anticipation, disappointment and frustration. Anticipation and frustration were difficult to separate, with only 61% accuracy.
Journal Article
AI-based prediction and detection of early-onset of digital dermatitis in dairy cows using infrared thermography
2024
Digital dermatitis (DD) is a common foot disease that can cause lameness, decreased milk production and reduced fertility in cows. The prediction and early detection of DD can positively impact animal welfare and the profitability of the dairy industry. This study applies deep learning-based computer vision techniques for early-onset detection and prediction of DD using infrared thermography (IRT) data. We investigated the role of various inputs for these tasks, including thermal images of cow feet, statistical color features extracted from IRT images, and manually registered temperature values. Our models achieved above 81% accuracy for DD detection on 'day 0' (first appearance of clinical signs), and above 70% accuracy for predicting DD two days prior to the first appearance of clinical signs. Moreover, the current findings indicate that the use of IRT images in conjunction with AI-based predictors shows real potential for developing real-time automated tools for monitoring DD in dairy cows.
Journal Article
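The "statistical color features" input mentioned above can be approximated by simple per-channel statistics pooled over an infrared image, yielding a compact feature vector for a classical predictor. The sketch below assumes RGB-rendered IRT images and an arbitrary choice of statistics; it is an illustration, not the paper's feature set.

```python
# Sketch of statistical color features pooled from an infrared image of a cow's
# foot (the choice of statistics is an illustrative assumption).
import numpy as np
from PIL import Image

def irt_color_features(path: str) -> np.ndarray:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    feats = []
    for c in range(3):  # per-channel mean, std and hot-spot percentiles
        ch = img[..., c]
        feats += [ch.mean(), ch.std(), np.percentile(ch, 90), np.percentile(ch, 99)]
    return np.array(feats)

# features = irt_color_features("cow_foot_irt.png")  # shape (12,)
```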
Explainable automated pain recognition in cats
2023
Manual tools for pain assessment from facial expressions have been suggested and validated for several animal species. However, facial expression analysis performed by humans is prone to subjectivity and bias, and in many cases also requires special expertise and training. This has led to an increasing body of work on automated pain recognition, which has been addressed for several species, including cats. Even for experts, cats are a notoriously challenging species for pain assessment. A previous study compared two approaches to automated ‘pain’/‘no pain’ classification from cat facial images: a deep learning approach, and an approach based on manually annotated geometric landmarks, reaching comparable accuracy results. However, that study used a very homogeneous dataset of cats, so further research is required to study the generalizability of pain recognition to more realistic settings. This study addresses the question of whether AI models can classify ‘pain’/‘no pain’ in cats in a more realistic (multi-breed, multi-sex) setting using a more heterogeneous and thus potentially ‘noisy’ dataset of 84 client-owned cats. Cats were a convenience sample presented to the Department of Small Animal Medicine and Surgery of the University of Veterinary Medicine Hannover and included individuals of different breeds, ages, and sexes, and with varying medical conditions/medical histories. Cats were scored by veterinary experts using the Glasgow composite measure pain scale in combination with the well-documented and comprehensive clinical history of those patients; the scoring was then used for training AI models using two different approaches. We show that in this context the landmark-based approach performs better, reaching accuracy above 77% in pain detection as opposed to only above 65% reached by the deep learning approach. Furthermore, we investigated the explainability of such machine recognition in terms of identifying facial features that are important for the machine, revealing that the region of nose and mouth seems more important for machine pain classification, while the region of ears is less important, with these findings being consistent across the models and techniques studied here.
Journal Article
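The explainability analysis described above asks which facial regions matter most to the landmark-based classifier. One common way to probe this is permutation importance over groups of landmark-derived features, sketched below with scikit-learn; the region groupings, model choice, and random placeholder data are assumptions, not the study's method.

```python
# Sketch of region-level explainability via permutation importance over groups
# of landmark-derived features (groupings and data are hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

REGIONS = {"ears": range(0, 10), "eyes": range(10, 20),
           "nose_mouth": range(20, 30), "whiskers": range(30, 40)}

X = np.random.rand(300, 40)            # placeholder landmark-derived features
y = np.random.randint(0, 2, 300)       # placeholder pain / no-pain labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)

for region, cols in REGIONS.items():   # sum feature importances per facial region
    print(region, imp.importances_mean[list(cols)].sum())
```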