Catalogue Search | MBRL
54 result(s) for "Hessels, Roy S."
How does gaze to faces support face-to-face interaction? A review and perspective
Gaze—where one looks, how long, and when—plays an essential part in human social behavior. While many aspects of social gaze have been reviewed, there is no comprehensive review or theoretical framework that describes how gaze to faces supports face-to-face interaction. In this review, I address the following questions: (1) When does gaze need to be allocated to a particular region of a face in order to provide the relevant information for successful interaction; (2) How do humans look at other people, and faces in particular, regardless of whether gaze needs to be directed at a particular region to acquire the relevant visual information; (3) How does gaze support the regulation of interaction? The work reviewed spans psychophysical research, observational research, and eye-tracking research in both lab-based and interactive contexts. Based on the literature overview, I sketch a framework for future research based on dynamic systems theory. The framework holds that gaze should be investigated in relation to sub-states of the interaction, encompassing sub-states of the interactors, the content of the interaction as well as the interactive context. The relevant sub-states for understanding gaze in interaction vary over different timescales from microgenesis to ontogenesis and phylogenesis. The framework has important implications for vision science, psychopathology, developmental science, and social robotics.
Journal Article
How robust are wearable eye trackers to slow and fast head and body movements?
by Niehorster, Diederick C., Hooge, Ignace T. C., Benjamins, Jeroen S.
in Behavioral Science and Psychology, Cognitive Psychology, Eye Movement Measurements
2023
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
Journal Article
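The accuracy figures above are angular offsets between the measured gaze direction and the direction from the eye to the fixation target. As a minimal sketch of how such an offset is computed from 3D direction vectors (illustrative only, not the authors' analysis code):

```python
import numpy as np

def angular_error_deg(gaze_dir, target_dir):
    """Angle in degrees between a measured gaze direction and the
    direction from the eye to the fixation target (3D vectors)."""
    g = gaze_dir / np.linalg.norm(gaze_dir)
    t = target_dir / np.linalg.norm(target_dir)
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0))))

# Hypothetical sample: gaze a few degrees off a straight-ahead target.
print(angular_error_deg(np.array([0.05, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~2.9°
```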
Implying social interaction and its influence on gaze behavior to the eyes
2020
Researchers have increasingly focused on how the potential for social interaction modulates basic processes of visual attention and gaze behavior. In this study, we investigated when people experience the potential for social interaction and what factors contribute to this subjective experience. We furthermore investigated whether implying social interaction modulated gaze behavior to people's faces, specifically the eyes. To imply the potential for interaction, participants received one of two instructions: (1) they would be presented with a person via a 'live' video feed, or (2) they would be presented with a pre-recorded video clip of a person. Prior to the presentation, a confederate walked into a separate room to suggest to participants that (s)he was being positioned behind a webcam. In fact, all participants were presented with a pre-recorded clip. During the presentation, we measured participants' gaze behavior with an eye tracker, and after the presentation, participants were asked whether they believed that the confederate was 'live' or not, and why they thought so. Participants varied greatly in their judgements about whether the confederate was 'live' or not. Analyses of gaze behavior revealed that a large subset of participants who received the live instruction gazed less at the eyes of confederates than participants who received the pre-recorded instruction. However, in both instruction groups, another subset of participants gazed predominantly at the eyes. The current findings may contribute to the development of experimental designs aimed at capturing the interactive aspects of social cognition and visual attention.
Journal Article
GlassesValidator: A data quality tool for eye tracking glasses
by Niehorster, Diederick C., Hooge, Ignace T. C., Benjamins, Jeroen S.
in Other Computer and Information Science, Behavioral Science and Psychology, Bioinformatics (Computational Biology)
2024
According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
Journal Article
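Accuracy and precision here are the standard eye-tracking data-quality measures (cf. Holmqvist et al., 2022). A minimal sketch of the two quantities, assuming gaze samples already expressed as angular offsets from a validation target; this is an illustration, not the GlassesValidator implementation:

```python
import numpy as np

def accuracy_deg(offsets_deg):
    """Accuracy: mean angular offset (deg) of gaze samples from the target."""
    return float(np.mean(offsets_deg))

def precision_rms_s2s_deg(azimuth_deg, elevation_deg):
    """Precision: RMS of sample-to-sample angular distances (deg),
    using a small-angle approximation for successive samples."""
    step = np.hypot(np.diff(azimuth_deg), np.diff(elevation_deg))
    return float(np.sqrt(np.mean(step ** 2)))
```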
A field test of computer-vision-based gaze estimation in psychology
by Kemner, Chantal, Valtakari, Niilo V., Nyström, Pär
in Adult, Behavioral Science and Psychology, Calibration
2024
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
Journal Article
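Relative total dwell time, the measure the authors consider usable with OpenFace estimates, can be sketched as each area of interest's share of all gaze samples that land in any AOI. Names and AOI bounds below are hypothetical:

```python
import numpy as np

def relative_dwell(gaze_x_deg, aois):
    """Share of samples per horizontally separated AOI, relative to all
    samples falling inside any AOI. `aois` maps name -> (lo, hi) in degrees."""
    counts = {name: int(np.sum((gaze_x_deg >= lo) & (gaze_x_deg <= hi)))
              for name, (lo, hi) in aois.items()}
    total = sum(counts.values())
    return {name: (c / total if total else 0.0) for name, c in counts.items()}

# Hypothetical left/right stimuli separated by well over 11 degrees.
samples = np.array([-12.0, -11.5, 10.8, 11.2, 0.3])
print(relative_dwell(samples, {"left": (-15, -8), "right": (8, 15)}))
```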
Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions
2024
In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner’s actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person’s gaze and another person’s manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person’s actions. When trying to infer gaze location from one’s own manual actions, gestures, or speech or that of the other person, only one’s own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human–robot interaction.
Journal Article
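A generic way to quantify the couplings described above (a sketch assuming integer-coded, frame-aligned time series; not the authors' model) is lagged agreement between which object the eyes target and which object the hands act on:

```python
import numpy as np

def lagged_coupling(gaze, action, max_lag):
    """Proportion of frames where the gaze target equals the action target
    after shifting gaze by `lag` frames (positive lag: gaze leads action)."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            g, a = gaze[:-lag], action[lag:]
        elif lag < 0:
            g, a = gaze[-lag:], action[:lag]
        else:
            g, a = gaze, action
        out[lag] = float(np.mean(g == a))
    return out
```

A peak at a positive lag would indicate that gaze tends to arrive at an object before the hands act on it.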
Task-related gaze control in human crowd navigation
by van Doorn, Andrea J., Hooge, Ignace T. C., Benjamins, Jeroen S.
in Affordances, Behavior, Behavioral Science and Psychology
2020
Human crowds provide an interesting case for research on the perception of people. In this study, we investigate how visual information is acquired for (1) navigating human crowds and (2) seeking out social affordances in crowds by studying gaze behavior during human crowd navigation under different task instructions. Observers (n = 11) wore head-mounted eye-tracking glasses and walked two rounds through hallways containing walking crowds (n = 38) and static objects. For round one, observers were instructed to avoid collisions. For round two, observers furthermore had to indicate with a button press whether oncoming people made eye contact. Task performance (walking speed, absence of collisions) was similar across rounds. Fixation durations indicated that heads, bodies, objects, and walls held gaze for comparably long; only crowds in the distance held gaze relatively longer. We find no compelling evidence that human bodies and heads hold one's gaze more than objects while navigating crowds. When eye contact was assessed, heads were fixated more often and for a longer total duration, which came at the cost of looking at bodies. We conclude that gaze behavior in crowd navigation is task-dependent, and that not every fixation is strictly necessary for navigating crowds. When explicitly tasked with seeking out potential social affordances, gaze is modulated as a result. We discuss our findings in the light of current theories and models of gaze behavior. Furthermore, we show that in a head-mounted eye-tracking study, a large degree of experimental control can be maintained while many degrees of freedom on the side of the observer remain.
Journal Article
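Category-wise fixation-duration comparisons like the one above reduce to simple aggregation over labelled fixations. A self-contained sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical labelled fixations: (category, duration in ms).
fixations = [("head", 240), ("body", 180), ("object", 210),
             ("head", 300), ("wall", 150), ("crowd-far", 420)]

totals = defaultdict(float)
counts = defaultdict(int)
for category, duration_ms in fixations:
    totals[category] += duration_ms
    counts[category] += 1

for category in totals:
    print(f"{category}: mean fixation duration "
          f"{totals[category] / counts[category]:.0f} ms over {counts[category]} fixation(s)")
```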
Eye contact avoidance in crowds: A large wearable eye-tracking study
by Benjamins, Jeroen S., Valtakari, Niilo V., van Hal, Sebas
in Anxiety Disorders, Autism, Autism Spectrum Disorders
2022
Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behavior. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.
Journal Article
Stable eye versus mouth preference in a live speech-processing task
by Viktorsson, Charlotte, Hooge, Ignace T. C., Falck-Ytter, Terje
in Adult
2023
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
Journal Article
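The eye-mouth-index (EMI) contrasts looking time at the eyes with looking time at the mouth. One common formulation (the paper should be consulted for the exact definition) is the eyes' share of combined eye-and-mouth looking time:

```python
def eye_mouth_index(t_eyes, t_mouth):
    """Eyes' share of combined looking time at eyes and mouth:
    0 = mouth only, 1 = eyes only, 0.5 = equal looking time."""
    total = t_eyes + t_mouth
    return t_eyes / total if total else float("nan")

# Hypothetical looking times (seconds) from one phase of the experiment.
print(eye_mouth_index(t_eyes=12.4, t_mouth=3.1))  # 0.8, an eye preference
```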
Eye contact takes two – autistic and social anxiety traits predict gaze behavior in dyadic interaction
by Hooge, Ignace T. C., Kemner, Chantal, Cornelissen, Tim H. W.
in Anxiety, Anxiety disorders, Autism
2018
Research on social impairments in psychopathology has relied heavily on the face-processing literature. However, although many sub-systems of facial information processing are described, recent evidence suggests that the generalizability of these findings to social settings may be limited. The main argument is that in social interaction, the content of faces is more dynamic and more dependent on the interplay between interaction partners than the content of a non-responsive face (e.g., pictures or videos) as portrayed in a typical experiment. This raises the question of whether gaze atypicalities to non-responsive faces in certain disorders generalize to faces in interaction. In the present study, a dual eye-tracking setup capable of recording gaze with high resolution was used to investigate how gaze behavior in interaction is related to traits of Autism Spectrum Disorder (ASD) and Social Anxiety Disorder (SAD). As clinical ASD and SAD groups have exhibited deficiencies in reciprocal social behavior, traits of these two conditions were assessed in a general population. We report that gaze behavior in interaction of individuals scoring high on ASD and SAD traits corroborates hypotheses posed in typical face-processing research using non-responsive stimuli. Moreover, our findings on the relation between paired gaze states (when and how often pairs look at each other's eyes simultaneously or alternately) and ASD and SAD traits bear resemblance to prevailing models in the ASD literature (the 'gaze aversion' model) and the SAD literature (the 'vigilance-avoidance' model). Pair-based analyses of gaze may reveal behavioral patterns crucial to our understanding of ASD and SAD and, more generally, to our understanding of eye movements as social signals in interaction.
Journal Article
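The paired gaze states analysed above can be illustrated with two frame-aligned boolean series coding whether each partner looks at the other's eyes; the state labels below are illustrative:

```python
import numpy as np

def paired_gaze_states(a_on_b, b_on_a):
    """Proportion of time per paired gaze state, given per-frame booleans
    for whether person A looks at B's eyes and vice versa."""
    return {
        "simultaneous": float(np.mean(a_on_b & b_on_a)),  # both look at once
        "one-way": float(np.mean(a_on_b ^ b_on_a)),       # exactly one looks
        "neither": float(np.mean(~a_on_b & ~b_on_a)),
    }

# Hypothetical five-frame recording of a pair.
a = np.array([True, True, False, False, True])
b = np.array([True, False, False, True, True])
print(paired_gaze_states(a, b))  # {'simultaneous': 0.4, 'one-way': 0.4, 'neither': 0.2}
```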