Catalogue Search | MBRL
11 result(s) for "Perugia, Giulia"
Robot’s Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and Its Effects on HRI
2023
The discussion around gendering humanoid robots has gained more traction in the last few years. To lay the basis for a full comprehension of how robots’ “gender” has been understood within the Human–Robot Interaction (HRI) community—i.e., how it has been manipulated, in which contexts, and which effects it has yielded on people’s perceptions and interactions with robots—we performed a scoping review of the literature. We identified 553 papers relevant to our review, retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robots’ “gender” (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robots’ “gender” does not affect crucial constructs for HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI.
Journal Article
Context-Enhanced Human-Robot Interaction: Exploring the Role of System Interactivity and Multimodal Stimuli on the Engagement of People with Dementia
by Hu, Jun; Perugia, Giulia; Rauterberg, G. W. Matthias
in Augmented reality; Context; Context-enhanced human-robot interaction
2022
Engaging people with dementia (PWD) in meaningful activities is key to promoting their quality of life. Design for higher levels of user engagement has been extensively studied within the human-computer interaction community; however, few studies extend to PWD. It is generally considered that richer experiences can lead to enhanced engagement. This paper therefore explores the effects of rich interaction, in terms of system interactivity and multimodal stimuli, by engaging participants in context-enhanced human-robot interaction activities. The interaction with a social robot was considered context-enhanced because of the additional responsive sensory feedback from an augmented reality display. A field study was conducted in a Dutch nursing home with 16 residents. The study followed a two-by-two mixed factorial design with one within-subject variable (multimodal stimuli) and one between-subject variable (system interactivity). A mixed method of video coding analysis and observational rating scales was adopted to assess user engagement comprehensively. Results disclose that when an additional auditory modality was included alongside the visual-tactile stimuli, participants scored significantly higher on attitude, showed more positive behavioral engagement during the activity, and displayed a higher percentage of communications. The multimodal stimuli also promoted social interaction between participants and the facilitator. The findings provide substantial evidence for the significant role of multimodal stimuli in promoting PWD’s engagement, which could be used as a motivation strategy in future research to improve emotional aspects of activity-related engagement and social interaction with the human partner.
Journal Article
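The two-by-two mixed factorial design described in the abstract above can be sketched as cell means over one between-subject factor (system interactivity) and one within-subject factor (multimodal stimuli). This is a minimal illustration only; the group labels, condition names, and engagement scores below are invented, not taken from the study.

```python
# Toy sketch of organizing scores from a 2x2 mixed factorial design:
# between-subject factor = interactivity group, within-subject factor =
# stimuli condition. All participant IDs and values are hypothetical.
from statistics import mean

# participant -> (interactivity group, {stimuli condition: engagement score})
scores = {
    "p01": ("high", {"visual-tactile": 3.1, "plus-auditory": 4.0}),
    "p02": ("high", {"visual-tactile": 2.8, "plus-auditory": 3.6}),
    "p03": ("low",  {"visual-tactile": 2.5, "plus-auditory": 3.2}),
    "p04": ("low",  {"visual-tactile": 2.9, "plus-auditory": 3.4}),
}

def cell_mean(group: str, condition: str) -> float:
    """Mean engagement for one (between-subject, within-subject) cell."""
    return mean(cond[condition] for g, cond in scores.values() if g == group)

for group in ("high", "low"):
    for condition in ("visual-tactile", "plus-auditory"):
        print(f"{group:4s} / {condition:13s}: {cell_mean(group, condition):.2f}")
```

In a real analysis these cell means would feed a mixed ANOVA; here they only show how the within-subject conditions nest inside each between-subject group.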
Robot's Gendering Trouble: A Scoping Review of Gendering Humanoid Robots and its Effects on HRI
2023
The discussion around the problematic practice of gendering humanoid robots has risen to the foreground in the last few years. To lay the basis for a thorough understanding of how robots’ “gender” has been understood within the Human-Robot Interaction (HRI) community - i.e., how it has been manipulated, in which contexts, and which effects it has yielded on people’s perceptions and interactions with robots - we performed a scoping review of the literature. We identified 553 papers relevant to our review, retrieved from 5 different databases. The final sample of reviewed papers included 35 papers written between 2005 and 2021, which involved a total of 3902 participants. In this article, we thoroughly summarize these papers by reporting information about their objectives and assumptions on gender (i.e., definitions and reasons to manipulate gender), their manipulation of robots’ “gender” (i.e., gender cues and manipulation checks), their experimental designs (e.g., demographics of participants, employed robots), and their results (i.e., main and interaction effects). The review reveals that robots’ “gender” does not affect crucial constructs for HRI, such as likability and acceptance, but rather bears its strongest effect on stereotyping. We leverage our different epistemological backgrounds in Social Robotics and Gender Studies to provide a comprehensive interdisciplinary perspective on the results of the review and suggest ways to move forward in the field of HRI.
Social Robots for People Living with Dementia: A Scoping Review on Deception from Design to Perception
2026
As social robots are increasingly introduced into dementia care, their embodied and interactive design may blur the boundary between artificial and lifelike entities, raising ethical concerns about robotic deception. However, it remains unclear which specific design cues of social robots might lead to social robotic deception (SRD) in people living with dementia (PLwD), and which perceptions and responses of PLwD might indicate that SRD is taking place. To address these questions, we conducted a scoping review of 26 empirical studies reporting PLwD interacting with social robots. We identified three key design cue categories that might contribute to SRD and one that might break the illusion. However, the available literature does not provide sufficient evidence to determine which specific design cues lead to SRD. Thematic analysis of user responses reveals six recurring patterns in how PLwD perceive and respond to social robots. However, conceptual limitations in existing definitions of robotic deception make it difficult to identify when and to what extent deception actually occurs. Building on the results, we propose a dual-process interpretation that clarifies the cognitive basis of false beliefs in human-robot interaction and distinguishes SRD from anthropomorphism or emotional engagement.
Social Robots for People with Dementia: A Literature Review on Deception from Design to Perception
2025
As social robots increasingly enter dementia care, concerns about deception, intentional or not, are gaining attention. Yet, how robotic design cues might elicit misleading perceptions in people with dementia, and how these perceptions arise, remains insufficiently understood. In this scoping review, we examined 26 empirical studies on interactions between people with dementia and physical social robots. We identify four key design cue categories that may influence deceptive impressions: cues resembling physiological signs (e.g., simulated breathing), social intentions (e.g., playful movement), familiar beings (e.g., animal-like form and sound), and, to a lesser extent, cues that reveal artificiality. Thematic analysis of user responses reveals that people with dementia often attribute biological, social, and mental capacities to robots, dynamically shifting between awareness and illusion. These findings underscore the fluctuating nature of ontological perception in dementia contexts. Existing definitions of robotic deception often rest on philosophical or behaviorist premises, but rarely engage with the cognitive mechanisms involved. We propose an empirically grounded definition: robotic deception occurs when Type 1 (automatic, heuristic) processing dominates over Type 2 (deliberative, analytic) reasoning, leading to misinterpretation of a robot's artificial nature. This dual-process perspective highlights the ethical complexity of social robots in dementia care and calls for design approaches that are not only engaging, but also epistemically respectful.
I Can See it in Your Eyes: Gaze as an Implicit Cue of Uncanniness and Task Performance in Repeated Interactions
by Perugia, Giulia; Alanenpää, Madelene; Paetzel-Prüsmann, Maike
in Eye movements; Measurement techniques; Perception
2021
Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialog models to support long-term human-robot interactions. However, little is known about how people's perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we explore this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants' gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results disclose that aversion of gaze in a social chat is an indicator of a robot's uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze towards an object of shared attention, rather than gaze towards a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of gaze patterns in repeated interactions disclose that people's mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people's perception of robots in a social chat and of their engagement and task performance in a joint task.
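The abstract above uses the share of gaze directed at the robot versus at an object of shared attention as an implicit measure. A minimal sketch of that kind of metric is to label each eye-tracker sample with an area of interest (AOI) and compute per-AOI proportions; the AOI names and the sample sequence below are invented for illustration, not data from the study.

```python
# Sketch: per-AOI proportions of gaze samples from a wearable eye-tracker,
# one simple way to quantify gaze toward a robot vs. a shared task object.
# AOI labels and the trial data are hypothetical.
from collections import Counter

def gaze_proportions(samples: list[str]) -> dict[str, float]:
    """Fraction of gaze samples falling on each AOI label."""
    counts = Counter(samples)
    total = len(samples)
    return {aoi: n / total for aoi, n in counts.items()}

# One hypothetical joint-task trial: each sample is the AOI the gaze hit.
trial = ["robot"] * 3 + ["task-object"] * 6 + ["elsewhere"] * 1
props = gaze_proportions(trial)
print(props)
```

Proportions like these, aggregated per session, are the sort of quantity that can then be correlated with questionnaire measures of perception and task performance.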
Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry towards Artificial Agents
by Varni, Giovanna; Perugia, Giulia; Paetzel-Prüsmann, Maike
in Agents (artificial intelligence); Design of experiments; Emotion recognition
2021
In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people's spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by a human (control) and three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents' facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants' facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent that was perceived as the least uncanny, and most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry as it transforms a likable artificial agent into a distractor.
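Facial mimicry assessed via an automated Action Unit (AU) intensity detector, as in the abstract above, can be quantified in many ways; one simple sketch is the correlation between the agent's displayed AU intensity over time and the participant's detected AU intensity. The AU-12 (smile) time series below are invented for illustration and are not data from the study.

```python
# Sketch: facial mimicry scored as the Pearson correlation between an
# agent's displayed AU intensity trace and a participant's detected trace.
# Both time series are made up for illustration.
from statistics import mean

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

agent_au12 = [0.0, 0.2, 0.6, 0.9, 0.7, 0.3]   # agent's smile (AU12) intensity
mimic_au12 = [0.1, 0.1, 0.4, 0.8, 0.6, 0.2]   # participant: damped, similar shape
print(f"mimicry score: {pearson(agent_au12, mimic_au12):.2f}")
```

A high correlation here would indicate close temporal mimicry; real analyses would also need to handle the lag between displayed and mimicked expressions, which a plain correlation ignores.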