Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
95 results for "Scheutz, Matthias"
Assistive Robots for the Social Management of Health: A Framework for Robot Design and Human–Robot Interaction Research
2021
There is a close connection between health and the quality of one’s social life. Strong social bonds are essential for health and wellbeing, but often health conditions can detrimentally affect a person’s ability to interact with others. This can become a vicious cycle resulting in further decline in health. For this reason, the social management of health is an important aspect of healthcare. We propose that socially assistive robots (SARs) could help people with health conditions maintain positive social lives by supporting them in social interactions. This paper makes three contributions, as detailed below. We develop a framework of social mediation functions that robots could perform, motivated by the special social needs that people with health conditions have. In this framework we identify five types of functions that SARs could perform: (a) changing how the person is perceived, (b) enhancing the social behavior of the person, (c) modifying the social behavior of others, (d) providing structure for interactions, and (e) changing how the person feels. We thematically organize and review the existing literature on robots supporting human–human interactions, in both clinical and non-clinical settings, and explain how the findings and design ideas from these studies can be applied to the functions identified in the framework. Finally, we point out and discuss challenges in designing SARs for supporting social interactions, and highlight opportunities for future robot design and HRI research on the mediator role of robots.
Journal Article
A Touching Connection: How Observing Robotic Touch Can Affect Human Trust in a Robot
2021
As robots begin to occupy our social spaces, touch will increasingly become part of human–robot interactions. This paper examines the impact of observing a robot touch a human on trust in that robot. In three online studies, observers watched short videos of human–robot interactions and provided a series of judgments about the robot, which either did or did not touch the human on the shoulder. Trust was measured using a recently introduced multi-dimensional instrument, which assesses people’s trust in a robot as being capable, reliable, sincere, and/or ethical. The first study showed that observed robot touch increased overall trust in the robot, especially for the sincere and ethical trust aspects, and led people to perceive the robot as more comforting, but also more inappropriate. A second study replicated the general pattern, even with a handshake preceding the touch; but in the context of the handshake the touch was seen as more inappropriate. A third study examined the joint impact of a handshake, touch, and information about the robot’s designed function. In the context of such information, observed touch was seen as even more inappropriate, which in turn decreased trust.
Journal Article
Cognitive cascades: How to model (and potentially counter) the spread of fake news
by Rabb, Nicholas; de Ruiter, Jan P.; Cowen, Lenore
in Behavior; Biology and Life Sciences; Cascades
2022
Understanding the spread of false or dangerous beliefs—often called misinformation or disinformation—through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists and modeled the spread of false beliefs as analogous to how a disease spreads through a social network. However, absent from those disease-inspired models is an internal model of an individual’s set of current beliefs, even though cognitive science has increasingly documented how the interaction between mental models and incoming messages is crucially important for their adoption or rejection. Some computational social science modelers analyze agent-based models in which individuals do have simulated cognition, but these often lack the strengths of network science, namely empirically driven network structures. We introduce a cognitive cascade model that combines a network-science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, yielding a public opinion diffusion (POD) model that adds media institutions as agents which begin opinion cascades. We show that the model, even with a very simplistic belief function capturing cognitive effects cited in disinformation studies (dissonance and exposure), adds expressive power over existing cascade models. We analyze the cognitive cascade model with our simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics lend insights into how to address the spread of problematic beliefs. The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how best to counter, the spread of disinformation and “alternative facts.”
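The core idea of a cognitive cascade — a message spreads through a network only through agents whose internal beliefs are close enough to accept it — can be illustrated with a minimal sketch. This is not the authors' implementation: the integer belief scale, the distance threshold `gamma`, and the rule that adopters jump fully to the message value are all simplifying assumptions standing in for the paper's belief-update function.

```python
from collections import deque

def adopt(belief, message, gamma=1):
    # Crude dissonance/exposure rule: accept only messages
    # within distance gamma of the current belief.
    return abs(belief - message) <= gamma

def cognitive_cascade(adj, beliefs, seeds, message, gamma=1):
    """Spread `message` from `seeds` through network `adj`
    (node -> neighbor list). Agents that adopt move their belief
    to the message value and re-share; others block the cascade."""
    queue = deque(seeds)
    visited = set(seeds)
    while queue:
        u = queue.popleft()
        if adopt(beliefs[u], message, gamma):
            beliefs[u] = message
            for v in adj[u]:
                if v not in visited:
                    visited.add(v)
                    queue.append(v)
    return beliefs

# A ring of six agents with beliefs 0..5; an institution seeds
# message "1" at node 0. Entrenched agents halt the cascade.
ring = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
final = cognitive_cascade(ring, {i: i for i in range(6)}, seeds=[0], message=1)
print(final)  # → {0: 1, 1: 1, 2: 1, 3: 3, 4: 4, 5: 5}
```

In the run above the message converts nodes 0–2 but stops at node 3, whose belief is too dissonant to adopt — the kind of population-level blocking dynamic the model studies.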
Journal Article
Investigating Methods for Cognitive Workload Estimation for Assistive Robots
2022
Robots interacting with humans in assistive contexts have to be sensitive to human cognitive states so that they can provide help when it is needed and not overburden the human when the human is busy. Yet it is currently unclear which sensing modality might allow robots to derive the best evidence of human workload. In this work, we analyzed and modeled data from a multi-modal simulated driving study specifically designed to evaluate different levels of cognitive workload induced by various secondary tasks, such as dialogue interactions and braking events, in addition to the primary driving task. Specifically, we performed statistical analyses of various physiological signals, including eye gaze, electroencephalography, and arterial blood pressure from healthy volunteers, and utilized several machine learning methodologies, including k-nearest neighbor, naive Bayes, random forest, support-vector machines, and neural network-based models, to infer human cognitive workload levels. Our analyses provide evidence for eye gaze being the best physiological indicator of human cognitive workload, even when multiple signals are combined. Specifically, the highest accuracy (in %) of binary workload classification based on eye gaze signals is 80.45 ± 3.15, achieved using support-vector machines, while the highest accuracy combining eye gaze and electroencephalography is only 77.08 ± 3.22, achieved by a neural network-based model. Our findings are important for future efforts at real-time workload estimation in multimodal human-robot interactive systems, given that eye gaze is easy to collect and process and is less susceptible to noise artifacts than other physiological signal modalities.
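The binary workload classification described above can be sketched with one of the methods the paper compares, k-nearest neighbor. This is only an illustrative baseline on synthetic data: the three feature dimensions are hypothetical stand-ins for eye-gaze measures, and the resulting accuracy has nothing to do with the figures reported in the paper.

```python
import random
from collections import Counter

def knn_predict(train, query, k=5):
    """Majority vote among the k nearest training points
    (squared Euclidean distance); `train` holds (features, label) pairs."""
    nearest = sorted(train,
                     key=lambda p: sum((a - b) ** 2
                                       for a, b in zip(p[0], query)))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

random.seed(0)
# Synthetic stand-ins for eye-gaze features (e.g. fixation duration,
# pupil diameter, saccade rate); a real pipeline would extract these
# from eye-tracker recordings. 0 = low workload, 1 = high workload.
low  = [([random.gauss(0, 1) for _ in range(3)], 0) for _ in range(100)]
high = [([random.gauss(2, 1) for _ in range(3)], 1) for _ in range(100)]
data = low + high
random.shuffle(data)
train, held_out = data[:150], data[150:]

correct = sum(knn_predict(train, x) == y for x, y in held_out)
print(f"binary workload accuracy: {correct / len(held_out):.2f}")
```

Swapping in the other classifiers the study evaluates (naive Bayes, random forest, SVM, neural network) against the same held-out split is how one would reproduce the kind of modality comparison the abstract reports.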
Journal Article
How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress
2014
The rise of military drones and other robots deployed in ethically sensitive contexts has fueled interest in developing autonomous agents that behave ethically. The ability of autonomous agents to independently reason about situational ethics will inevitably lead to confrontations between robots and human operators regarding the morality of issued commands. Ideally, a robot would be able to successfully convince a human operator to abandon a potentially unethical course of action. To investigate this issue, we conducted an experiment to measure how successfully a humanoid robot could dissuade a person from performing a task using verbal refusals and affective displays that conveyed distress. The results demonstrate a significant behavioral effect on task completion as well as significant effects on subjective metrics, such as how comfortable subjects felt ordering the robot to complete the task. We discuss the potential relationship between the level of perceived agency of the robot and the sensitivity of subjects to robotic confrontation. Finally, we discuss the possible ethical pitfalls of utilizing robotic displays of affect to shape human behavior.
Journal Article
An Attachment Framework for Human-Robot Interaction
2022
Attachment theory is a research area in psychology that has enjoyed decades of successful study, and has subsequently been explored in realms beyond the original infant-caregiver bond. Attachment is now studied in relation to pets, symbols (such as deities), objects, technologies, and, notably for our purposes, robots. When we discuss attachment in Human-Robot Interaction (HRI), is “attachment” to a robot the same as being attached to a pet? Or does it more closely resemble attachment to a technological device such as a smartphone? By untangling the concept of attachment in HRI, we summarize the breadth of the existing attachment literature in a unified spectrum. We present notions of “weak” attachment and “strong” attachment before setting the two as distinct ends of a spectrum of attachment. We motivate this spectrum by teasing out the underlying theoretical basis for strong attachment and how capabilities of the attachment figure could lead to stronger or weaker attachment. This more nuanced, multi-dimensional representation of attachment allows us to present a clarified categorization of where the various human-robot bonds explored in HRI studies fit on the spectrum, where robots in general could be placed, and how a clearer definition of human-robot attachment can benefit future HRI studies.
Journal Article
Why and How Robots Should Say ‘No’
2022
Language-enabled robots with moral reasoning capabilities will inevitably face situations in which they have to respond to human commands that might violate normative principles and could cause harm to humans. We believe that it is critical for robots to be able to reject such commands. We thus address the two key challenges of “when” and “how” to reject norm-violating directives. First, we present research on engineering language-enabled robots that can engage in rudimentary rejection dialogues, as well as related HRI research into the effectiveness of robot protest. Second, we argue that how rejections are phrased is important and review the factors that should guide natural language formulations of command rejections. Finally, we conclude by identifying relevant open questions that will further inform the design of future language-capable and morally competent robots.
Journal Article
Looking the Part: Social Status Cues Shape Race Perception
by Freeman, Jonathan B.; Penner, Andrew M.; Ambady, Nalini
in Ambiguity; Attraction; Classification
2011
It is commonly believed that race is perceived through another's facial features, such as skin color. In the present research, we demonstrate that cues to social status that often surround a face systematically change the perception of its race. Participants categorized the race of faces that varied along White-Black morph continua and that were presented with high-status or low-status attire. Low-status attire increased the likelihood of categorization as Black, whereas high-status attire increased the likelihood of categorization as White; and this influence grew stronger as race became more ambiguous (Experiment 1). When faces with high-status attire were categorized as Black or faces with low-status attire were categorized as White, participants' hand movements nevertheless revealed a simultaneous attraction to select the other race-category response (stereotypically tied to the status cue) before arriving at a final categorization. Further, this attraction effect grew as race became more ambiguous (Experiment 2). Computational simulations then demonstrated that these effects may be accounted for by a neurally plausible person categorization system, in which contextual cues come to trigger stereotypes that in turn influence race perception. Together, the findings show how stereotypes interact with physical cues to shape person categorization, and suggest that social and contextual factors guide the perception of race.
Journal Article
HRI ethics and type-token ambiguity: what kind of robotic identity is most responsible?
2020
This paper addresses ethical challenges posed by a robot acting as both a general type of system and a discrete, particular machine. Using the philosophical distinction between “type” and “token,” we locate type-token ambiguity within a larger field of indefinite robotic identity, which can include networked systems or multiple bodies under a single control system. The paper explores three specific areas where the type-token tension might affect human–robot interaction: how a robot demonstrates highly personalized recounting of information, how a robot makes moral appeals and justifies its decisions, and how the possible need to replace a particular robot shapes its ongoing role (including how its programming could transfer to a new body platform). We also consider how a robot might regard itself as a replaceable token of a general robotic type and take extraordinary actions on that basis. For human–robot interaction, robotic type-token identity is not an ontological problem with a single solution, but a range of possible interactions that responsible design must take into account, given how people stand to gain and lose from the shifting identities social robots will present.
Journal Article
Robots in healthcare as envisioned by care professionals
by Chita-Tegmark, Meia; Law, Theresa; Scheutz, Matthias
in Artificial Intelligence; Caregivers; Control
2024
As AI-enabled robots enter the realm of healthcare and caregiving, it is important to consider how they will address the dimensions of care and how they will interact not just with the direct receivers of assistance but also with those who provide it (e.g., caregivers, healthcare providers, etc.). Caregiving in its best form addresses challenges in a multitude of dimensions of a person’s life: from physical to social-emotional and sometimes even existential dimensions (such as issues surrounding life and death). In this study, we use semi-structured qualitative interviews administered to healthcare professionals with multidisciplinary backgrounds (physicians, public health professionals, social workers, and chaplains) to understand their expectations regarding the possible roles robots may play in the healthcare ecosystem in the future. We found that participants drew inspiration for their mental models of robots both from works of science fiction and from existing commercial robots. Participants envisioned roles for robots in the full spectrum of care, from physical to social-emotional and even existential-spiritual dimensions, but also pointed out numerous limitations that robots have in providing comprehensive humanistic care. While no dimension of care was deemed exclusively the realm of humans, participants stressed the importance of human caregivers as the primary providers of comprehensive care, with robots assisting with more narrowly focused tasks. Throughout the paper, we point out the encouraging confluence of ideas between the expectations of healthcare providers and research trends in the human–robot interaction (HRI) literature.
Journal Article