Catalogue Search | MBRL
Explore the vast range of titles available.
6 results for "Stapels, Julia G."
Let’s not be indifferent about robots: Neutral ratings on bipolar measures mask ambivalence in attitudes towards robots
2021
Ambivalence, the simultaneous experience of positive and negative feelings about one and the same attitude object, has been investigated in psychological attitude research for decades. Ambivalence is interpreted as an attitudinal conflict with distinct affective, behavioral, and cognitive consequences. Social psychological research has shown that ambivalence is sometimes confused with neutrality because of measures that cannot distinguish between the two. Likewise, in social robotics research the attitudes of users are often characterized as neutral. We assume this is because existing research on attitudes towards robots has lacked the means to measure ambivalence. In the current experiment (N = 45), we show that a neutral stimulus and a robot stimulus were evaluated equivalently on a bipolar item, but that evaluations differed greatly in self-reported ambivalence and arousal. This suggests that attitudes towards robots are in fact highly ambivalent, although they may appear neutral depending on the measurement method. To gain valid insights into people’s attitudes towards robots, positive and negative evaluations of robots should be measured separately, giving participants a way to express evaluative conflict instead of administering bipolar items. Acknowledging the role of ambivalence in attitude research on robots has the potential to deepen our understanding of users’ attitudes and their potential evaluative conflicts, and thus to improve predictions of behavior from attitudes towards robots.
Journal Article
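The separate measurement of positive and negative evaluations that this abstract recommends is commonly scored with the similarity–intensity formula from classic ambivalence research (Thompson, Zanna, and Griffin, 1995). The sketch below is a minimal illustration of that idea, not the authors’ analysis code; the function and variable names are illustrative assumptions.

```python
def objective_ambivalence(positive: float, negative: float) -> float:
    """Similarity-intensity index: (P + N) / 2 - |P - N|.

    P and N are separately measured positive and negative evaluations
    (e.g., each on a 1-7 scale). Higher values mean stronger and more
    similar opposing evaluations, i.e., more ambivalence.
    """
    return (positive + negative) / 2 - abs(positive - negative)

# Two states that collapse onto the same neutral midpoint of a bipolar
# item look very different once P and N are measured separately:
print(objective_ambivalence(1, 1))  # 1.0 -> indifference (weak P and N)
print(objective_ambivalence(7, 7))  # 7.0 -> strong evaluative conflict
```

On a bipolar item, both of these respondents could plausibly tick the neutral midpoint, which is exactly the confound the study describes.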
Robocalypse? Yes, Please! The Role of Robot Autonomy in the Development of Ambivalent Attitudes Towards Robots
2022
Attitudes towards robots are not always unequivocally positive or negative: when attitudes encompass both strong positive and strong negative evaluations of an attitude object, people experience an unpleasant state of evaluative conflict called ambivalence. To shed light on ambivalence towards robots, we conducted a mixed-methods experiment with N = 163 German university students that investigated the influence of robot autonomy on robot-related attitudes. With technological progress, robots become increasingly autonomous. We hypothesized that high levels of robot autonomy would increase both positive and negative robot-related evaluations, resulting in greater attitudinal ambivalence. We experimentally manipulated robot autonomy through text vignettes and assessed objective ambivalence (i.e., the amount of reported conflicting thoughts and feelings) and subjective ambivalence (i.e., self-reported experienced conflict) towards the robot ‘VIVA’ using qualitative and quantitative measures. Autonomy did not affect objective ambivalence; however, subjective ambivalence was higher towards the robot high in autonomy than towards the robot low in autonomy. Interestingly, this effect became non-significant when controlling for individual differences in technology commitment. Qualitative responses were categorized by two independent raters into assets (e.g., assistance, companionship) and risks (e.g., privacy/data security, social isolation). Taken together, the present research demonstrates that attitudes towards robots are indeed ambivalent and that this ambivalence might influence behavioral intentions towards robots. Moreover, the findings highlight the important role of technology commitment, and the qualitative results shed light on potential users’ concerns and aspirations. In this way, the data provide useful insights into factors that can facilitate human–robot research.
Journal Article
Torn Between Love and Hate: Mouse Tracking Ambivalent Attitudes Towards Robots
2024
Robots are a source of evaluative conflict and thus elicit ambivalence. Psychological research has shown across domains that people simultaneously report strong positive and strong negative evaluations of one and the same attitude object; this is defined as ambivalence. In the current research, we extended existing ambivalence research by measuring ambivalence towards various robot-related stimuli using explicit (i.e., self-report) and implicit measures. Concretely, we used a mouse-tracking approach to gain insights into the experience and resolution of evaluative conflict elicited by robots. We conducted an extended replication across four experiments with N = 411 overall, featuring a mixed-methods approach and a single-paper meta-analysis. We showed that the amount of reported conflicting thoughts and feelings (i.e., objective ambivalence) and self-reported experienced conflict (i.e., subjective ambivalence) were consistently higher towards robot-related stimuli than towards stimuli evoking univalent responses. Further, implicit measures revealed that response times were longer when evaluating robot-related stimuli than univalent stimuli; however, results concerning behavioral indicators of ambivalence in mouse trajectories were inconsistent, suggesting that such indicators depend on the respective robot-related stimulus. We could not obtain evidence of systematic information processing as a cognitive indicator of ambivalence, but qualitative data suggested that participants might focus on especially strong arguments to compensate for their experienced conflict. Furthermore, interindividual differences did not seem to substantially influence ambivalence towards robots. Taken together, the current work successfully applied the implicit and explicit measurement of ambivalent attitudes to the domain of social robotics, while identifying potential boundaries for its application.
Journal Article
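Mouse-tracking studies of this kind typically quantify evaluative conflict from how far the cursor bends away from the direct path between the start position and the chosen response. The sketch below computes one common trajectory measure, maximum deviation (MD); it is a minimal illustration under that assumption, not the authors’ pipeline (dedicated tools such as the mousetrap R package implement such measures in full).

```python
import numpy as np

def max_deviation(xs, ys):
    """Maximum perpendicular deviation (MD) of a cursor trajectory from
    the straight line between its first and last recorded points.

    Larger MD is commonly interpreted as stronger attraction toward the
    non-chosen response option, i.e., more evaluative conflict.
    """
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    # Perpendicular distance of every sample from the start-end line,
    # via the 2-D cross product, normalized by the line's length.
    dists = (dx * (y - y[0]) - dy * (x - x[0])) / np.hypot(dx, dy)
    return float(np.max(np.abs(dists)))

# A trajectory that bows toward the alternative option before settling:
print(round(max_deviation([0, 0.2, 0.5, 0.8, 1.0],
                          [0, 0.4, 0.6, 0.8, 1.0]), 3))  # 0.141
```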
Never Trust Anything That Can Think for Itself, if You Can’t Control Its Privacy Settings: The Influence of a Robot’s Privacy Settings on Users’ Attitudes and Willingness to Self-disclose
2023
When encountering social robots, potential users often face a dilemma between privacy and utility: high utility often comes at the cost of lenient privacy settings, which allow the robot to store personal data and to connect to the internet permanently, with the associated data-security risks. To date, however, it remains unclear how this dilemma affects attitudes and behavioral intentions towards the respective robot. To shed light on the influence of a social robot’s privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. In Experiment 1, we hypothesized that strict privacy settings, compared to lenient ones, would result in more favorable attitudes and behavioral intentions towards the robot. In Experiment 2, we expected more favorable attitudes and behavioral intentions when participants independently chose the robot’s privacy settings rather than evaluating preset ones. The two manipulations, however, seemed to influence attitudes towards the robot in diverging domains: while strict privacy settings increased trust, decreased subjective ambivalence, and increased the willingness to self-disclose compared to lenient settings, the choice of privacy settings primarily affected robot likeability, contact intentions, and the depth of potential self-disclosure. Strict privacy settings might reduce the risk associated with robot contact and thereby also reduce risk-related attitudes and increase trust-dependent behavioral intentions. If allowed to choose, however, people make the robot ‘their own’ by striking a privacy–utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and thus does not reduce the risks of robot contact as much as strict privacy settings do. Future experiments should replicate these results using real-life human–robot interaction and different scenarios to further investigate the psychological mechanisms causing such divergences.
Journal Article