Catalogue Search | MBRL
Explore the vast range of titles available.
13,784 result(s) for "631/477"
The potential of generative AI for personalized persuasion at scale
2024
Matching the language or content of a message to the psychological profile of its recipient (known as “personalized persuasion”) is widely considered to be one of the most effective messaging strategies. We demonstrate that the rapid advances in large language models (LLMs), like ChatGPT, could accelerate this influence by making personalized persuasion scalable. Across four studies (consisting of seven sub-studies; total N = 1788), we show that personalized messages crafted by ChatGPT exhibit significantly more influence than non-personalized messages. This was true across different domains of persuasion (e.g., marketing of consumer products, political appeals for climate action), psychological profiles (e.g., personality traits, political ideology, moral foundations), and when only providing the LLM with a single, short prompt naming or describing the targeted psychological dimension. Thus, our findings are among the first to demonstrate the potential for LLMs to automate, and thereby scale, the use of personalized persuasion in ways that enhance its effectiveness and efficiency. We discuss the implications for researchers, practitioners, and the general public.
Journal Article
ChatGPT in education: global reactions to AI innovations
2023
The release and rapid diffusion of ChatGPT have caught the attention of educators worldwide. Some educators are enthusiastic about its potential to support learning. Others are concerned about how it might circumvent learning opportunities or contribute to misinformation. To better understand reactions to ChatGPT in the context of education, we analyzed Twitter data (16,830,997 tweets from 5,541,457 users). Based on topic modeling and sentiment analysis, we provide an overview of global perceptions of and reactions to ChatGPT regarding education. ChatGPT triggered a massive response on Twitter, with education being the most tweeted content topic. Topics ranged from specific (e.g., cheating) to broad (e.g., opportunities) and were discussed with mixed sentiment. We found that authority decisions may influence public opinion. We also note that the average reaction on Twitter (e.g., using ChatGPT to cheat in exams) differs from the discussions in which education and teaching–learning researchers are likely to be most interested (e.g., ChatGPT as an intelligent learning partner). This study provides insights into people's reactions when groundbreaking new technology is released, and implications for scientific and policy communication in rapidly changing circumstances.
Journal Article
Misinformation of COVID-19 vaccines and vaccine hesitancy
2022
The current study examined various types of misinformation related to the COVID-19 vaccines and their relationships to vaccine hesitancy and refusal. Study 1 asked a sample of full-time working professionals in the US (n = 505) about possible misinformation they were exposed to related to the COVID-19 vaccines. Study 2 utilized an online survey to examine U.S. college students’ (n = 441) knowledge about COVID-19 vaccines and its associations with vaccine hesitancy and behavioral intention to get a COVID-19 vaccine. Analysis of open-ended responses in Study 1 revealed that 57.6% reported being exposed to conspiratorial misinformation, such as claims that COVID-19 vaccines are harmful and dangerous. The results of a structural equation modeling analysis for Study 2 supported our hypotheses predicting a negative association between knowledge level and vaccine hesitancy and between vaccine hesitancy and behavioral intention. Vaccine hesitancy mediated the relationship between vaccine knowledge and behavioral intention. Findings across these studies suggest that exposure to misinformation, and believing it to be true, could increase vaccine hesitancy and reduce behavioral intention to get vaccinated.
Journal Article
Bias against AI art can enhance perceptions of human creativity
by Horton Jr, C. Blaine; Iyengar, Sheena S.; White, Michael W.
in 631/477; 631/477/2811; Creativity
2023
The contemporary art world is conservatively estimated to be a $65 billion USD market that employs millions of human artists, sellers, and collectors globally. Recent attention paid to AI-made art in prestigious galleries, museums, and popular media has provoked debate around how these statistics will change. Unanswered questions fuel growing anxieties. Are AI-made and human-made art evaluated in the same ways? How will growing exposure to AI-made art impact evaluations of human creativity? Our research uses a psychological lens to explore these questions in the realm of visual art. We find that people devalue art labeled as AI-made across a variety of dimensions, even when they report it is indistinguishable from human-made art, and even when they believe it was produced collaboratively with a human. We also find that comparing images labeled as human-made to images labeled as AI-made increases perceptions of human creativity, an effect that can be leveraged to increase the value of human effort. Results are robust across six experiments (N = 2965) using a range of human-made and AI-made stimuli and incorporating representative samples of the US population. Finally, we highlight conditions that strengthen effects as well as dimensions where AI-devaluation effects are more pronounced.
Journal Article
Attitudes towards AI: measurement and associations with personality
by Messingschlager, Tanja; Hutmacher, Fabian; Stein, Jan-Philipp
in 631/477; 631/477/2811; 639/705/258
2024
Artificial intelligence (AI) has become an integral part of many contemporary technologies, such as social media platforms, smart devices, and global logistics systems. At the same time, research on the public acceptance of AI shows that many people feel quite apprehensive about the potential of such technologies—an observation that has been connected to both demographic and sociocultural user variables (e.g., age, previous media exposure). Yet, due to divergent and often ad-hoc measurements of AI-related attitudes, the current body of evidence remains inconclusive. Likewise, it is still unclear if attitudes towards AI are also affected by users’ personality traits. In response to these research gaps, we offer a two-fold contribution. First, we present a novel, psychologically informed questionnaire (ATTARI-12) that captures attitudes towards AI as a single construct, independent of specific contexts or applications. Having observed good reliability and validity for our new measure across two studies (N1 = 490; N2 = 150), we examine several personality traits—the Big Five, the Dark Triad, and conspiracy mentality—as potential predictors of AI-related attitudes in a third study (N3 = 298). We find that agreeableness and younger age predict a more positive view towards artificially intelligent technology, whereas the susceptibility to conspiracy beliefs connects to a more negative attitude. Our findings are discussed considering potential limitations and future directions for research and practice.
Journal Article
Virtual communication curbs creative idea generation
2022
COVID-19 accelerated a decade-long shift to remote work by normalizing working from home on a large scale. Indeed, 75% of US employees in a 2021 survey reported a personal preference for working remotely at least one day per week [1], and studies estimate that 20% of US workdays will take place at home after the pandemic ends [2]. Here we examine how this shift away from in-person interaction affects innovation, which relies on collaborative idea generation as the foundation of commercial and scientific progress [3]. In a laboratory study and a field experiment across five countries (in Europe, the Middle East and South Asia), we show that videoconferencing inhibits the production of creative ideas. By contrast, when it comes to selecting which idea to pursue, we find no evidence that videoconferencing groups are less effective (and preliminary evidence that they may be more effective) than in-person groups. Departing from previous theories that focus on how oral and written technologies limit the synchronicity and extent of information exchanged [4–6], we find that our effects are driven by differences in the physical nature of videoconferencing and in-person interactions. Specifically, using eye-gaze and recall measures, as well as latent semantic analysis, we demonstrate that videoconferencing hampers idea generation because it focuses communicators on a screen, which prompts a narrower cognitive focus. Our results suggest that virtual interaction comes with a cognitive cost for creative idea generation.
Videoconferencing inhibits the production of creative ideas, but videoconferencing groups are as effective as (or perhaps even more effective than) in-person groups at deciding which ideas to pursue.
Journal Article
The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks
by Zabelina, Darya L.; Awa, Kim N.; Hubert, Kent F.
in 631/477; 631/477/2811; Artificial Intelligence
2024
The emergence of publicly accessible artificial intelligence (AI) large language models such as ChatGPT has given rise to global conversations on the implications of AI capabilities. Emergent research on AI has challenged the assumption that creative potential is a uniquely human trait; thus, there seems to be a disconnect between human perception and what AI is objectively capable of creating. Here, we aimed to assess the creative potential of humans in comparison to AI. In the present study, human participants (N = 151) and GPT-4 provided responses to the Alternative Uses Task, the Consequences Task, and the Divergent Associations Task. We found that AI was robustly more creative on each divergent thinking measure than its human counterparts. Specifically, when controlling for fluency of responses, AI was more original and elaborate. The present findings suggest that the current state of AI language models demonstrates higher creative potential than human respondents.
Journal Article
The effect of social media on well-being differs from adolescent to adolescent
by van Driel, Irene I.; Valkenburg, Patti M.; Keijsers, Loes
in 631/477; 631/477/2811; Adolescent
2020
The question whether social media use benefits or undermines adolescents’ well-being is an important societal concern. Previous empirical studies have mostly established across-the-board effects among (sub)populations of adolescents. As a result, it is still an open question whether the effects are unique for each individual adolescent. We sampled adolescents’ experiences six times per day for one week to quantify differences in their susceptibility to the effects of social media on their momentary affective well-being. Rigorous analyses of 2,155 real-time assessments showed that the association between social media use and affective well-being differs strongly across adolescents: While 44% did not feel better or worse after passive social media use, 46% felt better, and 10% felt worse. Our results imply that person-specific effects can no longer be ignored in research, as well as in prevention and intervention programs.
Journal Article
The psychological and political correlates of conspiracy theory beliefs
2022
Understanding the individual-level characteristics associated with conspiracy theory beliefs is vital to addressing and combatting those beliefs. While researchers have identified numerous psychological and political characteristics associated with conspiracy theory beliefs, the generalizability of those findings is uncertain because they are typically drawn from studies of only a few conspiracy theories. Here, we employ a national survey of 2021 U.S. adults that asks about 15 psychological and political characteristics as well as beliefs in 39 different conspiracy theories. Across 585 relationships examined within both bivariate (correlations) and multivariate (regression) frameworks, we find that psychological traits (e.g., dark triad) and non-partisan/ideological political worldviews (e.g., populism, support for violence) are most strongly related to individual conspiracy theory beliefs, regardless of the belief under consideration, while other previously identified correlates (e.g., partisanship, ideological extremity) are inconsistently related. We also find that the correlates of specific conspiracy theory beliefs mirror those of conspiracy thinking (the predisposition), indicating that this predisposition operates like an ‘average’ of individual conspiracy theory beliefs. Overall, our findings detail the psychological and political traits of the individuals most drawn to conspiracy theories and have important implications for scholars and practitioners seeking to prevent or reduce the impact of conspiracy theories.
Journal Article
The CASA theory no longer applies to desktop computers
2023
The Computers Are Social Actors (CASA) theory is the most important theoretical contribution that has shaped the field of human–computer interaction. The theory states that humans interact with computers as if they were human, and it is the cornerstone on which all social human–machine communication (e.g., chatbots, robots, virtual agents) is designed. However, the theory itself dates back to the early 1990s, and, since then, technology and its place in society have evolved and changed drastically. Here we show, via a direct replication of the original study, that participants no longer interact with desktop computers as if they are human. This suggests that the CASA theory may only hold for emergent technology, an important consideration when designing and researching human–computer interaction.
Journal Article