37 results for "Rathje, Steve"
Out-group animosity drives engagement on social media
There has been growing concern about the role social media plays in political polarization. We investigated whether out-group animosity was particularly successful at generating engagement on two of the largest social media platforms: Facebook and Twitter. Analyzing posts from news media accounts and US congressional members (n = 2,730,215), we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. Language about the out-group was a very strong predictor of “angry” reactions (the most popular reactions across all datasets), and language about the in-group was a strong predictor of “love” reactions, reflecting in-group favoritism and out-group derogation. This out-group effect was not moderated by political orientation or social media platform, but stronger effects were found among political leaders than among news media accounts. In sum, out-group language is the strongest predictor of social media engagement across all relevant predictors measured, suggesting that social media may be creating perverse incentives for content expressing out-group animosity.
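To make the per-term figure concrete, here is a worked illustration that assumes the reported 67% increase acts multiplicatively on the odds, as it would for an odds ratio from a regression on term counts; the compounded numbers below are not reported in the abstract:

\[ \mathrm{odds}(k) = \mathrm{odds}(0) \times 1.67^{k}, \qquad \text{e.g. } 1.67^{3} \approx 4.66 \]

Under that assumption, a post containing three out-group terms would have roughly 4.7 times the sharing odds of an otherwise identical post containing none.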
Social comparison and maladaptive emotion regulation are associated with poorer mental health in social media users
The ubiquity of social technologies in daily life has intensified concerns about their psychological impact. Emerging evidence points to the quality of engagement and the psychological processes that social media use activates, rather than screen time alone. This study examined these processes in a nationally representative sample of 1,707 adults aged 16–75 (M = 44.5, SD = 14.8; 50.40% female). Objective screen-time verification was complemented with validated self-report questionnaires measuring anxiety and depression symptomatology, anger reactions and displaced aggression, social comparison, and maladaptive emotion regulation. Data were analyzed using correlation analyses, multivariate analysis of covariance (MANCOVA), and path analysis. Findings revealed consistent gender differences: women spent more time online and reported higher levels of social comparison and maladaptive regulation strategies. Cohort analyses showed Generation Z to be most vulnerable, scoring highest on social comparison, maladaptive strategies such as rumination and catastrophizing, and symptoms of depression, anxiety, and anger, whereas Boomers consistently reported the lowest levels. Our work also shows that mental health indicators such as anxiety, depression, and anger are more strongly associated with social media time when higher use co-occurs with greater social comparison and maladaptive emotion regulation strategies. These findings are interpreted in light of emerging work on digital emotion regulation, suggesting that the quality of engagement may be more relevant than sheer time online. The present work refines the ongoing debate on screen time, underscoring the importance of fostering emotional regulation to promote healthier and more adaptive engagement in today’s hyperconnected digital world.
Toolbox of individual-level interventions against online misinformation
The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels. Kozyreva et al. review evidence on individual-level interventions for fighting online misinformation featured in 81 scientific papers. They classify the interventions into nine different types and summarize their findings in a toolbox.
Individual-level solutions may support system-level change – if they are internalized as part of one's social identity
System-level change is crucial for solving society's most pressing problems. However, individual-level interventions may be useful for creating behavioral change before system-level change is in place and for increasing necessary public support for system-level solutions. Participating in individual-level solutions may increase support for system-level solutions – especially if the individual-level solutions are internalized as part of one's social identity.
Generative language models exhibit social identity biases
Social identity biases, particularly the tendency to favor one’s own group (ingroup solidarity) and derogate other groups (outgroup hostility), are deeply rooted in human psychology and social behavior. However, it is unknown if such biases are also present in artificial intelligence systems. Here we show that large language models (LLMs) exhibit patterns of social identity bias, similarly to humans. By administering sentence completion prompts to 77 different LLMs (for instance, ‘We are…’), we demonstrate that nearly all base models and some instruction-tuned and preference-tuned models display clear ingroup favoritism and outgroup derogation. These biases manifest both in controlled experimental settings and in naturalistic human–LLM conversations. However, we find that careful curation of training data and specialized fine-tuning can substantially reduce bias levels. These findings have important implications for developing more equitable artificial intelligence systems and highlight the urgent need to understand how human–LLM interactions might reinforce existing social biases. Researchers show that large language models exhibit social identity biases similar to humans, showing favoritism toward ingroups and hostility toward outgroups. These biases persist across models, training data and real-world human–LLM conversations.
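For concreteness, here is a minimal Python sketch of the sentence-completion paradigm the abstract describes: compare completions of an ingroup prompt ("We are…") with an outgroup prompt ("They are…"). The `generate` function is a hypothetical placeholder for whatever completion API a given model exposes, and the keyword-based labelling is purely illustrative; neither is the authors' actual pipeline.

```python
# Sketch of the sentence-completion paradigm: compare completions of an
# ingroup prompt ("We are...") with an outgroup prompt ("They are...").
# `generate` is a hypothetical placeholder, not a real model API, and the
# keyword labelling below is illustrative rather than the authors' method.
from collections import Counter

NEGATIVE = {"bad", "wrong", "dangerous", "evil", "stupid"}
POSITIVE = {"good", "great", "strong", "kind", "right"}


def generate(prompt: str, n: int = 100) -> list:
    """Return n completions of `prompt`. Plug in your model's API here."""
    raise NotImplementedError


def label(completion: str) -> str:
    """Crude keyword-based sentiment label for one completion."""
    words = set(completion.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"


def bias_summary() -> dict:
    """Tally sentiment of 'We are...' vs 'They are...' completions."""
    return {
        prompt: Counter(label(c) for c in generate(prompt))
        for prompt in ("We are", "They are")
    }
```

Ingroup solidarity would then show up as a higher share of positive labels for the "We are" prompt, and outgroup hostility as a higher share of negative labels for "They are".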
Accuracy and social motivations shape judgements of (mis)information
The extent to which belief in (mis)information reflects a lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors. Can individuals be motivated to accurately identify misinformation? Across four experiments, Rathje et al. provide support for financial incentives improving accuracy and reducing partisan bias in judgements of political news headlines.
How Can Psychological Science Help Counter the Spread of Fake News?
In recent years, interest in the psychology of fake news has rapidly increased. We outline the various interventions within psychological science aimed at countering the spread of fake news and misinformation online, focusing primarily on corrective (debunking) and pre-emptive (prebunking) approaches. We also offer a research agenda of open questions within the field of psychological science that relate to how and why fake news spreads and how best to counter it: the longevity of intervention effectiveness; the role of sources and source credibility; whether the sharing of fake news is best explained by the motivated cognition or the inattention accounts; and the complexities of developing psychometrically validated instruments to measure how interventions affect susceptibility to fake news at the individual level.
Using natural language processing to analyse text data in behavioural science
Language is a uniquely human trait at the core of human interactions. The language people use often reflects their personality, intentions and state of mind. With the integration of the Internet and social media into everyday life, much of human communication is documented as written text. These online forms of communication (for example, blogs, reviews, social media posts and emails) provide a window into human behaviour and therefore present abundant research opportunities for behavioural science. In this Review, we describe how natural language processing (NLP) can be used to analyse text data in behavioural science. First, we review applications of text data in behavioural science. Second, we describe the NLP pipeline and explain the underlying modelling approaches (for example, dictionary-based approaches and large language models). We discuss the advantages and disadvantages of these methods for behavioural science, in particular with respect to the trade-off between interpretability and accuracy. Finally, we provide actionable recommendations for using NLP to ensure rigour and reproducibility.
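As a toy illustration of the dictionary-based end of the interpretability–accuracy trade-off mentioned in the Review, the sketch below scores a text by the share of its tokens that appear in a predefined word list; the lexicon here is a made-up example, not a validated dictionary.

```python
# Toy dictionary-based scorer: the share of a text's tokens that fall in a
# predefined lexicon. Interpretable but coarse compared with large language
# models; the word list is a made-up example, not a validated lexicon.
import re

NEGATIVE_AFFECT = {"angry", "hate", "awful", "terrible", "fear"}


def dictionary_score(text: str, lexicon: set = NEGATIVE_AFFECT) -> float:
    """Proportion of tokens in `text` that belong to `lexicon`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in lexicon for t in tokens) / len(tokens) if tokens else 0.0


print(dictionary_score("I hate this awful policy"))  # 0.4
```

This sits at the interpretable end of the trade-off: every score can be traced back to specific words, whereas large language models typically achieve higher accuracy at the cost of being harder to audit.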
Changing the incentive structure of social media may reduce online proxy failure and proliferation of negativity
Social media takes advantage of people's predisposition to attend to threatening stimuli by algorithmically promoting content that captures attention. However, this content is often not what people expressly state they would like to see. We propose that social media companies should weigh users’ expressed preferences more heavily in their algorithms. We also propose modest changes to user interfaces that could reduce the abundance of threatening content in the online environment.
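A toy sketch of what "weighing expressed preferences more heavily" could mean for a ranking score follows; it illustrates the general idea only, and the function name, inputs, and linear weighting scheme are assumptions rather than the authors' concrete proposal.

```python
# Toy ranking score blending predicted engagement with a user's expressed
# preference for a piece of content; `preference_weight` is the dial the
# abstract argues should be turned up. Illustrative assumption, not the
# authors' specification.
def rank_score(predicted_engagement: float,
               expressed_preference: float,
               preference_weight: float = 0.7) -> float:
    """Both inputs in [0, 1]; higher scores rank higher in the feed."""
    return ((1 - preference_weight) * predicted_engagement
            + preference_weight * expressed_preference)
```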