Search Results

65 results for "Ecker, Ullrich"
Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence
Misinformation can undermine a well-functioning democracy. For example, public misconceptions about climate change can lead to lowered acceptance of the reality of climate change and lowered support for mitigation policies. This study experimentally explored the impact of misinformation about climate change and tested several pre-emptive interventions designed to reduce the influence of misinformation. We found that false-balance media coverage (giving contrarian views equal voice with climate scientists) lowered perceived consensus overall, although the effect was greater among free-market supporters. Likewise, misinformation that confuses people about the level of scientific agreement regarding anthropogenic global warming (AGW) had a polarizing effect, with free-market supporters reducing their acceptance of AGW and those with low free-market support increasing their acceptance of AGW. However, we found that inoculating messages that (1) explain the flawed argumentation technique used in the misinformation or that (2) highlight the scientific consensus on climate change were effective in neutralizing those adverse effects of misinformation. We recommend that climate communication messages should take into account ways in which scientific content can be distorted, and include pre-emptive inoculation messages.
Toward effective government communication strategies in the era of COVID-19
Several countries have successfully reduced their COVID-19 infection rate early, while others have been overwhelmed. The reasons for the differences are complex, but response efficacy has in part depended on the speed and scale of governmental intervention and how communities have received, perceived, and acted on the information provided by governments and other agencies. While there is no ‘one size fits all’ communications strategy to deliver information during a prolonged crisis, in this article, we draw on key findings from scholarship in multiple social science disciplines to highlight some fundamental characteristics of effective governmental crisis communication. We then present ten recommendations for effective communication strategies to engender maximum support and participation. We argue that an effective communication strategy is a two-way process that involves clear messages, delivered via appropriate platforms, tailored for diverse audiences, and shared by trusted people. Ultimately, long-term success depends on developing and maintaining public trust. We outline how government policymakers can engender widespread public support and participation through increased and ongoing community engagement. We argue that a diversity of community groups must be included in engagement activities. We also highlight the implications of emerging digital technologies in communication and engagement activities.
Using the president’s tweets to understand political diversion in the age of social media
Social media has arguably shifted political agenda-setting power away from mainstream media onto politicians. Current U.S. President Trump’s reliance on Twitter is unprecedented, but the underlying implications for agenda setting are poorly understood. Using the president as a case study, we present evidence suggesting that President Trump’s use of Twitter diverts crucial media (The New York Times and ABC News) from topics that are potentially harmful to him. We find that increased media coverage of the Mueller investigation is immediately followed by Trump tweeting increasingly about unrelated issues. This increased activity, in turn, is followed by a reduction in coverage of the Mueller investigation—a finding that is consistent with the hypothesis that President Trump’s tweets may also successfully divert the media from topics that he considers threatening. The pattern is absent in placebo analyses involving Brexit coverage and several other topics that do not present a political risk to the president. Our results are robust to the inclusion of numerous control variables and examination of several alternative explanations, although the generality of the successful diversion must be established by further investigation. By analyzing President Trump’s tweets and data from two media sources, the authors provide evidence suggesting that when the media reports on a topic potentially harmful to the president, he tweets about unrelated issues. Further evidence from this case study suggests that these diversionary tweets may also successfully reduce subsequent media coverage of the harmful topic.
Impression formation stimuli: A corpus of behavior statements rated on morality, competence, informativeness, and believability
To investigate impression formation, researchers tend to rely on statements that describe a person’s behavior (e.g., “Alex ridicules people behind their backs”). These statements are presented to participants who then rate their impressions of the person. However, a corpus of behavior statements is costly to generate, and pre-existing corpora may be outdated and might not measure the dimension(s) of interest. The present study makes available a normed corpus of 160 contemporary behavior statements that were rated on 4 dimensions relevant to impression formation: morality, competence, informativeness, and believability. In addition, we show that the different dimensions are non-independent, exhibiting a range of linear and non-linear relationships, which may present a problem for past research. However, researchers interested in impression formation can control for these relationships (e.g., statistically) using the present corpus of behavior statements.
Correcting vaccine misinformation: A failure to replicate familiarity or fear-driven backfire effects
Individuals often continue to rely on misinformation in their reasoning and decision making even after it has been corrected. This is known as the continued influence effect, and one of its presumed drivers is misinformation familiarity. As continued influence can promote misguided or unsafe behaviours, it is important to find ways to minimize the effect by designing more effective corrections. It has been argued that correction effectiveness is reduced if the correction repeats the to-be-debunked misinformation, thereby boosting its familiarity. Some have even suggested that this familiarity boost may cause a correction to inadvertently increase subsequent misinformation reliance, a phenomenon termed the familiarity backfire effect. A study by Pluviano et al. (2017) found evidence for this phenomenon using vaccine-related stimuli. The authors found that repeating vaccine "myths" and contrasting them with corresponding facts backfired relative to a control condition, ironically increasing false vaccine beliefs. The present study sought to replicate and extend that work. We included four conditions from the original Pluviano et al. study: myths vs. facts, a visual infographic, a fear appeal, and a control condition. The present study also added a "myths-only" condition, which simply repeated false claims and labelled them as false; theoretically, this condition should be most likely to produce familiarity backfire. Participants received vaccine-myth corrections and were tested immediately post-correction, and again after a seven-day delay. We found that the myths vs. facts condition reduced vaccine misconceptions. None of the conditions increased vaccine misconceptions relative to control at either timepoint, or relative to a pre-intervention baseline; thus, no backfire effects were observed. This failure to replicate adds to the mounting evidence against familiarity backfire effects and has implications for vaccination communications and the design of debunking interventions.
Misinformation and Its Correction: Continued Influence and Successful Debiasing
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect
Misinformation often continues to influence inferential reasoning after clear and credible corrections are provided; this effect is known as the continued influence effect. It has been theorized that this effect is partly driven by misinformation familiarity. Some researchers have even argued that a correction should avoid repeating the misinformation, as the correction itself could serve to inadvertently enhance misinformation familiarity and may thus backfire, ironically strengthening the very misconception that it aims to correct. While previous research has found little evidence of such familiarity backfire effects, there remains one situation where they may yet arise: when correcting entirely novel misinformation, where corrections could serve to spread misinformation to new audiences who had never heard of it before. This article presents three experiments (total N = 1718) investigating the possibility of familiarity backfire within the context of correcting novel misinformation claims and after a 1-week study-test delay. While there was variation across experiments, overall there was substantial evidence against familiarity backfire. Corrections that exposed participants to novel misinformation did not lead to stronger misconceptions compared to a control group never exposed to the false claims or corrections. This suggests that it is safe to repeat misinformation when correcting it, even when the audience might be unfamiliar with the misinformation.
The psychological drivers of misinformation belief and its resistance to correction
Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education. Misinformation is influential despite unprecedented access to high-quality, factual information. In this Review, Ecker et al. describe the cognitive, social and affective factors that drive sustained belief in misinformation, synthesize the evidence for interventions to reduce its effects and offer recommendations for information consumers and practitioners.
Executive function and the continued influence of misinformation: A latent-variable analysis
Misinformation can continue to influence reasoning after correction; this is known as the continued influence effect (CIE). Theoretical accounts of the CIE attribute it to the failure of two cognitive processes, namely memory updating and the suppression of misinformation reliance. Both processes can also be conceptualised as subcomponents of contemporary executive function (EF) models; specifically, working-memory updating and prepotent-response inhibition. EF may thus predict susceptibility to the CIE. The current study investigated whether individual differences in EF could predict individual differences in CIE susceptibility. Participants completed several measures of EF subcomponents, including those of updating and inhibition, as well as set shifting, and a standard CIE task. The relationship between EF and CIE was then assessed using a correlation analysis of the EF and CIE measures, as well as structural equation modelling of the EF-subcomponent latent variables and a CIE latent variable. Results showed that EF can predict susceptibility to the CIE, especially the factor of working-memory updating. These results further our understanding of the CIE’s cognitive antecedents and provide potential directions for real-world CIE intervention.
Political Attitudes and the Processing of Misinformation Corrections
Misinformation often continues to influence people's memory and inferential reasoning after it has been retracted; this is known as the continued influence effect (CIE). Previous research investigating the role of attitude-based motivated reasoning in this context has found conflicting results: Some studies have found that worldview can have a strong impact on the magnitude of the CIE, such that retractions are less effective if the misinformation is congruent with a person's relevant attitudes, in which case the retractions can even backfire. Other studies have failed to find evidence for an effect of attitudes on the processing of misinformation corrections. The present study used political misinformation—specifically fictional scenarios involving misconduct by politicians from left-wing and right-wing parties—and tested participants identifying with those political parties. Results showed that in this type of scenario, partisan attitudes have an impact on the processing of retractions, in particular (1) if the misinformation relates to a general assertion rather than just a specific singular event and (2) if the misinformation is congruent with a conservative partisanship.