Catalogue Search | MBRL
Explore the vast range of titles available.
20 result(s) for "Fact-Check"
Are fewer people living in poverty in the UK than 10 years ago?
by Limb, Matthew, in FACT CHECK, 2022. Journal Article.
Will genome testing of healthy babies save lives?
by Mahase, Elisabeth, in FACT CHECK, 2019. Journal Article.
Is Matt Hancock right to take credit for falling death rates?
by Mahase, Elisabeth, in FACT CHECK, 2019. Journal Article.
Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media
2020
Social media has increasingly enabled “fake news” to circulate widely, most notably during the 2016 U.S. presidential campaign. These intentionally false or misleading stories threaten the democratic goal of a well-informed electorate. This study evaluates the effectiveness of strategies that could be used by Facebook and other social media to counter false stories. Results from a pre-registered experiment indicate that false headlines are perceived as less accurate when people receive a general warning about misleading information on social media or when specific headlines are accompanied by a “Disputed” or “Rated false” tag. Though the magnitudes of these effects are relatively modest, they generally do not vary by whether headlines were congenial to respondents’ political views. In addition, we find that adding a “Rated false” tag to an article headline lowers its perceived accuracy more than adding a “Disputed” tag (Facebook’s original approach) relative to a control condition. Finally, though exposure to the “Disputed” or “Rated false” tags did not affect the perceived accuracy of unlabeled false or true headlines, exposure to a general warning decreased belief in the accuracy of true headlines, suggesting the need for further research into how to most effectively counter false news without distorting belief in true information.
Journal Article
Nudge Effect of Fact-Check Alerts: Source Influence and Media Skepticism on Sharing of News Misinformation in Social Media
2020
This study extends the nudge principle with media effects and credibility evaluation perspectives to examine whether the effectiveness of fact-check alerts in deterring news sharing on social media is moderated by news source, and whether this moderation is conditional upon users’ skepticism of mainstream media. Results from a 2 (nudge: fact-check alert vs. no alert) × 2 (news source: legacy mainstream vs. unfamiliar non-mainstream) experiment (N = 929), controlling for individual issue involvement, online news involvement, and news sharing experience, revealed significant main and interaction effects from both factors. News sharing likelihood was overall lower for non-mainstream news than mainstream news, but showed a greater decrease for mainstream news when nudged. No conditional moderation from media skepticism was found; instead, users’ skepticism of mainstream media amplified the nudge effect only for news from legacy mainstream media, not from unfamiliar non-mainstream sources. Theoretical and practical implications for the use of fact-checking and mainstream news sources in social media are discussed.
Journal Article
Combating Misinformation by Sharing the Truth: a Study on the Spread of Fact-Checks on Social Media
2023
Misinformation on social media has become a serious problem for society. Fact-checks often lag behind the diffusion of misinformation, which can have negative impacts on society. This research studies how different factors may affect the spread of fact-checks over the internet. We collected a dataset of fact-checks published over a six-month period and analyzed how they spread on Twitter, measuring spread by total retweet count. The factors include the truthfulness rating, topic of information, and source credibility, among others. The research identifies truthfulness rating as a significant factor: conclusive fact-checks (either true or false) tend to be shared more than others. In addition, source credibility, political leaning, and sharing count also affect the spread of fact-checks. The findings provide practical insights into accelerating the spread of the truth in the battle against misinformation online.
Journal Article
Misinformation on social networks during the novel coronavirus pandemic: a quali-quantitative case study of Brazil
by Biancovilli, Priscila; Makszin, Lilla; Jurberg, Claudia, in Biostatistics; Brazil; Brazil - epidemiology, 2021
Background
One of the challenges posed by the novel coronavirus pandemic is the infodemic risk: a huge amount of information published on the topic, mixed with misinformation and rumours; social media amplifies this phenomenon, making it travel faster and further. Around 100 million people in Brazil (roughly half of the country’s population) use social media networks. Most information on the Internet is unregulated, and its quality remains questionable.
Methods
In this study, we examine the main characteristics of misinformation published on the topic. We analysed 232 pieces of misinformation published by the Brazilian fact-checking service “Agência Lupa”. The following aspects of each news item were analysed: a) In what social media did it circulate? b) What are its content classification, sentiment, and type of misinformation? c) Are there recurrent themes in the sample studied?
Results
Most items were published on Facebook (76%), followed by WhatsApp with 10% of cases. Nearly half of the stories (47%) are classified as “real-life”, that is, focused on everyday situations or circumstances involving people. Regarding the type of misinformation, fabricated content predominates (53% of the total), followed by false context (34%) and misleading content (13%). Misinformation was mostly published in text format (47%). We found that 92.9% of misinformation classified as “fabricated content” consists of “health tips”, and 88.9% of “virtual scams” are also fabricated.
Conclusion
Brazilian media and science communicators must understand the main characteristics of misinformation in social media about COVID-19, so that they can develop attractive, up-to-date and evidence-based content that helps to increase health literacy and counteract the spread of false information.
Journal Article
Fighting the infodemic: the 4 i Framework for Advancing Communication and Trust
by Jamison, Amelia M.; Huhn, Noelle; Sundelson, Anne E., in Analysis; Biostatistics; Communication, 2023
Background
The proliferation of false and misleading health claims poses a major threat to public health. This ongoing “infodemic” has prompted numerous organizations to develop tools and approaches to manage the spread of falsehoods and communicate more effectively in an environment of mistrust and misleading information. However, these tools and approaches have not been systematically characterized, limiting their utility. This analysis provides a characterization of the current ecosystem of infodemic management strategies, allowing public health practitioners, communicators, researchers, and policy makers to gain an understanding of the tools at their disposal.
Methods
A multi-pronged search strategy was used to identify tools and approaches for combatting health-related misinformation and disinformation. The search strategy included a scoping review of academic literature; a review of gray literature from organizations involved in public health communications and misinformation/disinformation management; and a review of policies and infodemic management approaches from all U.S. state health departments and select local health departments. A team of annotators labelled the main feature(s) of each tool or approach using an iteratively developed list of tags.
Results
We identified over 350 infodemic management tools and approaches. We introduce the 4 i Framework for Advancing Communication and Trust (4 i FACT), a modified social-ecological model, to characterize different levels of infodemic intervention: informational, individual, interpersonal, and institutional. Information-level strategies included those designed to amplify factual information, fill information voids, debunk false information, track circulating information, and verify, detect, or rate the credibility of information. Individual-level strategies included those designed to enhance information literacy and prebunking/inoculation tools. Strategies at the interpersonal/community level included resources for public health communicators and community engagement approaches. Institutional and structural approaches included resources for journalists and fact checkers, tools for managing academic/scientific literature, resources for infodemic researchers/research, resources for infodemic managers, social media regulation, and policy/legislation.
Conclusions
The 4 i FACT provides a useful way to characterize the current ecosystem of infodemic management strategies. Recognizing the complex and multifaceted nature of the ongoing infodemic, efforts should be taken to utilize and integrate strategies across all four levels of the modified social-ecological model.
Journal Article
Preventing the Diffusion of Disinformation on Disaster SNS by Collective Debunking with Penalties
2024
As online resources such as social media are increasingly used in disaster situations, confusion caused by the spread of false information, misinformation, and hoaxes has become a problem. Although a large amount of research has been conducted on how to suppress disinformation (the widespread dissemination of such false information), most payoff-oriented research has been based on prisoner’s dilemma experiments, and there has been no analysis of measures that address the actual occurrence of disinformation on disaster SNSs. In this paper, we focus on the fact that one characteristic of disaster SNS information is that it allows citizens to confirm the reality of a disaster. We refer to this as collective debunking, propose a profit-agent model for it, and analyse the model using an evolutionary game. We found experimentally that deception in the confirmation of disaster information uploaded to SNS is likely to lead to the occurrence of disinformation. We also found that if this deception can be detected and punished, for example by patrols, the occurrence of disinformation tends to be suppressed.
Journal Article
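The penalty mechanism in the last abstract can be illustrated with replicator dynamics. The sketch below is a toy model, not the paper’s actual profit-agent model: the payoff values, detection probability, and penalty size are invented here purely to show the qualitative effect that a detected-and-punished deception strategy dies out.

```python
def deceiver_share_at_equilibrium(penalty, detect_prob, steps=5000, dt=0.01):
    """Fraction of deceptive agents after running replicator dynamics.

    Two strategies: honest confirmation of disaster reports vs. deceptive
    confirmation. Deception pays more when undetected, but incurs an
    expected penalty of detect_prob * penalty (e.g. from patrols).
    All numbers are illustrative assumptions, not empirical values.
    """
    b_honest = 1.0    # baseline payoff for honest confirmation
    b_deceive = 1.5   # deception pays more when it goes undetected
    x = 0.5           # initial share of deceivers in the population
    for _ in range(steps):
        pay_d = b_deceive - detect_prob * penalty   # expected deceiver payoff
        pay_h = b_honest
        avg = x * pay_d + (1 - x) * pay_h           # population mean payoff
        x += dt * x * (pay_d - avg)                 # replicator equation
        x = min(max(x, 0.0), 1.0)
    return x

# With no penalty, deception takes over; with a detectable penalty, it vanishes.
no_penalty = deceiver_share_at_equilibrium(penalty=0.0, detect_prob=0.5)
with_penalty = deceiver_share_at_equilibrium(penalty=2.0, detect_prob=0.5)
```

Under these assumed payoffs, `no_penalty` converges near 1 and `with_penalty` near 0, matching the abstract’s qualitative finding that punishable detection suppresses disinformation.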