Catalogue Search | MBRL
Explore the vast range of titles available.
8,742 result(s) for "Misinformation"
Foolproof : why misinformation infects our minds and how to build immunity
by Van der Linden, Sander
in Misinformation; Disinformation; Truthfulness and falsehood
2023
"From fake news to conspiracy theories, from inflammatory memes to misleading headlines, misinformation has swiftly become the defining problem of our era. The crisis threatens the integrity of our democracies, our ability to cultivate trusting relationships, even our physical and psychological well-being--yet most attempts to combat it have proven insufficient. In Foolproof, one of the world's leading experts on misinformation lays out a crucial new paradigm for understanding and defending ourselves against the worldwide infodemic" -- Provided by publisher.
Debiasing misinformation: how do people diagnose health recommendations from AI?
by Lim, Joon Soo; Shin, Donghee; Spyridou, Anastasia
in Accountability; Algorithms; Artificial intelligence
2024
Purpose: This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.
Design/methodology/approach: We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using the Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence the heuristic evaluation and systematic discernment of misinformation by users. To test moderating and mediating effects, PROCESS Macro Model 4 was used.
Findings: The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. There is a greater chance that misinformation will be correctly diagnosed and checked if it aligns with users’ heuristics or is validated by the diagnosticity they perceive.
Research limitations/implications: When exposed to misinformation through algorithmic recommendations, users’ perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity would then positively influence the accuracy and credibility of the misinformation.
Practical implications: Perceived diagnosticity exerts a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.
Social implications: Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes when it comes to identifying health misinformation and the diffusion mechanism from which such misinformation is framed and subsequently spread.
Originality/value: The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.
Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167
Journal Article
The mediated climate : how journalists, big tech, and activists are vying for our future
"Few contemporary issues have been as riddled with claims of misinformation and skewed coverage as climate change. Critics contend that journalism has, until very recently, failed to cover the topic with an urgency that can best inform and mobilize the public. Journalists are now devoting more resources to the topic and moving away from the traditional 'balanced' approach that would give climate denialists a voice. However, coverage of climate change is also shaped and distorted by the current networked-era information crisis. In investigating the impact of online platforms and a variety of corporate and political interests, Adrienne Russell argues that we need to think about the information and climate crises together to understand the conditions under which journalism operates and the power dynamics that shape public discourse on the subject. In The Mediated Climate, Russell tells the history of how the boundaries between journalism, public relations, and advocacy have become blurred around the subject of climate change. She traces the evolution of the tools and practices available to various industries that trade in disinformation and how climate journalists have adapted to meet the challenge presented by widespread misinformation. She also considers how journalism's role in shaping a public has been replaced by the digital public as constructed by data and algorithms. Finally, based on her interviews with journalists and activists, Russell looks at how recent mobilizations fight against misinformation, and proposes measures to protect our information infrastructures" -- Provided by publisher.
Misinformation in and about science
by West, Jevin D.; Bergstrom, Carl T.
in Arthur M. Sackler on Advancing the Science and Practice of Science Communication: Misinformation about Science in the Public Sphere; Biomedical Research - ethics; Climate change
2021
Humans learn about the world by collectively acquiring information, filtering it, and sharing what we know. Misinformation undermines this process. The repercussions are extensive. Without reliable and accurate sources of information, we cannot hope to halt climate change, make reasoned democratic decisions, or control a global pandemic. Most analyses of misinformation focus on popular and social media, but the scientific enterprise faces a parallel set of problems—from hype and hyperbole to publication bias and citation misdirection, predatory publishing, and filter bubbles. In this perspective, we highlight these parallels and discuss future research directions and interventions.
Journal Article
May contain lies : how stories, statistics, and studies exploit our biases - and what we can do about it
"Our lives are minefields of misinformation. It ripples through our social media feeds, our daily headlines, and the pronouncements of politicians, executives, and authors. Stories, statistics, and studies are everywhere, allowing people to find evidence to support whatever position they want. Many of these sources are flawed, yet by playing on our emotions and preying on our biases, they can gain widespread acceptance, warp our views, and distort our decisions. In this eye-opening book, renowned economist Alex Edmans teaches us how to separate fact from fiction. Using colorful examples--from a wellness guru's tragic but fabricated backstory to the blunders that led to the Deepwater Horizon disaster to the diet that ensnared millions yet hastened its founder's death--Edmans highlights the biases that cause us to mistake statements for facts, facts for data, data for evidence, and evidence for proof. Armed with the knowledge of what to guard against, he then provides a practical guide to combat this tide of misinformation. Going beyond simply checking the facts and explaining individual statistics, Edmans explores the relationships between statistics--the science of cause and effect--ultimately training us to think smarter, sharper, and more critically. May Contain Lies is an essential read for anyone who wants to make better sense of the world and better decisions" -- Provided by publisher.
Prevalence of Health Misinformation on Social Media: Systematic Review
2021
Although at present there is broad agreement among researchers, health professionals, and policy makers on the need to control and combat health misinformation, the magnitude of this problem is still unknown. Consequently, it is fundamental to discover both the most prevalent health topics and the social media platforms from which these topics are initially framed and subsequently disseminated.
This systematic review aimed to identify the main health misinformation topics and their prevalence on different social media platforms, focusing on methodological quality and the diverse solutions that are being implemented to address this public health concern.
We searched PubMed, MEDLINE, Scopus, and Web of Science for articles published in English before March 2019, with a focus on the study of health misinformation in social media. We defined health misinformation as a health-related claim that is based on anecdotal evidence, false, or misleading owing to the lack of existing scientific knowledge. We included (1) articles that focused on health misinformation in social media, including those in which the authors discussed the consequences or purposes of health misinformation and (2) studies that described empirical findings regarding the measurement of health misinformation on these platforms.
A total of 69 studies were identified as eligible, and they covered a wide range of health topics and social media platforms. The topics were articulated around the following six principal categories: vaccines (32%), drugs or smoking (22%), noncommunicable diseases (19%), pandemics (10%), eating disorders (9%), and medical treatments (7%). Studies were mainly based on the following five methodological approaches: social network analysis (28%), evaluating content (26%), evaluating quality (24%), content/text analysis (16%), and sentiment analysis (6%). Health misinformation was most prevalent in studies related to smoking products and drugs such as opioids and marijuana. Posts with misinformation reached 87% in some studies. Health misinformation about vaccines was also very common (43%), with the human papilloma virus vaccine being the most affected. Health misinformation related to diets or pro-eating disorder arguments were moderate in comparison to the aforementioned topics (36%). Studies focused on diseases (ie, noncommunicable diseases and pandemics) also reported moderate misinformation rates (40%), especially in the case of cancer. Finally, the lowest levels of health misinformation were related to medical treatments (30%).
The prevalence of health misinformation was the highest on Twitter and on issues related to smoking products and drugs. However, misinformation on major public health issues, such as vaccines and diseases, was also high. Our study offers a comprehensive characterization of the dominant health misinformation topics and a comprehensive description of their prevalence on different social media platforms, which can guide future studies and help in the development of evidence-based digital policy action plans.
Journal Article
The trouble with deepfakes
by Gregory, Josh
in Deepfakes -- Juvenile literature; Misinformation -- Juvenile literature; Disinformation -- Juvenile literature
2025
"In The Trouble with Deepfakes, readers will learn about what deepfakes are, how they can spread misinformation, and what is being done to manage them. Engaging text allows readers to connect new technology concepts to what they already know" -- Provided by publisher.
Children's Book
Why the backfire effect does not explain the durability of political misperceptions
by Nyhan, Brendan
in Arthur M. Sackler on Advancing the Science and Practice of Science Communication: Misinformation about Science in the Public Sphere; Backfire; COLLOQUIUM PAPERS
2021
Previous research indicated that corrective information can sometimes provoke a so-called “backfire effect” in which respondents more strongly endorsed a misperception about a controversial political or scientific issue when their beliefs or predispositions were challenged. I show how subsequent research and media coverage seized on this finding, distorting its generality and exaggerating its role relative to other factors in explaining the durability of political misperceptions. To the contrary, an emerging research consensus finds that corrective information is typically at least somewhat effective at increasing belief accuracy when received by respondents. However, the research that I review suggests that the accuracy-increasing effects of corrective information like fact checks often do not last or accumulate; instead, they frequently seem to decay or be overwhelmed by cues from elites and the media promoting more congenial but less accurate claims. As a result, misperceptions typically persist in public opinion for years after they have been debunked. Given these realities, the primary challenge for scientific communication is not to prevent backfire effects but instead, to understand how to target corrective information better and to make it more effective. Ultimately, however, the best approach is to disrupt the formation of linkages between group identities and false claims and to reduce the flow of cues reinforcing those claims from elites and the media. Doing so will require a shift from a strategy focused on providing information to the public to one that considers the roles of intermediaries in forming and maintaining belief systems.
Journal Article
A firehose of falsehood : the story of disinformation
"A Firehose of Falsehood: The Story of Disinformation breaks down disinformation tactics and offers tools for defending and restoring truth. Using examples from Darius I of ancient Persia (522-486 BCE), to blood libel of the Middle Ages, to Soviet disinformation tactics and modern election deniers, Teri Kanefield and Pat Dorian show how tyrants and would-be tyrants deploy disinformation to gain power. Democracy, which draws its authority from laws instead of the whim of a tyrant, requires truth. For a democracy to survive, its citizens must preserve and defend truth. Now that the internet has turned what was once a trickle of lies into a firehose, the challenge of holding on to truth has never been greater. A Firehose of Falsehood offers readers these necessary tools." -- Back cover.
I assume others are influenced by health misinformation on social media: examining the underlying process of intentions to combat health misinformation
2025
Purpose: The mounting health misinformation on social media triggers heated discussions about how to address it. Anchored by the influence of presumed influence (IPI) model, this study investigates the underlying process of intentions to combat health misinformation. Specifically, we analyzed how presumed exposure of others and presumed influence on others affect intentions to practice pre-emptive and reactive misinformation countering strategies.
Design/methodology/approach: Covariance-based structural equation modeling based on survey data from 690 Chinese participants was performed using the “lavaan” package in R to examine the proposed mechanism.
Findings: Personal attention to health information on social media is positively associated with presumed others’ attention to the same information, which, in turn, is related to an increased perception of health misinformation’s influence on others. The presumed influence is further positively tied to two pre-emptive countermeasures (i.e. support for media literacy interventions and institutional verification intention) and one reactive countermeasure (i.e. misinformation correction intention). However, the relationship between presumed influence and support for governmental restrictions, as another reactive countering method, is not significant.
Originality/value: This study supplements the misinformation countering literature by examining IPI’s tenability in explaining why individuals engage in combating misinformation. Both pre-emptive and reactive strategies were considered, enabling a panoramic view of the motivators of misinformation countering compared to previous studies. Our findings also inform the necessity of adopting a context-specific perspective and crafting other-oriented messages to motivate users’ initiative in implementing corrective actions.
Journal Article