Catalogue Search | MBRL
Explore the vast range of titles available.
38 results for "Ciampaglia, Giovanni Luca"
The spread of low-credibility content by social bots
by Shao, Chengcheng; Ciampaglia, Giovanni Luca; Flammini, Alessandro
in 639/766/530/2801; 706/689/454; 706/689/522
2018
The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
Online misinformation is a threat to a well-informed electorate and undermines democracy. Here, the authors analyse the spread of articles on Twitter, find that bots play a major role in the spread of low-credibility content and suggest control measures for limiting the spread of misinformation.
Journal Article
How algorithmic popularity bias hinders or promotes quality
by Ciampaglia, Giovanni Luca; Flammini, Alessandro; Menczer, Filippo
in 639/705/531; 639/766/530/2801; Algorithms
2018
Algorithms that favor popular items are used to help us select among many choices, from top-ranked search engine results to highly cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, credible information sources, and important discoveries; in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content “bubble up” in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of a cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the trade-off between quality and popularity. Below and above a critical exploration cost, popularity bias is more likely to hinder quality. But we find a narrow intermediate regime of user attention where an optimal balance exists: choosing what is popular can help promote high-quality items to the top. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.
Journal Article
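A toy simulation makes the trade-off in the abstract above concrete. The sketch below loosely follows the described setup (items with intrinsic quality, agents who either explore at random or copy what is popular); the exploration parameter and the quality-dependent adoption rule are illustrative assumptions, not the paper's exact model.

```python
import random

def simulate_market(n_items=50, n_agents=5000, explore_p=0.2, seed=1):
    """Toy cultural market: each item has an intrinsic quality in [0, 1].
    Each agent either explores a random item (with probability explore_p,
    standing in for a low cognitive cost of exploration) or picks an item
    proportionally to its current popularity, then adopts it with
    probability equal to its quality. Returns the mean quality of the ten
    most popular items, so different exploration regimes can be compared.
    Illustrative only; not the paper's exact model."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_items)]
    popularity = [1] * n_items  # smoothing so every item can be copied
    for _ in range(n_agents):
        if rng.random() < explore_p:
            item = rng.randrange(n_items)
        else:
            item = rng.choices(range(n_items), weights=popularity)[0]
        if rng.random() < quality[item]:  # adoption depends on quality
            popularity[item] += 1
    top = sorted(range(n_items), key=popularity.__getitem__, reverse=True)[:10]
    return sum(quality[i] for i in top) / 10

# Compare a popularity-dominated market with a more exploratory one.
print(simulate_market(explore_p=0.05), simulate_market(explore_p=0.5))
```

Sweeping explore_p in a sketch like this is one way to look for the intermediate regime the abstract describes, where popularity helps rather than hinders quality.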
Anatomy of an online misinformation network
by Shao, Chengcheng; Ciampaglia, Giovanni Luca; Hui, Pik-Mai
in Anatomy; Artificial Intelligence; Biology and Life Sciences
2018
Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and the spread of fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How can the overall amount of misinformation be reduced? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
Journal Article
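The k-core decomposition described in the abstract above is straightforward to reproduce on any retweet graph. A minimal sketch with networkx, assuming a hypothetical edge list of (retweeter, original poster) pairs:

```python
import networkx as nx

# Hypothetical retweet edge list: (retweeter, original_poster) pairs.
retweet_edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")]
G = nx.Graph(retweet_edges)  # k-cores are defined on the undirected graph

shells = nx.core_number(G)  # each account's k-shell index
main_core = nx.k_core(G)    # subgraph where every node has degree >= k_max
print(sorted(shells.items()))
print("main core:", sorted(main_core.nodes()))
```

Sweeping from low to high shell indices corresponds to the periphery-to-core movement along which the authors observe fact-checking vanish and bots proliferate.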
Computational Fact Checking from Knowledge Networks
by Ciampaglia, Giovanni Luca; Flammini, Alessandro; Shiralkar, Prashant
in Algorithms; Area Under Curve; Computer applications
2015
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
Journal Article
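The core idea in the abstract above (scoring a claim by a short path between its subject and object concepts, discounting detours through overly generic nodes) can be sketched over a networkx knowledge graph. The log-degree penalty below is one plausible semantic-proximity metric in that spirit; treat it as an approximation, not the paper's exact definition.

```python
import math
import networkx as nx

def support(G, subject, obj):
    """Score a claimed (subject, object) link by the cheapest path between
    them, where entering an intermediate node costs log(degree): paths
    routed through generic, high-degree concepts count for less. One
    plausible semantic-proximity metric; the paper's exact definition may
    differ."""
    def cost(u, v, d):
        # The final hop into the target is free: the target is part of
        # the claim itself, not an intermediate concept.
        return 0.0 if v == obj else math.log(G.degree(v))
    try:
        path = nx.dijkstra_path(G, subject, obj, weight=cost)
    except nx.NetworkXNoPath:
        return 0.0
    detour = sum(math.log(G.degree(v)) for v in path[1:-1])
    return 1.0 / (1.0 + detour)
```

A directly linked pair scores 1.0; a claim reachable only through hub concepts scores close to 0, mirroring the evaluation in which true statements receive higher support than false ones.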
The production of information in the attention economy
by Ciampaglia, Giovanni Luca; Menczer, Filippo; Flammini, Alessandro
in 639/705/1042; 639/705/258; 639/766/530/2801
2015
Online traces of human activity offer novel opportunities to study the dynamics of complex knowledge exchange networks, in particular how emergent patterns of collective attention determine what new information is generated and consumed. Can we measure the relationship between demand and supply for new information about a topic? We propose a normalization method to compare attention burst statistics across topics with heterogeneous distributions of attention. Through analysis of a massive dataset on traffic to Wikipedia, we find that the production of new knowledge is associated with significant shifts of collective attention, which we take as a proxy for its demand. This is consistent with a scenario in which the allocation of attention toward a topic stimulates the demand for information about it and, in turn, the supply of further novel information. However, attention spikes only for a limited time span, during which new content has a higher chance of receiving traffic than content created earlier or later. Our attempt to quantify the demand and supply of information, and our finding about their temporal ordering, may lead to the development of fundamental laws of the attention economy and to a better understanding of the social exchange of knowledge in information networks.
Journal Article
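The normalization step in the abstract above (making burst statistics comparable across topics whose baseline attention differs by orders of magnitude) can be illustrated with a simple per-topic standardization. This is one plausible choice, not necessarily the paper's method.

```python
import numpy as np

def standardize_attention(views):
    """Rescale a topic's daily pageview series to zero mean and unit
    variance so burst heights are comparable across popular and niche
    topics (one plausible normalization; the paper's method may differ)."""
    v = np.asarray(views, dtype=float)
    return (v - v.mean()) / v.std()

# A niche topic and a popular one with proportionally similar bursts
# normalize onto the same scale.
niche = standardize_attention([10, 12, 11, 50, 12])
popular = standardize_attention([1000, 1200, 1100, 5000, 1200])
print(np.allclose(niche, popular))  # True
```

On the standardized series, a burst on an obscure article and a burst on a heavily trafficked one can be detected with the same threshold.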
Power and Fairness in a Generalized Ultimatum Game
by Ciampaglia, Giovanni Luca; Lozano, Sergi; Helbing, Dirk
in Acceptance; Attainment; Balance of power
2014
Power is the ability to influence others towards the attainment of specific goals, and it is a fundamental force that shapes behavior at all levels of human existence. Several theories on the nature of power in social life exist, especially in the context of social influence. Yet, in bargaining situations, surprisingly little is known about its role in shaping social preferences. Such preferences are considered to be the main explanation for observed behavior in a wide range of experimental settings. In this work, we set out to understand the role of bargaining power in the stylized environment of a Generalized Ultimatum Game (GUG). We modify the payoff structure of the standard Ultimatum Game (UG) to investigate three situations: two in which the power balance is tilted against either the proposer or the responder, and a balanced situation. We find that other-regarding preferences, as measured by the amount of money donated by participants, do not change with the amount of power, but power changes offers and acceptance rates systematically. Notably, we observed unusually high acceptance rates for low offers. This finding suggests that social preferences may be invariant to the balance of power and confirms that the role of power in human behavior deserves more attention.
Journal Article
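The abstract above does not spell out the modified payoffs, but the power manipulation can be illustrated with a hypothetical ultimatum-game variant in which each side keeps a disagreement payoff on rejection. All names and parameters below are illustrative, not the paper's.

```python
def gug_payoffs(pie, offer, accepted, d_proposer=0.0, d_responder=0.0):
    """Hypothetical generalized ultimatum game: on acceptance the pie is
    split as offered; on rejection each side keeps a disagreement payoff.
    Raising d_proposer shifts bargaining power toward the proposer, since
    a rejection hurts them less, and vice versa. Illustrative only; the
    paper's exact payoff modification is not given in the abstract."""
    if accepted:
        return pie - offer, offer
    return d_proposer, d_responder

# Balanced power: rejection leaves both with nothing (standard UG).
print(gug_payoffs(10, 3, accepted=False))                 # (0.0, 0.0)
# Power against the responder: the proposer keeps 5 even if rejected.
print(gug_payoffs(10, 3, accepted=False, d_proposer=5))   # (5, 0.0)
```

Under such a scheme, a responder facing a powerful proposer rationally accepts lower offers, consistent with the unusually high acceptance rates the study reports.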
Research Challenges of Digital Misinformation: Toward a Trustworthy Web
by Ciampaglia, Giovanni Luca; Mantzarlis, Alexios; Maus, Gregory
in Artificial intelligence; Cognitive biases; Collaboration
2018
The deluge of online and offline misinformation is overloading the exchange of ideas upon which democracies depend. Fake news, conspiracy theories, and deceptive social bots proliferate, facilitating the manipulation of public opinion. Countering misinformation while protecting freedom of speech will require collaboration across industry, journalism, and academe. The Workshop on Digital Misinformation — held in May 2017, in conjunction with the International AAAI Conference on Web and Social Media in Montreal — was intended to foster these efforts. The meeting brought together more than 100 stakeholders from academia, media, and tech companies to discuss the research challenges implicit in building a trustworthy web. In this article, we outline the main findings from the discussion.
Journal Article
Factuality challenges in the era of large language models and opportunities for fact-checking
by Ciampaglia, Giovanni Luca; Chakraborty, Tanmoy; DiResta, Renee
in 4014/4009; 639/705/117; Access to information
2024
The emergence of tools based on large language models (LLMs), such as OpenAI’s ChatGPT and Google’s Gemini, has garnered immense public attention owing to their advanced natural language generation capabilities. These remarkably natural-sounding tools have the potential to be highly useful for various tasks. However, they also tend to produce false, erroneous or misleading content—commonly referred to as hallucinations. Moreover, LLMs can be misused to generate convincing, yet false, content and profiles on a large scale, posing a substantial societal challenge by potentially deceiving users and spreading inaccurate information. This makes fact-checking increasingly important. Despite their issues with factual accuracy, LLMs have shown proficiency in various subtasks that support fact-checking, which is essential to ensure factually accurate responses. In light of these concerns, we explore issues related to factuality in LLMs and their impact on fact-checking. We identify key challenges, imminent threats and possible solutions to these factuality issues. We also thoroughly examine these challenges, existing solutions and potential prospects for fact-checking. By analysing the factuality constraints within LLMs and their impact on fact-checking, we aim to contribute to a path towards maintaining accuracy at a time of confluence of generative artificial intelligence and misinformation.
Large language models (LLMs) present challenges, including a tendency to produce false or misleading content and the potential to create misinformation or disinformation. Augenstein and colleagues explore issues related to factuality in LLMs and their impact on fact-checking.
Journal Article
Political audience diversity and news reliability in algorithmic ranking
2022
Newsfeed algorithms frequently amplify misinformation and other low-quality content. How can social media platforms more effectively promote reliable information? Existing approaches are difficult to scale and vulnerable to manipulation. In this paper, we propose using the political diversity of a website’s audience as a quality signal. Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 US residents, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards. We then incorporate audience diversity into a standard collaborative filtering framework and show that our improved algorithm increases the trustworthiness of websites suggested to users—especially those who most frequently consume misinformation—while keeping recommendations relevant. These findings suggest that partisan audience diversity is a valuable signal of higher journalistic standards that should be incorporated into algorithmic ranking decisions.
Using survey and internet browsing data and expert ratings, Bhadani et al. find that incorporating partisan audience diversity into algorithmic rankings of news websites increases the trustworthiness of the sites they recommend and maintains relevance.
Journal Article
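The audience-diversity signal in the abstract above can be sketched directly: given partisanship scores for a site's visitors, the variance of those scores is a natural diversity measure, and sites can be re-ranked by blending it with a relevance score. The function names and blending weight below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def audience_diversity(partisanship):
    """Variance of visitors' partisanship scores (e.g. -1 = left,
    +1 = right): extreme, homogeneous audiences score low; politically
    mixed audiences score high. One natural reading of the abstract's
    signal, not necessarily the paper's exact definition."""
    return float(np.var(partisanship))

def blended_score(relevance, diversity, alpha=0.3):
    """Illustrative blend of a collaborative-filtering relevance score
    with the diversity signal; alpha is a made-up weighting."""
    return (1 - alpha) * relevance + alpha * diversity

# A partisan site vs. a site with a politically mixed audience.
print(audience_diversity([0.9, 0.8, 1.0, 0.95]))   # low variance
print(audience_diversity([-0.8, 0.9, -0.1, 0.6]))  # high variance
```

Because the diversity term rewards sites read across the political spectrum, a blend like this nudges recommendations toward sources the abstract associates with higher journalistic standards.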