Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
671 result(s) for "Online hate speech."
Crash override : how Gamergate (nearly) destroyed my life, and how we can win the fight against online hate
Quinn "is a video game developer whose ex-boyfriend published a crazed blog post cobbled together from private information, half-truths, and outright fictions, along with a rallying cry to the online hordes to go after her. They answered in the form of a so-called movement known as #gamergate--they hacked her accounts; stole nude photos of her; harassed her family, friends, and colleagues; and threatened to rape and murder her. But instead of shrinking into silence as the online mobs wanted her to, she raised her voice and spoke out against this vicious online culture and for making the Internet a safer place for everyone"--Amazon.com.
Hate speech, toxicity detection in online social media: a recent survey of state of the art and opportunities
by Anjum; Katarya, Rahul
in Coding and Information Theory; Communications Engineering; Communications technology
2024
Information and communication technology has evolved dramatically; the majority of people now use the internet and share their opinions more openly, which has led to the creation, collection, and circulation of hate speech across multiple platforms. The anonymity and mobility afforded by these social media platforms allow people to hide behind a screen and spread hate effortlessly. Online hate speech (OHS) recognition can play a vital role in stopping such activities and can thus restore the position of public platforms as an open marketplace of ideas. To study hate speech detection in social media, we surveyed the related datasets available on web-based platforms. We further analyzed approximately 200 research papers indexed in different journals from 2010 to 2022. The papers were grouped by the approaches used in OHS detection, i.e., feature selection, traditional machine learning (ML), and deep learning (DL). Based on the 111 selected papers, we found that 44 articles used traditional ML and 35 used DL-based approaches. We conclude that most authors used SVM, Naive Bayes, and Decision Trees among ML approaches, and CNNs and LSTMs among DL approaches. This survey contributes a systematic approach to help researchers identify new research directions in online hate speech.
Journal Article
Governing Online Speech
2021
Online speech governance stands at an inflection point. The state of emergency that platforms invoked during the COVID-19 pandemic is subsiding, and lawmakers are poised to transform the regulatory landscape. What emerges from this moment will shape the most important channels for communication in the modern era and have profound consequences for individuals, societies, and democratic governance. Tracing the path to this point illuminates the tasks that the institutions created during this transformation must be designed to do. This history shows that where online speech governance was once dominated by the First Amendment tradition’s categorical and individualistic approach to adjudicating speech conflicts, that approach became strained, and online speech governance now revolves around two other principles: proportionality and probability. Proportionality requires no longer focusing on the speech interest in an individual post alone, but also taking account of other societal interests that can justify proportionate limitations on content. But the unfathomable scale of online speech makes enforcement of rules only ever a matter of probability: Content moderation will always involve error, and so the pertinent questions are what error rates are reasonable and which kinds of errors should be preferred. Platforms’ actions during the pandemic have thrown into stark relief the centrality of these principles to online speech governance and also how undertheorized they remain. This Article reviews the causes of this shift from a “posts-as-trumps” approach to online speech governance to one of systemic balancing and what this new era of content moderation entails for platforms and their regulators.
Journal Article
Hate Speech in a Telegram Conspiracy Channel During the First Year of the COVID-19 Pandemic
by Scrivens, Ryan; Martinez Arranz, Alfonso; Vergani, Matteo
in Automatic text analysis; Conspiracy; Coronaviruses
2022
Research has explored how the COVID-19 pandemic triggered a wave of conspiratorial thinking and online hate speech, but little is empirically known about how different phases of the pandemic are associated with hate speech against adversaries identified by online conspiracy communities. This study addresses this gap by combining observational methods with exploratory automated text analysis of content from an Italian-themed conspiracy channel on Telegram during the first year of the pandemic. We found that, before the first lockdown in early 2020, the primary target of hate was China, which was blamed for a new bioweapon. Yet over the course of 2020 and particularly after the beginning of the second lockdown, the primary targets became journalists and healthcare workers, who were blamed for exaggerating the threat of COVID-19. This study advances our understanding of the association between hate speech and a complex and protracted event like the COVID-19 pandemic, and it suggests that country-specific responses to the virus (e.g., lockdowns and re-openings) are associated with online hate speech against different adversaries depending on the social and political context.
Journal Article
Online hate speech victimization: consequences for victims’ feelings of insecurity
by Dreißigacker, Arne; Müller, Philipp; Schemmel, Jonas
in Community and Environmental Psychology; Criminology and Criminal Justice; Cybercrime
2024
This paper addresses the question of whether, and to what extent, the experience of online hate speech affects victims' sense of security. Studies on hate crime in general show that such crimes are associated with significantly higher feelings of insecurity, but there is little evidence concerning feelings of insecurity caused by online hate speech. Based on a secondary analysis of a representative 2020 population survey on cybercrime in Lower Saxony, Germany (N = 4,102), we tested three hypotheses regarding the effect of offline and online hate speech on feelings of insecurity. Compared to non-victims, victims of online hate speech exhibit a more pronounced feeling of insecurity outside the Internet, while victims of other forms of cybercrime do not differ from non-victims in this regard. We found no effect for offline hate speech when relevant control variables were included in the statistical model. Possible reasons for this finding may lie in the characteristics of online hate speech, for example, that hateful content spreads uncontrollably on the Internet and reaches victims even in protected private spheres.
Journal Article
Exposure to online hate speech is positively associated with post-traumatic stress disorder symptom severity
by Levitin, Maor Daniel; Mikulincer, Mario; Skvirsky, Vera
2025
Post-traumatic stress disorder (PTSD) after traumatic events is prevalent and can lead to negative consequences. While social media use has been associated with PTSD, little is known about the specific association between online hate speech on social media networks and PTSD, and whether such an association is stronger among those with difficulties in emotion regulation, who may have a harder time coping with hate speech. In a general population sample of Jewish adults (aged 18–70) in Israel (N = 3,998), assessed about two months after the wide-scale terror attacks of October 7, 2023, regression analysis was used to explore the association between online hate speech and self-reported PTSD symptomatology. Difficulties in emotion regulation (DER) were explored as a moderator of the association. Greater frequency of hate speech was significantly associated with increased PTSD symptomatology, adjusting for problematic use of technology, terror and war exposure, and prior mental health issues. The association differed significantly by DER; as difficulties increased, the association grew stronger. Public health campaigns could educate about the potential harms of hate speech to help individuals make informed choices, and clinicians could discuss possible hate speech effects with patients more vulnerable to PTSD, for example, those with emotion dysregulation.
Journal Article
Identity and Status: When Counterspeech Increases Hate Speech Reporting and Why
by Kim, Jae Yeon; Sim, Jaeung; Cho, Daegon
in Activism; Attitudes; Computer mediated communication
2023
Much has been written about how social media platforms enable the rise of networked activism. However, few studies have examined how these platforms' low-information environments shape the interactions among social movement activists, their opponents, and the platforms themselves. Hate speech reporting is one understudied area where such interactions occur. This article fills this gap by examining to what extent, and how, the gender and popularity of counterspeech in comment sections influence social media users' willingness to report hate speech targeting the #MeToo movement. Based on a survey experiment (n = 1,250) conducted in South Korea, we find that YouTube users are more willing to report such sexist hate speech when the counterspeech is delivered by a female rather than a male user. However, when the female user's counterspeech received many upvotes, this was perceived to signal her enhanced status and decreased the intention to report hate speech, particularly among male users. No parallel patterns were found for other attitudes toward hate speech, counterspeech, YouTube, the #MeToo movement, gender discrimination, or hate speech legislation. These findings indicate that users report hate speech based not only on potentially harmful content but also on their complex social interactions with other users and the platform.
Journal Article
An Intersectional Feminist Critique of Cyberlibertarian’s Grip on the Construction of Online Freedom
2025
The impacts of online hate speech include emotional and embodied harms that have stunted careers, resulted in lost wages and missed job opportunities, and damaged both personal and professional relationships. Despite these real-world negative impacts, many resist regulations that would mitigate these harms. What is behind the anti-regulatory stance against preventative measures to reduce online hate speech? We argue that, much as the "there is no alternative" argument for neoliberalism demonizes regulation and pushes traditionalism, the gendered construction of cyberlibertarianism presents online freedom as the only game in town, at least when it comes to online hate. We adapt Horton's three-part framework of neoliberal masculinities to stress the role of gender in constructing various understandings of cyberlibertarianism. Through an intersectional feminist critique of cyberlibertarianism rooted in cyberfeminism, and a critical feminist cybersecurity construction of online freedom, this paper adds to feminist cybersecurity and cyberfeminism by demonstrating how such approaches can counter cyberlibertarianism.
Journal Article