3 results for "Hatred detection"
Hatred and trolling detection transliteration framework using hierarchical LSTM in code-mixed social media text
The paper describes the use of a self-learning Hierarchical LSTM (HLSTM) technique for classifying hatred and trolling content in code-mixed social media data. Hierarchical LSTM-based learning is a novel architecture inspired by neural learning models. The proposed HLSTM model is trained to identify hatred and trolling words in social media content, and is equipped with a self-learning and prediction mechanism for annotating hatred words in the transliteration domain. The Hindi–English data are labeled into Hindi, English, and hatred classes for classification. Word-embedding and character-embedding features are used to represent each word in a sentence for detecting hatred words. The HLSTM-based method helps recognize the context of a hatred word by mining the user's intention in using that word in the sentence. Extensive experiments suggest that the HLSTM-based classification model achieves an accuracy of 97.49% when evaluated against baseline models such as BLSTM, CRF, LR, SVM, Random Forest, and Decision Tree, especially when hatred and trolling words are present in the social media data.
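The word-plus-character embedding representation the abstract describes can be sketched in plain Python. The hashing-based vectors below are a hypothetical stand-in for trained embeddings (names such as `_hash_vector` and `token_features` are illustrative, not from the paper; a real HLSTM system would learn these vectors jointly with the classifier):

```python
import hashlib

EMBED_DIM = 8  # toy dimensionality; real systems typically use 100-300

def _hash_vector(key: str, dim: int = EMBED_DIM) -> list[float]:
    # Deterministic pseudo-embedding: hash the key into `dim` floats in [-1, 1].
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return [(b / 127.5) - 1.0 for b in digest[:dim]]

def word_embedding(token: str) -> list[float]:
    # One vector per whole word.
    return _hash_vector("w:" + token.lower())

def char_embedding(token: str) -> list[float]:
    # Average per-character vectors, so rare, misspelled, or transliterated
    # tokens still receive a usable representation.
    vecs = [_hash_vector("c:" + ch) for ch in token.lower()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def token_features(token: str) -> list[float]:
    # Concatenate word-level and character-level features, mirroring the
    # word-embedding plus character-embedding scheme the abstract mentions.
    return word_embedding(token) + char_embedding(token)

features = token_features("nafrat")  # a transliterated Hindi token
assert len(features) == 2 * EMBED_DIM
```

The character-level half is what makes the representation robust for transliterated (code-mixed) text, where the same word can be spelled many ways.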
Enhancing Impulsive Hatred Detection with Ensemble Techniques and Active Learning
The spread of hatred on social media has grown rapidly in recent years, and the urgent need for countermeasures has drawn serious attention from governments, organizations, and researchers. Although researchers have observed that hate is a problem across multiple social media platforms, there is a lack of models for online hate detection that use such multi-platform data. Various techniques have been developed for automating hate detection on the web. We begin by presenting the current tension between freedom of speech on the Internet and the abuse of social media platforms such as Twitter, identifying the gaps in existing work, and then set out how these issues are addressed. Hate detection is a considerably more challenging task than it appears: analysis of the language in common datasets shows that hate speech lacks unique, discriminative features, making it difficult to find. We extract several distinctive and significant features and combine them into different sets to compare and analyze the performance of various machine learning classification algorithms with respect to each feature set. Finally, after conducting an in-depth analysis, the results show that it is possible to significantly increase the classification score obtained.
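The active-learning side of this paper can be illustrated with a minimal pool-based uncertainty-sampling loop. The `predict_proba` scorer below is a toy keyword-ratio stand-in for a real trained classifier (all names and the scoring rule are illustrative assumptions, not the paper's method):

```python
def predict_proba(text: str, hate_terms: set[str]) -> float:
    # Toy probability that `text` is hateful: fraction of tokens that are
    # known hate terms. A real system would use a trained classifier here.
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in hate_terms for t in tokens) / len(tokens)

def most_uncertain(pool: list[str], hate_terms: set[str], k: int = 1) -> list[str]:
    # Uncertainty sampling: select the texts whose predicted probability is
    # closest to 0.5, i.e. where the model is least confident. These are the
    # examples a human annotator is asked to label next.
    return sorted(pool, key=lambda t: abs(predict_proba(t, hate_terms) - 0.5))[:k]

hate_terms = {"hate", "troll"}
pool = [
    "have a nice day",        # confidently benign: score 0.0
    "hate hate troll troll",  # confidently hateful: score 1.0
    "i hate this weather",    # ambiguous: score 0.25
]
query = most_uncertain(pool, hate_terms, k=1)  # the ambiguous text is chosen
```

Labeling only the queried examples, retraining, and repeating is what lets active learning reduce annotation cost on large social media pools.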
Identification of Abusive Behavior Towards Religious Beliefs and Practices on Social Media Platforms
The ubiquitous use of social media has enabled many people, including religious scholars and priests, to share their religious views. Unfortunately, by exploiting people's religious beliefs and practices, some extremist groups intentionally or unintentionally spread religious hatred among different communities and thus hamper social stability. This paper proposes an abusive behavior detection approach to identify hatred, violence, harassment, and extremist expressions directed against people of any religious belief on social media. First, religious posts are captured from social media users' activities, and the abusive behaviors are then identified through a number of sequential processing steps. In the experiment, Twitter was chosen as an example social media platform, and a dataset covering six major religions was collected from the English Twittersphere. To show the performance of the proposed approach, five classic classifiers on an n-gram TF-IDF model were used, along with Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) classifiers on trained embeddings and pre-trained GloVe word embeddings. The experimental results showed 85% accuracy in terms of precision. To the best of our knowledge, this is the first work able to distinguish between hateful and non-hateful content in other application domains on social media in addition to the religious context.
Keywords: Social media; religious abuse detection; religious
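The n-gram TF-IDF model that the classic classifiers in this paper consume can be sketched with the standard library alone. This is a minimal, generic TF-IDF implementation under common definitions (tf = count / document length, idf = log(N / df)), not the paper's exact pipeline; function names and the sample documents are illustrative:

```python
import math
from collections import Counter

def ngrams(text: str, n: int) -> list[str]:
    # Whitespace-tokenized word n-grams of order `n`.
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(docs: list[str], n: int = 1) -> list[dict[str, float]]:
    # Term frequency per document times inverse document frequency
    # over the corpus; one sparse feature vector (dict) per document.
    grams_per_doc = [Counter(ngrams(d, n)) for d in docs]
    n_docs = len(docs)
    df = Counter()
    for counts in grams_per_doc:
        df.update(counts.keys())
    vectors = []
    for counts in grams_per_doc:
        total = sum(counts.values())
        vectors.append({
            g: (c / total) * math.log(n_docs / df[g])
            for g, c in counts.items()
        })
    return vectors

docs = [
    "religion promotes peace",
    "spread hatred online",
    "promotes peace online",
]
vecs = tfidf(docs, n=1)
# Terms unique to one document (e.g. "religion") score higher than
# terms shared across documents (e.g. "promotes").
```

These sparse vectors are exactly what classifiers such as logistic regression or SVM take as input; the LSTM/GRU branch of the paper replaces them with dense word embeddings.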