Asset Details
Decoding Digital Discourse Through Multimodal Text and Image Machine Learning Models to Classify Sentiment and Detect Hate Speech in Race- and Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, and Asexual Community–Related Posts on Social Media: Quantitative Study
by Yue, Xiaohe; Mullaputi, Penchala Sai Priya; Nguyen, Quynh C; Seelman, Kyle; Dennard, Elizabeth; Criss, Shaniece; Nguyen, Thu T; Alibilli, Amrutha S; Mane, Heran; Merchant, Junaid S; Hswen, Yulin
2025
Subjects: Accuracy / Allocation / Analysis / Artificial intelligence / Asexuality / Bidirectionality / Bisexuality / Classification / Cultural factors / Data processing / Datasets / Decoding / Discourse analysis / Female / Geometry / Hate / Hate speech / Homosexuality / Humans / Intersexuality / Lesbianism / LGBTQ people / Machine Learning / Male / Mass media effects / Mass media images / Medical personnel / Multimodality / Online social networks / Original Paper / Policy making / Pretraining / Public health / Public opinion / Quantitative analysis / Race / Race discrimination / Research applications / Semantics / Sentiment analysis / Sexual and Gender Minorities / Sexual minorities / Sexual orientation / Sexuality / Social Media / Social networks / Task performance / Technology application / Transgender persons
Journal Article
Overview
A major challenge in sentiment analysis on social media is the increasing prevalence of image-based content, which integrates text and visuals to convey nuanced messages. Traditional text-based approaches have been widely used to assess public attitudes and beliefs; however, they often fail to fully capture the meaning of multimodal content where cultural, contextual, and visual elements play a significant role.
This study aims to provide practical guidance for collecting, processing, and analyzing social media data using multimodal machine learning models. Specifically, it focuses on training and fine-tuning models to classify sentiment and detect hate speech.
Social media data were collected from Facebook and Instagram using CrowdTangle, a public insights tool by Meta, and from X via its academic research application programming interface. The dataset was filtered to include only race-related and lesbian, gay, bisexual, transgender, queer, intersex, and asexual community-related posts with image attachments, ensuring a focus on multimodal content. Human annotators labeled 13,000 posts into 4 categories: negative sentiment, positive sentiment, hate, or antihate. We evaluated unimodal models (Bidirectional Encoder Representations from Transformers [BERT] for text and Visual Geometry Group 16 for images) and multimodal models (Contrastive Language-Image Pretraining [CLIP], Visual Bidirectional Encoder Representations from Transformers [VisualBERT], and an intermediate fusion model). To enhance model performance, the synthetic minority oversampling technique was applied to address class imbalances, and latent Dirichlet allocation was used to improve semantic representations.
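The synthetic minority oversampling technique (SMOTE) mentioned above creates new minority-class examples by interpolating between a minority sample and one of its nearest minority-class neighbours. A minimal NumPy sketch of the core idea follows; this is an illustration of the general technique, not the authors' implementation, and the toy feature vectors and parameter values are invented for the example:

```python
import numpy as np

def smote_sample(X_minority, n_new, k=5, rng=None):
    """Generate synthetic minority-class samples by interpolating
    between each sample and one of its k nearest neighbours (SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_minority)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from sample i to every minority sample
        d = np.linalg.norm(X_minority - X_minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_minority[i] + gap * (X_minority[j] - X_minority[i]))
    return np.array(synthetic)

# toy minority class: 6 points in a 2-D feature space
X_min = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                  [0.0, 0.2], [0.1, 0.0], [0.2, 0.2]])
X_new = smote_sample(X_min, n_new=4, k=3, rng=0)
print(X_new.shape)  # (4, 2)
```

In practice one would use a maintained implementation (e.g., the imbalanced-learn library) rather than hand-rolled code, and apply it to the learned text/image feature vectors rather than raw inputs.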
Our findings highlighted key differences in model performance. Among unimodal models, Bidirectional Encoder Representations from Transformers outperformed Visual Geometry Group 16, achieving higher accuracy and macro-F1-scores across all tasks. Among multimodal models, CLIP achieved the highest accuracy (0.86) in negative sentiment detection, followed by VisualBERT (0.84). For positive sentiment, VisualBERT outperformed other models with the highest accuracy (0.76). In hate speech detection, the intermediate fusion model demonstrated the highest accuracy (0.91) with a macro-F1-score of 0.64, ensuring balanced performance. Meanwhile, VisualBERT performed best in antihate classification, achieving an accuracy of 0.78. Applying latent Dirichlet allocation and the synthetic minority oversampling technique improved minority class detection, particularly for antihate content. Overall, the intermediate fusion model provided the most balanced performance across tasks, while CLIP excelled in accuracy-driven classifications. Although VisualBERT performed well in certain areas, it struggled to maintain a precision-recall balance. These results emphasized the effectiveness of multimodal approaches over unimodal models in analyzing social media sentiment.
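The distinction drawn here between accuracy and macro-F1 matters because hate and antihate posts are minority classes: a classifier can score high accuracy while missing them entirely. The toy labels below are invented for illustration (they are not the study's data), but they show why macro-F1 exposes this failure mode:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    so rare classes count as much as frequent ones."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# imbalanced toy labels: the classifier predicts the majority class everywhere
y_true = ["hate"] * 2 + ["not-hate"] * 8
y_pred = ["not-hate"] * 10
acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(acc)  # 0.8, despite missing every "hate" post
print(macro_f1(y_true, y_pred, ["hate", "not-hate"]))  # far lower: the
# "hate" class contributes an F1 of 0, dragging the macro average down
```

This is why the intermediate fusion model's combination of high accuracy (0.91) with a macro-F1 of 0.64 is described as balanced performance.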
This study contributes to the growing research on multimodal machine learning by demonstrating how advanced models, data augmentation techniques, and diverse datasets can enhance the analysis of social media content. The findings offer valuable insights for researchers, policy makers, and public health professionals seeking to leverage artificial intelligence for social media monitoring and addressing broader societal challenges.