234,911 result(s) for "False information"
Machine learning-based detection and mitigation of cyberattacks in adaptive cruise control systems
The growing reliance on Vehicle-to-Vehicle (V2V) communication has heightened the vulnerability of Adaptive Cruise Control (ACC) systems to cybersecurity threats, such as manipulation or forgery of V2V messages. This paper investigates the impact of three types of false information injection (FII) on vehicle collision risk and driving efficiency. To address these vulnerabilities, we develop a novel machine learning-based onboard model, ACC anomaly Detection and Mitigation (ACCDM), designed to strengthen ACC resilience against such cyberattacks. ACCDM continuously monitors vehicle parameters under benign conditions, detecting deviations that indicate potential threats and deploying real-time mitigations to maintain safety and efficiency. Simulations across continuous and clustered attack scenarios validate ACCDM’s accuracy in detecting cybersecurity threats, preserving safe following distances, and mitigating the negative impacts of cyberattacks on ACC systems.
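The benign-condition monitoring this abstract describes can be illustrated with a minimal statistical anomaly detector. This is a sketch under assumptions, not the paper's actual ACCDM model: the spacing parameter, the 3-sigma threshold, and the function names are all hypothetical.

```python
from statistics import mean, stdev

def fit_benign_profile(samples):
    """Learn the mean/std of a vehicle parameter (here, inter-vehicle spacing)
    from driving data collected under benign (attack-free) conditions."""
    return mean(samples), stdev(samples)

def is_anomalous(value, profile, k=3.0):
    """Flag a reading that deviates more than k standard deviations
    from the learned benign profile (k=3 is an assumed threshold)."""
    mu, sigma = profile
    return abs(value - mu) > k * sigma

# Benign inter-vehicle spacings (metres), then a forged V2V reading
benign = [30.1, 29.8, 30.4, 30.0, 29.9, 30.2, 30.3, 29.7]
profile = fit_benign_profile(benign)
print(is_anomalous(30.5, profile))  # normal fluctuation
print(is_anomalous(5.0, profile))   # injected false spacing
```

A real system would track several parameters jointly (speed, acceleration, spacing) and trigger a mitigation action, e.g. falling back to radar-only sensing, when an anomaly is flagged.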
A study on the influence of digital literacy on elderly user’s intention to identify social media false information
Purpose This research exclusively focuses on China’s elderly Internet users given how severe a threat disinformation has become for this particular population group as social media platforms thrive and the number of elderly netizens grows in China. The purpose of this study is to explore the mechanism of how elderly social media users’ intention to identify false information is influenced helps supplement the knowledge system of false information governance and provides a basis for correction practices. Design/methodology/approach This study focuses on the digital literacy of elderly social media users and builds a theoretical model of their intention to identify false information based on the theory of planned behaviour. It introduces two variables – namely, risk perception and self-efficacy – and clarifies the relationships between the variables. Questionnaires were distributed both online and offline, with a total of 468 collected. A structural equation model was built for empirical analysis. Findings The results show that digital literacy positively influences risk perception, self-efficacy, subjective norms and perceived behavioural control. Risk perception positively influences subjective norms, perceived behavioural control and the attitude towards the identification of false information. Self-efficacy positively influences perceived behavioural control but does not significantly impact the intention to identify. Subjective norms positively influence the attitude towards identification and the intention to identify. Perceived behavioural control positively influences the attitude towards identification but does not significantly impact the intention to identify. The attitude towards identification positively influences the intention to identify. 
Originality/value Based on relevant theories and the results of the empirical analysis, this study provides suggestions for false information governance from the perspectives of social media platform collaboration and elderly social media users.
Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection
With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has contributed significantly to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features into multimodal news representations. However, these methods have yet to fully explore the hierarchical and complex semantic correlations between the contents of different modalities, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage framework for multimodal false information detection, called ASMFD, which uses image aesthetic similarity as a segmentation threshold to explore the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer on an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the features essential for detecting multimodal false information. In extensive experiments on four datasets, ASMFD outperforms state-of-the-art baseline methods.
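The threshold-based split of the correlation matrix described above can be sketched as follows. This is a minimal illustration, not the ASMFD implementation: it uses plain cosine similarity between toy feature vectors, and the threshold tau stands in for the paper's learned aesthetic-similarity value.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def split_correlation(text_feats, image_feats, tau):
    """Build the text-image correlation matrix, then split it into a
    consistency part (entries >= tau) and an inconsistency part (< tau),
    zeroing out the entries that belong to the other part."""
    corr = [[cosine(t, i) for i in image_feats] for t in text_feats]
    consistent = [[c if c >= tau else 0.0 for c in row] for row in corr]
    inconsistent = [[c if c < tau else 0.0 for c in row] for row in corr]
    return consistent, inconsistent
```

In the paper's setting the two matrices would then feed separate branches of the fusion module, letting the detector weigh agreeing and disagreeing text-image evidence differently.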
Detection of Fake News in Romanian: LLM-Based Approaches to COVID-19 Misinformation
The spread of misinformation during the COVID-19 pandemic raised widespread concerns about public health communication and media reliability. In this study, we focus on these issues as they manifested in Romanian-language media and employ Large Language Models (LLMs) to classify misinformation, with a particular focus on super-narratives—broad thematic categories that capture recurring patterns and ideological framings commonly found in pandemic-related fake news, such as anti-vaccination discourse, conspiracy theories, or geopolitical blame. While some of the categories reflect global trends, others are shaped by the Romanian cultural and political context. We introduce a novel dataset of fake news centered on COVID-19 misinformation in the Romanian geopolitical context, comprising both annotated and unannotated articles. We experimented with multiple LLMs using zero-shot, few-shot, supervised, and semi-supervised learning strategies, achieving the best results with an LLaMA 3.1 8B model and semi-supervised learning, which yielded an F1-score of 78.81%. Experimental evaluations compared this approach to traditional Machine Learning classifiers augmented with morphosyntactic features. Results show that semi-supervised learning substantially improved classification results in both binary and multi-class settings. Our findings highlight the effectiveness of semi-supervised adaptation in low-resource, domain-specific contexts, as well as the necessity of enabling real-time misinformation tracking and enhancing transparency through claim-level explainability and fact-based counterarguments.
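The semi-supervised strategy that produced the best results above, training on labeled data and then adopting confident predictions on unlabeled data, can be sketched as a self-training loop. This is a toy illustration with a nearest-centroid classifier on one-dimensional features, not the paper's LLaMA-based pipeline; the confidence margin and threshold are assumptions.

```python
def centroid_fit(labeled):
    """labeled: list of (feature, label) pairs. Returns per-class mean feature."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(x, centroids):
    """Nearest-centroid prediction; confidence is the margin between
    the two closest classes (an assumed, simplistic confidence measure)."""
    ranked = sorted((abs(x - c), y) for y, c in centroids.items())
    (d1, y1), (d2, _) = ranked[0], ranked[1]
    return y1, d2 - d1

def self_train(labeled, unlabeled, conf_threshold, rounds=3):
    """Repeatedly fit, pseudo-label confident unlabeled points, and refit."""
    labeled = list(labeled)
    for _ in range(rounds):
        centroids = centroid_fit(labeled)
        remaining = []
        for x in unlabeled:
            y, conf = predict(x, centroids)
            if conf >= conf_threshold:
                labeled.append((x, y))  # adopt the high-confidence pseudo-label
            else:
                remaining.append(x)     # keep low-confidence points unlabeled
        unlabeled = remaining
    return centroid_fit(labeled)

seed = [(0.1, "real"), (0.9, "fake")]
final = self_train(seed, [0.15, 0.85, 0.5], conf_threshold=0.3)
```

The same loop shape applies when the classifier is an LLM: the model pseudo-labels unannotated articles, and only labels above a confidence threshold are added to the training set for the next round.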
Enhancing VANET Security: An Unsupervised Learning Approach for Mitigating False Information Attacks in VANETs
Vehicular ad hoc networks (VANETs) enable communication among vehicles and between vehicles and infrastructure to provide safety and comfort to users. Malicious nodes in VANETs may broadcast false information to create the impression of a fake event or road congestion. In addition, several malicious nodes may collude to launch a false information attack collectively, increasing the credibility of the attack. Detecting these attacks is critical to mitigating the potential risks they pose to user safety. Existing techniques for detecting false information attacks in VANETs use approaches including machine learning, blockchain, trust scores, and statistical methods. These techniques rely on historical information about vehicles, artificial data used to train the technique, or coordination among vehicles. To address these limitations, we propose a false information attack detection technique for VANETs using an unsupervised anomaly detection approach. The objective of the proposed technique is to detect false information attacks based only on real-time characteristics of the network, achieving high accuracy and low processing delay. Performance evaluation results show that our proposed technique offers 30% lower data processing delay and a 17% lower false positive rate compared with existing approaches in scenarios with high proportions of malicious nodes.
Graph Convolutional-Based Deep Residual Modeling for Rumor Detection on Social Media
The popularity and development of social media have made it ever more convenient to spread rumors, making rumor detection in massive amounts of information especially important. Most traditional rumor detection methods mine rumor characteristics from either the rumor content or the propagation structure, ignoring the fused characteristics of content and structure and their interaction. Therefore, a novel rumor detection method based on heterogeneous graph convolutional networks is proposed. First, this paper constructs a heterogeneous graph that combines the rumor content and propagation structure to explore their interaction during rumor propagation and obtain a rumor representation. On this basis, this paper uses a deep residual graph convolutional neural network to model the interaction between content and structure in the propagation network. Finally, the proposed method is verified on the Twitter15 and Twitter16 datasets. Experimental results show that it achieves higher detection accuracy than traditional rumor detection methods.
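The core building block mentioned above, a graph convolution with a residual connection, can be sketched in a few lines. This is a generic single-layer illustration (H' = ReLU(A·H·W) + H with a square weight matrix so shapes match), not the paper's full deep model; all names are assumptions.

```python
def matmul(X, Y):
    """Plain list-of-lists matrix multiply (for illustration only)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def graph_conv_residual(A, H, W):
    """One residual graph-convolution step: H' = ReLU(A @ H @ W) + H.
    A: normalized adjacency matrix, H: node feature matrix, W: weights.
    The residual term (+ H) lets deep stacks of such layers train stably."""
    Z = matmul(matmul(A, H), W)
    return [[max(z, 0.0) + h for z, h in zip(zr, hr)] for zr, hr in zip(Z, H)]
```

In a rumor-detection setting the nodes would be posts and users in the heterogeneous propagation graph, and several such layers would be stacked before a readout classifier.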
False Information Mitigation Using Pattern-Based Anomaly Detection for Secure Vehicular Networks
Vehicular networks utilize wireless communication among vehicles and between vehicles and infrastructures. While vehicular networks offer a wide range of benefits, the security of these networks is critical for ensuring public safety. The transmission of false information by malicious nodes (vehicles) for selfish gain is a security issue in vehicular networks. Mitigating false information is essential to reduce the potential risks posed to public safety. Existing methods for false information detection in vehicular networks utilize various approaches, including machine learning, blockchain, trust scores, and statistical techniques. These methods often rely on past information about vehicles, historical data for training machine learning models, or coordination between vehicles without considering the trustworthiness of the vehicles. To address these limitations, we propose a technique for False Information Mitigation using Pattern-based Anomaly Detection (FIM-PAD). The novelty of FIM-PAD lies in using an unsupervised learning approach to learn the usual patterns between the direction of travel and speed of vehicles, considering the variations in vehicles’ speeds in different directions. FIM-PAD uses only real-time network characteristics to detect the malicious vehicles that do not conform to the identified usual patterns. The objective of FIM-PAD is to accurately detect false information in vehicular networks with minimal processing delays. Our performance evaluations in networks with high proportions of malicious nodes confirm that FIM-PAD on average offers a 38% lower data processing delay and at least 19% lower false positive rate compared to three other existing techniques.
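The direction-versus-speed pattern learning described for FIM-PAD can be sketched as grouping recent beacon reports by travel direction and flagging reports that fall outside each direction's usual speed band. This is a minimal sketch under assumptions (a 3-sigma band, toy direction labels), not the published FIM-PAD algorithm.

```python
from statistics import mean, stdev

def learn_patterns(reports):
    """reports: (direction, speed) pairs from recent real-time beacons.
    Learn the usual speed profile (mean, std) per direction of travel."""
    by_dir = {}
    for d, s in reports:
        by_dir.setdefault(d, []).append(s)
    return {d: (mean(v), stdev(v)) for d, v in by_dir.items() if len(v) >= 2}

def conforms(direction, speed, patterns, k=3.0):
    """A report conforms if its speed is within k std-devs of the usual
    speed for its direction; unknown directions are not flagged."""
    if direction not in patterns:
        return True
    mu, sigma = patterns[direction]
    return abs(speed - mu) <= k * sigma

# Northbound traffic flows at ~100 km/h; a forged "congestion" report claims 5 km/h
reports = [("N", 98.0), ("N", 102.0), ("N", 100.0), ("N", 99.0),
           ("S", 60.0), ("S", 62.0)]
patterns = learn_patterns(reports)
```

Because the profile is rebuilt from the current window of beacons, the check needs no historical vehicle records or offline training data, which is the property the abstract emphasizes.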
Does Investor Attention Affect Corporate Greenwashing? Evidence from China
Enhancing ESG performance has emerged as a crucial strategy for companies to bolster market value and competitiveness. However, this trend has sparked concerns about corporate greenwashing, where companies may selectively disclose ESG-related information to garner short-term benefits. Against this backdrop, using Chinese A-share listed companies from 2010 to 2022, we examine the impact of investor attention on corporate greenwashing. The findings reveal that investor attention significantly curbs corporate greenwashing. Mechanism analysis indicates that investor attention achieves this by alleviating corporate financing constraints and enhancing transparency in corporate information. Furthermore, moderating analysis suggests that enhancing internal controls and increasing environmental subsidies can strengthen the inhibitory effect of investor attention on corporate greenwashing. Finally, heterogeneity analysis demonstrates that the inhibitory effect of investor attention on corporate greenwashing is more pronounced in state-owned enterprises and companies facing high financing constraints. These findings not only contribute to the literature on investor attention but also offer insights for governing corporate greenwashing and advancing the dual-carbon goal.
A Blockchain-Based Detection and Control System for Model-Generated False Information
In the digital age, the spread of false information has a far-reaching impact on many areas, such as society, politics, and the economy. With the popularization of text generation models, the cost of producing false information has dropped significantly, making it difficult for humans to screen. Research on detection, screening, and early-warning control of model-generated false information has therefore become particularly important. In this paper, we propose a blockchain-based detection and control system for model-generated false information. First, we design a detection method that combines model-generated text discrimination based on a self-attention network with text similarity detection based on a twin network. Second, we construct a blockchain-based control and traceability system for model-generated false information. It uses the proposed detection algorithm to provide early warning and control of model-generated false information involving important and sensitive events before release on social networks. For information judged to be model-generated and false, the data stored on the blockchain is used to track and trace the publisher. Finally, experimental tests show that the proposed detection method improves the accuracy of false information detection, and that the operational efficiency of the prototype system meets quality-of-service requirements.
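The traceability idea above, storing publisher records so flagged content can be traced back, rests on hash-linked blocks. The following is a minimal hash-chain sketch, not the paper's blockchain system; the record fields and function names are hypothetical.

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Append a publisher record to a hash-linked chain. Each block's hash
    covers both its record and the previous hash, so past entries cannot
    be altered without breaking every later link."""
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify_chain(chain):
    """Recompute every hash and check every link; False means tampering."""
    for prev, blk in zip(chain, chain[1:]):
        if blk["prev"] != prev["hash"]:
            return False
        body = {"prev": blk["prev"], "record": blk["record"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != blk["hash"]:
            return False
    return True

genesis = make_block("0" * 64, {"publisher": "u1", "text_id": "t1"})
second = make_block(genesis["hash"], {"publisher": "u2", "text_id": "t2"})
```

When a piece of text is later judged model-generated and false, walking the chain back from its `text_id` record identifies the publisher, which is the tracing step the abstract describes.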
False Information Detection via Multimodal Feature Fusion and Multi-Classifier Hybrid Prediction
In existing false information detection methods, the quality of the extracted single-modality features is low, information from different modalities cannot be fully fused, and original information is lost when modalities are fused. This paper proposes false information detection via multimodal feature fusion and multi-classifier hybrid prediction. In this method, Bidirectional Encoder Representations from Transformers (BERT) is used to extract text features and Swin Transformer is used to extract picture features; then a trained deep autoencoder serves as an early fusion mechanism for the multimodal features, fusing text and visual features and taking the low-dimensional features as the joint multimodal features. The original features of each modality are concatenated onto the joint features to reduce the loss of original information. Finally, the text features, image features, and joint features are processed by three classifiers to obtain three probability distributions, which are added proportionally to obtain the final prediction. Compared with attention-based multimodal factorized bilinear pooling, the model achieves 4.3% and 1.2% improvements in accuracy on the Weibo and Twitter datasets, respectively. The experimental results show that the proposed model effectively integrates multimodal information and improves the accuracy of false information detection.
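The final "added proportionally" step above is a weighted late fusion of the three classifiers' probability distributions. A minimal sketch follows; the specific weights are assumptions, since the abstract does not state the proportions used.

```python
def fuse(p_text, p_image, p_joint, weights=(0.3, 0.3, 0.4)):
    """Weighted sum of three class-probability distributions.
    Weights sum to 1, so the fused vector is again a distribution."""
    wt, wi, wj = weights
    return [wt * a + wi * b + wj * c for a, b, c in zip(p_text, p_image, p_joint)]

# Class order assumed to be [fake, real]; the prediction is the argmax
fused = fuse([0.8, 0.2], [0.6, 0.4], [0.9, 0.1])
```

Here all three classifiers lean toward "fake", so the fused distribution does too; the weighting lets a stronger branch (e.g. the joint features) dominate when branches disagree.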