1,891 result(s) for "graph attention networks"
Enhancing Signed Graph Attention Network by Graph Characteristics: An Analysis
A graph neural network (GNN) is one of the most successful methods for handling tasks on graph-structured data, e.g., node embedding, link prediction, and node classification. GNNs aggregate messages across the nodes of a graph so that each node's new representation retains graph-structural information, and then carry out tasks on the graph. One modification of the propagation step in GNNs that adopts an attention mechanism is the graph attention network (GAT). Applying this modification to signed graphs generated by sociological theories yields the signed graph attention network (SiGAT). In this research, we utilize SiGAT and create novel graphs using graph characteristics to assess the performance of SiGAT node-embedding models across graphs with various characteristics. The primary focus of our study is link prediction, which aligns with the task employed in previous research on SiGAT. We propose a method that uses graph characteristics to reduce the time spent on the learning process in SiGAT.
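The attention-based propagation step this abstract refers to can be illustrated with a minimal single-head sketch. All names and shapes below are illustrative, not the SiGAT implementation: each node scores its neighbors with a learned vector, normalizes the scores with a softmax over the neighborhood, and aggregates the transformed neighbor features with those weights.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention layer (illustrative sketch).
    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, F') shared weight matrix; a: (2*F',) attention vector."""
    Z = H @ W                                   # transformed features, (N, F')
    N = Z.shape[0]
    logits = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # e_ij = LeakyReLU(a^T [z_i || z_j])
            e = a @ np.concatenate([Z[i], Z[j]])
            logits[i, j] = e if e > 0 else slope * e
    # restrict attention to the neighborhood, then row-wise softmax
    logits = np.where(A > 0, logits, -np.inf)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    att = exp / exp.sum(axis=1, keepdims=True)  # attention coefficients
    return att @ Z                              # weighted neighbor aggregation
```

Because the softmax is taken only over each node's neighborhood, nodes with different degrees still produce properly normalized attention weights.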
ASKAT: Aspect Sentiment Knowledge Graph Attention Network for Recommendation
In modern online life, recommender systems help us filter out unimportant information. Researchers of recommendation algorithms usually utilize historical interaction data to mine potential user preferences. However, most existing methods use rating data to mine user interest preferences, ignoring rich textual information such as reviews. Although some researchers have attempted to combine ratings and reviews for recommendation, we believe the following shortcomings still exist. First, existing methods are overly dependent on the accuracy of external sentiment analysis tools. Second, existing methods do not fully utilize the features extracted from reviews. Further, existing methods focus only on the aspects that users like while ignoring the aspects that users dislike, so they cannot completely model users' true preferences. To address these issues, in this paper we propose a recommendation model based on an aspect sentiment knowledge graph attention network (ASKAT). We first use an improved aspect-based sentiment analysis algorithm to extract aspect-level sentiment features from reviews. Then, to avoid underutilizing the information extracted from the reviews, we build an aspect sentiment-enhanced collaborative knowledge graph. After that, we propose a new graph attention network that uses sentiment-aware attention mechanisms to aggregate neighbour information. Finally, our experimental results on three datasets, Movie, Amazon book, and Yelp, show that our model consistently outperforms the baseline models in two recommendation scenarios, click-through-rate prediction and Top-k recommendation. Compared with other models, the method shows significant improvement in both recommendation accuracy and personalised recommendation effectiveness.
MFCN-DDI: Capsule network based on multimodal feature for multitype drug-drug interaction prediction
Precise prediction of drug-drug interactions (DDIs) is essential for pharmaceutical research and clinical applications to minimize adverse reactions, optimize therapies, and reduce costs. However, existing methods still face challenges in effectively integrating multidimensional drug features and fully utilizing edge features in molecular graphs, which are crucial for predicting DDIs precisely. Moreover, current methods may not adequately capture the complex relationships between different types of features, limiting predictive performance. This paper proposes the MFCN-DDI model for DDI type prediction. The model consists of a multimodal feature extraction module, a capsule network-based feature fusion module, and a DDI predictor module. In the multimodal feature extraction module, four kinds of features are used to provide rich and comprehensive representations for subsequent DDI type prediction, where molecular graph features are generated by considering molecular graphs with edge features. The capsule network-based feature fusion module captures complex feature relationships to generate high-quality integrated representations. In the DDI predictor module, multiclass and multilabel classification predictions are performed accurately. Experimental results show that MFCN-DDI outperforms existing comparison models in prediction tasks. Case studies further prove its practical applicability. In summary, MFCN-DDI provides an efficient and reliable solution for DDI prediction.
A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions
Deep learning has seen significant growth recently and is now applied to a wide range of conventional use cases, including graphs. Graph data provides relational information between elements and is a standard data format for various machine learning and deep learning tasks. Models that can learn from such inputs are essential for working with graph data effectively. This paper identifies nodes and edges within specific applications, such as text, entities, and relations, to create graph structures. Different applications may require different graph neural network (GNN) models. GNNs facilitate the exchange of information between nodes in a graph, enabling the model to capture dependencies among nodes and edges. The paper delves into specific GNN models such as graph convolutional networks (GCNs), GraphSAGE, and graph attention networks (GATs), which are widely used in various applications today. It also discusses the message-passing mechanism employed by GNN models and examines the strengths and limitations of these models in different domains. Furthermore, the paper explores the diverse applications of GNNs, the datasets commonly used with them, and the Python libraries that support GNN models. It offers an extensive overview of the landscape of GNN research and its practical implementations.
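The message-passing mechanism this survey discusses reduces, in its simplest form, to a line or two of linear algebra. The sketch below is a simplified mean-aggregation step (GraphSAGE-style), not any particular library's API: each node averages its neighbors' features, then applies a shared linear map and a nonlinearity.

```python
import numpy as np

def message_passing_step(H, A, W):
    """One round of neighborhood message passing (simplified sketch).
    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, F') shared weight matrix. Returns updated (N, F') features."""
    deg = A.sum(axis=1, keepdims=True)       # neighbor count per node
    msgs = (A @ H) / np.maximum(deg, 1.0)    # mean over each neighborhood
    return np.maximum(msgs @ W, 0.0)         # shared linear map + ReLU
```

Stacking k such steps lets information propagate k hops across the graph, which is what allows a GNN to capture dependencies beyond immediate neighbors.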
Attention-based graph neural networks: a survey
Graph neural networks (GNNs) aim to learn well-trained representations in a lower-dimensional space for downstream tasks while preserving topological structures. In recent years, the attention mechanism, which has excelled in natural language processing and computer vision, has been introduced into GNNs to adaptively select discriminative features and automatically filter out noisy information. To the best of our knowledge, owing to the fast-paced advances in this domain, a systematic overview of attention-based GNNs is still missing. To fill this gap, this paper provides a comprehensive survey of recent advances in attention-based GNNs. Firstly, we propose a novel two-level taxonomy for attention-based GNNs from the developmental-history and architectural perspectives. Specifically, the upper level reveals the three developmental stages of attention-based GNNs: graph recurrent attention networks, graph attention networks, and graph transformers. The lower level focuses on the typical architectures of each stage. Secondly, we review these attention-based methods in detail following the proposed taxonomy and summarize the advantages and disadvantages of the various models. A model characteristics table is also provided for a more comprehensive comparison. Thirdly, we share our thoughts on open issues and future directions of attention-based GNNs. We hope this survey will provide researchers with an up-to-date reference on applications of attention-based GNNs. In addition, to cope with the rapid development of this field, we intend to share the relevant latest papers as an open resource at https://github.com/sunxiaobei/awesome-attention-based-gnns.
Graph Attention Networks: A Comprehensive Review of Methods and Applications
Real-world problems often exhibit complex relationships and dependencies, which can be effectively captured by graph learning systems. Graph attention networks (GATs) have emerged as a powerful and versatile framework in this direction, inspiring numerous extensions and applications in several areas. In this review, we present a thorough examination of GATs, covering both diverse approaches and a wide range of applications. We examine the principal GAT-based categories, including global attention networks, multi-layer architectures, graph-embedding techniques, spatial approaches, and variational models. Furthermore, we delve into the diverse applications of GATs in systems such as recommendation, image analysis, the medical domain, sentiment analysis, and anomaly detection. This review is intended as a navigational reference for researchers and practitioners, highlighting the capabilities and prospects of GATs.
GAT-LI: a graph attention network based learning and interpreting method for functional brain network classification
Background: Autism spectrum disorder (ASD) comprises a spectrum of symptoms rather than a single phenotype. ASD can affect brain connectivity to different degrees depending on symptom severity. Given their excellent learning capability, graph neural network (GNN) methods have recently been used to uncover functional connectivity patterns and biological mechanisms in neuropsychiatric disorders such as ASD. However, challenges remain in developing an accurate GNN learning model and in understanding how specific decisions of these graph models are made in brain network analysis. Results: In this paper, we propose a graph attention network based learning and interpreting method, GAT-LI, which learns to classify functional brain networks of ASD individuals versus healthy controls (HC) and interprets the learned graph model with feature importance. Specifically, GAT-LI includes a graph learning stage and an interpreting stage. First, in the graph learning stage, a new graph attention network model, GAT2, uses graph attention layers to learn node representations and a novel attention pooling layer to obtain the graph representation for functional brain network classification. We experimentally compared the GAT2 model's performance on the ABIDE I database of 1035 subjects against the classification performance of other well-known models, and GAT2 achieved the best results. We also compared the influence of different brain network construction methods in the GAT2 model and used a larger synthetic graph dataset of 4000 samples to validate the utility and power of GAT2. Second, in the interpreting stage, we used GNNExplainer to interpret the learned GAT2 model with feature importance. We experimentally compared GNNExplainer with two well-known interpretation methods, Saliency Map and DeepLIFT, and GNNExplainer achieved the best interpretation performance. We further used the interpretation method to identify the features that contributed most to classifying ASD versus HC. Conclusion: We propose a two-stage learning and interpreting method, GAT-LI, to classify functional brain networks and interpret feature importance in the graph model. The method should also be useful for classification and interpretation tasks on graph data from other biomedical scenarios.
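The attention pooling readout described for GAT2 follows a common pattern: score each node, normalize the scores with a softmax, and take the attention-weighted sum as the graph-level embedding. The sketch below assumes a single learnable scoring vector q, which is an illustrative simplification rather than the GAT-LI implementation.

```python
import numpy as np

def attention_pool(H, q):
    """Attention-based graph readout (illustrative sketch).
    H: (N, F) node embeddings; q: hypothetical learned (F,) scoring
    vector. Returns one (F,) embedding for the whole graph."""
    scores = H @ q                        # one scalar score per node
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    return w @ H                          # attention-weighted sum of nodes
```

Unlike plain mean pooling, this readout lets the model weight diagnostically relevant nodes (here, brain regions) more heavily, and the learned weights w are themselves interpretable as node importance.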
Graph Neural Network for Traffic Forecasting: The Research Progress
Traffic forecasting has been regarded as the basis for many intelligent transportation system (ITS) applications, including but not limited to trip planning, road traffic control, and vehicle routing. Various forecasting methods have been proposed in the literature, including statistical models, shallow machine learning models, and deep learning models. Recently, graph neural networks (GNNs) have emerged as state-of-the-art traffic forecasting solutions because they are well suited for traffic systems with graph structures. This survey aims to introduce the research progress on graph neural networks for traffic forecasting and the research trends observed from the most recent studies. Furthermore, this survey summarizes the latest open-source datasets and code resources for sharing with the research community. Finally, research challenges and opportunities are proposed to inspire follow-up research.
Multi-Relational Dual-Attention Graph Transformer for Fine-Grained Sentiment Analysis (Scientific Reports)
Aspect-based sentiment analysis requires precise identification of sentiment polarity toward specific aspects, demanding robust modeling of syntactic, semantic, and discourse-level dependencies. Current graph-based approaches inadequately address the complex interplay between multiple relation types and lack effective attention regularization mechanisms for interpretability. We propose the Multi-Relational Dual-Attention Graph Transformer (MRDAGT), a novel framework unifying syntactic, semantic, and discourse relations within a coherent graph architecture. Our dual-attention mechanism strategically balances local token-level interactions with aspect-oriented contextual focus, while attention regularization, combining entropy-based penalties and L1 sparsity constraints, ensures interpretable, focused predictions. MRDAGT establishes new state-of-the-art results across multiple benchmark datasets, delivering substantial performance improvements while maintaining the transparent, linguistically grounded decision-making essential for real-world deployment.
DCAMCP: A deep learning model based on capsule network and attention mechanism for molecular carcinogenicity prediction
The carcinogenicity of drugs can have a serious impact on human health, so carcinogenicity testing of new compounds is essential before they are put on the market. Currently, many methods have been used to predict the carcinogenicity of compounds. However, most have limited predictive power, and there is still much room for improvement. In this study, we construct a deep learning model based on a capsule network and an attention mechanism, named DCAMCP, to discriminate between carcinogenic and non-carcinogenic compounds. We train DCAMCP on a dataset containing 1564 different compounds, using their molecular fingerprints and molecular graph features. The trained model is validated by fivefold cross-validation and external validation. DCAMCP achieves an average accuracy (ACC) of 0.718 ± 0.009, sensitivity (SE) of 0.721 ± 0.006, specificity (SP) of 0.715 ± 0.014, and area under the receiver-operating characteristic curve (AUC) of 0.793 ± 0.012. Meanwhile, comparable results are achieved on an external validation dataset containing 100 compounds, with an ACC of 0.750, SE of 0.778, SP of 0.727, and AUC of 0.811, which demonstrates the reliability of DCAMCP. The results indicate that our model has made progress in cancer risk assessment and could be used as an efficient tool in drug design.