1,799 result(s) for "graph convolutional networks"
SORAG: Synthetic Data Over-Sampling Strategy on Multi-Label Graphs
In many real-world networks of interest in the field of remote sensing (e.g., public transport networks), nodes are associated with multiple labels, and node classes are imbalanced; that is, some classes have significantly fewer samples than others. However, the problem of imbalanced multi-label graph node classification remains unexplored. This non-trivial task challenges existing graph neural networks (GNNs) because the majority class can dominate their loss functions, resulting in overfitting to the majority class's features and label correlations. On non-graph data, minority over-sampling methods (such as the synthetic minority over-sampling technique and its variants) have been demonstrated to be effective for imbalanced data classification. This study proposes and validates a new hypothesis: unlabeled-data over-sampling, although meaningless for imbalanced non-graph data, can facilitate the representation learning of imbalanced graphs through feature propagation and the topological interplay between graph nodes. Furthermore, we determine empirically that ensemble data synthesis, creating virtual minority samples in the central region of a minority class and virtual unlabeled samples in the boundary region between the minority and majority classes, is the best practice for imbalanced multi-label graph node classification. Our proposed data over-sampling framework is evaluated on multiple real-world network datasets and outperforms diverse, strong benchmark models by a large margin.
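The interpolation idea behind minority over-sampling can be sketched in a few lines. This is a generic SMOTE-style sketch, not the authors' SORAG implementation; the function name and the toy features are illustrative assumptions:

```python
import numpy as np

def synthesize_minority(X, minority_idx, n_new, rng=None):
    """SMOTE-style over-sampling: each synthetic sample is a random
    interpolation between the features of two minority-class nodes."""
    rng = rng or np.random.default_rng(0)
    new_samples = []
    for _ in range(n_new):
        i, j = rng.choice(minority_idx, size=2, replace=True)
        lam = rng.random()  # interpolation coefficient in [0, 1)
        new_samples.append(X[i] + lam * (X[j] - X[i]))
    return np.stack(new_samples)

# Toy node features: nodes 0 and 1 form the minority class
X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
synthetic = synthesize_minority(X, minority_idx=[0, 1], n_new=3)
print(synthetic.shape)  # (3, 2)
```

On a graph, such virtual samples would additionally be wired into the adjacency structure so that feature propagation can act on them, which is the part the abstract identifies as the key difference from the non-graph setting.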
Online social network user performance prediction by graph neural networks
Online social networks (OSNs) provide rich information that characterizes a user's personality, interests, and hobbies, and reflects their current state. Users of social networks publish photos, posts, videos, audio, etc. every day, which opens up a wide range of research opportunities for scientists. Much research conducted in recent years using graph neural networks (GNNs) has shown their advantages over conventional deep learning. In particular, graph neural networks seem especially well suited to online social network analysis. In this article, we study the use of graph convolutional neural networks with different convolution layers (GCNConv, SAGEConv, GraphConv, GATConv, TransformerConv, GINConv) for predicting a user's professional success in the VKontakte online social network, based on data obtained from their profile. We used various parameters obtained from users' personal pages in the VKontakte social network (the number of friends, subscribers, interesting pages, etc.) as features for determining professional success, as well as networks (graphs) reflecting connections between users (followers/friends). We performed graph classification using graph convolutional neural networks with different types of convolution layers. The best accuracy (0.88) was achieved using the graph isomorphism network (GIN) layer. The results obtained in this work will serve further studies of social success based on metrics of OSN users' personal profiles and social graphs using neural network methods.
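The convolution layers named in the abstract (GCNConv, SAGEConv, etc.) are PyTorch Geometric layer names; all of them elaborate on the same basic propagation step. A minimal numpy sketch of that step, with node embeddings mean-pooled into a graph-level vector for graph classification (all sizes here are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer in the Kipf-Welling form:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_new, 0.0)           # ReLU

# Toy social graph: 3 users in a path 0-1-2, 2 features, 4 hidden units
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(3, 2))
W = np.random.default_rng(2).normal(size=(2, 4))
node_emb = gcn_layer(A, H, W)
graph_emb = node_emb.mean(axis=0)  # readout for graph classification
print(node_emb.shape, graph_emb.shape)  # (3, 4) (4,)
```

The listed layer variants mainly differ in how neighbor messages are aggregated and weighted; the GIN layer that scored best replaces the normalized sum with a learnable MLP over summed neighbor features.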
Pathological-Gait Recognition Using Spatiotemporal Graph Convolutional Networks and Attention Model
Walking is an exercise that uses the muscles and joints of the human body and is essential for understanding body condition. Analyzing body movements through gait has been studied and applied in human identification, sports science, and medicine. This study investigated a spatiotemporal graph convolutional network (ST-GCN) model with attention techniques, applied to pathological-gait classification from collected skeletal information. The focus of this study was twofold. The first objective was extracting spatiotemporal features from skeletal information represented by joint connections and applying these features to graph convolutional neural networks. The second objective was developing an attention mechanism for spatiotemporal graph convolutional neural networks that focuses on the joints important to the current gait. This model establishes a pathological-gait-classification system for diagnosing sarcopenia. Experiments on three datasets, namely NTU RGB+D, the pathological gait of GIST, and multimodal-gait symmetry (MMGS), validate that the proposed model outperforms existing models in gait classification.
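The ST-GCN building block these gait models share interleaves a graph convolution over joints with a convolution over frames. A simplified numpy sketch, using a plain 3-frame moving average as the temporal convolution (the skeleton, shapes, and kernel are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def st_gcn_block(A, X, W):
    """One spatiotemporal step: spatial graph convolution over joints,
    then a temporal convolution (moving average here, for simplicity)
    over frames. X has shape (frames, joints, channels)."""
    A_hat = A + np.eye(A.shape[0])
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalize
    # spatial graph convolution applied independently to every frame
    spatial = np.einsum('ij,tjc,cd->tid', A_norm, X, W)
    # temporal convolution: 3-frame average per joint, edge-padded
    pad = np.pad(spatial, ((1, 1), (0, 0), (0, 0)), mode='edge')
    temporal = (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0
    return np.maximum(temporal, 0.0)  # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1], [1, 0]], dtype=float)  # 2-joint toy skeleton
X = rng.normal(size=(10, 2, 3))              # 10 frames, 2 joints, 3 channels
W = rng.normal(size=(3, 4))
out = st_gcn_block(A, X, W)
print(out.shape)  # (10, 2, 4)
```

An attention mechanism of the kind the abstract describes would rescale the joint axis of this output so that diagnostically important joints contribute more to the final classification.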
Multi-Label Classification in Anime Illustrations Based on Hierarchical Attribute Relationships
In this paper, we propose a hierarchical multi-modal multi-label attribute classification model for anime illustrations using a graph convolutional network (GCN). Our focus is on the challenging task of multi-label attribute classification, which requires capturing subtle features intentionally highlighted by creators of anime illustrations. To address the hierarchical nature of these attributes, we leverage hierarchical clustering and hierarchical label assignments to organize the attribute information into a hierarchical feature. The proposed GCN-based model effectively utilizes this hierarchical feature to achieve high accuracy in multi-label attribute classification. The contributions of the proposed method are as follows. Firstly, we introduce GCN to the multi-label attribute classification task of anime illustrations, enabling the capturing of more comprehensive relationships between attributes from their co-occurrence. Secondly, we capture subordinate relationships among the attributes by adopting hierarchical clustering and hierarchical label assignment. Lastly, we construct a hierarchical structure of attributes that appear more frequently in anime illustrations based on certain rules derived from previous studies, which helps to reflect the relationships between different attributes. The experimental results on multiple datasets show that the proposed method is effective and extensible by comparing it with some existing methods, including the state-of-the-art method.
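Building the attribute graph "from co-occurrence," as this abstract describes, is commonly done by thresholding conditional label probabilities estimated from the training annotations. A small sketch of that construction (the threshold value and toy annotations are assumptions; the paper's exact rule may differ):

```python
import numpy as np

def cooccurrence_adjacency(Y, tau=0.3):
    """Build a label-graph adjacency from multi-label annotations Y
    (n_samples x n_labels, binary). Edge i->j is kept when the
    conditional probability P(label j | label i) reaches tau."""
    counts = Y.T @ Y                          # label co-occurrence counts
    label_freq = np.diag(counts).astype(float)
    P = counts / np.maximum(label_freq[:, None], 1.0)  # row-normalize
    A = (P >= tau).astype(float)
    np.fill_diagonal(A, 1.0)                  # keep self-loops
    return A

# Toy annotations: 4 illustrations, 3 attribute labels
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]])
A = cooccurrence_adjacency(Y)
print(A)
```

A GCN run over this label graph then propagates information between attributes that tend to appear together, which is what lets the classifier exploit attribute relationships rather than predicting each label independently.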
Method for Training and White Boxing DL, BDT, Random Forest and Mind Maps Based on GNN
A method for training and white-boxing deep learning (DL), binary decision trees (BDT), random forests (RF), as well as mind maps (MM), based on graph neural networks (GNNs), is proposed. By representing DL, BDT, RF, and MM as graphs, these can be trained by a GNN, and the learning architectures can be optimized through the proposed method. Because each learning architecture can be expressed as a graph, the method also allows the architectures to be represented as matrices. These matrices and graphs are visible, which makes the learning processes visible and therefore more accountable. Some examples are shown here to highlight the usefulness of the proposed method, in particular for learning processes and for ensuring the accountability of DL, together with improvements in network architecture.
A Spatial Adaptive Algorithm Framework for Building Pattern Recognition Using Graph Convolutional Networks
Graph learning methods, especially graph convolutional networks, have been investigated for their potential applicability in many fields of study based on topological data, and their topological data processing capabilities have proven to be powerful. However, the relationships among separate entities include not only topological adjacency but also visual correlation, as in the spatial vector data of buildings. In this study, we propose a spatial adaptive algorithm framework with a data-driven design to accomplish building group division and building group pattern recognition, one that is not sensitive to differences in the spatial distribution of buildings across geographical regions. In addition, the algorithm framework has a multi-stage design and processes the building group data from whole to parts, since the objective is closely related to multi-object detection on topological data. By using the graph convolution method and a deep neural network (DNN), the multitask model in this study can learn human judgments through supervised training, and the whole process depends only upon the descriptive vector data of buildings, without any ancillary data for building group partition. Experiments confirmed that the proposed method for expressing buildings and the algorithm framework are effective. In summary, using deep learning methods to complete the tasks of building group division and building group pattern recognition is potentially effective, and the algorithm framework is worth further research.
An Attention-Guided Spatiotemporal Graph Convolutional Network for Sleep Stage Classification
Sleep staging has been widely used as an approach in sleep diagnoses at sleep clinics. Graph neural network (GNN)-based methods have been extensively applied for automatic sleep stage classification with significant results. However, the existing GNN-based methods rely on a static adjacency matrix to capture the features of the different electroencephalogram (EEG) channels, which cannot grasp the information of each electrode. Meanwhile, these methods ignore the importance of spatiotemporal relations in classifying sleep stages. In this work, we propose a combination of dynamic and static spatiotemporal graph convolutional networks (ST-GCN) with inter-temporal attention blocks to overcome these two shortcomings. The proposed method consists of a GCN and a CNN that take into account the intra-frame dependency of each electrode in the brain region, extracting spatial and temporal features separately. In addition, an attention block captures the long-range dependencies between the different electrodes in the brain region, which helps the model classify the dynamics of each sleep stage more accurately. In our experiments, we used the Sleep-EDF dataset and subgroup III of the ISRUC-SLEEP dataset to compare with the most recent methods. The results show that our method improves accuracy by 4.6% to 5.3%, Kappa by 0.06 to 0.07, and macro-F score by 4.9% to 5.7%. The proposed method has the potential to be an effective tool for improving the diagnosis of sleep disorders.
A Traffic Flow Prediction Model Based on Dynamic Graph Convolution and Adaptive Spatial Feature Extraction
The inherent symmetry in traffic flow patterns plays a fundamental role in urban transportation systems. This study proposes a Dynamic Graph Convolutional Recurrent Adaptive Network (DGCRAN) for traffic flow prediction, leveraging symmetry principles in spatial–temporal dependencies. Unlike conventional models that rely on static graph structures, which often break real-world symmetry relationships, our approach introduces two key innovations that respect the dynamic symmetry of traffic networks: first, a Dynamic Graph Convolutional Recurrent Network (DGCRN) that preserves and adapts to the time-varying symmetry in node associations; second, an Adaptive Graph Convolutional Network (AGCN) that captures the symmetric and asymmetric patterns between nodes. Experimental results on the PEMS03, PEMS04, and PEMS08 datasets demonstrate that DGCRAN maintains superior performance across metrics, reducing MAE, RMSE, and MAPE by average margins of 12.7%, 10.3%, and 14.2%, respectively, compared to 15 benchmarks. Notably, the model achieves a maximum MAE reduction of 21.33% on PEMS08, verifying its ability to model the symmetric and asymmetric characteristics of traffic flow dependencies while significantly improving prediction accuracy and generalization capability.
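The "adaptive" half of such models typically learns the graph rather than fixing it: an adjacency matrix is inferred from trainable node-embedding tables. A minimal numpy sketch of one common construction, A = softmax(ReLU(E1 E2^T)); this illustrates the general technique, not DGCRAN's specific graph generator:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_adjacency(E1, E2):
    """Infer an adjacency from two node-embedding tables so that each
    row is a learned, data-driven neighbor distribution. Using two
    distinct tables allows the learned graph to be asymmetric."""
    return softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)

rng = np.random.default_rng(0)
n_nodes, dim = 5, 3
E1 = rng.normal(size=(n_nodes, dim))  # source embeddings (trainable)
E2 = rng.normal(size=(n_nodes, dim))  # target embeddings (trainable)
A = adaptive_adjacency(E1, E2)
print(A.shape)  # (5, 5); each row sums to 1
```

Because E1 and E2 are updated by gradient descent along with the rest of the network, the graph itself adapts to the traffic data, which is what lets such models capture dependencies a fixed road-network adjacency would miss.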
Dynamic Fall Detection Using Graph-Based Spatial Temporal Convolution and Attention Network
The prevention of falls has become crucial in modern healthcare and in society for improving ageing and supporting the daily activities of older people. Falling is mainly related to age and to health problems such as muscle weakness, cardiovascular disease, and locomotive syndrome. Among elderly people, the number of falls is increasing every year, and falls can become life-threatening if detected too late. Ageing people often take prescription medication after a fall, and in the Japanese community the prevention of overdose-related suicide attempts is urgent. Many researchers have worked to develop fall detection systems that observe and report falls in real time using handcrafted features and machine learning approaches. Existing methods may face difficulties in achieving satisfactory performance, such as limited robustness and generality, high computational complexity, and sensitivity to lighting, data orientation, and camera view. We propose a graph-based spatial-temporal convolutional and attention network (GSTCAN) to overcome these challenges and advance medical technology systems. Spatial-temporal convolution has recently proven efficient and effective in various fields such as human activity recognition and text recognition. In our procedure, we first calculate the motion across consecutive frames, then construct a graph and apply a graph-based spatial and temporal convolutional neural network to extract spatial and temporal contextual relationships among the joints. An attention module then selects channel-wise effective features. This block is repeated six times to form the GSTCAN, and the resulting spatial-temporal features are passed to the classifier. Finally, we apply a softmax function as the classifier and achieve high accuracies of 99.93%, 99.74%, and 99.12% on the ImViA, UR-Fall, and FDD datasets, respectively. The high accuracy on all three datasets demonstrates the proposed system's superiority, efficiency, and generality.
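Channel-wise feature selection of the kind this abstract mentions is usually done with squeeze-and-excitation-style attention: pool each channel globally, score it with a small MLP, and rescale. A generic numpy sketch under that assumption (shapes and weights are illustrative, not the GSTCAN configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(X, W1, W2):
    """Squeeze-and-excitation-style channel attention: global-average-
    pool each channel, score it with a two-layer MLP, and rescale the
    channels by the resulting weights in (0, 1)."""
    s = X.mean(axis=(1, 2))                       # squeeze: (channels,)
    w = sigmoid(W2 @ np.maximum(W1 @ s, 0.0))     # excitation weights
    return X * w[:, None, None], w

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 25, 30))   # (channels, joints, frames)
W1 = rng.normal(size=(4, 8))       # bottleneck: 8 -> 4
W2 = rng.normal(size=(8, 4))       # expand back: 4 -> 8
X_att, w = channel_attention(X, W1, W2)
print(X_att.shape, w.shape)  # (8, 25, 30) (8,)
```

Stacking one such attention module after each spatial-temporal convolution, six times over, matches the repeated-block structure the abstract describes.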
3D skeleton-based human motion prediction using spatial–temporal graph convolutional network
3D human motion prediction, i.e., predicting future human poses on the basis of historically observed motion sequences, is a core task in computer vision. Thus far, it has been successfully applied to both autonomous driving and human–robot interaction. Previous research has usually employed Recurrent Neural Network (RNN)-based models to predict future human poses. However, as previous works have amply demonstrated, RNN-based prediction models suffer from unrealistic and discontinuous predictions due to the accumulation of prediction errors. To address this, we propose a feed-forward, 3D skeleton-based model for human motion prediction. This model, the Spatial–Temporal Graph Convolutional Network (ST-GCN), automatically learns the spatial and temporal patterns of human motion from input sequences, overcoming the limitations of previous approaches. Specifically, our ST-GCN model is based on an encoder-decoder architecture. The encoder consists of 5 ST-GCN modules, each comprising a spatial GCN layer and a 2D convolution-based TCN layer, which together encode the spatio-temporal dynamics of human motion. The decoder, consisting of 5 TCN layers, then exploits the encoded spatio-temporal representation to predict future human poses. We performed extensive experiments with the ST-GCN model on several large-scale 3D human activity pose datasets (Human3.6M, AMASS, 3DPW), adopting MPJPE (Mean Per Joint Position Error) as the evaluation metric. The experimental results demonstrate that our ST-GCN model outperforms the baseline models in both short-term (< 400 ms) and long-term (> 400 ms) prediction, yielding the best prediction results.