Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
66 result(s) for "temporal convolutional network (TCN)"
A Temporal Convolutional Neural Network Fusion Attention Mechanism Runoff Prediction Model Based on Dynamic Decomposition Reconstruction Integration Processing
by Ren, Pingan; Zhu, Sipeng; Qin, Zhou
in Accuracy; Artificial intelligence; Comparative analysis
2024
Accurate and reliable runoff forecasting is of great significance for hydropower station operation and watershed water resource allocation. However, various complex factors, such as climate conditions and human activities, constantly affect the formation of runoff. Runoff data under changing environments exhibit highly nonlinear, time-varying, and stochastic characteristics, which undoubtedly pose great challenges to runoff prediction. Against this background, this study merges reconstruction integration technology and dynamic decomposition technology to propose a Temporal Convolutional Network Fusion Attention Mechanism Runoff Prediction method based on dynamic decomposition reconstruction integration processing. This method uses the Temporal Convolutional Network to extract the cross-temporal nonlinear characteristics of longer runoff data, and introduces attention mechanisms to capture the importance distribution and duration relationship of historical temporal features in runoff prediction. It integrates a decomposition reconstruction process based on dynamic classification and filtering, fully utilizing decomposition techniques, reconstruction techniques, complexity analysis, dynamic decomposition techniques, and neural networks optimized by automatic hyperparameter optimization algorithms, effectively improving the model’s interpretability and prediction accuracy. This study used historical monthly runoff datasets from the Pingshan Hydrological Station and Yichang Hydrological Station for validation, and selected eight models including the LSTM model, CEEMDAN-TCN-Attention model, and CEEMDAN-VMD-LSTM-Attention (DDRI) for comparative prediction experiments. The MAE, RMSE, MAPE, and NSE indicators of the proposed model showed the best performances, with test set values of 1007.93, 985.87, 16.47, and 0.922 for the Pingshan Hydrological Station and 1086.81, 1211.18, 17.20, and 0.919 for the Yichang Hydrological Station, respectively. The experimental results indicate that the fusion model generated through training has strong learning ability for runoff temporal features, and that the proposed model has clear advantages in overall predictive performance, stability, correlation, comprehensive accuracy, and statistical testing.
Journal Article
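To ground the architecture sketched in the abstract above, the following is a minimal PyTorch example of a dilated causal convolution block combined with additive attention over the historical time steps, the core TCN-plus-attention idea. The layer sizes, the dynamic decomposition-reconstruction preprocessing, and the hyperparameter optimization described in the paper are not reproduced; all class and variable names here are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """One dilated causal 1-D convolution block (TCN-style)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding keeps causality
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))                      # pad only on the left
        return self.relu(self.conv(x))

class TCNAttentionRegressor(nn.Module):
    """TCN features + additive attention over time, then a scalar runoff estimate."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.tcn = nn.Sequential(
            CausalConvBlock(n_features, hidden, dilation=1),
            CausalConvBlock(hidden, hidden, dilation=2),
        )
        self.score = nn.Linear(hidden, 1)                # attention score per time step
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                # x: (batch, time, features)
        h = self.tcn(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)          # attention weights over time
        context = (w * h).sum(dim=1)                     # weighted temporal summary
        return self.head(context).squeeze(-1)

# toy usage: 12 past months, 4 input variables
model = TCNAttentionRegressor(n_features=4)
print(model(torch.randn(8, 12, 4)).shape)                # torch.Size([8])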
Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN
by
Cutsuridis, Vassilis
,
Gong, Liyun
,
Yu, Miao
in
Communication
,
crop yield prediction
,
deep learning
2021
Currently, greenhouses are widely used for plant growth, and environmental parameters can be controlled in the modern greenhouse to guarantee the maximum crop yield. In order to optimally control greenhouses’ environmental parameters, one indispensable requirement is to accurately predict crop yields based on given environmental parameter settings. In addition, crop yield forecasting in greenhouses plays an important role in greenhouse farming planning and management, which allows cultivators and farmers to use the yield prediction results to make informed management and financial decisions. It is thus important to predict greenhouse crop yield accurately. In this work, we have developed a new greenhouse crop yield prediction technique by combining two state-of-the-art networks for temporal sequence processing: the temporal convolutional network (TCN) and the recurrent neural network (RNN). Comprehensive evaluations of the proposed algorithm have been made on multiple datasets obtained from multiple real greenhouse sites for tomato growing. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, it is shown that the proposed approach achieves more accurate yield prediction performance than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study also shows that the historical yield information is the most important factor for accurately predicting future crop yields.
Journal Article
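A simple way to realize the TCN-plus-RNN combination described above is to run both encoders over the same environmental time series and concatenate their summary features before a regression head. The sketch below assumes that fusion strategy; the authors' exact depths, widths, and fusion scheme are not specified here, and the names are illustrative.

import torch
import torch.nn as nn

class TCNRNNYield(nn.Module):
    """Two parallel sequence encoders (dilated conv + LSTM) fused for yield regression."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        # TCN branch: two dilated convolutions with length-preserving padding
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        # RNN branch
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)               # fused regression head

    def forward(self, x):                                   # x: (batch, time, features)
        tcn_feat = self.tcn(x.transpose(1, 2)).mean(dim=2)  # average over time
        _, (h_n, _) = self.rnn(x)
        rnn_feat = h_n[-1]                                   # last hidden state
        return self.head(torch.cat([tcn_feat, rnn_feat], dim=1)).squeeze(-1)

# toy usage: 7 days of 5 environmental variables per sample
model = TCNRNNYield(n_features=5)
print(model(torch.randn(4, 7, 5)).shape)                     # torch.Size([4])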
Pathological-Gait Recognition Using Spatiotemporal Graph Convolutional Networks and Attention Model
by Seo, Haneol; Lee, Chan-Su; Naseem, Muhammad Tahir
in Classification; Gait; gait classification
2022
Walking is an exercise that uses the muscles and joints of the human body and is essential for understanding body condition. Analyzing body movements through gait has been studied and applied in human identification, sports science, and medicine. This study investigated a spatiotemporal graph convolutional network (ST-GCN) model with attention techniques, applied to pathological-gait classification from collected skeletal information. The focus of this study was twofold. The first objective was extracting spatiotemporal features from skeletal information represented by joint connections and applying these features to graph convolutional neural networks. The second objective was developing an attention mechanism for spatiotemporal graph convolutional neural networks to focus on important joints in the current gait. This model establishes a pathological-gait-classification system for diagnosing sarcopenia. Experiments on three datasets, namely NTU RGB+D, pathological gait of GIST, and multimodal-gait symmetry (MMGS), validate that the proposed model outperforms existing models in gait classification.
Journal Article
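The ST-GCN building block referenced above alternates a spatial graph convolution across skeleton joints with a temporal convolution across frames. A minimal PyTorch sketch of one such layer follows, assuming a fixed normalized adjacency matrix and inputs shaped (batch, frames, joints, channels); the attention mechanism and the full classification network are omitted, and the names are illustrative.

import torch
import torch.nn as nn

class STGCNLayer(nn.Module):
    """Spatial graph convolution over joints followed by temporal convolution over frames."""
    def __init__(self, in_ch, out_ch, adjacency, t_kernel=9):
        super().__init__()
        self.register_buffer("A", adjacency)               # (V, V) normalized adjacency
        self.spatial = nn.Linear(in_ch, out_ch)             # per-joint channel mixing
        self.temporal = nn.Conv2d(out_ch, out_ch,
                                  kernel_size=(t_kernel, 1),
                                  padding=((t_kernel - 1) // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                                    # x: (batch, T, V, C)
        # aggregate neighboring joints, then mix channels
        x = torch.einsum("vw,btwc->btvc", self.A, x)
        x = self.relu(self.spatial(x))                       # (batch, T, V, out_ch)
        # temporal convolution expects (batch, channels, T, V)
        x = self.temporal(x.permute(0, 3, 1, 2))
        return self.relu(x).permute(0, 2, 3, 1)              # back to (batch, T, V, out_ch)

# toy usage: 25-joint skeleton, 30 frames, 3 coordinates per joint
V = 25
A = torch.eye(V)                                              # stand-in adjacency matrix
layer = STGCNLayer(in_ch=3, out_ch=16, adjacency=A)
print(layer(torch.randn(2, 30, V, 3)).shape)                  # torch.Size([2, 30, 25, 16])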
Shipborne Multi-Function Radar Working Mode Recognition Based on DP-ATCN
by Zhang, Zhizhong; Niu, Feng; Tian, Tian
in Antennas; Artificial intelligence; Comparative analysis
2023
There has been increased interest in recognizing the dynamic and flexible changes in shipborne multi-function radar (MFR) working modes. The working modes determine the distribution of pulse descriptor words (PDWs). However, building the mapping relationship from PDWs to working modes in reconnaissance systems presents many challenges, such as the duration of the working modes not being fixed, incomplete temporal features in short PDW slices, and delayed feedback of the reconnaissance information in long PDW slices. This paper proposes an MFR working mode recognition method based on the ShakeDrop regularization dual-path attention temporal convolutional network (DP-ATCN) with prolonged temporal feature preservation. The method uses a temporal feature extraction network with the Convolutional Block Attention Module (CBAM) and ShakeDrop regularization to acquire a high-dimensional space mapping of temporal features of the PDWs in a short time slice. Additionally, with prolonged PDW accumulation, an enhanced TCN is introduced to capture long-term temporal dependencies. In this way, secondary correction of MFR working mode recognition results is achieved with both promptness and accuracy. Experimental results and analysis confirm that, despite the presence of missing and spurious pulses, the proposed method performs effectively and consistently in shipborne MFR working mode recognition tasks.
Journal Article
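One reusable ingredient of the DP-ATCN above is channel attention over temporal feature maps (the CBAM-style component). The sketch below shows such a gate for 1-D features in PyTorch, assuming average- and max-pooling squeezes over time and a small shared MLP; the dual-path structure, ShakeDrop regularization, and the paper's actual dimensions are not shown.

import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    """CBAM-style channel attention for (batch, channels, time) feature maps."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                               # x: (batch, C, T)
        avg = self.mlp(x.mean(dim=2))                   # squeeze time by averaging
        mx = self.mlp(x.amax(dim=2))                    # squeeze time by max
        gate = torch.sigmoid(avg + mx).unsqueeze(-1)    # (batch, C, 1)
        return x * gate                                 # reweight the channels

# toy usage on features from any 1-D convolutional backbone
feats = torch.randn(8, 32, 120)                          # 32 channels, 120 pulse slots
print(ChannelAttention1d(32)(feats).shape)               # torch.Size([8, 32, 120])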
A TCN-BiLSTM and ANR-IEKF Hybrid Framework for Sustained Vehicle Positioning During GNSS Outages
2025
The performance of integrated Global Navigation Satellite System and Inertial Navigation System (GNSS/INS) navigation often declines in complex urban environments due to frequent GNSS signal blockages. This poses a significant challenge for autonomous driving applications that require continuous and reliable positioning. To address this limitation, this paper presents a novel hybrid framework that combines a deep learning architecture with an adaptive Kalman Filter. At the core of this framework is a Temporal Convolutional Network and Bidirectional Long Short-Term Memory (TCN-BiLSTM) model, which generates accurate pseudo-GNSS measurements from raw INS data during GNSS outages. These measurements are then fused with the INS data stream using an Adaptive Noise-Regulated Iterated Extended Kalman Filter (ANR-IEKF), which enhances robustness by dynamically estimating and adjusting the process and observation noise statistics in real time. The proposed ANR-IEKF + TCN-BiLSTM framework was validated using a real-world vehicle dataset that encompasses both straight-line and turning scenarios. The results demonstrate its superior performance in positioning accuracy and robustness compared to several baseline models, thereby confirming its effectiveness as a reliable solution for maintaining high-precision navigation in GNSS-denied environments. Validated in 70 s GNSS outage environments, our approach enhances positioning accuracy by over 50% against strong deep learning baselines with errors reduced to roughly 3.4 m.
Journal Article
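The TCN-BiLSTM component above maps a window of raw INS readings to a pseudo-GNSS measurement that the filter can consume during an outage. A rough PyTorch sketch of such a regressor follows, assuming the network outputs a 3-D position measurement per window; the ANR-IEKF fusion stage and the paper's actual input features and layer sizes are not reproduced, and the names are illustrative.

import torch
import torch.nn as nn

class TCNBiLSTM(nn.Module):
    """Dilated convolutions followed by a bidirectional LSTM; regresses a 3-D pseudo measurement."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 3)              # e.g. a position estimate (x, y, z)

    def forward(self, imu_window):                         # (batch, time, n_features)
        h = self.tcn(imu_window.transpose(1, 2)).transpose(1, 2)
        out, _ = self.bilstm(h)                            # (batch, time, 2*hidden)
        return self.head(out[:, -1])                       # use the last time step

# toy usage: 1 s of 100 Hz IMU data (accelerometer + gyroscope = 6 channels)
model = TCNBiLSTM(n_features=6)
print(model(torch.randn(16, 100, 6)).shape)                # torch.Size([16, 3])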
Fraudulent account detection in social media using hybrid deep transformer model and hyperparameter optimization
by Shukla, Prashant Kumar; Shukla, Piyush Kumar; Veerasamy, Bala Dhandayuthapani
in 639/166; 639/4077; 639/705
2025
The rapid growth of social media has triggered a surge in fake accounts, which pose a serious risk to user privacy and platform integrity. These malicious accounts are hard to detect because user activity data is highly imbalanced, high-dimensional, and sequential. Current techniques tend to miss complicated activity patterns or overfit, which is why a robust, scalable, and precise model for social media fraud detection is required. This study proposes a new deep learning architecture that combines a Temporal Convolutional Network (TCN) with Generative Adversarial Network (GAN)-based data augmentation to generate minority-class samples, and Autoencoder-based feature extraction to reduce dimensionality. The Seagull Optimization Algorithm (SOA), a metaheuristic, is used to optimize hyperparameters by balancing efficiency and convergence speed in global search. The framework is tested on benchmark datasets (Cresci-2017 and TwiBot-22) and compared to state-of-the-art models. Experiments show that the proposed TCN-GAN-SOA framework performs better, with ROC-AUC scores of 0.96 on Cresci-2017 and 0.95 on TwiBot-22, together with higher precision-recall values and better F1-scores. In addition, runtime analysis confirms its computational efficiency, and case studies demonstrate the framework’s robustness across varied fraudulent behaviors. The proposed solution offers a scalable, reliable, and accurate methodology for detecting social media fraud by combining sophisticated sequence modeling, realistic data augmentation, and hyperparameter optimization.
Journal Article
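As a concrete anchor for the sequence-modeling part of the pipeline above, the sketch below shows a small temporal convolutional classifier over per-account activity sequences, trained with a class-weighted loss as a crude counterweight to the imbalance the abstract mentions. The GAN-based augmentation, autoencoder feature extraction, and Seagull Optimization Algorithm are deliberately omitted; all sizes and names are illustrative.

import torch
import torch.nn as nn

class AccountTCNClassifier(nn.Module):
    """Classifies an account's activity sequence as genuine (0) or fraudulent (1)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                        # pool over the time axis
        )
        self.classifier = nn.Linear(hidden, 2)

    def forward(self, x):                                    # x: (batch, time, features)
        feats = self.backbone(x.transpose(1, 2)).squeeze(-1)
        return self.classifier(feats)                         # logits for the two classes

# class-weighted loss as a simple counterweight to the class imbalance
model = AccountTCNClassifier(n_features=10)
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))
logits = model(torch.randn(32, 50, 10))                       # 50 time steps of activity
loss = loss_fn(logits, torch.randint(0, 2, (32,)))
print(logits.shape, loss.item() >= 0)                         # torch.Size([32, 2]) True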
Deep Learning-Based Eye-Writing Recognition with Improved Preprocessing and Data Augmentation Techniques
by Suzuki, Kota; Shin, Jungpil; Miah, Abu Saleh Musa
in Accuracy; Algorithms; Amyotrophic lateral sclerosis
2025
Eye-tracking technology enables communication for individuals with muscle control difficulties, making it a valuable assistive tool. Traditional systems rely on electrooculography (EOG) or infrared devices, which are accurate but costly and invasive. While vision-based systems offer a more accessible alternative, they have not been extensively explored for eye-writing recognition. Additionally, the natural instability of eye movements and variations in writing styles result in inconsistent signal lengths, which reduces recognition accuracy and limits the practical use of eye-writing systems. To address these challenges, we propose a novel vision-based eye-writing recognition approach that utilizes a webcam-captured dataset. A key contribution of our approach is the introduction of a Discrete Fourier Transform (DFT)-based length normalization method that standardizes the length of each eye-writing sample while preserving essential spectral characteristics. This ensures uniformity in input lengths and improves both efficiency and robustness. Moreover, we integrate a hybrid deep learning model that combines 1D Convolutional Neural Networks (CNN) and Temporal Convolutional Networks (TCN) to jointly capture spatial and temporal features of eye-writing. To further improve model robustness, we incorporate data augmentation and initial-point normalization techniques. The proposed system was evaluated using our new webcam-captured Arabic numbers dataset and two existing benchmark datasets, with leave-one-subject-out (LOSO) cross-validation. The model achieved accuracies of 97.68% on the new dataset, 94.48% on the Japanese Katakana dataset, and 98.70% on the EOG-captured Arabic numbers dataset—outperforming existing systems. This work provides an efficient eye-writing recognition system, featuring robust preprocessing techniques, a hybrid deep learning model, and a new webcam-captured dataset.
Journal Article
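The DFT-based length normalization described above can be implemented by resampling each eye-movement trace in the Fourier domain: take the forward transform, truncate or zero-pad the spectrum, and invert to a fixed number of samples. A small NumPy sketch of that idea follows; the paper's exact normalization details are not specified here, and scipy.signal.resample performs essentially the same operation.

import numpy as np

def fft_length_normalize(signal, target_len):
    """Resample a 1-D eye-movement trace to target_len samples via the Fourier domain."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    spectrum = np.fft.rfft(signal)                        # one-sided spectrum
    target_bins = target_len // 2 + 1
    out = np.zeros(target_bins, dtype=complex)
    keep = min(len(spectrum), target_bins)
    out[:keep] = spectrum[:keep]                          # truncate or zero-pad high frequencies
    # scale so amplitudes survive the change in sample count
    return np.fft.irfft(out, n=target_len) * (target_len / n)

# toy usage: two traces of different lengths mapped to a common length of 128
short = np.sin(np.linspace(0, 4 * np.pi, 90))
long = np.sin(np.linspace(0, 4 * np.pi, 210))
print(fft_length_normalize(short, 128).shape, fft_length_normalize(long, 128).shape)
# (128,) (128,)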
Prediction of Icing on Wind Turbines Based on SCADA Data via Temporal Convolutional Network
2024
Icing on the blades of wind turbines during winter seasons causes a reduction in power and revenue losses. The prediction of icing before it occurs has the potential to enable mitigating actions to reduce ice accumulation. This paper presents a framework for the prediction of icing on wind turbines based on Supervisory Control and Data Acquisition (SCADA) data without requiring the installation of any additional icing sensors on the turbines. A Temporal Convolutional Network is considered as the model to predict icing from the SCADA data time series. All aspects of the icing prediction framework are described, including the necessary data preprocessing, the labeling of SCADA data for icing conditions, the selection of informative icing features or variables in SCADA data, and the design of a Temporal Convolutional Network as the prediction model. Two performance metrics to evaluate the prediction outcome are presented. Using SCADA data from an actual wind turbine, the model achieves an average prediction accuracy of 77.6% for future times of up to 48 h.
Journal Article
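The framework above turns SCADA time series into supervised samples: a window of past sensor values is labeled by whether icing occurs within the following prediction horizon (up to 48 h). A small NumPy sketch of that windowing step follows, with window and horizon lengths chosen for an assumed 10-minute SCADA resolution; the function name and sizes are illustrative.

import numpy as np

def make_icing_samples(features, icing_flags, window=144, horizon=288):
    """Build (X, y) pairs from 10-min SCADA records.

    features    : (T, n_vars) array of SCADA variables
    icing_flags : (T,) 0/1 array marking icing conditions
    window      : length of the input history (144 steps = 24 h at 10-min resolution)
    horizon     : look-ahead for the label (288 steps = 48 h)
    """
    X, y = [], []
    for t in range(window, len(features) - horizon):
        X.append(features[t - window:t])
        y.append(int(icing_flags[t:t + horizon].any()))   # icing anywhere in the horizon
    return np.stack(X), np.array(y)

# toy usage with random data standing in for real SCADA channels
T, n_vars = 2000, 6
X, y = make_icing_samples(np.random.randn(T, n_vars), np.random.rand(T) < 0.01)
print(X.shape, y.shape)   # (1568, 144, 6) (1568,)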
Prediction of protein secondary structure by the improved TCN-BiLSTM-MHA model with knowledge distillation
2024
Secondary structure prediction is a key step in understanding protein function and biological properties and is highly important in the fields of new drug development, disease treatment, bioengineering, etc. Accurately predicting the secondary structure of proteins helps to reveal how proteins are folded and how they function in cells. The application of deep learning models in protein structure prediction is particularly important because of their ability to process complex sequence information and extract meaningful patterns and features, thus significantly improving the accuracy and efficiency of prediction. In this study, a combined model integrating an improved temporal convolutional network (TCN), bidirectional long short-term memory (BiLSTM), and a multi-head attention (MHA) mechanism is proposed to enhance the accuracy of protein secondary structure prediction in both eight-state and three-state forms. One-hot encoding features and word vector representations of physicochemical properties are incorporated. A significant emphasis is placed on knowledge distillation techniques utilizing the ProtT5 pretrained model, leading to performance improvements. The improved TCN, achieved through multiscale fusion and bidirectional operations, allows for better extraction of amino acid sequence features than traditional TCN models. The model demonstrated excellent prediction performance on multiple datasets. For the TS115, CB513, and PDB (2018–2020) datasets, the eight-state prediction accuracy reached 88.2%, 84.9%, and 95.3%, respectively, and the three-state prediction accuracy reached 91.3%, 90.3%, and 96.8%, respectively. This study not only improves the accuracy of protein secondary structure prediction but also provides a valuable tool for understanding protein structure and function, particularly in resource-constrained contexts.
Journal Article
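The BiLSTM-plus-multi-head-attention stage described above can be sketched with standard PyTorch modules: a bidirectional LSTM over residue features, self-attention across positions, and a per-residue classifier into eight states. The improved TCN front end, the knowledge distillation from ProtT5, and the paper's input encodings are omitted, and the sizes and names are illustrative.

import torch
import torch.nn as nn

class BiLSTMMHAHead(nn.Module):
    """Per-residue 8-state secondary-structure classifier (BiLSTM + multi-head self-attention)."""
    def __init__(self, in_dim, hidden=64, n_heads=4, n_states=8):
        super().__init__()
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.mha = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=n_heads,
                                         batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_states)

    def forward(self, x):                        # x: (batch, length, in_dim) residue features
        h, _ = self.bilstm(x)                    # (batch, length, 2*hidden)
        attn, _ = self.mha(h, h, h)              # self-attention across residues
        return self.classifier(attn)             # (batch, length, n_states) logits

# toy usage: a batch of 2 sequences of 100 residues with 21-dim one-hot-style features
model = BiLSTMMHAHead(in_dim=21)
print(model(torch.randn(2, 100, 21)).shape)      # torch.Size([2, 100, 8])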
Data-Driven 4D Trajectory Prediction Model Using Attention-TCN-GRU
2024
With reference to the trajectory-based operation (TBO) requirements proposed by the International Civil Aviation Organization (ICAO), this paper concentrates on the study of four-dimensional trajectory (4D Trajectory) prediction technology in busy terminal airspace, proposing a data-driven 4D trajectory prediction model. Initially, we propose a Spatial Gap Fill (Spat Fill) method to reconstruct each aircraft’s trajectory, resulting in a consistent time interval, noise-free, high-quality trajectory dataset. Subsequently, we design a hybrid neural network based on the seq2seq model, named Attention-TCN-GRU. This consists of an encoding section for extracting features from historical trajectory data, an attention module for obtaining the multilevel periodicity in the flight history trajectories, and a decoding section for recursively generating the predicted trajectory sequences, using the output of the encoding section as the initial input. The proposed model can effectively capture long-term and short-term dependencies and repetitiveness between trajectories, enhancing the accuracy of 4D trajectory predictions. We utilize a real ADS-B trajectory dataset from the airspace of a busy terminal for validation. The experimental results indicate that the data-driven 4D trajectory prediction model introduced in this study achieves higher predictive accuracy, outperforming some of the current data-driven trajectory prediction methods.
Journal Article
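The seq2seq structure described above, a convolutional encoder over the history, attention over the encoded sequence, and a GRU decoder that recursively generates future points, can be roughly sketched as follows in PyTorch. The Spat Fill preprocessing is omitted, and the dimensionalities and the dot-product attention form are assumptions rather than the paper's exact design.

import torch
import torch.nn as nn

class AttentionTCNGRU(nn.Module):
    """Convolutional encoder + dot-product attention + GRU decoder for 4-D trajectory points."""
    def __init__(self, point_dim=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(point_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.decoder = nn.GRUCell(point_dim + hidden, hidden)
        self.out = nn.Linear(hidden, point_dim)

    def forward(self, history, n_future):                  # history: (batch, T, point_dim)
        enc = self.encoder(history.transpose(1, 2)).transpose(1, 2)   # (batch, T, hidden)
        state = enc[:, -1]                                  # initialise decoder state
        point = history[:, -1]                              # last observed point starts decoding
        preds = []
        for _ in range(n_future):
            # dot-product attention over the encoded history
            scores = torch.softmax((enc * state.unsqueeze(1)).sum(-1), dim=1)
            context = (scores.unsqueeze(-1) * enc).sum(1)   # (batch, hidden)
            state = self.decoder(torch.cat([point, context], dim=1), state)
            point = self.out(state)                         # next predicted 4-D point
            preds.append(point)
        return torch.stack(preds, dim=1)                    # (batch, n_future, point_dim)

# toy usage: predict 10 future points from 40 observed (lat, lon, alt, time) points
model = AttentionTCNGRU()
print(model(torch.randn(8, 40, 4), n_future=10).shape)      # torch.Size([8, 10, 4])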