7,301 result(s) for "LSTM"
Real-Time Cuffless Continuous Blood Pressure Estimation Using Deep Learning Model
Blood pressure monitoring is one avenue for tracking a person's health. Early detection of abnormal blood pressure helps patients receive early treatment and reduces mortality associated with cardiovascular diseases, so a mechanism for real-time monitoring of blood pressure changes is very valuable. In this paper, we propose deep learning regression models that use an electrocardiogram (ECG) and photoplethysmogram (PPG) for real-time estimation of systolic blood pressure (SBP) and diastolic blood pressure (DBP) values. We use a bidirectional long short-term memory (LSTM) layer as the first layer and add a residual connection inside each of the following LSTM layers. We also run experiments comparing traditional machine learning methods, an existing deep learning model, and the proposed models, using Physionet's Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) dataset as the source of the ECG and PPG signals as well as the arterial blood pressure (ABP) signal. The results show that the proposed model outperforms the existing methods and achieves accurate estimation, which is promising for effective application in clinical practice.
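A detail implicit in setups like the one above: reference SBP/DBP labels are commonly derived from the ABP waveform as the per-beat maximum and minimum pressure. A minimal Python sketch under that assumption (the toy waveform and the pre-detected beat boundaries are illustrative, not from the paper):

```python
# Derive per-beat SBP/DBP labels from an arterial blood pressure (ABP) trace.
# Beat boundaries are assumed to be known here; in practice they would come
# from a beat-detection step on the waveform.

def sbp_dbp_per_beat(abp, beat_bounds):
    """Return (SBP, DBP) = (max, min) pressure within each beat segment."""
    labels = []
    for start, end in beat_bounds:
        beat = abp[start:end]
        labels.append((max(beat), min(beat)))
    return labels

# Toy ABP trace with two "beats" (values in mmHg, purely illustrative).
abp = [80, 95, 120, 110, 90, 78, 82, 100, 118, 105, 88, 79]
beats = [(0, 6), (6, 12)]
print(sbp_dbp_per_beat(abp, beats))  # [(120, 78), (118, 79)]
```

Each (window of ECG/PPG samples, per-beat SBP/DBP pair) then forms one regression training example.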
Extensive Error Derivative Review of LSTM Models with Sign Language Interpretation
LSTM models are essential for sign language translation systems, where the model suffers error loss when processing data. LSTMs limit error propagation by preserving gradient flow across time steps, unlike standard recurrent backpropagation, in which errors decay or accumulate exponentially. This paper investigates error flow in bidirectional, hierarchical, and probabilistic long short-term memory (LSTM) models. While hierarchical LSTMs employ multitask learning to anticipate inputs and outputs, reliably minimizing compounding mistakes, bidirectional LSTMs reduce truncation errors. Optimizing the gradients and parameters increases model accuracy. This research offers a thorough evaluation of LSTM models from 2021 to 2024, examining their effectiveness in sign language recognition systems by analyzing both accuracy and loss. Keywords: RNN, LSTM, Bidirectional LSTM, Bayesian LSTM, Hierarchical LSTM, Parametric.
An Efficient Anomaly Recognition Framework Using an Attention Residual LSTM in Surveillance Videos
Video anomaly recognition in smart cities is an important computer vision task that plays a vital role in smart surveillance and public safety, but it is challenging because anomalies are diverse, complex, and infrequent in real-time surveillance environments. Many deep learning models require large amounts of training data, lack generalization ability, and incur huge time complexity. To overcome these problems, in the current work we present an efficient, lightweight convolutional neural network (CNN)-based anomaly recognition framework that is functional in a surveillance environment with reduced time complexity. We extract spatial CNN features from a series of video frames and feed them to the proposed residual attention-based long short-term memory (LSTM) network, which can precisely recognize anomalous activity in surveillance videos. The representative CNN features combined with residual blocks in the LSTM for sequence learning prove effective for anomaly detection and recognition, validating our model's usage in smart-city video surveillance. Extensive experiments on the real-world benchmark UCF-Crime dataset validate the effectiveness of the proposed model in complex surveillance environments and demonstrate that it outperforms state-of-the-art models with a 1.77%, 0.76%, and 8.62% increase in accuracy on the UCF-Crime, UMN, and Avenue datasets, respectively.
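The attention component in such a network amounts to a learned weighting over time steps: score each frame's feature vector, softmax-normalize the scores, and pool the sequence into one weighted sum. A minimal sketch in plain Python, with a fixed scoring vector standing in for the learned attention parameters:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(frames, score_vec):
    """Pool a sequence of per-frame feature vectors into one vector.
    Each frame is scored by a dot product with score_vec (a stand-in for
    the learned attention parameters), the scores are softmaxed, and the
    frames are combined as a weighted sum."""
    scores = [sum(f * w for f, w in zip(frame, score_vec)) for frame in frames]
    weights = softmax(scores)
    dim = len(frames[0])
    return [sum(w * frame[d] for w, frame in zip(weights, frames))
            for d in range(dim)]

# 3 time steps of 2-dim CNN features (toy values).
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention_pool(frames, [1.0, 1.0]))
```

The pooled vector then feeds the classification head in place of the raw frame sequence.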
Estimation of Daily Photovoltaic Power One Day Ahead With Hybrid Deep Learning and Machine Learning Models
In this study, hybrid LSTM-SVM and hybrid LSTM-KNN models were developed to predict hourly PV power one day ahead. The performance of these hybrid models was compared with K-nearest neighbors (KNN), long short-term memory (LSTM), and support vector machine (SVM) models. The input data of these models were pressure, cloudiness, humidity, temperature, and solar intensity, while the output was the daily photovoltaic (PV) power one day ahead. The models were evaluated using mean square error (MSE), root mean square error (RMSE), normalized root mean square error (NRMSE), and peak signal-to-noise ratio (PSNR). The prediction accuracies of hybrid LSTM-KNN, LSTM, KNN, hybrid LSTM-SVM, and SVM were 98.72%, 95.8%, 90.25%, 76.3%, and 48.87%, respectively. Hybrid LSTM-KNN predicted the next day's PV power with higher accuracy than LSTM, KNN, SVM, and hybrid LSTM-SVM. The effect of the input variables on the output was examined with sensitivity analysis, which showed that the meteorological input with the greatest effect on next-day PV power was solar intensity, at 95%.
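The KNN half of the hybrid can be sketched on its own: predict PV power as the mean target of the k nearest neighbours in weather-feature space. The feature layout and values below are illustrative, not the paper's data:

```python
import math

def knn_predict(train_X, train_y, query, k=3):
    """Predict by averaging the targets of the k nearest training points
    (Euclidean distance in feature space)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    nearest = dists[:k]
    return sum(y for _, y in nearest) / len(nearest)

# Features: [temperature, humidity, solar intensity] (illustrative units).
X = [[20, 40, 800], [21, 42, 820], [15, 80, 200], [14, 85, 180]]
y = [5.1, 5.3, 1.2, 1.0]  # PV power in kW (made-up values)
print(knn_predict(X, y, [20, 45, 810], k=2))  # averages the two sunny days
```

In the hybrid, the LSTM's learned representation (rather than the raw weather vector) would typically supply the feature space in which these distances are measured.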
Prediction of significant wave height using a VMD-LSTM-rolling model in the South Sea of China
Accurate prediction of significant wave height is crucial for ocean engineering. Traditional time series prediction models fail to achieve satisfactory results because significant wave height is non-stationary. Decomposition algorithms are adopted to address this non-stationarity, but the traditional direct decomposition method suffers from information leakage. In this study, a hybrid VMD-LSTM-rolling model is proposed for non-stationary wave height prediction. In this model, time series are generated by a rolling method; each series is then decomposed, trained on, and predicted, and the predictions from each series are combined into the final prediction of significant wave height. The performance of the LSTM model, the VMD-LSTM-direct model, and the VMD-LSTM-rolling model is compared for multi-step prediction. The errors of the VMD-LSTM-direct and VMD-LSTM-rolling models are lower than that of the LSTM model. Because the testing set is decomposed, the VMD-LSTM-direct model shows slightly higher accuracy than the VMD-LSTM-rolling model; given the information leakage, however, that accuracy is spurious. The VMD-LSTM-rolling model is therefore superior for predicting significant wave height and can be applied in practice.
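The point of the rolling scheme is that each decomposition only ever sees data up to the forecast origin, so no future information leaks into the components. A sketch with a trivial moving-average "decomposition" standing in for VMD and a persistence predictor standing in for the LSTM (both are simplifying assumptions):

```python
def decompose(series, window=3):
    """Stand-in for VMD: split a series into a moving-average trend
    component and a residual component."""
    trend = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        trend.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def rolling_forecast(series, start):
    """One-step-ahead forecasts: at each origin t, decompose only
    series[:t], predict each component (here: persistence), recombine."""
    preds = []
    for t in range(start, len(series)):
        trend, residual = decompose(series[:t])  # no future data used
        preds.append(trend[-1] + residual[-1])   # stand-in component models
    return preds

series = [1.0, 2.0, 3.0, 2.5, 2.8, 3.1]  # toy wave heights in metres
print(rolling_forecast(series, start=3))
```

The direct method would instead call `decompose(series)` once over the whole record, letting the test-period values shape the components — the leakage the paper warns about.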
Unlocking the potential of RNN and CNN models for accurate rehabilitation exercise classification on multi-datasets
Physical rehabilitation is crucial in healthcare, facilitating recovery from injuries or illnesses and improving overall health. However, a notable global challenge stems from the shortage of professional physiotherapists, particularly acute in some developing countries, where the ratio can be as low as one physiotherapist per 100,000 individuals. To address these challenges and elevate patient care, the field of physical rehabilitation is progressively integrating Computer Vision and Human Activity Recognition (HAR) techniques. Numerous research efforts explore methodologies that assist with rehabilitation exercises and evaluate patient movements, which is crucial because incorrect exercises can worsen conditions. This study investigates applying various deep-learning models to exercise classification on the benchmark KIMORE and UI-PRMD datasets. Employing Bi-LSTM, LSTM, CNN, and CNN-LSTM models, with random search for architecture design and hyper-parameter tuning, our investigation reveals the CNN model as the top performer. Under cross-validation, it achieves remarkable mean testing accuracies of 93.08% on the KIMORE dataset and 99.7% on the UI-PRMD dataset, slight improvements of 0.75% and 0.1%, respectively, over previous techniques. Expanding beyond exercise classification, this study also explores the KIMORE dataset's utility for disease identification, where the CNN model consistently demonstrates an outstanding accuracy of 89.87%, indicating its promising role in both exercise and disease identification within physical rehabilitation.
Detecting gradual trends: Integrating EWMA control charts with artificial intelligence algorithms (LSTM)
Control charts such as the exponentially weighted moving average (EWMA) chart are widely used in statistical process control (SPC) to detect small, gradual shifts in process behavior while mitigating noise. Traditional EWMAs, however, face significant challenges and limited adaptability in complex, dynamic environments. In this paper, we propose an improved hybrid approach that integrates EWMAs with artificial intelligence algorithms, such as anomaly detection models, deep learning networks, and unsupervised learning, to enhance the early detection of non-random variations and subtle process trends. Simulations and real-world datasets were used to validate the effectiveness of the integrated model in identifying slow-developing faults.
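The EWMA side of such a hybrid is compact: z_t = λx_t + (1 − λ)z_{t−1}, with an alarm when z_t leaves the control limits around the target mean. A minimal sketch (λ, L, and the data are illustrative choices):

```python
import math

def ewma_chart(xs, mean, sigma, lam=0.2, L=3.0):
    """Return (ewma_values, alarms). An alarm fires at step t when z_t
    exits the time-varying control limits
    mean +/- L*sigma*sqrt(lam/(2-lam) * (1-(1-lam)^(2t)))."""
    z = mean
    zs, alarms = [], []
    for t, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        zs.append(z)
        alarms.append(abs(z - mean) > half_width)
    return zs, alarms

# In-control noise followed by a small sustained shift the EWMA accumulates.
data = [0.1, -0.2, 0.0, 0.15, 1.2, 1.1, 1.3, 1.25, 1.4]
zs, alarms = ewma_chart(data, mean=0.0, sigma=0.3)
print(alarms)  # alarms begin once the shift has accumulated
```

In the hybrid scheme, an LSTM (or other model) would then be trained on the EWMA statistics or raw readings to separate genuine slow-developing faults from benign drift.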
CNN-LSTM vs. LSTM-CNN to Predict Power Flow Direction: A Case Study of the High-Voltage Subnet of Northeast Germany
The massive installation of renewable energy sources together with energy storage in the power grid can lead to fluctuating energy consumption and a bi-directional power flow when there is a surplus of electricity generation. To ensure the security and reliability of the power grid, high-quality bi-directional power flow prediction is required. However, predicting bi-directional power flow remains a challenge due to the ever-changing characteristics of power flow and the influence of weather on renewable power generation. To overcome these challenges, we present two of the most popular hybrid deep learning (HDL) models, based on combinations of a convolutional neural network (CNN) and long short-term memory (LSTM), to predict the power flow in the investigated network cluster. In our approach, the CNN-LSTM and LSTM-CNN models were trained with two datasets differing in size and included parameters, the aim being to see whether the size of the dataset and additional weather data affect the models' ability to predict power flow. The results show that both proposed models can achieve a small error under certain conditions, while the size and parameters of the dataset affect the training time and accuracy of the HDL models.
LSTM-Enhanced Chaotic Bat Algorithm for Real-Time Intelligent Motor Scheduling in Edge AI Environment
Smart motors regulate voltage adaptively to prevent economic losses resulting from voltage instability. These motors generate massive volumes of data, which existing scheduling methods struggle to process efficiently, leading to significant delays. To address these limitations, this paper proposes a novel intelligent motor scheduling framework that integrates Long Short-Term Memory (LSTM) networks with an Improved Chaotic Bat Algorithm (ICBA) to meet the real-time and large-scale optimization demands of smart grid environments. The LSTM module predicts high-quality initial solutions based on historical scheduling patterns, thereby accelerating the convergence of the ICBA. Enhancements to the standard bat algorithm include a second-order oscillation mechanism for improved global exploration and a chaotic search strategy based on logistic mapping to increase population diversity. Furthermore, a hierarchical cloud–edge–end collaborative optimization architecture is introduced to balance computational efficiency with real-time responsiveness. In terms of response time, the LSTM-ICBA achieves an average latency 47.4% lower than that of LSTM alone; for voltage deviation, the framework achieves a 24.3% reduction compared with LSTM.
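The chaotic ingredient here is typically the logistic map x_{n+1} = r·x_n·(1 − x_n), which for r = 4 produces non-repeating values in (0, 1) that can seed or diversify a population. A minimal sketch (mapping the chaotic values onto candidate positions is an illustrative choice, not the paper's exact scheme):

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n chaotic values in (0, 1) via the logistic map
    x_{k+1} = r * x_k * (1 - x_k)."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_positions(x0, pop_size, lower, upper):
    """Scatter pop_size candidate positions over [lower, upper] using the
    chaotic sequence in place of a uniform random generator."""
    return [lower + c * (upper - lower)
            for c in logistic_sequence(x0, pop_size)]

print(chaotic_positions(0.37, 5, lower=-10.0, upper=10.0))
```

Unlike pseudo-random draws, the sequence is fully deterministic given the seed x0, yet it covers the interval irregularly enough to keep the bat population diverse.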
Deep Learning-Based Time-Series Analysis for Detecting Anomalies in Internet of Things
Anomaly detection in time-series data is an integral part of the Internet of Things (IoT). In particular, with the advent of sophisticated deep and machine learning-based techniques, this line of research has attracted many researchers aiming to develop more accurate anomaly detection algorithms. Anomaly detection has long been a challenging problem in security, especially in malware detection and data tampering. The advancement of the IoT paradigm, as well as the increasing number of cyber attacks on IoT networks worldwide, raises the question of whether flexible, simple, yet accurate anomaly detection techniques exist. In this paper, we investigate the performance of deep learning-based models including the recurrent Bidirectional LSTM (BI-LSTM), Long Short-Term Memory (LSTM), the CNN-based Temporal Convolutional Network (TCN), and CuDNN-LSTM, a fast LSTM implementation backed by CuDNN. In particular, we assess these models with respect to accuracy and the training time needed to build them. In our experiments, using different look-back windows (i.e., 15, 20, and 30 min), we observe that the CuDNN-LSTM model outperforms the others in accuracy, whereas the TCN-based model trains fastest. We report the results of experiments comparing these four models across various look-back values.
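A look-back value of w turns a univariate series into supervised (X, y) pairs: each sample is w consecutive readings and the target is the next reading, which is the preprocessing shared by all four models. A minimal sketch with made-up sensor values:

```python
def make_lookback_samples(series, look_back):
    """Slice a series into (window, next_value) training pairs: sample i
    is series[i : i+look_back] with target series[i+look_back]."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return X, y

# Toy sensor readings, e.g. one value per 15-minute interval.
readings = [10, 12, 11, 13, 14, 13, 15]
X, y = make_lookback_samples(readings, look_back=3)
print(X[0], y[0])  # [10, 12, 11] 13
```

Varying `look_back` (15, 20, 30 min in the paper) changes how much history each model sees per prediction, trading context against sample count and training time.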