10,824 results for "Long Short-term Memory"
A short-term hybrid forecasting model for time series electrical-load data using random forest and bidirectional long short-term memory
With the deregulation of the electric industry, load forecasting is in greater demand than ever to support applications such as energy generation, pricing decisions, resource procurement, and infrastructure development. This paper presents a hybrid machine learning model for short-term load forecasting (STLF) that applies random forest and bidirectional long short-term memory to combine the benefits of both methods. In the experimental evaluation, we used a Bangladeshi electricity consumption dataset spanning 36 months. The paper provides a comparative study between the proposed hybrid model and state-of-the-art models using performance metrics, loss analysis, and prediction plots. Empirical results demonstrate that the hybrid model outperforms the standard long short-term memory and bidirectional long short-term memory models, producing more accurate forecasts.
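The abstract does not detail how the random forest and the BiLSTM are combined, so the sketch below only illustrates one plausible arrangement in PyTorch and scikit-learn: a BiLSTM forecaster and a random forest fitted on the same lagged windows, blended with an equal-weight average. The window length, the sine-wave placeholder series, and the blending weights are assumptions, not details from the paper.
```python
# Minimal sketch of a hybrid RF + BiLSTM forecaster (not the paper's exact design).
# Assumptions: a 24-step look-back window, one-step-ahead forecasts, and a simple
# equal-weight average of the two models' predictions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

WINDOW = 24  # assumed look-back length

class BiLSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):             # x: (batch, WINDOW, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # forecast from the last time step

def make_windows(series):
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    y = series[WINDOW:]
    return X, y

series = np.sin(np.linspace(0, 60, 1000)).astype(np.float32)  # placeholder load data
X, y = make_windows(series)

rf = RandomForestRegressor(n_estimators=200).fit(X, y)
net = BiLSTMForecaster()
# ... train `net` with MSE loss on (X, y) before blending ...
with torch.no_grad():
    lstm_pred = net(torch.from_numpy(X[-1:]).unsqueeze(-1)).item()
hybrid_pred = 0.5 * (rf.predict(X[-1:])[0] + lstm_pred)  # assumed equal-weight blend
```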
Federated learning with multi‐cohort real‐world data for predicting the progression from mild cognitive impairment to Alzheimer's disease
INTRODUCTION: Leveraging routinely collected electronic health records (EHRs) from multiple health-care institutions, this approach aims to assess the feasibility of using federated learning (FL) to predict the progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD).
METHODS: We analyzed EHR data from the OneFlorida+ consortium, simulating six sites, and used a long short-term memory (LSTM) model with a federated averaging (FedAvg) algorithm. A personalized FL approach was used to address between-site heterogeneity. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) and feature importance techniques.
RESULTS: Of 44,899 MCI patients, 6391 progressed to AD. FL models achieved a 6% improvement in AUC compared to local models. Key predictive features included body mass index, vitamin B12, blood pressure, and others.
DISCUSSION: FL showed promise in predicting AD progression by integrating heterogeneous data across multiple institutions while preserving privacy. Despite limitations, it offers potential for future clinical applications.
HIGHLIGHTS: We applied long short-term memory and federated learning (FL) to predict MCI-to-AD progression using electronic health record data from multiple institutions. FL improved prediction performance, with a 6% increase in AUC compared to local models. We identified key predictive features, such as body mass index, vitamin B12, and blood pressure. FL shows effectiveness in handling data heterogeneity across multiple sites while ensuring data privacy. Personalized and pooled FL models generally performed better than global and local models.
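To make the FedAvg mechanic concrete, here is a minimal sketch of one aggregation round, assuming a visit-sequence LSTM classifier and a dict of per-site DataLoaders (`site_loaders`); the feature dimension, class count, and single local epoch are illustrative choices rather than the study's configuration.
```python
# Minimal sketch of the FedAvg aggregation step for an LSTM classifier.
# Assumptions: `site_loaders` maps site name -> DataLoader of (sequence, label) batches;
# 32 input features and 2 classes are placeholders.
import copy
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=32, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, visits, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

def fedavg_round(global_model, site_loaders, lr=1e-3):
    states, sizes = [], []
    for loader in site_loaders.values():
        local = copy.deepcopy(global_model)   # each site starts from the global weights
        opt = torch.optim.Adam(local.parameters(), lr=lr)
        for x, y in loader:                   # one local epoch per round (illustrative)
            opt.zero_grad()
            nn.functional.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    total = sum(sizes)
    avg = {k: sum(s[k] * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}                # size-weighted parameter average (FedAvg)
    global_model.load_state_dict(avg)
    return global_model
```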
Efficient TD3 based path planning of mobile robot in dynamic environments using prioritized experience replay and LSTM
To address the challenges of sample utilization efficiency and managing temporal dependencies, this paper proposes an efficient path planning method for mobile robots in dynamic environments based on an improved twin delayed deep deterministic policy gradient (TD3) algorithm. The proposed method, named PL-TD3, integrates prioritized experience replay (PER) and long short-term memory (LSTM) neural networks, which enhance both sample efficiency and the ability to handle time-series data. To verify the effectiveness of the proposed method, simulation and practical experiments were designed and conducted. The simulation experiments included both static and dynamic obstacles in the test environment, along with tests of generalization capability; the algorithm demonstrated superior performance in terms of both execution time and path efficiency. The practical experiments, built on the assumptions of the simulation tests, further confirmed that PL-TD3 improves the effectiveness and robustness of path planning for mobile robots in dynamic environments.
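The abstract names prioritized experience replay as one of the two additions to TD3; below is a minimal, generic sketch of a proportional PER buffer (not the paper's implementation), with `alpha`, `beta`, and the capacity set to typical illustrative values.
```python
# Minimal sketch of proportional prioritized experience replay (PER).
# alpha controls how strongly priorities skew sampling; beta controls the
# importance-sampling correction. Both are assumed hyperparameter values.
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity=100_000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        max_p = self.priorities.max() if self.data else 1.0  # new samples get max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)        # importance-sampling weights
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-5):
        self.priorities[idx] = np.abs(td_errors) + eps        # priority proportional to |TD error|
```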
Dynamic prediction of global monthly burned area with hybrid deep neural networks
Wildfires not only severely damage the natural environment and global ecological balance but also cause substantial losses to global forest resources and human lives and property. Unprecedented fire events such as Australia's bushfires have alerted us to the fact that wildfire prediction is a critical scientific problem for fire management. Therefore, robust, long-lead models and dynamic predictions of wildfire are valuable for global fire prevention. However, despite decades of effort, the dynamic, effective, and accurate prediction of wildfire remains problematic. There is great uncertainty in predicting the future based on historical and existing spatiotemporal sequence data, but with advances in deep learning algorithms, solutions to prediction problems are being developed. Here, we present a dynamic prediction model of the global burned area of wildfire employing a deep neural network (DNN) approach that produces effective wildfire forecasts based on historical time series predictors and satellite-based burned area products. A hybrid DNN that combines long short-term memory and a two-dimensional convolutional neural network (CNN2D-LSTM) was proposed, and CNN2D-LSTM model candidates with four different architectures were designed and compared to construct the optimal architecture for fire prediction. The proposed model was also shown to outperform convolutional neural networks (CNNs) and the fully connected long short-term memory (FcLSTM) approach based on the refined index of agreement and other evaluation metrics. We produced monthly global burned-area spatiotemporal prediction maps that adequately reflect the seasonal peak in fire activity and highly fire-prone areas. Our combined CNN2D-LSTM approach can effectively predict the global burned area of wildfires 1 month in advance and can be generalized to provide seasonal estimates of global fire risk.
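As a rough illustration of the CNN2D-LSTM idea, the sketch below encodes each monthly burned-area grid with a small 2-D CNN and passes the sequence of encodings to an LSTM that predicts the next month's grid; the 64 x 64 grid size, 12-month window, and layer widths are assumptions, since the paper compares four architectures whose exact settings are not given here.
```python
# Minimal sketch of a CNN2D-LSTM: a 2-D CNN encodes each monthly grid, an LSTM
# models the sequence of encodings, and a linear head maps to the next month's grid.
import torch
import torch.nn as nn

class CNN2DLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                        # -> 32 * 4 * 4 features per month
        )
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 64 * 64)   # predict next month's 64x64 grid

    def forward(self, x):                        # x: (batch, months, 1, 64, 64)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # encode each month separately
        out, _ = self.lstm(feats)
        return self.head(out[:, -1]).view(b, 1, 64, 64)

model = CNN2DLSTM()
dummy = torch.randn(2, 12, 1, 64, 64)            # 2 samples, 12 months of history
print(model(dummy).shape)                        # torch.Size([2, 1, 64, 64])
```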
Personalized blood glucose prediction in type 1 diabetes using meta-learning with bidirectional long short term memory-transformer hybrid model
Personalized blood glucose (BG) prediction in Type 1 Diabetes (T1D) is challenged by significant inter-patient heterogeneity. To address this, we propose BiT-MAML, a hybrid model combining a Bidirectional LSTM-Transformer with Model-Agnostic Meta-Learning. We evaluated our model using a rigorous Leave-One-Patient-Out Cross-Validation (LOPO-CV) on the OhioT1DM dataset, ensuring a fair comparison against re-implemented LSTM and Edge-LSTM baselines. The results show our model achieved a mean RMSE of 24.89 mg/dL for the 30 min prediction horizon, marking a substantial improvement of 19.3% over the standard LSTM and 14.2% over the Edge-LSTM. Notably, our model also achieved the lowest standard deviation (±4.60 mg/dL), indicating more consistent and generalizable performance across the patient cohort. A key finding of our study is the confirmation of significant performance variability across individuals, a known clinical challenge. This was evident as our model’s 30 min RMSE ranged from an excellent 19.64 mg/dL to a more challenging 30.57 mg/dL, reflecting the inherent difficulty of personalizing predictions rather than model instability. From a clinical safety perspective, Clarke Error Grid Analysis confirmed the model’s robustness, with over 92% of predictions falling within the clinically acceptable Zones A and B. This study concludes that the development of effective personalized BG prediction requires not only advanced model architectures but also robust evaluation methods that transparently report the full spectrum of performance, providing a realistic pathway toward reliable clinical tools.
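A minimal sketch of the BiLSTM-Transformer backbone (with the MAML personalization loop omitted for brevity) is shown below, assuming a univariate CGM input window of 12 five-minute readings and illustrative layer sizes; none of these values are taken from the paper.
```python
# Minimal sketch of a BiLSTM-Transformer hybrid for glucose forecasting: a BiLSTM encodes
# the CGM history, a Transformer encoder refines the sequence, and a linear head outputs
# the 30-minute-ahead value. Window length and layer sizes are assumptions.
import torch
import torch.nn as nn

class BiTModel(nn.Module):
    def __init__(self, hidden=64, heads=4, layers=2):
        super().__init__()
        self.bilstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        enc_layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                 # x: (batch, steps, 1) past CGM readings
        seq, _ = self.bilstm(x)
        seq = self.transformer(seq)
        return self.head(seq[:, -1])      # predicted BG 30 min ahead (mg/dL)

model = BiTModel()
history = torch.randn(8, 12, 1)           # 8 windows of 12 five-minute readings
print(model(history).shape)               # torch.Size([8, 1])
```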
Multimodal data-based human motion intention prediction using adaptive hybrid deep learning network for movement challenged person
Recently, social demand for a good quality of life has increased among elderly and disabled people, leading biomedical engineers and robotics researchers to fuse their techniques into novel rehabilitation systems. Such systems rely on biomedical signals acquired from particular organs, cells, or tissues of the human body. The human motion intention prediction mechanism plays an essential role in applications such as assistive and rehabilitation robots that execute specific tasks for elderly and physically impaired individuals. However, human–machine interaction techniques introduce additional complexity, creating greater scope for personalized assistance in human motion intention prediction. Therefore, in this paper, an Adaptive Hybrid Network (AHN) is implemented for effective human motion intention prediction. First, multimodal data such as electroencephalogram (EEG)/electromyography (EMG) signals and sensor measurements are collected from the available data resources. The gathered EEG/EMG signals are converted into spectrogram images and sent to AH-CNN-LSTM, the integration of an Adaptive Hybrid Convolutional Neural Network (AH-CNN) with a Long Short-Term Memory (LSTM) network. Similarly, the sensor measurement data are fed directly to AH-CNN-Res-LSTM, the combination of the Adaptive Hybrid CNN with a Residual Network and LSTM (Res-LSTM), to obtain the predictive result. Further, to enhance prediction, the parameters of both the AH-CNN-LSTM and AH-CNN-Res-LSTM techniques are optimized using the Improved Yellow Saddle Goatfish Algorithm (IYSGA). The efficiency of the implemented model is evaluated through comparison experiments against standard models, and the developed method outperforms the traditional methods.
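The preprocessing step described for the EEG/EMG branch, converting raw signals into spectrogram images before the CNN-LSTM, can be sketched with SciPy as follows; the sampling rate, channel count, and STFT window settings are assumed values, not taken from the paper.
```python
# Minimal sketch of EEG/EMG-to-spectrogram preprocessing for a CNN-LSTM branch.
# Assumptions: 1 kHz sampling rate, 8 channels, 4 s of placeholder signal.
import numpy as np
from scipy.signal import spectrogram

fs = 1000                                           # assumed sampling rate (Hz)
emg = np.random.randn(8, 4 * fs)                    # 8 channels, 4 s of placeholder signal

images = []
for channel in emg:
    f, t, sxx = spectrogram(channel, fs=fs, nperseg=256, noverlap=128)
    images.append(np.log1p(sxx))                    # log power for a CNN-friendly range
spec_stack = np.stack(images)                       # (channels, freq_bins, time_frames)
print(spec_stack.shape)
```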
Improving Wheat Yield Prediction Accuracy Using LSTM-RF Framework Based on UAV Thermal Infrared and Multispectral Imagery
Yield prediction is of great significance in agricultural production. Remote sensing technology based on unmanned aerial vehicles (UAVs) offers the capacity for non-intrusive crop yield prediction with low cost and high throughput. In this study, a winter wheat field experiment with three levels of irrigation (T1 = 240 mm, T2 = 190 mm, T3 = 145 mm) was conducted in Henan Province. Multispectral vegetation indices (VIs) and canopy water stress indices (CWSI) were obtained using a UAV equipped with multispectral and thermal infrared cameras. A framework combining a long short-term memory neural network and random forest (LSTM-RF) was proposed for predicting wheat yield using VIs and CWSI from multiple growth stages as predictors. Validation results showed that an R2 of 0.61 and an RMSE of 878.98 kg/ha were achieved in predicting grain yield using LSTM alone. The LSTM-RF model obtained better prediction results than LSTM, with an R2 of 0.78 and an RMSE of 684.1 kg/ha, equivalent to a 22% reduction in RMSE. The results showed that LSTM-RF captures both the time-series characteristics of the winter wheat growth process and the non-linear relationship between remote sensing data and crop yield, providing an alternative for accurate yield prediction in modern agricultural management.
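The abstract does not specify how the LSTM and random forest are coupled; one plausible stacking, sketched below, trains an LSTM to summarize the multi-growth-stage VI/CWSI sequence and then fits a random forest on the learned summary vectors. The stage count, feature count, and placeholder data are assumptions.
```python
# Minimal sketch of one plausible LSTM-RF stacking (not the paper's exact pipeline):
# an LSTM encodes the per-plot growth-stage sequence; a random forest regresses yield
# on the LSTM's final hidden state.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

N_STAGES, N_FEATURES = 5, 6          # e.g. 5 growth stages, 5 VIs + CWSI per stage (assumed)

class TemporalEncoder(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)

    def forward(self, x):            # x: (plots, stages, features)
        _, (h, _) = self.lstm(x)
        return h[-1]                 # final hidden state as a per-plot summary

X = torch.randn(120, N_STAGES, N_FEATURES)    # placeholder plot-level observations
y = np.random.uniform(4000, 9000, size=120)   # placeholder yields (kg/ha)

encoder = TemporalEncoder()
# ... in practice the encoder would first be trained end-to-end against yield ...
with torch.no_grad():
    features = encoder(X).numpy()
rf = RandomForestRegressor(n_estimators=300).fit(features, y)
print(rf.predict(features[:3]))
```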
Detecting emotions using a combination of bidirectional encoder representations from transformers embedding and bidirectional long short-term memory
One of the most difficult topics in natural language understanding (NLU) is emotion detection in text, because human emotions are difficult to interpret without facial expressions. Because the structure of Indonesian differs from that of other languages, this study focuses on emotion detection in Indonesian text. The nine experimental scenarios of this study combine word embeddings (bidirectional encoder representations from transformers (BERT), Word2Vec, and GloVe) with emotion detection models (bidirectional long short-term memory (BiLSTM), LSTM, and convolutional neural network (CNN)). BERT-BiLSTM achieves the highest accuracy on the data, with values of 88.28%, 88.42%, and 89.20% for the Commuter Line, Transjakarta, and Commuter Line+Transjakarta datasets, respectively. In general, BiLSTM produces the highest accuracy, followed by LSTM and then CNN. Among the word embeddings, BERT outperformed Word2Vec and GloVe. In addition, the BERT-BiLSTM model achieves the highest precision, recall, and F1-measure in each data scenario compared with the other models. According to these results, BERT-BiLSTM enhances classification performance compared to previous studies that used only BERT or BiLSTM for emotion detection in Indonesian text.
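A minimal sketch of the BERT-BiLSTM pipeline is shown below, feeding contextual BERT token embeddings into a BiLSTM classifier; the multilingual checkpoint, number of emotion classes, and pooling position are illustrative stand-ins rather than the study's exact setup.
```python
# Minimal sketch of a BERT-BiLSTM emotion classifier: BERT token embeddings feed a BiLSTM,
# whose output at the [CLS] position is classified into emotion labels.
# The checkpoint name and 5-class label set are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "bert-base-multilingual-cased"   # an Indonesian BERT could be used instead

class BertBiLSTM(nn.Module):
    def __init__(self, n_classes=5, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(CHECKPOINT)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(emb)
        return self.head(seq[:, 0])            # classify from the [CLS] position

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
batch = tokenizer(["keretanya telat lagi, kesal sekali"],  # "the train is late again, so annoying"
                  return_tensors="pt", padding=True, truncation=True)
model = BertBiLSTM()
logits = model(batch["input_ids"], batch["attention_mask"])
```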
Prediction of Cognitive Load from Electroencephalography Signals Using Long Short-Term Memory Network
In recent years, the development of adaptive models to tailor instructional content to learners by measuring their cognitive load has become a topic of active research. Brain fog, also known as confusion, is a common cause of poor performance, and real-time detection of confusion is a challenging and important task for applications in online education and driver fatigue detection. In this study, we propose a deep learning method for cognitive load recognition based on electroencephalography (EEG) signals using a long short-term memory network (LSTM) with an attention mechanism. We obtained EEG signal data from a database of brainwave information and associated data on mental load. We evaluated the performance of the proposed LSTM technique in comparison with random forest, Adaptive Boosting (AdaBoost), support vector machine, eXtreme Gradient Boosting (XGBoost), and artificial neural network models. The experimental results demonstrated that the proposed approach had the highest accuracy of 87.1% compared to those of other algorithms, including random forest (64%), AdaBoost (64.31%), support vector machine (60.9%), XGBoost (67.3%), and artificial neural network models (71.4%). The results of this study support the development of a personalized adaptive learning system designed to measure and actively respond to learners’ cognitive load in real time using wireless portable EEG systems.
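The LSTM-with-attention structure the abstract describes can be sketched as additive attention pooling over LSTM outputs; the channel count, window length, and binary confused/not-confused output below are assumptions for illustration, not the study's configuration.
```python
# Minimal sketch of an LSTM with attention over an EEG window: attention scores weight
# each time step, and the weighted summary is classified.
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    def __init__(self, n_channels=14, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each time step
        self.head = nn.Linear(hidden, n_classes)  # e.g. confused vs. not confused

    def forward(self, x):                          # x: (batch, time, channels)
        seq, _ = self.lstm(x)
        weights = torch.softmax(self.attn(seq), dim=1)   # (batch, time, 1)
        context = (weights * seq).sum(dim=1)             # attention-weighted summary
        return self.head(context)

model = AttentiveLSTM()
eeg = torch.randn(4, 256, 14)     # 4 windows of 256 samples from 14 channels (assumed)
print(model(eeg).shape)           # torch.Size([4, 2])
```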
Advanced Soil Organic Matter Prediction with a Regional Soil NIR Spectral Library Using Long Short-Term Memory–Convolutional Neural Networks: A Case Study
Soil analysis using near-infrared spectroscopy has shown great potential as an alternative to traditional laboratory analysis, and there is continuously increasing interest in building large-scale soil spectral libraries (SSLs). However, due to issues such as high non-linearity in soil spectral data and complexity in soil spatial variation, establishing robust prediction models for soil spectral libraries remains a challenge. This study investigated the performance of deep learning algorithms, including long short-term memory (LSTM) and integrated LSTM–convolutional neural network (LSTM–CNN) models, in predicting the soil organic matter (SOM) of a provincial-scale SSL, and compared them to the commonly used local weighted regression (LWR) model. The Hebei soil spectral library (HSSL) contains 425 topsoil samples (0–20 cm), collected in sets of three from dry land, irrigated land, and paddy fields, respectively, in different counties of Hebei Province, China. The results show that accuracy on the validation dataset ranked as follows: LSTM–CNN (R2p = 0.96, RMSEp = 1.66 g/kg) > LSTM (R2p = 0.83, RMSEp = 3.42 g/kg) > LWR (R2p = 0.82, RMSEp = 3.79 g/kg). The LSTM–CNN model performed best, mainly due to its comprehensive ability to extract spatial and temporal features. Meanwhile, the LSTM model achieved higher accuracy than the LWR model, owing to its built-in memory unit and faster feature band extraction. Thus, deep learning algorithms are suggested for SOM prediction in SSLs, although their performance on larger-scale SSLs, such as continental or global SSLs, still needs to be investigated.
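As a rough sketch of an LSTM-CNN regressor for 1-D NIR spectra, the code below reads the spectrum as an ordered sequence with an LSTM while a 1-D CNN extracts local band features, then fuses both branches for SOM regression; the band count and the fusion scheme are assumptions, not the paper's architecture.
```python
# Minimal sketch of an LSTM-CNN regressor for 1-D spectra: LSTM over the band sequence,
# 1-D CNN over the same spectrum, concatenated features -> predicted SOM (g/kg).
import torch
import torch.nn as nn

N_BANDS = 500                                  # assumed number of spectral bands

class LSTMCNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),                      # -> 16 * 8 features
        )
        self.head = nn.Linear(hidden + 16 * 8, 1)

    def forward(self, spectra):                # spectra: (samples, N_BANDS)
        seq = spectra.unsqueeze(-1)            # (samples, N_BANDS, 1) for the LSTM
        _, (h, _) = self.lstm(seq)
        conv = self.cnn(spectra.unsqueeze(1))  # (samples, 1, N_BANDS) for the CNN
        return self.head(torch.cat([h[-1], conv], dim=1))

model = LSTMCNN()
print(model(torch.rand(10, N_BANDS)).shape)    # torch.Size([10, 1]) predicted SOM
```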