1,053 results for "RNN"
Stock price prediction using the RNN model
This paper proposes a deep learning technique to predict the stock market. Because an RNN can process time series data, it is well suited to stock forecasting. We therefore use an RNN, taking Apple's stock price over the past ten years as the dataset for prediction. Experiments show that the prediction accuracy is over 95% and the loss is close to 0.1%.
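The abstract gives no implementation details; as a rough illustration of the kind of pipeline it describes, here is a minimal sliding-window RNN sketch in Keras. The window length, layer width, epoch count, and the synthetic price series are placeholder assumptions, not the authors' settings.

```python
# Minimal sketch of sliding-window RNN forecasting, assuming a 1-D array
# of daily closing prices. Window size, layer width, and epochs are
# illustrative, not taken from the paper.
import numpy as np
import tensorflow as tf

def make_windows(prices, window=30):
    """Turn a price series into (window -> next value) training pairs."""
    X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    return X[..., np.newaxis], y  # shape (samples, window, 1)

prices = np.cumsum(np.random.randn(2520)).astype("float32")  # placeholder for ~10y of quotes
X, y = make_windows(prices)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

next_price = model.predict(X[-1:], verbose=0)[0, 0]  # one-step-ahead forecast
```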
Recurrent Neural Networks: A Comprehensive Review of Architectures, Variants, and Applications
Recurrent neural networks (RNNs) have significantly advanced the field of machine learning (ML) by enabling the effective processing of sequential data. This paper provides a comprehensive review of RNNs and their applications, highlighting advancements in architectures, such as long short-term memory (LSTM) networks, gated recurrent units (GRUs), bidirectional LSTM (BiLSTM), echo state networks (ESNs), peephole LSTM, and stacked LSTM. The study examines the application of RNNs to different domains, including natural language processing (NLP), speech recognition, time series forecasting, autonomous vehicles, and anomaly detection. Additionally, the study discusses recent innovations, such as the integration of attention mechanisms and the development of hybrid models that combine RNNs with convolutional neural networks (CNNs) and transformer architectures. This review aims to provide ML researchers and practitioners with a comprehensive overview of the current state and future directions of RNN research.
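For quick orientation, several of the variants the review names correspond to one-line layer choices in common frameworks. A Keras sketch follows; layer widths and the input shape are arbitrary, and peephole LSTMs and echo state networks are omitted because they require add-on packages rather than stock layers.

```python
# Sketch of recurrent variants named in the review as interchangeable
# Keras stacks. Widths and shapes are arbitrary illustrations.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build(recurrent_stack):
    """Small sequence-regression model around a given recurrent stack."""
    return tf.keras.Sequential(recurrent_stack + [layers.Dense(1)])

models = {
    "LSTM":         build([layers.LSTM(64)]),
    "GRU":          build([layers.GRU(64)]),
    "BiLSTM":       build([layers.Bidirectional(layers.LSTM(64))]),
    # Stacked LSTM: lower layers must emit full sequences for the next layer.
    "stacked LSTM": build([layers.LSTM(64, return_sequences=True),
                           layers.LSTM(64)]),
}

x = np.random.randn(4, 100, 8).astype("float32")  # (batch, timesteps, features)
for name, m in models.items():
    print(name, m(x).shape)  # each maps a sequence to one value per sample
```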
Time series forecasting of COVID-19 transmission in Asia Pacific countries using deep neural networks
The novel human coronavirus disease COVID-19 has become the fifth documented pandemic since the 1918 flu pandemic. COVID-19 was first reported in Wuhan, China, and subsequently spread worldwide. Almost every country in the world is facing this challenge. We present forecasting models to estimate and predict the COVID-19 outbreak in Asia Pacific countries, particularly Pakistan, Afghanistan, India, and Bangladesh. We utilized recent deep learning techniques, namely Long Short-Term Memory networks (LSTM), Recurrent Neural Networks (RNN), and Gated Recurrent Units (GRU), to quantify the intensity of the pandemic in the near future. We consider the time variable and data non-linearity when employing the neural networks. Each model's salient features were evaluated to forecast the number of COVID-19 cases over the next 10 days. The forecasting performance of the employed deep learning models, evaluated on data up to July 01, 2020, is more than 90% accurate, which demonstrates the reliability of the proposed study. We hope that the present comparative analysis will give government officials an accurate picture of the pandemic's spread so that they can take appropriate mitigation measures.
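A minimal sketch of this kind of model comparison: fit LSTM, simple RNN, and GRU on a daily case-count series, then roll each forward 10 days by feeding predictions back in. The series, window length, and training settings below are placeholders, not the study's data or configuration.

```python
# Compare three recurrent cells on next-10-day forecasting of a
# cumulative case-count series. All data and settings are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

cases = np.cumsum(np.random.poisson(50, 200)).astype("float32")  # placeholder counts
window = 14
X = np.stack([cases[i:i + window] for i in range(len(cases) - window)])[..., None]
y = cases[window:]

def forecast(cell, steps=10):
    model = tf.keras.Sequential([cell, layers.Dense(1)])
    model.compile("adam", "mse")
    model.fit(X, y, epochs=5, verbose=0)
    hist = list(cases[-window:])
    for _ in range(steps):  # iterated one-step forecasts
        nxt = model.predict(np.array(hist[-window:])[None, :, None], verbose=0)[0, 0]
        hist.append(nxt)
    return hist[-steps:]

for cell in (layers.LSTM(32), layers.SimpleRNN(32), layers.GRU(32)):
    print(type(cell).__name__, forecast(cell)[:3])
```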
Fast Adaptive RNN Encoder–Decoder for Anomaly Detection in SMD Assembly Machine
A Surface Mounted Device (SMD) assembly machine manufactures various products on a flexible manufacturing line, so an anomaly detection model that can adapt quickly to varied manufacturing environments is required. In this paper, we propose a fast adaptive anomaly detection model based on a Recurrent Neural Network (RNN) Encoder–Decoder applied to operating machine sounds. The RNN Encoder–Decoder has a structure very similar to an Auto-Encoder (AE), but it has significantly fewer parameters because of its rolled (recurrent) structure. The RNN Encoder–Decoder therefore requires only a short training process, enabling fast adaptation. The anomaly detection model decides abnormality based on the Euclidean distance between the generated sequences and the observed sequence of machine sounds. An experimental evaluation was conducted on a dataset from the SMD assembly machine, and the results showed state-of-the-art performance with fast adaptation.
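The scoring rule the abstract describes, reconstruction followed by a Euclidean-distance check, can be sketched as below. The sound-feature shapes, layer sizes, and threshold rule are illustrative assumptions, not the paper's configuration.

```python
# Sketch: an RNN encoder-decoder reconstructs a sound-feature sequence;
# the Euclidean distance between reconstruction and observation flags
# anomalies. Feature extraction, sizes, and threshold are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

T, F = 50, 20  # timesteps x features per sound clip (placeholder)

model = tf.keras.Sequential([
    layers.LSTM(16, input_shape=(T, F)),     # encoder: sequence -> state
    layers.RepeatVector(T),                  # hand the state to every step
    layers.LSTM(16, return_sequences=True),  # decoder: state -> sequence
    layers.TimeDistributed(layers.Dense(F)),
])
model.compile("adam", "mse")

normal = np.random.randn(256, T, F).astype("float32")  # placeholder normal clips
model.fit(normal, normal, epochs=3, verbose=0)          # learn to reconstruct

def anomaly_score(clip):
    """Euclidean distance between observed and reconstructed sequence."""
    recon = model.predict(clip[None], verbose=0)[0]
    return float(np.linalg.norm(clip - recon))

# One simple threshold choice: a high percentile of scores on normal clips.
threshold = np.percentile([anomaly_score(c) for c in normal[:32]], 99)
```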
Heartbeat Sound Signal Classification Using Deep Learning
Heart disease is currently among the leading causes of death, and heartbeat sound analysis is a convenient way to diagnose it. Heartbeat sound classification remains a challenging problem, particularly in heart sound segmentation and feature extraction. This study uses Dataset-B, which contains three categories of heartbeat sound: Normal, Murmur, and Extra-systole. In the proposed framework, we first remove noise from the heartbeat sound signal with a band filter and fix the sampling rate of each sound signal. We then apply down-sampling to obtain more discriminative features and to reduce the frame dimensionality; this does not affect the results, while decreasing computation time and cost. Finally, we apply the proposed Recurrent Neural Network (RNN) model, which is built from Long Short-Term Memory (LSTM), Dropout, Dense, and Softmax layers. The results show that the proposed method is competitive with other methods.
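A sketch of the described pipeline: band-pass filtering and down-sampling, followed by an LSTM/Dropout/Dense/Softmax classifier. The cut-off frequencies, sampling rates, and layer sizes here are illustrative guesses, not the study's values.

```python
# Band-pass filter and down-sample a heart-sound clip, then classify it
# with an LSTM -> Dropout -> Dense -> Softmax stack. All rates and sizes
# below are placeholder assumptions.
import numpy as np
from scipy import signal
import tensorflow as tf
from tensorflow.keras import layers

def preprocess(audio, sr=4000, lo=20.0, hi=400.0, target_sr=1000):
    """Band-pass then down-sample one heart-sound recording."""
    b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=sr)
    filtered = signal.filtfilt(b, a, audio)
    return signal.resample(filtered, len(audio) * target_sr // sr)

model = tf.keras.Sequential([
    layers.LSTM(64, input_shape=(2000, 1)),  # 2 s of 1 kHz samples (placeholder)
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),   # Normal / Murmur / Extra-systole
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

# Example: one 2-second clip at 4 kHz becomes 2000 samples at 1 kHz.
clip = preprocess(np.random.randn(8000))
probs = model(clip.reshape(1, 2000, 1).astype("float32"))
```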
Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions
In recent years, deep learning (DL) has been the most popular computational approach in the field of machine learning (ML), achieving exceptional results on a variety of complex cognitive tasks, matching or even surpassing human performance. Deep learning technology, which grew out of artificial neural networks (ANNs), has become central to computing because of its ability to learn from data; learning from enormous volumes of data is one of its chief benefits. In the past few years, the field has grown quickly and has been applied successfully in a wide range of traditional fields. In numerous disciplines, including cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, deep learning has outperformed well-known machine learning approaches. This article aims to provide a starting point for a comprehensive understanding of deep learning by giving a detailed overview of its most significant facets, including the most recent developments in the field. It discusses the significance of deep learning and the various deep learning techniques and networks, and it surveys real-world application areas where deep learning techniques can be utilised. We conclude by identifying possible characteristics of future generations of deep learning modelling and providing research suggestions. The article is intended to serve as a resource for academics and industry practitioners alike; we also present open issues and recommended solutions to help researchers understand the existing research gaps. Various approaches, deep learning architectures, strategies, and applications are discussed in this work.
A comprehensive survey on model compression and acceleration
In recent years, machine learning (ML) and deep learning (DL) have shown remarkable improvement in computer vision, natural language processing, stock prediction, forecasting, and audio processing, to name a few. The trained DL models for these complex tasks are large, which makes them difficult to deploy on resource-constrained devices; for instance, the pre-trained VGG16 model trained on the ImageNet dataset is more than 500 MB. Resource-constrained devices such as mobile phones and Internet of Things devices have limited memory and computation power, yet for real-time applications the trained models must be deployed on them. Popular convolutional neural network models have millions of parameters, which inflates the size of the trained model. It therefore becomes essential to compress and accelerate these models before deploying them on resource-constrained devices, with minimal compromise in model accuracy; retaining the same accuracy after compression is a challenging task. To address this challenge, many researchers have in the last couple of years suggested different techniques for model compression and acceleration. In this paper, we present a survey of the various techniques suggested for compressing and accelerating ML and DL models, discuss the challenges of the existing techniques, and provide future research directions in the field.
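For concreteness, magnitude pruning is one of the compression techniques surveys of this kind typically cover: the smallest-magnitude weights are zeroed so the tensor can be stored and multiplied sparsely. A minimal NumPy sketch, with an arbitrary sparsity level:

```python
# Magnitude pruning: zero the fraction `sparsity` of entries with the
# smallest absolute value. Sparsity level is an arbitrary illustration.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return (pruned weights, binary mask) keeping the largest-|w| entries."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(512, 512).astype("float32")
pruned, mask = magnitude_prune(w)
print(f"kept {mask.mean():.1%} of weights")  # roughly 10% survive
```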
Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN
Greenhouses are now widely used for plant growth, and the environmental parameters of a modern greenhouse can be controlled to maximize crop yield. Optimal control of these parameters requires accurately predicting crop yields for given environmental settings. Greenhouse crop yield forecasting also plays an important role in farming planning and management, allowing cultivators and farmers to use the predictions to make informed management and financial decisions. In this work, we have developed a new greenhouse crop yield prediction technique that combines two state-of-the-art networks for temporal sequence processing: the temporal convolutional network (TCN) and the recurrent neural network (RNN). The proposed algorithm was evaluated comprehensively on datasets obtained from multiple real greenhouse sites growing tomatoes. Based on a statistical analysis of the root mean square errors (RMSEs) between the predicted and actual crop yields, the proposed approach achieves more accurate yield predictions than both traditional machine learning methods and other classical deep neural networks. Moreover, the experimental study shows that historical yield information is the most important factor for accurately predicting future crop yields.
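The abstract does not specify how the two networks are wired together; one plausible combination, a causal dilated Conv1D stack (the TCN part) feeding a recurrent layer, is sketched below. The input shape and layer sizes are placeholders for (timesteps, environmental features) mapped to a yield value.

```python
# One plausible TCN + RNN wiring (an assumption, not the paper's design):
# dilated causal convolutions extract temporal features, a GRU summarizes
# them, and a dense head predicts the crop yield.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # e.g. 52 weeks x 6 climate variables (placeholder shape)
    layers.Conv1D(32, 3, padding="causal", activation="relu",
                  input_shape=(52, 6)),
    layers.Conv1D(32, 3, dilation_rate=2, padding="causal", activation="relu"),
    layers.Conv1D(32, 3, dilation_rate=4, padding="causal", activation="relu"),
    layers.GRU(32),   # recurrent summary of the TCN features
    layers.Dense(1),  # predicted yield
])
model.compile("adam", "mse")
```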
Application of Long Short-Term Memory (LSTM) Neural Network for Flood Forecasting
Flood forecasting is an essential requirement in integrated water resource management. This paper suggests a Long Short-Term Memory (LSTM) neural network model for flood forecasting, with daily discharge and rainfall used as input data. Characteristics of the data sets that may influence model performance were also of interest. The Da River basin in Vietnam was chosen, and two different combinations of input data sets from before 1985 (when the Hoa Binh dam was built) were used to forecast the flow rate one, two, and three days ahead at Hoa Binh Station. The predictive ability of the model is quite impressive: the Nash–Sutcliffe efficiency (NSE) reached 99%, 95%, and 87% for the three forecasting cases, respectively. The findings suggest a viable option for flood forecasting on the Da River in Vietnam, where the river basin stretches across several countries and downstream flows in Vietnam may fluctuate suddenly due to flood discharge from upstream hydroelectric reservoirs.
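For reference, the Nash–Sutcliffe efficiency reported here has a standard definition, computed over observed and simulated discharge:

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is no
# better than always predicting the observed mean.
import numpy as np

def nse(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)
```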
Improving the forecasting accuracy of monthly runoff time series of the Brahmani River in India using a hybrid deep learning model
Accurate prediction of monthly runoff is critical for effective water resource management and flood forecasting in river basins. In this study, we developed a hybrid deep learning (DL) model, Fourier transform long short-term memory (FT-LSTM), to improve the prediction accuracy of monthly discharge time series in the Brahmani river basin at Jenapur station. We compare the performance of FT-LSTM with three popular DL models: LSTM, recurrent neural network, and gated recurrent unit, considering different lag periods (1, 3, 6, and 12). The lag period, representing the interval between the observed data points and the predicted data points, is crucial for capturing the temporal relationships and identifying patterns within the hydrological data. The results show that the FT-LSTM model consistently outperforms the other models across all lag periods in terms of error metrics. Furthermore, the FT-LSTM model demonstrates higher Nash–Sutcliffe efficiency and R2 values, indicating a better fit between predicted and actual runoff values. This work contributes to the growing field of hybrid DL models for hydrological forecasting. The FT-LSTM model proves effective in improving the accuracy of monthly runoff forecasts and offers a promising solution for water resource management and river basin decision-making.
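The abstract does not describe FT-LSTM's internals; one plausible reading of the name, augmenting each lagged runoff window with its Fourier magnitudes before an LSTM, is sketched below. The pairing itself, the lag length, and all sizes are assumptions for illustration.

```python
# Assumed FT-LSTM reading: concatenate each lagged runoff window with its
# FFT magnitude spectrum, then regress the next value with an LSTM.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def ft_features(windows):
    """Append each window's FFT magnitude spectrum to the window itself."""
    spectra = np.abs(np.fft.rfft(windows, axis=1))
    return np.concatenate([windows, spectra], axis=1)[..., None]

runoff = np.random.rand(400).astype("float32")  # placeholder monthly series
lag = 12
X = np.stack([runoff[i:i + lag] for i in range(len(runoff) - lag)])
y = runoff[lag:]
Xf = ft_features(X)  # shape: (samples, lag + lag//2 + 1, 1)

model = tf.keras.Sequential([layers.LSTM(32), layers.Dense(1)])
model.compile("adam", "mse")
model.fit(Xf, y, epochs=5, verbose=0)
```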