Catalogue Search | MBRL
Explore the vast range of titles available.
577,655 result(s) for "Neural networks"
Advanced deep learning with TensorFlow 2 and Keras : apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more
2020, 2024
A second edition of the bestselling guide to exploring and mastering deep learning with Keras, updated to include TensorFlow 2.x with new chapters on object detection, semantic segmentation, and unsupervised learning using mutual information.
Performance Analysis of Various Activation Functions in Artificial Neural Networks
2019
Research on Artificial Neural Networks (ANNs) has produced many fruitful results, and the choice of activation function is one of the principal factors affecting network performance. This work discusses the roles of many different types of activation functions, along with their respective advantages, disadvantages, and fields of application, so that practitioners can choose the appropriate activation function to obtain superior ANN performance.
Journal Article
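The abstract above compares activation functions without naming them; as an illustrative sketch (not taken from the paper), here are three functions such surveys commonly contrast, with the trade-off each comment notes:

```python
import numpy as np

def sigmoid(x):
    # Smooth, bounded in (0, 1); saturates for large |x|,
    # which can cause vanishing gradients in deep networks.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centred variant of the sigmoid, bounded in (-1, 1).
    return np.tanh(x)

def relu(x):
    # Non-saturating for x > 0 and cheap to compute,
    # but units can "die" when inputs stay negative.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))       # negative inputs are clipped to zero
print(sigmoid(0.0))  # 0.5 at the origin
```

The function names and the sample input are illustrative choices, not the paper's experimental setup.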
Survey of Deep Learning Paradigms for Speech Processing
by Kothandaraman, Mohanaprasad; Bhangale, Kishor Barasu
in Algorithms, Artificial neural networks, Audio equipment
2022
Over the past decades, particular focus has been given to machine learning techniques for speech processing applications. In recent years, however, research has shifted to deep learning, which has become a very attractive area of study and delivers remarkably better performance than other approaches across various speech processing applications. This paper presents a brief survey of deep learning applied to speech processing tasks such as speech separation, speech enhancement, speech recognition, speaker recognition, emotion recognition, language recognition, music recognition, and speech data retrieval. The survey covers the use of Auto-Encoders, Generative Adversarial Networks, Restricted Boltzmann Machines, Deep Belief Networks, Deep Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, and Deep Reinforcement Learning for speech processing. Additionally, it reviews the various speech databases and evaluation metrics used to assess the performance of deep learning algorithms.
Journal Article
Spatial-temporal graph neural network for traffic forecasting: An overview and open research issues
by Bui, Khac-Hoai Nam; Yi, Hongsuk; Cho, Jiho
in Deep learning, Forecasting, Graph neural networks
2022
Traffic forecasting plays an important role in modern Intelligent Transportation Systems (ITS). With the recent rapid advancement of deep learning, graph neural networks (GNNs) have emerged as a promising approach to the traffic forecasting problem. Specifically, one of the main types of GNN is the spatial-temporal GNN (ST-GNN), which has been applied to various time-series forecasting applications. This study provides an overview of recent ST-GNN models for traffic forecasting. In particular, we propose a new taxonomy of ST-GNNs, dividing existing models into four approaches: graph convolutional recurrent neural networks, fully graph convolutional networks, graph multi-attention networks, and self-learning graph structures. We then present experimental results based on reconstructions of representative models on selected benchmark datasets to evaluate the main contributions of the key components of each type of ST-GNN. Finally, we discuss several open research issues for further investigation.
Journal Article
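The graph convolutional approaches in the abstract above all build on the same primitive: a normalized graph convolution that mixes each node's features with its neighbours'. A minimal sketch, assuming a tiny hypothetical road network (the sensors, features, and weights here are made up for illustration and are not the paper's models):

```python
import numpy as np

def gcn_layer(A, X, W):
    # One graph convolution step: H = D^{-1/2} (A + I) D^{-1/2} X W,
    # i.e. add self-loops, symmetrically normalize by node degree,
    # then aggregate neighbour features and project with W.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Toy network: 3 traffic sensors on a line, 2 features per sensor
# (e.g. speed and flow), projected to 4 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])   # adjacency: sensor 1 links 0 and 2
X = rng.normal(size=(3, 2))    # current readings per sensor
W = rng.normal(size=(2, 4))    # learnable projection (random here)
H = gcn_layer(A, X, W)
print(H.shape)                 # (3, 4): one hidden vector per sensor
```

In the graph convolutional recurrent approach of the taxonomy, a layer like this captures spatial structure while a recurrent cell is stacked on top to model the temporal dimension.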
Deep learning modelling techniques: current progress, applications, advantages, and challenges
2023
Deep learning (DL) is revolutionizing evidence-based decision-making across various sectors. Specifically, it can apply two or more levels of non-linear feature transformation to the given data via representation learning, overcoming limitations posed by large datasets. As DL is a multidisciplinary field still in its nascent phase, articles that survey its architectures across the full scope of the field are rather limited. This paper therefore comprehensively reviews state-of-the-art DL modelling techniques and provides insights into their advantages and challenges. Many of the models exhibit highly domain-specific efficiency and can be trained by two or more methods. However, training DL models can be very time-consuming and expensive, and requires large samples for good accuracy. Since DL is also susceptible to deception and misclassification and tends to get stuck in local minima, improved parameter optimization is required to create more robust models. Nevertheless, DL has already led to groundbreaking results in the healthcare, education, security, commercial, industrial, and government sectors. Some models, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), recurrent neural networks (RNNs), recursive neural networks, and autoencoders, are frequently used, while the potential of other models remains largely unexplored. Pertinently, hybrid DL architectures have the capacity to overcome the challenges faced by conventional models. Considering that capsule architectures may dominate future DL models, this work compiles information for stakeholders involved in the development and use of DL models in the contemporary world.
Journal Article