564,022 result(s) for "Neural networks"
Advanced deep learning with TensorFlow 2 and Keras: apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more
A second edition of the bestselling guide to exploring and mastering deep learning with Keras, updated to include TensorFlow 2.x with new chapters on object detection, semantic segmentation, and unsupervised learning using mutual information.
Neural control engineering: the emerging intersection between control theory and neuroscience
How powerful new methods in nonlinear control engineering can be applied to neuroscience, from fundamental model formulation to advanced medical applications. Over the past sixty years, powerful methods of model-based control engineering have been responsible for such dramatic advances in engineering systems as autolanding aircraft, autonomous vehicles, and even weather forecasting. Over those same decades, our models of the nervous system have evolved from single-cell membranes to neuronal networks to large-scale models of the human brain. Yet until recently, control theory was completely inapplicable to the types of nonlinear models being developed in neuroscience. The revolution in nonlinear control engineering in the late 1990s has made the intersection of control theory and neuroscience possible. In Neural Control Engineering, Steven Schiff seeks to bridge the two fields, examining the application of new methods in nonlinear control engineering to neuroscience. After presenting extensive material on formulating computational neuroscience models in a control environment, including some fundamentals of the algorithms helpful in crossing the divide from intuition to effective application, Schiff examines a range of applications, including brain-machine interfaces and neural stimulation. He reports on research that he and his colleagues have undertaken showing that nonlinear control theory methods can be applied to models of single cells, small neuronal networks, and large-scale networks in the disease states of Parkinson's disease and epilepsy. With Neural Control Engineering, the reader acquires a working knowledge of the fundamentals of control theory and computational neuroscience sufficient not only to understand the literature in this transdisciplinary area but also to begin working to advance the field. The book will serve as an essential guide for scientists in either biology or engineering and for physicians who wish to gain expertise in these areas.
Performance Analysis of Various Activation Functions in Artificial Neural Networks
Research on Artificial Neural Networks (ANNs) has produced many fruitful results, and the choice of activation function is one of the principal factors affecting network performance. This work discusses the roles of many different types of activation functions, along with their respective advantages, disadvantages, and fields of application, so that practitioners can choose appropriate activation functions to obtain superior ANN performance.
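The abstract above concerns activation function choice; as a minimal illustrative sketch (not drawn from the paper itself), three of the most common activation functions can be written directly from their definitions:

```python
import math

# Illustrative definitions of three widely used activation functions.
# Function names here are conventional, not taken from any specific library.

def sigmoid(x: float) -> float:
    # Squashes input into (0, 1); saturates for large |x|, which can
    # slow gradient-based training.
    return 1.0 / (1.0 + math.exp(-x))

def tanh_act(x: float) -> float:
    # Squashes input into (-1, 1); zero-centred, but still saturates.
    return math.tanh(x)

def relu(x: float) -> float:
    # Passes positive inputs unchanged and zeroes out negatives;
    # cheap to compute and non-saturating for x > 0.
    return max(0.0, x)

if __name__ == "__main__":
    for f in (sigmoid, tanh_act, relu):
        print(f.__name__, f(-2.0), f(0.0), f(2.0))
```

The trade-offs noted in the comments (saturation vs. cost) are exactly the kind of advantages and disadvantages the paper surveys.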
Popular deep learning algorithms for disease prediction: a review
Owing to its automatic feature learning and high performance, deep learning has become the mainstream of artificial intelligence in recent years and plays a role in many fields. In the medical field in particular, its accuracy can even exceed that of doctors. This paper introduces several deep learning algorithms: the Artificial Neural Network (NN), FM-Deep Learning, the Convolutional NN, and the Recurrent NN, and expounds their theory, development history, and applications in disease prediction. We analyze the defects in the current disease prediction field and present some current solutions, and we describe two major trends for the future of disease prediction and medicine: integrating Digital Twins and promoting precision medicine. This study can help relevant researchers understand these disease prediction algorithms and then carry out better related research.
Spatial-temporal graph neural network for traffic forecasting: An overview and open research issues
Traffic forecasting plays an important role in modern Intelligent Transportation Systems (ITS). With the recent rapid advancement of deep learning, graph neural networks (GNNs) have emerged as a promising approach to the traffic forecasting problem. Specifically, one of the main types of GNN is the spatial-temporal GNN (ST-GNN), which has been applied to various time-series forecasting applications. This study provides an overview of recent ST-GNN models for traffic forecasting. In particular, we propose a new taxonomy of ST-GNNs that divides existing models into four approaches: graph convolutional recurrent neural networks, fully graph convolutional networks, graph multi-attention networks, and self-learning graph structures. We then present experimental results, based on reconstructions of representative models using selected benchmark datasets, to evaluate the contributions of the key components of each type of ST-GNN. Finally, we discuss several open research issues for further investigation.