Asset Details
Modeling the Influence of Data Structure on Learning in Neural Networks: The Hidden Manifold Model
Journal Article
by Goldt, Sebastian; Krzakala, Florent; Mézard, Marc; Zdeborová, Lenka
in Algorithms / Artificial neural networks / Computer Science / Condensed Matter / Data structures / Datasets / Deep learning / Differential equations / Disordered Systems and Neural Networks / Equivalence principle / Generative adversarial networks / Image classification / Machine Learning / Manifolds (mathematics) / Natural language processing / Neural networks / Other Statistics / Physics / Statistical analysis / Statistical Mechanics / Statistical models / Statistics / Training
2020
Overview
Understanding the reasons for the success of deep neural networks trained using stochastic gradient-based methods is a key open problem for the nascent theory of deep learning. The types of data where these networks are most successful, such as images or sequences of speech, are characterized by intricate correlations. Yet, most theoretical work on neural networks does not explicitly model training data, or assumes that elements of each data sample are drawn independently from some factorized probability distribution. These approaches are thus, by construction, blind to the correlation structure of real-world datasets and its impact on learning in neural networks.

Here, we introduce a generative model for structured datasets that we call the hidden manifold model. The idea is to construct high-dimensional inputs that lie on a lower-dimensional manifold, with labels that depend only on their position within this manifold, akin to a single-layer decoder or generator in a generative adversarial network. We demonstrate that learning of the hidden manifold model is amenable to an analytical treatment by proving a “Gaussian equivalence property” (GEP), and we use the GEP to show how the dynamics of two-layer neural networks trained using one-pass stochastic gradient descent are captured by a set of integro-differential equations that track the performance of the network at all times. This approach permits us to analyze in detail how a neural network learns functions of increasing complexity during training, how its performance depends on its size, and how it is impacted by parameters such as the learning rate or the dimension of the hidden manifold.
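To make the construction above concrete, here is a minimal sketch of the hidden manifold model in NumPy: latent coordinates on a low-dimensional manifold are pushed through a fixed random feature map, which plays the role of the single-layer generator, and labels are computed from the latent coordinates alone. The specific choices below, such as the tanh nonlinearity, the sign teacher on a random latent direction, the dimensions, and the learning-rate scaling in the one-pass SGD loop, are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of the hidden manifold model (HMM) and one-pass SGD.
# All concrete choices (dimensions, nonlinearities, teacher, learning
# rate) are assumptions for illustration, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

N, D, P = 500, 10, 20_000  # input dim, manifold dim (D << N), sample count

# Fixed random feature matrix: the single-layer "generator" that maps
# the hidden manifold into the high-dimensional input space.
F = rng.standard_normal((D, N))

# Latent coordinates on the hidden manifold, drawn i.i.d. Gaussian.
C = rng.standard_normal((P, D))

# High-dimensional inputs: an elementwise nonlinearity of the projected
# latents (tanh is an assumption; any fixed nonlinearity fits).
X = np.tanh(C @ F / np.sqrt(D))

# Labels depend only on the position within the manifold, here through
# a random latent "teacher" direction (a hypothetical choice).
w_star = rng.standard_normal(D)
y = np.sign(C @ w_star / np.sqrt(D))

# One-pass (online) SGD on a two-layer network with K hidden units:
# each sample is used exactly once, matching the online setting whose
# dynamics the integro-differential equations describe.
K, lr = 3, 0.05
W = rng.standard_normal((K, N)) / np.sqrt(N)  # first-layer weights
v = rng.standard_normal(K) / np.sqrt(K)       # second-layer weights

for x, t in zip(X, y):
    h = np.tanh(W @ x / np.sqrt(N))   # hidden activations
    err = v @ h - t                   # residual of the squared loss
    # Gradients of 0.5 * err**2 with respect to v and W.
    grad_v = err * h
    grad_W = err * np.outer(v * (1.0 - h**2), x) / np.sqrt(N)
    v -= lr * grad_v / N              # scaled-down rate for the top layer,
    W -= lr * grad_W                  # a pragmatic choice in this sketch

frac_wrong = np.mean(np.sign(np.tanh(X @ W.T / np.sqrt(N)) @ v) != y)
print(f"sign error on the training stream after one pass: {frac_wrong:.3f}")
```

Note that the sketch only generates data and trains a network; the analytical step in the paper, where the Gaussian equivalence property lets the correlated inputs be replaced by Gaussian vectors with matched first and second moments, is what turns this stochastic process into the deterministic integro-differential equations mentioned above.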
Publisher
American Physical Society