Asset Details
Pruned Neural Networks are Surprisingly Modular
by Wild, Cody; Critch, Andrew; Hod, Shlomi; Filan, Daniel; Russell, Stuart
in Clustering / Datasets / Modular structures / Modularity / Modules / Multilayers / Neural networks
2022
Paper
Overview
The learned weights of a neural network are often considered devoid of scrutable internal structure. To discern structure in these weights, we introduce a measurable notion of modularity for multi-layer perceptrons (MLPs), and investigate the modular structure of MLPs trained on datasets of small images. Our notion of modularity comes from the graph clustering literature: a "module" is a set of neurons with strong internal connectivity but weak external connectivity. We find that training and weight pruning produce MLPs that are more modular than randomly initialized ones, and often significantly more modular than random MLPs with the same (sparse) distribution of weights. Interestingly, they are much more modular when trained with dropout. We also present exploratory analyses of the importance of different modules for performance and how modules depend on each other. Understanding the modular structure of neural networks, when such structure exists, will hopefully render their inner workings more interpretable to engineers. Note that this paper has been superseded by "Clusterability in Neural Networks" (arXiv:2103.03386) and "Quantifying Local Specialization in Deep Neural Networks" (arXiv:2110.08058).
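The graph-clustering notion of modularity mentioned in the overview can be illustrated with Newman's modularity score Q: a partition scores highly when edge weight concentrates inside modules beyond what node degrees alone would predict. This is a minimal sketch in NumPy, not the paper's actual pipeline (the authors apply clustering to a pruned network's weight graph); the toy graph and partitions below are invented for illustration.

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a partition of a weighted, undirected graph.

    adj: symmetric (n, n) array of non-negative edge weights
    labels: length-n sequence assigning each node to a module
    """
    adj = np.asarray(adj, dtype=float)
    labels = np.asarray(labels)
    two_m = adj.sum()                         # total edge weight, counted twice
    degrees = adj.sum(axis=1)
    same = labels[:, None] == labels[None, :]  # pairs in the same module
    expected = np.outer(degrees, degrees) / two_m
    return ((adj - expected) * same).sum() / two_m

# Hypothetical toy graph: two dense 3-node blocks joined by one weak edge,
# mimicking "strong internal, weak external" connectivity.
block = np.ones((3, 3)) - np.eye(3)
adj = np.zeros((6, 6))
adj[:3, :3] = block
adj[3:, 3:] = block
adj[0, 3] = adj[3, 0] = 0.1                   # weak cross-module connection

good = modularity(adj, [0, 0, 0, 1, 1, 1])    # matches the block structure
bad = modularity(adj, [0, 1, 0, 1, 0, 1])     # ignores it
assert good > bad
```

The partition aligned with the blocks scores well above zero, while the mismatched partition scores negative, which is the signal the paper's clustering-based analysis looks for in trained, pruned MLPs.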
Publisher
Cornell University Library, arXiv.org