Catalogue Search | MBRL
10 result(s) for "Filan, Daniel"
Structure and Representation in Neural Networks
2024
Since neural networks have become dominant within the field of artificial intelligence, a sub-field of research has emerged attempting to understand their inner workings. One standard method within this sub-field has been to understand neural networks primarily as representing human-comprehensible features. Another, less explored, possibility is to understand them as multi-step computer programs. A seeming prerequisite for this is some form of modularity: for different parts of the network to operate independently enough to be understood in isolation, and to implement distinct interpretable sub-routines. To find modular structure inside neural networks, we initially use the tools of graph clustering. A network is clusterable in this sense if it can be divided into groups of neurons with strong internal connectivity but weak external connectivity. We find that a trained neural network is typically more clusterable than randomly initialized networks, and often clusterable relative to random networks with the same distribution of weights as the trained network. We investigate factors that promote clusterability, and also develop novel methods targeted at that end. For modularity to be valuable for understanding neural networks, it needs to have some sort of functional relevance. The type of functional relevance we target is local specialization of functionality. A neural network is locally specialized to the extent that parts of its computational graph can be abstractly represented as performing some comprehensible sub-task relevant to the overall task. We propose two proxies for local specialization: importance, which reflects how valuable sets of neurons are to network performance; and coherence, which reflects how consistently their neurons associate with features of the inputs.
We then operationalize these proxies using techniques conventionally used to interpret individual neurons, applying them instead to groups of neurons produced by graph clustering algorithms. Our results show that clustering succeeds at finding groups of neurons that are important and coherent, although not all groups of neurons found are so. We conclude with a case study that applies more standard interpretability tools, designed to understand the features represented by directions in activation space, to the analysis of neural networks trained on the reward function of the game CoinRun. Despite our networks achieving a low test loss, the interpretability tools show that they do not adequately represent relevant features and badly mispredict reward out of distribution. That said, these tools do not reveal a clear picture of what computation the networks are in fact performing. This not only illustrates but also motivates the need for better interpretability tools to understand generalization behaviour: if we take these networks as models of the 'motivation systems' of policies trained by reinforcement learning, the conclusion is that such networks may competently pursue the wrong objectives when deployed in richer environments, indicating a need for interpretability techniques that shed light on generalization behaviour.
Dissertation
Constrained belief updates explain geometric structures in transformer representations
by Riechers, Paul M; Shai, Adam S; Piotrowski, Mateusz
in Bayesian analysis; Constraints; Markov chains
2025
What computational structures emerge in transformers trained on next-token prediction? In this work, we provide evidence that transformers implement constrained Bayesian belief updating -- a parallelized version of partial Bayesian inference shaped by architectural constraints. We integrate the model-agnostic theory of optimal prediction with mechanistic interpretability to analyze transformers trained on a tractable family of hidden Markov models that generate rich geometric patterns in neural activations. Our primary analysis focuses on single-layer transformers, revealing how the first attention layer implements these constrained updates, with extensions to multi-layer architectures demonstrating how subsequent layers refine these representations. We find that attention carries out an algorithm with a natural interpretation in the probability simplex, and creates representations with distinctive geometric structure. We show how both the algorithmic behavior and the underlying geometry of these representations can be theoretically predicted in detail -- including the attention pattern, OV-vectors, and embedding vectors -- by modifying the equations for optimal future token predictions to account for the architectural constraints of attention. Our approach provides a principled lens on how architectural constraints shape the implementation of optimal prediction, revealing why transformers develop specific intermediate geometric structures.
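The unconstrained Bayesian belief update that serves as the paper's reference point can be sketched in a few lines. This is a minimal illustration of belief updating for a hidden Markov model, with hypothetical toy transition and emission matrices (not values from the paper); each update keeps the belief on the probability simplex.

```python
def belief_update(belief, T, E, obs):
    """One step of Bayesian belief updating for a hidden Markov model.

    belief: prior over hidden states (a point in the probability simplex).
    T[i][j]: probability of transitioning from hidden state i to state j.
    E[j][o]: probability of emitting observation o from hidden state j.
    """
    n = len(belief)
    # Propagate the belief through the transition matrix...
    predicted = [sum(belief[i] * T[i][j] for i in range(n)) for j in range(n)]
    # ...then reweight by the likelihood of the observed token and normalize.
    unnorm = [predicted[j] * E[j][obs] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical 2-state HMM, for illustration only.
T = [[0.9, 0.1], [0.2, 0.8]]
E = [[0.7, 0.3], [0.1, 0.9]]
b = [0.5, 0.5]
for obs in [0, 0, 1]:
    b = belief_update(b, T, E, obs)
print(b)  # posterior over hidden states after the observation sequence
```

The paper's claim is that attention implements a constrained, parallelized version of this sequential update; the sketch above shows only the unconstrained target computation.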
Exploring Hierarchy-Aware Inverse Reinforcement Learning
2018
We introduce a new generative model for human planning under the Bayesian Inverse Reinforcement Learning (BIRL) framework which takes into account the fact that humans often plan using hierarchical strategies. We describe the Bayesian Inverse Hierarchical RL (BIHRL) algorithm for inferring the values of hierarchical planners, and use an illustrative toy model to show that BIHRL retains accuracy where standard BIRL fails. Furthermore, BIHRL is able to accurately predict the goals of 'Wikispeedia' game players, with inclusion of hierarchical structure in the model resulting in a large boost in accuracy. We show that BIHRL is able to significantly outperform BIRL even when we only have a weak prior on the hierarchical structure of the plans available to the agent, and discuss the significant challenges that remain for scaling up this framework to more realistic settings.
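The flat-BIRL baseline that BIHRL extends can be sketched as Bayesian updating over candidate reward functions, with a Boltzmann-rational demonstrator as the likelihood model. This is not the paper's BIHRL algorithm; the reward tables, `beta` value, and trajectory below are hypothetical, and one-step rewards stand in for Q-values to keep the example self-contained.

```python
import math

def birl_posterior(trajectory, rewards, prior, beta=2.0):
    """Posterior over candidate reward functions given (state, action) pairs.

    Likelihood model: the demonstrator picks actions Boltzmann-rationally,
    P(a | s, R) proportional to exp(beta * R[s][a]).
    """
    post = list(prior)
    for s, a in trajectory:
        for k, R in enumerate(rewards):
            z = sum(math.exp(beta * R[s][b]) for b in range(len(R[s])))
            post[k] *= math.exp(beta * R[s][a]) / z
    total = sum(post)
    return [p / total for p in post]

# Two hypothetical reward functions over 2 states x 2 actions.
R_left  = [[1.0, 0.0], [1.0, 0.0]]   # prefers action 0 everywhere
R_right = [[0.0, 1.0], [0.0, 1.0]]   # prefers action 1 everywhere
demo = [(0, 0), (1, 0), (0, 0)]      # demonstrator repeatedly picks action 0
posterior = birl_posterior(demo, [R_left, R_right], prior=[0.5, 0.5])
print(posterior)  # mass shifts toward R_left
```

BIHRL replaces this flat hypothesis space with a prior over hierarchical plans, but the update structure is the same.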
Quantifying Local Specialization in Deep Neural Networks
by Casper, Stephen; Critch, Andrew; Hod, Shlomi
in Artificial neural networks; Clustering; Graph representations
2022
A neural network is locally specialized to the extent that parts of its computational graph (i.e. structure) can be abstractly represented as performing some comprehensible sub-task relevant to the overall task (i.e. functionality). Are modern deep neural networks locally specialized? How can this be quantified? In this paper, we consider the problem of taking a neural network whose neurons are partitioned into clusters, and quantifying how functionally specialized the clusters are. We propose two proxies for this: importance, which reflects how crucial sets of neurons are to network performance; and coherence, which reflects how consistently their neurons associate with features of the inputs. To measure these proxies, we develop a set of statistical methods based on techniques conventionally used to interpret individual neurons. We apply the proxies to partitionings generated by spectrally clustering a graph representation of the network's neurons with edges determined either by network weights or correlations of activations. We show that these partitionings, even ones based only on weights (i.e. strictly from non-runtime analysis), reveal groups of neurons that are important and coherent. These results suggest that graph-based partitioning can reveal local specialization and that statistical methods can be used to automatically screen for sets of neurons that can be understood abstractly.
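The importance proxy can be illustrated by lesioning: zero out a cluster of hidden units and measure how much the loss grows. This is a minimal sketch, not the paper's statistical methodology; the tiny MLP, its weights, and the data below are all hypothetical.

```python
def relu(x):
    return [max(0.0, v) for v in x]

def forward(x, W1, W2, ablated=()):
    """Tiny one-hidden-layer MLP; hidden units in `ablated` are zeroed (lesioned)."""
    hidden = relu([sum(w * xi for w, xi in zip(row, x)) for row in W1])
    for j in ablated:
        hidden[j] = 0.0
    return [sum(w * h for w, h in zip(row, hidden)) for row in W2]

def importance(cluster, inputs, targets, W1, W2):
    """Importance proxy: how much squared error grows when `cluster` is lesioned."""
    def loss(ablated):
        return sum((forward(x, W1, W2, ablated)[0] - t) ** 2
                   for x, t in zip(inputs, targets))
    return loss(cluster) - loss(())

# Hypothetical weights: hidden unit 0 carries the signal, unit 1 is near-dead.
W1 = [[1.0, 1.0], [0.01, -0.01]]
W2 = [[1.0, 0.0]]
xs = [[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]]
ys = [3.0, 3.0, 1.0]
print(importance((0,), xs, ys, W1, W2))  # large: unit 0 matters
print(importance((1,), xs, ys, W1, W2))  # zero: unit 1 is unimportant
```

Applied to clusters found by graph partitioning rather than single units, a lesioning score of this shape is one way to screen for important groups of neurons.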
Pruned Neural Networks are Surprisingly Modular
2022
The learned weights of a neural network are often considered devoid of scrutable internal structure. To discern structure in these weights, we introduce a measurable notion of modularity for multi-layer perceptrons (MLPs), and investigate the modular structure of MLPs trained on datasets of small images. Our notion of modularity comes from the graph clustering literature: a "module" is a set of neurons with strong internal connectivity but weak external connectivity. We find that training and weight pruning produce MLPs that are more modular than randomly initialized ones, and often significantly more modular than random MLPs with the same (sparse) distribution of weights. Interestingly, they are much more modular when trained with dropout. We also present exploratory analyses of the importance of different modules for performance and how modules depend on each other. Understanding the modular structure of neural networks, when such structure exists, will hopefully render their inner workings more interpretable to engineers. Note that this paper has been superseded by "Clusterability in Neural Networks", arxiv:2103.03386 and "Quantifying Local Specialization in Deep Neural Networks", arxiv:2110.08058!
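The graph-clustering notion of a "module" (strong internal, weak external connectivity) suggests a simple score: the fraction of total edge weight that stays inside clusters. This is a simplified proxy, not the paper's exact metric (which is based on spectral clustering of the weight graph); the adjacency matrix below is hypothetical.

```python
def clusterability(adj, partition):
    """Fraction of total edge weight that stays inside clusters.

    adj[i][j]: symmetric, non-negative connection strength between
    neurons i and j; partition[i]: cluster label of neuron i.
    Higher scores indicate stronger internal relative to external
    connectivity, i.e. a more modular partitioning.
    """
    internal = total = 0.0
    n = len(adj)
    for i in range(n):
        for j in range(i + 1, n):
            w = abs(adj[i][j])
            total += w
            if partition[i] == partition[j]:
                internal += w
    return internal / total

# Hypothetical weight graph: two dense blocks bridged by one weak edge.
adj = [
    [0, 5, 5, 0, 0, 1],
    [5, 0, 5, 0, 0, 0],
    [5, 5, 0, 0, 0, 0],
    [0, 0, 0, 0, 5, 5],
    [0, 0, 0, 5, 0, 5],
    [1, 0, 0, 5, 5, 0],
]
print(clusterability(adj, [0, 0, 0, 1, 1, 1]))  # close to 1: highly modular
print(clusterability(adj, [0, 1, 0, 1, 0, 1]))  # much lower: a bad partition
```

Comparing such a score for a trained network against shuffled-weight baselines is the shape of the experiment the abstract describes.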
What would it have looked like if it looked like I were in a superposition?
by Hope, Joseph J; Filan, Daniel
in Quantum mechanics; Quantum theory; Superposition (mathematics)
2015
In this paper we address the question of whether it is possible to obtain evidence that we are in a superposition of different worlds, as suggested by the relative state interpretation of quantum mechanics. We find that it is impossible to find definitive proof, and that if one wishes to retain reliable memories of which world one was in, no evidence at all can be found. We then show that even for completely linear quantum state evolution, there is a test that can be done to tell if you can be placed in a superposition.
Clusterability in Neural Networks
2021
The learned weights of a neural network have often been considered devoid of scrutable internal structure. In this paper, however, we look for structure in the form of clusterability: how well a network can be divided into groups of neurons with strong internal connectivity but weak external connectivity. We find that a trained neural network is typically more clusterable than randomly initialized networks, and often clusterable relative to random networks with the same distribution of weights. We also exhibit novel methods to promote clusterability in neural network training, and find that in multi-layer perceptrons they lead to more clusterable networks with little reduction in accuracy. Understanding and controlling the clusterability of neural networks will hopefully render their inner workings more interpretable to engineers by facilitating partitioning into meaningful clusters.
Loss Bounds and Time Complexity for Speed Priors
2016
This paper establishes for the first time the predictive performance of speed priors and their computational complexity. A speed prior is essentially a probability distribution that puts low probability on strings that are not efficiently computable. We propose a variant to the original speed prior (Schmidhuber, 2002), and show that our prior can predict sequences drawn from probability measures that are estimable in polynomial time. Our speed prior is computable in doubly-exponential time, but not in polynomial time. On a polynomial time computable sequence our speed prior is computable in exponential time. We show better upper complexity bounds for Schmidhuber's speed prior under the same conditions, and that it predicts deterministic sequences that are computable in polynomial time; however, we also show that it is not computable in polynomial time, and the question of its predictive properties for stochastic sequences remains open.
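The length-versus-time trade-off behind speed priors can be written via Levin's Kt complexity. The following is a schematic form for orientation, with illustrative notation, and is not necessarily the exact variant the paper analyzes:

```latex
\[
Kt(x) \;=\; \min_{p \,:\, U(p) = x} \bigl( \ell(p) + \log_2 t(p) \bigr),
\qquad
S_{Kt}(x) \;\propto\; 2^{-Kt(x)},
\]
```

where $U$ is a universal machine, $\ell(p)$ is the length of program $p$ in bits, and $t(p)$ is its running time. Under this scheme, doubling a program's running time costs the same as one extra bit of description length, which is how strings that are not efficiently computable end up with low probability.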
Self-Modification of Policy and Utility Function in Rational Agents
by Daswani, Mayank; Hutter, Marcus; Everitt, Tom
in Actuators; Intelligent agents; Intelligent systems
2016
Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers), will in principle have the ability to self-modify -- for example by changing its own source code. As we continue to create more and more intelligent agents, chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby 'escaping' the control of their designers. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the self-modification possibility is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
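The condition in the final sentence can be sketched as follows, with illustrative notation rather than the paper's exact formalism. A "current-utility" agent scores all futures with today's utility function $u$:

```latex
\[
V_{u}(s) \;=\; \max_{a} \; \mathbb{E}\!\left[\, u(s_{t+1}) + \gamma\, V_{u}(s_{t+1}) \,\middle|\, s_t = s,\ a_t = a \,\right],
\]
```

where the expectation ranges over outcomes including self-modifying actions that would replace $u$ with some $u'$, yet the agent continues to evaluate those futures with the current $u$. By contrast, an agent whose value function plugs in the post-modification utility $u'$ when judging a modified future has an incentive to adopt a trivially maximizable $u'$, which is the failure mode the abstract describes.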
On the Impossibility of Supersized Machines
2017
In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.