Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
370 result(s) for "Wayne, Greg"
Batman : no man's land. 2
After suffering a cataclysmic earthquake, the U.S. government has deemed Gotham City uninhabitable and ordered all citizens to leave. It is now months later and those that have refused to vacate "No Man's Land" live amidst a citywide turf war in which the strongest prey on the weak. Batman and his allies continue their fight to save Gotham during its darkest hour.
Hierarchical motor control in mammals and machines
2019
Advances in artificial intelligence are stimulating interest in neuroscience. However, most attention is given to discrete tasks with simple action spaces, such as board games and classic video games. Less discussed in neuroscience are parallel advances in “synthetic motor control”. While motor neuroscience has recently focused on optimization of single, simple movements, AI has progressed to the generation of rich, diverse motor behaviors across multiple tasks, at humanoid scale. It is becoming clear that specific, well-motivated hierarchical design elements repeatedly arise when engineering these flexible control systems. We review these core principles of hierarchical control, relate them to hierarchy in the nervous system, and highlight research themes that we anticipate will be critical in solving challenges at this disciplinary intersection.
Recent research in motor neuroscience has focused on optimal feedback control of single, simple tasks while robotics and AI are making progress towards flexible movement control in complex environments employing hierarchical control strategies. Here, the authors argue for a return to hierarchical models of motor control in neuroscience.
Journal Article
Superman : President Luthor
His fame bolstered after helping to rebuild Gotham City after an earthquake, billionaire Lex Luthor decides to run for the highest office in the land, the American presidency.
Toward an Integration of Deep Learning and Neuroscience
by Kording, Konrad P., Wayne, Greg, Marblestone, Adam H.
in Artificial intelligence, Back propagation, Circuits
2016
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Journal Article
Optimizing agent behavior over long time scales by transporting value
2019
Humans prolifically engage in mental time travel. We dwell on past actions and experience satisfaction or regret. More than storytelling, these recollections change how we act in the future and endow us with a computationally important ability to link actions and consequences across spans of time, which helps address the problem of long-term credit assignment: the question of how to evaluate the utility of actions within a long-duration behavioral sequence. Existing approaches to credit assignment in AI cannot solve tasks with long delays between actions and consequences. Here, we introduce a paradigm where agents use recall of specific memories to credit past actions, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire models in neuroscience, psychology, and behavioral economics.
People are able to mentally time travel to distant memories and reflect on the consequences of those past events. Here, the authors show how a mechanism that connects learning from delayed rewards with memory retrieval can enable AI agents to discover links between past events to help decide better courses of action in the future.
Journal Article
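The memory-based credit assignment described in the abstract above can be illustrated with a toy sketch. This is not the paper's algorithm: here, when a delayed reward arrives, the agent simply retrieves the most similar remembered state (cosine similarity is an assumption) and credits the reward directly to that step, bypassing discounting across the delay.

```python
import numpy as np

def transport_value(episode, cue, reward, values):
    """Credit a distant past step by episodic retrieval.

    episode: list of remembered state vectors, in visit order
    cue:     state the reward-time retrieval attends to
    reward:  scalar arriving long after the responsible action
    values:  per-step value estimates, updated in place
    The reward is 'transported' straight to the most similar remembered
    step, sidestepping discounting across the intervening delay."""
    sims = [float(s @ cue / (np.linalg.norm(s) * np.linalg.norm(cue)))
            for s in episode]
    t = int(np.argmax(sims))          # retrieved step
    values[t] += reward               # credit it directly
    return t

episode = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]
values = [0.0, 0.0, 0.0]
t = transport_value(episode, cue=np.array([0.9, 0.1]), reward=1.0, values=values)
# retrieves step 0, the state most similar to the cue, and credits it
```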
Vector-based navigation using grid-like representations in artificial agents
by Lillicrap, Timothy, Sadik, Amir, Hadsell, Raia
in 631/378/116/2396, 639/705/117, Agents (artificial intelligence)
2018
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3–5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments—optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
Grid-like representations emerge spontaneously within a neural network trained to self-localize, enabling the agent to take shortcuts to destinations using vector-based navigation.
Journal Article
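The claim that a multi-scale periodic code "functions as a metric for coding space" can be made concrete with a toy 1-D example. Everything here (the spacings, the brute-force decoder) is an illustrative assumption, not the paper's method: each scale constrains a displacement modulo its spacing, and combining scales pins it down uniquely.

```python
import numpy as np

SCALES = np.array([3.0, 5.0, 7.0])     # illustrative grid spacings

def phases(x):
    """Phase of 1-D position x at each grid scale (a toy 'grid code')."""
    return np.mod(x / SCALES, 1.0)

def decode_displacement(code_a, code_b, max_range=100):
    """Recover the displacement b - a from phase differences alone.

    Each scale constrains the displacement modulo its spacing; together
    the scales pin it down uniquely within their joint period
    (3 * 5 * 7 = 105), which is the sense in which a multi-scale
    periodic code supplies a metric for space."""
    dphase = np.mod(code_b - code_a, 1.0)
    best, best_err = 0, np.inf
    for d in range(max_range):
        e = np.mod(d / SCALES - dphase, 1.0)
        err = np.sum(np.minimum(e, 1.0 - e))   # circular phase error
        if err < best_err:
            best, best_err = d, err
    return best

a, b = 12, 46
d = decode_displacement(phases(a), phases(b))   # recovers 34
```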
A temporal basis for predicting the sensory consequences of motor commands in an electric fish
by Kaifosh, Patrick, Alviña, Karina, Wayne, Greg
in 631/378/116/2395, 631/378/2591, 631/378/2629
2014
To adaptively navigate their environments organisms need to predict and cancel out the sensory consequences of their actions. Here the authors show that granule cells within the cerebellum-like structure of weakly electric fish have delayed responses that closely match the timing of self-generated sensory inputs. This enables corollary discharges to be transformed into negative images that are well-tuned to the animal's own behavior.
Mormyrid electric fish are a model system for understanding how neural circuits predict the sensory consequences of motor acts. Medium ganglion cells in the electrosensory lobe create negative images that predict sensory input resulting from the fish's electric organ discharge (EOD). Previous studies have shown that negative images can be created through plasticity at granule cell–medium ganglion cell synapses, provided that granule cell responses to the brief EOD command are sufficiently varied and prolonged. Here we show that granule cells indeed provide such a temporal basis and that it is well-matched to the temporal structure of self-generated sensory inputs, allowing rapid and accurate sensory cancellation and explaining paradoxical features of negative images. We also demonstrate an unexpected and critical role of unipolar brush cells (UBCs) in generating the required delayed responses. These results provide a mechanistic account of how copies of motor commands are transformed into sensory predictions.
Journal Article
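The plasticity account in the abstract above, a delayed temporal basis whose synaptic weights adapt until the prediction cancels the self-generated input, can be sketched numerically. All specifics here (basis shapes, learning rate, a gradient-style rule standing in for the anti-Hebbian plasticity) are illustrative assumptions.

```python
import numpy as np

T = 100
t = np.arange(T, dtype=float)

# A bank of delayed, temporally varied "granule-cell-like" responses to
# the motor command: Gaussian bumps tiling the post-command interval.
centers = np.linspace(5, 80, 20)
basis = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 5.0) ** 2)  # (20, T)

# Self-generated sensory input that the circuit should learn to cancel.
signal = np.exp(-0.5 * ((t - 30.0) / 8.0) ** 2)

w = np.zeros(len(centers))
lr = 0.2
for _ in range(2000):
    residual = signal - basis.T @ w   # what the cell still "feels"
    # Gradient-style stand-in for anti-Hebbian plasticity: grow the
    # weight of each basis element where residual input remains.
    w += (lr / T) * (basis @ residual)

negative_image = basis.T @ w          # learned prediction of the input
residual = signal - negative_image    # largely cancelled after learning
```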
A deep learning framework for neuroscience
by Roelfsema, Pieter, Kriegeskorte, Nikolaus, Schapiro, Anna C.
in Artificial intelligence, Artificial neural networks, Brain
2019
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
Journal Article
Hybrid computing using a neural network with dynamic external memory
by Cain, Adam, Danihelka, Ivo, Badia, Adrià Puigdomènech
in 631/378/116/1925, 631/378/116/2396, Computer memory
2016
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
A ‘differentiable neural computer’ is introduced that combines the learning capabilities of a neural network with an external memory analogous to the random-access memory in a conventional computer.
A neural network/computer program hybrid
Conventional computer algorithms can process extremely large and complex data structures such as the worldwide web or social networks, but they must be programmed manually by humans. Neural networks can learn from examples to recognize complex patterns, but they cannot easily parse and organize complex data structures. Now Alex Graves, Greg Wayne and colleagues have developed a hybrid learning machine, called a differentiable neural computer (DNC), that is composed of a neural network that can read from and write to an external memory structure analogous to the random-access memory in a conventional computer. The DNC can thus learn to plan routes on the London Underground, and to achieve goals in a block puzzle, merely by trial and error—without prior knowledge or ad hoc programming for such tasks.
Journal Article
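The content-based addressing at the heart of the DNC's external memory can be sketched minimally. This is a toy illustration of cosine-similarity lookup over a memory matrix, not the published architecture, which adds write gates, temporal links and usage-based allocation.

```python
import numpy as np

def content_lookup(memory, key, beta):
    """Content-based addressing over an external memory matrix.

    memory: (N, W) array of N slots, each a W-dim word
    key:    (W,) query vector emitted by the controller network
    beta:   scalar sharpness; higher values concentrate the read
    Returns a softmax weighting over the N slots."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms                 # cosine similarity per slot
    w = np.exp(beta * (sim - sim.max()))       # numerically stable softmax
    return w / w.sum()

rng = np.random.default_rng(0)
memory = rng.standard_normal((16, 8))
key = memory[3] + 0.01 * rng.standard_normal(8)   # query close to slot 3
w = content_lookup(memory, key, beta=10.0)
read_vector = w @ memory                           # weighted read-out
```

Reading is a weighted sum of slots under this addressing; writing blends new content into the same weighted locations.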
A virtual rodent predicts the structure of neural activity across behaviours
by Aldarondo, Diego, Gellis, Amanda, Ölveczky, Bence P.
in 631/378/116/1925, 631/378/116/2392, 631/378/2632
2024
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviours. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. Here, to facilitate this, we built a ‘virtual rodent’, in which an artificial neural network actuates a biomechanically realistic model of the rat [1] in a physics simulator [2]. We used deep reinforcement learning [3–5] to train the virtual agent to imitate the behaviour of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behaviour. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent’s network activity than by any features of the real rat’s movements, consistent with both regions implementing inverse dynamics [6]. Furthermore, the network’s latent variability predicted the structure of neural variability across behaviours and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control [7]. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behaviour and relate it to theoretical principles of motor control.
We built an artificial neural network to control a biomechanically realistic virtual rodent, which, when trained to imitate real rats, predicts neural activity and variability across natural behaviours.
Journal Article
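The phrase "implementing inverse dynamics" in the abstract above refers to mapping a desired movement to the forces that produce it. A minimal sketch for a 1-D point mass with viscous damping, where all parameters and the target trajectory are illustrative assumptions:

```python
import numpy as np

def inverse_dynamics(m, b, v, a_desired):
    """Force needed to realize a desired acceleration for a 1-D point
    mass with viscous damping:  m * a = u - b * v  =>  u = m * a + b * v."""
    return m * a_desired + b * v

# Track x(t) = sin(t) by commanding its acceleration through the plant.
m, b, dt = 1.0, 0.5, 0.001
x, v = 0.0, 1.0                       # start on the target trajectory
max_err = 0.0
steps = int(2.0 / dt)
for k in range(steps):
    t = k * dt
    u = inverse_dynamics(m, b, v, a_desired=-np.sin(t))
    a = (u - b * v) / m               # plant dynamics: damping cancelled
    v += a * dt
    x += v * dt
    max_err = max(max_err, abs(x - np.sin(t + dt)))
# max_err stays small: the controller reproduces the target movement
```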