Catalogue Search | MBRL
Explore the vast range of titles available.
28 result(s) for "Pritzel, Alexander"
Highly accurate protein structure prediction for the human proteome
by Nikolov, Stanislav; Senior, Andrew W.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/1647/2067
2021
Protein structures can provide invaluable information, both for reasoning about biological processes and for enabling interventions such as structure-based drug development or targeted mutagenesis. After decades of effort, 17% of the total residues in human protein sequences are covered by an experimentally determined structure [1]. Here we markedly expand the structural coverage of the proteome by applying the state-of-the-art machine learning method, AlphaFold [2], at a scale that covers almost the entire human proteome (98.5% of human proteins). The resulting dataset covers 58% of residues with a confident prediction, of which a subset (36% of all residues) have very high confidence. We introduce several metrics developed by building on the AlphaFold model and use them to interpret the dataset, identifying strong multi-domain predictions as well as regions that are likely to be disordered. Finally, we provide some case studies to illustrate how high-quality predictions could be used to generate biological hypotheses. We are making our predictions freely available to the community and anticipate that routine large-scale and high-accuracy structure prediction will become an important tool that will allow new questions to be addressed from a structural perspective.
AlphaFold is used to predict the structures of almost all of the proteins in the human proteome—the availability of high-confidence predicted structures could enable new avenues of investigation from a structural perspective.
Journal Article
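The confidence-band breakdown this abstract reports (58% of residues confidently predicted, 36% very high confidence) can be illustrated with a small sketch. The function, the toy scores, and the thresholds below are hypothetical placeholders, not values or code from the paper's dataset:

```python
import numpy as np

# Illustrative sketch: split per-residue confidence scores into bands.
# Thresholds and scores are made up for illustration only.
def confidence_bands(scores, confident=70.0, very_high=90.0):
    scores = np.asarray(scores, dtype=float)
    frac_confident = np.mean(scores >= confident)   # fraction with a confident prediction
    frac_very_high = np.mean(scores >= very_high)   # subset with very high confidence
    return frac_confident, frac_very_high

toy_scores = [95.2, 88.1, 72.4, 55.0, 91.7, 40.3]  # toy per-residue scores
conf, very = confidence_bands(toy_scores)
```

The "very high" band is by construction a subset of the "confident" band, mirroring how the abstract nests its two coverage figures.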
Vector-based navigation using grid-like representations in artificial agents
by Lillicrap, Timothy; Sadik, Amir; Hadsell, Raia
in 631/378/116/2396; 639/705/117; Agents (artificial intelligence)
2018
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3–5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments—optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
Grid-like representations emerge spontaneously within a neural network trained to self-localize, enabling the agent to take shortcuts to destinations using vector-based navigation.
Journal Article
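The path-integration task this abstract describes — estimating position by accumulating self-motion signals, without external location cues — reduces to a simple dead-reckoning update. The toy integrator below stands in for the recurrent network the paper actually trains; the names and step size are illustrative:

```python
import numpy as np

# Minimal sketch of path integration: accumulate velocity signals over
# time to estimate position. A stand-in for the paper's trained recurrent
# network; dt and the inputs are arbitrary illustrative values.
def path_integrate(start, velocities, dt=0.1):
    pos = np.asarray(start, dtype=float)
    trajectory = [pos.copy()]
    for v in velocities:
        pos = pos + dt * np.asarray(v, dtype=float)  # dead-reckoning update
        trajectory.append(pos.copy())
    return np.stack(trajectory)

traj = path_integrate([0.0, 0.0], [[1.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
```

Small per-step errors compound in such an integrator, which is one reason a multi-scale periodic code like that of grid cells is thought to be useful for keeping the estimate stable.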
Accurate structure prediction of biomolecular interactions with AlphaFold 3
by O’Neill, Michael; Low, Caroline M. R.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/154
2024
The introduction of AlphaFold 2 [1] has spurred a revolution in modelling the structure of proteins and their interactions, enabling a huge range of applications in protein modelling and design [2–6]. Here we describe our AlphaFold 3 model with a substantially updated diffusion-based architecture that is capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues. The new AlphaFold model demonstrates substantially improved accuracy over many previous specialized tools: far greater accuracy for protein–ligand interactions compared with state-of-the-art docking tools, much higher accuracy for protein–nucleic acid interactions compared with nucleic-acid-specific predictors and substantially higher antibody–antigen prediction accuracy compared with AlphaFold-Multimer v.2.3 [7,8]. Together, these results show that high-accuracy modelling across biomolecular space is possible within a single unified deep-learning framework.
AlphaFold 3 has a substantially updated architecture that is capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues with greatly improved accuracy over many previous specialized tools.
Journal Article
Highly accurate protein structure prediction with AlphaFold
by Nikolov, Stanislav; Senior, Andrew W.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/535
2021
Proteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function. Through an enormous experimental effort [1–4], the structures of around 100,000 unique proteins have been determined [5], but this represents a small fraction of the billions of known protein sequences [6,7]. Structural coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics. Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure prediction component of the ‘protein folding problem’ [8]—has been an important open research problem for more than 50 years [9]. Despite recent progress [10–14], existing methods fall far short of atomic accuracy, especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14) [15], demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge about protein structure, leveraging multi-sequence alignments, into the design of the deep learning algorithm.
AlphaFold predicts protein structures with an accuracy competitive with experimental structures in the majority of cases using a novel deep learning architecture.
Journal Article
Normalizing flows for atomic solids
by Wirnsberger, Peter; Ballard, Andrew J.; Papamakarios, George
in atomic solids; Estimates; Free energy
2022
We present a machine-learning approach, based on normalizing flows, for modelling atomic solids. Our model transforms an analytically tractable base distribution into the target solid without requiring ground-truth samples for training. We report Helmholtz free energy estimates for cubic and hexagonal ice modelled as monatomic water as well as for a truncated and shifted Lennard-Jones system, and find them to be in excellent agreement with literature values and with estimates from established baseline methods. We further investigate structural properties and show that the model samples are nearly indistinguishable from the ones obtained with molecular dynamics. Our results thus demonstrate that normalizing flows can provide high-quality samples and free energy estimates without the need for multi-staging.
Journal Article
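The core idea this abstract relies on — pushing a tractable base density through an invertible map and tracking log-probabilities via the change-of-variables formula — can be sketched in a few lines. A single affine map stands in for the paper's learned flow here; all names and values are illustrative, not the authors' code:

```python
import numpy as np

# Sketch of the normalizing-flow idea: a tractable base distribution
# (standard Gaussian) is transformed by an invertible map, and the
# log-determinant of the Jacobian corrects the density. Toy values only.
def base_logprob(z):
    z = np.asarray(z, dtype=float)
    return -0.5 * np.sum(z**2) - 0.5 * z.size * np.log(2 * np.pi)

def affine_flow(z, scale, shift):
    z = np.asarray(z, dtype=float)
    x = scale * z + shift                      # invertible transform
    log_det = z.size * np.log(abs(scale))      # log |det Jacobian|
    return x, log_det

z = np.array([0.3, -1.2])
x, log_det = affine_flow(z, scale=2.0, shift=0.5)
log_px = base_logprob(z) - log_det             # exact density of the sample x
```

Because the density of every sample is known exactly, estimators built on such a flow can target free energies directly, without the multi-stage sampling the abstract says the method avoids.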
Topological Model for Domain Walls in (Super-)Yang-Mills Theories
2014
We derive a topological action that describes the confining phase of (Super-)Yang-Mills theories with gauge group SU(N), similar to the work recently carried out by Seiberg and collaborators. It encodes all the Aharonov-Bohm phases of the possible non-local operators and phases generated by the intersection of flux tubes. Within this topological framework we show that the worldvolume theory of domain walls contains a Chern-Simons term at level N, which is also seen in string theory constructions. The discussion can also illuminate dynamical differences of domain walls in the supersymmetric and non-supersymmetric framework. Two further analogies, to string theory and the fractional quantum Hall effect, might lead to additional possibilities to investigate the dynamics.
MultiScale MeshGraphNets
by Wirnsberger, Peter; Pfaff, Tobias; Battaglia, Peter
in Accuracy; Computer simulation; Finite element method
2022
In recent years, there has been a growing interest in using machine learning to overcome the high cost of numerical simulation, with some learned models achieving impressive speed-ups over classical solvers whilst maintaining accuracy. However, these methods are usually tested at low-resolution settings, and it remains to be seen whether they can scale to the costly high-resolution simulations that we ultimately want to tackle. In this work, we propose two complementary approaches to improve the framework from MeshGraphNets, which demonstrated accurate predictions in a broad range of physical systems. MeshGraphNets relies on a message passing graph neural network to propagate information, and this structure becomes a limiting factor for high-resolution simulations, as equally distant points in space become further apart in graph space. First, we demonstrate that it is possible to learn accurate surrogate dynamics of a high-resolution system on a much coarser mesh, both removing the message passing bottleneck and improving performance; and second, we introduce a hierarchical approach (MultiScale MeshGraphNets) which passes messages on two different resolutions (fine and coarse), significantly improving the accuracy of MeshGraphNets while requiring less computational resources.
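The bottleneck this abstract describes — information travelling only one edge per message-passing round, so fine meshes need many rounds — is visible in even a toy graph. The update rule and graph below are illustrative stand-ins, not the paper's network:

```python
import numpy as np

# Toy synchronous message-passing step: each node mixes its own state
# with the mean of its neighbours' states. Graph and rule are made up
# to illustrate the one-edge-per-round propagation limit.
def message_passing_step(states, edges):
    states = np.asarray(states, dtype=float)
    new_states = states.copy()
    for node in range(len(states)):
        neighbours = [j for i, j in edges if i == node] + \
                     [i for i, j in edges if j == node]
        if neighbours:
            msg = states[neighbours].mean(axis=0)   # aggregate incoming messages
            new_states[node] = 0.5 * (states[node] + msg)
    return new_states

# a path graph 0-1-2-3: node 0's signal needs three rounds to reach node 3
states = np.array([[1.0], [0.0], [0.0], [0.0]])
edges = [(0, 1), (1, 2), (2, 3)]
out = message_passing_step(states, edges)
```

After one step node 3 is still unchanged, since its only neighbour has not yet received anything; a coarser mesh shortens these graph distances, which is the intuition behind the multiscale variant.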
Localization of gauge fields and Maxwell-Chern-Simons theory
2011
We propose an explicit model, where an axionic domain wall dynamically localizes a U(1)-component of a nonabelian gauge theory living in a 3+1 dimensional bulk. The effective theory on the wall is 2+1d Maxwell-Chern-Simons theory with a compact U(1) gauge group. This setup allows us to understand all key properties of MCS theory in terms of the dynamics of the underlying 3+1 dimensional gauge theory. Our findings can also shed some light on branes in supersymmetric gluodynamics.