Catalogue Search | MBRL
50 result(s) for "Adler, Jonas"
Highly accurate protein structure prediction for the human proteome
by Nikolov, Stanislav; Senior, Andrew W.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/1647/2067
2021
Protein structures can provide invaluable information, both for reasoning about biological processes and for enabling interventions such as structure-based drug development or targeted mutagenesis. After decades of effort, 17% of the total residues in human protein sequences are covered by an experimentally determined structure [1]. Here we markedly expand the structural coverage of the proteome by applying the state-of-the-art machine learning method, AlphaFold [2], at a scale that covers almost the entire human proteome (98.5% of human proteins). The resulting dataset covers 58% of residues with a confident prediction, of which a subset (36% of all residues) have very high confidence. We introduce several metrics developed by building on the AlphaFold model and use them to interpret the dataset, identifying strong multi-domain predictions as well as regions that are likely to be disordered. Finally, we provide some case studies to illustrate how high-quality predictions could be used to generate biological hypotheses. We are making our predictions freely available to the community and anticipate that routine large-scale and high-accuracy structure prediction will become an important tool that will allow new questions to be addressed from a structural perspective.
AlphaFold is used to predict the structures of almost all of the proteins in the human proteome—the availability of high-confidence predicted structures could enable new avenues of investigation from a structural perspective.
Journal Article
Exploiting prior knowledge about biological macromolecules in cryo-EM structure determination
by Kimanius, Dari; Nakane, Takanori; Lunz, Sebastian
in 3D reconstruction; Artificial neural networks; cryo-electron microscopy
2021
Three-dimensional reconstruction of the electron-scattering potential of biological macromolecules from electron cryo-microscopy (cryo-EM) projection images is an ill-posed problem. The most popular cryo-EM software solutions to date rely on a regularization approach that is based on the prior assumption that the scattering potential varies smoothly over three-dimensional space. Although this approach has been hugely successful in recent years, the amount of prior knowledge that it exploits compares unfavorably with the knowledge about biological structures that has been accumulated over decades of research in structural biology. Here, a regularization framework for cryo-EM structure determination is presented that exploits prior knowledge about biological structures through a convolutional neural network that is trained on known macromolecular structures. This neural network is inserted into the iterative cryo-EM structure-determination process through an approach that is inspired by regularization by denoising. It is shown that the new regularization approach yields better reconstructions than the current state of the art for simulated data, and options to extend this work for application to experimental cryo-EM data are discussed.
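The regularization-by-denoising idea described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the cryo-EM projection model is replaced by a small random matrix, and the CNN trained on known macromolecular structures is replaced by a simple moving-average smoother (all names and parameter values below are invented for this sketch):

```python
import numpy as np

def smooth(x):
    """Stand-in denoiser: a 3-tap moving average (the paper instead uses
    a convolutional network trained on known macromolecular structures)."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def red_reconstruct(y, A, denoise, n_iter=200, step=0.1, lam=0.5):
    """Regularization-by-denoising style iteration: the data-fidelity
    gradient is combined with the denoiser residual x - D(x)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g_data = A.T @ (A @ x - y)   # gradient of 0.5 * ||Ax - y||^2
        g_prior = x - denoise(x)     # RED prior gradient
        x = x - step * (g_data + lam * g_prior)
    return x

# Toy problem: a smooth 1-D "structure" seen through a random operator.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 10)) / np.sqrt(20)
x_true = np.sin(np.linspace(0, np.pi, 10))
y = A @ x_true
x_rec = red_reconstruct(y, A, smooth)
```

The key design point visible even in this toy is that the prior enters only through calls to the denoiser, so a trained network can be swapped in without changing the surrounding iteration.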
Journal Article
Highly accurate protein structure prediction with AlphaFold
by Nikolov, Stanislav; Senior, Andrew W.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/535
2021
Proteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function. Through an enormous experimental effort [1–4], the structures of around 100,000 unique proteins have been determined [5], but this represents a small fraction of the billions of known protein sequences [6,7]. Structural coverage is bottlenecked by the months to years of painstaking effort required to determine a single protein structure. Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics. Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure prediction component of the ‘protein folding problem’ [8]—has been an important open research problem for more than 50 years [9]. Despite recent progress [10–14], existing methods fall far short of atomic accuracy, especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14) [15], demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods. Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge about protein structure, leveraging multi-sequence alignments, into the design of the deep learning algorithm.
AlphaFold predicts protein structures with an accuracy competitive with experimental structures in the majority of cases using a novel deep learning architecture.
Journal Article
Accurate structure prediction of biomolecular interactions with AlphaFold 3
by O’Neill, Michael; Low, Caroline M. R.; Zielinski, Michal
in 631/114/1305; 631/114/2411; 631/154
2024
The introduction of AlphaFold 2 [1] has spurred a revolution in modelling the structure of proteins and their interactions, enabling a huge range of applications in protein modelling and design [2–6]. Here we describe our AlphaFold 3 model with a substantially updated diffusion-based architecture that is capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues. The new AlphaFold model demonstrates substantially improved accuracy over many previous specialized tools: far greater accuracy for protein–ligand interactions compared with state-of-the-art docking tools, much higher accuracy for protein–nucleic acid interactions compared with nucleic-acid-specific predictors and substantially higher antibody–antigen prediction accuracy compared with AlphaFold-Multimer v.2.3 [7,8]. Together, these results show that high-accuracy modelling across biomolecular space is possible within a single unified deep-learning framework.
AlphaFold 3 has a substantially updated architecture that is capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues with greatly improved accuracy over many previous specialized tools.
Journal Article
Waste Heat Recovery from a High Temperature Diesel Engine
2017
Government-mandated improvements in fuel economy and emissions from internal combustion engines (ICEs) are driving innovation in engine efficiency. Though incremental efficiency gains have been achieved, most combustion engines are still only 30–40% efficient at best, with most of the remaining fuel energy being rejected to the environment as waste heat through engine coolant and exhaust gases. Attempts have been made to harness this waste heat and use it to drive a Rankine cycle and produce additional work to improve efficiency. Research on waste heat recovery (WHR) demonstrates that it is possible to improve overall efficiency by converting wasted heat into usable work, but relative gains in overall efficiency are typically minimal (~5–8%) and often do not justify the cost and space requirements of a WHR system. The primary limitation of the current state-of-the-art in WHR is the low temperature of the engine coolant (~90 °C), which minimizes the WHR from a heat source that represents between 20% and 30% of the fuel energy. The current research proposes increasing the engine coolant temperature to improve the utilization of coolant waste heat as one possible path to achieving greater WHR system effectiveness. An experiment was performed to evaluate the effects of running a diesel engine at elevated coolant temperatures and to estimate the efficiency benefits. An energy balance was performed on a modified 3-cylinder diesel engine at six different coolant temperatures (90 °C, 100 °C, 125 °C, 150 °C, 175 °C, and 200 °C) to determine the change in quantity and quality of waste heat as the coolant temperature increased. The waste heat was measured using the flow rates and temperature differences of the coolant, engine oil, and exhaust flow streams into and out of the engine. 
Custom cooling and engine oil systems were fabricated to provide adequate adjustment to achieve target coolant and oil temperatures and large enough temperature differences across the engine to reduce uncertainty. Changes to exhaust emissions were recorded using a 5-gas analyzer. The engine condition was also monitored throughout the tests by engine compression testing, oil analysis, and a complete teardown and inspection after testing was completed. The integrity of the head gasket seal proved to be a significant problem and leakage of engine coolant into the combustion chamber was detected when testing ended. The post-test teardown revealed problems with oil breakdown at locations where temperatures were highest, with accompanying component wear. The results from the experiment were then used as inputs for a WHR system model using ethanol as the working fluid, which provided estimates of system output and improvement in efficiency. Thermodynamic models were created for eight different WHR systems with coolant temperatures of 90 °C, 150 °C, 175 °C, and 200 °C and condenser temperatures of 60 °C and 90 °C at a single operating point of 3100 rpm and 24 N·m of torque. The models estimated that WHR output for both condenser temperatures would increase by over 100% when the coolant temperature was increased from 90 °C to 200 °C. This increased WHR output translated to relative efficiency gains as high as 31.0% for the 60 °C condenser temperature and 24.2% for the 90 °C condenser temperature over the baseline engine efficiency at 90 °C. Individual heat exchanger models were created to estimate the footprint for a WHR system for each of the eight systems. When the coolant temperature increased from 90 °C to 200 °C, the total heat exchanger volume increased from 16.6 × 10³ cm³ to 17.1 × 10³ cm³ with a 60 °C condenser temperature, but decreased from 15.1 × 10³ cm³ to 14.2 × 10³ cm³ with a 90 °C condenser temperature.
For all cases, increasing the coolant temperature resulted in an improvement in the efficiency gain for each cubic meter of heat exchanger volume required. Additionally, the engine oil coolers represented a significant portion of the required heat exchanger volume due to abnormally low engine oil temperatures during the experiment (~80 °C). Future studies should focus on allowing the engine oil to reach higher operating temperatures which would decrease the heat rejected to the engine oil and reduce the heat duty for the oil coolers resulting in reduced oil cooler volume.
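The "relative efficiency gain" figures quoted above can be understood with a short calculation: for fixed fuel power, adding WHR output raises overall efficiency in proportion to the ratio of WHR power to engine power. A minimal sketch, with hypothetical numbers that are not from the dissertation:

```python
def relative_efficiency_gain(eta_engine, p_engine_kw, p_whr_kw):
    """Relative gain in overall efficiency from adding WHR output.

    Fuel power is p_engine / eta_engine; adding the WHR output raises
    the combined efficiency, and the relative gain reduces to the
    ratio of WHR power to engine power.
    """
    fuel_power_kw = p_engine_kw / eta_engine
    eta_combined = (p_engine_kw + p_whr_kw) / fuel_power_kw
    return eta_combined / eta_engine - 1.0

# Hypothetical illustration (numbers NOT from the dissertation): a
# 30%-efficient engine producing 8 kW with 2 kW of WHR output sees a
# 25% relative gain in overall efficiency.
gain = relative_efficiency_gain(0.30, 8.0, 2.0)
```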
Dissertation
Learned Primal-dual Reconstruction
2018
We propose the Learned Primal-Dual algorithm for tomographic reconstruction. The algorithm accounts for a (possibly non-linear) forward operator in a deep neural network by unrolling a proximal primal-dual optimization method, but where the proximal operators have been replaced with convolutional neural networks. The algorithm is trained end-to-end, working directly from raw measured data, and it does not depend on any initial reconstruction such as FBP. We compare performance of the proposed method on low-dose CT reconstruction against FBP, TV, and deep learning based post-processing of FBP. For the Shepp-Logan phantom we obtain >6 dB PSNR improvement against all compared methods. For human phantoms the corresponding improvement is 6.6 dB over TV and 2.2 dB over learned post-processing, along with a substantial improvement in the SSIM. Finally, our algorithm involves only ten forward-back-projection computations, making the method feasible for time-critical clinical applications.
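The unrolled structure described above can be sketched as follows. This is a toy illustration, not the authors' implementation: the learned primal and dual networks are replaced by fixed proximal-style updates, and the tomographic forward operator by a small random matrix (all names and parameters here are invented):

```python
import numpy as np

# Stand-ins for the learned networks: in the paper these are small CNNs
# trained end-to-end; here they are fixed updates so the unrolled loop
# runs without a deep-learning framework.
def dual_update(d, Ap, y, sigma=0.5):
    # Proximal step for the dual of a least-squares data term.
    return (d + sigma * (Ap - y)) / (1 + sigma)

def primal_update(p, ATd, tau=0.5):
    # Gradient-style step using the back-projected dual variable.
    return p - tau * ATd

def learned_primal_dual(y, A, n_iter=200):
    """Unrolled primal-dual loop: alternate dual and primal updates."""
    m, n = A.shape
    p = np.zeros(n)   # primal iterate (the reconstruction)
    d = np.zeros(m)   # dual iterate (lives in data space)
    for _ in range(n_iter):
        d = dual_update(d, A @ p, y)
        p = primal_update(p, A.T @ d)
    return p

# Tiny toy inverse problem: recover x from y = A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) / np.sqrt(20)
x_true = rng.standard_normal(5)
y = A @ x_true
x_rec = learned_primal_dual(y, A)
```

Note how the iteration starts from zero rather than from an FBP image, mirroring the abstract's point that no initial reconstruction is needed.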
A modified fuzzy C means algorithm for shading correction in craniofacial CBCT images
2018
CBCT images suffer from acute shading artifacts primarily due to scatter. Numerous image-domain correction algorithms have been proposed in the literature that use patient-specific planning CT images to estimate shading contributions in CBCT images. However, in the context of radiosurgery applications such as gamma knife, planning images are often acquired through MRI which impedes the use of polynomial fitting approaches for shading correction. We present a new shading correction approach that is independent of planning CT images. Our algorithm is based on the assumption that true CBCT images follow a uniform volumetric intensity distribution per material, and scatter perturbs this uniform texture by contributing cupping and shading artifacts in the image domain. The framework is a combination of fuzzy C-means coupled with a neighborhood regularization term and Otsu's method. Experimental results on artificially simulated craniofacial CBCT images are provided to demonstrate the effectiveness of our algorithm. Spatial non-uniformity is reduced from 16% to 7% in soft tissue and from 44% to 8% in bone regions. With shading-correction, thresholding based segmentation accuracy for bone pixels is improved from 85% to 91% when compared to thresholding without shading-correction. The proposed algorithm is thus practical and qualifies as a plug and play extension into any CBCT reconstruction software for shading correction.
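The core of the framework above is the fuzzy C-means update. A minimal sketch of plain FCM on 1-D intensities (the paper's neighborhood regularization term and Otsu step are omitted, and all parameter values here are illustrative):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means on 1-D intensities.

    The paper couples FCM with a neighborhood regularization term and
    Otsu's method; this sketch shows only the core FCM updates.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)   # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        # Cluster centers: membership-weighted means of the intensities.
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: inverse-distance ratios raised to 2/(m-1).
        u = 1.0 / (d ** (2 / (m - 1))
                   * (d ** (-2 / (m - 1))).sum(axis=1, keepdims=True))
    return u, centers

# Two well-separated intensity populations (e.g. soft tissue vs bone).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(10.0, 0.3, 50)])
u, centers = fuzzy_c_means(x)
```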
Solving ill-posed inverse problems using iterative deep neural networks
2017
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularizing functional. The method results in a gradient-like iterative scheme, where the "gradient" component is learned using a convolutional network that includes the gradients of the data discrepancy and regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom as well as a head CT. The outcome is compared against FBP and TV reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the TV reconstruction while being significantly faster, giving reconstructions of 512 × 512 volumes in about 0.4 seconds using a single GPU.
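The gradient-like scheme described above takes the data-discrepancy gradient and the regularizer gradient as inputs at each iteration. A toy sketch in which the learned convolutional combination is replaced by a fixed weighted sum (the operator, regularizer, and step sizes are invented for illustration):

```python
import numpy as np

def partially_learned_gradient(y, A, grad_reg, n_iter=300, step=0.1, lam=0.1):
    """Gradient-like iterative scheme.

    In the paper, a trained convolutional network takes the data-
    discrepancy gradient and the regularizer gradient as inputs and
    produces the update; here that network is replaced by a fixed
    weighted sum so the loop is runnable.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g_data = A.T @ (A @ x - y)   # gradient of 0.5 * ||Ax - y||^2
        g_reg = grad_reg(x)          # gradient of the regularizer
        x = x - step * (g_data + lam * g_reg)
    return x

# Toy problem with a Tikhonov regularizer, grad R(x) = x.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5)) / np.sqrt(20)
x_true = rng.standard_normal(5)
y = A @ x_true
x_rec = partially_learned_gradient(y, A, grad_reg=lambda x: x)
```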
On the unreasonable effectiveness of CNNs
by Hauptmann, Andreas; Adler, Jonas
in Artificial neural networks; Computer architecture; Encryption
2020
Deep learning methods using convolutional neural networks (CNNs) have been successfully applied to virtually all imaging problems, and particularly to image reconstruction tasks with ill-posed and complicated imaging models. In an attempt to put upper bounds on the capability of baseline CNNs for solving image-to-image problems, we applied a widely used standard off-the-shelf network architecture (U-Net) to the "inverse problem" of XOR decryption from noisy data and show acceptable results.
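The XOR-decryption task can be reproduced as a data-generation exercise. A sketch of how one noisy (ciphertext, plaintext) training pair might be built; the image size, key handling, and bit-flip noise model are guesses for illustration, not the paper's exact setup:

```python
import numpy as np

def make_xor_pair(shape=(32, 32), flip_prob=0.05, seed=0):
    """One (noisy ciphertext, plaintext) pair for the XOR task: a binary
    plaintext is XORed with a binary key, then corrupted by random bit
    flips. Shapes, key handling and noise model are illustrative only."""
    rng = np.random.default_rng(seed)
    key = rng.integers(0, 2, shape)
    plain = rng.integers(0, 2, shape)
    cipher = plain ^ key                               # XOR "encryption"
    flips = (rng.random(shape) < flip_prob).astype(int)
    noisy_cipher = cipher ^ flips                      # noisy observation
    return noisy_cipher, plain, key

noisy, plain, key = make_xor_pair()
# Without noise, XORing with the key would recover the plaintext exactly;
# a network trained on (noisy, plain) pairs must learn this mapping from
# data alone, which is what makes the task a stress test for CNNs.
```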