42 result(s) for "Cubuk, Ekin D."
Disconnecting structure and dynamics in glassy thin films
Nanometrically thin glassy films depart strikingly from the behavior of their bulk counterparts. We investigate whether the dynamical differences between a bulk and thin film polymeric glass former can be understood by differences in local microscopic structure. Machine learning methods have shown that local structure can serve as the foundation for successful, predictive models of particle rearrangement dynamics in bulk systems. By contrast, in thin glassy films, we find that particles at the center of the film and those near the surface are structurally indistinguishable despite exhibiting very different dynamics. Next, we show that structure-independent processes, already present in bulk systems and demonstrably different from simple facilitated dynamics, are crucial for understanding glassy dynamics in thin films. Our analysis suggests a picture of glassy dynamics in which two dynamical processes coexist, with relative strengths that depend on the distance from an interface. One of these processes depends on local structure and is unchanged throughout most of the film, while the other is purely Arrhenius, does not depend on local structure, and is strongly enhanced near the free surface of a film.
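The machine-learning approach behind this line of work trains a classifier on local structural descriptors of each particle and uses the signed distance to the decision boundary as the "softness" field. A minimal sketch with synthetic features and a plain-numpy logistic regression (the original work uses radial structure functions and a linear SVM; all data and dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for local structure functions: each "particle" is
# described by a feature vector. Label 1 = rearranged soon after, 0 = did not.
n, d = 400, 8
X_soft = rng.normal(loc=0.5, scale=1.0, size=(n // 2, d))   # rearrangers
X_hard = rng.normal(loc=-0.5, scale=1.0, size=(n // 2, d))  # non-rearrangers
X = np.vstack([X_soft, X_hard])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Train a linear classifier (logistic regression via gradient descent).
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

def softness(features):
    """Signed distance to the decision boundary: the 'softness' field."""
    return features @ w + b
```

Softness is then a purely structural, per-particle scalar: high-softness particles are the ones the classifier deems likely to rearrange.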
Machine learning determination of atomic dynamics at grain boundaries
In polycrystalline materials, grain boundaries are sites of enhanced atomic motion, but the complexity of the atomic structures within a grain boundary network makes it difficult to link the structure and atomic dynamics. Here, we use a machine learning technique to establish a connection between local structure and dynamics of these materials. Following previous work on bulk glassy materials, we define a purely structural quantity (softness) that captures the propensity of an atom to rearrange. This approach correctly identifies crystalline regions, stacking faults, and twin boundaries as having low likelihood of atomic rearrangements while finding a large variability within high-energy grain boundaries. As has been found in glasses, the probability that atoms of a given softness will rearrange is nearly Arrhenius. This indicates a well-defined energy barrier as well as a well-defined prefactor for the Arrhenius form for atoms of a given softness. The decrease in the prefactor for low-softness atoms indicates that variations in entropy exhibit a dominant influence on the atomic dynamics in grain boundaries.
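The "nearly Arrhenius" statement above means that for atoms in a fixed softness bin, the rearrangement probability follows P_R ≈ exp(Σ(S) − ΔE(S)/T), so ln P_R is linear in 1/T and a straight-line fit yields both the energy barrier and the prefactor. A toy sketch with synthetic rates (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical rearrangement probabilities for one softness bin, generated
# from an assumed Arrhenius law P_R = exp(Sigma - dE / T).
Sigma_true, dE_true = 2.0, 1.5
T = np.array([0.4, 0.5, 0.6, 0.7, 0.8])
P_R = np.exp(Sigma_true - dE_true / T)

# Arrhenius fit: ln P_R = Sigma - dE * (1/T) is linear in 1/T, so a
# degree-1 polynomial fit recovers the barrier (slope) and prefactor.
slope, intercept = np.polyfit(1.0 / T, np.log(P_R), 1)
dE_fit, Sigma_fit = -slope, intercept
```

Repeating the fit per softness bin gives ΔE(S) and Σ(S) as functions of softness, which is how the softness-dependent barriers and prefactors discussed above are extracted.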
End-to-end differentiability and tensor processing unit computing to accelerate materials’ inverse design
Numerical simulations have revolutionized material design. However, although simulations excel at mapping an input material to its output property, their direct application to inverse design has traditionally been limited by their high computing cost and lack of differentiability. Here, taking the example of the inverse design of a porous matrix featuring a targeted sorption isotherm, we introduce a computational inverse design framework that addresses these challenges by programming a differentiable simulation on the TensorFlow platform, which leverages automated end-to-end differentiation. Thanks to its differentiability, the simulation is used to directly train a deep generative model, which outputs an optimal porous matrix based on an arbitrary input sorption isotherm curve. Importantly, this inverse design pipeline leverages the power of tensor processing units (TPUs)—an emerging family of dedicated chips which, although specialized in deep learning, are flexible enough for intensive scientific simulations. This approach holds promise to accelerate inverse materials design.
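The key enabler here is automatic differentiation through the simulation loop itself: the output can be differentiated with respect to the inputs by propagating derivatives through every step. A self-contained illustration of the idea using hand-rolled forward-mode dual numbers on a toy iterative "simulation" (the paper uses TensorFlow's autodiff on a real sorption model; everything below is a stand-in):

```python
from dataclasses import dataclass

# Minimal forward-mode autodiff via dual numbers: each value carries its
# derivative with respect to a chosen input parameter.
@dataclass
class Dual:
    val: float  # primal value
    dot: float  # derivative w.r.t. the chosen input

    def __add__(self, o): return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o): return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.dot + self.dot * o.val)

def simulate(k):
    """Toy 'simulation': iterate x <- x + 0.1 * (k - x) * x fifty times."""
    x = Dual(0.5, 0.0)   # initial state, independent of the parameter k
    dt = Dual(0.1, 0.0)
    for _ in range(50):
        x = x + dt * (k - x) * x
    return x

out = simulate(Dual(2.0, 1.0))  # seed derivative dk/dk = 1
# out.val is the final state; out.dot is d(final state)/dk, obtained by
# differentiating straight through all 50 iteration steps.
```

Frameworks such as TensorFlow and JAX automate exactly this bookkeeping (typically in reverse mode) for simulations with millions of operations, which is what makes gradient-based inverse design through a simulator practical.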
Relationship between local structure and relaxation in out-of-equilibrium glassy systems
The dynamical glass transition is typically taken to be the temperature at which a glassy liquid is no longer able to equilibrate on experimental timescales. Consequently, the physical properties of these systems just above or below the dynamical glass transition, such as viscosity, can change by many orders of magnitude over long periods of time following external perturbation. During this progress toward equilibrium, glassy systems exhibit a history dependence that has complicated their study. In previous work, we bridged the gap between structure and dynamics in glassy liquids above their dynamical glass transition temperatures by introducing a scalar field called “softness,” a quantity obtained using machine-learning methods. Softness is designed to capture the hidden patterns in relative particle positions that correlate strongly with dynamical rearrangements of particle positions. Here we show that the out-of-equilibrium behavior of a model glass-forming system can be understood in terms of softness. To do this we first demonstrate that the evolution of behavior following a temperature quench is a primarily structural phenomenon: The structure changes considerably, but the relationship between structure and dynamics remains invariant. We then show that the relaxation time can be robustly computed from structure as quantified by softness, with the same relation holding both in equilibrium and as the system ages. Together, these results show that the history dependence of the relaxation time in glasses requires knowledge only of the softness in addition to the usual state variables.
Designing self-assembling kinetics with differentiable statistical physics models
The inverse problem of designing component interactions to target emergent structure is fundamental to numerous applications in biotechnology, materials science, and statistical physics. Equally important is the inverse problem of designing emergent kinetics, but this has received considerably less attention. Using recent advances in automatic differentiation, we show how kinetic pathways can be precisely designed by directly differentiating through statistical physics models, namely free energy calculations and molecular dynamics simulations. We consider two systems that are crucial to our understanding of structural self-assembly: bulk crystallization and small nanoclusters. In each case, we are able to assemble precise dynamical features. Using gradient information, we manipulate interactions among constituent particles to tune the rate at which these systems yield specific structures of interest. Moreover, we use this approach to learn nontrivial features about the high-dimensional design space, allowing us to accurately predict when multiple kinetic features can be simultaneously and independently controlled. These results provide a concrete and generalizable foundation for studying nonstructural self-assembly, including kinetic properties as well as other complex emergent properties, in a vast array of systems.
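Differentiating a kinetic observable with respect to interaction parameters turns kinetic design into gradient descent. A deliberately tiny sketch, assuming a single Arrhenius-like rate controlled by one interaction strength `eps` with a hand-derived gradient (the rate model, target, and learning rate are all illustrative, not from the paper):

```python
import math

# Toy gradient-based kinetic design: tune an interaction strength eps so
# that an Arrhenius-like assembly rate hits a target value.
T, target_rate = 0.5, 0.05

def rate(eps):
    """Assumed kinetic model: rate falls off as exp(-eps / T)."""
    return math.exp(-eps / T)

def dloss_deps(eps):
    """Gradient of the squared error (rate(eps) - target)^2, by the chain rule."""
    return 2.0 * (rate(eps) - target_rate) * (-rate(eps) / T)

eps = 1.0
for _ in range(2000):
    eps -= 5.0 * dloss_deps(eps)

# eps converges to the analytic optimum -T * ln(target_rate).
```

In the paper this gradient is supplied by automatic differentiation through free energy calculations and full MD trajectories rather than by a closed-form model, but the optimization loop has the same shape.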
dPV: An End-to-End Differentiable Solar-Cell Simulator
We introduce dPV, an end-to-end differentiable photovoltaic (PV) cell simulator based on the drift-diffusion model and the Beer-Lambert law for optical absorption. dPV is programmed in Python using JAX, an automatic differentiation (AD) library for scientific computing. Using AD coupled with the implicit function theorem, dPV computes the power conversion efficiency (PCE) of an input PV design as well as the derivative of the PCE with respect to any input parameters, all within a runtime comparable to that of solving the forward problem. We show an example of perovskite solar-cell optimization and multi-parameter discovery, and compare results with random search and finite differences. The simulator can be integrated with optimization algorithms and neural networks, opening up possibilities for data-efficient optimization and parameter discovery.
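The implicit-function-theorem trick mentioned above avoids differentiating through the solver's iterations: if the converged solution satisfies f(x, θ) = 0, then dx/dθ = −(∂f/∂x)⁻¹ ∂f/∂θ, evaluated only at the solution. A scalar sketch with a hypothetical residual and a Newton solver (the residual and numbers are illustrative, not dPV's drift-diffusion equations):

```python
# Implicit differentiation of a solved equation f(x, theta) = 0:
# the derivative of the solution needs only local partials at the root,
# not the solver's iteration history.

def f(x, theta):
    """Hypothetical residual whose root defines x(theta)."""
    return x**3 + theta * x - 1.0

def solve(theta, x=1.0):
    """Newton's method for f(x, theta) = 0; f_x = 3x^2 + theta."""
    for _ in range(50):
        x -= f(x, theta) / (3 * x**2 + theta)
    return x

theta = 2.0
x = solve(theta)
dx_dtheta = -x / (3 * x**2 + theta)  # implicit function theorem: -f_theta / f_x

# Cross-check against a finite difference of the full solve.
h = 1e-6
fd = (solve(theta + h) - solve(theta - h)) / (2 * h)
```

For vector-valued residuals the same formula holds with a linear solve against the Jacobian ∂f/∂x, which is why the derivative costs about as much as one more forward solve.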
Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics
Including prior knowledge is important for effective machine learning models in physics, and is usually achieved by explicitly adding loss terms or constraints on model architectures. Prior knowledge embedded in the physics computation itself rarely draws attention. We show that solving the Kohn-Sham equations when training neural networks for the exchange-correlation functional provides an implicit regularization that greatly improves generalization. Two separations suffice for learning the entire one-dimensional H₂ dissociation curve within chemical accuracy, including the strongly correlated region. Our models also generalize to unseen types of molecules and overcome self-interaction error.
Forward and Inverse Design of Kirigami via Supervised Autoencoder
Machine learning (ML) methods have recently been used as forward solvers to predict the mechanical properties of composite materials. Here, we use a supervised autoencoder (sAE) to perform inverse design of graphene kirigami, where predicting the ultimate stress or strain under tensile loading is known to be difficult due to nonlinear effects arising from the out-of-plane buckling. Unlike the standard autoencoder, our sAE is able not only to reconstruct cut configurations but also to predict the mechanical properties of graphene kirigami and classify the kirigami with either parallel or orthogonal cuts. By interpolating in the latent space of kirigami structures, the sAE is able to generate novel designs that mix parallel and orthogonal cuts, despite being trained independently on parallel or orthogonal cuts. Our method allows us to both identify novel designs and predict, with reasonable accuracy, their mechanical properties, which is crucial for expanding the search space for materials design.
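Structurally, a supervised autoencoder couples the usual reconstruction loss with a property-prediction loss computed from the same latent code, so the latent space is shaped by both objectives. A bare-bones linear forward pass in numpy (the shapes, random weights, and α weighting are illustrative; the paper's sAE is a deep nonlinear network with a classification head as well):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 16 "cut configuration" inputs, a 4-d latent code,
# and one regression target (e.g. ultimate strain).
d_in, d_latent = 16, 4
W_enc = rng.normal(size=(d_in, d_latent)) * 0.1   # encoder weights
W_dec = rng.normal(size=(d_latent, d_in)) * 0.1   # decoder weights
w_sup = rng.normal(size=d_latent) * 0.1           # supervised head weights

def sae_loss(x, y_target, alpha=1.0):
    z = x @ W_enc        # encoder: configuration -> latent code
    x_rec = z @ W_dec    # decoder: latent code -> reconstructed configuration
    y_pred = z @ w_sup   # supervised head: latent code -> property
    rec = np.mean((x_rec - x) ** 2)
    sup = np.mean((y_pred - y_target) ** 2)
    return rec + alpha * sup  # joint objective shapes the latent space

x = rng.normal(size=(8, d_in))
y = rng.normal(size=8)
loss = sae_loss(x, y)
```

Because properties are predicted from the latent code, interpolating between two codes and decoding yields new designs whose properties can be read off the same head, which is what enables the latent-space design generation described above.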
Do better ImageNet classifiers assess perceptual similarity better?
Perceptual distances between images, as measured in the space of pre-trained deep features, have outperformed prior low-level, pixel-based metrics on assessing perceptual similarity. While the capabilities of older and less accurate models such as AlexNet and VGG to capture perceptual similarity are well known, modern and more accurate models are less studied. In this paper, we present a large-scale empirical study to assess how well ImageNet classifiers perform on perceptual similarity. First, we observe an inverse correlation between ImageNet accuracy and the Perceptual Scores of modern networks such as ResNets, EfficientNets, and Vision Transformers: that is, better classifiers achieve worse Perceptual Scores. Then, we examine the ImageNet accuracy/Perceptual Score relationship while varying the depth, width, number of training steps, weight decay, label smoothing, and dropout. Higher accuracy improves Perceptual Score up to a certain point, but we uncover a Pareto frontier between accuracy and Perceptual Score in the mid-to-high accuracy regime. We explore this relationship further using a number of plausible hypotheses such as distortion invariance, spatial frequency sensitivity, and alternative perceptual functions. Interestingly, we discover shallow ResNets and ResNets trained for fewer than 5 epochs only on ImageNet, whose emergent Perceptual Score matches the prior best networks trained directly on supervised human perceptual judgements. The checkpoints for the models in our study are available at https://console.cloud.google.com/storage/browser/gresearch/perceptual_similarity.
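Deep-feature perceptual distances of the kind studied here typically follow the LPIPS recipe: unit-normalize each layer's activations along the channel axis, spatially average the squared differences, and sum over layers. A numpy sketch on random stand-in activations (real usage feeds features from a pretrained classifier and often applies learned per-channel weights, omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

def perceptual_distance(feats_a, feats_b):
    """LPIPS-style distance over per-layer feature maps of two images:
    channel-normalize, average squared differences over space, sum layers.
    (Uniform layer/channel weights here; the full metric learns them.)"""
    total = 0.0
    for fa, fb in zip(feats_a, feats_b):
        fa = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        fb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        total += np.mean(np.sum((fa - fb) ** 2, axis=0))
    return total

# Stand-in activations: two layers of (channels, height, width) maps, as a
# pretrained classifier would produce for each image.
feats_img1 = [rng.normal(size=(8, 4, 4)), rng.normal(size=(16, 2, 2))]
feats_img2 = [rng.normal(size=(8, 4, 4)), rng.normal(size=(16, 2, 2))]

d_same = perceptual_distance(feats_img1, feats_img1)  # identical images
d_diff = perceptual_distance(feats_img1, feats_img2)
```

The "Perceptual Score" in the abstract measures how well such a distance, built from a given classifier's features, agrees with human similarity judgements.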
JAX, M.D.: A Framework for Differentiable Physics
We introduce JAX MD, a software package for performing differentiable physics simulations with a focus on molecular dynamics. JAX MD includes a number of physics simulation environments, as well as interaction potentials and neural networks that can be integrated into these environments without writing any additional code. Since the simulations themselves are differentiable functions, entire trajectories can be differentiated to perform meta-optimization. These features are built on primitive operations, such as spatial partitioning, that allow simulations to scale to hundreds-of-thousands of particles on a single GPU. These primitives are flexible enough that they can be used to scale up workloads outside of molecular dynamics. We present several examples that highlight the features of JAX MD including: integration of graph neural networks into traditional simulations, meta-optimization through minimization of particle packings, and a multi-agent flocking simulation. JAX MD is available at www.github.com/google/jax-md.
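The primitives JAX MD makes differentiable are ordinary molecular-dynamics building blocks such as force evaluations and integrator steps. A plain-numpy velocity-Verlet step for a toy radial harmonic potential gives the flavor (JAX MD's versions are JIT-compiled, scalable, and differentiable; the model and parameters below are illustrative, not JAX MD's API):

```python
import numpy as np

def forces(pos, k=1.0, r0=1.0):
    """Toy model: harmonic spring tethering each particle at distance r0
    from the origin. Force points along -grad of 0.5 * k * (r - r0)^2."""
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    return -k * (r - r0) * pos / (r + 1e-12)

def verlet_step(pos, vel, dt=1e-2, mass=1.0):
    """One velocity-Verlet step: half-kick, drift, half-kick."""
    f = forces(pos)
    vel_half = vel + 0.5 * dt * f / mass
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * forces(pos_new) / mass
    return pos_new, vel_new

pos = np.array([[1.2, 0.0], [0.0, 0.8]])  # two particles, 2-D
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = verlet_step(pos, vel)
```

Because every operation in such a step is differentiable, composing many steps yields a trajectory whose endpoint can be differentiated with respect to potential parameters or initial conditions, which is what enables the meta-optimization examples the abstract describes.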