Filters: Discipline, Is Peer Reviewed, Item Type, Subject, Year (From/To), More Filters (Source, Language)
384 results for "Bayesian model parameterization"
Scale dependence in the effects of leaf ecophysiological traits on photosynthesis: Bayesian parameterization of photosynthesis models
Relationships between leaf traits and carbon assimilation rates are commonly used to predict primary productivity at scales from the leaf to the globe. We addressed how the shape and magnitude of these relationships vary across temporal, spatial and taxonomic scales to improve estimates of carbon dynamics. Photosynthetic CO2 and light response curves, leaf nitrogen (N), chlorophyll (Chl) concentration and specific leaf area (SLA) of 25 grassland species were measured. In addition, C3 and C4 photosynthesis models were parameterized using a novel hierarchical Bayesian approach to quantify the effects of leaf traits on photosynthetic capacity and parameters at different scales. The effects of plant physiological traits on photosynthetic capacity and parameters varied among species, plant functional types and taxonomic scales. Relationships in the grassland biome were significantly different from the global average. Within-species variability in photosynthetic parameters through the growing season could be attributed to the seasonal changes of leaf traits, especially leaf N and Chl, but these responses followed qualitatively different relationships from the across-species relationship. The results suggest that one broad-scale relationship is not sufficient to characterize ecosystem condition and change at multiple scales. Applying trait relationships without articulating the scales may cause substantial carbon flux estimation errors.
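To make the fitting step above concrete, here is a minimal, non-hierarchical sketch of Bayesian parameterization of a light-response curve by random-walk Metropolis. The rectangular-hyperbola form, parameter values, and synthetic data are all illustrative assumptions; the paper's actual approach fits full C3/C4 photosynthesis models hierarchically across species.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light-response data (PAR in umol m-2 s-1); Amax, K, and the
# noise level are invented for the example, not taken from the paper.
par = np.linspace(50, 2000, 30)
true_amax, true_k, noise = 25.0, 300.0, 1.0
a_obs = true_amax * par / (par + true_k) + rng.normal(0, noise, par.size)

def log_post(theta):
    """Log posterior: flat priors on Amax, K > 0, Gaussian likelihood."""
    amax, k = theta
    if amax <= 0 or k <= 0:
        return -np.inf
    resid = a_obs - amax * par / (par + k)
    return -0.5 * np.sum(resid**2) / noise**2

# Random-walk Metropolis over (Amax, K).
theta = np.array([10.0, 100.0])
lp = log_post(theta)
samples = []
for step in range(20000):
    prop = theta + rng.normal(0, [0.3, 10.0])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if step >= 5000:          # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("Amax: %.1f +/- %.1f" % (samples[:, 0].mean(), samples[:, 0].std()))
print("K:    %.0f +/- %.0f" % (samples[:, 1].mean(), samples[:, 1].std()))
```

A hierarchical version would place each species' Amax and K under shared group-level distributions, which is what lets the paper separate within-species (seasonal) variation from across-species trait relationships.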
Unifying error structures in commonly used biotracer mixing models
Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet.
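As a rough illustration of the core idea, the sketch below computes the posterior for a two-source, one-tracer mixing model with a residual error term on a grid; the isotope values and error magnitude are invented, and real tools such as MixSIR or SIAR handle multiple tracers and sources with the richer error structures the paper unifies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical d15N signatures for two prey sources and a consumer; the
# values and the residual-error magnitude are invented for illustration.
mu_src = np.array([4.0, 12.0])     # source means
sd_resid = 1.0                     # residual SD (the "error structure")
true_p = 0.7
consumer = true_p * mu_src[0] + (1 - true_p) * mu_src[1] \
           + rng.normal(0, sd_resid, 40)

# Grid posterior over the diet proportion p with a flat prior.
p_grid = np.linspace(0, 1, 501)
mix_mean = p_grid[:, None] * mu_src[0] + (1 - p_grid[:, None]) * mu_src[1]
loglik = -0.5 * np.sum((consumer - mix_mean) ** 2, axis=1) / sd_resid**2
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior mean of source-1 proportion: %.2f" % np.sum(p_grid * post))
```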
Margins of discrete Bayesian networks
Bayesian network models with latent variables are widely used in statistics and machine learning. In this paper, we provide a complete algebraic characterization of these models when the observed variables are discrete and no assumption is made about the state-space of the latent variables. We show that this model class is algebraically equivalent to the so-called nested Markov model, meaning that the two are the same up to inequality constraints on the joint probabilities. In particular, these two models have the same dimension, differing only by inequality constraints for which there is no general description. The nested Markov model is therefore the closest possible description of the latent variable model that avoids consideration of inequalities. A consequence of this is that the constraint-finding algorithm of Tian and Pearl [In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (2002) 519–527] is complete for finding equality constraints. Latent variable models suffer from difficulties of unidentifiable parameters and nonregular asymptotics; in contrast, the nested Markov model is fully identifiable, represents a curved exponential family of known dimension, and can easily be fitted using an explicit parameterization.
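The equality constraints the abstract refers to include the classic Verma constraint, which holds in a latent-variable DAG even though no ordinary conditional independence does. The simulation below is a numerical check with arbitrary made-up conditional probabilities: the functional sum_b P(d | a, b, c) P(b | a) should come out (approximately) the same for both values of a.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000_000

# Verma graph: A -> B -> C -> D with a hidden U -> B and U -> D.
u = rng.uniform(size=n) < 0.5
a = rng.uniform(size=n) < 0.5
b = rng.uniform(size=n) < np.where(a, 0.2, 0.7) + 0.2 * u   # P(B|A,U)
c = rng.uniform(size=n) < np.where(b, 0.8, 0.3)             # P(C|B)
d = rng.uniform(size=n) < np.where(c, 0.6, 0.2) + 0.3 * u   # P(D|C,U)

def verma(a_val, c_val=True):
    """Estimate sum_b P(d | a, b, c) P(b | a) from the sample."""
    total = 0.0
    for b_val in (False, True):
        sel = (a == a_val) & (b == b_val) & (c == c_val)
        p_d = d[sel].mean()                      # P(d | a, b, c)
        p_b = (b[a == a_val] == b_val).mean()    # P(b | a)
        total += p_d * p_b
    return total

# The constraint: this functional must not depend on a.
print("a=0:", round(verma(False), 4))
print("a=1:", round(verma(True), 4))
```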
Active learning of reactive Bayesian force fields applied to heterogeneous catalysis dynamics of H/Pt
Atomistic modeling of chemically reactive systems has so far relied on either expensive ab initio methods or bond-order force fields requiring arduous parametrization. Here, we describe a Bayesian active learning framework for autonomous "on-the-fly" training of fast and accurate reactive many-body force fields during molecular dynamics simulations. At each time step, predictive uncertainties of a sparse Gaussian process are evaluated to automatically determine whether additional ab initio training data are needed. We introduce a general method for mapping trained kernel models onto equivalent polynomial models whose prediction cost is much lower and independent of the training set size. As a demonstration, we perform direct two-phase simulations of heterogeneous H2 turnover on the Pt(111) catalyst surface at chemical accuracy. The model trains itself in three days and performs at twice the speed of a ReaxFF model, while maintaining much higher fidelity to DFT and excellent agreement with experiment. Uncertainty-aware machine learning models are used to automate the training of reactive force fields. The method is used here to simulate hydrogen turnover on a platinum surface with unprecedented accuracy.
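A stripped-down sketch of the "on-the-fly" decision rule described above: a Gaussian-process surrogate is queried along a trajectory, and the expensive reference calculation is invoked only when the predictive standard deviation exceeds a threshold. Everything here (the 1-D toy energy function, kernel, and threshold) is an invented stand-in for the paper's sparse many-body GPs and DFT calls.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_truth(x):
    """Stand-in for an ab initio (e.g., DFT) energy evaluation."""
    return np.sin(3 * x) + 0.5 * x

def rbf(x1, x2, ls=0.5, var=1.0):
    return var * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ls**2)

def gp_predict(x_train, y_train, x_new, noise=1e-4):
    K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
    k_star = rbf(x_train, x_new)
    mean = k_star.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, k_star)
    var = rbf(x_new, x_new).diagonal() - np.sum(k_star * v, axis=0)
    return mean, np.sqrt(np.maximum(var, 0))

# "On-the-fly" loop: walk along a trajectory; call the expensive model
# only when the GP's predictive uncertainty exceeds a threshold.
x_train = np.array([0.0])
y_train = expensive_truth(x_train)
threshold, n_calls = 0.1, 0
for x_t in np.linspace(0.0, 3.0, 200):
    mean, std = gp_predict(x_train, y_train, np.array([x_t]))
    if std[0] > threshold:                 # model is unsure: query the truth
        x_train = np.append(x_train, x_t)
        y_train = np.append(y_train, expensive_truth(x_t))
        n_calls += 1
print("expensive calls triggered: %d of 200 steps" % n_calls)
```

The paper's mapping of the trained kernel model onto an equivalent polynomial model is what removes the training-set-size dependence of the prediction cost; that step is not sketched here.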
Confronting the Challenge of Modeling Cloud and Precipitation Microphysics
In the atmosphere, microphysics refers to the microscale processes that affect cloud and precipitation particles and is a key linkage among the various components of Earth's atmospheric water and energy cycles. The representation of microphysical processes in models continues to pose a major challenge, leading to uncertainty in numerical weather forecasts and climate simulations. In this paper, the problem of treating microphysics in models is divided into two parts: (i) how to represent the population of cloud and precipitation particles, given the impossibility of simulating all particles individually within a cloud, and (ii) uncertainties in the microphysical process rates owing to fundamental gaps in knowledge of cloud physics. The recently developed Lagrangian particle-based method is advocated as a way to address several conceptual and practical challenges of representing particle populations using traditional bulk and bin microphysics parameterization schemes. For addressing critical gaps in cloud physics knowledge, sustained investment in observational advances from laboratory experiments, new probe development, and next-generation instruments in space is needed. Greater emphasis on laboratory work, which has apparently declined over the past several decades relative to other areas of cloud physics research, is argued to be an essential ingredient for improving process-level understanding. More systematic use of natural cloud and precipitation observations to constrain microphysics schemes is also advocated. Because it is generally difficult to quantify individual microphysical process rates from these observations directly, this presents an inverse problem that can be viewed from the standpoint of Bayesian statistics. Following this idea, a probabilistic framework is proposed that combines elements from statistical and physical modeling. Besides providing rigorous constraint of schemes, there is an added benefit of quantifying uncertainty systematically. Finally, a broader hierarchical approach is proposed to accelerate improvements in microphysics schemes, leveraging the advances described in this paper related to process modeling (using Lagrangian particle-based schemes), laboratory experimentation, cloud and precipitation observations, and statistical methods.
Plain Language Summary: In the atmosphere, microphysics (the small-scale processes affecting cloud and precipitation particles, such as their growth by condensation, evaporation, and melting) is a critical part of Earth's weather and climate. Because it is impossible to simulate every cloud particle individually owing to their sheer number within even a small cloud, atmospheric models have to represent the evolution of particle populations statistically. There are critical gaps in knowledge of the microphysical processes that act on particles, especially atmospheric ice particles, because of the wide variety and intricacy of their shapes. The difficulty of representing cloud and precipitation particle populations and the knowledge gaps in cloud processes both introduce important uncertainties into models that translate into uncertainty in weather forecasts and climate simulations, including climate change assessments. We discuss several specific challenges related to these problems. To improve how cloud and precipitation particle populations are represented, we advocate a "particle-based" approach that addresses several limitations of traditional approaches and has recently gained traction as a tool for cloud modeling. Advances in observations, including laboratory studies, are argued to be essential for addressing gaps in knowledge of microphysical processes. We also advocate using statistical modeling tools to improve how these observations are used to constrain model microphysics. Finally, we discuss a hierarchical approach that combines the various pieces discussed in this article, providing a possible blueprint for accelerating progress in how microphysics is represented in cloud, weather, and climate models.
Key Points:
  • Microphysics is an important component of weather and climate models, but its representation in current models is highly uncertain
  • Two critical challenges are identified: representing cloud and precipitation particle populations and knowledge gaps in cloud physics
  • A possible blueprint for addressing these challenges is proposed to accelerate progress in improving microphysics schemes
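As a toy version of the Bayesian inverse problem sketched in the abstract, the code below calibrates a single process-rate parameter of an invented evaporation-like model against noisy synthetic observations by importance sampling from the prior; the model, prior, and noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy process model: normalized droplet mass decaying at rate k.
def model(k, t):
    return np.exp(-k * t)

t_obs = np.linspace(0.0, 2.0, 10)
true_k, noise = 1.3, 0.02
y_obs = model(true_k, t_obs) + rng.normal(0, noise, t_obs.size)

# Sample the prior, weight by the likelihood (importance sampling).
k_prior = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)
loglik = -0.5 * np.sum((y_obs - np.exp(-k_prior[:, None] * t_obs))**2,
                       axis=1) / noise**2
w = np.exp(loglik - loglik.max())
w /= w.sum()

post = rng.choice(k_prior, size=10_000, p=w)   # posterior resample
lo, hi = np.percentile(post, [5, 95])
print("rate k: mean %.2f, 90%% interval [%.2f, %.2f]"
      % (post.mean(), lo, hi))
```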
A Bayesian hierarchical spatio-temporal model for extreme temperatures in Extremadura (Spain) simulated by a Regional Climate Model
A statistical study was made of the temporal trend in extreme temperatures in the region of Extremadura (Spain) during the period 1981–2015 using a Regional Climate Model. For this purpose, a Weather Research and Forecasting (WRF) Regional Climate Model extreme temperature dataset was obtained. This dataset was then analyzed using a Bayesian hierarchical spatio-temporal model with a Generalized Extreme Value (GEV) parametrization of the extreme data. The Bayesian model was implemented in a Markov chain Monte Carlo framework that allows the posterior distributions of the model parameters to be estimated. The altitude dependence of temperature was also accounted for in the proposed model. The results for the spatial-trend parameter lend confidence to the model, since they are consistent with the dry adiabatic lapse rate. Furthermore, the statistical model showed a slight negative trend for the location parameter. This unexpected result may be due to internal and modeling uncertainties in the WRF model. The shape parameter was negative, meaning that there is an upper bound for extreme temperatures in the model.
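For a flavor of the GEV parameterization, here is a single-site sketch that finds the MAP estimate of GEV parameters with SciPy. Note that SciPy's genextreme uses c = -xi, so the bounded-above tail reported in the paper (negative xi) corresponds to c > 0. The synthetic data, the weak prior on xi, and the single-site setup are simplifications of the paper's hierarchical spatio-temporal MCMC.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)

# Synthetic annual maximum temperatures (degC) for one grid cell; a
# negative shape xi gives a bounded upper tail, as the paper reports.
true_mu, true_sigma, true_xi = 38.0, 1.5, -0.15
x = stats.genextreme.rvs(c=-true_xi, loc=true_mu, scale=true_sigma,
                         size=35, random_state=rng)

def neg_log_post(theta):
    mu, log_sigma, xi = theta
    sigma = np.exp(log_sigma)                 # keep the scale positive
    ll = stats.genextreme.logpdf(x, c=-xi, loc=mu, scale=sigma).sum()
    lp = stats.norm.logpdf(xi, 0.0, 0.3)      # weak prior shrinking xi to 0
    return -(ll + lp)

res = optimize.minimize(neg_log_post, x0=[x.mean(), 0.0, 0.0],
                        method="Nelder-Mead")
mu, sigma, xi = res.x[0], np.exp(res.x[1]), res.x[2]
print("MAP: mu=%.1f sigma=%.2f xi=%.2f" % (mu, sigma, xi))
```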
Modeling Upscaled Mass Discharge From Complex DNAPL Source Zones Using a Bayesian Neural Network: Prediction Accuracy, Uncertainty Quantification and Source Zone Feature Importance
The mass discharge emanating from dense non-aqueous phase liquid (DNAPL) source zones (SZs) is often used as a key metric for risk assessment. To predict the temporal evolution of mass discharge, upscaled models have been developed to approximate the relationship between the depletion of the SZ and the mass discharge. A significant challenge stems from the choice of the SZ parameterization, such that a limited number of domain-averaged SZ metrics can suffice as input and accurately predict the complex mass-discharge behavior. Moreover, existing deterministic upscaled models cannot quantify prediction uncertainty stemming from the model parameterization. To address these challenges, we propose a method based on a Bayesian Neural Network (BNN) which learns the nonlinear relationship between SZ metrics and mass discharge from multiphase-modeling training data. The proposed BNN-based upscaled model allows uncertainty quantification, since it treats trainable parameters as distributions, and does not require a manual a priori parameterization of the SZ. Instead, the BNN model takes three physically meaningful SZ quantities related to mass discharge as input features. We then use the expected-gradients method to identify feature importance for mass-discharge prediction. We evaluated the proposed model on laboratory-scale DNAPL dissolution experiments. The results show that the BNN model accurately reproduces the multistage mass-discharge profiles with fewer parameters than existing upscaled models. Feature-importance analysis shows that all chosen features are important and sufficient to reproduce complex mass discharge. The model provides accurate mass-discharge predictions and uncertainty estimation, and therefore holds great potential for probabilistic risk assessment and decision-making.
Key Points:
  • We demonstrate a predictive Bayesian Neural Network (BNN) upscaled model for contaminant mass discharge from heterogeneous dense non-aqueous phase liquid (DNAPL) source zones (SZs)
  • The proposed model accurately captures complex multistage discharge and provides uncertainty bounds for its predictions
  • The results highlight the importance of a minimum set of SZ features and parameterization required for accurate mass-discharge prediction
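The heart of the BNN idea (treating weights as distributions rather than point values) can be sketched without any training loop: sample networks from an assumed posterior over the weights and read predictive uncertainty off the spread of their outputs. The architecture, the "trained" posterior moments, and the feature vector below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in: a 1-hidden-layer network whose weights carry a mean-field
# Gaussian posterior. The means/stds below play the role of a *trained*
# variational posterior; training itself is omitted from this sketch.
n_in, n_hid = 3, 16                      # e.g., 3 domain-averaged SZ metrics
W1_mu, W1_sd = rng.normal(0, 0.5, (n_in, n_hid)), 0.1
b1_mu = np.zeros(n_hid)
W2_mu, W2_sd = rng.normal(0, 0.5, (n_hid, 1)), 0.1
b2_mu = np.zeros(1)

def forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x_new = np.array([[0.4, -1.0, 0.2]])     # one hypothetical SZ feature vector

# Monte Carlo over weight draws: each draw is one plausible network.
preds = []
for _ in range(500):
    W1 = W1_mu + W1_sd * rng.standard_normal(W1_mu.shape)
    W2 = W2_mu + W2_sd * rng.standard_normal(W2_mu.shape)
    preds.append(forward(x_new, W1, b1_mu, W2, b2_mu)[0, 0])
preds = np.array(preds)

print("predicted discharge: %.3f +/- %.3f (posterior predictive)"
      % (preds.mean(), preds.std()))
```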
A Bayesian multiscale CNN framework to predict local stress fields in structures with microscale features
Multiscale computational modelling is challenging due to the high computational cost of direct numerical simulation by finite elements. To address this issue, concurrent multiscale methods use the solution of cheaper macroscale surrogates as boundary conditions to microscale sliding windows. The microscale problems remain a numerically challenging operation both in terms of implementation and cost. In this work we propose to replace the local microscale solution by an Encoder-Decoder Convolutional Neural Network that will generate fine-scale stress corrections to coarse predictions around unresolved microscale features, without prior parametrisation of local microscale problems. We deploy a Bayesian approach providing credible intervals to evaluate the uncertainty of the predictions, which is then used to investigate the merits of a selective learning framework. We will demonstrate the capability of the approach to predict equivalent stress fields in porous structures using linearised and finite strain elasticity theories.
An Evaluation of Non-Gaussian Data Assimilation Methods in Moist Convective Regimes
Obtaining a faithful probabilistic depiction of moist convection is complicated by unknown errors in subgrid-scale physical parameterization schemes, invalid assumptions made by data assimilation (DA) techniques, and high system dimensionality. As an initial step toward untangling sources of uncertainty in convective weather regimes, we evaluate a novel Bayesian data assimilation methodology based on particle filtering within a WRF ensemble analysis and forecasting system. Unlike most geophysical DA methods, the particle filter (PF) represents prior and posterior error distributions nonparametrically rather than assuming a Gaussian distribution and can accept any type of likelihood function. This approach is known to reduce bias introduced by Gaussian approximations in low-dimensional and idealized contexts. The form of PF used in this research adopts a dimension-reduction strategy, making it affordable for typical weather applications. The present study examines posterior ensemble members and forecasts for select severe weather events between 2019 and 2020, comparing results from the PF with those from an ensemble Kalman filter (EnKF). We find that assimilating with a PF produces posterior quantities for microphysical variables that are more consistent with model climatology than comparable quantities from an EnKF, which we attribute to a reduction in DA bias. These differences are significant enough to impact the dynamic evolution of convective systems via cold pool strength and propagation, with impacts to forecast verification scores depending on the particular microphysics scheme. Our findings have broad implications for future approaches to the selection of physical parameterization schemes and parameter estimation within preexisting data assimilation frameworks.
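To show what "nonparametric" means here, below is a textbook bootstrap particle filter on an invented 1-D nonlinear model: forecast by pushing particles through the model, weight by the observation likelihood (which, unlike in an EnKF, need not be Gaussian), and resample. The paper's PF additionally uses a dimension-reduction strategy to scale to WRF; that part is not sketched.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1-D state-space model with nonlinear dynamics, standing in for a
# convective-scale variable; it is not the WRF system itself.
def step(x):
    return 0.5 * x + 2.0 * np.cos(x) + rng.normal(0, 1.0, x.shape)

n_part, n_steps, obs_sd = 1000, 50, 0.5
x_true = 0.0
particles = rng.normal(0, 2.0, n_part)

for t in range(n_steps):
    # Truth and synthetic observation (same dynamics as the model).
    x_true = 0.5 * x_true + 2.0 * np.cos(x_true) + rng.normal(0, 1.0)
    y = x_true + rng.normal(0, obs_sd)

    # Forecast: push every particle through the nonlinear model.
    particles = step(particles)

    # Update: weight by the observation likelihood; any likelihood form
    # is admissible, since no Gaussian posterior shape is assumed.
    w = np.exp(-0.5 * (y - particles)**2 / obs_sd**2)
    w /= w.sum()

    # Resample to avoid weight degeneracy.
    particles = rng.choice(particles, size=n_part, p=w)

print("truth %.2f, posterior mean %.2f +/- %.2f"
      % (x_true, particles.mean(), particles.std()))
```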