Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
46,063 result(s) for "Scale models"
Carbon release through abrupt permafrost thaw
by Kuhry, Peter; Turetsky, Merritt R.; Jones, Miriam C.
in Atmospheric models; Carbon; Carbon emissions
2020
The permafrost zone is expected to be a substantial carbon source to the atmosphere, yet large-scale models currently only simulate gradual changes in seasonally thawed soil. Abrupt thaw will probably occur in <20% of the permafrost zone but could affect half of permafrost carbon through collapsing ground, rapid erosion and landslides. Here, we synthesize the best available information and develop inventory models to simulate abrupt thaw impacts on permafrost carbon balance. Emissions across 2.5 million km2 of abrupt thaw could provide a similar climate feedback as gradual thaw emissions from the entire 18 million km2 permafrost region under the warming projection of Representative Concentration Pathway 8.5. While models forecast that gradual thaw may lead to net ecosystem carbon uptake under projections of Representative Concentration Pathway 4.5, abrupt thaw emissions are likely to offset this potential carbon sink. Active hillslope erosional features will occupy 3% of abrupt thaw terrain by 2300 but emit one-third of abrupt thaw carbon losses. Thaw lakes and wetlands are methane hot spots but their carbon release is partially offset by slowly regrowing vegetation. After considering abrupt thaw stabilization, lake drainage and soil carbon uptake by vegetation regrowth, we conclude that models considering only gradual permafrost thaw are substantially underestimating carbon emissions from thawing permafrost.
Analyses of inventory models under two climate change projection scenarios suggest that carbon emissions from abrupt thaw of permafrost through ground collapse, erosion and landslides could contribute significantly to the overall permafrost carbon balance.
Journal Article
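The abstract's headline comparison, abrupt thaw over 2.5 million km2 rivaling gradual thaw over the full 18 million km2 region, reduces to area-times-flux bookkeeping. A minimal sketch, in which only the two areas come from the abstract and the per-area emission rates are invented for illustration:

```python
# Toy inventory-style comparison of abrupt vs. gradual thaw emissions.
# Only the two areas are quoted from the abstract; the per-area emission
# rates are invented for illustration.
AREA_ABRUPT_KM2 = 2.5e6    # abrupt-thaw terrain, km^2 (from abstract)
AREA_GRADUAL_KM2 = 18.0e6  # whole permafrost region, km^2 (from abstract)

FLUX_ABRUPT = 7.2   # hypothetical emission rate, g C m^-2 yr^-1
FLUX_GRADUAL = 1.0  # hypothetical emission rate, g C m^-2 yr^-1

def annual_emissions_pg_c(area_km2, flux_g_per_m2_yr):
    """Convert area (km^2) times flux (g C m^-2 yr^-1) to Pg C yr^-1."""
    grams_per_yr = area_km2 * 1.0e6 * flux_g_per_m2_yr  # km^2 -> m^2
    return grams_per_yr / 1.0e15                        # g -> Pg

e_abrupt = annual_emissions_pg_c(AREA_ABRUPT_KM2, FLUX_ABRUPT)
e_gradual = annual_emissions_pg_c(AREA_GRADUAL_KM2, FLUX_GRADUAL)
```

With the illustrative rates chosen here (7.2 times higher per unit area for abrupt thaw, matching the 7.2-fold area difference), the two totals coincide, mirroring the abstract's point that a small, intensely emitting area can offset a much larger, slowly emitting one.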
GMD perspective: The quest to improve the evaluation of groundwater representation in continental- to global-scale models
by Scanlon, Bridget; Hill, Mary; Condon, Laura
in Aquifers; Atmosphere; Atmospheric boundary layer
2021
Continental- to global-scale hydrologic and land surface models increasingly include representations of the groundwater system. Such large-scale models are essential for examining, communicating, and understanding the dynamic interactions between the Earth system above and below the land surface as well as the opportunities and limits of groundwater resources. We argue that both large-scale and regional-scale groundwater models have utility, strengths, and limitations, so continued modeling at both scales is essential and mutually beneficial. A crucial quest is how to evaluate the realism, capabilities, and performance of large-scale groundwater models given their modeling purpose of addressing large-scale science or sustainability questions as well as limitations in data availability and commensurability. Evaluation should identify if, when, or where large-scale models achieve their purpose or where opportunities for improvements exist so that such models better achieve their purpose. We suggest that reproducing the spatiotemporal details of regional-scale models and matching local data are not relevant goals. Instead, it is important to decide on reasonable model expectations regarding when a large-scale model is performing “well enough” in the context of its specific purpose. The decision of reasonable expectations is necessarily subjective even if the evaluation criteria are quantitative. Our objective is to provide recommendations for improving the evaluation of groundwater representation in continental- to global-scale models. 
We describe current modeling strategies and evaluation practices, and we subsequently discuss the value of three evaluation strategies: (1) comparing model outputs with available observations of groundwater levels or other state or flux variables (observation-based evaluation), (2) comparing several models with each other with or without reference to actual observations (model-based evaluation), and (3) comparing model behavior with expert expectations of hydrologic behaviors in particular regions or at particular times (expert-based evaluation). Based on evolving practices in model evaluation as well as innovations in observations, machine learning, and expert elicitation, we argue that combining observation-, model-, and expert-based model evaluation approaches, while accounting for commensurability issues, may significantly improve the realism of groundwater representation in large-scale models, thus advancing our ability for quantification, understanding, and prediction of crucial Earth science and sustainability problems. We encourage greater community-level communication and cooperation on this quest, including among global hydrology and land surface modelers, local to regional hydrogeologists, and hydrologists focused on model development and evaluation.
Journal Article
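Strategy (1), observation-based evaluation, can be as simple as comparing simulated and observed heads and then deciding on a purpose-specific tolerance. A toy sketch, in which the well data and the 1 m tolerance are invented:

```python
import math

# Hypothetical observed vs. simulated groundwater heads (m) at five wells;
# both series and the 1 m tolerance are invented for illustration.
obs = [12.4, 8.9, 15.1, 10.3, 7.8]
sim = [13.0, 8.2, 16.0, 10.1, 8.5]

bias = sum(s - o for s, o in zip(sim, obs)) / len(obs)
rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# "Well enough" is purpose-dependent: accept the model if errors stay
# within an assumed 1 m tolerance (a subjective criterion, as the
# abstract stresses, even though the metrics themselves are quantitative).
acceptable = abs(bias) < 1.0 and rmse < 1.0
```

The threshold, not the arithmetic, is where the abstract's "reasonable expectations" judgment enters.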
Multifractal Analysis for Evaluating the Representation of Clouds in Global Kilometer‐Scale Models
by Weiss, Philipp; Freischem, Lilli J.; Christensen, Hannah M.
in Anvil clouds; Atmospheric models; Climate
2024
Clouds are one of the largest sources of uncertainty in climate predictions. Global km‐scale models need to simulate clouds and precipitation accurately to predict future climates. To isolate issues in their representation of clouds, models need to be thoroughly evaluated with observations. Here, we introduce multifractal analysis as a method for evaluating km‐scale simulations. We apply it to outgoing longwave radiation fields to investigate structural differences between observed and simulated anvil clouds. We compute fractal parameters which compactly characterize the scaling behavior of clouds and can be compared across simulations and observations. We use this method to evaluate the nextGEMS ICON simulations via comparison with observations from the geostationary satellite GOES‐16. We find that multifractal scaling exponents in the ICON model are significantly lower than in observations. We conclude that too much variability is contained in the small scales (<100 km), leading to less organized convection and smaller, isolated anvils.
Plain Language Summary: In this paper, we present a new approach to evaluating state‐of‐the‐art high‐resolution climate models. We use a type of analysis that captures how a field like outgoing radiation varies between two points in space; it is called multifractal analysis. We apply multifractal analysis to snapshots of climate model simulations and satellite observations, and compare the results to evaluate the model. In contrast to traditional evaluation approaches, our method focuses on the evaluation of the spatio‐temporal structure of cloud fields, exploiting previously untapped information content. Hence, it can take into account the fine details in time and space that high‐resolution climate models provide. We use our method to evaluate the ICON atmospheric model. 
We find that the simulations do not contain enough large clusters of clouds, as found in big thunderstorms; instead, clouds are randomly distributed in space: the simulated clouds are not organized enough.
Key Points:
- Quantifiable, structural evaluation metrics such as multifractal analysis should be used to evaluate and improve km‐scale models
- Multifractal analysis finds that deep convection in the ICON model is not organized enough, leading to smaller fractal parameters
- The model's bias toward smaller fractal parameters can be attributed to clouds simulated over the ocean
Journal Article
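Multifractal analysis builds on how moments of field increments scale with separation; the fitted scaling exponents are the fractal parameters compared between model and satellite. A minimal sketch on a smooth 1-D field (an invented ramp, not the paper's outgoing-longwave-radiation data), where the first-order exponent comes out as 1:

```python
import numpy as np

# Structure-function scaling, a building block of multifractal analysis:
# the q-th moment of field increments vs. separation r follows a power
# law r**zeta(q). Field and separations here are illustrative only.
x = np.linspace(0.0, 1.0, 4097)
field = x  # a smooth ramp: increments grow linearly with r, so zeta(1) = 1

seps = np.array([1, 2, 4, 8, 16, 32])
moments = np.array([np.mean(np.abs(field[s:] - field[:-s])) for s in seps])

# Scaling exponent zeta(1) is the slope of a log-log fit.
zeta1 = np.polyfit(np.log(seps), np.log(moments), 1)[0]
```

A rough, intermittent field would yield smaller exponents at higher moment orders q, which is the kind of signature the paper compares between ICON and GOES-16.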
A Particle-Surface-Area-Based Parameterization of Immersion Freezing on Desert Dust Particles
by Klein, Holger; Niemand, Monika; Vogel, Bernhard
in Aerosol concentrations; Aerosol interaction; Aerosol measurements
2012
In climate and weather models, the quantitative description of aerosol and cloud processes relies on simplified assumptions. This contributes major uncertainties to the prediction of global and regional climate change. Therefore, models need good parameterizations for heterogeneous ice nucleation by atmospheric aerosols. Here the authors present a new parameterization of immersion freezing on desert dust particles derived from a large number of experiments carried out at the Aerosol Interaction and Dynamics in the Atmosphere (AIDA) cloud chamber facility. The parameterization is valid in the temperature range between −12° and −36°C at or above water saturation and can be used in atmospheric models that include information about the dust surface area. The new parameterization was applied to calculate distribution maps of ice nuclei during a Saharan dust event based on model results from the regional-scale model Consortium for Small-Scale Modelling–Aerosols and Reactive Trace Gases (COSMO-ART). The results were then compared to measurements at the Taunus Observatory on Mount Kleiner Feldberg, Germany, and to three other parameterizations applied to the dust outbreak. The aerosol number concentration and surface area from the COSMO-ART model simulation were taken as input to different parameterizations. Although the surface area from the model agreed well with aerosol measurements during the dust event at Kleiner Feldberg, the ice nuclei (IN) number concentration calculated from the new surface-area-based parameterization was about a factor of 13 less than IN measurements during the same event. Systematic differences of more than a factor of 10 in the IN number concentration were also found among the different parameterizations. Uncertainties in the modeled and measured parameters probably both contribute to this discrepancy and should be addressed in future studies.
Journal Article
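A surface-area-based parameterization of the kind described here predicts ice-nuclei concentrations as an active-site density n_s(T) multiplied by the available dust surface area. In this sketch, the exponential form and its coefficients are placeholders, not the paper's fitted values; only the stated validity range (-36 to -12 °C) is taken from the abstract:

```python
import math

# Sketch of a surface-area-based immersion-freezing parameterization:
# N_IN = n_s(T) * S, with n_s the ice-nucleation-active site density per
# unit dust surface area. Form and coefficients are placeholders.
def n_s(temp_c, a=-0.5, b=8.9):
    """Hypothetical active-site density (m^-2), valid for -36 to -12 C."""
    if not -36.0 <= temp_c <= -12.0:
        raise ValueError("outside the parameterization's validity range")
    return math.exp(a * temp_c + b)

dust_surface_area = 2.0e-5             # m^2 of dust per m^3 of air (assumed)
n_in = n_s(-25.0) * dust_surface_area  # ice nuclei per m^3 of air
```

This structure is what lets the parameterization plug into any atmospheric model that carries dust surface area, as the abstract notes for COSMO-ART.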
Metabolic Burden: Cornerstones in Synthetic Biology and Metabolic Engineering Applications
by Fong, Stephen S.; Yan, Qiang; Tang, Yinjie J.
in 13C-MFA; adenosine triphosphate; artificial intelligence
2016
Engineering cell metabolism for bioproduction not only consumes building blocks and energy molecules (e.g., ATP) but also triggers energetic inefficiency inside the cell. The metabolic burdens on microbial workhorses lead to undesirable physiological changes, placing hidden constraints on host productivity. We discuss cell physiological responses to metabolic burdens, as well as strategies to identify and resolve the carbon and energy burden problems, including metabolic balancing, enhancing respiration, dynamic regulatory systems, chromosomal engineering, decoupling cell growth with production phases, and co-utilization of nutrient resources. To design robust strains with high chances of success in industrial settings, novel genome-scale models (GSMs), 13C-metabolic flux analysis (MFA), and machine-learning approaches are needed for weighting, standardizing, and predicting metabolic costs.
To commercialize recombinant organisms for renewable chemical production, it is essential to characterize the cost and benefit of metabolic burden using metabolic flux analysis tools.
Genome-scale modeling can incorporate 13C-fluxome information and machine learning to predict the metabolic burden of synthetic biology modules.
Modularized expression of native or recombinant pathways using a variety of experimental tools for controlling expression can substantially reduce the metabolic burden introduced by these pathways.
The development of a standard synthetic-biology publication database may allow the use of machine learning or artificial intelligence to harness past knowledge for future rational design.
Detailed computational methods have been developed to model macromolecule synthesis (DNA, RNA, proteins) to account for the maintenance costs associated with basal cellular function.
Systems-level dynamic simulations and design algorithms can inform new approaches to engineering microbial production strains.
Journal Article
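The notion of metabolic burden can be made concrete with back-of-the-envelope ATP bookkeeping for one recombinant protein. Every number below is hypothetical (the per-residue cost is a rough textbook-order figure), chosen only to illustrate the kind of cost accounting that genome-scale models and 13C-MFA formalize:

```python
# Back-of-the-envelope ATP bookkeeping for the burden of expressing one
# recombinant protein. All numbers are assumptions for illustration.
ATP_PER_AA = 4.0          # ~ATP equivalents per amino acid polymerized (rough)
protein_length = 300      # amino acids in the heterologous enzyme (assumed)
copies_per_cell = 50_000  # expression level (assumed)

burden_atp = ATP_PER_AA * protein_length * copies_per_cell

total_atp_per_doubling = 2.0e10  # assumed ATP turnover per cell per doubling
burden_fraction = burden_atp / total_atp_per_doubling
```

Real burden accounting must also cover transcription, plasmid replication, and precursor drain, which is why the abstract calls for GSMs and flux analysis rather than arithmetic like this.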
Machine learning building-block-flow wall model for large-eddy simulation
by Lozano-Durán, Adrián; Bae, H. Jane
in Aircraft; Aircraft configurations; Artificial neural networks
2023
A wall model for large-eddy simulation (LES) is proposed by devising the flow as a combination of building blocks. The core assumption of the model is that a finite set of simple canonical flows contains the essential physics to predict the wall shear stress in more complex scenarios. The model is constructed to predict zero/favourable/adverse mean pressure gradient wall turbulence, separation, statistically unsteady turbulence with mean flow three-dimensionality, and laminar flow. The approach is implemented using two types of artificial neural networks: a classifier, which identifies the contribution of each building block in the flow, and a predictor, which estimates the wall shear stress via a combination of the building-block flows. The training data are obtained directly from wall-modelled LES (WMLES) optimised to reproduce the correct mean quantities. This approach guarantees the consistency of the training data with the numerical discretisation and the gridding strategy of the flow solver. The output of the model is accompanied by a confidence score in the prediction that aids the detection of regions where the model underperforms. The model is validated in canonical flows (e.g. laminar/turbulent boundary layers, turbulent channels, turbulent Poiseuille–Couette flow, turbulent pipe) and two realistic aircraft configurations: the NASA Common Research Model High-lift and NASA Juncture Flow experiment. It is shown that the building-block-flow wall model outperforms (or matches) the predictions by an equilibrium wall model. It is also concluded that further improvements in WMLES should incorporate advances in subgrid-scale modelling to minimise error propagation to the wall model.
Journal Article
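The classifier/predictor split described in the abstract amounts to a weighted blend of per-building-block wall-stress estimates, with the classifier's top weight doubling as a crude confidence score. A stand-in sketch, where constant logits and stress values replace the trained networks:

```python
import numpy as np

# Classifier + predictor sketch: the classifier weights a set of
# building-block flows, and wall shear stress is their weighted blend.
# Constant logits and per-block stresses stand in for trained networks.
def classifier(features):
    """Hypothetical softmax over building-block flows (features unused)."""
    logits = np.array([1.0, 0.2, -1.0])  # e.g. ZPG / APG / laminar blocks
    e = np.exp(logits - logits.max())
    return e / e.sum()

block_tau = np.array([0.032, 0.045, 0.004])  # per-block wall-stress guesses

weights = classifier(None)
tau_wall = float(weights @ block_tau)
confidence = float(weights.max())  # crude confidence score for the blend
```

A low confidence value flags regions where no building block dominates, which is where the abstract says the model's own score helps detect underperformance.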
Global catchment modelling using World-Wide HYPE (WWH), open data, and stepwise parameter estimation
by Pimentel, Rafael; Hasan, Abdulghani; Crochemore, Louise
in Analysis; Atmospheric models; Budgets
2020
Recent advancements in catchment hydrology (such as understanding catchment similarity, accessing new data sources, and refining methods for parameter constraints) make it possible to apply catchment models for ungauged basins over large domains. Here we present a cutting-edge case study applying catchment-modelling techniques with evaluation against river flow at the global scale for the first time. The modelling procedure was challenging but doable, and even the first model version showed better performance than traditional gridded global models of river flow. We used the open-source code of the HYPE model and applied it for >130 000 catchments (with an average resolution of 1000 km2), delineated to cover the Earth's landmass (except Antarctica). The catchments were characterized using 20 open databases on physiographical variables, to account for spatial and temporal variability of the global freshwater resources, based on exchange with the atmosphere (e.g. precipitation and evapotranspiration) and related budgets in all compartments of the land (e.g. soil, rivers, lakes, glaciers, and floodplains), including water stocks, residence times, and the pathways between various compartments. Global parameter values were estimated using a stepwise approach for groups of parameters regulating specific processes and catchment characteristics in representative gauged catchments. Daily and monthly time series (>10 years) from 5338 gauges of river flow across the globe were used for model evaluation (half for calibration and half for independent validation), resulting in a median monthly KGE of 0.4. However, the World-Wide HYPE (WWH) model shows large variation in model performance, both between geographical domains and between various flow signatures. The model performs best (KGE >0.6) in the eastern USA, Europe, South-East Asia, and Japan, as well as in parts of Russia, Canada, and South America. 
The model shows overall good potential to capture flow signatures of monthly high flows, spatial variability of high flows, duration of low flows, and constancy of daily flow. Nevertheless, there remains large potential for model improvements, and we suggest both redoing the parameter estimation and reconsidering parts of the model structure for the next WWH version. This first model version clearly indicates challenges in large-scale modelling, usefulness of open data, and current gaps in process understanding. However, we also found that catchment modelling techniques can contribute to advance global hydrological predictions. Setting up a global catchment model has to be a long-term commitment as it demands many iterations; this paper shows a first version, which will be subjected to continuous model refinements in the future. WWH is currently shared with regional/local modellers to appreciate local knowledge.
Journal Article
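The KGE score reported for WWH is the standard Kling-Gupta efficiency, combining correlation, variability ratio, and bias ratio. A self-contained implementation with invented monthly flows (the formula is standard; the data are not from the paper):

```python
import numpy as np

# Kling-Gupta efficiency:
#   KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)
# with r the linear correlation, alpha the ratio of standard deviations,
# and beta the ratio of means (simulated over observed).
def kge(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2
                         + (beta - 1.0) ** 2)

obs_flow = [10.0, 22.0, 35.0, 18.0, 9.0, 6.0]  # invented monthly flows
sim_flow = [12.0, 20.0, 30.0, 20.0, 10.0, 8.0]
score = kge(sim_flow, obs_flow)  # 1 is perfect; WWH's median monthly KGE is 0.4
```

Decomposing the three terms shows whether a low score comes from timing (r), variability (alpha), or volume bias (beta), which maps onto the flow signatures the paper evaluates.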
HTAP_v2.2: a mosaic of regional and global emission grid maps for 2008 and 2010 to study hemispheric transport of air pollution
by Janssens-Maenhout, G.; Denier van der Gon, H.; Guizzardi, D.
in Accuracy; Acidification; Aerosols
2015
The mandate of the Task Force Hemispheric Transport of Air Pollution (TF HTAP) under the Convention on Long-Range Transboundary Air Pollution (CLRTAP) is to improve the scientific understanding of the intercontinental air pollution transport, to quantify impacts on human health, vegetation and climate, to identify emission mitigation options across the regions of the Northern Hemisphere, and to guide future policies on these aspects. The harmonization and improvement of regional emission inventories is imperative to obtain consolidated estimates on the formation of global-scale air pollution. An emissions data set has been constructed using regional emission grid maps (annual and monthly) for SO2, NOx, CO, NMVOC, NH3, PM10, PM2.5, BC and OC for the years 2008 and 2010, with the purpose of providing consistent information to global and regional scale modelling efforts. This compilation of different regional gridded inventories – including that of the Environmental Protection Agency (EPA) for USA, the EPA and Environment Canada (for Canada), the European Monitoring and Evaluation Programme (EMEP) and Netherlands Organisation for Applied Scientific Research (TNO) for Europe, and the Model Inter-comparison Study for Asia (MICS-Asia III) for China, India and other Asian countries – was gap-filled with the emission grid maps of the Emissions Database for Global Atmospheric Research (EDGARv4.3) for the rest of the world (mainly South America, Africa, Russia and Oceania). Emissions from seven main categories of human activities (power, industry, residential, agriculture, ground transport, aviation and shipping) were estimated and spatially distributed on a common grid of 0.1° × 0.1° longitude-latitude, to yield monthly, global, sector-specific grid maps for each substance and year. The HTAP_v2.2 air pollutant grid maps are considered to combine latest available regional information within a complete global data set. 
The disaggregation by sectors, high spatial and temporal resolution and detailed information on the data sources and references used will provide the user the required transparency. Because HTAP_v2.2 contains primarily official and/or widely used regional emission grid maps, it can be recommended as a global baseline emission inventory, which is regionally accepted as a reference and from which different scenarios assessing emission reduction policies at a global scale could start. An analysis of country-specific implied emission factors shows a large difference between industrialised countries and developing countries for acidifying gaseous air pollutant emissions (SO2 and NOx) from the energy and industry sectors. This is not observed for the particulate matter emissions (PM10, PM2.5), which show large differences between countries in the residential sector instead. The per capita emissions of all world countries, classified from low to high income, reveal an increase in level and in variation for gaseous acidifying pollutants, but not for aerosols. For aerosols, an opposite trend is apparent with higher per capita emissions of particulate matter for low income countries.
Journal Article
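The mosaic construction amounts to preferring regional grid values where they exist and gap-filling from a global inventory elsewhere (EDGARv4.3 plays that role in HTAP_v2.2). A toy two-grid sketch with invented cell values and domain:

```python
import numpy as np

# Mosaic construction: take the regional inventory where it reports a
# value, gap-fill from the global one elsewhere. Grids and values are
# toys standing in for 0.1 x 0.1 degree sector-specific grid maps.
global_inv = np.full((4, 4), 2.0)       # kt per cell, global estimate
regional_inv = np.full((4, 4), np.nan)  # NaN = outside the regional domain
regional_inv[:2, :2] = 3.5              # a regional inventory covers 4 cells

mosaic = np.where(np.isnan(regional_inv), global_inv, regional_inv)
total_kt = float(mosaic.sum())
```

In practice each substance, sector, month, and year gets its own such mosaic on the common 0.1° × 0.1° grid described in the abstract.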
Circuit explained: How does a transformer perform compositional generalization
by Lake, Brenden; Jazayeri, Mehrdad; Tang, Cheng
in Ablation; Algorithms; Biology and Life Sciences
2026
Compositional generalization—the systematic combination of known components into novel structures—is fundamental to flexible human cognition, yet the mechanisms that enable it in neural networks remain poorly understood in both machine learning and cognitive science. [1] showed that a compact encoder-decoder transformer can achieve simple forms of compositional generalization in a sequence arithmetic task. In this work, we identify and mechanistically interpret the circuit responsible for this behavior in such a model. Using causal ablations, we isolate the circuit and show that this understanding enables precise activation edits to steer the model’s outputs predictably. We find that the circuit performs function composition without encoding the specific semantics of any given function—instead, it leverages a disentangled representation of token position and identity to apply a general token remapping rule across an entire family of functions. Although the circuit mechanism was identified in a limited number of small-scale models with a synthetic task, it sheds light on how symbolic compositionality can emerge in transformers and offers testable hypotheses for similar mechanisms in large-scale models. Code for the model and analysis is publicly available.
Journal Article
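The causal-ablation pattern used to isolate the circuit is: zero out a component, rerun the model, and compare outputs. It can be shown on a toy stand-in model (not the paper's transformer; the two "heads" here are just random linear maps, and only the ablate-and-compare logic is the point):

```python
import numpy as np

# Causal-ablation pattern on a toy additive model: the output is the sum
# of two "head" contributions, and ablating one reveals its causal effect.
rng = np.random.default_rng(0)
x = rng.normal(size=8)        # a stand-in activation vector
w_head1 = rng.normal(size=8)  # hypothetical head 1 readout
w_head2 = rng.normal(size=8)  # hypothetical head 2 readout

def model(x, ablate_head1=False):
    h1 = 0.0 if ablate_head1 else float(w_head1 @ x)
    h2 = float(w_head2 @ x)
    return h1 + h2  # output is the sum of head contributions

effect = model(x) - model(x, ablate_head1=True)  # head 1's causal effect
```

In the paper's setting the same comparison is done on attention heads and MLP blocks inside a transformer, and the isolated components are then edited directly to steer outputs.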
Pore‐Scale Modeling of Water and Ion Diffusion in Partially Saturated Clays
2024
An accurate mechanistic understanding of solute diffusion in partially saturated clays is critical for assessing the safety of deep geological repositories for radioactive waste. In this study, a pore‐scale numerical framework is developed to simulate water and ion diffusion in partially saturated clays. First, the two‐phase Shan‐Chen Lattice Boltzmann method is employed to establish the liquid‐gas distribution in a reconstructed three‐dimensional pore geometry of a clay. An equivalent solute method is also developed and validated to improve the numerical stability of the solution at the liquid/gas interface corresponding to steep variations of the concentration and diffusion coefficient of the water tracer. By using a mobility‐distance relationship from molecular simulations, Fick's law is numerically solved to simulate water diffusion in nanopores, while the coupled Poisson‐Boltzmann‐Nernst‐Planck equations are solved to simulate ion diffusion under the influence of the electrical double layer (EDL). Our model reveals that the decrease of relative effective diffusion coefficients during the desaturation is more pronounced for ions than for water, due to the additional transport pathway of water tracers in the gas phase. The obtained effective diffusion coefficients of tritiated water and ions agree well with reported data from compacted sedimentary rocks. By comparing the local electric potential and the distribution of ion concentrations in single pores, the simulation results suggest that the EDL in unsaturated clays has a more complex influence on ion distribution than under fully water‐saturated conditions. This study provides critical insights into the coupled transport processes of solutes in partially saturated clays. 
Key Points:
- Development of a pore‐scale numerical framework to simulate water and ion diffusion in partially saturated clays
- Derivation of an equivalent solute method to improve the numerical stability caused by the discontinuities at the water/vapor interface
- The electrical double layer has a stronger effect on ion transport under unsaturated conditions than under water‐saturated conditions
Journal Article
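The Fickian core of the framework, dc/dt = D d²c/dx², can be sketched with an explicit 1-D finite-difference scheme. This is a minimal stand-in for the paper's 3-D lattice Boltzmann and Poisson‐Boltzmann‐Nernst‐Planck machinery, with invented grid and coefficient values:

```python
import numpy as np

# Explicit 1-D finite-difference sketch of Fick's second law,
# dc/dt = D * d2c/dx2. Grid, coefficient, and initial pulse are invented.
D = 1.0e-9              # diffusion coefficient, m^2/s (water-tracer order)
dx = 1.0e-6             # grid spacing, m
dt = 0.2 * dx * dx / D  # explicit scheme is stable for dt <= 0.5 dx^2 / D

c = np.zeros(51)
c[25] = 1.0             # tracer pulse in the middle of the pore

for _ in range(200):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / (dx * dx)
    c = c + D * dt * lap  # endpoints stay at c = 0 (Dirichlet boundaries)

peak = float(c.max())   # the pulse spreads, so the peak drops below 1
```

The paper's effective diffusion coefficients come from solving this kind of transport in a realistic partially saturated pore geometry, with ion transport additionally coupled to the electrical double layer.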