Catalogue Search | MBRL
22 result(s) for "Arabas, Sylwester"
Toward a Numerical Benchmark for Warm Rain Processes
2023
The Kinematic Driver-Aerosol (KiD-A) intercomparison was established to test the hypothesis that detailed warm microphysical schemes provide a benchmark for lower-complexity bulk microphysics schemes. KiD-A is the first intercomparison to compare multiple Lagrangian cloud models (LCMs), size bin-resolved schemes, and double-moment bulk microphysics schemes in a consistent 1D dynamic framework and box cases. In the absence of sedimentation and collision–coalescence, the drop size distributions (DSDs) from the LCMs exhibit similar evolution with expected physical behaviors and good interscheme agreement, with the volume mean diameter ( D vol ) from the LCMs within 1%–5% of each other. In contrast, the bin schemes exhibit nonphysical broadening with condensational growth. These results further strengthen the case that LCMs are an appropriate numerical benchmark for DSD evolution under condensational growth. When precipitation processes are included, however, the simulated liquid water path, precipitation rates, and response to modified cloud drop/aerosol number concentrations from the LCMs vary substantially, while the bin and bulk schemes are relatively more consistent with each other. The lack of consistency in the LCM results stems from both the collision–coalescence process and the sedimentation process, limiting their application as a numerical benchmark for precipitation processes. Reassuringly, however, precipitation from bulk schemes, which are the basis for cloud microphysics in weather and climate prediction, is within the spread of precipitation from the detailed schemes (LCMs and bin). Overall, this intercomparison identifies the need for focused effort on the comparison of collision–coalescence methods and sedimentation in detailed microphysics schemes, especially LCMs.
Journal Article
Training Warm‐Rain Bulk Microphysics Schemes Using Super‐Droplet Simulations
2024
Cloud microphysics is a critical aspect of the Earth's climate system, which involves processes at the nano‐ and micrometer scales of droplets and ice particles. In climate modeling, cloud microphysics is commonly represented by bulk models, which contain simplified process rates that require calibration. This study presents a framework for calibrating warm‐rain bulk schemes using high‐fidelity super‐droplet simulations that provide a more accurate and physically based representation of cloud and precipitation processes. The calibration framework employs ensemble Kalman methods including Ensemble Kalman Inversion and Unscented Kalman Inversion to calibrate bulk microphysics schemes with probabilistic super‐droplet simulations. We demonstrate the framework's effectiveness by calibrating a single‐moment bulk scheme, resulting in a reduction of data‐model mismatch by more than 75% compared to the model with initial parameters. Thus, this study demonstrates a powerful tool for enhancing the accuracy of bulk microphysics schemes in atmospheric models and improving climate modeling.
Plain Language Summary
Cloud microphysics is a complex set of processes that determine the formation and evolution of particles in clouds, which affects the Earth's climate by regulating precipitation and cloud cover. However, the vast difference in scale between the microphysics and large‐scale atmospheric flows makes it impossible to simulate these processes in climate models directly. Instead, climate models use simplified methods to represent cloud microphysics, which can result in inaccuracies. In this study, we focus on calibrating the simplified models with more detailed simulations of cloud microphysics using the super‐droplet method. We demonstrate a framework for calibrating the simplified models using high‐fidelity simulations, which improves the accuracy of these models.
Key Points:
- A calibration framework for warm‐rain bulk microphysics parameterizations is presented
- The framework relies on a library of super‐droplet simulations of a rain shaft
- Calibrating a single‐moment microphysics scheme with the calibration framework substantially reduces the model‐data mismatch
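The ensemble Kalman calibration loop described in the abstract can be sketched in a few lines. The following is a minimal, self-contained illustration, not the paper's actual setup: a scalar decay rate in a toy "bulk" forward model is recovered from synthetic noisy observations via Ensemble Kalman Inversion; the forward model, parameter, and all numerical values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, t):
    """Toy 'bulk scheme': exponential rain-water decay with rate theta."""
    return np.exp(-theta * t)

t_obs = np.linspace(0.1, 2.0, 8)
theta_true = 1.3
noise_std = 0.01
y = forward(theta_true, t_obs) + noise_std * rng.normal(size=t_obs.size)
gamma = noise_std**2 * np.eye(t_obs.size)   # observation-noise covariance

n_ens = 50
theta = rng.uniform(0.2, 3.0, size=n_ens)   # initial parameter ensemble

for _ in range(20):                          # EKI iterations
    g = np.array([forward(th, t_obs) for th in theta])   # (n_ens, n_obs)
    dth = theta - theta.mean()
    dg = g - g.mean(axis=0)
    c_tg = dth @ dg / n_ens                  # parameter-output cross-covariance
    c_gg = dg.T @ dg / n_ens                 # output covariance
    # Kalman-type update with perturbed observations per ensemble member
    y_pert = y + noise_std * rng.normal(size=(n_ens, t_obs.size))
    k = np.linalg.solve(c_gg + gamma, c_tg)  # gain vector
    theta = theta + (y_pert - g) @ k
```

After the iterations the ensemble collapses around the parameter value that best reproduces the observations; in practice (as in the paper) the forward model is a full super-droplet rain-shaft simulation and the parameter vector is multi-dimensional.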
Journal Article
Breakups are complicated: an efficient representation of collisional breakup in the superdroplet method
2023
A key constraint of particle-based methods for modeling cloud microphysics is the conservation of total particle number, which is required for computational tractability. The process of collisional breakup poses a particular challenge to this framework, as breakup events often produce many droplet fragments of varying sizes, which would require creating new particles in the system. This work introduces a representation of collisional breakup in the so-called “superdroplet” method which conserves the total number of superdroplets in the system. This representation extends an existing stochastic collisional-coalescence scheme and samples from a fragment size distribution in an additional Monte Carlo step. This method is demonstrated in a set of idealized box model and single-column warm-rain simulations. We further discuss the effects of the breakup dynamic and fragment size distribution on the particle size distribution, hydrometeor population, and microphysical process rates. Box model experiments serve to characterize the impacts of properties such as coalescence efficiency and fragmentation function on the relative roles of collisional breakup and coalescence. The results demonstrate that this representation of collisional breakup can produce a stationary particle size distribution, in which breakup and coalescence rates are approximately equal, and that it recovers expected behavior such as a reduction in precipitate-sized particles in the column model. The breakup algorithm presented here contributes to an open-source pythonic implementation of the superdroplet method, PySDM, which will facilitate future research using particle-based microphysics.
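A minimal sketch of a superdroplet-count-conserving breakup update in the spirit of the abstract above, assuming equal-mass fragments and a uniformly sampled fragment number per collision. All attribute names and values are illustrative; the coalescence branch and the equal-multiplicity edge case are omitted for brevity (the paper's scheme in PySDM handles both).

```python
import numpy as np

rng = np.random.default_rng(1)

# superdroplet state: multiplicities xi and per-droplet masses x (arbitrary units)
n_sd = 128
xi = rng.integers(1_000, 10_000, size=n_sd).astype(np.int64)
x = rng.uniform(1.0, 2.0, size=n_sd)

def breakup_step(xi, x):
    """One Monte Carlo step in which every sampled pair undergoes breakup;
    conserves both the superdroplet count and the total water mass."""
    order = rng.permutation(len(xi))
    for j, k in zip(order[0::2], order[1::2]):
        if xi[j] < xi[k]:
            j, k = k, j                     # ensure xi[j] >= xi[k]
        if xi[j] == xi[k]:
            continue                        # degenerate case skipped for brevity
        n_frag = int(rng.integers(2, 6))    # extra Monte Carlo draw: fragment count
        xi[j] -= xi[k]                      # xi[k] droplet pairs collide and break up
        x[k] = (x[j] + x[k]) / n_frag       # equal-mass fragments (toy choice)
        xi[k] *= n_frag                     # multiplicity absorbs the new fragments

mass0 = float(np.sum(xi * x))
for _ in range(10):
    breakup_step(xi, x)
```

The key trick is that fragments are absorbed into the multiplicity of an existing superdroplet rather than allocated as new particles; replacing the fixed-range draw with a sample from a physical fragment-size distribution recovers the additional Monte Carlo step described above.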
Journal Article
Large-Eddy Simulations of Trade Wind Cumuli Using Particle-Based Microphysics with Monte Carlo Coalescence
2013
A series of simulations employing the superdroplet method (SDM) for representing aerosol, cloud, and rain microphysics in large-eddy simulations (LES) is discussed. The particle-based formulation treats all particles in the same way, subjecting them to condensational growth and evaporation, transport of the particles by the flow, gravitational settling, and collisional growth. SDM features a Monte Carlo–type numerical scheme for representing the collision and coalescence process. All processes combined cover representation of cloud condensation nuclei (CCN) activation, drizzle formation by autoconversion, accretion of cloud droplets, self-collection of raindrops, and precipitation, including aerosol wet deposition. The model setup used in the study is based on observations from the Rain in Cumulus over the Ocean (RICO) field project. Cloud and rain droplet size spectra obtained in the simulations are discussed in context of previously published analyses of aircraft observations carried out during RICO. The analysis covers height-resolved statistics of simulated cloud microphysical parameters such as droplet number concentration, effective radius, and parameters describing the width of the cloud droplet size spectrum. A reasonable agreement with measurements is found for several of the discussed parameters. The sensitivity of the results to the grid resolution of the LES, as well as to the sampling density of the probabilistic Monte Carlo–type model, is explored.
Journal Article
On the CCN (de)activation nonlinearities
2017
We take into consideration the evolution of particle size in a monodisperse aerosol population during activation and deactivation of cloud condensation nuclei (CCN). Our analysis reveals that the system undergoes a saddle-node bifurcation and a cusp catastrophe. The control parameters chosen for the analysis are the relative humidity and the particle concentration. An analytical estimate of the activation timescale is derived through estimation of the time spent in the saddle-node bifurcation bottleneck. Numerical integration of the system coupled with a simple air-parcel cloud model portrays two types of activation/deactivation hystereses: one associated with the kinetic limitations on droplet growth when the system is far from equilibrium, and one occurring close to equilibrium and associated with the cusp catastrophe. We discuss the presented analyses in context of the development of particle-based models of aerosol–cloud interactions in which activation and deactivation impose stringent time-resolution constraints on numerical integration.
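The activation-timescale estimate mentioned above rests on the classic bottleneck scaling of the saddle-node normal form; a generic sketch (not the paper's specific derivation) reads:

```latex
% Normal form of a saddle-node bifurcation: \dot{x} = r + x^2.
% For r > 0 (just past the bifurcation) the passage time through the
% bottleneck near x = 0 dominates the dynamics:
T \approx \int_{-\infty}^{\infty} \frac{\mathrm{d}x}{r + x^2}
  = \frac{\pi}{\sqrt{r}} ,
% so the (de)activation timescale diverges as r^{-1/2} as the control
% parameter approaches the bifurcation point -- the origin of the
% time-resolution constraints noted in the abstract.
```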
Journal Article
Immersion Freezing in Particle‐Based Aerosol‐Cloud Microphysics: A Probabilistic Perspective on Singular and Time‐Dependent Models
2025
Cloud droplets containing immersed ice‐nucleating particles (INPs) may freeze at temperatures above the homogeneous freezing threshold temperature in a process referred to as immersion freezing. In modeling studies, immersion freezing is often described using either so‐called "singular" or "time‐dependent" parameterizations. Here, we compare both approaches and discuss them in the context of probabilistic particle‐based (super‐droplet) cloud microphysics modeling. First, using a box model, we contrast how both parameterizations respond to idealized ambient cooling rate profiles and quantify the impact of the polydispersity of the immersed surface spectrum on the frozen fraction evolution. Presented simulations highlight that the singular approach, constituting a time‐integrated form of a more general time‐dependent approach, is only accurate under a limited range of ambient cooling rates. The time‐dependent approach is free from this limitation. Second, using a prescribed‐flow two‐dimensional cloud model, we illustrate the macroscopic differences in the evolution in time of ice particle concentrations in simulations with flow regimes relevant to ambient cloud conditions. The flow‐coupled aerosol‐budget‐resolving simulations highlight the benefits and challenges of modeling cloud condensation nuclei activation and immersion freezing on insoluble ice nuclei with super‐particle methods. The challenges stem, on the one hand, from heterogeneous ice nucleation being contingent on the presence of relatively sparse immersed INPs, and on the other hand, from the need to represent a vast population of particles with relatively few so‐called super particles (each representing a multiplicity of real particles). We discuss the critical role of the sampling strategy for particle attributes, including the INP size, the freezing temperature (for singular scheme) and the multiplicity.
Plain Language Summary
Clouds are composed of water droplets and/or ice particles.
One of the ways ice forms in clouds is immersion freezing, in which an ice nucleus immersed in a liquid water droplet triggers freezing. Without the presence of nuclei (grains of minerals, proteins or organic layers), freezing requires a lower temperature. Here, we focus on ways of including the immersion freezing process in computer simulations of clouds. We discuss a recurrent question in this field of research, namely the role of time in the freezing process. The tool we use is a so‐called particle‐based simulation, which relies on tracking simulation particles, each representing a large number of droplets or ice particles. We confirm that treating immersion freezing as characterized by a chance of freezing per unit of time (rather than the alternative approach involving a fixed freezing temperature for each droplet) is technically feasible. It makes the simulations costlier but more robust to diverse ambient cooling rates. The cooling rate (how fast the ambient temperature changes from the perspective of a droplet) may differ from the conditions in laboratory experiments that form the basis for deriving model parameters. Consequently, the achieved robustness is expected to improve the fidelity of models.
Key Points:
- Discussion of origins, congruence and limitations of the time‐dependent and the singular immersion freezing modeling approaches
- Comparison of time‐dependent water‐activity‐based and singular active‐sites parameterizations, both cast as particle‐based Monte‐Carlo schemes
- Zero- and 2-dimensional particle‐based simulations suggest time‐dependent freezing schemes are suitable for a wide range of cooling rates
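The time-dependent (stochastic) approach contrasted above can be illustrated with a toy Monte Carlo box model in which each super-particle freezes with probability 1 − exp(−J·A·Δt) per step. The nucleation-rate coefficient, INP surface area, and cooling profile below are placeholders, not a fitted parameterization from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000            # super-particles, one immersed INP each
area = 1e-9           # INP surface area per droplet [m^2] (toy value)
dt = 1.0              # time step [s]
cooling_rate = 0.01   # constant ambient cooling [K/s]

def j_het(temp_c):
    """Toy heterogeneous nucleation-rate coefficient [m^-2 s^-1];
    grows as temperature drops (placeholder, not a fitted formula)."""
    return 1e5 * np.exp(-0.5 * (temp_c + 15.0))

# a singular scheme would instead assign each particle a fixed freezing
# temperature once and freeze it deterministically when T falls below it
frozen = np.zeros(n, dtype=bool)
temp_c = -5.0
fractions = []
for _ in range(2000):
    temp_c -= cooling_rate * dt
    p = 1.0 - np.exp(-j_het(temp_c) * area * dt)   # per-step freezing probability
    frozen |= rng.random(n) < p
    fractions.append(frozen.mean())
```

Because the freezing probability depends on the instantaneous temperature and time step, the same code responds consistently to arbitrary cooling-rate profiles, which is the robustness property the abstract attributes to the time-dependent approach.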
Journal Article
On numerical broadening of particle-size spectra: a condensational growth study using PyMPDATA 1.0
by Baumgartner, Manuel; Unterstrasser, Simon; Olesik, Michael A
in Advection, Aerosols, Algorithms
2022
This work discusses the numerical aspects of representing the condensational growth of particles in models of aerosol systems such as atmospheric clouds. It focuses on the Eulerian modelling approach, in which fixed-bin discretisation is used for the probability density function describing the particle-size spectrum. Numerical diffusion is inherent to the employment of the fixed-bin discretisation for solving the arising transport problem (advection equation describing size spectrum evolution). The focus of this work is on a technique for reducing the numerical diffusion in solutions based on the upwind scheme: the multidimensional positive definite advection transport algorithm (MPDATA). Several MPDATA variants are explored including infinite-gauge, non-oscillatory, third-order terms and recursive antidiffusive correction (double-pass donor cell, DPDC) options. Methodologies for handling coordinate transformations associated with both particle-size spectrum coordinate choice and with numerical grid layout choice are expounded. Analysis of the performance of the scheme for different discretisation parameters and different settings of the algorithm is performed using (i) an analytically solvable box-model test case and (ii) the single-column kinematic driver (“KiD”) test case in which the size-spectral advection due to condensation is solved simultaneously with the advection in the vertical spatial coordinate, and in which the supersaturation evolution is coupled with the droplet growth through water mass budget. The box-model problem covers size-spectral dynamics only; no spatial dimension is considered. The single-column test case involves a numerical solution of a two-dimensional advection problem (spectral and spatial dimensions). The discussion presented in the paper covers size-spectral, spatial and temporal convergence as well as computational cost, conservativeness and quantification of the numerical broadening of the particle-size spectrum. 
The box-model simulations demonstrate that, compared with upwind solutions, even a 10-fold decrease in the spurious numerical spectral broadening can be obtained by an apt choice of the MPDATA variant (maintaining the same spatial and temporal resolution), yet at an increased computational cost. Analyses using the single-column test case reveal that the width of the droplet size spectrum is affected by numerical diffusion pertinent to both spatial and spectral advection. Application of even a single corrective iteration of MPDATA robustly decreases the relative dispersion of the droplet spectrum, roughly by a factor of 2 at the levels of maximal liquid water content. Presented simulations are carried out using PyMPDATA – a new open-source Python implementation of MPDATA based on the Numba just-in-time compilation infrastructure.
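The upwind-plus-antidiffusive-correction idea at the core of MPDATA can be sketched for 1D constant-velocity advection of a narrow "spectrum". This minimal version is not PyMPDATA itself: it implements only the basic single corrective iteration (no infinite-gauge, non-oscillatory, third-order or DPDC variants) with periodic boundaries and toy parameters.

```python
import numpy as np

def upwind_step(psi, c):
    """Donor-cell (upwind) step, periodic BC; c holds Courant numbers at walls i+1/2."""
    flux = np.maximum(c, 0) * psi + np.minimum(c, 0) * np.roll(psi, -1)
    return psi - (flux - np.roll(flux, 1))

def mpdata_step(psi, c, eps=1e-15):
    """Upwind pass followed by one antidiffusive corrective pass (basic MPDATA)."""
    psi = upwind_step(psi, c)
    psi_r = np.roll(psi, -1)
    # antidiffusive pseudo-velocity derived from the upwind scheme's truncation error
    c_anti = (np.abs(c) - c**2) * (psi_r - psi) / (psi_r + psi + eps)
    return upwind_step(psi, c_anti)

nx, c = 100, 0.5
x = np.arange(nx)
psi0 = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)   # narrow initial "size spectrum"

psi_up, psi_mp = psi0.copy(), psi0.copy()
for _ in range(100):
    psi_up = upwind_step(psi_up, c)
    psi_mp = mpdata_step(psi_mp, c)
```

Both solutions conserve the total (flux form), but the corrected solution retains a visibly sharper peak, illustrating the reduction in spurious numerical broadening that the abstract quantifies.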
Journal Article
On the design of Monte-Carlo particle coagulation solver interface: a CPU/GPU Super-Droplet Method case study with PySDM
2021
The Super-Droplet Method (SDM) is a probabilistic Monte-Carlo-type model of the particle coagulation process and an alternative to the mean-field formulation of Smoluchowski. As an algorithm, SDM has linear computational complexity with respect to the state vector length, the state vector length is constant throughout the simulation, and most of the algorithm steps are readily parallelizable. This paper discusses the design and implementation of two number-crunching backends for SDM implemented in PySDM, a new free and open-source Python package for simulating the dynamics of atmospheric aerosol, cloud and rain particles. The two backends share their application programming interface (API) but leverage distinct parallelism paradigms, target different hardware, and are built on top of different lower-level routine sets. The first offers multi-threaded CPU computations and is based on Numba (using NumPy arrays). The second offers GPU computations and is built on top of ThrustRTC and CURandRTC (and does not use NumPy arrays). In the paper, the API is discussed with a focus on: data dependencies across steps, parallelisation opportunities, CPU and GPU implementation nuances, and algorithm workflow. Example simulations suitable for validating implementations of the API are presented.
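The linear-complexity pair-sampling step at the heart of SDM can be sketched as follows. Kernel, volume, and multiplicities are toy values, the equal-multiplicity edge case is skipped, and real implementations (such as the PySDM backends discussed in the paper) vectorize this loop for the CPU and GPU.

```python
import numpy as np

rng = np.random.default_rng(3)

n_sd = 256       # number of superdroplets (constant throughout the simulation)
dv = 1e12        # coalescence volume (toy value, arbitrary units)
dt = 1.0
kernel = 400.0   # constant collision kernel (toy value)

xi = rng.integers(10_000, 100_000, size=n_sd).astype(np.int64)  # multiplicities
x = rng.uniform(1.0, 2.0, size=n_sd)                            # droplet masses

def sdm_step(xi, x):
    """One SDM coalescence step: random non-overlapping pairs, upscaled probability."""
    order = rng.permutation(n_sd)
    n_pairs = n_sd // 2
    # compensate for sampling only n/2 of the n(n-1)/2 possible pairs
    scale = (n_sd * (n_sd - 1) / 2) / n_pairs
    for j, k in zip(order[0::2], order[1::2]):
        if xi[j] < xi[k]:
            j, k = k, j                          # ensure xi[j] >= xi[k]
        if xi[j] == xi[k]:
            continue                             # degenerate case omitted for brevity
        p = scale * kernel * xi[j] * dt / dv     # must stay << 1 in a valid setup
        if rng.random() < p:
            xi[j] -= xi[k]                       # xi[k] droplet pairs coalesce
            x[k] = x[j] + x[k]

xi0_sum = int(xi.sum())
mass0 = float(np.sum(xi * x))
for _ in range(50):
    sdm_step(xi, x)
```

The state vector length never changes and each step touches only n/2 pairs, which is the linear-complexity, constant-memory property the abstract highlights; the per-pair independence is also what makes the step easy to parallelize on either backend.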
Derivative pricing as a transport problem: MPDATA solutions to Black-Scholes-type equations
2018
We discuss in this note applications of the Multidimensional Positive Definite Advection Transport Algorithm (MPDATA) to numerical solutions of partial differential equations arising from stochastic models in quantitative finance. In particular, we develop a framework for solving Black-Scholes-type equations by first transforming them into advection-diffusion problems and then numerically integrating them using an iterative explicit finite-difference approach, in which the Fickian term is represented as an additional advective term. We discuss the correspondence between transport phenomena and financial models, uncovering the possibility of expressing the no-arbitrage principle as a conservation law. We demonstrate second-order accuracy in time and space of the adopted numerical scheme in a convergence analysis comparing MPDATA numerical solutions with classic Black-Scholes analytical formulae for the valuation of European options. We additionally demonstrate a way of applying MPDATA to solve the free boundary problem (leading to a linear complementarity problem) for the valuation of American options. We finally comment on the potential of the MPDATA framework with respect to being applied in tandem with more complex models typically used in quantitative finance.
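The PDE-solving correspondence described above can be checked with an ordinary explicit finite-difference march (a plain FTCS scheme here, not MPDATA, and not the paper's transformed advection-diffusion formulation) against the analytic Black-Scholes formula for a European call; all parameter values are illustrative.

```python
import math
import numpy as np

# European call, toy parameters
K, r, sigma, T = 100.0, 0.05, 0.2, 1.0
s_max, n_s, n_t = 300.0, 151, 2000
s = np.linspace(0.0, s_max, n_s)
ds, dt = s[1] - s[0], T / n_t       # dt satisfies the explicit stability bound

v = np.maximum(s - K, 0.0)          # terminal payoff, marched backwards in time
for step in range(n_t):
    tau = (step + 1) * dt           # time to expiry after this step
    v_ss = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds**2
    v_s = (v[2:] - v[:-2]) / (2 * ds)
    rhs = 0.5 * sigma**2 * s[1:-1]**2 * v_ss + r * s[1:-1] * v_s - r * v[1:-1]
    v[1:-1] += dt * rhs
    v[0] = 0.0                                  # call is worthless at S = 0
    v[-1] = s_max - K * math.exp(-r * tau)      # deep in-the-money asymptote

# analytic Black-Scholes price at S = K (at the money)
N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
d1 = (math.log(1.0) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
analytic = K * N(d1) - K * math.exp(-r * T) * N(d2)
fd = float(np.interp(K, s, v))
```

The numerical and analytic prices agree closely, mirroring the validation strategy of the paper, which swaps this plain scheme for MPDATA applied to the transformed advection-diffusion form of the equation.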
Enabling MPI communication within Numba/LLVM JIT-compiled Python code using numba-mpi v1.0
by Bulenok, Oleksii; Derlatka, Kacper; Manna, Maciej
in Application programming interface, Arrays, Building codes
2024
The numba-mpi package offers access to the Message Passing Interface (MPI) routines from Python code that uses the Numba just-in-time (JIT) compiler. As a result, high-performance and multi-threaded Python code may utilize MPI communication facilities without leaving the JIT-compiled code blocks, which is not possible with the mpi4py package, a higher-level Python interface to MPI. For debugging purposes, numba-mpi retains full functionality of the code even if the JIT compilation is disabled. The numba-mpi API constitutes a thin wrapper around the C API of MPI and is built around NumPy arrays, including handling of non-contiguous views over array slices. Project development is hosted at GitHub leveraging the mpi4py/setup-mpi workflow enabling continuous integration tests on Linux (MPICH, OpenMPI & Intel MPI), macOS (MPICH & OpenMPI) and Windows (MS MPI). The paper covers an overview of the package features, architecture and performance. As of v1.0, the following MPI routines are exposed and covered by unit tests: size/rank, [i]send/[i]recv, wait[all|any], test[all|any], allreduce, bcast, barrier, scatter/[all]gather & wtime. The package is implemented in pure Python and depends on numpy, numba and mpi4py (the latter used at initialization and as a source of utility routines only). The performance advantage of using numba-mpi compared to mpi4py is depicted with a simple example, with the entirety of the code included in listings discussed in the text. Application of numba-mpi for handling domain decomposition in numerical solvers for partial differential equations is presented using two external packages that depend on numba-mpi: py-pde and PyMPDATA-MPI.