Catalogue Search | MBRL
11 result(s) for "Dueben, P."
A Comparison of Data‐Driven Approaches to Build Low‐Dimensional Ocean Models
2021
We present a comprehensive inter‐comparison of linear regression (LR), stochastic, and deep‐learning approaches for reduced‐order statistical emulation of ocean circulation. The reference data set is provided by an idealized, eddy‐resolving, double‐gyre ocean circulation model. Our goal is to conduct a systematic and comprehensive assessment and comparison of skill, cost, and complexity of statistical models from the three methodological classes. The model based on LR is considered as a baseline. Additionally, we investigate its additive white noise augmentation and a multi‐level stochastic approach, deep‐learning methods, hybrid frameworks (LR plus deep‐learning), and simple stochastic extensions of deep‐learning and hybrid methods. The assessment metrics considered are: root mean squared error, anomaly cross‐correlation, climatology, variance, frequency map, forecast horizon, and computational cost. We found that the multi‐level linear stochastic approach performs the best for both short‐ and long‐timescale forecasts. The deep‐learning hybrid models augmented by additive state‐dependent white noise came second, while their deterministic counterparts failed to reproduce the characteristic frequencies in climate‐range forecasts. Pure deep learning implementations performed worse than LR and its simple white noise augmentation. Skills of LR and its white noise extension were similar on short timescales, but the latter performed better on long timescales, while LR‐only outputs decay to zero for long simulations. Overall, our analysis promotes multi‐level LR stochastic models with memory effects, and hybrid models with linear dynamical core augmented by additive stochastic terms learned via deep learning, as a more practical, accurate, and cost‐effective option for ocean emulation than pure deep‐learning solutions. 
Plain Language Summary: In weather and climate predictions, scientists use comprehensive ocean circulation models to represent the effects of the oceans on the atmosphere. These models simulate the three‐dimensional ocean dynamics using millions of variables and, thus, require significant computational resources and running time. Therefore, there is a need for low‐cost, data‐driven ocean models with fewer variables that can reproduce essential oceanic circulations with reasonable accuracy. There are several popular data‐driven approaches to build these models, but singling out the best one is difficult and significantly understudied. We have systematically assessed and compared the accuracy, stability, and computational cost of various data‐driven models against linear regression, a fundamental and easy‐to‐implement deterministic model, that is, one that provides a fixed output for a fixed input. We considered several stochastic and deep‐learning models for comparison; stochastic models combine a deterministic model with customized noise, whereas deep‐learning models train a complex network of neurons loosely analogous to the human brain. We found that the stochastic models that properly include the core dynamics, time‐delay effects, and model errors perform the best. The core dynamics provides the essential changes, time‐delay effects are the changes due to correlation between successive ocean states, and model errors account for other possible causes of changes.

Key Points:
- The multi‐level stochastic approach produces the most stable, accurate, and low‐cost emulator of a double‐gyre ocean model solution
- Artificial neural networks and long short‐term memory work better in a hybrid form with linear regression, providing the core dynamics, than in their standalone application
- Emulators incorporating memory effects and state‐dependent noise show enhanced performance, and deep learning can learn these effects
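The LR baseline and its additive white‐noise augmentation compared in this abstract can be sketched in a few lines. The following is a toy illustration on synthetic one‐dimensional data: the damped signal, coefficients, and noise amplitude are assumptions for illustration, not the paper's double‐gyre setup or its multi‐level scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: a damped oscillation standing in for a low-dimensional
# ocean state (a hypothetical stand-in for the paper's reduced-order fields).
t = np.arange(2000)
x = np.cos(0.05 * t) * np.exp(-1e-4 * t) + 0.01 * rng.standard_normal(t.size)

# Linear-regression emulator: fit x[n+1] ~ a * x[n] + b (the LR baseline).
X, y = x[:-1], x[1:]
a, b = np.polyfit(X, y, 1)

# The residual spread sets the amplitude of the additive white-noise term.
resid = y - (a * X + b)
sigma = resid.std()

def emulate(x0, steps, stochastic=True):
    """Roll the emulator forward; optionally add white noise each step."""
    out = [x0]
    for _ in range(steps):
        nxt = a * out[-1] + b
        if stochastic:
            nxt += sigma * rng.standard_normal()
        out.append(nxt)
    return np.array(out)

det = emulate(x[0], 500, stochastic=False)
sto = emulate(x[0], 500, stochastic=True)
# The deterministic rollout decays toward a fixed point, while the stochastic
# rollout retains variability -- the qualitative contrast the paper reports
# between LR-only outputs and the white-noise extension on long timescales.
```

The multi‐level stochastic approach the paper favours additionally conditions the noise on past states (memory effects); this sketch shows only the simplest white‐noise case.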
Journal Article
Outcomes of the WMO Prize Challenge to Improve Subseasonal to Seasonal Predictions Using Artificial Intelligence
2022
There is a high demand and expectation for subseasonal to seasonal (S2S) prediction, which provides forecasts beyond 2 weeks, but less than 3 months ahead. To assess the potential benefit of artificial intelligence (AI) methods for S2S prediction through better postprocessing of ensemble prediction system outputs, the World Meteorological Organization (WMO) coordinated a prize challenge in 2021 to improve subseasonal prediction. The goal of this competition was to produce the most skillful forecasts of precipitation and 2-m temperature globally averaged over forecast weeks 3 and 4 and over weeks 5 and 6 for the year 2020 using artificial intelligence techniques. The top three submissions, described in this article, succeeded in producing S2S forecasts significantly more skillful than the bias-corrected ECMWF operational reference forecasts, particularly for precipitation, through improved calibration of the ECMWF raw forecast outputs or multimodel combination. These forecast improvements should benefit the use of S2S forecasts in applications.
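As a minimal, hypothetical illustration of the kind of post-processing the challenge targets, the sketch below applies a mean-bias correction to synthetic forecasts. All numbers are invented; the winning entries used considerably more sophisticated calibration of the ECMWF outputs and multimodel combination.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy data: raw ensemble-mean temperature forecasts with a
# systematic warm bias relative to the verifying observations.
truth = 15 + 3 * rng.standard_normal(200)           # verifying obs, past cases
raw = truth + 1.5 + 1.0 * rng.standard_normal(200)  # biased raw forecasts

# Simplest post-processing step: estimate the mean bias on a training
# period and subtract it at forecast time.
train = slice(0, 150)
bias = (raw[train] - truth[train]).mean()
calibrated = raw - bias

# Verify on held-out cases: calibration should reduce the RMSE.
test = slice(150, None)
rmse = lambda f, o: np.sqrt(((f - o) ** 2).mean())
raw_rmse = rmse(raw[test], truth[test])
cal_rmse = rmse(calibrated[test], truth[test])
```

The bias-corrected ECMWF reference forecasts in the challenge follow this general logic; the AI entries improved on it with learned, state-dependent calibration.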
Journal Article
Challenges and design choices for global weather and climate models based on machine learning
2018
Can models that are based on deep learning and trained on atmospheric data compete with weather and climate models that are based on physical principles and the basic equations of motion? This question has been asked often recently due to the boom in deep-learning techniques. The question is valid given the huge amount of data that are available, the computational efficiency of deep-learning techniques, and the limitations of today's weather and climate models, in particular with respect to resolution and complexity. In this paper, the question will be discussed in the context of global weather forecasts. A toy model for global weather predictions will be presented and used to identify challenges and fundamental design choices for a forecast system based on neural networks.
Journal Article
Deep learning for quality control of surface physiographic fields using satellite Earth observations
2023
A purposely built deep learning algorithm for the Verification of Earth System ParametERization (VESPER) is used to assess recent upgrades to the global physiographic datasets underpinning the quality of the Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF), which is used in both numerical weather prediction and climate reanalyses. A neural network regression model is trained to learn the mapping between the surface physiographic dataset, plus the main meteorological fields from ERA5, and the MODIS satellite skin temperature observations. Once trained, this tool is applied to rapidly assess the quality of upgrades to the physiographic fields used by land surface schemes. Upgrades which improve the prediction accuracy of the machine learning tool indicate a reduction in the errors in the surface fields used as input to the surface parameterization schemes. Conversely, incorrect specifications of the surface fields decrease the accuracy with which VESPER can make predictions. We apply VESPER to assess the accuracy of recent upgrades to the permanent lake and glacier covers, as well as of planned upgrades to represent seasonally varying water bodies (i.e. ephemeral lakes). We show that, for grid cells where the lake fields have been updated, the prediction accuracy of VESPER in the land surface temperature (as quantified by the mean absolute error) improves by 0.37 K on average, whilst for the subset of points where the lakes have been completely removed and replaced with bare ground, the improvement is 0.83 K. We also show that updates to the glacier cover improve the prediction accuracy by 0.22 K. We highlight how neural networks such as VESPER can assist the research and development of surface parameterizations and their input physiography to better represent Earth’s surface coupled processes in weather and climate models.
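The quality-control logic (an input-field upgrade that lowers the regression error signals better physiography) can be illustrated with a hypothetical linear toy model standing in for VESPER's neural network. Data, coefficients, and the "lake fraction" feature below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented stand-in for the VESPER idea: skin temperature depends on a
# surface field (say, lake fraction); if the model's field is wrong, a
# regressor trained on (surface field -> observed temperature) predicts worse.
n = 1000
true_lake = rng.uniform(0, 1, n)                             # "true" lake cover
skin_t = 290 - 5 * true_lake + 0.5 * rng.standard_normal(n)  # MODIS-like obs

def mae_with_input(lake_field):
    """Fit a linear map lake_field -> skin_t and return held-out MAE."""
    tr, te = slice(0, 700), slice(700, None)
    a, b = np.polyfit(lake_field[tr], skin_t[tr], 1)
    pred = a * lake_field[te] + b
    return np.abs(pred - skin_t[te]).mean()

old_field = np.clip(true_lake + 0.3 * rng.standard_normal(n), 0, 1)  # outdated
new_field = true_lake                                                # upgraded

mae_old = mae_with_input(old_field)
mae_new = mae_with_input(new_field)
# An upgrade that lowers the MAE signals better surface physiography,
# which is the quality-control criterion the paper applies.
```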
Journal Article
Systematic detection of local CH4 anomalies by combining satellite measurements with high-resolution forecasts
by Aben, Ilse; Agustí-Panareda, Anna; Ribas, Roberto
in Albedo, Anomalies, Anthropogenic factors
2021
In this study, we present a novel monitoring methodology that combines satellite retrievals and forecasts to detect local CH4 concentration anomalies worldwide. These anomalies are caused by rapidly changing anthropogenic emissions that significantly contribute to the CH4 atmospheric budget and by biases in the satellite retrieval data. The method uses high-resolution (7 km × 7 km) retrievals of total column CH4 from the TROPOspheric Monitoring Instrument (TROPOMI) on board the Sentinel 5 Precursor satellite. Observations are combined with high-resolution CH4 forecasts (∼ 9 km) produced by the Copernicus Atmosphere Monitoring Service (CAMS) to provide departures (observations minus forecasts) at close to the satellite's native resolution and at the appropriate time. Investigating these departures is an effective way to link satellite measurements and emission inventory data in a quantitative manner. We perform filtering on the departures to remove the synoptic-scale and meso-alpha-scale biases in both forecasts and satellite observations. We then apply a simple classification scheme to the filtered departures to detect anomalies and plumes that are missing (e.g. pipeline or facility leaks), underreported, or overreported (e.g. depleted drilling fields) in the CAMS emissions. The classification method also has some limitations, as it can flag anomalies that stem solely from local satellite retrieval biases linked to albedo and scattering issues.
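A schematic of the departure-and-filtering idea, using invented one-dimensional data in place of the TROPOMI/CAMS fields; the window length, threshold, and plume amplitude are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D field standing in for column CH4 along a satellite track (ppb).
n = 500
background = 1850 + 20 * np.sin(np.linspace(0, 4 * np.pi, n))  # large scales
obs = background + 2 * rng.standard_normal(n)
obs[240:250] += 40            # a local plume missing from the forecast
forecast = background.copy()  # forecast captures the large scales only

# Departures = observations minus forecasts, as in the paper.
departures = obs - forecast

# High-pass filter: subtract a running mean to strip residual large-scale
# biases, keeping only local anomalies (a crude stand-in for the paper's
# synoptic-scale and meso-alpha-scale filtering).
window = 51
smooth = np.convolve(departures, np.ones(window) / window, mode="same")
filtered = departures - smooth

# Simple classification: flag filtered departures beyond a fixed threshold.
anomalies = np.flatnonzero(np.abs(filtered) > 15)
# The flagged indices fall inside the inserted plume region.
```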
Journal Article
Model intercomparison of COSMO 5.0 and IFS 45r1 at kilometer-scale grid spacing
by Ban, Nikolina; Schär, Christoph; Wedi, Nils P.
in Approximation, Atmospheric models, Climate models
2021
The increase in computing power and recent model developments allow for the use of global kilometer-scale weather and climate models for routine forecasts. At these scales, deep convective processes can be partially resolved explicitly by the model dynamics. Next to horizontal resolution, other aspects such as the applied numerical methods, the use of the hydrostatic approximation, and time step size are factors that might influence a model's ability to resolve deep convective processes. In order to improve our understanding of the role of these factors, a model intercomparison between the nonhydrostatic COSMO model and the hydrostatic Integrated Forecast System (IFS) from ECMWF has been conducted. Both models have been run with different spatial and temporal resolutions in order to simulate 2 summer days over Europe with strong convection. The results are analyzed with a focus on vertical wind speed and precipitation. Results show that even at around 3 km horizontal grid spacing the effect of the hydrostatic approximation seems to be negligible. However, time step proves to be an important factor for deep convective processes, with a reduced time step generally allowing for higher updraft velocities and thus more energy in vertical velocity spectra, in particular for shorter wavelengths. A shorter time step also causes an earlier onset and peak of the diurnal cycle. Furthermore, the amount of horizontal diffusion plays a crucial role for deep convection, with more diffusion generally leading to larger convective cells and higher precipitation intensities. The study also shows that for both models the parameterization of deep convection leads to lower updraft and precipitation intensities and biases in the diurnal cycle, with a precipitation peak which is too early.
Journal Article
The digital revolution of Earth-system science
by Dueben, Peter D.; Wedi, Nils P.; Bauer, Peter
in Climate change, Computation, Data assimilation
2021
Computational science is crucial for delivering reliable weather and climate predictions. However, despite decades of high-performance computing experience, there is serious concern about the sustainability of this application in the post-Moore/Dennard era. Here, we discuss the present limitations in the field and propose the design of a novel infrastructure that is scalable and more adaptable to future, yet unknown computing architectures.
Journal Article
A Baseline for Global Weather and Climate Simulations at 1 km Resolution
by Wedi, Nils P.; Boussetta, Souhail; Quintino, Tiago
in Accuracy, atmosphere, Atmospheric circulation
2020
In an attempt to advance the understanding of the Earth's weather and climate by representing deep convection explicitly, we present a global, four‐month simulation (November 2018 to February 2019) with ECMWF's hydrostatic Integrated Forecasting System (IFS) at an average grid spacing of 1.4 km. The impact of explicitly simulating deep convection on the atmospheric circulation and its variability is assessed by comparing the 1.4 km simulation to the equivalent well‐tested and calibrated global simulations at 9 km grid spacing with and without parametrized deep convection. The explicit simulation of deep convection at 1.4 km results in a realistic large‐scale circulation, better representation of convective storm activity, and stronger convective gravity wave activity when compared to the 9 km simulation with parametrized deep convection. Comparison of the 1.4 km simulation to the 9 km simulation without parametrized deep convection shows that switching off deep convection parametrization at a too coarse resolution (i.e., 9 km) generates too strong convective gravity waves. Based on the limited statistics available, improvements to the Madden‐Julian Oscillation or tropical precipitation are not observed at 1.4 km, suggesting that other Earth system model components and/or their interaction are important for an accurate representation of these processes and may well need adjusting at deep convection resolving resolutions. Overall, the good agreement of the 1.4 km simulation with the 9 km simulation with parametrized deep convection is remarkable, despite one of the most fundamental parametrizations being turned off at 1.4 km resolution and despite no adjustments being made to the remaining parametrizations.

Plain Language Summary: We present the world's first global simulation of an entire season of the Earth's atmosphere with 1.4 km average grid spacing and the top of the modeled atmosphere as high as 80 km. Albeit only a single realization due to its considerable computational cost, the resulting model output provides a reference and guidance for future simulations. For illustration we compare to simulations at 9 km grid spacing that represent the state of the art in numerical weather prediction and are still considerably finer when compared to models that are used for climate projections today. Thanks to its unprecedented detail, the simulation output will support future model development and satellite mission planning and may be seen as a prototype contribution to a future digital twin of our Earth.

Key Points:
- A unique simulation with 1.4 km average grid spacing is presented for model development and process evaluation
- The 1.4 km simulation shows remarkable fidelity with respect to the well‐calibrated simulation at 9 km with parametrized deep convection
- Switching off deep convection at a too coarse resolution (9 km) generates too strong convective gravity waves
Journal Article
On the use of programmable hardware and reduced numerical precision in earth‐system modeling
by Düben, Peter D.; Niu, Xinyu; Palmer, T. N.
in Accuracy, Atmospheric Processes, Computational Geophysics
2015
Programmable hardware, in particular Field Programmable Gate Arrays (FPGAs), promises a significant increase in computational performance for simulations in geophysical fluid dynamics compared with CPUs of similar power consumption. FPGAs allow adjusting the representation of floating‐point numbers to specific application needs. We analyze the performance‐precision trade‐off on FPGA hardware for the two‐scale Lorenz '95 model. We scale the size of this toy model to that of a high‐performance computing application in order to make meaningful performance tests. We identify the minimal level of precision at which changes in model results are not significant compared with a maximal precision version of the model and find that this level is very similar for cases where the model is integrated for very short or long intervals. It is therefore a useful approach to investigate model errors due to rounding errors for very short simulations (e.g., 50 time steps) to obtain a range for the level of precision that can be used in expensive long‐term simulations. We also show that an approach to reduce precision with increasing forecast time, when model errors are already accumulated, is very promising. We show that a speed‐up of 1.9 times is possible in comparison to FPGA simulations in single precision if precision is reduced with no strong change in model error. The single‐precision FPGA setup shows a speed‐up of 2.8 times in comparison to our model implementation on two 6‐core CPUs for large model setups.

Key Points:
- Huge savings in computing cost via reduced numerical precision in earth‐system modeling
- Long‐ and short‐term simulations have a similar level of minimal numerical precision
- Numerical precision can be reduced with time in weather forecast simulations
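The paper's methodology (test precision on short runs to choose a level safe for long runs) can be mimicked in software by rounding away significand bits. The sketch below uses a one-scale Lorenz '95 model, forward Euler time stepping, and an arbitrary bit count as simplifying assumptions; the study itself used the two-scale model on FPGA hardware.

```python
import numpy as np

def reduce_precision(x, bits):
    """Round x to `bits` significand bits (software precision emulation)."""
    m, e = np.frexp(x)                      # x = m * 2**e with 0.5 <= |m| < 1
    m = np.round(m * 2.0**bits) / 2.0**bits
    return np.ldexp(m, e)

def l95_step(x, dt=0.005, F=8.0):
    """One forward-Euler step of the one-scale Lorenz '95 (Lorenz 96) model."""
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * d

x_full = 8.0 + 0.01 * np.sin(np.arange(40))  # perturbed initial state
x_low = x_full.copy()
for _ in range(50):                          # a short run, as the paper suggests
    x_full = l95_step(x_full)
    x_low = reduce_precision(l95_step(x_low), bits=12)

# RMSE between full- and reduced-precision runs measures the rounding-induced
# error after a short integration, used to pick a precision level for long runs.
rmse = np.sqrt(((x_full - x_low) ** 2).mean())
```

Sweeping `bits` downward and watching where `rmse` becomes comparable to the model's intrinsic error growth reproduces, in miniature, the paper's procedure for finding the minimal acceptable precision.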
Journal Article
Multi-year simulations at kilometre scale with the Integrated Forecasting System coupled to FESOM2.5 and NEMOv3.4
2025
We report on the first multi-year kilometre-scale global coupled simulations using ECMWF's Integrated Forecasting System (IFS) coupled to both the NEMO and FESOM ocean–sea ice models, as part of the H2020 Next Generation Earth Modelling Systems (nextGEMS) project. We focus mainly on an unprecedented IFS-FESOM coupled setup, with an atmospheric resolution of 4.4 km and a spatially varying ocean resolution that reaches locally below 5 km grid spacing. A shorter coupled IFS-FESOM simulation with an atmospheric resolution of 2.8 km has also been performed. A number of shortcomings in the original numerical weather prediction (NWP)-focused model configurations were identified and mitigated over several cycles collaboratively by the modelling centres, academia, and the wider nextGEMS community. The main improvements are (i) better conservation properties of the coupled model system in terms of water and energy budgets, which also benefit ECMWF's operational 9 km IFS-NEMO model; (ii) a realistic top-of-the-atmosphere (TOA) radiation balance throughout the year; (iii) improved intense precipitation characteristics; and (iv) eddy-resolving features in large parts of the mid- and high-latitude oceans (finer than 5 km grid spacing) to resolve mesoscale eddies and sea ice leads. New developments at ECMWF for a better representation of snow and land use, including a dedicated scheme for urban areas, were also tested on multi-year timescales. We provide first examples of significant advances in the realism and thus opportunities of these kilometre-scale simulations, such as a clear imprint of resolved Arctic sea ice leads on atmospheric temperature, impacts of kilometre-scale urban areas on the diurnal temperature cycle in cities, and better propagation and symmetry characteristics of the Madden–Julian Oscillation.
Journal Article