841 results for "pattern scaling"
Evaluating the accuracy of climate change pattern emulation for low warming targets
Global climate policy is increasingly debating the value of very low warming targets, yet not many experiments conducted with global climate models in their fully coupled versions are currently available to help inform studies of the corresponding impacts. This raises the question of whether a map of warming or precipitation change in a world 1.5 °C warmer than preindustrial can be emulated from existing simulations that reach higher warming targets, or whether entirely new simulations are required. Here we show that, also for this type of low warming in strong mitigation scenarios, climate change signals are quite linear as a function of global temperature. Therefore, emulation techniques amounting to linear rescaling on the basis of global temperature change ratios (like simple pattern scaling) provide a viable way forward. The errors introduced are small relative to the spread in the forced response to a given scenario that we can assess from a multi-model ensemble. They are also small relative to the noise introduced into the estimates of the forced response by internal variability within a single model, which we can assess from either control simulations or initial condition ensembles. Challenges arise when scaling inadvertently reduces the inter-model spread or suppresses the internal variability, both important sources of uncertainty for impact assessment, or when the scenarios have very different characteristics in the composition of the forcings. Taking advantage of an available suite of coupled model simulations under low-warming and intermediate scenarios, we evaluate the accuracy of these emulation techniques and show that the errors they introduce are unlikely to represent a substantial contribution to the total uncertainty.
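The pattern-scaling idea this abstract refers to, linear rescaling of a local change field by the ratio of global-mean warming levels, can be illustrated with a minimal sketch. The arrays, grid size and warming levels below are synthetic placeholders, not output from the study.

```python
# Minimal pattern-scaling sketch: a local change pattern from a high-warming
# simulation is rescaled to a lower global-warming target.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "model output": local temperature change (lat x lon) at the end of a
# high-warming scenario, plus the corresponding global-mean warming.
delta_t_high = rng.normal(loc=4.0, scale=1.0, size=(36, 72))  # deg C per grid cell
global_dt_high = 4.0                                          # deg C, global mean

# Normalized pattern: local change per degree of global-mean warming.
pattern = delta_t_high / global_dt_high

# Emulate the local field for a low-warming target (e.g. 1.5 deg C above
# pre-industrial) by rescaling the pattern.
global_dt_target = 1.5
delta_t_emulated = pattern * global_dt_target

print(delta_t_emulated.shape, round(delta_t_emulated.mean(), 2))
```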
Robustness of European climate projections from dynamical downscaling
How climate change will unfold in the years to come is a central topic in today’s environmental debate, in particular at the regional level. While projections using large ensembles of global climate models consistently indicate a future decrease in summer precipitation over southern Europe and an increase over northern Europe, individual models substantially modulate these distinct signals of change in precipitation. So far, model improvements and higher resolution from regional downscaling have not been seen as able to resolve these disagreements. In this paper we assess whether 2 decades of investments in large ensembles of downscaling experiments with regional climate model simulations for Europe have contributed to a more robust model assessment of the future climate at a range of geographical scales. We study climate change projections of European seasonal temperature and precipitation using an ensemble suite comprising all readily available pan-European regional model projections for the twenty-first century, representing increasing model resolution from ~50 to ~12 km grid distance, as well as lateral boundary and sea surface temperature conditions from a variety of global model simulations. Employing a simple scaling with global mean temperature change, we identify emerging robust signals of future seasonal temperature and precipitation changes, which are also found to resemble current observed trends where these are judged to be statistically significant.
Long-term probabilistic temperature projections for all locations
The climate change projections of the Intergovernmental Panel on Climate Change are based on scenarios for future emissions, but these are not statistically based and do not have a full probabilistic interpretation. Raftery et al. (Nat Clim Change 7:637–641, 2017) and Liu and Raftery (Commun Earth Environ 2:1–10, 2021) developed probabilistic forecasts for global average temperature change to 2100, but these do not give forecasts for specific parts of the globe. Here we develop a method for probabilistic long-term spatial forecasts of local average annual temperature change, combining the probabilistic global method with a pattern scaling approach. This yields a probability distribution for temperature in any year and any part of the globe in the future. Out-of-sample predictive validation experiments show the method to be well calibrated. Consistent with previous studies, we find that for long-term temperature changes, high latitudes warm more than low latitudes, continents more than oceans, and the Northern Hemisphere more than the Southern Hemisphere, except for the North Atlantic. There is a 5% chance that the temperature change for the Arctic would reach 16 °C. With probability 95%, the temperature of North Africa, West Asia and most of Europe will increase by at least 2 °C. We find that natural variability is a large part of the uncertainty in early years, but this declines so that by 2100 most of the overall uncertainty comes from model uncertainty and uncertainty about future emissions.
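The combination described here can be sketched by drawing samples from a probabilistic global-mean warming distribution and mapping each draw to a local change through a pattern-scaling coefficient. The lognormal distribution and the per-region coefficients below are invented for illustration and are not the fitted quantities from the paper.

```python
# Sketch: probabilistic local warming = probabilistic global warming x local
# scaling coefficient. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

n_samples = 10_000
# Hypothetical probabilistic global-mean warming for 2100 (deg C).
global_dt_samples = rng.lognormal(mean=np.log(3.0), sigma=0.25, size=n_samples)

# Hypothetical local scaling coefficients (deg C local per deg C global).
coeff = {"Arctic": 2.6, "North Africa": 1.3, "North Atlantic": 0.6}

for region, c in coeff.items():
    local = c * global_dt_samples
    lo, hi = np.percentile(local, [5, 95])
    print(f"{region}: 90% interval {lo:.1f}-{hi:.1f} deg C")
```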
A methodology for probabilistic predictions of regional climate change from perturbed physics ensembles
A methodology is described for probabilistic predictions of future climate. This is based on a set of ensemble simulations of equilibrium and time-dependent changes, carried out by perturbing poorly constrained parameters controlling key physical and biogeochemical processes in the HadCM3 coupled ocean-atmosphere global climate model. These (ongoing) experiments allow quantification of the effects of earth system modelling uncertainties and internal climate variability on feedbacks likely to exert a significant influence on twenty-first century climate at large regional scales. A further ensemble of regional climate simulations at 25 km resolution is being produced for Europe, allowing the specification of probabilistic predictions at spatial scales required for studies of climate impacts. The ensemble simulations are processed using a set of statistical procedures, the centrepiece of which is a Bayesian statistical framework designed for use with complex but imperfect models. This supports the generation of probabilities constrained by a wide range of observational metrics, and also by expert-specified prior distributions defining the model parameter space. The Bayesian framework also accounts for additional uncertainty introduced by structural modelling errors, which are estimated using our ensembles to predict the results of alternative climate models containing different structural assumptions. This facilitates the generation of probabilistic predictions combining information from perturbed physics and multi-model ensemble simulations. The methodology makes extensive use of emulation and scaling techniques trained on climate model results. These are used to sample the equilibrium response to doubled carbon dioxide at any required point in the parameter space of surface and atmospheric processes, to sample time-dependent changes by combining this information with ensembles sampling uncertainties in the transient response of a wider set of earth system processes, and to sample changes at local scales. The methodology is necessarily dependent on a number of expert choices, which are highlighted throughout the paper.
Analysis of the regional pattern of sea level change due to ocean dynamics and density change for 1993–2099 in observations and CMIP5 AOGCMs
Predictions of twenty-first century sea level change show strong regional variation. Regional sea level change observed by satellite altimetry since 1993 is also not spatially homogeneous. By comparison with historical and pre-industrial control simulations using the atmosphere–ocean general circulation models (AOGCMs) of the CMIP5 project, we conclude that the observed pattern is generally dominated by unforced (internally generated) variability, although some regions, especially in the Southern Ocean, may already show an externally forced response. Simulated unforced variability cannot explain the observed trends in the tropical Pacific, but we suggest that this is due to inadequate simulation of variability by CMIP5 AOGCMs, rather than evidence of anthropogenic change. We apply the method of pattern scaling to projections of sea level change and show that it gives accurate estimates of future local sea level change in response to anthropogenic forcing as simulated by the AOGCMs under RCP scenarios, implying that the pattern will remain stable in future decades. We note, however, that use of a single integration to evaluate the performance of the pattern-scaling method tends to exaggerate its accuracy. We find that ocean volume mean temperature is generally a better predictor than global mean surface temperature of the magnitude of sea level change, and that the pattern is very similar under the different RCPs for a given model. We determine that the forced signal will be detectable above the noise of unforced internal variability within the next decade globally and may already be detectable in the tropical Atlantic.
Emulating climate extreme indices
We use simple pattern scaling and time-shift to emulate changes in a set of climate extreme indices under future scenarios, and we evaluate the emulators' accuracy. We propose an error metric that separates systematic emulation errors from discrepancies between emulated and target values due to internal variability, taking advantage of the availability of climate model simulations in the form of initial condition ensembles. We compute the error metric at grid-point scale, and we show geographically resolved results, or aggregate them as global averages. We use a range of scenarios spanning global temperature increases by the end of the century of 1.5 °C and 2.0 °C compared to a pre-industrial baseline, and two higher trajectories, RCP4.5 and RCP8.5. With this suite of scenarios we can test the effects on the error of the size of the temperature gap between emulation origin and target scenarios. We find that in the emulation of most indices the dominant source of discrepancy is internal variability. For at least one index, however, counting exceedances of a high temperature threshold, significant portions of the globally aggregated discrepancy and its regional pattern originate from the systematic emulation error. The metric also highlights a fundamental difference in the two methods related to the simulation of internal variability, which is significantly resized by simple pattern scaling. This aspect needs to be considered when using these methods in applications where preserving variability for uncertainty quantification is important. We propose our metric as a diagnostic tool, facilitating the formulation of scientific hypotheses on the reasons for the error. In the meantime, we show that for many impact relevant indices these two well established emulation techniques perform accurately when measured against internal variability, establishing the fundamental condition for using them to represent climate drivers in impact modeling.
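A minimal sketch of the kind of decomposition this abstract describes follows: with an initial-condition ensemble for the target scenario, the emulated field is compared with the target ensemble mean (systematic error) and judged against the ensemble spread (internal variability). The exact metric in the paper may be defined differently; all data below are synthetic.

```python
# Split emulation error into a systematic part and an internal-variability scale
# using an initial-condition ensemble of the target scenario.
import numpy as np

rng = np.random.default_rng(2)
n_members, nlat, nlon = 10, 36, 72

forced = rng.normal(1.5, 0.5, size=(nlat, nlon))                  # "true" forced response
target_ens = forced + rng.normal(0.0, 0.3, size=(n_members, nlat, nlon))
emulated = forced + rng.normal(0.1, 0.05, size=(nlat, nlon))      # emulator with a small bias

target_mean = target_ens.mean(axis=0)
systematic_error = emulated - target_mean                         # emulator bias per grid point
internal_sd = target_ens.std(axis=0, ddof=1)                      # internal-variability scale

# Globally aggregated comparison: is the systematic error small relative to noise?
print("RMS systematic error:", round(float(np.sqrt((systematic_error**2).mean())), 3))
print("Mean internal-variability SD:", round(float(internal_sd.mean()), 3))
```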
Emulating mean patterns and variability of temperature across and within scenarios in anthropogenic climate change experiments
There are many climate change scenarios that are of interest to explore by climate models, but computational power limits the total number of model runs. Pattern scaling is a useful approach to approximate mean changes in climate model projections, and we extend this methodology to build a climate model emulator that also accounts for variability of temperature projections at the seasonal scale. Using 30 runs from the NCAR/DOE CESM1 large initial condition ensemble for RCP8.5 from 2006 to 2080, we fit a pattern scaling model to grid-specific seasonal average temperature change. We then use this fitted model to emulate seasonal average temperature change for the RCP4.5 scenario based on its global average temperature trend. By using a linear mixed-effects model and carefully resampling the residuals from the RCP8.5 model, we emulate the variability of RCP4.5 and allow the variability to depend on global average temperature. Specifically, we emulate both the internal variability affecting the long-term trends across initial condition ensemble members, and the variability superimposed on the long-term trend within individual ensemble members. The 15 initial condition ensemble members available for RCP4.5 from the same climate model are then used to validate the emulator. We view this approach as a step forward in providing relevant climate information for avoided impacts studies, and more broadly for impact models, since we allow both forced changes and internal variability to play a role in determining future impact risks.
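The two-step logic of this abstract can be sketched as fitting a per-grid-cell linear response to global-mean temperature on one scenario's ensemble, then emulating another scenario from its global-mean trend plus resampled residuals. The sketch below uses plain least squares and a simple bootstrap in place of the paper's linear mixed-effects model, and all data are synthetic.

```python
# Step 1: fit grid-point temperature change against global-mean temperature
# across a (synthetic) "RCP8.5-like" initial-condition ensemble.
# Step 2: emulate a lower scenario from its global-mean trend + resampled residuals.
import numpy as np

rng = np.random.default_rng(3)
n_members, n_years, n_grid = 30, 75, 500

gmt_85 = np.linspace(0.8, 4.5, n_years) + rng.normal(0, 0.1, (n_members, n_years))
slopes_true = rng.uniform(0.5, 2.0, n_grid)
local_85 = gmt_85[..., None] * slopes_true + rng.normal(0, 0.3, (n_members, n_years, n_grid))

x = gmt_85.reshape(-1)                       # pooled global-mean temperatures
y = local_85.reshape(-1, n_grid)             # pooled local responses
x_c = x - x.mean()
slope_hat = (x_c[:, None] * (y - y.mean(axis=0))).sum(axis=0) / (x_c**2).sum()
intercept_hat = y.mean(axis=0) - slope_hat * x.mean()
residuals = y - (intercept_hat + x[:, None] * slope_hat)

# Emulate a lower scenario: rescale by its global-mean trend, then add
# bootstrap-resampled residuals to reintroduce variability.
gmt_45 = np.linspace(0.8, 2.4, n_years)
idx = rng.integers(0, residuals.shape[0], size=n_years)
local_45_emulated = intercept_hat + gmt_45[:, None] * slope_hat + residuals[idx]
print(local_45_emulated.shape)
```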
Measurement invariance explains the universal law of generalization for psychological perception
The universal law of generalization describes how animals discriminate between alternative sensory stimuli. On an appropriate perceptual scale, the probability that an organism perceives two stimuli as similar typically declines exponentially with the difference on the perceptual scale. Exceptions often follow a Gaussian probability pattern rather than an exponential pattern. Previous explanations have been based on underlying theoretical frameworks such as information theory, Kolmogorov complexity, or empirical multidimensional scaling. This article shows that the few inevitable invariances that must apply to any reasonable perceptual scale provide a sufficient explanation for the universal exponential law of generalization. In particular, reasonable measurement scales of perception must be invariant to shift by a constant value, which by itself leads to the exponential form. Similarly, reasonable measurement scales of perception must be invariant to multiplication, or stretch, by a constant value, which leads to the conservation of the slope of discrimination with perceptual difference. In some cases, an additional assumption about exchangeability or rotation of underlying perceptual dimensions leads to a Gaussian pattern of discrimination, which can be understood as a special case of the more general exponential form. The three measurement invariances of shift, stretch, and rotation provide a sufficient explanation for the universally observed patterns of perceptual generalization. All of the additional assumptions and language associated with information, complexity, and empirical scaling are superfluous with regard to the broad patterns of perception.
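The step from shift invariance to the exponential form asserted here can be made explicit in one line of algebra. The notation below, similarity $S$ as a function of perceptual difference $d$, is introduced for illustration and is not taken from the article.

```latex
% Shift invariance of the similarity function over perceptual difference d:
\[ S(d + c) = g(c)\,S(d) \quad \text{for all } d, c \ge 0 . \]
% Setting d = 0 gives g(c) = S(c)/S(0), hence
\[ S(d + c)\,S(0) = S(d)\,S(c), \]
% a Cauchy-type functional equation whose measurable solutions are exponential:
\[ S(d) = S(0)\,e^{-\lambda d} . \]
```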
The Impact of Internal Variability on Benchmarking Deep Learning Climate Emulators
Full-complexity Earth system models (ESMs) are computationally very expensive, limiting their use in exploring the climate outcomes of multiple emission pathways. More efficient emulators that approximate ESMs can directly map emissions onto climate outcomes, and benchmarks are being used to evaluate their accuracy on standardized tasks and data sets. We investigate a popular benchmark in data-driven climate emulation, ClimateBench, on which deep learning-based emulators are currently achieving the best performance. We compare these deep learning emulators with a linear regression-based emulator, akin to pattern scaling, and show that it outperforms the incumbent 100M-parameter deep learning foundation model, ClimaX, on 3 out of 4 regionally resolved climate variables, notably surface temperature and precipitation. While emulating surface temperature is expected to be predominantly linear, this result is surprising for emulating precipitation. Precipitation is a much more noisy variable, and we show that deep learning emulators can overfit to internal variability noise at low frequencies, degrading their performance in comparison to a linear emulator. We address the issue of overfitting by increasing the number of climate simulations per emission pathway (from 3 to 50) and updating the benchmark targets with the respective ensemble averages from the MPI-ESM1.2-LR model. Using the new targets, we show that linear pattern scaling continues to be more accurate on temperature, but can be outperformed by a deep learning-based technique for emulating precipitation. We publish our code and data at https://github.com/blutjens/climate-emulator.
Plain Language Summary: Running a state-of-the-art climate model for a century-long future projection can take multiple weeks on the world's largest supercomputers. Emulators are approximations of climate models that quickly compute climate forecasts when running the full climate model is computationally too expensive. Our work examines how different emulation techniques can be compared with each other. We find that a simple linear regression-based emulator can forecast local temperatures and rainfall more accurately than a complex machine learning-based emulator on a commonly used benchmark data set. It is surprising that linear regression is better for local rainfall, which is expected to be more accurately emulated by nonlinear techniques. We identify that noise from natural variations in climate, called internal variability, is one reason for the comparatively good performance of linear regression on local rainfall. This implies that addressing internal variability is necessary for assessing the performance of climate emulators. Thus, we assemble a benchmark data set with reduced internal variability and, using it, show that a deep learning-based emulator can be more accurate for emulating local rainfall, while linear regression continues to be more accurate for temperature.
Key Points: Linear regression outperforms deep learning for emulating 3 out of 4 spatial atmospheric variables in the ClimateBench benchmark. Deep learning emulators can overfit unpredictable (multi-)decadal fluctuations when trained on only a few ensemble realizations. We recommend evaluating climate emulation techniques on large ensembles, such as the Em-MPI data subset with means over 50 realizations.
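A linear, pattern-scaling-like emulator of the kind compared against deep learning here can be sketched as one least-squares regression per grid cell from a few global forcing summaries to the local field. The predictors and data below are placeholders and are not the actual ClimateBench inputs.

```python
# Per-grid-cell linear emulator: regress local annual fields on global forcing
# summaries and predict the map for new forcing values. All data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_years, nlat, nlon = 86, 96, 144
n_grid = nlat * nlon

# Hypothetical global predictors per year plus an intercept column.
X = np.column_stack([
    np.linspace(0.0, 2.5e3, n_years),   # cumulative CO2 (GtCO2), made up
    np.linspace(1.0, 0.3, n_years),     # aerosol emissions index, made up
    np.ones(n_years),                   # intercept
])
# Synthetic targets: local annual-mean anomalies (years x grid cells).
Y = (X @ rng.normal(size=(3, n_grid))) * 1e-3 + rng.normal(0, 0.2, (n_years, n_grid))

# One set of regression coefficients per grid cell, solved in a single call.
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Emulate the map for a new year given its global predictors.
x_new = np.array([3.0e3, 0.25, 1.0])
emulated_map = (x_new @ coeffs).reshape(nlat, nlon)
print(emulated_map.shape)
```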
Scalability of regional climate change in Europe for high-end scenarios
With the help of a simulation using the global circulation model (GCM) EC-Earth, downscaled over Europe with the regional model DMI-HIRHAM5 at a 25 km grid point distance, we investigated regional climate change corresponding to 6°C of global warming, to examine whether regional climate change generally scales with global temperature even for very high levels of global warming. Through a complementary analysis of CMIP5 GCM results, we estimated the time at which this temperature may be reached; this warming could be reached in the first half of the 22nd century provided that future emissions are close to the RCP8.5 emission scenario. We investigated the extent to which pattern scaling holds, i.e. the approximation that the amplitude of any climate change will be approximately proportional to the amount of global warming. We address this question through a comparison of climate change results from downscaling simulations over the same integration domain, but for different driving and regional models and scenarios, mostly from the EU ENSEMBLES project. For almost all quantities investigated, pattern scaling seemed to apply to the 6° simulation. This indicates that the single 6° simulation in question is not an outlier with respect to these quantities, and that conclusions based on this simulation would probably correspond to conclusions drawn from ensemble simulations of such a scenario. In the case of very extreme precipitation, the changes in the 6° simulation are larger than would be expected from a linear behaviour. Conversely, the fact that the many model results follow a linear relationship for a large number of variables and areas confirms that the pattern scaling approximation is sound for the fields investigated, with the identified possible exceptions of high extremes of e.g. daily precipitation and maximum temperature.
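The linearity check this abstract describes can be sketched by fitting a proportional relationship between regional change and global-mean warming across lower-warming simulations and comparing the 6 °C simulation against the extrapolation. All numbers below are illustrative, not results from the study.

```python
# Test the pattern-scaling assumption: is the high-warming result consistent
# with a straight-line extrapolation from lower-warming simulations?
import numpy as np

# Hypothetical (global warming, regional summer precipitation change %) pairs
# from an ensemble of lower-warming simulations.
global_dt = np.array([1.2, 1.8, 2.3, 2.9, 3.4, 4.1])
regional_dp = np.array([-4.0, -6.5, -8.1, -10.2, -12.5, -14.8])

# Proportional (through-the-origin) fit, i.e. the pattern-scaling assumption.
slope = (global_dt * regional_dp).sum() / (global_dt**2).sum()

# Compare a 6-degree simulation against the linear expectation.
dt6, dp6 = 6.0, -23.5                        # hypothetical 6 deg C result
print(f"expected from scaling: {slope * dt6:.1f}%, simulated: {dp6:.1f}%")
```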