1,906 results for "downscaling"
Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user
Precipitation downscaling improves the coarse resolution and poor representation of precipitation in global climate models and helps end users to assess the likely hydrological impacts of climate change. This paper integrates perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify generic end user (in particular, impact modeler) needs and to discuss downscaling capabilities and gaps. End users need a reliable representation of precipitation intensities and temporal and spatial variability, as well as physical consistency, independent of region and season. In addition to presenting dynamical downscaling, we review perfect prognosis statistical downscaling, model output statistics, and weather generators, focusing on recent developments to improve the representation of space‐time variability. Furthermore, evaluation techniques to assess downscaling skill are presented. Downscaling adds considerable value to projections from global climate models. Remaining gaps are uncertainties arising from sparse data; representation of extreme summer precipitation, subdaily precipitation, and full precipitation fields on fine scales; capturing changes in small‐scale processes and their feedback on large scales; and errors inherited from the driving global climate model.
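For a concrete sense of one of the statistical families reviewed above, here is a minimal sketch of empirical quantile mapping, a common correction in the model-output-statistics style. The function name and the 99-quantile grid are illustrative choices, not anything specified in the paper.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: adjust model values so that their
    historical distribution matches the observed one (a simple form
    of model output statistics)."""
    quantiles = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_hist, quantiles)   # model quantiles
    oq = np.quantile(obs_hist, quantiles)     # observed quantiles
    # Map each future value through the model CDF, then through the
    # observed inverse CDF; np.interp clamps beyond the end points.
    return np.interp(model_fut, mq, oq)
```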
Dynamical Downscaling of Climate Simulations in the Tropics
The long‐existing double‐Intertropical Convergence Zone (ITCZ) problem in global climate models (GCMs) hampers accurate climate simulations in the tropics. Using a regional climate model (RCM) over the tropical and sub‐tropical Atlantic with a horizontal resolution of 12 km and explicit convection, we develop a bias‐corrected downscaling methodology to produce limited‐area simulations with a realistic ITCZ, despite the double ITCZ in the driving GCM. The methodology removes GCM biases from the RCM boundary conditions, producing more realistic large‐scale driving conditions. We show that the double‐ITCZ problem persists with conventional dynamical downscaling, but with bias‐corrected downscaling the RCM simulations yield a credible ITCZ with a realistic seasonal cycle. Detailed analysis attributes the main cause of the double‐ITCZ problem in the selected GCM to its sea surface temperature bias. Compared to the GCM's AMIP simulations, the higher-resolution RCM allows explicit deep convection and enables a better simulation of tropical convection and clouds.

Plain Language Summary: Global climate models (GCMs) have a problem in simulating the Intertropical Convergence Zone (ITCZ), which makes it hard to simulate the tropical climate accurately. We show that these ITCZ biases persist in high‐resolution regional climate model (RCM) simulations when the inaccurate GCM fields are used as boundary conditions. However, after removing these large‐scale biases from the boundary conditions, the RCM simulations show a more accurate representation of the ITCZ. We found that the main cause of the difficulties in simulating the ITCZ is a bias in sea surface temperatures. By using the RCM at higher resolution, we obtained better simulations of tropical convection and clouds than the GCM.

Key Points:
- Downscaling of global climate model (GCM) results with RCMs in the tropics is problematic, as conventional downscaling replicates the driving model's Intertropical Convergence Zone (ITCZ) bias.
- The bias‐corrected downscaling approach enables a credible simulation of the ITCZ in the limited‐area downscaling domain.
- For the tested GCM, the double‐ITCZ bias is mainly attributed to the SST bias.
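The abstract describes removing GCM biases from the RCM boundary conditions. A minimal sketch of one common way to do this, subtracting the GCM's monthly-mean bias relative to a reanalysis climatology, follows; the array layout and function name are assumptions for illustration, and the paper's exact correction procedure may differ.

```python
import numpy as np

def bias_correct_boundaries(gcm_field, months, gcm_clim, ref_clim):
    """Subtract the GCM's monthly-mean bias (GCM climatology minus
    reference climatology) from each daily field, so the corrected
    boundary conditions keep the GCM's variability but inherit the
    reference's mean state.

    gcm_field : (time, lat, lon) daily GCM fields on the RCM boundary
    months    : (time,) month index 0..11 for each time step
    gcm_clim  : (12, lat, lon) monthly climatology of the GCM
    ref_clim  : (12, lat, lon) monthly climatology of a reanalysis
    """
    bias = gcm_clim - ref_clim          # (12, lat, lon)
    return gcm_field - bias[months]     # broadcast bias per time step
```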
Regional climate model emulator based on deep learning: concept and first evaluation of a novel hybrid downscaling approach
Providing reliable information on climate change at the local scale remains a challenge of first importance for impact studies and policymakers. Here, we propose a novel hybrid downscaling method combining the strengths of both empirical statistical downscaling methods and Regional Climate Models (RCMs). In the longer term, the final aim of this tool is to enlarge high-resolution RCM simulation ensembles at low cost, to better explore the various sources of projection uncertainty at the local scale. Using a neural network, we build a statistical RCM-emulator by estimating the downscaling function included in the RCM. This framework allows us to learn the relationship between large-scale predictors and a local surface variable of interest over the RCM domain in present and future climates. The RCM-emulator developed in this study is trained to produce daily maps of near-surface temperature at the RCM resolution (12 km). The emulator demonstrates an excellent ability to reproduce the complex spatial structure and daily variability simulated by the RCM, in particular how the RCM refines the low-resolution climate patterns. Training in future climate appears to be a key feature of our emulator. Moreover, there is a substantial computational benefit to running the emulator rather than the RCM: training the emulator takes about 2 h on a GPU, and prediction takes less than a minute. However, further work is needed to improve the reproduction of some temperature extremes and of the climate change intensity, and to extend the proposed methodology to different regions, GCMs, RCMs, and variables of interest.
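As a rough illustration of the emulator concept, here is a toy convolutional network in PyTorch that maps coarse predictor fields to one higher-resolution temperature map. The layer sizes, the 8x upsampling factor, and the MSE objective are placeholder assumptions, not the architecture or loss of the published emulator.

```python
import torch
import torch.nn as nn

class RCMEmulator(nn.Module):
    """Toy convolutional emulator: maps a stack of coarse predictor
    fields to one high-resolution temperature map. The 8x upsampling
    stands in for the GCM-to-12 km resolution ratio (an assumption)."""
    def __init__(self, n_predictors=5, upscale=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_predictors, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear",
                        align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),   # near-surface temperature
        )

    def forward(self, x):                     # x: (batch, n_pred, H, W)
        return self.net(x)                    # (batch, 1, 8H, 8W)

# One training step against RCM "truth" (MSE is a common choice for
# temperature, though not necessarily the paper's loss):
model = RCMEmulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4, 5, 16, 16)                 # coarse predictors
y = torch.randn(4, 1, 128, 128)               # RCM temperature target
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```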
VALUE: A framework to validate downscaling approaches for climate change studies
VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user‐focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: What is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community‐open downscaling intercomparison study, but is also intended to provide general guidance for other validation studies.

Key Points:
- VALUE has developed a framework to validate and compare downscaling methods.
- The experiments comprise different observed and pseudo‐reality reference data.
- The framework is the basis for a comprehensive downscaling comparison study.
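To make the idea of validation indices concrete, the sketch below computes a few simple marginal indices for daily precipitation of the kind a validation tree might select. The index names and the 1 mm wet-day threshold are illustrative assumptions, not VALUE's official index set.

```python
import numpy as np

def validation_indices(obs, sim, wet_threshold=1.0):
    """A few marginal validation indices for daily precipitation
    (illustrative choices): mean bias, wet-day frequency bias, and
    bias of the 98th percentile of wet-day amounts."""
    wet_o, wet_s = obs >= wet_threshold, sim >= wet_threshold
    return {
        "mean_bias":         sim.mean() - obs.mean(),
        "wet_day_freq_bias": wet_s.mean() - wet_o.mean(),
        "p98_wet_bias":      (np.quantile(sim[wet_s], 0.98)
                              - np.quantile(obs[wet_o], 0.98)),
    }
```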
Intercomparison of statistical and dynamical downscaling models under the EURO- and MED-CORDEX initiative framework: present climate evaluations
Given the coarse spatial resolution of General Circulation Models, finer-scale projections of variables affected by local-scale processes, such as precipitation, are often needed to drive impact models, for example in hydrology or ecology. This need for high-resolution data leads to the application of projection techniques called downscaling. Downscaling can be performed according to two approaches: dynamical and statistical models. The latter approach comprises several conceptually different statistical families. While several studies have compared existing downscaling models, none of them included all of these families and approaches in a manner that considers all models equally. To this end, the present study conducts an intercomparison exercise under the EURO- and MED-CORDEX initiative hindcast framework. Six Statistical Downscaling Models (SDMs) and five Regional Climate Models (RCMs) are compared in terms of precipitation outputs. The downscaled simulations are driven by the ERA-Interim reanalysis over the 1989–2008 period on a common area at 0.44° resolution. The 11 models are evaluated according to four aspects of precipitation: occurrence, intensity, and spatial and temporal properties. For each aspect, one or several indicators are computed to discriminate between the models. The results indicate that marginal properties of rain occurrence and intensity are better modelled by stochastic and resampling-based SDMs, while spatial and temporal variability are better modelled by RCMs and resampling-based SDMs. These general conclusions must be considered with caution because they rely on the chosen indicators and could change when considering other specific criteria. Each indicator suits a specific purpose, so the model evaluation results depend on the end user's point of view and on how the model outputs are intended to be used. Nevertheless, building on previous intercomparison exercises, this study provides a consistent intercomparison framework, including both SDMs and RCMs, which is designed to be flexible: other models and indicators can easily be added. More generally, this framework provides a tool to select the downscaling model to use according to the statistical properties of the local-scale climate data needed to properly drive specific impact models.
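As an illustration of how such indicators can discriminate between models, the sketch below computes one occurrence indicator and one temporal indicator for a daily precipitation series; the specific choices are assumptions, not the study's exact indicator set.

```python
import numpy as np

def discriminating_indicators(pr, wet=1.0):
    """Two of the four evaluated aspects, illustrated with one
    indicator each: occurrence (wet-to-wet transition probability)
    and temporal structure (lag-1 autocorrelation).
    pr is a (time,) daily precipitation series."""
    w = pr >= wet
    p_ww = (w[1:] & w[:-1]).sum() / max(w[:-1].sum(), 1)  # P(wet|wet)
    ac1 = np.corrcoef(pr[1:], pr[:-1])[0, 1]              # lag-1 autocorr
    return {"p_wet_given_wet": p_ww, "lag1_autocorr": ac1}
```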
On the Extrapolation of Generative Adversarial Networks for Downscaling Precipitation Extremes in Warmer Climates
While deep‐learning downscaling algorithms can generate fine‐scale climate projections cost‐effectively, it is unclear how effectively they extrapolate to unobserved climates. We assess the extrapolation capabilities of a deterministic Convolutional Neural Network baseline and a Generative Adversarial Network (GAN) built with this baseline, trained to predict daily precipitation simulated by a Regional Climate Model (RCM) over New Zealand. Both approaches emulate future changes in annual mean precipitation well when trained on historical data, though training on a future climate improves performance. For extreme precipitation (99.5th percentile), RCM simulations predict a robust end‐of‐century increase with future warming (∼5.8%/°C on average from five simulations). When trained on a future climate, GANs capture 97% of the warming‐driven increase in extreme precipitation, compared to 65% for a deterministic baseline. Even GANs trained historically capture 77% of this increase. Overall, GANs offer better generalization for downscaling extremes, which is important in applications relying on historical data.

Plain Language Summary: The resolution of climate models (∼150 km) is too coarse for studying the effects of climate change at regional scales. The resolution can be enhanced or "downscaled" by a physics‐based method known as dynamical downscaling, but it is costly and limits the number of climate models that can be downscaled. Deep learning approaches offer a promising and computationally efficient alternative to dynamical downscaling, but it is unclear whether their downscaling of climate models produces plausible and reliable climate projections. We show that one commonly used deep‐learning algorithm underestimates future projections of extreme rainfall. However, we show that another algorithm, known as a Generative Adversarial Network, is better suited for predicting future changes in extreme rainfall and could be useful in similar applications.

Key Points:
- A deterministic (regression) downscaling method underestimates future increases in extreme precipitation, even when trained on future climates.
- Generative Adversarial Networks (GANs) tested here better capture these future increases than deterministic methods, even when trained historically.
- GANs trained on future climates have better historical and future extrapolation skill versus historical training.
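The percentages quoted above can be read as the fraction of the RCM's warming-driven change in the 99.5th percentile that the emulator reproduces. A sketch of that metric, under the assumption that it is computed from pooled daily values, is:

```python
import numpy as np

def fraction_of_change_captured(rcm_hist, rcm_fut, emu_hist, emu_fut,
                                q=0.995):
    """Share of the RCM's warming-driven change in extreme (here
    99.5th-percentile) precipitation that an emulator reproduces;
    1.0 means the emulated change matches the RCM change exactly."""
    rcm_change = np.quantile(rcm_fut, q) - np.quantile(rcm_hist, q)
    emu_change = np.quantile(emu_fut, q) - np.quantile(emu_hist, q)
    return emu_change / rcm_change
```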
On the suitability of a convolutional neural network based RCM-emulator for fine spatio-temporal precipitation
High-resolution regional climate models (RCMs) are necessary to capture local precipitation but are too expensive to fully explore the uncertainties associated with future projections. To address the large cost of RCMs, Doury et al. (2023) proposed a neural-network-based RCM-emulator for near-surface temperature at a daily, 12 km resolution. It uses existing RCM simulations to learn the relationship between low-resolution predictors and high-resolution surface variables. Once trained, the emulator can be applied to any low-resolution simulation to produce ensembles of high-resolution emulated simulations. This study assesses the suitability of applying the RCM-emulator to precipitation, thanks to a novel asymmetric loss function designed to reproduce the entire precipitation distribution over any grid point. Under a perfect-conditions framework, the resulting emulator shows a striking ability to reproduce the original RCM series with excellent spatio-temporal correlation. In particular, very good behaviour is obtained for the two tails of the distribution, measured by the number of dry days and the 99th quantile. Moreover, it creates consistent precipitation objects even if the highest-frequency details are missed. The emulator's quality holds for all simulations of the same RCM, with any driving GCM, ensuring transferability of the tool to GCMs never downscaled by the RCM. A first showcase of downscaling GCM simulations shows that the RCM-emulator brings significant added value with respect to the GCM, as it produces the correct high-resolution spatial structure and heavy-precipitation intensity. Nevertheless, further work is needed to establish a relevant evaluation framework for GCM applications.
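The abstract does not give the form of the asymmetric loss, but a minimal sketch of the general idea, weighting underestimates of precipitation more heavily than overestimates so the emulator fits the wet tail, could look like this in PyTorch (the quadratic form and the weight of 3 are illustrative assumptions):

```python
import torch

def asymmetric_mse(pred, target, under_weight=3.0):
    """One possible asymmetric squared loss: underestimates
    (pred < target) are weighted more heavily than overestimates,
    pushing the emulator toward the heavy-precipitation tail. The
    weighting scheme is illustrative; the abstract does not specify
    the exact form used by the emulator."""
    err = pred - target
    w = torch.where(err < 0,
                    torch.full_like(err, under_weight),
                    torch.ones_like(err))
    return (w * err ** 2).mean()
```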
Probabilistic estimates of future changes in California temperature and precipitation using statistical and dynamical downscaling
Sixteen global general circulation models were used to develop probabilistic projections of temperature (T) and precipitation (P) changes over California by the 2060s. The global models were downscaled with two statistical techniques and three nested dynamical regional climate models, although not all global models were downscaled with all techniques. Both monthly and daily timescale changes in T and P are addressed, the latter being important for a range of applications in energy use, water management, and agriculture. The T changes tend to agree more across downscaling techniques than the P changes. Year-to-year natural internal climate variability is roughly of similar magnitude to the projected T changes. In the monthly average, July temperatures shift enough that the hottest July found in any simulation over the historical period becomes a modestly cool July in the future period. Januarys as cold as any found in the historical period are still found in the 2060s, but the median and maximum monthly average temperatures increase notably. Annual and seasonal P changes are small compared to interannual or intermodel variability. However, the annual change is composed of seasonally varying changes that are themselves much larger but tend to cancel in the annual mean. Winters show modestly wetter conditions in the north of the state, while spring and autumn show less precipitation. The dynamical downscaling techniques project increasing precipitation in the southeastern part of the state, which is influenced by the North American monsoon, a feature not captured by the statistical downscaling.
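A minimal sketch of how such probabilistic estimates can be summarised from a multi-model ensemble, using hypothetical per-model warming values rather than the study's data:

```python
import numpy as np

# Hypothetical per-model July warming (deg C) by the 2060s, one value
# per downscaled GCM/technique combination (not the study's numbers):
delta_t = np.array([2.1, 2.8, 3.4, 1.9, 2.6, 3.0, 2.2, 2.7])

# A simple probabilistic summary: treat the ensemble spread as the
# projection distribution and report a central estimate and a range.
p10, p50, p90 = np.percentile(delta_t, [10, 50, 90])
print(f"July warming: {p50:.1f} degC (10-90%: {p10:.1f}-{p90:.1f})")
```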
Performance Evaluation of Downscaling Sentinel-2 Imagery for Land Use and Land Cover Classification by Spectral-Spatial Features
Land Use and Land Cover (LULC) classification is vital for environmental and ecological applications. Sentinel-2 is a new-generation land-monitoring satellite with novel spectral capabilities, wide coverage, and fine spatial and temporal resolutions. The effects of different spatial-resolution unification schemes and methods on LULC classification have scarcely been investigated for Sentinel-2. This paper bridges this gap by comparing upscaling and downscaling, as well as different downscaling algorithms, from the point of view of LULC classification accuracy. The studied downscaling algorithms include nearest neighbor resampling and five popular pansharpening methods, namely Gram-Schmidt (GS), nearest neighbor diffusion (NNDiffusion), the PANSHARP algorithm proposed by Y. Zhang, wavelet transformation fusion (WTF) and high-pass filter fusion (HPF). Two spatial features, textural metrics derived from the Grey-Level Co-occurrence Matrix (GLCM) and extended attribute profiles (EAPs), are investigated to make up for the shortcomings of pixel-based spectral classification. Random forest (RF) is adopted as the classifier. The experiment was conducted in the Xitiaoxi watershed, China. The results demonstrate that downscaling clearly outperforms upscaling in terms of classification accuracy. For downscaling, image sharpening has no obvious advantage over spatial interpolation. Different image-sharpening algorithms have distinct effects. Two multiresolution analysis (MRA)-based methods, i.e., WTF and HPF, achieve the best performance. GS achieves accuracy similar to NNDiffusion and PANSHARP. Compared to image sharpening, the introduction of spatial features, both GLCM and EAPs, can greatly improve the classification accuracy for Sentinel-2 imagery. Their effects on overall accuracy are similar but differ significantly for specific classes. In general, using the spectral bands downscaled by nearest neighbor interpolation can meet the requirements of regional LULC applications, and the GLCM and EAPs spatial features can be used to obtain more precise classification maps.
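For readers unfamiliar with the GLCM features used here, the following sketch derives two common texture metrics with scikit-image and notes where a random forest would consume them; the quantisation to 32 grey levels and the chosen metrics are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, levels=32):
    """GLCM texture metrics (contrast, homogeneity) for one image
    patch, quantised to `levels` grey levels; such features are
    appended to the spectral bands as spatial features."""
    q = (patch / patch.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity")]

# With X as rows of [spectral bands..., GLCM features...] and y as
# LULC labels, classification would then be, e.g.:
# rf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```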
Machine learning for downscaling: the use of parallel multiple populations in genetic programming
In the traditional GP algorithm, models are evolved in a single deme (an environment in which a population of models is evolved), which tends to produce sub-optimal models with poor generalisation skills due to a lack of model diversity. As a solution to this issue, this study investigated the potential of evolving models in parallel multiple demes with different genetic attributes (parallel heterogeneous environments), and then further evolving some of the fittest models selected from each deme in another deme called the master deme, in relation to downscaling large-scale climate data to daily minimum temperature (Tmin) and daily maximum temperature (Tmax). It was found that, independent of the climate regime (i.e., warm or cold) and the geographic location of the observation station, a fraction of the fittest models (e.g., 25%) obtained from the last generation of each deme is sufficient to formulate a diverse initial population for the master deme. Also, independent of the climate regime and the geographic location of the observation station, both daily Tmin and Tmax downscaling models developed with the parallel multi-population genetic programming (PMPGP) algorithm showed better generalisation skills than models developed with the traditional single-deme GP, even when the amount of redundant information in the predictor data was high. The models developed for daily Tmin and Tmax with the PMPGP algorithm also simulated fewer unphysically large outliers than models developed with the GP algorithm.
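A highly simplified sketch of the multi-deme scheme follows: several demes with different genetic attributes (here, only the mutation strength differs) are evolved independently, the fittest 25% of each seeds a master deme, and the master deme is evolved further. Real PMPGP evolves GP trees; plain real-valued vectors stand in for them here, and all names and parameters are illustrative.

```python
import random

def evolve(pop, fitness, n_gen=30, mut=0.3):
    """Generic (mu+lambda)-style evolution of real-valued 'models'
    (a stand-in for the tree-based GP the paper actually uses);
    fitness is minimised."""
    for _ in range(n_gen):
        children = [[g + random.gauss(0, mut) for g in random.choice(pop)]
                    for _ in range(len(pop))]
        pop = sorted(pop + children, key=fitness)[:len(pop)]
    return pop

def pmpgp(fitness, n_demes=4, pop_size=40, elite_frac=0.25):
    """Parallel multi-population scheme: evolve demes with different
    genetic attributes (here, different mutation strengths), seed a
    master deme with the fittest 25% of each, and evolve it further."""
    demes = [evolve([[random.uniform(-1, 1) for _ in range(3)]
                     for _ in range(pop_size)],
                    fitness, mut=0.1 * (i + 1))
             for i in range(n_demes)]
    elite = int(pop_size * elite_frac)
    master = [m for d in demes for m in sorted(d, key=fitness)[:elite]]
    return evolve(master, fitness)[0]   # best model overall

best = pmpgp(lambda m: sum(g * g for g in m))  # toy fitness: minimise |m|^2
```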