Catalogue Search | MBRL
Explore the vast range of titles available.
818 result(s) for "Climatology Data processing."
Downscaling techniques for high-resolution climate projections : from global change to local impacts
\"Downscaling is a widely used technique for translating information from large-scale climate models to the spatial and temporal scales needed to assess local and regional climate impacts, vulnerability, risk and resilience. This book is a comprehensive guide to the downscaling techniques used for climate data. A general introduction of the science of climate modeling is followed by a discussion of techniques, models and methodologies used for producing downscaled projections, and the advantages, disadvantages and uncertainties of each. The book provides detailed information on dynamic and statistical downscaling techniques in non-technical language, as well as recommendations for selecting suitable downscaled datasets for different applications. The use of downscaled climate data in national and international assessments is also discussed using global examples. This is a practical guide for graduate students and researchers working on climate impacts and adaptation, as well as for policy makers and practitioners interested in climate risk and resilience\"-- Provided by publisher.
Mapping and Modeling Weather and Climate with GIS
2015
Mapping and Modeling Weather and Climate with GIS is a contributed volume of 23 chapters from leading climatologists, meteorologists, and other experts about how geospatial cartography and analysis help to advance atmospheric science research. Coverage includes data and software resources, data representation, observations, modeling, data-model integration, web services, and areas of current and potential cross-fertilization between the atmospheric and geospatial sciences. Providing both the concepts and practices of mapping and modeling projects, the book is useful to novices using GIS on weather and climate projects. Practitioners and managers will gain a clear picture of the advances in GIS for the atmospheric sciences and appreciate the helpful lists of available geospatial resources.
Invisible in the storm
2013
Invisible in the Storm is the first book to recount the history, personalities, and ideas behind one of the greatest scientific successes of modern times--the use of mathematics in weather prediction. Although humans have tried to forecast weather for millennia, mathematical principles were used in meteorology only after the turn of the twentieth century. From the first proposal for using mathematics to predict weather, to the supercomputers that now process meteorological information gathered from satellites and weather stations, Ian Roulstone and John Norbury narrate the groundbreaking evolution of modern forecasting.
The authors begin with Vilhelm Bjerknes, a Norwegian physicist and meteorologist who in 1904 came up with a method now known as numerical weather prediction. Although his proposed calculations could not be implemented without computers, his early attempts, along with those of Lewis Fry Richardson, marked a turning point in atmospheric science. Roulstone and Norbury describe the discovery of chaos theory's butterfly effect, in which tiny variations in initial conditions produce large variations in the long-term behavior of a system--dashing the hopes of perfect predictability for weather patterns. They explore how weather forecasters today formulate their ideas through state-of-the-art mathematics, taking into account limitations to predictability. Millions of variables--known, unknown, and approximate--as well as billions of calculations, are involved in every forecast, producing informative and fascinating modern computer simulations of the Earth system.
Accessible and timely, Invisible in the Storm explains the crucial role of mathematics in understanding the ever-changing weather.
Daily evaluation of 26 precipitation datasets using Stage-IV gauge-radar data for the CONUS
by Pappenberger, Florian; Adler, Robert F.; Huffman, George J.
in Bias, Case studies, Climate models
2019
New precipitation (P) datasets are released regularly, following innovations in weather forecasting models, satellite retrieval methods, and multi-source merging techniques. Using the conterminous US as a case study, we evaluated the performance of 26 gridded (sub-)daily P datasets to obtain insight into the merit of these innovations. The evaluation was performed at a daily timescale for the period 2008–2017 using the Kling–Gupta efficiency (KGE), a performance metric combining correlation, bias, and variability. As a reference, we used the high-resolution (4 km) Stage-IV gauge-radar P dataset. Among the three KGE components, the P datasets performed worst overall in terms of correlation (related to event identification). In terms of improving KGE scores for these datasets, improved P totals (affecting the bias score) and improved distribution of P intensity (affecting the variability score) are of secondary importance. Among the 11 gauge-corrected P datasets, the best overall performance was obtained by MSWEP V2.2, underscoring the importance of applying daily gauge corrections and accounting for gauge reporting times. Several uncorrected P datasets outperformed gauge-corrected ones. Among the 15 uncorrected P datasets, the best performance was obtained by the ERA5-HRES fourth-generation reanalysis, reflecting the significant advances in earth system modeling during the last decade. The (re)analyses generally performed better in winter than in summer, while the opposite was the case for the satellite-based datasets. IMERGHH V05 performed substantially better than TMPA-3B42RT V7, attributable to the many improvements implemented in the IMERG satellite P retrieval algorithm. IMERGHH V05 outperformed ERA5-HRES in regions dominated by convective storms, while the opposite was observed in regions of complex terrain. The ERA5-EDA ensemble average exhibited higher correlations than the ERA5-HRES deterministic run, highlighting the value of ensemble modeling. The WRF regional convection-permitting climate model showed considerably more accurate P totals over the mountainous west and performed best among the uncorrected datasets in terms of variability, suggesting there is merit in using high-resolution models to obtain climatological P statistics. Our findings provide some guidance to choose the most suitable P dataset for a particular application.
Journal Article
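The Kling–Gupta efficiency named in this abstract has a standard closed form, KGE = 1 - sqrt((r - 1)^2 + (beta - 1)^2 + (gamma - 1)^2), whose three terms are the correlation, bias, and variability components referred to above. A minimal NumPy sketch, using the coefficient-of-variation ratio for the variability term (one common variant; the paper's exact formulation may differ):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency: combines correlation, bias, and variability."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]                  # correlation component
    beta = sim.mean() / obs.mean()                   # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # CV ratio
    return 1.0 - np.sqrt((r - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)
```

A perfect dataset scores 1; in the study above, `obs` would be the daily Stage-IV reference and `sim` each of the 26 evaluated P datasets.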
Centennial-Scale Sea Surface Temperature Analysis and Its Uncertainty
by Hirahara, Shoji; Fukuda, Yoshikazu; Ishii, Masayoshi
in Air temperature, Algorithms, Analysis
2014
A new sea surface temperature (SST) analysis on a centennial time scale is presented. In this analysis, a daily SST field is constructed as a sum of a trend, interannual variations, and daily changes, using in situ SST and sea ice concentration observations. All SST values are accompanied by theory-based analysis errors as a measure of reliability. An improved equation is introduced to represent the ice–SST relationship, which is used to produce SST data from observed sea ice concentrations. Prior to the analysis, biases of individual SST measurement types are estimated for a homogenized long-term time series of global mean SST. Because metadata necessary for the bias correction are unavailable for many historical observational reports, the biases are determined so as to ensure consistency among existing SST and nighttime air temperature observations. The global mean SSTs with bias-corrected observations are in agreement with those of a previously published study, which adopted a different approach. Satellite observations are newly introduced to reconstruct SST variability over data-sparse regions. Moreover, uncertainty in areal means of the present and previous SST analyses is investigated using the theoretical analysis errors and estimated sampling errors. The result confirms the advantages of the present analysis, and it is helpful in understanding the reliability of SST for a specific area and time period.
Journal Article
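The additive construction described above (daily SST as the sum of a trend, interannual variations, and daily changes) is straightforward to express. A minimal sketch with synthetic stand-in fields; in the actual analysis each component is estimated from in situ SST and sea ice observations:

```python
import numpy as np

rng = np.random.default_rng(0)
days, nlat, nlon = 365, 90, 180               # hypothetical daily 2-degree grid

# Stand-in component fields; the real analysis estimates each one
# from in situ SST and sea ice concentration observations.
trend = np.linspace(0.0, 0.5, days)[:, None, None] + np.zeros((days, nlat, nlon))
interannual = rng.normal(0.0, 0.3, size=(days, nlat, nlon))
daily = rng.normal(0.0, 0.1, size=(days, nlat, nlon))

sst = trend + interannual + daily             # the additive construction above
```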
So, how much of the Earth's surface is covered by rain gauges?
by Joe, Paul; Becker, Andreas; Huffman, George J.
in Area, Atmospheric precipitations, Autocorrelation
2017
The measurement of global precipitation, both rainfall and snowfall, is critical to a wide range of users and applications. Rain gauges are indispensable in the measurement of precipitation, remaining the de facto standard for precipitation information across Earth's surface for hydrometeorological purposes. However, their distribution across the globe is limited: over land their distribution and density are variable, while over oceans very few gauges exist, and where measurements are made they may not adequately reflect the rainfall amounts of the broader area. Critically, the number of gauges available, or appropriate for a particular study, varies greatly across the Earth owing to temporal sampling resolutions, periods of operation, data latency, and data access. Numbers of gauges range from a few thousand available in near-real time to about 100,000 for all "official" gauges, and to possibly hundreds of thousands if all possible gauges are included. Gauges routinely used in the generation of global precipitation products cover an equivalent area of between about 250 and 3,000 m². For comparison, the center circle of a soccer pitch, or a tennis court, is about 260 m². Although each gauge should represent more than just the gauge orifice, autocorrelation distances of precipitation vary greatly with regime and integration period. Assuming each Global Precipitation Climatology Centre (GPCC)-available gauge is independent and represents a surrounding area of 5-km radius, this represents only about 1% of Earth's surface. The situation is further confounded for snowfall, which has a greater measurement uncertainty.
Journal Article
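The area figures in this abstract can be checked with back-of-the-envelope arithmetic. A short sketch, assuming a standard 8-inch (20.3 cm) gauge orifice and an illustrative GPCC gauge count of 67,000 (both values are assumptions; the abstract itself gives only "a few thousand" to "about 100,000" gauges):

```python
import math

EARTH_SURFACE_M2 = 5.1e14                     # ~5.1e8 km^2

# Orifice view: total collecting area of all gauges combined.
orifice_m2 = math.pi * (0.203 / 2)**2         # 8-inch-diameter funnel, ~0.032 m^2
for n in (8_000, 100_000):                    # near-real-time vs. "official" counts
    print(f"{n:>7,} gauges -> {n * orifice_m2:,.0f} m^2 of orifice")
# ~260 m^2 and ~3,200 m^2: the order of the 250-3,000 m^2 range quoted above.

# Representativeness view: one independent 5-km-radius circle per GPCC gauge.
n_gpcc = 67_000                               # assumed order of the GPCC archive
fraction = n_gpcc * math.pi * 5_000**2 / EARTH_SURFACE_M2
print(f"{100 * fraction:.1f}% of Earth's surface")   # ~1%
```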
Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables
2018
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
Journal Article
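MBCn is presented as a multivariate generalization of quantile mapping, so the univariate building block is worth seeing concretely. A minimal sketch of empirical quantile mapping (not the MBCn algorithm itself, which iterates a randomly rotated version of this correction over the N variables until the joint distributions match):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_proj):
    """Empirical univariate quantile mapping: find each projected value's
    non-exceedance probability under the model's historical distribution,
    then read off the observed quantile at that probability."""
    model_hist = np.sort(np.asarray(model_hist, float))
    p = np.searchsorted(model_hist, model_proj) / model_hist.size
    return np.quantile(np.asarray(obs_hist, float), np.clip(p, 0.0, 1.0))
```

As the abstract notes, applying this independently per variable corrects each marginal distribution but leaves inter-variable dependence uncorrected.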
Prediction of crime occurrence from multi-modal data using deep learning
by Kang, Hang-Bong; Kang, Hyeon-Woo
in Aggression, Artificial intelligence, Artificial neural networks
2017
In recent years, various studies have been conducted on the prediction of crime occurrences. This predictive capability is intended to assist in crime prevention by facilitating effective implementation of police patrols. Previous studies have used data from multiple domains such as demographics, economics, and education. Their prediction models treat data from different domains equally. These methods have problems in crime occurrence prediction, such as difficulty in discovering highly nonlinear relationships, redundancies, and dependencies between multiple datasets. In order to enhance crime prediction models, we consider environmental context information, such as broken windows theory and crime prevention through environmental design. In this paper, we propose a feature-level data fusion method with environmental context based on a deep neural network (DNN). Our dataset consists of data collected from various online databases of crime statistics, demographic and meteorological data, and images in Chicago, Illinois. Prior to generating training data, we select crime-related data by conducting statistical analyses. Finally, we train our DNN, which consists of the following four kinds of layers: spatial, temporal, environmental context, and joint feature representation layers. Coupled with crucial data extracted from various domains, our fusion DNN is a product of an efficient decision-making process that statistically analyzes data redundancy. Experimental performance results show that our DNN model is more accurate in predicting crime occurrence than other prediction models.
Journal Article
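The four layer groups named in the abstract (spatial, temporal, environmental context, and joint feature representation) map naturally onto a multi-branch network with feature-level fusion. A minimal PyTorch sketch; the input dimensions, widths, and depths here are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class FusionDNN(nn.Module):
    """Feature-level fusion: one encoder per domain, concatenated into
    a joint representation that feeds the classifier."""
    def __init__(self, d_spatial=32, d_temporal=16, d_context=64, n_classes=2):
        super().__init__()
        self.spatial = nn.Sequential(nn.Linear(d_spatial, 64), nn.ReLU())
        self.temporal = nn.Sequential(nn.Linear(d_temporal, 32), nn.ReLU())
        self.context = nn.Sequential(nn.Linear(d_context, 64), nn.ReLU())
        self.joint = nn.Sequential(
            nn.Linear(64 + 32 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_classes),               # crime / no-crime logits
        )

    def forward(self, spatial, temporal, context):
        fused = torch.cat([self.spatial(spatial),
                           self.temporal(temporal),
                           self.context(context)], dim=-1)
        return self.joint(fused)
```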
Rain or Snow Detection in Image Sequences Through Use of a Histogram of Orientation of Streaks
by Hautière, Nicolas; Bossu, Jérémie; Tarel, Jean-Philippe
in Algorithms, Applied sciences, Artificial Intelligence
2011
The detection of bad weather conditions is crucial for meteorological centers, especially given the demands of air, sea, and ground traffic management. In this article, a computer-vision system is presented that detects the presence of rain or snow. To separate the foreground from the background in image sequences, a classical Gaussian mixture model is used. The foreground model serves to detect rain and snow, since these are dynamic weather phenomena. Selection rules based on photometry and size are proposed to select the potential rain streaks. A histogram of orientations of rain or snow streaks (HOS), estimated with the method of geometric moments, is then computed and is assumed to follow a Gaussian-uniform mixture model: the Gaussian distribution represents the orientation of the rain or snow, whereas the uniform distribution represents the orientation of the noise. An expectation-maximization algorithm is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed, and its amplitude is used to decide whether rain or snow is present. When rain or snow is detected, the HOS makes it possible to detect the rain or snow pixels in the foreground images and to estimate the precipitation intensity. Applications of the method are numerous and include the detection of critical weather conditions, weather observation, improving the reliability of video-surveillance systems, and rain rendering.
Journal Article
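The statistical core of the method, separating a Gaussian component (coherent streak orientation) from a uniform component (noise) in the histogram of orientations, is a two-component mixture solvable with EM. A minimal sketch under that Gaussian-uniform assumption; the paper's estimator details may differ:

```python
import numpy as np

def em_gaussian_uniform(theta, iters=50):
    """EM for a Gaussian + uniform mixture over streak orientations
    theta in [0, pi): the Gaussian captures rain/snow streaks, the
    uniform captures noise. Returns (weight, mean, std) of the Gaussian."""
    lo, hi = 0.0, np.pi
    w, mu, sigma = 0.5, theta.mean(), theta.std() + 1e-6
    for _ in range(iters):
        # E-step: responsibility of the Gaussian component per sample.
        g = np.exp(-0.5 * ((theta - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
        r = w * g / (w * g + (1 - w) / (hi - lo))
        # M-step: update mixture weight and Gaussian parameters.
        w = r.mean()
        mu = (r * theta).sum() / r.sum()
        sigma = np.sqrt((r * (theta - mu)**2).sum() / r.sum()) + 1e-9
    return w, mu, sigma
```

A large fitted weight with a tight standard deviation indicates coherently oriented streaks, consistent with the abstract's use of the Gaussian amplitude to decide whether rain or snow is present.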