MBRL Search Results

4,621 result(s) for "data gaps"
The Inequality (or the Growth) We Measure: Data Gaps and the Distribution of Incomes
How much correspondence is there between the income measured in microeconomic inequality studies and the income measured in macroeconomic growth statistics? The presence of significant gaps would call into question both our assessment of the relevance of economic growth across the population and our confidence in mainstream distributional statistics as accurate representations of income flows in an economy. In this paper, we document large gaps between income estimates from household surveys, administrative tax records and national accounts for ten Latin American countries, a region that experienced a rare combination of strong growth and falling income inequality according to official statistics since the early twenty-first century. We find that surveys only account for around half of the macroeconomic income, and thus growth, of these countries over the past twenty years. We estimate that less than half of this gap is due to conceptual differences, the remainder coming from growing measurement issues, which mainly concern capital incomes. Comparing the top tails of administrative data and surveys, we find diverging average incomes, especially for non-wage incomes, and differing shapes. We discuss the implications of such discrepancies for our understanding of inequality and growth. JEL Classification Codes: D3; E01; N36; O54
A Review on Sustainability of Watershed Management in Indonesia
This paper provides an overview of the implementation of and obstacles to watershed management in Indonesia, and of alternative solutions, based on a synoptic review of related studies and experiences across the country. The review found that institutional problems include hierarchical confusion, discrepancies and asynchrony among regulations, and weak participation, synchronization, and coordination among watershed management stakeholders. Weaknesses in the planning stage include poor integration among sectors, a lack of community participation, and limited readiness to integrate watershed planning into regional planning. Stakeholders' involvement is also a critical factor in the successful implementation of degraded watershed rehabilitation, including in peatland and mangrove areas. Failure should be minimized by providing adequate information on degraded watershed characteristics, appropriate species choices, and effective mechanical construction for soil and water conservation. Community participation, the main driver of watershed management, should be achieved by strengthening public awareness of the importance of a sustainable watershed and by providing access for the community to be involved in each stage of watershed management. Another problem is data gaps, which must be addressed from the planning stage through to evaluation. These gaps can be bridged by using remotely sensed data and by applying hydrology-based simulation models. Simplified criteria for watershed assessment may also be required, depending on site-specific issues and the watershed scale.
3D Reconstruction of Coastal Cliffs from Fixed-Wing and Multi-Rotor UAS: Impact of SfM-MVS Processing Parameters, Image Redundancy and Acquisition Geometry
Monitoring the dynamics of coastal cliffs is fundamental for the safety of communities, buildings, utilities, and infrastructures located near the coastline. Structure-from-Motion and Multi-View Stereo (SfM-MVS) photogrammetry based on Unmanned Aerial Systems (UAS) is a flexible and cost-effective surveying technique for generating a dense 3D point cloud of the whole cliff face (from bottom to top), with high spatial and temporal resolution. In this paper, in order to generate a reproducible, reliable, precise, accurate, and dense point cloud of the cliff face, a comprehensive analysis of the SfM-MVS processing parameters, image redundancy and acquisition geometry was performed. Using two different UAS, a fixed-wing and a multi-rotor, two flight missions were executed with the aim of reconstructing the geometry of an almost vertical cliff located on the central Portuguese coast. The results indicated that optimizing the processing parameters of Agisoft Metashape can improve the 3D accuracy of the point cloud by up to 2 cm. Regarding the image acquisition geometry, the high off-nadir (90°) dataset taken by the multi-rotor generated a denser and more accurate point cloud, with fewer data gaps, than the one generated by the low off-nadir (3°) dataset taken by the fixed-wing. Yet it was found that properly reducing the high overlap of the image dataset acquired by the multi-rotor drone yields an optimal image dataset, speeding up processing without compromising the accuracy or density of the generated point cloud. The analysis and results presented in this paper improve the knowledge required for the 3D reconstruction of coastal cliffs by UAS, providing new insights into the technical aspects needed for optimizing monitoring surveys.
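The overlap-reduction result mentioned in this abstract can be illustrated with a short back-of-the-envelope sketch. The function, the file names, and the 90%/70% overlap figures below are illustrative assumptions rather than values taken from the paper; the only relation used is that keeping every k-th image of a constant-speed strip turns a front overlap o into 1 - k(1 - o).

```python
# Illustrative sketch (not the authors' pipeline): thinning an ordered image
# sequence to a target along-track overlap before SfM-MVS processing.
# With an original front overlap o, keeping every k-th image yields an
# overlap of 1 - k*(1 - o), since the baseline between kept images grows k-fold.

def keep_every_kth(images, original_overlap, target_overlap):
    """Return the largest thinning factor k that still meets target_overlap,
    plus the thinned image list. Assumes a roughly constant flight speed."""
    k = 1
    while 1 - (k + 1) * (1 - original_overlap) >= target_overlap:
        k += 1
    return k, images[::k]

# Example: thin a 90%-overlap sequence down to at least 70% overlap.
images = [f"IMG_{i:04d}.JPG" for i in range(200)]   # hypothetical file names
k, thinned = keep_every_kth(images, original_overlap=0.90, target_overlap=0.70)
print(k, len(thinned))   # k = 3 -> resulting overlap 1 - 3*(1 - 0.90) = 0.70
```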
Bridging Data Gaps in Emergency Care: The NIGHTINGALE Project and the Future of AI in Mass Casualty Management
In the context of mass casualty incident (MCI) management, artificial intelligence (AI) represents a promising avenue, offering potential improvements in processes such as triage, decision support, and resource optimization. However, the effectiveness of AI is heavily reliant on the availability of quality data. Currently, MCI data are scarce and difficult to obtain, as critical information regarding patient demographics, vital signs, and treatment responses is often missing or incomplete, particularly in the prehospital setting. Although the NIGHTINGALE (Novel Integrated Toolkit for Enhanced Pre-Hospital Life Support and Triage in Challenging and Large Emergencies) project is actively addressing these challenges by developing a comprehensive toolkit designed to support first responders and enhance data collection during MCIs, significant work remains to ensure the tools are fully operational and can effectively integrate continuous monitoring and data management. To further advance these efforts, we provide a series of recommendations, advocating for increased European Union funding to facilitate the generation of the diverse, high-quality datasets essential for training AI models, for the application of transfer learning, and for the development of tools supporting data collection during MCIs, while fostering continuous collaboration between end users and technical developers. By securing these resources, we can enhance the efficiency and adaptability of AI applications in emergency care, bridging the current data gaps and ultimately improving outcomes during critical situations.
The Rise of the Data Poor: The COVID-19 Pandemic Seen From the Margins
Quantification is central to the narration of the COVID-19 pandemic. Numbers determine the existence of the problem and affect our ability to care and contribute to relief efforts. Yet many communities at the margins, including many areas of the Global South, are virtually absent from this number-based narration of the pandemic. This essay builds on critical data studies to warn against the universalization of problems, narratives, and responses to the virus. To this end, it explores two types of data gaps and the corresponding “data poor.” The first gap concerns the data poverty perduring in low-income countries and jeopardizing their ability to adequately respond to the pandemic. The second affects vulnerable populations within a variety of geopolitical and socio-political contexts, whereby data poverty constitutes a dangerous form of invisibility which perpetuates various forms of inequality. But, even during the pandemic, the disempowered manage to create innovative forms of solidarity from below that partially mitigate the negative effects of their invisibility.
Estimating where and how animals travel: An optimal framework for path reconstruction from autocorrelated tracking data
An animal's trajectory is a fundamental object of interest in movement ecology, as it directly informs a range of topics from resource selection to energy expenditure and behavioral states. Optimally inferring the mostly unobserved movement path and its dynamics from a limited sample of telemetry observations is a key unsolved problem, however. The field of geostatistics has focused significant attention on a mathematically analogous problem that has a statistically optimal solution named after its inventor, Krige. Kriging revolutionized geostatistics and is now the gold standard for interpolating between a limited number of autocorrelated spatial point observations. Here we translate Kriging for use with animal movement data. Our Kriging formalism encompasses previous methods to estimate animals' trajectories (the Brownian bridge and the continuous-time correlated random walk library) as special cases, informs users as to when these previous methods are appropriate, and provides a more general method when they are not. We demonstrate the capabilities of Kriging in a case study with Mongolian gazelles where, compared to the Brownian bridge, Kriging with a better-suited model was 10% more precise in interpolating locations and 500% more precise in estimating occurrence areas.
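As a rough illustration of what kriging an autocorrelated track looks like in practice, here is a minimal Python sketch. It uses an exponential (Ornstein-Uhlenbeck-like) covariance with made-up variance, timescale, and noise values, and interpolates a single toy coordinate; the paper's actual workflow fits and selects the movement model from the data rather than assuming one.

```python
# Minimal sketch of kriging (Gaussian-process regression) applied to one
# coordinate of an animal track, assuming an exponential covariance.
# Variance, timescale and noise values are illustrative, not fitted.
import numpy as np

def exp_cov(t1, t2, sigma2=100.0, tau=3.0):
    """Exponential covariance between observation times (hours)."""
    return sigma2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / tau)

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0, 48, 25))          # irregular telemetry fixes
x_obs = np.cumsum(rng.normal(0, 2, t_obs.size))  # toy 1-D positions
t_new = np.linspace(0, 48, 200)                  # times to interpolate

K = exp_cov(t_obs, t_obs) + 1.0 * np.eye(t_obs.size)   # + measurement noise
k_star = exp_cov(t_new, t_obs)
weights = np.linalg.solve(K, x_obs - x_obs.mean())
x_hat = x_obs.mean() + k_star @ weights                # kriged path estimate
var_hat = exp_cov(t_new, t_new).diagonal() - np.einsum(
    "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))  # pointwise uncertainty
```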
Automated Extraction of Consistent Time-Variable Water Surfaces of Lakes and Reservoirs Based on Landsat and Sentinel-2
In this study, a new approach for the automated extraction of high-resolution time-variable water surfaces is presented. For that purpose, optical images from Landsat and Sentinel-2 acquired between January 1984 and June 2018 are used. The first part of this new approach is the extraction of land-water masks by combining five water indexes and using an automated threshold computation. In the second part, all data gaps caused by voids, clouds, cloud shadows, or snow are filled by using a long-term water probability mask. This mask is finally used in an iterative approach for filling remaining data gaps in all monthly masks, which leads to gapless surface area time series for lakes and reservoirs. The results of this new approach are validated by comparing the surface area changes with water level time series from gauging stations. For inland waters in remote areas without in situ data, water level time series from satellite altimetry are used. Overall, 32 globally distributed lakes and reservoirs of different extents up to 2482.27 km² are investigated. The average correlation coefficients between surface area time series and water levels from in situ measurements and satellite altimetry increased from 0.611 to 0.862 after filling the data gaps, an improvement of about 41%. These results clearly demonstrate the quality of the estimated land-water masks, but also the strong impact of a reliable data gap-filling approach. All presented surface area time series are freely available on the Database for Hydrological Time Series of Inland Waters (DAHITI).
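A heavily simplified sketch of the two-step idea (classify water with an automatically thresholded index, then fill cloud and snow gaps from a long-term water-probability mask) might look as follows in Python. Using a single NDWI index and a fixed 0.5 probability cutoff are assumptions made here for brevity; the paper combines five indexes and fills remaining gaps iteratively.

```python
# Simplified sketch: (1) classify water from an optical scene with a water
# index and an automatically computed threshold, (2) fill cloud/snow gaps
# from a long-term water-probability mask. Single-index NDWI and the 0.5
# cutoff are simplifications of the multi-index, iterative approach.
import numpy as np
from skimage.filters import threshold_otsu

def water_mask(green, nir, invalid, water_probability):
    """green, nir: reflectance bands; invalid: bool mask of clouds/shadow/snow;
    water_probability: long-term fraction of valid observations that were water."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    thresh = threshold_otsu(ndwi[~invalid])           # automated threshold
    mask = ndwi > thresh                              # True = water, False = land
    mask[invalid] = water_probability[invalid] > 0.5  # fill data gaps
    return mask
```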
Comparing Single and Multiple Imputation Approaches for Missing Values in Univariate and Multivariate Water Level Data
Missing values in water level data are a persistent problem in data modelling and are especially common in developing countries. Data imputation has received considerable research attention as a means of raising the quality of data used in the study of extreme events such as flooding and droughts. This article evaluates single and multiple imputation methods used on monthly univariate and multivariate water level data from four water stations on the rivers Benue and Niger in Nigeria. The missing completely at random, missing at random and missing not at random data mechanisms were each considered. The best imputation method is identified using two error metrics: root mean square error and mean absolute percentage error. For the univariate case, the seasonal decomposition method is best for imputing missing values at various missingness levels for all three missing mechanisms, followed by Kalman smoothing, while random imputation is much poorer. For instance, with 5% of data missing completely at random at the Kainji water station, the Kalman smoothing, random and seasonal decomposition methods had average root mean square errors of 13.61, 102.60 and 10.46, respectively. For the multivariate case, missForest is best, closely followed by k nearest neighbour for the missing completely at random and missing at random mechanisms, and k nearest neighbour is best, followed by missForest, for the missing not at random mechanism. The random forest and predictive mean matching methods perform poorly in terms of the two metrics considered. For example, with 10% of data missing completely at random at the Ibi water station, the average root mean square errors for random forest, k nearest neighbour, missForest and predictive mean matching were 22.51, 17.17, 14.60 and 25.98, respectively. The results indicate that the seasonal decomposition method, and the missForest or k nearest neighbour methods, can impute univariate and multivariate water level missing data, respectively, with higher accuracy than the other methods considered.
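The evaluation loop described in this abstract (remove observed values at random, impute, and score against the withheld truth) can be sketched roughly as below. The data are synthetic, and scikit-learn's KNNImputer stands in for the k nearest neighbour method; the seasonal decomposition, Kalman smoothing, and missForest implementations compared in the study are not reproduced here.

```python
# Minimal analogue of the evaluation loop: mask a fraction of water levels
# completely at random, impute, and score with RMSE on the withheld values.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(42)
# Toy monthly levels for four hypothetical stations (20 years x 4 columns).
levels = rng.normal(loc=[5.0, 6.5, 4.2, 7.1], scale=0.8, size=(240, 4))

mask = rng.random(levels.shape) < 0.10      # 10% missing completely at random
with_gaps = levels.copy()
with_gaps[mask] = np.nan

imputed = KNNImputer(n_neighbors=5).fit_transform(with_gaps)
rmse = np.sqrt(np.mean((imputed[mask] - levels[mask]) ** 2))
print(f"RMSE on artificially removed values: {rmse:.3f}")
```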
Bias-free estimation of the covariance function and the power spectral density from data with missing samples including extended data gaps
Nonparametric estimation of the covariance function and the power spectral density of uniformly spaced data from stationary stochastic processes with missing samples is investigated. Several common methods are tested for their systematic and random errors under variations in the distribution of the missing samples. In addition to random and independent outliers, the influence of longer and hence correlated data gaps on the performance of the various estimators is also investigated. The aim is to construct a bias-free estimation routine for the covariance function and the power spectral density from stationary stochastic processes under the condition of missing samples, with optimum use of the available information in terms of low estimation variance and mean square error, and to do so independently of the spectral composition of the data gaps. The proposed procedure is a combination of three methods that allow bias-free estimation of the desired statistical functions with efficient use of the available information: weighted averaging over valid samples, derivation of the covariance estimate for the entire data set and restriction of the domain of the covariance function in a post-processing step, and appropriate correction of the covariance estimate after removal of the estimated mean value. The procedure avoids both interpolation of missing samples and block subdivision. Spectral estimates are obtained from covariance functions and vice versa using the Wiener–Khinchin theorem.
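A bare-bones version of the central idea (average lag products only over pairs of valid samples, then pass the covariance estimate through the Wiener–Khinchin relation) is sketched below. It deliberately omits the mean-removal bias correction and the domain restriction that the abstract describes, so it should be read as an illustration of the estimator's skeleton rather than the proposed procedure.

```python
# Simplified sketch: lag products averaged only over valid sample pairs,
# followed by a crude spectral estimate via the Wiener-Khinchin relation.
import numpy as np

def gappy_autocovariance(x, valid, max_lag):
    """x: uniformly sampled signal with gaps; valid: bool mask of usable samples."""
    xm = np.where(valid, x - x[valid].mean(), 0.0)
    w = valid.astype(float)
    cov = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        pairs = w[: len(x) - k] * w[k:]               # 1 where both samples valid
        cov[k] = np.sum(xm[: len(x) - k] * xm[k:]) / max(pairs.sum(), 1.0)
    return cov

def psd_from_covariance(cov, dt=1.0):
    """Crude one-sided spectral estimate from a symmetric covariance function."""
    full = np.concatenate([cov[::-1], cov[1:]])       # build the symmetric sequence
    return np.abs(np.fft.rfft(full)) * dt

# Example: a noisy sine with 30% of samples missing at random.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.05 * np.arange(1000)) + rng.normal(0, 0.3, 1000)
valid = rng.random(1000) > 0.3
spec = psd_from_covariance(gappy_autocovariance(x, valid, max_lag=200))
```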
Comparison of Cloud-Filling Algorithms for Marine Satellite Data
Marine remote sensing provides comprehensive characterizations of the ocean surface across space and time. However, cloud cover is a significant challenge in marine satellite monitoring. Researchers have proposed various algorithms to fill data gaps “below the clouds”, but a comparison of algorithm performance across several geographic regions has not yet been conducted. We compared ten basic algorithms, including data-interpolating empirical orthogonal functions (DINEOF), geostatistical interpolation, and supervised learning methods, in two gap-filling tasks: the reconstruction of chlorophyll a in pixels covered by clouds, and the correction of regional mean chlorophyll a concentrations. For this purpose, we combined tens of cloud-free images with hundreds of cloud masks in four study areas, creating thousands of situations in which to test the algorithms. The best algorithm depended on the study area and task, and differences between the best algorithms were small. Ordinary Kriging, spatiotemporal Kriging, and DINEOF worked well across study areas and tasks. Random forests reconstructed individual pixels most accurately. We also found that high levels of cloud cover led to considerable errors in estimated regional mean chlorophyll a concentration. These errors could, however, be reduced by about 50% to 80% (depending on the study area) with prior cloud-filling.
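To make the second task concrete, the sketch below estimates a regional mean over a synthetic chlorophyll-a field with and without prior gap filling; scipy's griddata is used here as a stand-in for the kriging, DINEOF, and supervised learning methods actually compared in the paper.

```python
# Sketch of regional-mean estimation under cloud cover, with and without
# prior gap filling. The field and cloud mask are synthetic; griddata is a
# stand-in interpolator, not one of the algorithms compared in the paper.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)
y, x = np.mgrid[0:100, 0:100]
chl = 2.0 + 1.5 * np.sin(x / 15.0) + 0.5 * rng.normal(size=x.shape)  # toy field
clouds = (x - 50) ** 2 + (y - 50) ** 2 < 30 ** 2                     # cloud mask

valid = ~clouds
filled = griddata(
    points=np.column_stack([x[valid], y[valid]]),
    values=chl[valid],
    xi=(x, y),
    method="linear",
)

print("true regional mean:    ", chl.mean())
print("mean over clear pixels:", chl[valid].mean())   # biased if clouds are not random
print("mean after gap filling:", np.nanmean(filled))  # estimate with prior filling
```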