5,881 results for "spatiotemporal data"
Leveraging Spatiotemporal Redundancy for Sensor Data Imputation in Water Distribution Networks
The rapid digital transformation of Water Distribution Networks (WDNs) has led to the collection of multi‐sensor time series with high temporal and spatial resolution. However, missing data poses a significant challenge, undermining the usability and effectiveness of data‐driven applications. Performing missing data imputation is essential to enhance data quality and support intelligent management. This study first reveals that WDN sensor data in tensor form inherently exhibit spatiotemporal redundancy across three dimensions: inter‐sensor similarity, intra‐day regularity, and daily recurrence. The redundancy can be algebraically characterized by the low‐rank structure of WDN tensor data, providing a robust foundation for imputation. Based on these findings, a novel Low‐rank Autoregressive Tensor Completion (LATC) approach is proposed to efficiently impute spatiotemporal WDN data. The LATC combines autoregressive regularization with standard low‐rank tensor completion, effectively capturing both global redundancy and local correlation of multi‐sensor WDN data. Finally, the LATC is validated on four real‐world and simulated WDN data sets under eight different missing scenarios. Extensive experiments show that the LATC significantly outperforms state‐of‐the‐art baseline methods, achieving accurate imputation even under severe corruption and complex missing patterns.
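The core idea behind LATC, treating imputation as a low-rank completion problem, can be illustrated with a much simpler matrix version. The sketch below is hypothetical and not the authors' code: it iteratively replaces missing entries with a rank-r SVD approximation while holding observed entries fixed, omitting the autoregressive regularizer that LATC adds on top.

```python
import numpy as np

def low_rank_impute(X, mask, rank=2, n_iter=50):
    """Iteratively fill missing entries (mask == False) with a rank-r SVD
    approximation: the core of low-rank completion methods such as LATC
    (without the autoregressive regularization term)."""
    # initialize missing entries with the mean of the observed ones
    filled = np.where(mask, X, X[mask].mean())
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(mask, X, approx)  # keep observed entries fixed
    return filled
```

For WDN tensors (sensor x time-of-day x day), the same idea is applied to tensor unfoldings rather than a single matrix.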
Detection and prediction of land use/land cover change using spatiotemporal data fusion and the Cellular Automata–Markov model
The detection and prediction of land use/land cover (LULC) change is crucial for guiding land resource management, planning, and sustainable development. In view of seasonal rhythms and phenological effects, detection and prediction would benefit greatly from LULC maps of the same season across different years. However, due to frequent cloud contamination, it is difficult to obtain same-season LULC maps from existing remote sensing images. This study utilized the spatiotemporal data fusion (STF) method to obtain summer Landsat-scale images of Hefei over the past 30 years. The Cellular Automata–Markov model was applied to simulate and predict future LULC maps. The results demonstrate the following: (1) the STF method can generate summer Landsat-scale data at a consistent inter-annual interval for analyzing LULC change; (2) the fused data can improve LULC detection and prediction accuracy by shortening the inter-annual interval, and can also yield LULC prediction results for a specific year; (3) the areas of cultivated land, water, and vegetation decreased by 33.14%, 2.03%, and 16.36%, respectively, while the area of construction land increased by 200.46% from 1987 to 2032. The urban expansion rate is projected to peak around 2020 and then slow down. The findings provide valuable information for urban planners seeking to achieve sustainable development goals.
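The Markov half of the CA-Markov model projects class-area proportions forward with a transition matrix estimated from two LULC maps. The sketch below uses an illustrative matrix and state vector (not values from the study) to show one such projection over three intervals; the cellular-automata step that spatially allocates these areas is omitted.

```python
import numpy as np

# Hypothetical transition matrix: row i gives the probability that a pixel
# of class i becomes each class in the next interval (rows sum to 1).
# Classes: cultivated, water, vegetation, construction.
P = np.array([[0.80, 0.05, 0.10, 0.05],
              [0.02, 0.90, 0.05, 0.03],
              [0.10, 0.02, 0.70, 0.18],
              [0.00, 0.00, 0.02, 0.98]])  # construction rarely reverts

state = np.array([0.45, 0.05, 0.30, 0.20])  # current area proportions
for _ in range(3):                          # three prediction intervals
    state = state @ P                       # Markov projection
```

Because each row of P sums to one, the projected proportions still sum to one, and the construction-land share grows at every step, mirroring the expansion trend the abstract reports.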
DGMI: A diffusion-based generative adversarial framework for multivariate air quality imputation
In monitoring spatiotemporal air quality data, missing data samples are prevalent, so imputing missing values in spatiotemporal data is of paramount importance. In recent years, diffusion probability models have played a prominent role in image, video, and text generation, and have also begun to be applied to spatiotemporal data imputation. However, such models face challenges in extracting fine-grained features for stable model operation and in accurately modeling data probability distributions. To address these issues, we propose a Diffusion-based Generative adversarial framework for Multivariate air quality data Imputation, termed DGMI. Recognizing the similar temporal, sensor, and indicator change characteristics inherent in air quality data, our framework caters to the spatiotemporal characteristics of air quality data by incorporating a multi-cycle temporal feature extraction module and a sensor-indicator feature extraction module, facilitating multidimensional refinement and integration of temporal, sensor, and indicator information. Moreover, initial missing values are encoded with linear interpolation and sine-cosine functions. Following the generation of imputed values by the model, we introduce a discriminator module that discerns the consistency between imputed and observed values, providing feedback to optimize the model from a data-distribution perspective. DGMI outperforms most current data imputation methods under various missing ratios on two real air quality datasets, by 4.1% in root mean square error and 3.0% in mean absolute error, demonstrating its efficacy on multidimensional spatiotemporal data with high missing rates.
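The initialization step DGMI describes, linear interpolation over gaps plus sine-cosine time encodings, is simple enough to sketch on its own. The function below is an illustrative stand-in (the diffusion and adversarial stages are omitted entirely): it fills `None` gaps linearly and attaches periodic time features, assuming a daily period of 24 steps.

```python
import math

def init_impute(series, period=24):
    """Fill missing (None) entries by linear interpolation, then attach
    sine/cosine time-of-day encodings per timestep."""
    vals = list(series)
    n = len(vals)
    i = 0
    while i < n:
        if vals[i] is None:
            j = i
            while j < n and vals[j] is None:   # find end of the gap
                j += 1
            left = vals[i - 1] if i > 0 else vals[j]
            right = vals[j] if j < n else vals[i - 1]
            span = j - i + 1
            for k in range(i, j):              # interpolate across the gap
                t = (k - i + 1) / span
                vals[k] = left + (right - left) * t
            i = j
        else:
            i += 1
    feats = [(v, math.sin(2 * math.pi * t / period),
                 math.cos(2 * math.pi * t / period))
             for t, v in enumerate(vals)]
    return vals, feats
```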
A fault-tolerant optimization mechanism for spatiotemporal data analysis in flink
Spatiotemporal data analysis plays a vital role in big data processing, and it is also a research hotspot in location-aware and recommender systems. In these applications, graph modeling and distributed iterative computing are the basis of and guarantee for data query and mining. Because of the constant repeated execution of specific calculation logic, iterative jobs are time-consuming and exert high pressure on system resources. However, iterative jobs always face the risk of stopping due to a computing node fault, which in turn causes serious economic losses. At present, the recovery strategy of Flink, the latest generation of distributed computing systems, for node faults in batch-processing mode is to restart the job from the beginning, which is extremely time-consuming. If the checkpoint mechanism of Flink's stream-processing mode is used to recover from batch job failures, it greatly increases the running time and storage overhead in the fault-free state. Therefore, a lightweight fault-tolerant mechanism is needed to reduce failure recovery time while ensuring the job efficiency of spatiotemporal data analysis. In view of the above situation and the characteristics of the iterative computing model for graph computing, a single-node failure recovery mechanism targeting only the failed node is proposed, which reduces failure recovery time by introducing lightweight checkpoints and local logs. Based on the proposed single-node failure recovery mechanism, a failure recovery mechanism for multi-node faults and associated faults is proposed, which can cope with more complex failure situations. Experimental results show that the proposed method can quickly and effectively recover jobs after failure, reducing the average recovery time by 37% in the case of a single-node fault and by 24% in the case of multi-node faults.
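The checkpoint-plus-local-log idea can be shown with a toy worker. The class below is a deliberately simplified sketch, not Flink code and not the paper's mechanism: a worker snapshots its state every few iterations and logs updates since the last snapshot, so recovery replays only that node's log instead of restarting the whole job.

```python
class Worker:
    """Toy iterative worker with a lightweight checkpoint and a local log
    of updates since the checkpoint (single-node recovery sketch)."""

    def __init__(self, value=0):
        self.value = value
        self.checkpoint = value
        self.log = []          # updates applied since the last checkpoint

    def step(self, delta, iteration, interval=3):
        self.value += delta
        self.log.append(delta)
        if iteration % interval == 0:          # periodic lightweight snapshot
            self.checkpoint, self.log = self.value, []

    def recover(self):
        # Restore the checkpoint, then replay the local log.
        # No global restart of the whole iterative job is needed.
        self.value = self.checkpoint
        for delta in self.log:
            self.value += delta
        return self.value
```

The trade-off mirrored here is the paper's: checkpoints bound the replay length, while the log keeps the fault-free overhead far below full per-iteration snapshotting.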
Dynamic monitoring of flood disaster based on remote sensing data cube
High-frequency dynamic monitoring of flood disasters using remote sensing technology is crucial for accurate decision-making in disaster prevention and relief. However, the current trade-off between the spatial and temporal resolution of remote sensing sensors limits their application in high-frequency dynamic monitoring of flood disasters. To deal with this challenge, in this study we presented an approach to conduct high-frequency dynamic monitoring of flood disasters based on a remote sensing data cube with high spatial and temporal resolution. The presented approach comprises two steps: (a) removing the cloudy areas in original MODIS data to construct a cloud-free MODIS data cube using the information provided by GPM rainfall data; (b) fusing the cloud-free MODIS data cube and Landsat-8 data using a spatiotemporal data fusion algorithm to construct a high spatiotemporal resolution (Landsat-like) data cube. The approach was tested by conducting high-frequency dynamic monitoring of a flood disaster that occurred in Henan Province, PR China. Our study had three main results: (1) the presented cloud removal algorithm in the first step was able to retain flood information and performed well in removing clouds during consecutive rainy days; the differences between the cloud-free MODIS data cube and the original MODIS data were small, and the cloud-free MODIS data cube could be used to construct the high spatiotemporal resolution data cube. (2) Our presented approach could be used to conduct high-frequency dynamic monitoring of flood disasters. (3) Testing results showed that two floods occurred in the study area from July 17, 2021, to October 16, 2021: the first from July 17, 2021, to September 15, 2021, with a maximum affected area of 668.36 km²; the second from September 18, 2021, to October 16, 2021, with a maximum affected area of 303.88 km². Our study provides a general approach for high-frequency monitoring of flood disasters.
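Step (a) of the pipeline can be caricatured as gap-filling over a data cube. The function below is only a simplistic stand-in for the paper's GPM-guided cloud removal: it fills cloud-masked pixels by carrying forward the most recent clear observation along the time axis of the cube, which illustrates the data-cube mechanics without any rainfall-based reconstruction.

```python
import numpy as np

def fill_clouds(stack, cloud_masks):
    """Fill cloud-masked pixels in a (time, row, col) cube with the most
    recent cloud-free value from earlier dates (naive carry-forward;
    the paper's method additionally exploits GPM rainfall data)."""
    filled = stack.astype(float)
    for t in range(1, stack.shape[0]):
        m = cloud_masks[t]
        filled[t][m] = filled[t - 1][m]   # carry forward the last clear value
    return filled
```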
An adaptive XGBoost-based optimized sliding window for concept drift handling in non-stationary spatiotemporal data streams classifications
In recent years, the popularity of using data science for decision-making has grown significantly. This rise has brought a significant learning challenge known as concept drift, primarily due to the increasing use of spatial and temporal data streaming applications. Concept drift can have highly negative consequences, leading to the degradation of the models used in these applications. A new model called BOASWIN-XGBoost (Bayesian-Optimized Adaptive Sliding Window and XGBoost) is introduced in this work to handle concept drift. The model is designed explicitly for classifying streaming data and comprises three main procedures: pre-processing, concept drift detection, and classification. The BOASWIN-XGBoost model uses a method called Bayesian-Optimized Adaptive Sliding Window (BOASWIN) to identify the presence of concept drift in the streaming data, and employs an optimized XGBoost (eXtreme Gradient Boosting) model for classification. The hyperparameter tuning approach known as BO-TPE (Bayesian Optimization with Tree-structured Parzen Estimator) is employed to fine-tune the XGBoost model's parameters, enhancing the classifier's performance. Seven streaming datasets were used to evaluate the proposed approach: Agrawal_a, Agrawal_g, SEA_a, SEA_g, Hyperplane, Phishing, and Weather. The simulation results show that the model achieves accuracy values of 70.83%, 71.02%, 76.76%, 76.96%, 84.26%, 95.53%, and 78.35% on the corresponding datasets, affirming its strong performance in handling concept drift and classifying streaming data.
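Sliding-window drift detection in general compares the error distribution of recent samples against older ones. The function below is a simplified stand-in for BOASWIN (the Bayesian optimization of window parameters, and the XGBoost classifier with BO-TPE tuning, are omitted): it flags drift when the mean error in the newer half of the window exceeds the older half by a threshold, with `window` and `threshold` as illustrative parameters.

```python
from statistics import mean

def detect_drift(errors, window=30, threshold=0.15):
    """Flag concept drift when the mean error of the recent half of the
    sliding window exceeds the older half by more than `threshold`."""
    recent = errors[-window:]
    half = len(recent) // 2
    old, new = recent[:half], recent[half:]
    return mean(new) - mean(old) > threshold
```

On detection, an adaptive scheme would typically shrink the window and retrain the classifier on post-drift data only.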
How to manage massive spatiotemporal dataset from stationary and non-stationary sensors in commercial DBMS?
The growing diffusion of the latest information and communication technologies in different contexts has allowed the constitution of enormous sensing networks that form the underlying texture of smart environments. The amount of data these environments produce and consume, and the speed at which they do so, are starting to challenge current spatial data management technologies. In this work, we report on our experience handling real-world spatiotemporal datasets: a stationary dataset from a parking monitoring system and a non-stationary dataset from a train-mounted railway monitoring system. In particular, we present the results of an empirical comparison of the retrieval performance achieved by three different off-the-shelf settings for managing spatiotemporal data: the well-established combination of PostgreSQL + PostGIS with standard indexing, a clustered version of the same setup, and a combination of the basic setup with Timescale, a storage extension specialized in handling temporal data. Since the non-stationary dataset put considerable pressure on the configurations above, we further investigated the advantages achievable by combining the TSMS setup with state-of-the-art indexing techniques. Results showed that standard indexing is by far outperformed by the other solutions, which have different trade-offs. This experience may help researchers and practitioners facing similar problems in managing these types of data.
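The kind of workload being benchmarked, range queries over (sensor, time) data, can be illustrated with stdlib sqlite3 rather than the PostgreSQL + PostGIS / Timescale stack the study actually compares. The snippet below is only an illustration of the indexing principle, with a hypothetical `readings` schema; it builds a composite index on `(sensor_id, ts)` so the typical one-sensor time-window query becomes an index range scan.

```python
import sqlite3

# Hypothetical schema standing in for a spatiotemporal readings table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id INT, ts INT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                 [(s, t, float(s * t)) for s in range(5) for t in range(1000)])

# Composite index matching the query's filter order (sensor first, then time).
conn.execute("CREATE INDEX idx_sensor_ts ON readings (sensor_id, ts)")

# A typical window query: one sensor, one time range.
rows = conn.execute(
    "SELECT COUNT(*) FROM readings "
    "WHERE sensor_id = 2 AND ts BETWEEN 100 AND 199"
).fetchone()[0]
```

In the PostGIS setting, spatial predicates would additionally use a GiST index on the geometry column, which is the "standard indexing" baseline the paper starts from.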
MTLPM: a long-term fine-grained PM2.5 prediction method based on spatio-temporal graph neural network
The concentration of PM2.5 is one of the air quality indicators to which the public pays the most attention. Existing methods for PM2.5 prediction primarily analyze and forecast data from individual monitoring stations, without considering the mutual influence among multiple stations caused by natural environmental factors, e.g., air circulation. Moreover, the existing methods mostly make short-term predictions and perform poorly in long-term forecasting. In this paper, we propose MTLPM, a spatio-temporal graph neural network model based on an encoder-decoder architecture, which fully exploits spatial dynamic patterns and long-term dependencies. Firstly, we adopt a message passing mechanism combined with spatial features and complex environmental factors (e.g., temperature, humidity, and wind direction) to update station data, capturing real-time spatial dynamic information. Secondly, we adopt Multi-head ProbSparse Self-attention to extract temporal features, learning the long-term dependency relationships among the data. Finally, we adopt a generative one-step decoder structure to simultaneously forecast the data for multiple stations over a long period. We conducted experiments on both the project dataset and a publicly available dataset. Compared with existing state-of-the-art methods, MTLPM achieved an average reduction of approximately 1.6 in mean absolute error (MAE) and approximately 0.02 in symmetric mean absolute percentage error (SMAPE). The relevant source code is publicly available on GitHub.
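One round of station-to-station message passing can be sketched numerically. The snippet below is illustrative only, not the paper's learned MTLPM update: the hypothetical edge weights stand in for the spatial and wind-direction factors, each station aggregates a weighted mean of its neighbours' PM2.5 values, and a residual-style blend updates its state.

```python
import numpy as np

# Hypothetical edge weights between three monitoring stations, standing in
# for distance/wind factors (row i: weights of station i's neighbours).
adj = np.array([[0.0, 0.6, 0.4],
                [0.6, 0.0, 0.4],
                [0.4, 0.4, 0.0]])
x = np.array([80.0, 40.0, 60.0])    # current PM2.5 reading per station

msgs = adj @ x / adj.sum(axis=1)    # weighted mean of neighbour messages
x_new = 0.5 * x + 0.5 * msgs        # residual-style state update
```

In MTLPM this update is learned rather than fixed, and is followed by the attention-based temporal encoder described above.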
An energy-efficient hierarchical data fusion approach in IoT
Data Fusion (DF) involves merging data from various heterogeneous sources to generate fused data that is reduced in volume while preserving its integrity, consistency, and veracity. However, DF methodologies often pose challenges for low-computational-power sensor nodes (SNs) in energy-constrained Wireless Sensor Network (WSN)-enabled Internet of Things (IoT). This study introduces a hierarchical data fusion (HDF) technique specifically designed to distribute the computational load among SNs, with a focus on addressing the challenges of spatiotemporal data (STD). The hierarchy consists of three levels: a spatiotemporal data fusion (STDF) method, employed at the SN level, that efficiently handles the complex relationships between STD attributes; a fuzzy data fusion method, implemented at the cluster head (CH) level, that effectively addresses the imprecise and fuzzy nature of real-world data; and a final fusion, applied at the sink (SKN) level, based on the count of encoded icon values (EIVs). The proposed method achieves high accuracy (ACC), low error rates (ERR), and improved precision (PRE), recall (REC), and F1-score (F1S) values compared with state-of-the-art methods. Moreover, the analysis of the proposed technique reveals reduced computational complexity, since the computational load is distributed across the different levels of the hierarchy. Additionally, the proposed HDF technique exhibits lower energy consumption and reduced communication overhead, making it well suited for implementation in WSN-enabled IoT.
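The three-level structure can be caricatured in a few lines. The functions below are illustrative stand-ins, not the paper's algorithms: the SN level collapses a temporal window to a summary, the CH level maps it to a coarse icon code via a crude fuzzy-style threshold rule (the cut-offs `low`/`high` are invented), and the sink level fuses icon codes by majority count.

```python
from statistics import mean

def sn_fuse(readings):
    """Sensor-node level: collapse a temporal window to its mean,
    shrinking the volume sent upstream."""
    return mean(readings)

def ch_fuse(value, low=20.0, high=30.0):
    """Cluster-head level: map a fused value to an icon code, a crude
    stand-in for fuzzy membership (thresholds are hypothetical)."""
    if value < low:
        return "LOW"
    return "HIGH" if value > high else "MID"

def sink_fuse(icons):
    """Sink level: fuse by the count of encoded icon values
    (majority vote)."""
    return max(set(icons), key=icons.count)
```

Each stage transmits a smaller payload than it receives, which is where the energy and communication-overhead savings come from.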
An Improved Spatiotemporal Data Fusion Method Using Surface Heterogeneity Information Based on ESTARFM
The spatiotemporal data fusion method, as an effective data interpolation method, has received extensive attention in remote sensing (RS) academia. The enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM) is one of the most famous spatiotemporal data fusion methods and is widely used to generate synthetic data. However, the ESTARFM algorithm uses moving windows of a fixed size to gather information around the central pixel, which hampers the efficiency and precision of spatiotemporal data fusion. In this paper, a modified ESTARFM data fusion algorithm that integrates surface spatial information via a statistical method was developed. In the modified algorithm, the local variance of the pixels around the central one is used as an index to adaptively determine the window size. Satellite images for two regions were predicted using both ESTARFM and the modified algorithm. Results showed that the images predicted by the modified algorithm captured more detail than those of ESTARFM: the proportion of pixels for which the absolute difference between the mean reflectance of the six bands in the observed and predicted images fell between 0 and 0.04 was 78% for ESTARFM and 85% for the modified algorithm. In addition, the efficiency of the modified algorithm improved, and a verification test showed its robustness. These promising results demonstrate the superiority of the modified algorithm for producing synthetic images compared with ESTARFM. Our research enriches the spatiotemporal data fusion methodology, and the automatic moving-window selection strategy lays the foundation for automatic, large-scale processing of spatiotemporal data fusion.
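The variance-driven window selection can be sketched directly. The function below is an illustrative reading of the paper's idea, not its code: starting from a small window, it grows the window around the central pixel until the local variance signals enough heterogeneity, so homogeneous areas get large windows and heterogeneous areas small ones. All thresholds are invented for the example.

```python
import numpy as np

def adaptive_window(image, row, col, base=3, max_win=9, var_thresh=25.0):
    """Grow the moving window around (row, col) until the local variance
    exceeds var_thresh, sizing the search window by surface
    heterogeneity (hypothetical thresholds)."""
    win = base
    while win < max_win:
        half = win // 2
        patch = image[max(row - half, 0):row + half + 1,
                      max(col - half, 0):col + half + 1]
        if patch.var() > var_thresh:   # heterogeneous enough: stop growing
            break
        win += 2                       # stay odd so the pixel remains central
    return win
```

A fixed-window ESTARFM would use `base` everywhere; the adaptive rule spends large windows only where the surface is smooth enough to justify them.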