25 result(s) for "Wu, Penghai"
Reconstructing Geostationary Satellite Land Surface Temperature Imagery Based on a Multiscale Feature Connected Convolutional Neural Network
Geostationary satellite land surface temperature (GLST) data are important for a variety of dynamic environmental and natural resource applications in terrestrial ecosystems. Because of clouds, shadows, and other atmospheric conditions, the derived LSTs often contain large numbers of missing values, and reconstructing these values is essential for improving the usability of geostationary satellite LST data. However, current reconstruction methods are mainly designed to fill a small number of invalid pixels from the many surrounding valid pixels, which supply useful land surface temperature values. When the extent of missing data becomes large, reconstruction quality degrades, because the relationships between different spatiotemporal geostationary satellite LSTs are complex and highly nonlinear. Inspired by the strength of deep convolutional neural networks (CNNs) in solving highly nonlinear and dynamic problems, a multiscale feature connection CNN model is proposed to fill missing LSTs over large missing regions. The proposed method was tested on both FengYun-2G and Meteosat Second Generation-Spinning Enhanced Visible and InfraRed Imager geostationary satellite LST datasets. The results of simulated and actual experiments show that the proposed method is accurate to within about 1 °C even at missing-data rates of 70%, making it feasible and effective for LST reconstruction over large regions.
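The paper's multiscale CNN is beyond a short sketch, but the gap-filling problem it addresses can be illustrated with a naive neighbour-averaging baseline (a hypothetical simplification for illustration, not the proposed method):

```python
# Naive gap-filling baseline for an LST grid (NOT the paper's CNN):
# iteratively replace each missing pixel (None) with the mean of its valid
# 4-neighbours until every gap is filled. Illustrates the problem setup only.

def fill_gaps(grid):
    """grid: 2-D list of floats; None entries are reconstructed."""
    h, w = len(grid), len(grid[0])
    grid = [row[:] for row in grid]          # work on a copy
    while any(v is None for row in grid for v in row):
        progress = False
        for i in range(h):
            for j in range(w):
                if grid[i][j] is None:
                    neigh = [grid[x][y]
                             for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                             if 0 <= x < h and 0 <= y < w and grid[x][y] is not None]
                    if neigh:
                        grid[i][j] = sum(neigh) / len(neigh)
                        progress = True
        if not progress:                      # grid with no valid pixels: give up
            break
    return grid

lst = [[300.0, 301.0, 302.0],
       [299.0, None,  303.0],
       [298.0, 300.0, 304.0]]
filled = fill_gaps(lst)
```

A CNN-based method replaces the hand-written averaging rule with learned spatiotemporal features, which is what allows it to cope with the large contiguous gaps described above.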
Building Extraction in Very High Resolution Imagery by Dense-Attention Networks
Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult to utilize features at different levels rationally with present deep learning networks. To tackle this problem, a novel network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), was proposed. The DAN contains an encoder part and a decoder part, composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder-decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red-green-blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), less training and response time, and higher-quality results than other deep learning methods.
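The core fusion idea, high-level features gating low-level ones, can be sketched as a sigmoid spatial gate (a hypothetical simplification of the DAN module, not its published implementation):

```python
import math

# Sketch of spatial-attention fusion: a sigmoid gate computed from the
# high-level feature map re-weights the low-level map, suppressing
# low-level responses (and noise) wherever the high-level evidence is weak.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_fuse(low, high):
    """low, high: 2-D lists of equal shape; returns the gated low-level map."""
    gate = [[sigmoid(v) for v in row] for row in high]
    return [[g * l for g, l in zip(grow, lrow)]
            for grow, lrow in zip(gate, low)]

low  = [[1.0, 2.0], [3.0, 4.0]]
high = [[10.0, -10.0], [0.0, 10.0]]   # strong / weak / neutral responses
fused = attention_fuse(low, high)
```

Where the high-level response is strongly negative, the low-level value is driven toward zero; a neutral response passes half the low-level value through.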
Multitemporal SAR Image Despeckling Based on a Scattering Covariance Matrix of Image Patch
This paper presents a despeckling method for multitemporal images acquired by synthetic aperture radar (SAR) sensors. The proposed method uses a scattering covariance matrix of each image patch as the basic processing unit, which exploits both the amplitude information of each pixel and the phase difference between any two pixels in a patch. The proposed filtering framework consists of four main steps: (1) a prefiltered result for each image is obtained by a nonlocal weighted average using only the information of the corresponding time phase; (2) an adaptive temporal linear filter is employed to further suppress the speckle; (3) the final output of each patch is obtained by a guided filter using both the original speckled data and the filtering result of step (2); and (4) an aggregation step is used to resolve the multiple estimates obtained for each pixel. Despeckling experiments conducted on both simulated and real multitemporal SAR datasets demonstrate the good performance of the proposed method in both suppressing speckle and retaining details, compared with advanced single-temporal and multitemporal SAR despeckling techniques.
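The temporal step (2) can be sketched as a per-pixel blend between each date and the temporal mean; the fixed weight `alpha` here is a hypothetical constant, whereas the paper adapts the filter per pixel:

```python
# Simplified stand-in for the temporal linear filter of the despeckling
# chain: each date's pre-filtered pixel is blended with the temporal mean
# over all dates, trading per-date detail against speckle suppression.

def temporal_filter(stack, alpha=0.4):
    """stack: list of T images (2-D lists); returns T filtered images."""
    T = len(stack)
    h, w = len(stack[0]), len(stack[0][0])
    mean = [[sum(stack[t][i][j] for t in range(T)) / T for j in range(w)]
            for i in range(h)]
    return [[[alpha * stack[t][i][j] + (1 - alpha) * mean[i][j]
              for j in range(w)] for i in range(h)] for t in range(T)]

stack = [[[10.0]], [[20.0]], [[30.0]]]   # three dates, one pixel
out = temporal_filter(stack, alpha=0.5)
```

With `alpha = 1` the filter leaves each date untouched; with `alpha = 0` it collapses all dates to the temporal mean, which is why an adaptive weight matters when the scene changes between acquisitions.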
Land Use Classification of the Deep Convolutional Neural Network Method Reducing the Loss of Spatial Features
Land use classification is a fundamental task in information extraction from remote sensing imagery. Semantic segmentation based on deep convolutional neural networks (DCNNs) has shown outstanding performance in this task. However, these methods are still affected by the loss of spatial features. In this study, we proposed a new network, called the dense-coordconv network (DCCN), to reduce the loss of spatial features and strengthen object boundaries. In this network, the coordconv module is introduced into an improved DenseNet architecture to preserve spatial information by putting coordinate information into the feature maps. The proposed DCCN achieved strong performance on the public ISPRS (International Society for Photogrammetry and Remote Sensing) 2D semantic labeling benchmark dataset. Compared with the results of other deep convolutional neural networks (U-net, SegNet, Deeplab-V3), the results of the DCCN method improved substantially, with the OA (overall accuracy) and mean F1 score reaching 89.48% and 86.89%, respectively. This indicates that the DCCN method can effectively reduce the loss of spatial features and improve the accuracy of semantic segmentation in high resolution remote sensing imagery.
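The coordconv idea itself is simple enough to sketch: two extra channels of normalised coordinates are appended to a feature map, so later convolutions can see absolute position. This is a minimal sketch of that general idea, not the DCCN code:

```python
# Minimal coordconv sketch: append normalised y/x coordinate channels to a
# channels-first [C][H][W] feature map so that subsequent convolutions can
# exploit absolute position, not just local texture.

def add_coord_channels(features):
    h, w = len(features[0]), len(features[0][0])
    ys = [[i / (h - 1) if h > 1 else 0.0 for j in range(w)] for i in range(h)]
    xs = [[j / (w - 1) if w > 1 else 0.0 for j in range(w)] for i in range(h)]
    return features + [ys, xs]

fmap = [[[1.0, 2.0], [3.0, 4.0]]]        # one channel, 2x2
out = add_coord_channels(fmap)           # now three channels
```

Because the coordinate channels are deterministic, they add position awareness at almost no parameter cost, which is what lets DCCN sharpen object boundaries.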
Remote Sensing Crop Recognition by Coupling Phenological Features and Off-Center Bayesian Deep Learning
Obtaining accurate and timely crop area information is crucial for crop yield estimates and food security. Because most existing crop mapping models based on remote sensing data have poor generalizability, they cannot be rapidly deployed for crop identification tasks in different regions. Based on a priori knowledge of phenology, we designed an off-center Bayesian deep learning remote sensing crop classification method that highlights phenological features, combined with an attention mechanism and residual connectivity. In this paper, we first optimize the input image and input features based on a phenology analysis. A convolutional neural network (CNN), a recurrent neural network (RNN), and a random forest classifier (RFC) were then built based on farm data in northeastern Inner Mongolia and used for comparison with the proposed method. Next, classification tests were performed on soybean, maize, and rice from four measurement areas in northeastern China to verify the accuracy of the above methods. To further explore the reliability of the proposed method, an uncertainty analysis was conducted through Bayesian deep learning to analyze the model's learning process and structure for interpretability. Finally, statistical data collected over many years in Suibin County, Heilongjiang Province, and in Shandong Province in 2020 were used as reference data to verify the applicability of the methods. The experimental results show that the overall classification accuracy of the three crops reached 90.73%, and the average F1 and IOU were 89.57% and 81.48%, respectively. Furthermore, the proposed method can be directly applied to crop area estimation in different years and regions, given its good correlation with official statistics.
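The uncertainty analysis mentioned above rests on a general Bayesian idea: sample the model several times and report the spread of its predictions. The toy Monte-Carlo stand-in below illustrates that idea only; the stochastic "model" and its noise level are entirely hypothetical, not the paper's network:

```python
import random
import statistics

# Toy Monte-Carlo stand-in for Bayesian deep learning uncertainty: draw
# repeated stochastic predictions and report their mean and spread. A high
# standard deviation flags pixels whose classification is unreliable.

def predict_with_uncertainty(forward, n_samples=100, seed=0):
    rng = random.Random(seed)                 # seeded for reproducibility
    samples = [forward(rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical stochastic "model": crop probability 0.8 plus sampling noise.
mean, std = predict_with_uncertainty(lambda rng: 0.8 + rng.gauss(0, 0.05))
```

In a real Bayesian network the stochasticity comes from weight distributions (or dropout at inference time) rather than injected noise, but the interpretation of the spread is the same.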
A No-Reference Edge-Preservation Assessment Index for SAR Image Filters under a Bayesian Framework Based on the Ratio Gradient
Denoising is an essential preprocessing step for most applications using synthetic aperture radar (SAR) images at different processing levels. Besides suppressing the noise, a good filter should also effectively preserve the image edge information. To quantitatively assess the edge-preservation performance of SAR filters, a number of indices have been investigated in the literature; however, most of them do not fully exploit the statistical traits of the SAR image. In this paper, we review some of the typical edge-preservation assessment indices and then propose a new reference-free index. The ratio gradient is utilized to characterize the difference between two non-overlapping neighborhoods on opposite sides of each pixel in both the speckled and despeckled images. Based on these gradients and the statistical traits of the speckle, the proposed indicator is derived under a Bayesian framework. A series of experiments conducted with both simulated and real SAR datasets reveals that the proposed index performs well in terms of both robustness and consistency. For reproducibility, the source code of the index and the testing datasets are provided.
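Assuming the standard ratio-edge definition used for SAR (the abstract does not spell out its exact formula), the ratio gradient at a pixel can be sketched as the folded ratio of the mean intensities of the two side windows:

```python
# Sketch of a ratio gradient at one pixel: the ratio of mean intensities of
# two non-overlapping windows on opposite sides of the pixel, folded into
# (0, 1] so that 1 means "no edge" regardless of which side is brighter.
# Ratios, unlike differences, behave consistently under multiplicative speckle.

def ratio_gradient(left_window, right_window):
    m1 = sum(left_window) / len(left_window)
    m2 = sum(right_window) / len(right_window)
    return min(m1 / m2, m2 / m1)

flat = ratio_gradient([10.0, 11.0, 9.0], [10.0, 9.0, 11.0])    # same mean
edge = ratio_gradient([10.0, 10.0, 10.0], [40.0, 40.0, 40.0])  # strong edge
```

Comparing these gradients between the speckled and despeckled images, pixel by pixel, is what lets the index score edge preservation without a clean reference image.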
Multiple Classifiers Based Semi-Supervised Polarimetric SAR Image Classification Method
Polarimetric synthetic aperture radar (PolSAR) image classification has played an important role in PolSAR data applications. Deep learning has achieved great success in PolSAR image classification over the past years. However, when the labeled training dataset is insufficient, the classification results are usually unsatisfactory. Furthermore, the deep learning approach is based on hierarchical features, which cannot take full advantage of the scattering characteristics in PolSAR data. Hence, it is worthwhile to make full use of the scattering characteristics to obtain a high classification accuracy from limited labeled samples. In this paper, we propose a novel semi-supervised classification method for PolSAR images that combines the deep learning technique with traditional scattering-trait-based classifiers. First, based on only a small number of training samples, the classification results of the Wishart classifier, a support vector machine (SVM) classifier, and a complex-valued convolutional neural network (CV-CNN) are used in majority voting, generating a strong dataset and a weak dataset. The strong dataset is then used to provide pseudo-labels for reclassifying the weak dataset with the CV-CNN. The final classification results are obtained by combining the strong dataset and the reclassification results. Experiments on two real PolSAR images of agricultural and forest areas indicate that, in most cases, significant improvements of approximately 3-5% can be achieved with the proposed method compared to the base classifiers. When the number of labeled samples is small, the superiority of the proposed method is even more apparent. The improvement for built-up areas and infrastructure objects is not as significant as that for forests.
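The majority-voting split into strong and weak sets can be sketched directly; the per-pixel label lists below stand in for the outputs of the three base classifiers:

```python
from collections import Counter

# Sketch of the strong/weak split via majority voting: pixels where at least
# two of the three base classifiers (Wishart, SVM, CV-CNN) agree join the
# "strong" set with the agreed label as a pseudo-label; the remaining pixels
# form the "weak" set, to be reclassified by the CV-CNN later.

def split_by_vote(wishart, svm, cvcnn):
    strong, weak = {}, []
    for idx, votes in enumerate(zip(wishart, svm, cvcnn)):
        label, count = Counter(votes).most_common(1)[0]
        if count >= 2:
            strong[idx] = label      # majority: trusted pseudo-label
        else:
            weak.append(idx)         # three-way disagreement: defer
    return strong, weak

strong, weak = split_by_vote(['a', 'b', 'c'],
                             ['a', 'b', 'a'],
                             ['a', 'c', 'b'])
```

Unanimous and two-of-three pixels become pseudo-labels, while three-way disagreements are deferred, which is exactly how the method stretches a small labeled set into a large training set.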
A Land Cover Classification Method for High-Resolution Remote Sensing Images Based on NDVI Deep Learning Fusion Network
High-resolution remote sensing (HRRS) images have few spectral bands, low interclass separability, and large intraclass differences, so land cover classification (LCC) of HRRS images that relies only on spectral information suffers from problems such as the misclassification of small objects and unclear boundaries. Here, we propose a deep learning fusion network that effectively utilizes NDVI, called the Dense-Spectral-Location-NDVI network (DSLN). In DSLN, we first extract spatial location information from the NDVI data together with the remote sensing image data to enhance the boundary information. Then, the spectral features are put into an encoding-decoding structure to abstract depth features and restore the spatial information. An NDVI fusion module is used to fuse the NDVI information and the depth features to improve the separability of land cover information. Experiments on the GF-1 dataset show that the mean OA (mOA) and the mean Kappa coefficient (mKappa) of the DSLN network model reach 0.8069 and 0.7161, respectively, indicating good applicability across temporal and spatial distributions. The forest area produced by the DSLN model for Xuancheng is consistent with the forest area released by the Xuancheng Forestry Bureau. In conclusion, the DSLN network model is effective in practice and can provide more accurate land cover data for regional ESV analysis.
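The NDVI input that DSLN fuses is the standard normalized difference vegetation index, computed from the red and near-infrared reflectance bands:

```python
# NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1]. Dense vegetation
# reflects strongly in the near-infrared and absorbs red light, pushing
# NDVI toward 1; water and bare surfaces sit near or below zero.

def ndvi(nir, red, eps=1e-12):
    # eps guards against division by zero on dark pixels
    return (nir - red) / (nir + red + eps)

veg   = ndvi(0.50, 0.10)   # healthy vegetation: high NIR, low red
water = ndvi(0.02, 0.10)   # water: NDVI below zero
```

Because NDVI compresses two bands into one physically meaningful channel, fusing it with learned depth features adds vegetation discriminability that raw spectra alone lack.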
Evaluation of Seven Atmospheric Profiles from Reanalysis and Satellite-Derived Products: Implication for Single-Channel Land Surface Temperature Retrieval
Land surface temperature (LST) is vital for studies of hydrology, ecology, climatology, and environmental monitoring. The radiative-transfer-equation-based single-channel algorithm, in conjunction with an atmospheric profile, is regarded as the most suitable approach for producing long-term time series LST products from Landsat thermal infrared (TIR) data. In this study, the performances of seven atmospheric profiles from different sources (the MODerate-resolution Imaging Spectroradiometer atmospheric profile product (MYD07), the Atmospheric Infrared Sounder atmospheric profile product (AIRS), the European Centre for Medium-range Weather Forecasts (ECMWF), the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA2), the National Centers for Environmental Prediction (NCEP)/Global Forecasting System (GFS), NCEP/Final Operational Global Analysis (FNL), and NCEP/Department of Energy (DOE)) were comprehensively evaluated in the single-channel algorithm for LST retrieval from Landsat 8 TIR data. The results showed that, when compared with the radiosonde profiles downloaded from the University of Wyoming (UWYO), the worst accuracies of the atmospheric parameters were obtained with the MYD07 profile. Furthermore, the root-mean-square error (RMSE) values of the retrieved LST were approximately 0.5 K when the ECMWF, MERRA2, NCEP/GFS, and NCEP/FNL profiles were used, smaller than the values (greater than 0.8 K) obtained with the MYD07, AIRS, and NCEP/DOE profiles. Compared with the in situ LST measurements collected at the Hailar, Urad Front Banner, and Wuhai sites, the RMSE values of the LST retrieved using the ECMWF, MERRA2, NCEP/GFS, and NCEP/FNL profiles were approximately 1.0 K. The largest discrepancy between the retrieved and in situ LST was obtained for the NCEP/DOE profile, with an RMSE value of approximately 1.5 K. These results reveal that the ECMWF, MERRA2, NCEP/GFS, and NCEP/FNL profiles have great potential for performing accurate atmospheric correction and generating long-term time series LST products from Landsat TIR data with a single-channel algorithm.
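A minimal sketch of the single-channel radiative transfer inversion described above, assuming the usual form of the radiative transfer equation, L_toa = tau * (eps * B(T) + (1 - eps) * L_down) + L_up, and the published Landsat 8 TIRS band 10 calibration constants (the specific atmospheric values below are made up for the round-trip demonstration):

```python
import math

# Single-channel LST retrieval sketch: correct the top-of-atmosphere radiance
# for transmittance (tau), upwelling (l_up) and downwelling (l_down) radiance
# from the atmospheric profile, then invert Planck's law for temperature.

K1, K2 = 774.8853, 1321.0789   # Landsat 8 TIRS band 10 thermal constants

def planck_radiance(t):
    """Band-effective Planck radiance at temperature t (Kelvin)."""
    return K1 / (math.exp(K2 / t) - 1.0)

def retrieve_lst(l_toa, tau, l_up, l_down, eps):
    # Solve the RTE for the surface-emitted radiance B(T)...
    b = (l_toa - l_up - tau * (1.0 - eps) * l_down) / (tau * eps)
    # ...then invert Planck's law.
    return K2 / math.log(K1 / b + 1.0)

# Round trip: simulate the sensor radiance for a known LST, then invert it.
lst_true = 300.0
tau, l_up, l_down, eps = 0.85, 1.2, 2.0, 0.98   # illustrative values only
l_toa = tau * (eps * planck_radiance(lst_true) + (1 - eps) * l_down) + l_up
lst_hat = retrieve_lst(l_toa, tau, l_up, l_down, eps)
```

The profile comparison in the study comes down to how accurately tau, l_up, and l_down can be computed from each atmospheric profile; errors in those three terms propagate directly into the retrieved temperature.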
Physical-Based Spatial-Spectral Deep Fusion Network for Chlorophyll-a Estimation Using MODIS and Sentinel-2 MSI Data
Satellite-derived chlorophyll-a (Chl-a) is an important indicator for monitoring water environments. However, the available satellite images have either a coarse spatial resolution or a low spectral resolution, which restricts the applicability of Chl-a retrieval in coastal water (e.g., less than 1 km from the shoreline) for large and medium-sized lakes and oceans. Taking Lake Chaohu as the study area, this paper proposes a physical-based spatial-spectral deep fusion network (PSSDFN) for Chl-a retrieval using Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 Multispectral Instrument (MSI) reflectance data. The PSSDFN combines residual connectivity and attention mechanisms to extract effective features, and introduces physical constraints, including spectral response functions and a physical degradation model, to reconcile spatial and spectral information. The fused and MSI data were used as input variables for collaborative retrieval, while only the MSI data were used as input variables for MSI retrieval. Combined with Chl-a field data, a comparison between MSI and collaborative retrieval was conducted using four machine learning models. The results showed that collaborative retrieval greatly improves the accuracy compared with MSI retrieval. This research illustrates that the PSSDFN can improve the accuracy of Chl-a estimates for coastal water (less than 1 km from the shoreline) in large and medium-sized lakes and oceans.
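The spectral-response-function constraint mentioned above rests on a simple relationship: a broad band can be simulated from narrower bands as a response-weighted average. The sketch below illustrates that relationship only; the band values and weights are hypothetical, not the MODIS or MSI response functions:

```python
# Sketch of the spectral-response-function constraint: a broad (MODIS-like)
# band is approximated from narrow (MSI-like) bands as an SRF-weighted
# average, giving the fusion network a physical consistency check.

def simulate_broad_band(narrow_reflectances, srf_weights):
    """Response-weighted average of narrow-band reflectances."""
    total = sum(srf_weights)
    return sum(r * w for r, w in zip(narrow_reflectances, srf_weights)) / total

msi = [0.10, 0.20, 0.40]          # three hypothetical narrow-band values
srf = [0.25, 0.50, 0.25]          # hypothetical broad-band response weights
broad = simulate_broad_band(msi, srf)
```

During fusion, the network's high-resolution output can be degraded through such weights and compared against the real coarse-band observation, anchoring the learned fusion to physics.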