24 result(s) for "optical synthetic aperture radar integration"
Integration of Satellite-Based Optical and Synthetic Aperture Radar Imagery to Estimate Winter Cover Crop Performance in Cereal Grasses
The magnitude of ecosystem services provided by winter cover crops is linked to their performance (i.e., biomass and associated nitrogen content, forage quality, and fractional ground cover), although few studies quantify these characteristics across the landscape. Remote sensing can produce landscape-level assessments of cover crop performance. However, commonly employed optical vegetation indices (VIs) saturate, limiting their ability to measure high-biomass cover crops. Contemporary VIs that employ red-edge bands have been shown to be more robust to saturation issues. Additionally, synthetic aperture radar (SAR) data have been effective at estimating crop biophysical characteristics, although this has not been demonstrated on winter cover crops. We assessed the integration of optical (Sentinel-2) and SAR (Sentinel-1) imagery to estimate winter cover crop biomass across 27 fields over three winter–spring seasons (2018–2021) in Maryland. We used log-linear models to predict cover crop biomass as a function of 27 VIs and eight SAR metrics. Our results suggest that the integration of the normalized difference red-edge vegetation index (NDVI_RE1; employing Sentinel-2 bands 5 and 8A), combined with SAR interferometric (InSAR) coherence, best estimated the biomass of cereal grass cover crops. However, these results were season- and species-specific (R² = 0.74, 0.81, and 0.34; RMSE = 1227, 793, and 776 kg ha⁻¹ for wheat (Triticum aestivum L.), triticale (Triticale hexaploide L.), and cereal rye (Secale cereale), respectively, in spring (March–May)). Compared to the optical-only model, InSAR coherence improved biomass estimations by 4% in wheat, 5% in triticale, and 11% in cereal rye. Both optical-only and optical-SAR biomass prediction models exhibited saturation occurring at ~1900 kg ha⁻¹; thus, more work is needed to enable accurate biomass estimations past the point of saturation. To address this continued concern, future work could consider the use of weather and climate variables, machine learning models, the integration of proximal sensing and satellite observations, and/or the integration of process-based crop-soil simulation models and remote sensing observations.
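
As a concrete illustration of the modelling approach this abstract describes, the sketch below computes a red-edge index from Sentinel-2 bands 8A and 5 and fits a log-linear biomass model that also uses InSAR coherence. The data arrays, coefficients, and the helper name ndvi_re1 are placeholders invented for the example, not the authors' data or code.

```python
# Minimal sketch, assuming synthetic per-field observations: ln(biomass) is regressed
# on NDVI_RE1 (Sentinel-2 bands 8A and 5) and InSAR coherence, then back-transformed.
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi_re1(b8a, b5):
    """Normalized difference red-edge index from Sentinel-2 bands 8A and 5."""
    return (b8a - b5) / (b8a + b5)

# Hypothetical per-field values (reflectance, coherence, biomass in kg/ha).
b8a = np.array([0.42, 0.48, 0.55, 0.60, 0.63])
b5 = np.array([0.20, 0.21, 0.22, 0.22, 0.23])
coherence = np.array([0.65, 0.55, 0.45, 0.40, 0.35])
biomass = np.array([800.0, 1400.0, 2100.0, 2600.0, 3000.0])

X = np.column_stack([ndvi_re1(b8a, b5), coherence])
y = np.log(biomass)                       # log-linear form: ln(biomass) ~ X

model = LinearRegression().fit(X, y)
predicted = np.exp(model.predict(X))      # back-transform to kg/ha
rmse = np.sqrt(np.mean((predicted - biomass) ** 2))
print(model.coef_, model.intercept_, rmse)
```
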
Improving Crop Classification Accuracy with Integrated Sentinel-1 and Sentinel-2 Data: A Case Study of Barley and Wheat
Crop classification plays a crucial role in ensuring food security, agricultural policy development, and effective land management. Remote sensing data, particularly Sentinel-1 and Sentinel-2 data, have been widely used for crop mapping and classification in cloudy regions due to their high temporal and spatial resolution. This study aimed to enhance the classification accuracy of grain crops, specifically barley and wheat, by integrating Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral instrument (MSI) data. The study utilized two classification models, random forest (RF) and classification and regression trees (CART), to classify the grain crops based on the integrated data. The results showed an overall accuracy (OA) of 93% and a Kappa coefficient (K) of 0.896 for RF, and an OA of 89.15% and K of 0.84 for the CART classifier. The integration of both radar and optical data has the potential to improve the accuracy of crop classification compared to using a single-sensor classification technique. The significance of this study is that it demonstrates the effectiveness of integrating radar and optical data to improve crop classification accuracy. These findings can be used to support crop management, environmental monitoring, and policy development, particularly in areas with cloud cover or limited optical data. The study’s implications are particularly relevant in the context of global food security, where accurate crop classification is essential for monitoring crop health and yield estimation. In summary, this study provides a useful approach for crop classification using Sentinel-1 and Sentinel-2 data integration, which can be employed to support sustainable agriculture and food security initiatives.
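
For readers who want to see what such an integrated classification looks like in code, the following sketch stacks per-pixel Sentinel-1 backscatter and Sentinel-2 reflectance features, trains a random forest, and reports overall accuracy and the Kappa coefficient. The feature counts, labels, and random data are placeholders, not the study's dataset or exact workflow.

```python
# Sketch: input-level integration of SAR (VV, VH) and optical bands for a two-class
# crop map (0 = barley, 1 = wheat), scored with overall accuracy and Cohen's kappa.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
s1 = rng.normal(size=(n, 2))     # Sentinel-1 VV and VH backscatter (standardized)
s2 = rng.normal(size=(n, 10))    # Sentinel-2 reflectance bands (standardized)
X = np.hstack([s1, s2])          # integrated radar + optical feature vector
y = rng.integers(0, 2, size=n)   # synthetic class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = rf.predict(X_te)
print("OA:", accuracy_score(y_te, y_hat), "Kappa:", cohen_kappa_score(y_te, y_hat))
```
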
Exploiting Time Series of Sentinel-1 and Sentinel-2 Imagery to Detect Meadow Phenology in Mountain Regions
A synergistic integration of Synthetic Aperture Radar (SAR) and optical time series offers an unprecedented opportunity in vegetation phenology monitoring for mountain agriculture management. In this paper, we performed a correlation analysis of the radar signal with vegetation and soil conditions by using a time series of Sentinel-1 C-band dual-polarized (VV and VH) SAR images acquired in the South Tyrol region (Italy) from October 2014 to September 2016. Together with Sentinel-1 images, we exploited corresponding Sentinel-2 images and ground measurements. Results show that Sentinel-1 cross-polarized VH backscattering coefficients have a strong vegetation contribution and are well correlated with the Normalized Difference Vegetation Index (NDVI) values retrieved from optical sensors, thus allowing the extraction of meadow phenological phases. Particularly for the Start Of Season (SOS) at low altitudes, the mean difference in days between Sentinel-1 and ground sensors is compatible with the acquisition time of the SAR sensor. However, the results show a decrease in accuracy with increasing altitude. The same trend is observed for senescence. The main outcomes of our investigations in terms of inter-satellite comparison show that Sentinel-1 is less effective than Sentinel-2 in detecting the SOS. At the same time, Sentinel-1 is as robust as Sentinel-2 in defining mowing events. Our study shows that SAR-optical data integration is a promising approach for phenology detection in mountain regions.
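
The core of the analysis described here is correlating a SAR backscatter time series with NDVI and reading phenological dates from the curves. The sketch below does this with synthetic sigmoid time series and a simple amplitude-threshold definition of the Start Of Season; the dates, curves, and threshold fraction are placeholders rather than the paper's method.

```python
# Sketch: correlate a Sentinel-1 VH time series with NDVI and pick a Start Of Season
# (SOS) date as the first acquisition exceeding a fraction of the seasonal amplitude.
import numpy as np

doy = np.arange(60, 300, 12)                        # acquisition days of year
ndvi = 0.2 + 0.6 / (1 + np.exp(-(doy - 130) / 15))  # synthetic green-up curve
vh = -18 + 6 / (1 + np.exp(-(doy - 135) / 15))      # synthetic VH backscatter (dB)

r = np.corrcoef(vh, ndvi)[0, 1]                     # VH vs. NDVI correlation

def start_of_season(t, signal, frac=0.25):
    """First date at which the signal exceeds a fraction of its seasonal amplitude."""
    threshold = signal.min() + frac * (signal.max() - signal.min())
    return t[np.argmax(signal >= threshold)]

print("r =", round(r, 2),
      "SOS from NDVI:", start_of_season(doy, ndvi),
      "SOS from VH:", start_of_season(doy, vh))
```
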
Multispectral and Radar Data for the Setting of Gold Mineralization in the South Eastern Desert, Egypt
Satellite-based multi-sensor data coupled with field and microscopic investigations are used to unravel the setting and controls of gold mineralization in the Wadi Beitan–Wadi Rahaba area in the South Eastern Desert of Egypt. The satellite-based multispectral and Synthetic Aperture Radar (SAR) data provided a clearer litho-tectonic understanding and aided in assessing the regional structural control of the scattered gold occurrences in the study area. The approach detailed herein applies band ratioing, principal component and independent component analyses, directional filtering, and automated and semi-automated lineament extraction techniques to Landsat 8 Operational Land Imager (OLI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Phased Array L-band Synthetic Aperture Radar (PALSAR), and Sentinel-1B data. Results of optical and SAR data processed as grayscale raster images of band ratios, Relative Absorption Band Depth (RBD), and (mafic–carbonate–hydrous) mineralogical indices are used to extract the representative pixels (regions of interest). The extracted pixels are then converted to vector shape files and are finally imported into the ArcMap environment. Similarly, manually and automatically extracted lineaments are merged with the band ratios and mineralogical indices vector layers. The data fusion approach used herein reveals no particular spatial association between gold occurrences and certain lithological units, but shows a preferential distribution of gold–quartz veins in zones of chlorite–epidote alteration overlapping with high-density intersections of lineaments. Structural features including en-echelon arrays of quartz veins and intense recrystallization and sub-grain development textures are consistent with vein formation and gold deposition syn-kinematic with the host shear zones. The mineralized central-shear quartz veins and the associated strong stretching lineation affirm vein formation amid stress build-up and stress relaxation of an enduring oblique convergence (assigned as Najd-related sinistral transpression; ~640–610 Ma). As the main outcome of this research, we present a priority map with zones defined as high-potential targets for undiscovered gold resources.
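
Two of the image-enhancement steps named in this abstract, band ratioing and principal component analysis, are easy to sketch on a generic multispectral cube. In the snippet below the image, the band indices, and the ratio shown are all illustrative placeholders, not the bands or thresholds used in the study.

```python
# Sketch: a band ratio and principal components computed from a multispectral cube.
import numpy as np

rng = np.random.default_rng(1)
bands, rows, cols = 7, 100, 100
img = rng.random((bands, rows, cols)).astype(np.float32)   # stand-in multispectral image

ratio = img[5] / (img[6] + 1e-6)       # e.g. a SWIR-style alteration band ratio

# Principal component analysis over the spectral dimension.
flat = img.reshape(bands, -1).T                   # (pixels, bands)
flat = flat - flat.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(flat, rowvar=False))
pcs = (flat @ eigvecs[:, ::-1]).T.reshape(bands, rows, cols)   # PC1 first

print(ratio.shape, pcs.shape)
```
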
Construction of High-Precision and Complete Images of a Subsidence Basin in Sand Dune Mining Areas by InSAR-UAV-LiDAR Heterogeneous Data Integration
Affected by geological factors, the scale of surface deformation in a hilly, semi-desertified mining area varies. Meanwhile, patches of dense vegetation on the ground make it difficult to construct a high-precision and complete image of a subsidence basin using a single monitoring method, and hence the deformation laws cannot be determined and the mining parameters cannot be inverted. Therefore, we first propose conducting collaborative monitoring using InSAR (Interferometric Synthetic Aperture Radar), UAV (unmanned aerial vehicle), and 3D-TLS (three-dimensional terrestrial laser scanning). A complete time series of the surface subsidence basin is constructed by fusing these heterogeneous data. In this paper, SBAS-InSAR (Small Baseline Subset) technology, which reduces temporal and spatial decorrelation, is used to obtain the small-scale deformation of the subsidence basin; oblique photogrammetry and 3D-TLS, with their strong penetrating power, are used to obtain the anomalous and large-scale deformation; and local polynomial interpolation weighted by the heterogeneous data is used to construct a complete and high-precision subsidence basin. Compared with GNSS (Global Navigation Satellite System) monitoring data, mean square errors of 1.442 m, 0.090 m, and 0.072 m are obtained. The root mean square error of the high-precision image of the subsidence basin is 0.040 m, accounting for 1.4% of the maximum subsidence value. The high-precision image of the complete subsidence basin can provide reliable support for the study of surface subsidence laws and mining parameter inversion.
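
The fusion step this abstract relies on, combining co-located measurements of different accuracy into one subsidence surface, can be illustrated with a simple inverse-variance weighting and an RMSE check against GNSS. This is a simplified stand-in for the weighted local polynomial interpolation the paper describes; all values and sensor accuracies below are invented placeholders.

```python
# Sketch: precision-weighted fusion of co-located InSAR, UAV, and TLS subsidence
# estimates, followed by an RMSE comparison against GNSS reference points.
import numpy as np

insar = np.array([0.05, 0.12, 0.30, 0.80])   # subsidence (m) from InSAR
uav = np.array([0.06, 0.15, 0.33, 0.95])     # subsidence (m) from UAV photogrammetry
tls = np.array([0.05, 0.13, 0.31, 0.90])     # subsidence (m) from laser scanning
sigma = np.array([0.01, 0.05, 0.04])         # assumed 1-sigma accuracy per sensor

observations = np.vstack([insar, uav, tls])
weights = 1.0 / sigma[:, None] ** 2                        # inverse-variance weights
fused = (weights * observations).sum(axis=0) / weights.sum(axis=0)

gnss = np.array([0.05, 0.13, 0.31, 0.92])                  # reference GNSS subsidence
rmse = np.sqrt(np.mean((fused - gnss) ** 2))
print(fused.round(3), "RMSE:", round(rmse, 3))
```
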
Cloud Removal with SAR-Optical Data Fusion and Graph-Based Feature Aggregation Network
In Earth observation, clouds affect the quality and usability of optical remote sensing images in practical applications. Many cloud removal methods have been proposed to solve this issue. Among these methods, synthetic aperture radar (SAR)-based methods have more potential than others because SAR imaging is hardly affected by clouds and can reflect differences and changes in ground information. However, SAR images used as auxiliary information for cloud removal may be blurred and noisy, and traditional cloud removal methods cannot effectively exploit the similar non-local information shared by spectral and electromagnetic features. To overcome these weaknesses, we propose a novel cloud removal method using SAR-optical data fusion and a graph-based feature aggregation network (G-FAN). First, cloudy optical images and contemporary SAR images are concatenated and transformed into hyper-feature maps by pre-convolution. Second, the hyper-feature maps are fed into the G-FAN to reconstruct the missing data of the cloud-covered area by aggregating the electromagnetic backscattering information of the SAR image and the spectral information of neighborhood and non-neighborhood pixels in the optical image. Finally, post-convolution and a long skip connection are adopted to reconstruct the final predicted cloud-free images. Both qualitative and quantitative results from simulated-data and real-data experiments show that our proposed method outperforms traditional deep learning methods for cloud removal.
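
The overall data flow described in this abstract (concatenate cloudy optical and SAR inputs, pre-convolve, aggregate features, post-convolve, and add a long skip connection) can be outlined in a few lines of PyTorch. In the sketch below the graph-based aggregation of G-FAN is replaced by a plain convolutional block as a placeholder, and the band counts and feature width are arbitrary, so this is only a structural illustration, not the published network.

```python
# Sketch (PyTorch): SAR-assisted cloud removal with input-level fusion, a placeholder
# aggregation block, and a long skip connection back to the cloudy optical input.
import torch
import torch.nn as nn

class CloudRemovalSketch(nn.Module):
    def __init__(self, optical_bands=4, sar_bands=2, feats=32):
        super().__init__()
        self.pre = nn.Conv2d(optical_bands + sar_bands, feats, 3, padding=1)
        self.aggregate = nn.Sequential(            # stand-in for graph-based aggregation
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(),
        )
        self.post = nn.Conv2d(feats, optical_bands, 3, padding=1)

    def forward(self, optical_cloudy, sar):
        x = torch.cat([optical_cloudy, sar], dim=1)    # concatenate optical and SAR
        features = self.aggregate(torch.relu(self.pre(x)))
        return self.post(features) + optical_cloudy    # long skip connection

model = CloudRemovalSketch()
cloud_free = model(torch.rand(1, 4, 64, 64), torch.rand(1, 2, 64, 64))
print(cloud_free.shape)   # torch.Size([1, 4, 64, 64])
```
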
Multi-Temporal Sentinel-1 and -2 Data Fusion for Optical Image Simulation
In this paper, we present optical image simulation from synthetic aperture radar (SAR) data using deep learning-based methods. Two models, i.e., optical image simulation directly from SAR data and from multi-temporal SAR-optical data, are proposed to test the possibilities. The deep learning-based methods we chose to implement the models are a convolutional neural network (CNN) with a residual architecture and a conditional generative adversarial network (cGAN). We validate our models using Sentinel-1 and -2 datasets. The experiments demonstrate that the model with multi-temporal SAR-optical data can successfully simulate the optical image, whereas the state-of-the-art model with only SAR data as input failed. The optical image simulation results indicate the potential of SAR-optical information blending for subsequent applications such as large-scale cloud removal and optical data temporal super-resolution. We also investigate the sensitivity of the proposed models to the training samples and outline possible future directions.
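
To make the multi-temporal idea concrete, the sketch below shows a single training step for a toy residual generator that maps an earlier optical acquisition plus SAR at the target date to the target optical image, using only an L1 reconstruction loss. The adversarial (cGAN) term, the real network architecture, and all tensors here are simplifications and random placeholders, not the authors' setup.

```python
# Sketch (PyTorch): one training step of a toy residual generator for optical image
# simulation from multi-temporal SAR-optical input, with an L1 loss only.
import torch
import torch.nn as nn

generator = nn.Sequential(                 # earlier optical (4 bands) + SAR (2 bands)
    nn.Conv2d(4 + 2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 4, 3, padding=1),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

optical_t0 = torch.rand(8, 4, 64, 64)      # earlier optical acquisition
sar_t1 = torch.rand(8, 2, 64, 64)          # SAR image at the target date
optical_t1 = torch.rand(8, 4, 64, 64)      # target optical image (training label)

prediction = generator(torch.cat([optical_t0, sar_t1], dim=1)) + optical_t0  # residual
loss = nn.functional.l1_loss(prediction, optical_t1)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```
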
Optical and SAR Data Fusion Based on Transformer for Rice Identification: A Comparative Analysis from Early to Late Integration
The accurate identification of rice fields through remote sensing is critical for agricultural monitoring and global food security. While optical and Synthetic Aperture Radar (SAR) data offer complementary advantages for crop mapping—spectral richness from optical imagery and all-weather capabilities from SAR—their integration remains challenging due to heterogeneous data characteristics and environmental variability. This study systematically evaluates three Transformer-based fusion strategies for rice identification: Early Fusion Transformer (EFT), Feature Fusion Transformer (FFT), and Decision Fusion Transformer (DFT), designed to integrate optical-SAR data at the input level, feature level, and decision level, respectively. Experiments conducted in Arkansas, USA—a major rice-producing region with complex agroclimatic conditions—demonstrate that EFT achieves superior performance, with an overall accuracy (OA) of 98.33% and rice-specific Intersection over Union (IoU_rice) of 83.47%, surpassing single-modality baselines (optical: IoU_rice = 75.78%; SAR: IoU_rice = 66.81%) and alternative fusion approaches. The model exhibits exceptional robustness in cloud-obstructed regions and diverse field patterns, effectively balancing precision (90.98%) and recall (90.35%). These results highlight the superiority of early-stage fusion in preserving complementary spectral–structural information, while revealing limitations of delayed integration strategies. Our work advances multi-modal remote sensing methodologies, offering a scalable framework for operational agricultural monitoring in challenging environments.
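
The distinguishing feature of the best-performing strategy in this abstract, early fusion, is that optical and SAR observations are concatenated before any model-specific processing. The sketch below illustrates that idea with a tiny Transformer encoder over single-patch tokens; the patch size, band counts, token construction, and classification head are placeholders, not the EFT architecture itself.

```python
# Sketch (PyTorch): input-level ("early") fusion of optical and SAR patches ahead of a
# small Transformer encoder and a rice / non-rice classification head.
import torch
import torch.nn as nn

optical_bands, sar_bands, patch, d_model = 10, 2, 8, 64

embed = nn.Linear((optical_bands + sar_bands) * patch * patch, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, 2)                              # rice vs. non-rice

optical = torch.rand(4, optical_bands, patch, patch)      # optical patch stack
sar = torch.rand(4, sar_bands, patch, patch)              # SAR patch stack

x = torch.cat([optical, sar], dim=1).flatten(1)           # fuse at the input level
tokens = embed(x).unsqueeze(1)                            # one token per patch here
logits = head(encoder(tokens).squeeze(1))
print(logits.shape)   # torch.Size([4, 2])
```
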
A Novel Multimodal Fusion Framework Based on Point Cloud Registration for Near-Field 3D SAR Perception
This study introduces a pioneering multimodal fusion framework to enhance near-field 3D Synthetic Aperture Radar (SAR) imaging, crucial for applications like radar cross-section measurement and concealed object detection. Traditional near-field 3D SAR imaging struggles with issues like target–background confusion due to clutter and multipath interference, shape distortion from high sidelobes, and lack of color and texture information, all of which impede effective target recognition and scattering diagnosis. The proposed approach presents the first known application of multimodal fusion in near-field 3D SAR imaging, integrating LiDAR and optical camera data to overcome its inherent limitations. The framework comprises data preprocessing, point cloud registration, and data fusion, where registration between multi-sensor data is the core of effective integration. Recognizing the inadequacy of traditional registration methods in handling varying data formats, noise, and resolution differences, particularly between near-field 3D SAR and other sensors, this work introduces a novel three-stage registration process to effectively address these challenges. First, the approach designs a structure–intensity-constrained centroid distance detector, enabling key point extraction that reduces heterogeneity and accelerates the process. Second, a sample consensus initial alignment algorithm with SHOT features and geometric relationship constraints is proposed for enhanced coarse registration. Finally, the fine registration phase employs adaptive thresholding in the iterative closest point algorithm for precise and efficient data alignment. Both visual and quantitative analyses of measured data demonstrate the effectiveness of our method. The experimental results show significant improvements in registration accuracy and efficiency, laying the groundwork for future multimodal fusion advancements in near-field 3D SAR imaging.
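
Of the three registration stages this abstract outlines, the fine-registration step is the most generic: an iterative closest point (ICP) loop with a correspondence distance threshold. The sketch below implements a plain ICP iteration on synthetic point clouds; the keypoint detector, SHOT-based coarse alignment, and adaptive threshold schedule from the paper are not reproduced, and the threshold and data here are placeholders.

```python
# Sketch: a simplified ICP loop (nearest-neighbour correspondences, SVD-based rigid fit,
# fixed distance threshold) aligning a source point cloud to a reference cloud.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iterations=20, max_dist=0.5):
    """Rigidly align src to dst by alternating correspondence search and SVD fitting."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    current = src.copy()
    for _ in range(iterations):
        dists, idx = tree.query(current)
        keep = dists < max_dist                      # reject distant correspondences
        p, q = current[keep], dst[idx[keep]]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                    # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = q.mean(0) - Ri @ p.mean(0)
        current = current @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti                   # accumulate the total transform
    return R, t

rng = np.random.default_rng(0)
reference = rng.random((200, 3))                     # stand-in LiDAR/optical cloud
rotation = np.array([[0.96, -0.28, 0.0], [0.28, 0.96, 0.0], [0.0, 0.0, 1.0]])
source = (reference - 0.1) @ rotation.T              # stand-in misaligned SAR cloud
print(icp(source, reference))
```
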
Improving Urban Land Cover Classification in Cloud-Prone Areas with Polarimetric SAR Images
Urban land cover (ULC) serves as fundamental environmental information for urban studies, while accurate and timely ULC mapping remains challenging due to cloud contamination in tropical and subtropical areas. Synthetic aperture radar (SAR) has excellent all-weather working capability to overcome the challenge, while optical SAR data fusion is often required due to the limited land surface information provided by SAR. However, the mechanism by which SAR can compensate optical images, given the occurrence of clouds, in order to improve the ULC mapping, remains unexplored. To address the issue, this study proposes a framework, through various sampling strategies and three typical supervised classification methods, to quantify the ULC classification accuracy using optical and SAR data with various cloud levels. The land cover confusions were investigated in detail to understand the role of SAR in distinguishing land cover under different types of cloud coverage. Several interesting experimental results were found. First, 50% cloud coverage over the optical images decreased the overall accuracy by 10–20%, while the incorporation of SAR images was able to improve the overall accuracy by approximately 4%, by increasing the recognition of cloud-covered ULC information, particularly the water bodies. Second, if all the training samples were not contaminated by clouds, the cloud coverage had a higher impact with a reduction of 35% in the overall accuracy, whereas the incorporation of SAR data contributed to an increase of approximately 5%. Third, the thickness of clouds also brought about different impacts on the results, with an approximately 10% higher reduction from thick clouds compared with that from thin clouds, indicating that certain spectral information might still be available in the areas covered by thin clouds. These findings provide useful references for the accurate monitoring of ULC over cloud-prone areas, such as tropical and subtropical cities, where cloud contamination is often unavoidable.