198 results for "multi-spectral remote sensing"
Mapping of Urban Surface Water Bodies from Sentinel-2 MSI Imagery at 10 m Resolution via NDWI-Based Image Sharpening
This study conducts an exploratory evaluation of the performance of the newly available Sentinel-2A Multispectral Instrument (MSI) imagery for mapping water bodies using an image sharpening approach. Sentinel-2 MSI provides spectral bands at different resolutions, including RGB and near-infrared (NIR) bands at 10 m and short-wavelength infrared (SWIR) bands at 20 m, which are closely related to surface water information. Because Sentinel-2 replaces the panchromatic band with four high-resolution (10 m) multi-spectral bands, a pan-like band must be defined for the image sharpening process. This study, which aimed at urban surface water extraction, utilised the Normalised Difference Water Index (NDWI) at 10 m resolution as the high-resolution image to sharpen the 20 m SWIR bands. Object-level Modified NDWI (MNDWI) mapping and a minimum valley-bottom adjustment threshold were then applied to extract water maps. The proposed method was compared with two conventional sharpening approaches: one based on the most closely related band (between the visible/NIR and SWIR bands) and one based on the first component of a principal component analysis. Results show that the proposed NDWI-based MNDWI image exhibits higher separability and is more effective for both classification-level and boundary-level final water maps than the traditional approaches.
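The pipeline described above (compute NDWI from the 10 m green and NIR bands, use it as a pan-like band to sharpen the 20 m SWIR, then threshold MNDWI) can be sketched in a few lines of Python. This is a minimal sketch on toy arrays: the high-pass-filter detail injection below is a stand-in for whatever sharpening scheme the study used, and the zero threshold is a placeholder for the valley-bottom-adjusted one.

```python
import numpy as np

def ndwi(green, nir):
    """Normalised Difference Water Index: (green - NIR) / (green + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def mndwi(green, swir):
    """Modified NDWI: (green - SWIR) / (green + SWIR)."""
    return (green - swir) / (green + swir + 1e-9)

def upsample2x(band):
    """Nearest-neighbour upsampling from the 20 m grid to the 10 m grid."""
    return band.repeat(2, axis=0).repeat(2, axis=1)

def sharpen_swir(swir20, pan_like10):
    """Inject the high-frequency detail of the 10 m pan-like band into the
    upsampled SWIR band (a simple high-pass-filter scheme, used here as a
    stand-in for the study's sharpening step)."""
    h, w = pan_like10.shape
    lowpass = upsample2x(pan_like10.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return upsample2x(swir20) + (pan_like10 - lowpass)

rng = np.random.default_rng(0)
green, nir = rng.random((2, 8, 8))   # toy 10 m bands
swir20 = rng.random((4, 4))          # toy 20 m SWIR band
swir10 = sharpen_swir(swir20, ndwi(green, nir))
water = mndwi(green, swir10) > 0.0   # threshold would be tuned per scene
```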
Employing a Multi-Input Deep Convolutional Neural Network to Derive Soil Clay Content from a Synergy of Multi-Temporal Optical and Radar Imagery Data
Earth observation (EO) has immense potential as an enabling tool for mapping spatial characteristics of the topsoil layer. Recently, deep learning-based algorithms and cloud computing infrastructure have become available, with great potential to revolutionize the processing of EO data. This paper presents a novel EO-based soil monitoring approach leveraging open-access Copernicus Sentinel data and the Google Earth Engine platform. Building on key results from existing data mining approaches for extracting bare soil reflectance values, the study delivers valuable insights into the synergistic use of open-access optical and radar images. The proposed framework is driven by the need to eliminate the influence of ambient factors and to evaluate the efficiency of a convolutional neural network (CNN) in combining the complementary information contained in both optical and radar spectral data, together with auxiliary geographical coordinates. We developed and calibrated a multi-input CNN model on soil samples from the LUCAS database (calibration = 80%, validation = 20%) and applied it to predict soil clay content. A promising prediction performance (R2 = 0.60, ratio of performance to the interquartile range (RPIQ) = 2.02, n = 6136) was achieved by including both types of observations (synthetic aperture radar (SAR) and laboratory visible near-infrared–shortwave infrared (VNIR-SWIR) multispectral) in the CNN model, demonstrating an improvement of more than 5.5% in RMSE over the multi-year median optical composite used with current state-of-the-art non-linear machine learning methods such as random forest (RF; R2 = 0.55, RPIQ = 1.91, n = 6136) and artificial neural network (ANN; R2 = 0.44, RPIQ = 1.71, n = 6136). Moreover, we examined post-hoc techniques to interpret the CNN model and thus acquire an understanding of the relationships between the spectral information and the soil target identified by the model. Looking to the future, the proposed approach can be adopted for the forthcoming hyperspectral orbital sensors to expand the current capabilities of the EO component by estimating more soil attributes with higher predictive performance.
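A multi-input network of the kind described, with separate branches for optical spectra, SAR backscatter, and geographic coordinates fused before a regression head, might look like the following toy PyTorch sketch. The branch widths, layer counts, and input dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiInputCNN(nn.Module):
    """Toy multi-input regressor: a 1-D conv branch for the optical
    spectrum, an MLP for SAR backscatter, an MLP for coordinates,
    all concatenated before a regression head for clay content."""
    def __init__(self, n_optical=21, n_sar=2):
        super().__init__()
        self.optical = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.sar = nn.Sequential(nn.Linear(n_sar, 16), nn.ReLU())
        self.coords = nn.Sequential(nn.Linear(2, 8), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(16 + 16 + 8, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, optical, sar, coords):
        fused = torch.cat([self.optical(optical.unsqueeze(1)),
                           self.sar(sar), self.coords(coords)], dim=1)
        return self.head(fused)

model = MultiInputCNN()
# batch of 4 samples: 21 optical bands, 2 SAR channels, lat/lon pair
clay = model(torch.rand(4, 21), torch.rand(4, 2), torch.rand(4, 2))
print(clay.shape)  # torch.Size([4, 1])
```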
MMYFnet: Multi-Modality YOLO Fusion Network for Object Detection in Remote Sensing Images
Object detection in remote sensing images is crucial for airport management, hazard prevention, traffic monitoring, and more. Precise object localization and identification enable remote sensing imagery to provide early warnings, mitigate risks, and offer strong support for decision-making. While traditional deep learning-based object detection techniques have achieved significant results in single-modal environments, their detection capabilities still encounter challenges in complex environments, such as adverse weather conditions or situations where objects are obscured. To overcome the limitations of existing fusion methods in terms of complexity and insufficient information utilization, we propose a Cosine Similarity-based Image Feature Fusion (CSIFF) module and integrate it into a dual-branch YOLOv8 network, constructing a lightweight and efficient target detection network called the Multi-Modality YOLO Fusion Network (MMYFNet). This network uses cosine similarity to divide the original features into common features and specific features, which are then refined and fused through dedicated modules. Experimental and analytical results show that MMYFNet performs well on both the VEDAI and FLIR datasets, achieving mAP values of 80% and 76.8%, respectively. Further validation through parameter sensitivity experiments, ablation studies, and visual analyses confirms the effectiveness of the CSIFF module. MMYFNet achieves high detection accuracy with fewer parameters, and the CSIFF module, as a plug-and-play component, can be integrated into other CNN-based cross-modality network models, providing a new approach for object detection in remote sensing image fusion.
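The core idea, splitting features into modality-common and modality-specific parts by cosine similarity before fusing, can be illustrated with a hypothetical sketch. The channel-wise split, the threshold tau, and the fusion rule below are all assumptions for illustration; the abstract does not specify CSIFF at this level of detail.

```python
import torch
import torch.nn.functional as F

def csiff_sketch(feat_a, feat_b, tau=0.5):
    """Hypothetical cosine-similarity fusion: channels whose feature maps
    agree across the two modalities (cosine similarity > tau) are treated
    as common and averaged; the remaining, modality-specific channels are
    summed so information unique to either branch is retained."""
    b, c, h, w = feat_a.shape
    sim = F.cosine_similarity(feat_a.reshape(b, c, -1),
                              feat_b.reshape(b, c, -1), dim=2)  # (b, c)
    common = (sim > tau).float().view(b, c, 1, 1)
    return common * 0.5 * (feat_a + feat_b) + (1 - common) * (feat_a + feat_b)

# toy visible and infrared feature maps from two YOLO-style backbones
vis, ir = torch.rand(2, 1, 64, 32, 32)
fused = csiff_sketch(vis, ir)
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```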
A Classified Adversarial Network for Multi-Spectral Remote Sensing Image Change Detection
Adversarial training has demonstrated advanced capabilities for generative image models. In this paper, we propose a deep neural network, named the classified adversarial network (CAN), for multi-spectral image change detection. The network is based on generative adversarial networks (GANs). The generator captures the distribution of the bitemporal multi-spectral image data and transforms it into change detection results, which are fed to the discriminator as fake data; the results obtained by pre-classification are fed to the discriminator as real data. Adversarial training thus facilitates the generator's learning of the transformation from a bitemporal image to a change map: once the generator is well trained, the bitemporal multi-spectral images are input into it and the final change detection results are obtained directly. The proposed method is completely unsupervised; the only required inputs are the preprocessed data obtained from pre-classification and training sample selection. Through adversarial training, the generator can better learn the relationship between the bitemporal multi-spectral image data and the corresponding labels, and the well-trained generator can then be applied to the raw bitemporal multi-spectral images to obtain the final change map (CM). The effectiveness and robustness of the proposed method were verified by experimental results on real high-resolution multi-spectral image data sets.
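The training loop implied here follows the standard GAN pattern: the discriminator sees pre-classification maps as real and generator outputs as fake, and the generator learns to produce change maps that pass for pre-classification results. A minimal PyTorch sketch with placeholder shapes and layer sizes (not the paper's CAN architecture):

```python
import torch
import torch.nn as nn

# Placeholder shapes: two stacked 4-band images per sample (8 channels).
G = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())  # change map
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 16 * 16, 1))      # real/fake logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

bitemporal = torch.rand(4, 8, 32, 32)              # toy bitemporal image pairs
pseudo = (torch.rand(4, 1, 32, 32) > 0.5).float()  # pre-classification result

for _ in range(2):  # a couple of toy steps
    fake = G(bitemporal)
    # discriminator: pre-classification maps are "real", generator output "fake"
    d_loss = (bce(D(pseudo), torch.ones(4, 1)) +
              bce(D(fake.detach()), torch.zeros(4, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # generator: make its change map pass for a pre-classification map
    g_loss = bce(D(fake), torch.ones(4, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

change_map = (G(bitemporal) > 0.5).float()  # final CM from the trained generator
```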
Estimation of Soil Salt Content at Different Depths Using UAV Multi-Spectral Remote Sensing Combined with Machine Learning Algorithms
Soil salinization seriously affects the sustainable development of agricultural production; thus, timely, efficient, and accurate estimation of soil salt content (SSC) is of considerable research significance. In this study, the feasibility of soil salt content retrieval using machine learning models was explored based on a UAV (unmanned aerial vehicle) multi-spectral remote sensing platform. First, two variable screening methods, Pearson correlation analysis (PCA) and grey relational analysis (GRA), were used to rank the importance of 20 commonly used spectral indices. The sensitive spectral variables were then divided into a vegetation index group, a salt index group, and a combined variable group, which served as the model inputs. To estimate SSC at soil depths of 0–20 cm and 20–40 cm, three machine learning regression models were constructed: Support Vector Machine (SVM), Random Forest (RF), and Backpropagation Neural Network (BPNN). Finally, a salt distribution map for the 0–20 cm soil depth was drawn based on the best estimation model. The experimental results show that GRA is better than PCA at improving the accuracy of the estimation model, and that the combined variable group containing soil moisture information performs best. All three machine learning models achieved good prediction results. Considering accuracy and stability together, the predictions for 0–20 cm were better than those for 20–40 cm; the validation-set coefficient of determination (R2), root-mean-square error (RMSE), and mean absolute error (MAE) of the best inversion model were 0.775, 0.055, and 0.038, respectively, and the soil salt spatial map based on the optimal estimation model reflects the salinization distribution in the study area. This study therefore shows that a UAV multi-spectral remote sensing platform combined with machine learning models can effectively monitor farmland soil salt content.
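The screen-then-regress workflow (rank candidate spectral indices by correlation with measured SSC, keep the most sensitive ones, fit a regression model) is easy to sketch. The toy example below uses synthetic data, a |Pearson r| ranking, an arbitrary top-8 cutoff, and scikit-learn's random forest; none of these settings come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.random((120, 20))                 # 20 candidate spectral indices per sample
y = 0.4 * X[:, 0] - 0.3 * X[:, 5] + 0.05 * rng.random(120)  # synthetic SSC

# rank indices by |Pearson r| with the measured salt content
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(-np.abs(r))[:8]         # keep the 8 most sensitive variables

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:90, keep], y[:90])              # simple calibration / validation split
print("hold-out R2:", rf.score(X[90:, keep], y[90:]))
```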
Estimation of Rice Leaf Area Index Utilizing a Kalman Filter Fusion Methodology Based on Multi-Spectral Data Obtained from Unmanned Aerial Vehicles (UAVs)
The rapid and accurate estimation of leaf area index (LAI) through remote sensing holds significant importance for precise crop management. However, a vegetation index model constructed directly from multi-spectral data lacks robustness and spatiotemporal expansibility, making its direct application in practical production challenging. This study aimed to establish a simple and effective method for LAI estimation that addresses the poor accuracy and stability encountered by vegetation index models under varying conditions. Based on seven years of field plot trials with different varieties and nitrogen fertilizer treatments, the Kalman filter (KF) fusion method was employed to integrate the estimates of multiple vegetation index models, and the fusion process was investigated by comparing and analyzing the relationship between fixed and dynamic variances alongside the fusion accuracy of optimal combinations during different growth stages. A novel multi-model integration fusion method, KF-DGDV (Kalman Filtering with Different Growth Periods and Different Vegetation Index Models), which combines the growth characteristics and uncertainty of LAI, was designed for the precise monitoring of LAI across the various growth phases of rice. The results indicated that the KF-DGDV technique estimates LAI more accurately than statistical data fusion and the conventional vegetation index model method. Specifically, a high R2 value of 0.76 was achieved during the tillering to booting stage, and 0.66 at the heading to maturity stage. Within the framework of the traditional vegetation index models, the red-edge difference vegetation index (DVIREP) model performed best, with R2 values of 0.65 during the tillering to booting stage and 0.50 during the heading to maturity stage. The multi-model integration method (MME) yielded R2 values of 0.67 for LAI estimation during the tillering to booting stage and 0.53 during the heading to maturity stage. Consequently, KF-DGDV provides an effective and stable method for the real-time quantitative estimation of LAI in rice.
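The core fusion step, combining several LAI estimates by weighting each according to its variance as a Kalman filter update does, can be shown with toy numbers. A minimal sketch assuming scalar estimates with known variances (the paper's KF-DGDV additionally adapts these across growth stages):

```python
import numpy as np

def kf_fuse(estimates, variances):
    """Fuse several independent LAI estimates by sequential Kalman
    updates: each new estimate corrects the running state in
    proportion to its (assumed known) variance."""
    x, p = estimates[0], variances[0]
    for z, r in zip(estimates[1:], variances[1:]):
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # corrected LAI estimate
        p = (1 - k) * p          # reduced uncertainty after the update
    return x, p

# three vegetation-index models giving different LAI estimates (toy values)
lai, var = kf_fuse(np.array([3.1, 3.6, 2.9]), np.array([0.30, 0.20, 0.45]))
print(f"fused LAI = {lai:.2f}, variance = {var:.3f}")
```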
Use of satellite images to monitor Leucoptera sinuella leaf damage in poplar plantations in central Chile
The invasive poplar leaf miner Leucoptera sinuella (Lepidoptera: Lyonetiidae) has spread throughout poplar plantations in central Chile. We developed and validated models based on two methodologies of foliar damage estimation and on different bands and indexes obtained from Sentinel-2 satellite images. Foliar damage was estimated either visually in the field, using an ordinal severity scale, or in the laboratory, by measuring the mined leaf area on a sample of leaves with image software (ImageJ). We developed four models for the field visual estimation, using the red band and three spectral indexes, and four models for the laboratory image-software estimation, using the near-infrared (NIR) band and the same three spectral indexes. Models developed from the field visual estimation with the red band (R2 = 0.88) and the Normalized Difference Vegetation Index (NDVI) (R2 = 0.89) produced the best results, as did those from the image-software estimation with the NIR band (R2 = 0.86) and NDVI (R2 = 0.83). The field visual estimation model with the red band gave the best validation results, with an R2 of 0.90, a mean square error of 0.73, a mean absolute error of 0.59, and a slope of 0.91. This model can predict the severity of foliar damage by L. sinuella in poplar plantations, representing a potentially useful monitoring tool for decision-making in the management of the poplar leaf miner across large areas of poplar plantations in central Chile.
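Each band/index model here is a simple regression of a severity score on a spectral predictor. A toy sketch with synthetic reflectance values (the coefficients and noise are made up) showing the NDVI computation and an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic Sentinel-2 red and NIR reflectance for 50 plots
red, nir = rng.random((2, 50)) * 0.3 + np.array([[0.1], [0.4]])
ndvi = (nir - red) / (nir + red)
# synthetic ordinal-like severity: more damage where NDVI is lower
severity = 4.0 - 3.0 * ndvi + 0.2 * rng.standard_normal(50)

slope, intercept = np.polyfit(ndvi, severity, 1)      # OLS fit
pred = slope * ndvi + intercept
r2 = 1 - np.sum((severity - pred) ** 2) / np.sum((severity - severity.mean()) ** 2)
print(f"severity = {slope:.2f} * NDVI + {intercept:.2f}, R2 = {r2:.2f}")
```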
Identification of banana fusarium wilt using supervised classification algorithms with UAV-based multi-spectral imagery
Banana Fusarium wilt currently threatens banana production areas all over the world. Rapid, large-area monitoring of the disease is very important for disease treatment and crop planting adjustments. The objective of this study was to evaluate the performance of supervised classification algorithms, namely support vector machine (SVM), random forest (RF), and artificial neural network (ANN) algorithms, in identifying locations that were or were not infested with Fusarium wilt. An unmanned aerial vehicle (UAV) equipped with a five-band multi-spectral sensor (blue, green, red, red-edge, and near-infrared bands) was used to capture the multi-spectral imagery, and a total of 139 ground sample sites were surveyed to assess the occurrence of banana Fusarium wilt. The results showed that the SVM, RF, and ANN algorithms all performed well at identifying and mapping banana Fusarium wilt in the UAV-based multi-spectral imagery. The overall accuracies of the SVM, RF, and ANN were 91.4%, 90.0%, and 91.1%, respectively, for the pixel-based approach, and the RF algorithm required significantly less training time than the SVM and ANN algorithms. The maps generated by the three algorithms showed that Fusarium wilt occurred over 5.21–5.75 hm2, accounting for 36.3–40.1% of the total banana planting area in the study area. The results also showed that including the red-edge band increased the overall accuracy by 2.9–3.0%. A simulation of satellite-based image resolutions (0.5 m, 1 m, 2 m, and 5 m) showed that imagery with a spatial resolution finer than 2 m yielded good identification accuracy for Fusarium wilt. These results demonstrate that the RF classifier is well suited to identifying and mapping banana Fusarium wilt from UAV-based remote sensing imagery, and they provide guidance for disease treatment and crop planting adjustments.
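Pixel-based classification in this setting reduces to labelling each five-band pixel vector as infested or not. A toy scikit-learn sketch on synthetic pixels (the labelling rule and sample counts are invented), including a crude version of the red-edge band-importance comparison:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# each pixel is a 5-band vector: blue, green, red, red-edge, NIR
pixels = rng.random((500, 5))
labels = (pixels[:, 3] < 0.4).astype(int)   # toy rule: low red-edge => infested

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pixels[:400], labels[:400])
print("held-out overall accuracy:", clf.score(pixels[400:], labels[400:]))

# dropping the red-edge band (column 3) mimics the band-importance test
clf4 = RandomForestClassifier(n_estimators=100, random_state=0)
clf4.fit(np.delete(pixels[:400], 3, axis=1), labels[:400])
print("accuracy without red-edge:",
      clf4.score(np.delete(pixels[400:], 3, axis=1), labels[400:]))
```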
Precision Estimation of Rice Nitrogen Fertilizer Topdressing According to the Nitrogen Nutrition Index Using UAV Multi-Spectral Remote Sensing: A Case Study in Southwest China
The precision estimation of N fertilizer application according to the nitrogen nutrition index (NNI) using unmanned aerial vehicle (UAV) multi-spectral measurements remains to be tested in different rice cultivars and planting areas. Therefore, two field experiments were conducted using varied N rates (0, 60, 120, 160, and 200 kg N ha−1) on two rice cultivars, Yunjing37 (YJ-37, Oryza sativa subsp. japonica Kato., the Institute of Food Crops at the Yunnan Academy of Agricultural Sciences, Kunming, China) and Jiyou6135 (JY-6135, Oryza sativa subsp. indica Kato., Hunan Longping Gaoke Nongping Seed Industry Co., Ltd., Changsha, China), in southwest China. The rice canopy spectral images were captured by the UAV's multi-spectral remote sensing at three growing stages, and the NNI was calculated from the critical N (Nc) dilution curve. A random forest (RF) model integrating multiple vegetation indices established the NNI inversion, facilitating precise N topdressing through a linear plateau model of NNI versus relative yield and a remote sensing NNI-based N balance approach. The Nc dilution curve calibrated with aboveground dry matter demonstrated the highest accuracy (R2 = 0.93 and 0.97 for the shoot components of cultivars YJ-37 and JY-6135), outperforming the stem-based (R2 = 0.70, 0.76) and leaf-based (R2 = 0.80, 0.89) models. The RF model combined with six vegetation index combinations was the best predictor of NNI at each growing period (YJ-37: R2 = 0.70–0.97, RMSE = 0.02–0.04; JY-6135: R2 = 0.71–0.92, RMSE = 0.04–0.05), surpassing BPNN/PLSR by 6.14–10.10% in R2 and 13.71–33.65% in error reduction across the critical rice growth stages. The topdressing amounts for YJ-37 and JY-6135 were 111–124 kg ha−1 and 80–133 kg ha−1, with low errors of 2.50–8.73 kg ha−1 for YJ-37 and 2.52–5.53 kg ha−1 for JY-6135 at the jointing (JT) and heading (HD) stages. These results are promising for the precise topdressing of rice using a remote sensing NNI-based N balance method. The combination of UAV multi-spectral imaging with the NNI-nitrogen balance method was tested for the first time in southwest China, demonstrating its feasibility and offering a regional approach for precise rice topdressing.
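The quantities involved have a compact algebraic form: the critical N dilution curve is conventionally written Nc = a * W^(-b) for aboveground dry matter W, and NNI is the ratio of the measured N concentration to Nc at the same biomass. A minimal sketch with placeholder coefficients (the study calibrates a and b per cultivar; the values below are illustrative):

```python
import numpy as np

def critical_n(w, a=3.4, b=0.28):
    """Critical N dilution curve Nc = a * W^(-b), with W the aboveground
    dry matter (t/ha). a and b here are illustrative placeholders."""
    return a * np.power(w, -b)

def nni(n_actual, w):
    """Nitrogen nutrition index: measured N concentration divided by the
    critical N concentration at the same biomass."""
    return n_actual / critical_n(w)

# NNI < 1 indicates N deficit, NNI > 1 surplus; the topdressing amount is
# then sized from the N balance implied by the deficit.
print(f"NNI = {nni(n_actual=2.4, w=5.0):.2f}")
```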
EIAGA-S: Rapid Mapping of Mangroves Using Geospatial Data without Ground Truth Samples
Mangrove forests are essential for coastal protection and carbon sequestration, yet accurately mapping their distribution remains challenging due to spectral similarities with other vegetation. This study introduces a novel unsupervised learning method, the Elite Individual Adaptive Genetic Algorithm-Semantic Inference (EIAGA-S), designed for high-precision semantic segmentation of mangrove forests from remote sensing images without the need for ground truth samples. EIAGA-S integrates an adaptive genetic algorithm with an elite-individual evolution strategy, optimizing the segmentation process. A new Mangrove Enhanced Vegetation Index (MEVI) was developed to better distinguish mangroves from other vegetation types within the spectral feature space. EIAGA-S constructs segmentation rules through iterative rule stacking and enhances boundary information using connected component analysis. The method was evaluated on a multi-source remote sensing dataset covering the Hainan Dongzhai Port Mangrove Nature Reserve in China. The experimental results demonstrate that EIAGA-S achieves a superior overall mIoU (mean intersection over union) of 0.92 and an F1 score of 0.923, outperforming traditional models such as K-means and the SVM (support vector machine). A detailed boundary analysis confirms EIAGA-S's ability to extract fine-grained mangrove patches. The segmentation comprises five categories: mangrove canopy, other terrestrial vegetation, buildings and streets, bare land, and water bodies. The proposed EIAGA-S model offers a precise and data-efficient solution for mangrove semantic mapping while eliminating the dependency on extensive field sampling and labeled data; the MEVI index additionally facilitates large-scale mangrove monitoring. In future work, EIAGA-S can be integrated with long-term remote sensing data to analyze mangrove forest dynamics under climate change. This approach has potential applications in rapid forest change detection, environmental protection, and beyond.
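An elitist genetic algorithm for unsupervised segmentation can be illustrated by evolving a single threshold on a spectral index. In this toy sketch the fitness is an Otsu-style between-class variance, standing in for EIAGA-S's unsupervised objective (which the abstract does not specify); the population size, mutation scale, and generation count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy vegetation-index image: two spectral classes, flattened to 1-D
index_img = np.concatenate([rng.normal(0.2, 0.05, 500),
                            rng.normal(0.7, 0.05, 500)])

def fitness(t):
    """Between-class variance of the two-class split at threshold t."""
    lo, hi = index_img[index_img < t], index_img[index_img >= t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w = len(lo) / len(index_img)
    return w * (1 - w) * (lo.mean() - hi.mean()) ** 2

pop = rng.uniform(index_img.min(), index_img.max(), 20)  # candidate thresholds
for _ in range(30):
    scores = np.array([fitness(t) for t in pop])
    elite = pop[np.argmax(scores)]                # elite individual survives intact
    parents = pop[np.argsort(-scores)[:10]]       # selection of the fittest half
    children = rng.choice(parents, 19) + rng.normal(0, 0.02, 19)  # mutation
    pop = np.append(children, elite)

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"evolved threshold = {best:.3f}")
```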