Catalogue Search | MBRL
Explore the vast range of titles available.
2,543 result(s) for "sentinel-2"
Mapping of Land Cover with Optical Images, Supervised Algorithms, and Google Earth Engine
by Aquino-Santos, Raúl; Rios-Toledo, German; Pech-May, Fernando
in Agricultural production; Algorithms; Artificial intelligence
2022
Crops and ecosystems constantly change, and risks arise from heavy rains, hurricanes, droughts, human activities, climate change, etc. These events have caused additional damage with economic and social impacts. Natural phenomena have caused the loss of crop areas, which endangers food security, destruction of the habitat of species of flora and fauna, and flooding of populated areas, among others. To help address these problems, it is necessary to develop strategies that maximize agricultural production while reducing land degradation, environmental impact, and contamination of water resources. The generation of crop and land-use maps is advantageous for identifying suitable crop areas and collecting precise information about the produce. In this work, a strategy is proposed to identify and map sorghum and corn crops as well as land use and land cover. Our approach uses Sentinel-2 satellite images, spectral indices for the phenological detection of vegetation and water bodies, and machine learning methods: support vector machine, random forest, and classification and regression trees. The study area is a tropical agricultural area with water bodies located in southeastern Mexico. The study was carried out from 2017 to 2019, and considering the climate and growing seasons of the site, two seasons were defined for each year. Land-use classes identified were: water bodies, land in recovery, urban areas, sandy areas, and tropical rainforest. The overall accuracy was 0.99 for the support vector machine, 0.95 for the random forest, and 0.92 for classification and regression trees. The kappa index was 0.99 for the support vector machine, 0.97 for the random forest, and 0.94 for classification and regression trees. The support vector machine obtained the lowest percentage of false positives and margin of error, and it also achieved better results in the classification of soil types and identification of crops.
Journal Article
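The abstract above outlines supervised classification of a Sentinel-2 composite in Google Earth Engine with support vector machine, random forest, and CART classifiers plus spectral indices. A minimal sketch of that kind of workflow in the Earth Engine Python API follows; the study region, the labelled `training_points` asset, the band selection, and the class property name are placeholders rather than the paper's actual inputs.

```python
import ee

ee.Initialize()

# Placeholder study region and labelled samples (hypothetical asset path).
region = ee.Geometry.Rectangle([-95.2, 17.8, -94.8, 18.2])
training_points = ee.FeatureCollection('users/example/crop_samples')  # class property: 'landcover'

# Cloud-filtered Sentinel-2 surface-reflectance composite for one growing season.
composite = (ee.ImageCollection('COPERNICUS/S2_SR')
             .filterBounds(region)
             .filterDate('2017-06-01', '2017-11-30')
             .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
             .median())

# Spectral indices for vegetation and water, as mentioned in the abstract.
ndvi = composite.normalizedDifference(['B8', 'B4']).rename('NDVI')
ndwi = composite.normalizedDifference(['B3', 'B8']).rename('NDWI')
stack = (composite.select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12'])
         .addBands(ndvi).addBands(ndwi))

samples = stack.sampleRegions(collection=training_points,
                              properties=['landcover'], scale=10)

# Train the three classifiers compared in the paper.
classifiers = {
    'SVM': ee.Classifier.libsvm(),
    'RF': ee.Classifier.smileRandomForest(100),
    'CART': ee.Classifier.smileCart(),
}
for name, clf in classifiers.items():
    trained = clf.train(features=samples, classProperty='landcover',
                        inputProperties=stack.bandNames())
    classified = stack.classify(trained)  # classified land-cover map (not exported here)
    print(name, 'training accuracy:',
          trained.confusionMatrix().accuracy().getInfo())
```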
Global Mangrove Watch: Updated 2010 Mangrove Forest Extent (v2.5)
2022
This study presents an updated global mangrove forest baseline for 2010: Global Mangrove Watch (GMW) v2.5. The previous GMW maps (v2.0) of the mangrove extent are currently considered the most comprehensive global products available; however, some areas were identified as missing or poorly mapped. Therefore, this study has updated the 2010 baseline map to increase the mapping quality and completeness of the mangrove extent. This revision resulted in an additional 2660 km² of mangroves being mapped, yielding a revised global mangrove extent for 2010 of some 140,260 km². The overall map accuracy was estimated to be 95.1% with a 95% confidence interval of 93.8–96.5%, as assessed using 50,750 reference points located across 60 globally distributed sites. Of these 60 validation sites, 26 were located in areas that were remapped to produce the v2.5 map, and the overall accuracy for these was found to have increased from 82.6% (95% confidence interval: 80.1–84.9%) for the v2.0 map to 95.0% (95% confidence interval: 93.7–96.4%) for the v2.5 map. Overall, the improved GMW v2.5 map provides a more robust product to support the conservation and sustainable use of mangroves globally.
Journal Article
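The mangrove baseline above is validated with 50,750 reference points and reported with 95% confidence intervals. As a rough illustration of that kind of accuracy assessment, the sketch below computes overall accuracy and a simple normal-approximation confidence interval from paired reference and map labels; the GMW study's actual estimator (e.g., any stratification or area weighting) is not described in the abstract, and the labels here are randomly generated toy data.

```python
import numpy as np

def overall_accuracy_ci(y_ref, y_map, z=1.96):
    """Overall accuracy of a map against reference points, with an approximate
    95% normal-approximation confidence interval. Illustrative only; the GMW
    paper's exact (possibly design-based) estimator may differ."""
    y_ref = np.asarray(y_ref)
    y_map = np.asarray(y_map)
    n = y_ref.size
    oa = np.mean(y_ref == y_map)
    half_width = z * np.sqrt(oa * (1.0 - oa) / n)
    return oa, (oa - half_width, oa + half_width)

# Toy usage with randomly generated labels (1 = mangrove, 0 = non-mangrove).
rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=50750)
mapped = np.where(rng.random(50750) < 0.95, reference, 1 - reference)
oa, (lo, hi) = overall_accuracy_ci(reference, mapped)
print(f"OA = {oa:.3f}, approx. 95% CI = ({lo:.3f}, {hi:.3f})")
```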
Land-Use and Land-Cover Classification in Semi-Arid Areas from Medium-Resolution Remote-Sensing Imagery: A Deep Learning Approach
2022
Detailed Land-Use and Land-Cover (LULC) information is of pivotal importance in, e.g., urban/rural planning, disaster management, and climate change adaptation. Recently, Deep Learning (DL) has emerged as a paradigm shift for LULC classification. To date, little research has focused on using DL methods for LULC mapping in semi-arid regions, and none that we are aware of have compared the use of different Sentinel-2 image band combinations for mapping LULC in semi-arid landscapes with deep Convolutional Neural Network (CNN) models. Sentinel-2 multispectral image bands have varying spatial resolutions, and there is often high spectral similarity between different LULC features in semi-arid regions; therefore, selection of suitable Sentinel-2 bands could be an important factor for LULC mapping in these areas. Our study contributes to the remote sensing literature by testing different Sentinel-2 bands, as well as the transferability of well-optimized CNNs, for LULC classification in semi-arid regions. We first trained a CNN model in one semi-arid study site (Gujranwala city, Gujranwala Saddar and Wazirabad townships, Pakistan), and then applied the pre-trained model to map LULC in two additional semi-arid study sites (Lahore and Faisalabad city, Pakistan). Two different composite images were compared: (i) a four-band composite of the 10 m spatial resolution image bands (Near-Infrared (NIR), green, blue, and red bands), and (ii) a ten-band composite made by adding two Short Wave Infrared (SWIR) bands and four vegetation red-edge bands to the four-band composite. Experimental results corroborate the validity of the proposed CNN architecture. Notably, the four-band CNN model has shown robustness in semi-arid regions, where spatially and spectrally confusing land covers are present.
Journal Article
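The semi-arid LULC study above compares a four-band and a ten-band Sentinel-2 composite as CNN inputs. The paper's architecture is not given in the abstract, so the sketch below is just a generic patch-based CNN in PyTorch whose input-channel count is a constructor argument; the patch size and the six-class output are placeholders, not values from the study.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Small patch-based CNN for LULC classification. Generic sketch; its only
    point is that the number of input bands (4 vs. 10) is a parameter."""
    def __init__(self, in_bands: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, in_bands, patch, patch)
        return self.classifier(self.features(x).flatten(1))

# Four-band (NIR, green, blue, red) vs. ten-band (plus SWIR and red-edge) variants.
model_4band = PatchCNN(in_bands=4, n_classes=6)
model_10band = PatchCNN(in_bands=10, n_classes=6)
logits = model_4band(torch.randn(8, 4, 9, 9))  # toy batch of 8 patches
print(logits.shape)                            # torch.Size([8, 6])
```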
Sentinel-2 Data for Land Cover/Use Mapping: A Review
by Simwanda, Matamyo; Murayama, Yuji; Nyirenda, Vincent
in classification; land cover/use; remote sensing
2020
The advancement in satellite remote sensing technology has revolutionised the approaches to monitoring the Earth's surface. The development of the Copernicus Programme by the European Space Agency (ESA) and the European Union (EU) has contributed to the effective monitoring of the Earth's surface by producing the Sentinel-2 multispectral products. Sentinel-2 satellites are the second constellation of the ESA Sentinel missions and carry onboard multispectral scanners. The primary objective of the Sentinel-2 mission is to provide high-resolution satellite data for land cover/use monitoring, climate change and disaster monitoring, as well as complementing other satellite missions such as Landsat. Since the launch of the Sentinel-2 multispectral instruments in 2015, there have been many studies on land cover/use classification which use Sentinel-2 images. However, no review studies have been dedicated to the application of ESA Sentinel-2 data to land cover/use monitoring. Therefore, this review focuses on two aspects: (1) assessing the contribution of ESA Sentinel-2 to land cover/use classification, and (2) exploring the performance of Sentinel-2 data in different applications (e.g., forest, urban area and natural hazard monitoring). The present review shows that Sentinel-2 has a positive impact on land cover/use monitoring, specifically in the monitoring of crops, forests, urban areas, and water resources. The contemporary high adoption and application of Sentinel-2 can be attributed to its higher spatial resolution (10 m) compared with other medium-spatial-resolution imagery, its high temporal resolution of 5 days, and the availability of the red-edge bands with multiple applications. The ability to integrate Sentinel-2 data with other remotely sensed data, as part of data analysis, improves the overall accuracy (OA) when working with Sentinel-2 images. The free access policy drives the increasing use of Sentinel-2 data, especially in developing countries where financial resources for the acquisition of remotely sensed data are limited. The literature also shows that the use of Sentinel-2 data produces high accuracies (>80%) with machine-learning classifiers such as support vector machine (SVM) and random forest (RF). However, other classifiers such as maximum likelihood analysis are also common. Although Sentinel-2 offers many opportunities for land cover/use classification, there are challenges, which include mismatches with Landsat-8 OLI data, a lack of thermal bands, and the differences in spatial resolution among the bands of Sentinel-2. Sentinel-2 data show promise and have the potential to contribute significantly towards land cover/use monitoring.
Journal Article
Comparison of Random Forest, k-Nearest Neighbor, and Support Vector Machine Classifiers for Land Cover Classification Using Sentinel-2 Imagery
by Thanh Noi, Phan; Kappas, Martin
in Classification; classification algorithms; k-Nearest Neighbor (kNN)
2017
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets.
Journal Article
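The comparison above evaluates RF, kNN, and SVM on Sentinel-2 pixels over a range of per-class training sample sizes. A scikit-learn sketch of that kind of experiment is shown below, under the assumption that training pixels and labels have already been extracted as arrays; the hyperparameters are common defaults, not the values tuned in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def compare_classifiers(X_train, y_train, X_test, y_test, samples_per_class):
    """Train RF, kNN, and SVM on a per-class subsample of the training pixels
    and return their overall accuracies, loosely following the paper's setup."""
    idx = []
    for c in np.unique(y_train):
        c_idx = np.flatnonzero(y_train == c)
        idx.extend(c_idx[:samples_per_class])        # first N pixels of each class
    Xs, ys = X_train[idx], y_train[idx]
    models = {
        'RF': RandomForestClassifier(n_estimators=500, random_state=0),
        'kNN': KNeighborsClassifier(n_neighbors=5),
        'SVM': SVC(kernel='rbf', C=1.0, gamma='scale'),
    }
    return {name: accuracy_score(y_test, m.fit(Xs, ys).predict(X_test))
            for name, m in models.items()}

# Usage (with arrays of band values per pixel and integer class labels):
# results = compare_classifiers(X_train, y_train, X_test, y_test, samples_per_class=750)
```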
Determining the features on the image of Satellite Sentinel-2 with different resolution and comparing them with images of Satellite Sentinel-1
Land cover-land use (LCLU) classification tasks can take advantage of the fusion of radar and optical remote sensing data, which generally increases mapping accuracy. Here we propose a methodological approach that uses the new European Space Agency Sentinel-2 imagery for accurate land cover mapping of a portion of the Baghdad region. The first step is to download the Sentinel-2 image in its correct geographic location, then extract images at 10-metre, 20-metre, and 60-metre resolution, digitise point, line, and polygon features from each resolution image, and discuss the differences between them. The aim of this study is to determine which resolution gives better accuracy.
Regarding the differences between the features, reducing the resolution makes it more difficult to identify them.
Landmarks appear more clearly as the image resolution increases, so features are clearer in the 10-metre image than in the 20-metre and 60-metre images. The Sentinel-2 images are also clearer and much easier to work with than the Sentinel-1 images.
Journal Article
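The study above compares how well point, line, and polygon features can be identified at Sentinel-2's 10 m, 20 m, and 60 m band resolutions. A small rasterio sketch along those lines is given below; the JP2 file names are hypothetical, and the bilinear upsampling to the 10 m grid is only one convenient way to place the bands side by side for visual comparison.

```python
import rasterio
from rasterio.enums import Resampling

# Hypothetical paths to single-band Sentinel-2 JP2 files at their native resolutions.
paths = {
    10: 'S2_B04_10m.jp2',   # red, 10 m
    20: 'S2_B11_20m.jp2',   # SWIR, 20 m
    60: 'S2_B01_60m.jp2',   # coastal aerosol, 60 m
}

# Read each band and resample the coarser ones to the 10 m grid size so the
# same features can be inspected at the three resolutions.
with rasterio.open(paths[10]) as ref:
    ref_shape = (ref.height, ref.width)
    data_10m = ref.read(1)

for res in (20, 60):
    with rasterio.open(paths[res]) as src:
        native = src.read(1)
        upsampled = src.read(1, out_shape=ref_shape,
                             resampling=Resampling.bilinear)
    print(res, 'm native shape:', native.shape, '-> resampled to', upsampled.shape)
```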
Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets
2021
Identifying permanent water and temporary water in flood disasters efficiently has mainly relied on change detection methods applied to multi-temporal remote sensing imagery, but estimating the water type in flood disaster events from only post-flood remote sensing imagery still remains challenging. Research progress in recent years has demonstrated the excellent potential of multi-source data fusion and deep learning algorithms in improving flood detection, although this field has only begun to be explored due to the lack of large-scale labelled remote sensing images of flood events. Here, we present new deep learning algorithms and a multi-source data fusion driven flood inundation mapping approach by leveraging the large-scale, publicly available Sen1Floods11 dataset, consisting of roughly 4831 labelled Sentinel-1 SAR and Sentinel-2 optical images gathered from flood events worldwide in recent years. Specifically, we propose an automatic segmentation method for surface water, permanent water, and temporary water identification, with all tasks sharing the same convolutional neural network architecture. We utilize focal loss to deal with the class (water/non-water) imbalance problem. Thorough ablation experiments and analysis confirmed the effectiveness of the various proposed designs. In comparison experiments, the method proposed in this paper is superior to other classical models. Our model achieves a mean Intersection over Union (mIoU) of 52.99%, Intersection over Union (IoU) of 52.30%, and Overall Accuracy (OA) of 92.81% on the Sen1Floods11 test set. On the Sen1Floods11 Bolivia test set, our model also achieves very high mIoU (47.88%), IoU (76.74%), and OA (95.59%) and shows good generalization ability.
Journal Article
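The flood-mapping abstract above notes that focal loss is used to handle the water/non-water class imbalance. A minimal PyTorch sketch of a binary focal loss is included below; the alpha and gamma values are the common defaults from the original focal-loss paper (Lin et al., 2017), not necessarily the settings used for Sen1Floods11.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss for binary (water / non-water) segmentation, down-weighting
    easy background pixels; alpha and gamma are assumed defaults."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)          # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Toy usage: logits and 0/1 masks for a batch of 2 single-channel 64x64 tiles.
logits = torch.randn(2, 1, 64, 64)
masks = (torch.rand(2, 1, 64, 64) > 0.9).float()         # ~10% water pixels
print(binary_focal_loss(logits, masks).item())
```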
Synergetic Use of Sentinel-1 and Sentinel-2 Data for Soil Moisture Mapping at 100 m Resolution
2017
The recent deployment of ESA's Sentinel operational satellites has established a new paradigm for remote sensing applications. In this context, Sentinel-1 radar images have made it possible to retrieve surface soil moisture with a high spatial and temporal resolution. This paper presents two methodologies for the retrieval of soil moisture from remotely-sensed SAR images, with a spatial resolution of 100 m. These algorithms are based on the interpretation of Sentinel-1 data recorded in the VV polarization, which is combined with Sentinel-2 optical data for the analysis of vegetation effects over a site in Urgell (Catalunya, Spain). The first algorithm has already been applied to observations in West Africa by Zribi et al., 2008, using low spatial resolution ERS scatterometer data, and is based on a change detection approach. In the present study, this approach is applied to Sentinel-1 data and optimizes the inversion process by taking advantage of the high repeat frequency of the Sentinel observations. The second algorithm relies on a new method, based on the difference between backscattered Sentinel-1 radar signals observed on two consecutive days, expressed as a function of the NDVI optical index. Both methods are applied to almost 1.5 years of satellite data (July 2015–November 2016), and are validated using field data acquired at a study site. This leads to an RMS error in volumetric moisture of approximately 0.087 m³/m³ and 0.059 m³/m³ for the first and second methods, respectively. No site calibrations are needed with these techniques, and they can be applied to any vegetation-covered area for which time series of SAR data have been recorded.
Journal Article
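The soil-moisture abstract above describes a change-detection inversion of Sentinel-1 VV backscatter, combined with Sentinel-2 NDVI for vegetation effects. The NumPy sketch below shows only a generic change-detection scaling between per-pixel dry and wet references, not the authors' exact formulation; it omits the NDVI-based vegetation correction, and the moisture bounds are assumed values.

```python
import numpy as np

def change_detection_moisture(sigma0_db, mv_min=0.0, mv_max=0.45):
    """Generic change-detection scaling of a Sentinel-1 VV backscatter time
    series (dB) to volumetric soil moisture (m³/m³). Per-pixel dry/wet
    references are taken from the series itself; simplified stand-in only."""
    sigma_dry = np.nanmin(sigma0_db, axis=0)      # driest observation per pixel
    sigma_wet = np.nanmax(sigma0_db, axis=0)      # wettest observation per pixel
    scaled = (sigma0_db - sigma_dry) / (sigma_wet - sigma_dry + 1e-6)
    return mv_min + np.clip(scaled, 0.0, 1.0) * (mv_max - mv_min)

# Toy stack: 30 dates of a 100 x 100 pixel VV backscatter scene in dB.
stack = -12.0 + 4.0 * np.random.default_rng(1).random((30, 100, 100))
moisture = change_detection_moisture(stack)
print(moisture.shape, float(moisture.mean()))
```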
Fusion of Sentinel-2 and PlanetScope Imagery for Vegetation Detection and Monitoring
2018
Satellite imagery at different spatial resolutions, with near-daily global revisit times, provides valuable information about the Earth's surface in a short time. Using remote sensing methods, satellite imagery supports different applications such as environmental development, urban monitoring, etc. For accurate vegetation detection and monitoring, especially in urban areas, the spectral characteristics as well as the spatial resolution of the satellite imagery are important. In this research, 10-m and 20-m Sentinel-2 and 3.7-m PlanetScope satellite imagery were used. Although Sentinel-2 satellite imagery is often used in current research for land-cover classification or vegetation detection and monitoring, we decided to test a fusion of Sentinel-2 imagery with PlanetScope because of its higher spatial resolution. The main goal of this research is a new method for fusing Sentinel-2 and PlanetScope imagery. The fusion method was validated based on land-cover classification accuracy. Three land-cover classifications were made, based on the Sentinel-2, PlanetScope, and fused imagery. As expected, the results show better accuracy for the PlanetScope and fused imagery than for the Sentinel-2 imagery; the PlanetScope and fused imagery have almost the same accuracy. For the vegetation monitoring test, the Normalized Difference Vegetation Index (NDVI) was calculated from the Sentinel-2 and fused imagery and the results were compared. All methods and tests, including image fusion and satellite imagery classification, were carried out in free and open-source programs. The method developed and presented in this paper can easily be applied to other fields, such as urbanism, forestry, agronomy, ecology and geology.
Journal Article
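The fusion study above computes NDVI from Sentinel-2 and the fused imagery for vegetation monitoring. For reference, the sketch below computes NDVI = (NIR − Red) / (NIR + Red) from Sentinel-2 band 8 (NIR) and band 4 (red) with rasterio and NumPy; the file paths are placeholders, and cloud masking and reflectance scaling are omitted.

```python
import numpy as np
import rasterio

def ndvi_from_sentinel2(red_path, nir_path):
    """NDVI = (NIR - Red) / (NIR + Red) from Sentinel-2 B4 (red) and B8 (NIR).
    Paths are placeholders; assumes both bands share the same 10 m grid."""
    with rasterio.open(red_path) as r, rasterio.open(nir_path) as n:
        red = r.read(1).astype('float32')
        nir = n.read(1).astype('float32')
    denom = nir + red
    return np.where(denom > 0, (nir - red) / denom, np.nan)

# Usage with hypothetical band files:
# ndvi = ndvi_from_sentinel2('S2_B04_10m.jp2', 'S2_B08_10m.jp2')
```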