Catalogue Search | MBRL
28,835 result(s) for "Spatial resolution"
Comparing Deep Neural Networks, Ensemble Classifiers, and Support Vector Machine Algorithms for Object-Based Urban Land Use/Land Cover Classification
by Brian Alan Johnson, Dongmei Chen, Shahab Eddin Jozdani
in Algorithms, Artificial intelligence, Artificial neural networks
2019
With the advent of high-spatial-resolution (HSR) satellite imagery, urban land use/land cover (LULC) mapping has become one of the most popular applications in remote sensing. Due to the importance of context information (e.g., size/shape/texture) for classifying urban LULC features, Geographic Object-Based Image Analysis (GEOBIA) techniques are commonly employed for mapping urban areas. Regardless of adopting a pixel- or object-based framework, the selection of a suitable classifier is of critical importance for urban mapping. The popularity of deep learning (DL) (or deep neural networks (DNNs)) for image classification has recently skyrocketed, but it is still arguable if, or to what extent, DL methods can outperform other state-of-the-art ensemble and/or Support Vector Machine (SVM) algorithms in the context of urban LULC classification using GEOBIA. In this study, we carried out an experimental comparison among different architectures of DNNs (i.e., regular deep multilayer perceptron (MLP), regular autoencoder (RAE), sparse autoencoder (SAE), variational autoencoder (VAE), and convolutional neural networks (CNN)), common ensemble algorithms (Random Forests (RF), Bagging Trees (BT), Gradient Boosting Trees (GB), and Extreme Gradient Boosting (XGB)), and SVM to investigate their potential for urban mapping using a GEOBIA approach. We tested the classifiers on two RS images (with spatial resolutions of 30 cm and 50 cm). Based on our experiments, we drew three main conclusions. First, we found that the MLP model was the most accurate classifier. Second, unsupervised pretraining with the use of autoencoders led to no improvement in the classification results. In addition, the small difference between the classification accuracies of MLP and those of other models such as the SVM, GB, and XGB classifiers demonstrated that other state-of-the-art machine learning classifiers are still versatile enough to handle the mapping of complex landscapes.
Finally, the experiments showed that the integration of CNN and GEOBIA could not lead to more accurate results than the other classifiers applied.
Journal Article
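The kind of tabular, object-level classifier comparison this abstract describes can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the synthetic features merely stand in for per-object size/shape/texture statistics produced by a GEOBIA segmentation.

```python
# Sketch: comparing an MLP, an SVM, and gradient boosting on synthetic
# "object features", mirroring the study's tabular GEOBIA-style setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-object features extracted after segmentation.
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0, random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
print(scores)
```

On real object features the ranking can differ, which is exactly the question the study investigates.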
Pre-Trained AlexNet Architecture with Pyramid Pooling and Supervision for High Spatial Resolution Remote Sensing Image Scene Classification
by Han, Xiaobing; Cao, Liqin; Zhang, Liangpei
in Architectural engineering, Artificial neural networks, Big Data
2017
The rapid development of high spatial resolution (HSR) remote sensing imagery techniques not only provides a considerable number of datasets for scene classification tasks but also demands an appropriate choice of scene classification method when faced with finite labeled samples. AlexNet, as a relatively simple convolutional neural network (CNN) architecture, has achieved great success in scene classification tasks and has been proven to be an excellent foundational hierarchical and automatic scene classification technique. However, current HSR remote sensing imagery scene classification datasets typically consist of small quantities of samples and simple categories, and the limited annotated samples easily cause non-convergence. For HSR remote sensing imagery, multi-scale information of the same scenes can represent the scene semantics to a certain extent but lacks an efficient fusion representation. Meanwhile, the current pre-trained AlexNet architecture lacks appropriate supervision for enhancing the performance of the model, which easily causes overfitting. In this paper, an improved pre-trained AlexNet architecture named pre-trained AlexNet-SPP-SS is proposed, which incorporates spatial pyramid pooling (SPP) and side supervision (SS) to address these two issues. Extensive experiments conducted on the UC Merced dataset and the Google Image dataset of SIRI-WHU demonstrate that the proposed pre-trained AlexNet-SPP-SS model is superior to the original AlexNet architecture as well as to traditional scene classification methods.
Journal Article
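Spatial pyramid pooling, the SPP component named in the abstract, turns a feature map of any size into a fixed-length vector by max-pooling over grids at several levels. A minimal numpy sketch of the pooling step (an illustration, not the paper's implementation; the level set `(1, 2, 4)` is one common choice):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over an n x n grid per level and
    concatenate, yielding a fixed-length vector regardless of H and W."""
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # Bin edges that cover the full map even when H, W are not divisible by n.
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)  # length = C * sum(n * n for n in levels)

# Two differently sized inputs produce vectors of the same length.
vec_a = spatial_pyramid_pool(np.random.rand(8, 13, 17))
vec_b = spatial_pyramid_pool(np.random.rand(8, 32, 32))
```

This size-invariance is what lets SPP sit between convolutional layers and fixed-width fully connected layers.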
Integrating spatial gene expression and breast tumour morphology via deep learning
2020
Spatial transcriptomics allows for the measurement of RNA abundance at a high spatial resolution, making it possible to systematically link the morphology of cellular neighbourhoods and spatially localized gene expression. Here, we report the development of a deep learning algorithm for the prediction of local gene expression from haematoxylin-and-eosin-stained histopathology images using a new dataset of 30,612 spatially resolved gene expression data matched to histopathology images from 23 patients with breast cancer. We identified over 100 genes, including known breast cancer biomarkers of intratumoral heterogeneity and the co-localization of tumour growth and immune activation, the expression of which can be predicted from the histopathology images at a resolution of 100 µm. We also show that the algorithm generalizes well to The Cancer Genome Atlas and to other breast cancer gene expression datasets without the need for re-training. Predicting the spatially resolved transcriptome of a tissue directly from tissue images may enable image-based screening for molecular biomarkers with spatial variation.
Deep learning can predict spatial variations in gene expression from haematoxylin-and-eosin-stained histopathology images of patients with cancer.
Journal Article
Capability of Remote Sensing Images to Distinguish the Urban Surface Materials: A Case Study of Venice City
2021
Many countries share an effort to understand the impact of growing urban areas on the environment. Spatial, spectral, and temporal resolutions of remote sensing images offer unique access to this information. Nevertheless, their use is limited because urban surface materials exhibit a great diversity of types and are not well spatially and spectrally distinguishable. This work aims to quantify the effect of these spatial and spectral characteristics of urban surface materials on their retrieval from images. To avoid other sources of error, synthetic images of the historical center of Venice were analyzed. A hyperspectral library, which characterizes the main materials of Venice city, together with knowledge of the city, allowed the creation of a starting image at a spatial resolution of 30 cm and spectral resolution of 3 nm with a spectral range of 365–2500 nm, which was spatially and spectrally resampled to match the characteristics of most remote sensing sensors. Linear spectral mixture analysis was applied to every resampled image to evaluate and compare their capabilities to distinguish urban surface materials. In short, the capability depends mainly on spatial resolution, secondarily on spectral range and mixed pixel percentage, and lastly on spectral resolution; impervious surfaces are more distinguishable than pervious surfaces. This analysis of capability behavior is very important for selecting more suitable remote sensing images and/or deciding on the complementary use of different data.
Journal Article
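Linear spectral mixture analysis models each pixel spectrum as a weighted sum of endmember spectra and solves for the weights (material fractions). A toy numpy sketch with invented endmember values, not the study's library:

```python
import numpy as np

# Hypothetical endmember spectra (rows: 4 bands; columns: 3 materials,
# say brick, stone, water) -- invented numbers for illustration.
E = np.array([[0.30, 0.55, 0.02],
              [0.35, 0.60, 0.03],
              [0.45, 0.58, 0.02],
              [0.50, 0.62, 0.01]])

# A mixed pixel: 60% brick, 30% stone, 10% water.
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true

# Unconstrained least-squares unmixing; fractions are recovered exactly here
# because the pixel is a noise-free linear mixture.
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

With real imagery the fit degrades as pixels grow (more mixing) and bands are aggregated, which is the capability trade-off the study quantifies.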
Comparing ultra-high spatial resolution remote-sensing methods in mapping peatland vegetation
by Tuittila, Eeva-Stiina; Juutinen, Sari; Aurela, Mika
in Accuracy, Biogeochemical cycles, Classification
2019
Questions: How can floristic variation in a patterned fen be mapped in an ecologically meaningful way? Can plant communities be delineated with species data generalized into plant functional types? What are the benefits and drawbacks of the two selected remote-sensing approaches in mapping vegetation patterns, namely (a) regression models of floristically defined fuzzy plant community clusters and (b) classification of predefined habitat types that combine vegetation and land cover information?
Location: A treeless 0.4 km² mesotrophic string–flark fen in Kaamanen, northern Finland.
Methods: We delineated plant community clusters with fuzzy c-means clustering based on two different inventories of plant species and functional type distribution. We used multiple optical remote-sensing data sets, digital elevation models and vegetation height models derived from drone, aerial and satellite platforms, from ultra-high to very high spatial resolution (0.05–3 m), in an object-based approach. We mapped spatial patterns for fuzzy and crisp plant community clusters using boosted regression trees, and fuzzy and crisp habitat types using supervised random forest classification.
Results: Clusters delineated with species-specific data or plant functional type data produced comparable results. However, species-specific data for graminoids and mosses improved the accuracy of clustering in the case of flarks and string margins. Mapping accuracy was higher for habitat types (overall accuracy 0.72) than for fuzzy plant community clusters (R² values between 0.27 and 0.67).
Conclusions: For ecologically meaningful mapping of patterned fen vegetation, plant functional types provide enough information. However, if the aim is to capture floristic variation in vegetation as realistically as possible, species-specific data should be used. Maps of plant community clusters and habitat types complement each other.
While fuzzy plant communities appear to be floristically most accurate, crisp habitat types are easiest to interpret and apply to different landscape and biogeochemical cycle analyses and modeling. We tested if plant communities can be delineated using plant functional types instead of species‐specific data. We found that the approaches produce comparable results. We compared two remote‐sensing approaches in mapping vegetation patterns. Regression models of floristically defined plant communities reveal the fuzziness of vegetation. Classification of pre‐defined habitat types is easier to interpret and has higher mapping accuracy.
Journal Article
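The fuzzy c-means step used to delineate community clusters assigns each sample a graded membership in every cluster; crisp classes fall out by taking the largest membership. A minimal self-contained numpy implementation (an illustrative sketch of the standard algorithm, not the authors' workflow):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, memberships), where
    memberships[i, k] is the degree to which sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None] # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))           # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated toy "communities" in a 2-D trait space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
crisp = U.argmax(axis=1)  # "crisp" assignment derived from fuzzy memberships
```

The fuzzy memberships are what the study's regression models map, while the argmax gives the crisp classes that are easier to interpret.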
Single-Temporal Supervised Learning for Universal Remote Sensing Change Detection
by Ma, Ailong; Zhang, Liangpei; Zhong, Yanfei
in Binary stars, Change detection, Computer vision
2024
The bitemporal supervised learning paradigm dominates remote sensing change detection, relying on numerous labeled bitemporal image pairs, especially for high spatial resolution (HSR) remote sensing imagery. However, it is very expensive and labor-intensive to label change regions in large-scale bitemporal HSR remote sensing image pairs. In this paper, we propose single-temporal supervised learning (STAR) for universal remote sensing change detection from a new perspective: exploiting changes between unpaired images as supervisory signals. STAR enables us to train a high-accuracy change detector using only unpaired labeled images, and the detector generalizes to real-world bitemporal image pairs. To demonstrate the flexibility and scalability of STAR, we design a simple yet unified change detector, termed ChangeStar2, capable of addressing binary change detection, object change detection, and semantic change detection in one architecture. ChangeStar2 achieves state-of-the-art performance on eight public remote sensing change detection datasets, covering the above two supervised settings, multiple change types, and multiple scenarios.
Journal Article
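The supervisory signal the abstract describes can be illustrated in a few lines: pair two unrelated labeled images and treat the symmetric difference (XOR) of their semantic masks as a pseudo-bitemporal change label. This is a toy sketch of the idea as stated in the abstract, not the paper's training code:

```python
import numpy as np

# Semantic masks from two *unpaired* single-temporal images (invented data).
rng = np.random.default_rng(0)
mask_a = rng.integers(0, 2, (8, 8))  # e.g., building mask from image A
mask_b = rng.integers(0, 2, (8, 8))  # building mask from an unrelated image B

# Pseudo change label: a pixel "changed" iff the semantics differ.
pseudo_change = np.logical_xor(mask_a, mask_b).astype(np.uint8)
# A detector trained on (image_a, image_b) -> pseudo_change learns to compare
# semantics between its two inputs, which transfers to real bitemporal pairs.
```

No real before/after pair is ever needed to produce these labels, which is what removes the expensive bitemporal annotation step.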
Water Body Extraction from Very High Spatial Resolution Remote Sensing Data Based on Fully Convolutional Networks
2019
This paper studies the use of the Fully Convolutional Networks (FCN) model in the extraction of water bodies from Very High spatial Resolution (VHR) optical images in the case of limited training samples. Two different seasonal GaoFen-2 images with a spatial resolution of 0.8 m in the south of the Beijing metropolitan area were used to extensively validate the FCN model. Four key factors related to the performance of the FCN model, including input features, training data, transfer learning, and data augmentation, were empirically analyzed by using 36 combinations of parameter settings. Our findings indicate that the FCN-based method can work as a robust and cost-effective tool in the extraction of water bodies from VHR images. The FCN-based method trained on a small amount of labeled L1A data can also significantly outperform the Normalized Difference Water Index (NDWI) based method, the Support Vector Machine (SVM) based method, and the Sparsity Model (SM) based method, even when radiometric normalization and spatial contexts are introduced to preprocess the input data for the latter three methods. The advantages of the FCN-based method are mainly due to its capability to exploit spatial contexts in the image, especially in urban areas with mixed water and shadows. Though the settings of the four key factors significantly affect the performance of the FCN-based method, choosing a qualified setting for the FCN model is not difficult. Our lessons learned from the successful use of the FCN model for the extraction of water from VHR images can be extended to extract other land covers.
Journal Article
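The NDWI baseline the study compares against is a simple band ratio, (Green − NIR) / (Green + NIR), thresholded to separate water from land. A minimal numpy sketch with invented reflectance values:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids 0/0

# Toy reflectances: water is brighter in green and dark in NIR;
# vegetation is the opposite.
green = np.array([[0.10, 0.25], [0.09, 0.30]])
nir   = np.array([[0.40, 0.05], [0.45, 0.04]])
water = ndwi(green, nir) > 0  # simple zero threshold
```

The index cannot use spatial context, which is precisely why the FCN outperforms it where water mixes with shadows.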
Maize Yield Estimation in Intercropped Smallholder Fields Using Satellite Data in Southern Malawi
by Kambombe, Oscar; Dash, Jadunandan; Chimimba, Ellasy Gulule
in Accuracy, accuracy assessment, Agricultural practices
2022
Satellite data have high potential for estimating crop yield, which is crucial to understanding the determinants of yield gaps and therefore improving food production, particularly in sub-Saharan Africa (SSA). However, accurate assessment of crop yield and its spatial variation is challenging in SSA because of small field sizes, widespread intercropping practices, and inadequate field observations. This study aimed firstly to evaluate the potential of satellite data in estimating maize yield in intercropped smallholder fields, and secondly to assess how factors such as satellite data spatial and temporal resolution, within-field variability, field size, harvest index, and intercropping practices affect model performance. Using in situ data (field size, yield, intercrop occurrence, harvest index, and leaf area index), statistical models were developed to predict yield from multisource satellite data (i.e., Sentinel-2 and PlanetScope). Model accuracy and residuals were assessed against the above factors. Among the 150 investigated fields, nearly half were intercropped with legumes, with an average plot size of 0.17 ha. Despite mixed pixels resulting from intercrops, the model based on the Sentinel-2 red-edge vegetation index (VI) could estimate maize yield with moderate accuracy (R² = 0.51, nRMSE = 19.95%), while higher-spatial-resolution satellite data (e.g., PlanetScope, 3 m) showed only a marginal improvement in performance (R² = 0.52, nRMSE = 19.95%). Seasonal peak VI values provided better accuracy than seasonal mean/median VI, suggesting that peak VI values may capture the signal of the dominant upper maize foliage layer and may be less affected by understory intercrops. Still, intercropping reduces model accuracy, as the model residuals are lower in fields with pure maize (1 t/ha) than in intercropped fields (1.3 t/ha).
This study provides a reference for operational maize yield estimation in intercropped smallholder fields, using free satellite data in Southern Malawi. It also highlights the difficulties of estimating yield in intercropped fields using satellite imagery, and stresses the importance of sufficient satellite observations for monitoring intercropping practices in SSA.
Journal Article
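The modelling step the abstract describes, regressing yield on the seasonal peak of a red-edge vegetation index, can be sketched with numpy. Everything here is synthetic and illustrative (invented band values and yields, an NDRE-style index as one common red-edge VI), not the study's data or exact model:

```python
import numpy as np

def ndre(red_edge, nir):
    """Red-edge vegetation index (NDRE-style): (NIR - RE) / (NIR + RE)."""
    return (nir - red_edge) / (nir + red_edge + 1e-12)

# Hypothetical per-field seasonal band series: 40 fields x 10 acquisition dates.
rng = np.random.default_rng(0)
n_fields = 40
re_series = rng.uniform(0.1, 0.3, (n_fields, 10))
nir_series = rng.uniform(0.3, 0.6, (n_fields, 10))

# Seasonal peak VI per field -- the study's best-performing predictor.
peak_vi = ndre(re_series, nir_series).max(axis=1)

# Synthetic yields loosely tied to peak VI, for illustration only.
yield_t_ha = 0.5 + 4.0 * peak_vi + rng.normal(0.0, 0.1, n_fields)

# Simple linear model: yield ~ peak VI, evaluated by R^2 on the fit.
slope, intercept = np.polyfit(peak_vi, yield_t_ha, 1)
pred = slope * peak_vi + intercept
r2 = 1.0 - ((yield_t_ha - pred) ** 2).sum() / ((yield_t_ha - yield_t_ha.mean()) ** 2).sum()
```

Using the seasonal peak rather than the mean mirrors the study's finding that the peak tracks the upper maize canopy and suppresses understory intercrop signal.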
Influence of high-resolution data on the assessment of forest fragmentation
2019
Context: Remote sensing has been a foundation of landscape ecology. The spatial resolution (pixel size) of remotely sensed land cover products has improved since the introduction of landscape ecology in the United States. Because patterns depend on spatial resolution, emerging improvements in the spatial resolution of land cover may lead to new insights about the scaling of landscape patterns.
Objective: We compared forest fragmentation measures derived from very high resolution (1 m²) data with the same measures derived from the commonly used (30 m × 30 m; 900 m²) Landsat-based data.
Methods: We applied area-density scaling to binary (forest; non-forest) maps from both sources to derive source-specific estimates of dominant (density ≥ 60%), interior (≥ 90%), and intact (100%) forest.
Results: Switching from low- to high-resolution data produced statistical and geographic shifts in forest spatial patterns. Forest and non-forest features that were "invisible" at low resolution but identifiable at high resolution resulted in higher estimates of dominant and interior forest but lower estimates of intact forest from the high-resolution source. Overall, the high-resolution data detected more forest that was more contagiously distributed, even at larger spatial scales.
Conclusion: We anticipate that improvements in the spatial resolution of remotely sensed land cover products will advance landscape ecology through re-interpretations of patterns and scaling, by fostering new landscape pattern measurements, and by testing new spatial pattern-ecological process hypotheses.
Journal Article
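The area-density scaling named in the methods can be sketched as a moving-window forest-fraction computation followed by the abstract's thresholds (60%, 90%, 100%). A toy numpy illustration on an invented map, not the study's implementation:

```python
import numpy as np

def area_density(forest, w=3):
    """Per-pixel fraction of forest in a w x w moving window
    (windows are truncated at the map boundary)."""
    H, W = forest.shape
    r = w // 2
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = forest[max(0, i - r):i + r + 1,
                               max(0, j - r):j + r + 1].mean()
    return out

# Toy binary forest map (1 = forest).
forest = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 0, 1, 1]])
dens = area_density(forest)
fmask = forest.astype(bool)
dominant = fmask & (dens >= 0.6)   # forest pixels in >= 60% forest windows
interior = fmask & (dens >= 0.9)   # >= 90%
intact   = fmask & (dens == 1.0)   # fully forested windows
```

By construction intact ⊆ interior ⊆ dominant, which is why finer pixels that reveal small clearings raise dominant/interior estimates while lowering intact forest.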
Smallholder Crop Area Mapped with a Semantic Segmentation Deep Learning Method
by Ou, Cong; Du, Zhenrong; Zhang, Tingting
in Agricultural production, Agriculture, Artificial intelligence
2019
The growing population in China has led to an increasing importance of crop area (CA) protection. A powerful tool for acquiring accurate and up-to-date CA maps is automatic mapping using information extracted from high spatial resolution remote sensing (RS) images. RS image information extraction includes feature classification, which is a long-standing research issue in the RS community. Emerging deep learning techniques, such as the deep semantic segmentation network technique, are effective methods to automatically discover relevant contextual features and obtain better image classification results. In this study, we exploited deep semantic segmentation networks to classify and extract CA from high-resolution RS images. WorldView-2 (WV-2) images with only Red-Green-Blue (RGB) bands were used to confirm the effectiveness of the proposed semantic classification framework for information extraction and the CA mapping task. Specifically, we used the deep learning framework TensorFlow to construct a platform for sampling, training, testing, and classifying to extract and map CA on the basis of DeepLabv3+. By leveraging per-pixel and random sample point accuracy evaluation methods, we conclude that the proposed approach can efficiently obtain acceptable accuracy (Overall Accuracy = 95%, Kappa = 0.90) of CA classification in the study area, and the approach performs better than other deep semantic segmentation networks (U-Net/PspNet/SegNet/DeepLabv2) and traditional machine learning methods, such as Maximum Likelihood (ML), Support Vector Machine (SVM), and Random Forest (RF). Furthermore, the proposed approach is highly scalable for the variety of crop types in a crop area. Overall, the proposed approach can train a precise and effective model that is capable of adequately describing the small, irregular fields of smallholder agriculture and handling the great level of detail in RGB high spatial resolution images.
Journal Article
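The per-pixel accuracy figures quoted here (Overall Accuracy, Kappa) both follow from a confusion matrix of true versus predicted labels. A minimal numpy sketch on invented labels:

```python
import numpy as np

def oa_kappa(y_true, y_pred, n_classes):
    """Overall accuracy and Cohen's kappa from per-pixel labels."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement = OA
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2  # agreement expected by chance
    return po, (po - pe) / (1 - pe)        # kappa corrects OA for chance

# Toy binary (crop / non-crop) labels for 8 pixels.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 0])
oa, kappa = oa_kappa(y_true, y_pred, 2)
```

Kappa is the stricter of the two because it discounts agreement that class imbalance alone would produce, which is why papers report both.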