Catalogue Search | MBRL
Explore the vast range of titles available.
303 result(s) for "multisensor classification"
Mapping Succession in Non-Forest Habitats by Means of Remote Sensing: Is the Data Acquisition Time Critical for Species Discrimination?
by Radecka, Aleksandra; Ostrowski, Wojciech; Michalska-Hejduk, Dorota
in Accuracy, Agricultural production, Airborne sensing
2019
The process of secondary succession is one of the most significant threats to non-forest (natural and semi-natural open) Natura 2000 habitats in Poland; shrub and tree encroachment on abandoned, low-productivity agricultural areas, historically used as pastures or meadows, leads to changes in species composition, biodiversity loss, and landscape transformations. There is a perceived need to create a methodology for monitoring vegetation succession by airborne remote sensing, from both quantitative (area, volume) and qualitative (plant species) perspectives. This is likely to become a very important issue for the effective protection of natural and semi-natural habitats and for advancing conservation planning. A key variable to be established when implementing a qualitative approach is the remote sensing data acquisition date, which determines the developmental stage of the trees and shrubs forming the succession. It is essential to choose the optimal date, on which the spectral and geometrical characteristics of the species differ from each other as much as possible. As part of the research presented here, we compare classifications based on remote sensing data acquired during three different parts of the growing season (spring, summer, and autumn) for five study areas. The remote sensing data used include high-resolution hyperspectral imagery and LiDAR (Light Detection and Ranging) data acquired simultaneously from a common aerial platform. Classifications are performed using the random forest algorithm, and the set of features used for classification is determined by a recursive feature elimination procedure. The results show that the time of remote sensing data acquisition influences the ability to differentiate succession species. This was demonstrated by significant differences in the mapped spatial extent of species, which ranged from 33.2% to 56.2% when comparing pairs of maps, and by differences in classification accuracy, which reached ~0.2 when expressed as Cohen's Kappa. For most of the analysed species, the spring and autumn dates turned out to be slightly more favourable than the summer one. However, the final recommendation for the data acquisition time should take into consideration the phenological cycle of the deciduous species present within the research area and the abiotic conditions.
Journal Article
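As an illustration of the pipeline this abstract describes (random forest classification with recursive feature elimination, evaluated with Cohen's Kappa), here is a minimal sketch; the data are synthetic stand-ins for the hyperspectral/LiDAR feature stack, and all parameter values are invented:

```python
# Minimal sketch: random forest + recursive feature elimination,
# scored with Cohen's Kappa. Synthetic data stands in for the real
# hyperspectral/LiDAR features; all names and values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))        # 500 samples, 40 spectral/structural features
y = rng.integers(0, 5, size=500)      # 5 succession species classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFECV(rf, step=2, cv=3)    # recursive feature elimination with CV
selector.fit(X_tr, y_tr)

y_pred = selector.predict(X_te)
print("selected features:", selector.n_features_)
print("Cohen's Kappa:", cohen_kappa_score(y_te, y_pred))
```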
The Methodology for Identifying Secondary Succession in Non-Forest Natura 2000 Habitats Using Multi-Source Airborne Remote Sensing Data
by Radecka, Aleksandra; Ostrowski, Wojciech; Bakuła, Krzysztof
in Aerial photography, Airborne lasers, Airborne sensing
2021
The succession of trees and shrubs is considered one of the threats to non-forest Natura 2000 habitats. Poland, as a member of the European Union, is obliged to monitor these habitats and preserve them in the best possible condition. If threats are identified, it is necessary to take action, as part of so-called active protection, to ensure the preservation of habitats in a non-deteriorated condition. At present, monitoring of Natura 2000 habitats is carried out by expert assessment, i.e., the habitat conservation status is determined during field visits. This process is time- and cost-intensive, and it is subject to the subjectivity of the person performing the assessment. As a result of the research, a methodology for identifying and monitoring the succession process in non-forest Natura 2000 habitats was developed, using multi-sensor remote sensing data: airborne laser scanner (ALS) and hyperspectral (HS) data. The methodology also includes the steps required to analyse the dynamics of the succession process in the past, which is done using archival photogrammetric data (aerial photographs and ALS data). The algorithms implemented within the methodology include structure from motion and dense image matching for processing the archival images, segmentation and Voronoi tessellation for delineating the spatial extent of succession, a machine learning random forest classifier, recursive feature elimination and t-distributed stochastic neighbour embedding algorithms for succession species differentiation, as well as landscape metrics used for threat-level analysis. The proposed methodology has been automated and enables a rapid assessment of the level of threat for a whole given area, as well as in relation to individual Natura 2000 habitats. The methodology was successfully tested on seven research areas located in Poland.
Journal Article
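One step named in this abstract, t-distributed stochastic neighbour embedding for species differentiation, can be sketched as follows; the per-segment feature matrix is a random placeholder, not the paper's data:

```python
# Hedged sketch: project per-segment ALS/HS feature vectors with t-SNE
# to inspect how well succession species separate. Dimensions invented.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 30))  # 300 succession segments, 30 features
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=1).fit_transform(features)
print(embedding.shape)                 # (300, 2) coordinates for plotting
```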
A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics
by Gandomi, Amir H.; Rehman, Eid; Azam, Muhammad Adeel
in Algorithms, Benchmarking, Business metrics
2022
Over the past two decades, medical imaging has been extensively applied to diagnose diseases. Medical experts continue to have difficulty diagnosing diseases from a single modality owing to the limited information it provides. Image fusion may be used to merge images of specific organs with diseases from a variety of medical imaging systems. Multi-modality image fusion can incorporate anatomical and physiological data, making diagnosis simpler. Finding the best multimodal medical database, with fusion quality evaluation, for assessing proposed image fusion methods remains a difficult challenge. As a result, this article provides a complete overview of multimodal medical image fusion methodologies, databases, and quality measurements.
In this article, a compendious review of different medical imaging modalities and an evaluation of the related multimodal databases, along with statistical results, are provided. The medical imaging modalities are organized based on radiation, visible-light imaging, microscopy, and multimodal imaging.
Medical image acquisition is categorized into invasive and non-invasive techniques. The fusion techniques are classified into six main categories: frequency fusion, spatial fusion, decision-level fusion, deep learning, hybrid fusion, and sparse representation fusion. In addition, the associated diseases for each modality and fusion approach are presented. Fusion quality assessment metrics are also summarized.
This survey provides a baseline guideline for medical experts in this technical domain who may combine preoperative, intraoperative, and postoperative imaging, multi-sensor fusion for disease detection, and related tasks. The advantages and drawbacks of the current literature are discussed, and future insights are provided accordingly.
Journal Article
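For a concrete sense of the review's "spatial fusion" category, a toy pixel-level fusion of two co-registered modality slices might look like this; the arrays are synthetic, and real use presumes registered, normalized inputs:

```python
# Illustrative spatial-domain fusion of two co-registered modality
# images (e.g., CT and MRI) with simple pixel-wise rules. Synthetic data.
import numpy as np

ct = np.random.rand(256, 256)     # stand-in for a CT slice, scaled to [0, 1]
mri = np.random.rand(256, 256)    # stand-in for an MRI slice, scaled to [0, 1]

fused_max = np.maximum(ct, mri)   # keep the stronger response per pixel
fused_avg = 0.5 * ct + 0.5 * mri  # simple weighted-average rule
print(fused_max.shape, fused_avg.mean())
```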
Automatic targetless LiDAR–camera calibration: a survey
2023
The recent trend of fusing complementary data from LiDARs and cameras for more accurate perception has made the extrinsic calibration between the two sensors critically important. Indeed, to align the sensors spatially for proper data fusion, the calibration process usually involves estimating the extrinsic parameters between them. Traditional LiDAR–camera calibration methods often depend on explicit targets or human intervention, which can be prohibitively expensive and cumbersome. Recognizing these weaknesses, recent methods usually adopt the automatic targetless calibration approach, which can be conducted at a much lower cost. This paper presents a thorough review of these automatic targetless LiDAR–camera calibration methods. Specifically, based on how the potential cues in the environment are retrieved and utilized in the calibration process, we divide the methods into four categories: information theory based, feature based, ego-motion based, and learning based methods. For each category, we provide an in-depth overview with the insights we have gathered, hoping to serve as potential guidance for researchers in related fields.
Journal Article
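The core operation that most targetless calibration methods optimize, projecting LiDAR points into the image under candidate extrinsics so an alignment cue can score them, can be sketched as follows; intrinsics, extrinsics, and points are all illustrative placeholders:

```python
# Project LiDAR points into the image with candidate extrinsics so a cue
# (mutual information, edge alignment, ...) can score the alignment.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # candidate rotation (extrinsic)
t = np.array([0.1, 0.0, 0.0])        # candidate translation (extrinsic)

pts = np.random.rand(1000, 3) * [10, 10, 20] + [0, 0, 2]  # fake LiDAR points
cam = pts @ R.T + t                  # transform into the camera frame
uv = cam @ K.T                       # perspective projection
uv = uv[:, :2] / uv[:, 2:3]
in_view = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < 640) \
          & (uv[:, 1] >= 0) & (uv[:, 1] < 480)
print("points landing in the image:", in_view.sum())
```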
WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming
by Liebisch, Frank; Khanna, Raghav; Nieto, Juan
in Agricultural economics, Agricultural land, Agriculture
2018
The ability to automatically monitor agricultural fields is an important capability in precision farming, enabling steps towards more sustainable agriculture. Precise, high-resolution monitoring is a key prerequisite for targeted intervention and the selective application of agro-chemicals. The main goal of this paper is to develop a novel crop/weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color-only or color and near-infrared (NIR) channels. Although a map can be generated by processing single segmented images incrementally, this requires additional complex information fusion techniques which struggle to handle high-fidelity maps due to their computational costs and problems in ensuring global consistency. Moreover, computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a standard sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and radiometrically calibrated across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3-channel RGB (red, green, and blue) inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics.
Journal Article
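The sliding-window tiling idea from this abstract, cutting the orthomosaic into tiles matching the DNN input size so no resolution is lost to downsampling, can be sketched roughly as follows; the array shape and tile size are assumptions, not the paper's values:

```python
# Cut a large multispectral orthomosaic into DNN-input-sized tiles.
# Shapes are placeholders; 9 channels mirrors the paper's best model.
import numpy as np

orthomosaic = np.random.rand(4096, 4096, 9)  # synthetic 9-channel mosaic
tile = 480                                   # assumed DNN input size

tiles = [
    orthomosaic[r:r + tile, c:c + tile, :]
    for r in range(0, orthomosaic.shape[0] - tile + 1, tile)
    for c in range(0, orthomosaic.shape[1] - tile + 1, tile)
]
print(len(tiles), tiles[0].shape)  # each tile feeds the network directly
```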
A Review of Data Fusion Techniques
The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.
Journal Article
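As a toy instance of the review's third category, decision fusion, a majority vote over hard labels from several classifiers could look like this (the predictions are made up):

```python
# Decision fusion by majority vote over hard labels. Invented predictions.
import numpy as np

preds = np.array([
    [0, 1, 1, 2],   # classifier A
    [0, 1, 2, 2],   # classifier B
    [1, 1, 1, 2],   # classifier C
])
# count label occurrences per sample (column) and take the argmax
n_classes = preds.max() + 1
votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
fused = votes.argmax(axis=0)
print(fused)        # -> [0 1 1 2]
```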
Crop Type and Land Cover Mapping in Northern Malawi Using the Integration of Sentinel-1, Sentinel-2, and PlanetScope Satellite Data
by Kpienbaareh, Daniel; Bezner Kerr, Rachel; Sun, Xiaoxuan
in Agricultural land, Agriculture, Algorithms
2021
Mapping crop types and land cover in smallholder farming systems in sub-Saharan Africa remains a challenge due to data costs, high cloud cover, and the poor temporal resolution of satellite data. With improvements in satellite technology and image processing techniques, there is potential for integrating data from sensors with different spectral characteristics and temporal resolutions to effectively map crop types and land cover. In our Malawi study area, it is common that no cloud-free images are available for the entire crop growth season. The goal of this experiment is to produce detailed crop type and land cover maps in agricultural landscapes using Sentinel-1 (S-1) radar data, Sentinel-2 (S-2) optical data, S-2 and PlanetScope data fusion, and the S-1 C2 matrix and S-1 H/α polarimetric decomposition. We evaluated the ability to combine these data to map crop types and land cover in two smallholder farming locations. The random forest algorithm, trained with crop and land cover type data collected in the field, complemented with samples digitized from Google Earth Pro and DigitalGlobe, was used for the classification experiments. The results show that the S-2 and PlanetScope fused image + S-1 covariance (C2) matrix + H/α polarimetric decomposition (an entropy-based decomposition method) outperformed all other image combinations, producing higher overall accuracies (OAs) (>85%) and Kappa coefficients (>0.80). These OAs represent a 13.53% and 11.7% improvement over the Sentinel-2-only experiment (OAs < 80%) for Thimalala and Edundu, respectively. The experiment also provided accurate insights into the distribution of crop and land cover types in the area. The findings suggest that in cloud-dense and resource-poor locations, fusing high temporal resolution radar data with available optical data presents an opportunity for the operational mapping of crop types and land cover to support food security and environmental management decision-making.
Journal Article
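The accuracy bookkeeping used to compare such experiments, overall accuracy (OA) and Cohen's Kappa from a confusion matrix, can be reproduced in a few lines; the matrix below is invented for illustration:

```python
# OA and Cohen's Kappa from a confusion matrix. Matrix values invented.
import numpy as np

cm = np.array([[50, 3, 2],   # rows: reference class, cols: predicted class
               [4, 45, 6],
               [2, 5, 48]])
n = cm.sum()
oa = np.trace(cm) / n                                # overall accuracy
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (oa - pe) / (1 - pe)
print(f"OA = {oa:.3f}, Kappa = {kappa:.3f}")         # -> OA = 0.867, Kappa = 0.800
```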
Model Fusion for Building Type Classification from Aerial and Street View Images
by Wang, Yuanyuan; Zhu, Xiao Xiang; Hoffmann, Eike Jens
in Accuracy, aerial image, Artificial intelligence
2019
This article addresses the question of mapping building functions jointly using both aerial and street view images via deep learning techniques. One of the central challenges here is determining a data fusion strategy that can cope with heterogeneous image modalities. We demonstrate that geometric combinations of the features of these two types of images, especially in an early stage of the convolutional layers, often have a destructive effect due to the spatial misalignment of the features. Therefore, we address this problem through a decision-level fusion of a diverse ensemble of models trained on each image type independently. In this way, the significant differences in appearance between aerial and street view images are taken into account. Compared to the common multi-stream end-to-end fusion approaches proposed in the literature, we are able to increase the precision scores from 68% to 76%. Another challenge is that the sophisticated classification schemes needed for real applications are highly overlapping and not well defined, lacking sharp boundaries. As a consequence, classification using machine learning becomes significantly harder. In this work, we choose a highly compact classification scheme with four classes (commercial, residential, public, and industrial), because such a classification has very high value for urban geography, being correlated with socio-demographic parameters such as population density and income.
Journal Article
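A minimal sketch of decision-level (late) fusion in the spirit of this abstract, averaging per-class probabilities from independently trained aerial and street-view models, with fabricated outputs:

```python
# Late fusion: average per-class probabilities from two independent
# models. All probabilities below are fabricated examples.
import numpy as np

classes = ["commercial", "residential", "public", "industrial"]
p_aerial = np.array([0.10, 0.60, 0.20, 0.10])  # aerial-image model output
p_street = np.array([0.25, 0.50, 0.15, 0.10])  # street-view model output

p_fused = (p_aerial + p_street) / 2            # decision-level fusion
print(classes[int(p_fused.argmax())])          # -> residential
```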
Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review
by Yang, Su; Hou, Miaole; Li, Songnian
in Algorithms, Archaeology, Building information modeling
2023
In the cultural heritage field, point clouds, as important raw data of geomatics, are not only three-dimensional (3D) spatial representations of 3D objects but also have the potential to gradually advance towards an intelligent data structure with scene understanding, autonomous cognition, and decision-making ability. Point cloud semantic segmentation, as a preliminary stage, can help to realize this advancement. With the demand for semantic comprehensibility of point cloud data and the widespread application of machine learning and deep learning approaches in point cloud semantic segmentation, there is a need for a comprehensive literature review covering the topics from point cloud data acquisition to semantic segmentation algorithms, together with application strategies in cultural heritage. This paper first reviews the current trends in acquiring point cloud data of cultural heritage, from a single platform with multiple sensors to multi-platform collaborative data fusion. Then, the point cloud semantic segmentation algorithms are discussed with their advantages, disadvantages, and specific applications in the cultural heritage field. These algorithms include region growing, model fitting, unsupervised clustering, supervised machine learning, and deep learning. In addition, we summarize the public benchmark point cloud datasets related to cultural heritage. Finally, the problems and constructive development trends of 3D point cloud semantic segmentation in the cultural heritage field are presented.
Journal Article
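One of the algorithm families the review lists, unsupervised clustering, can be illustrated on a synthetic point cloud with DBSCAN; real heritage clouds would add radiometric/normal features and careful parameter tuning:

```python
# Unsupervised clustering of a synthetic point cloud with DBSCAN.
# Two Gaussian blobs stand in for distinct architectural elements.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
cloud = np.vstack([
    rng.normal(loc=(0, 0, 0), scale=0.2, size=(200, 3)),  # e.g., a column
    rng.normal(loc=(5, 0, 0), scale=0.2, size=(200, 3)),  # a second element
])
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(cloud)
print("clusters found:", len(set(labels) - {-1}))         # -> 2
```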
FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond
2019
Ever-increasing data volumes of satellite constellations call for multi-sensor analysis ready data (ARD) that relieve users of the burden of all the costly preprocessing steps. This paper describes the scientific software FORCE (Framework for Operational Radiometric Correction for Environmental monitoring), an 'all-in-one' solution for the mass-processing and analysis of Landsat and Sentinel-2 image archives. FORCE is increasingly used to support a wide range of scientific and operational applications that need both large-area coverage and deep, dense temporal information. FORCE is capable of generating Level 2 ARD and higher-level products. Level 2 processing comprises state-of-the-art cloud masking and radiometric correction (including corrections that go beyond the ARD specification, e.g., topographic or bidirectional reflectance distribution function correction). It further includes data cubing, i.e., spatial reorganization of the data into a non-overlapping grid system for enhanced efficiency and simplicity of ARD usage. However, the usage barrier of Level 2 ARD is still high due to the considerable data volume and the spatial incompleteness of valid observations (e.g., clouds). Thus, the higher-level modules temporally condense multi-temporal ARD into manageable amounts of spatially seamless data. For data mining purposes, per-pixel statistics of clear-sky data availability can be generated. FORCE provides functionality for compiling best-available-pixel composites and spectral temporal metrics, both of which utilize all available observations within a defined temporal window, using selection and statistical aggregation techniques, respectively. These products are immediately fit for common Earth observation analysis workflows, such as machine learning-based image classification, and are thus referred to as highly analysis ready data (hARD). FORCE provides data fusion functionality to improve the spatial resolution of (i) coarse continuous fields like land surface phenology and (ii) Landsat ARD, using Sentinel-2 ARD as prediction targets. Quality-controlled time series preparation and analysis functionality is provided, with a number of aggregation and interpolation techniques, land surface phenology retrieval, and change and trend analyses. Outputs of this module can be directly ingested into a geographic information system (GIS) to fuel research questions without any further processing, i.e., hARD+. FORCE is open source software under the terms of the GNU General Public License v. >= 3, and can be downloaded from http://force.feut.de.
Journal Article
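The "spectral temporal metrics" idea can be sketched as per-pixel statistics over clear-sky observations in a time window; the cube below is synthetic, and FORCE itself implements this internally rather than via this code:

```python
# Per-pixel temporal statistics over clear-sky observations, the rough
# idea behind spectral temporal metrics. Data and mask are synthetic.
import numpy as np

cube = np.random.rand(24, 100, 100)              # 24 acquisitions x 100x100 pixels
cloudy = np.random.rand(24, 100, 100) < 0.3      # fake cloud mask
cube[cloudy] = np.nan                            # keep clear observations only

mean_metric = np.nanmean(cube, axis=0)           # per-pixel temporal mean
p90_metric = np.nanpercentile(cube, 90, axis=0)  # per-pixel 90th percentile
print(mean_metric.shape, p90_metric.shape)       # seamless metric layers
```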