Catalogue Search | MBRL
Explore the vast range of titles available.
55 result(s) for "Saeid Homayouni"
Urban Land Use and Land Cover Change Analysis Using Random Forest Classification of Landsat Time Series
2022
Efficient implementation of remote sensing image classification can facilitate the extraction of spatiotemporal information for land use and land cover (LULC) classification. Mapping LULC change can pave the way to investigate the impacts of different socioeconomic and environmental factors on the Earth’s surface. This study presents an algorithm that uses Landsat time-series data to analyze LULC change. We applied the Random Forest (RF) classifier, a robust classification method, in the Google Earth Engine (GEE) using imagery from Landsat 5, 7, and 8 as inputs for the 1985 to 2019 period. We also explored the performance of a pan-sharpening algorithm on the Landsat bands, as well as the impact of different image compositions, on producing a high-quality LULC map. We used a statistical pan-sharpening algorithm to increase the spatial resolution of the multispectral Landsat bands (Landsat 7–9) from 30 m to 15 m. In addition, we assessed the impact of different image compositions, based on several spectral indices and other auxiliary data such as the digital elevation model (DEM) and land surface temperature (LST), on the final classification accuracy. We compared the classification results of our proposed method and the Copernicus Global Land Cover Layers (CGLCL) map to verify the algorithm.
The results show that: (1) using pan-sharpened top-of-atmosphere (TOA) Landsat products produces more accurate classification results than using surface reflectance (SR) alone; (2) LST and DEM are essential features in classification, and using them can increase the final accuracy; (3) the proposed algorithm produced higher accuracy (94.438% overall accuracy (OA), 0.93 for Kappa, and 0.93 for F1-score) than the CGLCL map (84.4% OA, 0.79 for Kappa, and 0.50 for F1-score) in 2019; (4) the total agreement between the classification results and the test data exceeds 90% (93.37–97.6%), 0.9 (0.91–0.96), and 0.85 (0.86–0.95) for OA, Kappa values, and F1-score, respectively, which is acceptable for both overall and Kappa accuracy. Moreover, we provide a code repository that allows classifying Landsat 4, 5, 7, and 8 imagery within GEE. This method can be quickly and easily applied to other regions of interest for LULC mapping.
Journal Article
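The workflow the abstract above describes (per-pixel spectral features plus auxiliary layers such as DEM and LST fed to a Random Forest, evaluated with overall accuracy and Kappa) can be sketched as follows. This is an illustrative scikit-learn sketch on synthetic data, not the authors' Google Earth Engine implementation; all feature values and labels are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel features: spectral bands plus
# auxiliary layers such as DEM and LST (all values are made up).
n_pixels, n_features = 2000, 10
X = rng.normal(size=(n_pixels, n_features))
# Synthetic LULC labels (e.g. 0=water, 1=urban, 2=vegetation, 3=bare soil)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0.5)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
pred = rf.predict(X_test)
print("OA:", accuracy_score(y_test, pred))
print("Kappa:", cohen_kappa_score(y_test, pred))
```

On real imagery the feature matrix would come from the Landsat composites and the reported OA/Kappa would be computed against held-out reference samples.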
Bagging and Boosting Ensemble Classifiers for Classification of Multispectral, Hyperspectral and PolSAR Data: A Comparative Evaluation
by Jafarzadeh, Hamid; Homayouni, Saeid; Mahdianpari, Masoud
in Accuracy; Adaptive algorithms; Algorithms
2021
In recent years, several powerful machine learning (ML) algorithms have been developed for image classification, especially those based on ensemble learning (EL). In particular, Extreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) methods have attracted researchers’ attention in data science due to their superior results compared to other commonly used ML algorithms. Despite their popularity within the computer science community, they have not yet been well examined in detail in the field of Earth Observation (EO) for satellite image classification. As such, this study investigates the capability of different EL algorithms, generally known as bagging and boosting algorithms, including Adaptive Boosting (AdaBoost), Gradient Boosting Machine (GBM), XGBoost, LightGBM, and Random Forest (RF), for the classification of Remote Sensing (RS) data. In particular, different classification scenarios were designed to compare the performance of these algorithms on three different types of RS data, namely high-resolution multispectral, hyperspectral, and Polarimetric Synthetic Aperture Radar (PolSAR) data. Moreover, the Decision Tree (DT) single classifier, as a base classifier, is considered to evaluate the classification’s accuracy. The experimental results demonstrated that the RF and XGBoost methods for the multispectral image, the LightGBM and XGBoost methods for hyperspectral data, and the XGBoost and RF algorithms for PolSAR data produced higher classification accuracies compared to other ML techniques. This demonstrates the great capability of the XGBoost method for the classification of different types of RS data.
Journal Article
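As a rough illustration of the bagging-versus-boosting comparison the abstract describes, the sketch below fits a single Decision Tree baseline and several scikit-learn ensembles on a synthetic multiclass dataset. XGBoost and LightGBM require their own packages and are omitted here; the dataset and hyperparameters are assumptions, not the paper's experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic multiclass data standing in for labeled image pixels
X, y = make_classification(n_samples=1500, n_features=20,
                           n_informative=8, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DT (base)": DecisionTreeClassifier(random_state=0),
    "RF (bagging)": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
    "GBM": GradientBoostingClassifier(n_estimators=200, random_state=0),
}
# Fit each model and compare test accuracy, mirroring the paper's design
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On the paper's actual data the ranking differed by data type (RF/XGBoost for multispectral, LightGBM/XGBoost for hyperspectral, XGBoost/RF for PolSAR).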
The First Wetland Inventory Map of Newfoundland at a Spatial Resolution of 10 m Using Sentinel-1 and Sentinel-2 Data on the Google Earth Engine Cloud Computing Platform
by Homayouni, Saeid; Mahdianpari, Masoud; Gill, Eric
in Accuracy; Application programming interface; Artificial intelligence
2019
Wetlands are one of the most important ecosystems that provide a desirable habitat for a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO) data are essential for natural resource management at both regional and national levels. However, accurate wetland mapping is challenging, especially on a large scale, given their heterogeneous and fragmented landscape, as well as the spectral similarity of differing wetland classes. Currently, precise, consistent, and comprehensive wetland inventories on a national- or provincial-scale are lacking globally, with most studies focused on the generation of local-scale maps from limited remote sensing data. Leveraging the Google Earth Engine (GEE) computational power and the availability of high spatial resolution remote sensing data collected by Copernicus Sentinels, this study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1 and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland and three non-wetland classes on the Island of Newfoundland, covering an approximate area of 106,000 km2. The classification results were evaluated using both pixel-based and object-based random forest (RF) classifications implemented on the GEE platform. The results revealed the superiority of the object-based approach relative to the pixel-based classification for wetland mapping. Although the classification using multi-year optical data was more accurate compared to that of SAR, the inclusion of both types of data significantly improved the classification accuracies of wetland classes. 
In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with the multi-year summer SAR/optical composite using an object-based RF classification, wherein all wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%, respectively. The results suggest a paradigm-shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of the “Geo Big Data.” In addition, the resulting ever-demanding inventory map of Newfoundland is of great interest to and can be used by many stakeholders, including federal and provincial governments, municipalities, NGOs, and environmental consultants to name a few.
Journal Article
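The object-based approach the abstract favours works on image segments rather than individual pixels. A minimal sketch, assuming a precomputed segmentation (here a plain grid of blocks stands in for real image segments), shows how per-object mean features can be derived before classification:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-pixel feature image (e.g. stacked SAR/optical bands)
h, w, bands = 60, 60, 4
image = rng.normal(size=(h, w, bands))

# Hypothetical segmentation into image objects; in practice this would
# come from a segmentation algorithm, here a 10x10-pixel grid stands in.
seg = (np.arange(h)[:, None] // 10) * (w // 10) + (np.arange(w)[None, :] // 10)

# Object-based representation: mean feature vector per segment
n_seg = seg.max() + 1
flat_seg = seg.ravel()
flat_img = image.reshape(-1, bands)
sums = np.zeros((n_seg, bands))
np.add.at(sums, flat_seg, flat_img)           # accumulate per-segment sums
counts = np.bincount(flat_seg, minlength=n_seg)[:, None]
object_features = sums / counts               # per-object mean features
print(object_features.shape)  # one feature vector per image object
```

The classifier (RF in the paper) then labels each object once, instead of every pixel independently, which reduces salt-and-pepper noise in the resulting map.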
Convolutional neural network and long short-term memory models for ice-jam predictions
by Homayouni, Saeid; Chokmani, Karem; Madaeni, Fatemehalsadat
in Accuracy; Analysis; Artificial neural networks
2022
In cold regions, ice jams frequently result in severe flooding due to a rapid rise in water levels upstream of the jam. Sudden floods resulting from ice jams threaten human safety and cause damage to properties and infrastructure. Hence, ice-jam prediction tools can give an early warning to increase response time and minimize the possible damages. However, ice-jam prediction has always been a challenge, as there is no analytical method available for this purpose. Nonetheless, ice jams form when certain hydro-meteorological conditions occur, from a few hours to a few days before the event. Ice-jam prediction can therefore be addressed as a binary multivariate time-series classification problem. Deep learning techniques have been widely used for time-series classification in many fields, such as finance, engineering, weather forecasting, and medicine. In this research, we successfully applied convolutional neural networks (CNN), long short-term memory (LSTM), and combined convolutional–long short-term memory (CNN-LSTM) networks to predict the formation of ice jams in 150 rivers in the province of Quebec (Canada). We also employed machine learning methods, including support vector machine (SVM), k-nearest neighbors classifier (KNN), decision tree, and multilayer perceptron (MLP), for this purpose. The hydro-meteorological variables (e.g., temperature, precipitation, and snow depth) along with the corresponding jam or no-jam events were used as model inputs. Ten percent of the data were excluded from the model and set aside for testing, and 100 reshuffling and splitting iterations were applied to the remaining data, with 80% used for training and 20% for validation. The developed deep learning models achieved improvements in performance in comparison to the machine learning models. The results show that the CNN-LSTM model yields the best results in validation and testing, with F1 scores of 0.82 and 0.92, respectively.
This demonstrates that CNN and LSTM models are complementary, and a combination of both further improves classification.
Journal Article
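The binary multivariate time-series setup the abstract describes can be sketched with one of the paper's own baseline models, an MLP, applied to flattened windows of hydro-meteorological variables. This is a hedged scikit-learn sketch on synthetic data, not the authors' CNN/LSTM implementation; the window length, variables, and labeling rule are made up.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in: for each river-event, a window of daily
# hydro-meteorological variables (temperature, precipitation, snow depth)
n_events, window_days, n_vars = 600, 14, 3
X_seq = rng.normal(size=(n_events, window_days, n_vars))
# Synthetic jam/no-jam label loosely tied to the features (made-up rule)
y = (X_seq[:, -3:, 0].mean(axis=1) > 0).astype(int)

# Flatten each window into one feature vector for the MLP baseline;
# the paper's CNN/LSTM models consume the sequence directly instead.
X = X_seq.reshape(n_events, -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```

The 10% hold-out here mirrors the paper's test split; the 100 reshuffling iterations over the remaining data are omitted for brevity.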
High-Resolution Daily XCH4 Prediction Using New Convolutional Neural Network Autoencoder Model and Remote Sensing Data
2025
Atmospheric methane (CH4) concentrations have increased to 2.5 times their pre-industrial levels, with a marked acceleration in recent decades. CH4 is responsible for approximately 30% of the global temperature rise since the Industrial Revolution. This growing concentration contributes to environmental degradation, including ocean acidification, accelerated climate change, and a rise in natural disasters. The column-averaged dry-air mole fraction of methane (XCH4) is a crucial indicator for assessing atmospheric CH4 levels. In this study, the Sentinel-5P TROPOMI instrument was employed to monitor, map, and estimate CH4 concentrations on both regional and global scales. However, TROPOMI data exhibits limitations such as spatial gaps and relatively coarse resolution, particularly at regional scales or over small areas. To mitigate these limitations, a novel Convolutional Neural Network Autoencoder (CNN-AE) model was developed. Validation was performed using the Total Carbon Column Observing Network (TCCON), providing a benchmark for evaluating the accuracy of various interpolation and prediction models. The CNN-AE model demonstrated the highest accuracy in regional-scale analysis, achieving a Mean Absolute Error (MAE) of 28.48 ppb and a Root Mean Square Error (RMSE) of 30.07 ppb. This was followed by the Random Forest (RF) regressor (MAE: 29.07 ppb; RMSE: 36.89 ppb), GridData Nearest Neighbor Interpolator (NNI) (MAE: 30.06 ppb; RMSE: 32.14 ppb), and the Radial Basis Function (RBF) Interpolator (MAE: 80.23 ppb; RMSE: 90.54 ppb). On a global scale, the CNN-AE again outperformed other methods, yielding the lowest MAE and RMSE (19.78 and 24.7 ppb, respectively), followed by RF (21.46 and 27.23 ppb), GridData NNI (25.3 and 32.62 ppb), and RBF (43.08 and 54.93 ppb).
Journal Article
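Two of the interpolation baselines the abstract compares against, nearest-neighbour gridding and an RBF interpolator, are available in SciPy. A sketch of gap filling on a synthetic XCH4-like field with randomly masked pixels (the field, units, and mask fraction are assumptions for illustration):

```python
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

rng = np.random.default_rng(0)
# Synthetic smooth XCH4-like field (ppb) with missing pixels,
# mimicking gaps in TROPOMI coverage (values are made up)
ny, nx = 40, 40
yy, xx = np.mgrid[0:ny, 0:nx]
field = 1900 + 20 * np.sin(xx / 8.0) + 10 * np.cos(yy / 6.0)
mask = rng.random((ny, nx)) < 0.3            # 30% of pixels missing
known = ~mask
pts = np.column_stack([yy[known], xx[known]]).astype(float)
vals = field[known]
gaps = np.column_stack([yy[mask], xx[mask]]).astype(float)

# Nearest-neighbour gap filling (the "GridData NNI" baseline)
nn_fill = griddata(pts, vals, gaps, method="nearest")
# Radial basis function gap filling (the "RBF Interpolator" baseline)
rbf_fill = RBFInterpolator(pts, vals, neighbors=50)(gaps)

true = field[mask]
print("NNI MAE (ppb):", np.abs(nn_fill - true).mean())
print("RBF MAE (ppb):", np.abs(rbf_fill - true).mean())
```

The paper's CNN-AE model plays the same gap-filling role but learns the reconstruction from data rather than interpolating analytically.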
Active Fire Detection from Landsat-8 Imagery Using Deep Multiple Kernel Learning
by Homayouni, Saeid; Shah-Hosseini, Reza; Zarei, Arastou
in Ablation; active fire; active fire index
2022
Active fires are devastating natural disasters that cause socio-economical damage across the globe. The detection and mapping of these disasters require efficient tools, scientific methods, and reliable observations. Satellite images have been widely used for active fire detection (AFD) during the past years due to their nearly global coverage. However, accurate AFD and mapping in satellite imagery is still a challenging task in the remote sensing community, which mainly uses traditional methods. Deep learning (DL) methods have recently yielded outstanding results in remote sensing applications. Nevertheless, less attention has been given to them for AFD in satellite imagery. This study presented a deep convolutional neural network (CNN) “MultiScale-Net” for AFD in Landsat-8 datasets at the pixel level. The proposed network had two main characteristics: (1) several convolution kernels with multiple sizes, and (2) dilated convolution layers (DCLs) with various dilation rates. Moreover, this paper suggested an innovative Active Fire Index (AFI) for AFD. AFI was added to the network inputs consisting of the SWIR2, SWIR1, and Blue bands to improve the performance of the MultiScale-Net. In an ablation analysis, three different scenarios were designed for multi-size kernels, dilation rates, and input variables individually, resulting in 27 distinct models. The quantitative results indicated that the model with AFI-SWIR2-SWIR1-Blue as the input variables, using multiple kernels of sizes 3 × 3, 5 × 5, and 7 × 7 simultaneously, and a dilation rate of 2, achieved the highest F1-score and IoU of 91.62% and 84.54%, respectively. Stacking AFI with the three Landsat-8 bands led to fewer false negative (FN) pixels. Furthermore, our qualitative assessment revealed that these models could detect single fire pixels detached from the large fire zones by taking advantage of multi-size kernels. 
Overall, the MultiScale-Net met expectations in detecting fires of varying sizes and shapes over challenging test samples.
Journal Article
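The multi-size-kernel idea behind MultiScale-Net can be illustrated by filtering the same band at several window sizes and stacking the results into a multi-scale feature volume. The sketch below uses fixed mean filters purely for illustration; the network's kernels are learned during training, and the paper's AFI formula is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
# Synthetic single-band image standing in for a Landsat-8 SWIR band
band = rng.random((64, 64)).astype(np.float32)

# Multi-size kernels: the same input filtered at 3x3, 5x5, and 7x7,
# then stacked, echoing the network's parallel multi-scale branches
scales = [uniform_filter(band, size=k) for k in (3, 5, 7)]
features = np.stack(scales, axis=-1)
print(features.shape)  # (64, 64, 3): one channel per kernel size
```

Small kernels preserve isolated fire pixels while larger ones capture extended fire zones, which is why the 3 × 3 / 5 × 5 / 7 × 7 combination performed best in the ablation study.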
Meta-analysis of Unmanned Aerial Vehicle (UAV) Imagery for Agro-environmental Monitoring Using Machine Learning and Statistical Models
by Brisco, Brian; Homayouni, Saeid; Mahdianpari, Masoud
in Accuracy; Aerial photography; Agriculture
2020
Unmanned Aerial Vehicle (UAV) imaging systems have recently gained significant attention from researchers and practitioners as a cost-effective means for agro-environmental applications. In particular, machine learning algorithms have been applied to UAV-based remote sensing data to enhance UAV capabilities across various applications. This systematic review performed a statistical meta-analysis of studies combining UAV imagery with machine learning algorithms for agro-environmental monitoring. For this purpose, a total of 163 peer-reviewed articles published in 13 high-impact remote sensing journals over the past 20 years were reviewed, focusing on several features, including study area, application, sensor type, platform type, and spatial resolution. The meta-analysis revealed that 62% and 38% of the studies applied regression and classification models, respectively. Visible-sensor technology was the most frequently used sensor type and achieved the highest overall accuracy among the classification articles. Regarding regression models, linear regression and random forest were the most frequently applied models in UAV remote sensing imagery processing. Agriculture, forestry, and grassland mapping were found to be the top three UAV applications in this review, appearing in 42%, 22%, and 8% of the studies, respectively. Finally, the results of this study confirm that applying machine learning approaches to UAV imagery produces fast and reliable results.
Journal Article
A Meta-Analysis of Remote Sensing Technologies and Methodologies for Crop Characterization
by McNairn, Heather; Homayouni, Saeid; Mahdianpari, Masoud
in Agricultural production; Agriculture; Aircraft
2022
Climate change and population growth risk the world’s food supply. Annual crop yield production is one of the most crucial components of the global food supply. Moreover, the COVID-19 pandemic has stressed global food security, production, and supply chains. Using biomass estimation as a reliable yield indicator, space-based monitoring of crops can assist in mitigating these stresses by providing reliable product information. Research has been conducted to estimate crop biophysical parameters by destructive and non-destructive approaches. In particular, researchers have investigated the potential of various analytical methods to determine a range of crop parameters using remote sensing data and methods. To this end, they have investigated diverse sources of Earth observations, including radar and optical images with various spatial, spectral, and temporal resolutions. This paper reviews and analyzes publications from the past 30 years to identify trends in crop monitoring research using remote sensing data and tools. This analysis is accomplished through a systematic review of 277 papers and documents the methods, challenges, and opportunities frequently cited in the scientific literature. The results revealed that research in this field had increased dramatically over this study period. In addition, the analyses confirmed that the normalized difference vegetation index (NDVI) had been the most studied vegetation index to estimate crop parameters. Moreover, this analysis showed that wheat and corn were the most studied crops, globally.
Journal Article
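The normalized difference vegetation index (NDVI), which the review identifies as the most studied index for crop parameter estimation, is computed as (NIR - Red) / (NIR + Red). A minimal sketch with made-up reflectance values:

```python
import numpy as np

# Made-up near-infrared and red reflectances for three pixels
nir = np.array([0.45, 0.50, 0.30])
red = np.array([0.10, 0.08, 0.25])

# NDVI ranges from -1 to 1; dense green vegetation pushes it toward 1
ndvi = (nir - red) / (nir + red)
print(ndvi)
```

Higher NDVI generally indicates denser, healthier vegetation, which is why it serves as a proxy for biomass and yield in many of the reviewed studies.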
An Automated Framework for Plant Detection Based on Deep Simulated Learning from Drone Imagery
by Rastiveis, Heidar; Homayouni, Saeid; Hosseiny, Benyamin
in accuracy; Agricultural land; Agriculture
2020
Traditional mapping and monitoring of agricultural fields are expensive, laborious, and prone to human error. Technological advances in platforms and sensors, together with breakthroughs in artificial intelligence (AI) and deep learning (DL) for intelligent data processing, have improved remote sensing applications for precision agriculture (PA) and the quality of agricultural land monitoring. However, providing ground truth data for model training is a time-consuming and tedious task and may contain multiple human errors. This paper proposes an automated and fully unsupervised framework based on image processing and DL methods for plant detection in agricultural lands from very high-resolution drone remote sensing imagery. The main idea of the proposed framework is to automatically generate an unlimited amount of simulated training data from the input image. This capability is advantageous for DL methods and can address their biggest drawback, i.e., the requirement for a considerable amount of training data. The core of this framework is the faster regional convolutional neural network (R-CNN) with a ResNet-101 backbone for object detection. The framework’s efficiency was evaluated on two different image sets from two cornfields, acquired by an RGB camera mounted on a drone. The results show that the proposed method leads to an average counting accuracy of 90.9%. Furthermore, based on the average Hausdorff distance (AHD), an average object detection localization error of 11 pixels was obtained. Additionally, by evaluating the object detection metrics, the resulting mean precision, recall, and F1 for plant detection were 0.868, 0.849, and 0.855, respectively, which is promising for an unsupervised plant detection method.
Journal Article
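The average Hausdorff distance (AHD) used above to report localization error can be computed between detected and ground-truth point sets. The sketch below uses one common formulation, the mean of the two directed average nearest-neighbour distances; the plant-centre coordinates are made up for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def average_hausdorff(a, b):
    """Average Hausdorff distance between two point sets (pixel coords)."""
    d = cdist(a, b)  # pairwise Euclidean distances
    # Mean nearest-neighbour distance in each direction, then averaged
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Made-up detected vs. ground-truth plant centres (pixel coordinates)
detected = np.array([[10, 10], [30, 32], [50, 49]], dtype=float)
truth = np.array([[11, 10], [30, 30], [52, 50]], dtype=float)
print(average_hausdorff(detected, truth))
```

Unlike the plain Hausdorff distance, the averaged variant is not dominated by a single worst-case outlier, which makes it a more stable localization metric for detection tasks.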
An Unsupervised Saliency-Guided Deep Convolutional Neural Network for Accurate Burn Mapping from Sentinel-1 SAR Data
by Radman, Ali; Homayouni, Saeid; Shah-Hosseini, Reza
in Accuracy; Aerial photography; Artificial intelligence
2023
SAR data provide sufficient information for burned area detection in any weather condition, making it superior to optical data. In this study, we assess the potential of Sentinel-1 SAR images for precise forest-burned area mapping using deep convolutional neural networks (DCNN). Accurate mapping with DCNN techniques requires high quantity and quality training data. However, labeled ground truth might not be available in many cases or requires professional expertise to generate them via visual interpretation of aerial photography or field visits. To overcome this problem, we proposed an unsupervised method that derives DCNN training data from fuzzy c-means (FCM) clusters with the highest and lowest probability of being burned. Furthermore, a saliency-guided (SG) approach was deployed to reduce false detections and SAR image speckles. This method defines salient regions with a high probability of being burned. These regions are not affected by noise and can improve the model performance. The developed approach based on the SG-FCM-DCNN model was investigated to map the burned area of Rossomanno-Grottascura-Bellia, Italy. This method significantly improved the burn detection ability of non-saliency-guided models. Moreover, the proposed model achieved superior accuracy of 87.67% (i.e., more than 2% improvement) compared to other saliency-guided techniques, including SVM and DNN.
Journal Article
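The paper's unsupervised labelling step derives DCNN training samples from fuzzy c-means clusters with the highest and lowest probability of being burned. A minimal NumPy sketch of that idea, on made-up two-band features, keeps only the pixels whose cluster membership is highly confident as pseudo-labels; the membership threshold and data are assumptions, not the paper's settings.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: returns membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None] # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))               # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U

rng = np.random.default_rng(0)
# Synthetic two-band backscatter features for two pixel populations (made up)
X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
U = fuzzy_c_means(X, c=2)

# Keep only pixels the clustering is most confident about as pseudo-labels
conf = U.max(axis=1)
confident = conf > 0.9
print("pseudo-labelled pixels:", int(confident.sum()), "of", len(X))
```

In the paper these high-confidence cluster members stand in for manually labeled burned/unburned ground truth when training the DCNN, with the saliency-guided step further filtering out noisy regions.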