Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
8,536 result(s) for "crop classification"
Best Accuracy Land Use/Land Cover (LULC) Classification to Derive Crop Types Using Multitemporal, Multisensor, and Multi-Polarization SAR Satellite Images
2016
When using microwave remote sensing for land use/land cover (LULC) classifications, there is a wide variety of imaging parameters to choose from, such as wavelength, imaging mode, incidence angle, spatial resolution, and coverage. There is still a need for further study of the combination, comparison, and quantification of the potential of multiple diverse radar images for LULC classifications. Our study site, the Qixing farm in Heilongjiang province, China, is especially suitable to demonstrate this: as in most rice-growing regions, cloud cover is high during the growing season, making LULC classification from optical images unreliable. From the study year 2009, we obtained nine TerraSAR-X, two Radarsat-2, one Envisat-ASAR, and one optical FORMOSAT-2 image, the latter used mainly for comparison but also in combination with the radar data. To evaluate the potential of the input images and derive LULC with the highest possible accuracy, two classifiers were used: the well-established Maximum Likelihood classifier, optimized to find the input bands yielding the highest accuracy, and the random forest classifier. The resulting highly accurate LULC maps for the whole farm, with a spatial resolution as high as 8 m, demonstrate the benefit of combining X- and C-band microwave data, the potential of multitemporal very high resolution multi-polarization TerraSAR-X data, and the value of integrating and comparing microwave and optical remote sensing images for LULC classifications.
Journal Article
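As a rough illustration of the multi-sensor band stacking and classifier comparison described in the abstract above, the following Python sketch (using NumPy and scikit-learn, with randomly generated placeholder arrays rather than the authors' TerraSAR-X, Radarsat-2, Envisat-ASAR, or FORMOSAT-2 data) trains a random forest on a single-sensor stack and on a combined stack and compares the resulting accuracies; all array sizes, band counts, and class counts are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_classes = 5000, 6                        # placeholder pixel and LULC class counts
x_band = rng.normal(size=(n_pixels, 9))              # stand-in for nine X-band (TerraSAR-X) layers
c_band = rng.normal(size=(n_pixels, 3))              # stand-in for C-band (Radarsat-2 / ASAR) layers
labels = rng.integers(0, n_classes, size=n_pixels)   # stand-in for reference LULC labels

def evaluate(features):
    # Train a random forest on a labelled subset and report accuracy on held-out pixels.
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.5, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

print("X-band only:", round(evaluate(x_band), 3))
print("X- and C-band combined:", round(evaluate(np.hstack([x_band, c_band])), 3))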
Precise Crop Classification of Hyperspectral Images Using Multi-Branch Feature Fusion and Dilation-Based MLP
by Zhou, Huaming; Wang, Aili; Iwahori, Yuji
in Accuracy; Artificial neural networks; Classification
2022
The precise classification of crop types using hyperspectral remote sensing imaging is an essential application in the field of agriculture, and is of significance for crop yield estimation and growth monitoring. Among deep learning methods, Convolutional Neural Networks (CNNs) are the premier model for hyperspectral image (HSI) classification because of their outstanding locally contextual modeling capability, which facilitates spatial and spectral feature extraction. Nevertheless, existing CNNs have a fixed shape and are limited to restricted receptive fields, making it difficult to model long-range dependencies. To tackle this challenge, this paper proposes two novel classification frameworks, both built from multilayer perceptrons (MLPs). First, we put forward a dilation-based MLP (DMLP) model, in which a dilated convolutional layer replaces the ordinary convolution of the MLP, enlarging the receptive field without losing resolution and keeping the relative spatial positions of pixels unchanged. Second, the paper proposes multi-branch residual blocks combined with the DMLP for feature fusion after principal component analysis (PCA), called DMLPFFN, which makes full use of the multi-level feature information of the HSI. The proposed approaches are evaluated on two widely used hyperspectral datasets, Salinas and KSC, and two practical crop hyperspectral datasets, WHU-Hi-LongKou and WHU-Hi-HanChuan. Experimental results show that the proposed methods outshine several state-of-the-art methods, outperforming CNN by 6.81%, 12.45%, 4.38% and 8.84%, and outperforming ResNet by 4.48%, 7.74%, 3.53% and 6.39% on the Salinas, KSC, WHU-Hi-LongKou and WHU-Hi-HanChuan datasets, respectively. These results confirm that the proposed methods offer remarkable performance for precise hyperspectral crop classification.
Journal Article
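A minimal sketch of the dilation idea discussed in the abstract above, written in PyTorch (an assumed choice of framework); the layer sizes, the single block, and the toy patch are illustrative only, not the published DMLP or DMLPFFN architecture.

import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    # A single convolutional block whose dilation enlarges the receptive field
    # while padding keeps the spatial size (and hence pixel positions) unchanged.
    def __init__(self, in_channels, out_channels, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.conv(x))

# A toy hyperspectral patch: 30 PCA-reduced bands, 15 x 15 pixels (placeholder values).
patch = torch.randn(1, 30, 15, 15)
block = DilatedBlock(30, 64, dilation=2)
print(block(patch).shape)  # torch.Size([1, 64, 15, 15]) -- resolution preserved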
Research on Crop Classification Using U-Net Integrated with Multimodal Remote Sensing Temporal Features
2025
Crop classification plays a vital role in acquiring the spatial distribution of agricultural crops, enhancing agricultural management efficiency, and ensuring food security. With the continuous advancement of remote sensing technologies, achieving efficient and accurate crop classification using remote sensing imagery has become a prominent research focus. Conventional approaches largely rely on empirical rules or single-feature selection (e.g., NDVI or VV) for temporal feature extraction, lacking systematic optimization of multimodal feature combinations from optical and radar data. To address this limitation, this study proposes a crop classification method based on feature-level fusion of multimodal remote sensing data, integrating the complementary advantages of optical and SAR imagery to overcome the temporal and spatial representation constraints of single-sensor observations. The study was conducted in Story County, Iowa, USA, focusing on the growth cycles of corn and soybean. Eight vegetation indices (including NDVI and NDRE) and five polarimetric features (VV and VH) were constructed and analyzed. Using a random forest algorithm to assess feature importance, NDVI+NDRE and VV+VH were identified as the optimal feature combinations. Subsequently, 16 scenes of optical imagery (Sentinel-2) and 30 scenes of radar imagery (Sentinel-1) were fused at the feature level to generate a multimodal temporal feature image with 46 channels. Using Cropland Data Layer (CDL) samples as reference data, a U-Net deep neural network was employed for refined crop classification and compared with single-modal results. Experimental results demonstrated that the fusion model outperforms single-modal approaches in classification accuracy, boundary delineation, and consistency, achieving training, validation, and test accuracies of 95.83%, 91.99%, and 90.81% respectively. Furthermore, consistent improvements were observed across evaluation metrics, including F1-score, precision, and recall.
Journal Article
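Feature-level fusion of the kind described above, stacking per-date optical indices and SAR polarisations into one multi-channel image that a U-Net could consume, can be sketched with NumPy; the 16 optical and 30 radar layers below echo the scene counts in the abstract, but the arrays are random placeholders and the exact channel composition is an assumption.

import numpy as np

h, w = 256, 256
# Stand-ins for per-date features derived from Sentinel-2 (e.g. NDVI/NDRE) and
# Sentinel-1 (e.g. VV/VH); real inputs would be co-registered raster bands.
optical = np.random.rand(16, h, w).astype("float32")
sar = np.random.rand(30, h, w).astype("float32")

# Feature-level fusion: concatenate along the channel axis into one multimodal
# temporal feature image (channels-first, as a U-Net implementation might expect).
fused = np.concatenate([optical, sar], axis=0)
print(fused.shape)  # (46, 256, 256)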
Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review
by Pradhan, Biswajeet; Gite, Shilpa; Chakraborty, Subrata
in Agricultural management; Agricultural practices; Agricultural production
2023
Reliable and timely crop-yield prediction and crop mapping are crucial for food security and decision making in the food industry and in agro-environmental management. The global coverage, rich spectral and spatial information and repetitive nature of remote sensing (RS) data have made them effective tools for mapping crop extent and predicting yield before harvesting. Advanced machine-learning methods, particularly deep learning (DL), can accurately represent the complex features essential for crop mapping and yield predictions by accounting for the nonlinear relationships between variables. The DL algorithm has attained remarkable success in different fields of RS and its use in crop monitoring is also increasing. Although a few reviews cover the use of DL techniques in broader RS and agricultural applications, only a small number of references are made to RS-based crop-mapping and yield-prediction studies. A few recently conducted reviews attempted to provide overviews of the applications of DL in crop-yield prediction. However, they did not cover crop mapping and did not consider some of the critical attributes that reveal the essential issues in the field. This study is one of the first in the literature to provide a thorough systematic review of the important scientific works related to state-of-the-art DL techniques and RS in crop mapping and yield estimation. This review systematically identified 90 papers from databases of peer-reviewed scientific publications and comprehensively reviewed the aspects related to the employed platforms, sensors, input features, architectures, frameworks, training data, spatial distributions of study sites, output scales, evaluation metrics and performances. The review suggests that multiple DL-based solutions using different RS data and DL architectures have been developed in recent years, thereby providing reliable solutions for crop mapping and yield prediction. However, challenges related to scarce training data, the development of effective, efficient and generalisable models and the transparency of predictions should be addressed to implement these solutions at scale for diverse locations and crops.
Journal Article
Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium
by Piccard, Isabelle; Van Tricht, Kristof; Gilliams, Sven
in Alliances; Backscattering; Classification
2018
A timely inventory of agricultural areas and crop types is an essential requirement for ensuring global food security and allowing early crop monitoring practices. Satellite remote sensing has proven to be an increasingly more reliable tool to identify crop types. With the Copernicus program and its Sentinel satellites, a growing source of satellite remote sensing data is publicly available at no charge. Here, we used joint Sentinel-1 radar and Sentinel-2 optical imagery to create a crop map for Belgium. To ensure homogenous radar and optical inputs across the country, Sentinel-1 12-day backscatter mosaics were created after incidence angle normalization, and Sentinel-2 normalized difference vegetation index (NDVI) images were smoothed to yield 10-daily cloud-free mosaics. An optimized random forest classifier predicted the eight crop types with a maximum accuracy of 82% and a kappa coefficient of 0.77. We found that a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs, and that classification performance increased throughout the season until July, when differences between crop types were largest. Furthermore, we showed that the concept of classification confidence derived from the random forest classifier provided insight into the reliability of the predicted class for each pixel, clearly showing that parcel borders have a lower classification confidence. We concluded that the synergistic use of radar and optical data for crop classification led to richer information increasing classification accuracies compared to optical-only classification. Further work should focus on object-level classification and crop monitoring to exploit the rich potential of combined radar and optical observations.
Journal Article
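The notion of per-pixel classification confidence from a random forest, mentioned in the abstract above, can be sketched with scikit-learn's predict_proba; the fused feature matrix, crop labels, and train/test split below are placeholder assumptions, not the Belgian Sentinel-1/Sentinel-2 inputs used in the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels, n_features, n_crops = 4000, 20, 8     # e.g. per-pixel NDVI and backscatter composites
X = rng.normal(size=(n_pixels, n_features))     # placeholder for fused radar/optical features
y = rng.integers(0, n_crops, size=n_pixels)     # placeholder crop labels

clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X[:3000], y[:3000])

# Per-pixel classification confidence: the highest class probability across the forest.
proba = clf.predict_proba(X[3000:])
confidence = proba.max(axis=1)
print("mean confidence:", confidence.mean().round(3))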
Progress in the Application of CNN-Based Image Classification and Recognition in Whole Crop Growth Cycles
by Zhang, Hui; Yu, Feng; Xiao, Jun
in Agricultural practices; Algorithms; Artificial intelligence
2023
The categorization and identification of agricultural imagery constitute the fundamental requisites of contemporary farming practices. Among the various methods employed for image classification and recognition, the convolutional neural network (CNN) stands out as the most extensively utilized and swiftly advancing machine learning technique. Its immense potential for advancing precision agriculture cannot be overstated. By comprehensively reviewing the progress made in CNN applications throughout the entire crop growth cycle, this study aims to provide an updated account of these endeavors spanning the years 2020 to 2023. During the seed stage, classification networks are employed to effectively categorize and screen seeds. In the vegetative stage, image classification and recognition play a prominent role, with a diverse range of CNN models being applied, each with its own specific focus. In the reproductive stage, CNN’s application primarily centers around target detection for mechanized harvesting purposes. As for the post-harvest stage, CNN assumes a pivotal role in the screening and grading of harvested products. Ultimately, through a comprehensive analysis of the prevailing research landscape, this study presents the characteristics and trends of current investigations, while outlining the future developmental trajectory of CNN in crop identification and classification.
Journal Article
A Review of CNN Applications in Smart Agriculture Using Multimodal Data
by El Sakka, Mohammad; Chaari, Lotfi; Ivanovici, Mihai
in Agriculture; Agriculture - methods; Artificial Intelligence
2025
This review explores the applications of Convolutional Neural Networks (CNNs) in smart agriculture, highlighting recent advancements across various applications including weed detection, disease detection, crop classification, water management, and yield prediction. Based on a comprehensive analysis of more than 115 recent studies, coupled with a bibliometric study of the broader literature, this paper contextualizes the use of CNNs within Agriculture 5.0, where technological integration optimizes agricultural efficiency. Key approaches analyzed involve image classification, image segmentation, regression, and object detection methods that use diverse data types ranging from RGB and multispectral images to radar and thermal data. By processing UAV and satellite data with CNNs, real-time and large-scale crop monitoring can be achieved, supporting advanced farm management. A comparative analysis shows how CNNs perform with respect to other techniques that involve traditional machine learning and recent deep learning models in image processing, particularly when applied to high-dimensional or temporal data. Future directions point toward integrating IoT and cloud platforms for real-time data processing and leveraging large language models for regulatory insights. Potential research advancements emphasize improving data accessibility and hybrid modeling to meet the agricultural demands of climate variability and food security, positioning CNNs as pivotal tools in sustainable agricultural practices. A related repository that contains the reviewed articles along with their publication links is made available.
Journal Article
3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images
by Shi, Yun; Xu, Anjian; Zhang, Chi
in 3D convolution; active learning; convolutional neural networks
2018
This study describes a novel three-dimensional (3D) convolutional neural network (CNN)-based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral, multi-temporal remote sensing data. Second, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy to the CNN model to improve labelling accuracy up to a required threshold as efficiently as possible. Finally, experiments are carried out to test the advantage of the 3D CNN in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperformed the other mainstream methods.
Journal Article
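A toy PyTorch example of a 3D kernel convolving jointly over time and space, in the spirit of the 3D CNN described above; the band count, number of dates, patch size, and classifier head are illustrative assumptions, not the paper's network.

import torch
import torch.nn as nn

# Toy multi-temporal, multi-spectral sample: 4 spectral bands treated as channels,
# 10 acquisition dates along the depth axis, and a 16 x 16 pixel patch.
sample = torch.randn(1, 4, 10, 16, 16)

# A 3D kernel slides over time and space together, so temporal dynamics of crop
# growth and spatial context are learned jointly before classification.
model = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=(3, 3, 3), padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(16, 5),   # e.g. 5 crop classes (placeholder)
)
print(model(sample).shape)  # torch.Size([1, 5])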
Crop Classification Based on Temporal Signatures of Sentinel-1 Observations over Navarre Province, Spain
by Campo-Bescós, Miguel Ángel; Álvarez-Mozos, Jesús; Arias, María
in Accuracy; Agricultural policy; Agricultural subsidies
2020
Crop classification provides relevant information for crop management, food security assurance and agricultural policy design. The availability of Sentinel-1 image time series, with a very short revisit time and high spatial resolution, has great potential for crop classification in regions with pervasive cloud cover. Dense image time series enable the implementation of supervised crop classification schemes based on the comparison of the time series of the element to classify with the temporal signatures of the considered crops. The main objective of this study is to investigate the performance of a supervised crop classification approach based on crop temporal signatures obtained from Sentinel-1 time series in a challenging case study with a large number of crops and a high heterogeneity in terms of agro-climatic conditions and field sizes. The case study considered a large dataset on the Spanish province of Navarre in the framework of the verification of Common Agricultural Policy (CAP) subsidies. Navarre presents a large agro-climatic diversity with persistent cloud cover areas, and therefore, the technique was implemented both at the provincial and regional scale. In total, 14 crop classes were considered, including different winter crops, summer crops, permanent crops and fallow. Classification results varied depending on the set of input features considered, obtaining Overall Accuracies higher than 70% when the three (VH, VV and VH/VV) channels were used as the input. Crops exhibiting singularities in their temporal signatures were more easily identified, with barley, rice, corn and wheat achieving F1-scores above 75%. The size of fields severely affected classification performance, with ~14% better classification performance for larger fields (>1 ha) in comparison to smaller fields (<0.5 ha). Results improved when agro-climatic diversity was taken into account through regional stratification. It was observed that regions with a higher diversity of crop types, management techniques and a larger proportion of fallow fields obtained lower accuracies. The approach is simple and can be easily implemented operationally to aid CAP inspection procedures or for other purposes.
Journal Article
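A highly simplified sketch of supervised classification by comparison with crop temporal signatures, as outlined in the abstract above; the Gaussian placeholder signatures, the per-date channel interpretation, and the Euclidean-distance matching rule are assumptions for illustration, not the study's actual similarity measure.

import numpy as np

rng = np.random.default_rng(2)
n_dates = 24                       # e.g. regular Sentinel-1 time steps over a season
crops = ["barley", "rice", "corn", "wheat"]

# Placeholder reference temporal signatures: one mean value per date and crop,
# as would be derived from labelled fields (e.g. VH, VV or VH/VV composites).
signatures = {c: rng.normal(loc=i, scale=0.3, size=n_dates) for i, c in enumerate(crops)}

def classify_field(series, signatures):
    # Assign the crop whose reference signature is closest (Euclidean distance)
    # to the field's observed time series.
    return min(signatures, key=lambda c: np.linalg.norm(series - signatures[c]))

field_series = rng.normal(loc=2.0, scale=0.3, size=n_dates)   # placeholder observation
print(classify_field(field_series, signatures))               # likely "corn" with these placeholders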