8,255 result(s) for "agricultural dataset"
AgriPest: A Large-Scale Domain-Specific Benchmark Dataset for Practical Agricultural Pest Detection in the Wild
The recent explosion of large-scale standardized datasets of annotated images has offered promising opportunities for deep learning techniques in effective and efficient object detection applications. However, owing to the large quality gap between these standardized datasets and practical raw data, how to maximize the utility of deep learning techniques in practical agricultural applications remains a critical problem. Here, we introduce a domain-specific benchmark dataset for tiny wild pest recognition and detection, called AgriPest, providing researchers and communities with a standard large-scale dataset of practical wild pest images and annotations, as well as evaluation procedures. Over the past seven years, AgriPest has captured 49.7K images of four crops containing 14 species of pests, collected with purpose-built image collection equipment in field environments. All of the images are manually annotated by agricultural experts with up to 264.7K bounding boxes locating pests. This paper also offers a detailed analysis of AgriPest, in which the validation set is split into four types of scenes that are common in practical pest monitoring applications. We explore and evaluate the performance of state-of-the-art deep learning techniques on AgriPest. We believe that the scale, accuracy, and diversity of AgriPest offer great opportunities to researchers in computer vision as well as pest monitoring applications.
PlantInfoCMS: Scalable Plant Disease Information Collection and Management System for Training AI Models
In recent years, the development of deep learning technology has significantly benefited agriculture in domains such as smart and precision farming. Deep learning models require a large amount of high-quality training data; however, collecting and managing such guaranteed-quality data at scale is a critical issue. To meet these requirements, this study proposes a scalable plant disease information collection and management system (PlantInfoCMS). The proposed PlantInfoCMS consists of data collection, annotation, data inspection, and dashboard modules that generate accurate, high-quality pest and disease image datasets for training purposes. Additionally, the system provides various statistical functions that allow users to easily check the progress of each task, making management highly efficient. Currently, PlantInfoCMS handles data on 32 types of crops and 185 types of pests and diseases, and stores and manages 301,667 original and 195,124 labeled images. PlantInfoCMS is expected to contribute significantly to the diagnosis of crop pests and diseases by providing high-quality images for AI training and by facilitating the management of crop pest and disease data.
Object Detection in Agriculture: A Comprehensive Review of Methods, Applications, Challenges, and Future Directions
Object detection is revolutionizing precision agriculture by enabling advanced crop monitoring, weed management, pest detection, and autonomous field operations. This comprehensive review synthesizes object detection methodologies, tracing their evolution from traditional feature-based approaches to cutting-edge deep learning architectures. We analyze key agricultural applications, leveraging datasets like PlantVillage, DeepWeeds, and AgriNet, and introduce a novel framework for evaluating algorithm performance based on mean Average Precision (mAP), inference speed, and computational efficiency. Through a comparative analysis of leading algorithms, including Faster R-CNN, YOLO, and SSD, we identify critical trade-offs and highlight advancements in real-time detection for resource-constrained environments. Persistent challenges, such as environmental variability, limited labeled data, and model generalization, are critically examined, with proposed solutions including multi-modal data fusion and lightweight models for edge deployment. By integrating technical evaluations, meaningful insights, and actionable recommendations, this work bridges technical innovation with practical deployment, paving the way for sustainable, resilient, and productive agricultural systems.
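The mAP-based evaluation framework described above rests on intersection-over-union (IoU) between predicted and ground-truth boxes: a detection counts as a true positive only if its IoU with a ground-truth box exceeds a threshold (commonly 0.5). A minimal sketch of that core computation, with an illustrative box convention of our own choosing, is:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus intersection
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_box, thresh=0.5):
    # Standard PASCAL VOC-style criterion used when computing mAP
    return iou(pred_box, gt_box) >= thresh
```

Averaging precision over recall levels per class, then over classes, yields the mAP figure the review uses to compare Faster R-CNN, YOLO, and SSD.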
Images and CNN applications in smart agriculture
In recent years, the agricultural sector has undergone a revolutionary shift toward “smart farming”, integrating advanced technologies to strengthen crop health and productivity significantly. This paradigm shift holds profound implications for food safety and the broader economy. At the forefront of this transformation is deep learning, a subset of artificial intelligence based on artificial neural networks, which emerged as a powerful tool in detection and classification tasks. Specifically, Convolutional Neural Networks (CNNs), a specialized type of deep learning and computer vision models, demonstrated remarkable proficiency in analyzing crop imagery, whether sourced from satellites, aircraft, or terrestrial cameras. These networks often leverage vegetation indices and multispectral imagery to enhance their analytical capabilities. Such models contribute to the development of systems that could enhance agricultural outcomes. This review encapsulates the current state of the art in using CNNs in agriculture. It details the image types utilized within this context, including, but not limited to, multispectral images and vegetation indices. Furthermore, it catalogs accessible online datasets pertinent to this field. Collectively, this paper underscores the pivotal role of CNNs in agriculture and highlights the transformative impact of multispectral images in this rapidly evolving domain.
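The vegetation indices mentioned above are simple band arithmetic over multispectral imagery. NDVI, the most common, contrasts near-infrared and red reflectance; a minimal sketch (array names are illustrative) is:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].

    Dense green vegetation reflects strongly in NIR and absorbs red,
    so healthy canopy pixels score high; bare soil and water score low.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero
```

An NDVI map computed this way is often stacked with RGB bands as an extra input channel to a CNN.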
3D Pedestrian Detection in Farmland by Monocular RGB Image and Far-Infrared Sensing
The automated driving of agricultural machinery is of great significance for agricultural production efficiency, yet it remains challenging due to the significantly varied environmental conditions between day and night. To address operational safety for pedestrians in farmland, this paper proposes a 3D person sensing approach based on monocular RGB and Far-Infrared (FIR) images. Since publicly available datasets for agricultural 3D pedestrian detection are scarce, a new dataset named "FieldSafePedestrian" is proposed, which includes field images captured both day and night. The data augmentations applied to night images and the semi-automatic labeling approach are also elaborated to facilitate the 3D annotation of pedestrians. To fuse heterogeneous images from sensors with non-parallel optical axes, the Dual-Input Depth-Guided Dynamic-Depthwise-Dilated Fusion network (D5F) is proposed, which assists pixel alignment between FIR and RGB images with estimated depth information and deploys dynamic filtering to guide the heterogeneous information fusion. Experiments on field images in both daytime and nighttime demonstrate that, compared with the state of the art, the dynamic aligned image fusion achieves accuracy gains of 3.9% and 4.5% in terms of center distance and BEV-IoU, respectively, without affecting run-time efficiency.
Integrating YOLOv8-agri and DeepSORT for Advanced Motion Detection in Agriculture and Fisheries
This paper integrates the YOLOv8-agri models with the DeepSORT algorithm to advance object detection and tracking in the agricultural and fisheries sectors. We address the current limitations in object classification by adapting YOLOv8 to the unique demands of these environments, where misclassification can hinder operational efficiency. Through the strategic use of transfer learning on specialized datasets, our study refines the YOLOv8-agri models for precise recognition and categorization of diverse biological entities. Coupling these models with DeepSORT significantly enhances motion tracking, leading to more accurate and reliable monitoring systems. The research outcomes identify the YOLOv8l-agri model as the optimal solution for balancing detection accuracy with training time, making it highly suitable for precision agriculture and fisheries applications. We have made our experimental datasets and trained models publicly available to foster reproducibility and further research. This initiative marks a step forward in applying sophisticated computer vision techniques to real-world agricultural and fisheries management.
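Tracking-by-detection pipelines like the one above associate each frame's detections with existing tracks. DeepSORT's actual association step combines Kalman-predicted motion, appearance embeddings, and Hungarian matching; as a rough illustration of the idea only, a simplified greedy IoU-based assignment (function names are our own, not from the paper) might look like:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_associate(tracks, detections, iou_thresh=0.3):
    """Match track boxes to detection boxes by descending IoU.

    Returns a list of (track_idx, det_idx) pairs; unmatched detections
    would spawn new tracks, unmatched tracks age out.
    """
    scored = [(iou(tb, db), t, d)
              for t, tb in enumerate(tracks)
              for d, db in enumerate(detections)]
    scored.sort(reverse=True)  # best overlaps first
    used_t, used_d, matches = set(), set(), []
    for score, t, d in scored:
        if score < iou_thresh or t in used_t or d in used_d:
            continue
        used_t.add(t)
        used_d.add(d)
        matches.append((t, d))
    return matches
```

Replacing this greedy step with Hungarian matching over a combined motion-plus-appearance cost is what distinguishes DeepSORT from plain IoU trackers.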
OpenPlant: A Large-Scale Benchmark Dataset for Agricultural Plant Classification Using CNNs, ViTs, and VLMs
Accurate plant classification based on deep learning is important for precision agriculture, including weed control, crop monitoring, and smart farming systems. The accuracy of deep learning models depends heavily on the datasets used. Although many datasets have been proposed in recent decades, they share common limitations in scale, environmental diversity, and ease of data integration. To address these problems, in this paper we introduce a new dataset named OpenPlant, a large-scale open dataset containing 635,176 RGB images across 1167 plant species. OpenPlant covers diverse plant growth stages, plant structures, and environmental conditions, and its annotations were carefully verified to ensure quality. The proposed OpenPlant can serve as a benchmark for agricultural plant classification. In this paper, we benchmarked 10 widely used convolutional neural networks (CNNs), 6 vision transformers (ViTs), and 12 vision-language models (VLMs) to provide a comprehensive evaluation. The OpenPlant dataset offers a comprehensive benchmark for agricultural research using deep learning, and the results provide insights into future directions.
MSFCA-Net: A Multi-Scale Feature Convolutional Attention Network for Segmenting Crops and Weeds in the Field
Weed control has always been one of the most important issues in agriculture. Research on deep learning methods for weed identification and segmentation in the field provides the necessary conditions for intelligent point-to-point spraying and intelligent weeding. However, due to limited and difficult-to-obtain agricultural weed datasets, complex changes in field lighting intensity, mutual occlusion between crops and weeds, and uneven size and quantity of crops and weeds, existing weed segmentation methods do not perform effectively. To address these issues, this study designs a multi-scale feature convolutional attention network, called MSFCA-Net, for segmenting crops and weeds in the field using strip convolutions of various sizes. A hybrid loss based on the Dice loss and focal loss is used to enhance the model's sensitivity to different classes and improve its ability to learn from hard samples, thereby enhancing the segmentation performance for crops and weeds. The proposed method is trained and tested on soybean, sugar beet, carrot, and rice weed datasets. Comparisons with popular semantic segmentation methods show that the proposed MSFCA-Net achieves higher mean intersection over union (MIoU) on these datasets, with values of 92.64%, 89.58%, 79.34%, and 78.12%, respectively. The results show that under the same experimental conditions and parameter configurations, the proposed method outperforms other methods and has strong robustness and generalization ability.
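The hybrid Dice-plus-focal loss described above can be sketched as follows; note that the equal weighting and exact formulation here are assumptions for illustration, not the paper's reported configuration:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); measures region overlap,
    # which makes it robust to class imbalance between crops and weeds
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # Focal loss down-weights easy pixels via the (1 - p_t)^gamma factor,
    # pushing the model to learn from hard samples
    p = np.clip(pred, eps, 1.0 - eps)
    p_t = np.where(target == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def hybrid_loss(pred, target, w_dice=0.5, w_focal=0.5):
    # Weighted sum of the two terms (weights here are illustrative)
    return w_dice * dice_loss(pred, target) + w_focal * focal_loss(pred, target)
```

Here `pred` holds per-pixel foreground probabilities and `target` the binary ground-truth mask; a confident correct prediction drives both terms toward zero.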
Optimal nitrogen rate strategy for sustainable rice production in China
Avoiding excessive agricultural nitrogen (N) use without compromising yields has long been a priority for both research and government policy in China [1,2]. Although numerous rice-related strategies have been proposed [3-5], few studies have assessed their impacts on national food self-sufficiency and environmental sustainability, and fewer still have considered the economic risks faced by millions of smallholders. Here we established an optimal N rate strategy based on maximizing either economic (ON) or ecological (EON) performance using new subregion-specific models. Using an extensive on-farm dataset, we then assessed the risk of yield losses among smallholder farmers and the challenges of implementing the optimal N rate strategy. We find that meeting national rice production targets in 2030 is possible while concurrently reducing nationwide N consumption by 10% (6-16%) and 27% (22-32%), mitigating reactive N (Nr) losses by 7% (3-13%) and 24% (19-28%), and increasing N-use efficiency by 30% (3-57%) and 36% (8-64%) for ON and EON, respectively. This study identifies and targets subregions with disproportionate environmental impacts and proposes N rate strategies to limit national Nr pollution below proposed environmental thresholds, without compromising soil N stocks or economic benefits for smallholders. Thereafter, the preferable N strategy is allocated to each region based on the trade-off between economic risk and environmental benefit. To facilitate the adoption of the annually revised subregional N rate strategy, several recommendations are provided, including a monitoring network, fertilization quotas, and smallholder subsidies.
Deep learning implementation of image segmentation in agricultural applications: a comprehensive review
Image segmentation is a crucial task in computer vision, which divides a digital image into multiple segments and objects. In agriculture, image segmentation is extensively used for crop and soil monitoring, predicting the best times to sow, fertilize, and harvest, estimating crop yield, and detecting plant diseases. However, image segmentation faces difficulties in agriculture, such as the challenges of disease staging recognition, labeling inconsistency, and changes in plant morphology with the environment. Consequently, we have conducted a comprehensive review of image segmentation techniques based on deep learning, exploring the development and prospects of image segmentation in agriculture. Deep learning-based image segmentation solutions widely used in agriculture are categorized into eight main groups: encoder-decoder structures, multi-scale and pyramid-based methods, dilated convolutional networks, visual attention models, generative adversarial networks, graph neural networks, instance segmentation networks, and transformer-based models. In addition, the applications of image segmentation methods in agriculture are presented, such as plant disease detection, weed identification, crop growth monitoring, crop yield estimation, and counting. Furthermore, a collection of publicly available plant image segmentation datasets has been reviewed, and the evaluation and comparison of performance for image segmentation algorithms have been conducted on benchmark datasets. Finally, there is a discussion of the challenges and future prospects of image segmentation in agriculture.