610 results for "wildfire detection"
Deep Learning and Transformer Approaches for UAV-Based Wildfire Detection and Segmentation
Wildfires are a worldwide natural disaster causing significant economic damage and loss of life. Experts predict that wildfires will increase in the coming years, mainly due to climate change. Early detection and prediction of fire spread can help reduce affected areas and improve firefighting. Numerous systems have been developed to detect fire. Recently, Unmanned Aerial Vehicles (UAVs) have been employed to tackle this problem due to their high flexibility, low cost, and ability to cover wide areas during the day or night. However, they are still limited by challenging problems such as small fire size, background complexity, and image degradation. To deal with these limitations, we adapted and optimized Deep Learning methods to detect wildfire at an early stage. A novel deep ensemble learning method, which combines EfficientNet-B5 and DenseNet-201 models, is proposed to identify and classify wildfire in aerial images. In addition, two vision transformers (TransUNet and TransFire) and a deep convolutional model (EfficientSeg) were employed to segment wildfire regions and determine the precise fire areas. The obtained results are promising and show the efficiency of using Deep Learning and vision transformers for wildfire classification and segmentation. The proposed classification model achieved an accuracy of 85.12%, outperforming many state-of-the-art works, and proved its ability to classify wildfires even in small fire areas. The best semantic segmentation models achieved an F1-score of 99.9% for the TransUNet architecture and 99.82% for the TransFire architecture, superior to recently published models. More specifically, we demonstrated the ability of these models to extract the finer details of wildfire from aerial images. They can further overcome current model limitations, such as background complexity and small wildfire areas.
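The ensemble idea in this abstract combines the predictions of two backbone classifiers. A minimal NumPy sketch of probability-averaging, with made-up logits standing in for real EfficientNet-B5 and DenseNet-201 outputs (the equal-weight average is an assumption, not necessarily the paper's exact fusion rule):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_a, logits_b, weight_a=0.5):
    """Average the class probabilities of two backbones (e.g.
    EfficientNet-B5 and DenseNet-201) and return the predicted
    class index per image."""
    probs = weight_a * softmax(logits_a) + (1.0 - weight_a) * softmax(logits_b)
    return probs.argmax(axis=-1)

# Two images, two classes (no-fire / fire); logits are hypothetical.
logits_a = np.array([[2.0, 0.5], [0.1, 3.0]])   # model A
logits_b = np.array([[0.4, 0.6], [0.2, 2.5]])   # model B
print(ensemble_predict(logits_a, logits_b))     # prints [0 1]
```

On image 0 the two models disagree, and the more confident model dominates the averaged probabilities; this is the usual motivation for soft-voting ensembles over hard majority votes.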
Development of the User Requirements for the Canadian WildFireSat Satellite Mission
In 2019 the Canadian Space Agency initiated development of a dedicated wildfire monitoring satellite (WildFireSat) mission. The intent of this mission is to support operational wildfire management, smoke and air quality forecasting, and wildfire carbon emissions reporting. In order to deliver the mission objectives, it was necessary to identify the technical and operational challenges which have prevented broad exploitation of Earth Observation (EO) in Canadian wildfire management and to address these challenges in the mission design. In this study we emphasize the first objective by documenting the results of wildfire management end-user engagement activities which were used to identify the key Fire Management Functionalities (FMFs) required for an Earth Observation wildfire monitoring system. These FMFs are then used to define the User Requirements for the Canadian Wildland Fire Monitoring System (CWFMS) which are refined here for the WildFireSat mission. The User Requirements are divided into Observational, Measurement, and Precision requirements and form the foundation for the design of the WildFireSat mission (currently in Phase-A, summer 2020).
Thermal Infrared Sensing for Near Real-Time Data-Driven Fire Detection and Monitoring Systems
With the increasing interest in leveraging mobile robotics for fire detection and monitoring arises the need to design recognition technology systems for these extreme environments. This work focuses on evaluating the sensing capabilities and image processing pipeline of thermal imaging sensors for fire detection applications, paving the way for the development of autonomous systems for early warning and monitoring of fire events. The contributions of this work are threefold. First, we overview image processing algorithms used in thermal imaging regarding data compression and image enhancement. Second, we present a method for data-driven thermal imaging analysis designed for fire situation awareness in robotic perception. A study is undertaken to test the behavior of the thermal cameras in controlled fire scenarios, followed by an in-depth analysis of the experimental data, which reveals the inner workings of these sensors. Third, we discuss key takeaways for the integration of thermal cameras in robotic perception pipelines for autonomous unmanned aerial vehicle (UAV)-based fire surveillance.
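One enhancement step of the kind surveyed above is compressing high-bit-depth radiometric frames to 8-bit for downstream processing or display. A sketch assuming a simple percentile-based contrast stretch (the percentiles and the 14-bit ramp are illustrative; the paper does not prescribe this exact method):

```python
import numpy as np

def thermal_to_8bit(raw, low_pct=1.0, high_pct=99.0):
    """Percentile-based contrast stretch: map the [low_pct, high_pct]
    percentile range of a high-bit-depth thermal frame onto [0, 255],
    clipping outliers such as hot spots."""
    lo, hi = np.percentile(raw, [low_pct, high_pct])
    scaled = np.clip((raw - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

frame = np.linspace(0, 16383, 100).reshape(10, 10)  # fake 14-bit ramp
out = thermal_to_8bit(frame)
print(out.min(), out.max())  # prints 0 255
```

Clipping at percentiles rather than the raw min/max keeps a single saturated pixel (e.g. a flame front) from washing out the rest of the frame.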
Wildfire Detection Probability of MODIS Fire Products under the Constraint of Environmental Factors: A Study Based on Confirmed Ground Wildfire Records
The Moderate Resolution Imaging Spectroradiometer (MODIS) has been widely used for detecting wildfire occurrence and distribution and for fire risk assessment. Compared with its commission error, the omission error of MODIS wildfire detection has proven a much more challenging, unsolved issue, and the ground-level environmental factors influencing detection capacity are also variable. This study compared multiple MODIS fire products with records of ground wildfire investigations from December 2002 to November 2015 in Yunnan Province, Southwest China, in an attempt to reveal differences in the spatiotemporal patterns of regional wildfire detected by the two approaches, to estimate the omission error of MODIS fire products based on confirmed ground wildfire records, and to explore how instantaneous and local environmental factors influence the wildfire detection probability of MODIS. The results indicated that, across the province, the total number of wildfire events recorded by MODIS was at least twice that in the ground records, while the wildfire distribution patterns revealed by the two approaches were inconsistent. Of the 5145 confirmed ground records, however, only 11.10% could be detected using multiple MODIS fire products (i.e., MOD14A1, MYD14A1, and MCD64A1). Opposite trends over the study period were found between the yearly occurrence of ground-based wildfire records and the corresponding proportion detected by MODIS. Moreover, the wildfire detection proportion of MODIS was 11.36% in forest, 9.58% in shrubs, and 5.56% in grassland. Random forest modeling suggested that fire size was the primary factor limiting MODIS fire detection capacity: fires smaller than about 1 ha were likely to be missed, while MODIS had a 50% probability of detecting a wildfire of at least 18 ha. Aside from fire size, the wildfire detection probability of MODIS was also markedly influenced by weather factors, especially daily relative humidity and daily wind speed, and by the altitude of wildfire occurrence. Considering these environmental factors' contribution to the omission error in MODIS wildfire detection, we emphasize the importance of attention to local conditions, as well as ground inspection, in practical wildfire monitoring and management and in global wildfire simulations.
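The size-dependence reported in this abstract can be pictured with a detection-probability curve anchored at the paper's 50%-at-18-ha figure. The logistic form and slope below are purely illustrative (the study itself fit a random forest, not this closed form):

```python
import numpy as np

def detection_probability(size_ha, p50_ha=18.0, k=0.25):
    """Illustrative logistic curve for MODIS detection probability vs.
    fire size, anchored so that p = 0.5 at p50_ha (the paper's 18 ha
    figure). The slope k is a made-up value for demonstration only."""
    return 1.0 / (1.0 + np.exp(-k * (size_ha - p50_ha)))

print(round(detection_probability(18.0), 2))  # prints 0.5
print(detection_probability(1.0) < 0.05)      # small fires are likely missed
```

Any monotone curve through (18 ha, 0.5) that is near zero around 1 ha tells the same qualitative story as the paper's random forest result: omission error is dominated by small fires.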
Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery
To minimize the damage caused by wildfires, deep learning-based wildfire-detection technologies that extract features and patterns from surveillance camera images have been developed. However, many studies on deep learning-based wildfire-image classification have highlighted the imbalance between wildfire-image data and forest-image data, which degrades model performance. In this study, wildfire images were generated using a cycle-consistent generative adversarial network (CycleGAN) to eliminate the data imbalance. In addition, a densely-connected-convolutional-networks-based (DenseNet-based) framework was proposed and its performance compared with pre-trained models. When trained on a set augmented with GAN-generated images, the proposed DenseNet-based model achieved the best result among the compared models, with an accuracy of 98.27% and an F1 score of 98.16 on the test dataset. Finally, the trained model was applied to high-quality drone images of wildfires. The experimental results showed that the proposed framework achieves high wildfire-detection accuracy.
Computationally Efficient Wildfire Detection Method Using a Deep Convolutional Network Pruned via Fourier Analysis
In this paper, we propose a deep convolutional neural network for camera-based wildfire detection. We train the network via transfer learning and use a window-based analysis strategy to increase the fire detection rate. To achieve computational efficiency, we compute the frequency response of the kernels in the convolutional and dense layers and eliminate filters with low-energy impulse responses. Moreover, to reduce storage on edge devices, we compare the convolutional kernels in the Fourier domain and discard similar filters using the cosine similarity measure in the frequency domain. We test the performance of the network on a variety of wildfire video clips; the pruned system performs as well as the regular network in daytime wildfire detection and also works well on several nighttime wildfire video clips.
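The two pruning criteria in this abstract (drop low-energy filters, drop Fourier-domain near-duplicates) can be sketched as follows. The thresholds and the toy kernels are made up; the paper's actual cutoffs are not given here:

```python
import numpy as np

def prune_kernels(kernels, energy_thresh=0.1, sim_thresh=0.98):
    """Return indices of conv kernels kept under two criteria:
    (1) discard filters whose magnitude frequency response has low
        total energy (weak impulse response);
    (2) discard filters whose Fourier-domain cosine similarity to an
        already-kept filter is near 1 (redundant duplicates)."""
    kept, kept_specs = [], []
    for i, k in enumerate(kernels):
        spec = np.abs(np.fft.fft2(k)).ravel()   # magnitude response
        if np.sum(spec ** 2) < energy_thresh:   # criterion 1: low energy
            continue
        unit = spec / np.linalg.norm(spec)
        if any(float(unit @ s) > sim_thresh for s in kept_specs):
            continue                            # criterion 2: near-duplicate
        kept.append(i)
        kept_specs.append(unit)
    return kept

rng = np.random.default_rng(0)
k0 = rng.standard_normal((3, 3))
kernels = [k0, 1e-4 * rng.standard_normal((3, 3)), 2.0 * k0]
print(prune_kernels(kernels))  # prints [0]
```

Filter 1 is rejected for low energy and filter 2, a scaled copy of filter 0, has an identical normalized spectrum and is rejected as redundant; only filter 0 survives.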
Fire Detection with Deep Learning: A Comprehensive Review
Wildfires are a critical driver of landscape transformation on Earth, representing a dynamic and ephemeral process that poses challenges for accurate early detection. To address this challenge, researchers have increasingly turned to deep learning techniques, which have demonstrated remarkable potential in enhancing the performance of wildfire detection systems. This paper provides a comprehensive review of fire detection using deep learning, spanning from 1990 to 2023. This study employed a comprehensive approach, combining bibliometric analysis, qualitative and quantitative methods, and systematic review techniques to examine the advancements in fire detection using deep learning in remote sensing. It unveils key trends in publication patterns, author collaborations, and thematic focuses, emphasizing the remarkable growth in fire detection using deep learning in remote sensing (FDDL) research, especially from the 2010s onward, fueled by advancements in computational power and remote sensing technologies. The review identifies “Remote Sensing” as the primary platform for FDDL research dissemination and highlights the field’s collaborative nature, with an average of 5.02 authors per paper. The co-occurrence network analysis reveals diverse research themes, spanning technical approaches and practical applications, with significant contributions from China, the United States, South Korea, Brazil, and Australia. Highly cited papers are explored, revealing their substantial influence on the field’s research focus. The analysis underscores the practical implications of integrating high-quality input data and advanced deep-learning techniques with remote sensing for effective fire detection. It provides actionable recommendations for future research, emphasizing interdisciplinary and international collaboration to propel FDDL technologies and applications. The study’s conclusions highlight the growing significance of FDDL technologies and the necessity for ongoing advancements in computational and remote sensing methodologies. The practical takeaway is clear: future research should prioritize enhancing the synergy between deep learning techniques and remote sensing technologies to develop more efficient and accurate fire detection systems, ultimately fostering groundbreaking innovations.
AI for Wildfire Management: From Prediction to Detection, Simulation, and Impact Analysis—Bridging Lab Metrics and Real-World Validation
Artificial intelligence (AI) offers several opportunities in wildfire management, particularly for improving short- and long-term fire occurrence forecasting, spread modeling, and decision-making. When properly adapted beyond research into real-world settings, AI can significantly reduce risks to human life, as well as ecological and economic damages. However, despite increasingly sophisticated research, the operational use of AI in wildfire contexts remains limited. In this article, we review the main domains of wildfire management where AI has been applied—susceptibility mapping, prediction, detection, simulation, and impact assessment—and highlight critical limitations that hinder practical adoption. These include challenges with dataset imbalance and accessibility, the inadequacy of commonly used metrics, the choice of prediction formats, and the computational costs of large-scale models, all of which reduce model trustworthiness and applicability. Beyond synthesizing existing work, our survey makes four explicit contributions: (1) we provide a reproducible taxonomy supported by detailed dataset tables, emphasizing both the reliability and shortcomings of frequently used data sources; (2) we propose evaluation guidance tailored to imbalanced and spatial tasks, stressing the importance of using accurate metrics and format; (3) we provide a complete state of the art, highlighting important issues and recommendations to enhance models’ performances and reliability from susceptibility to damage analysis; (4) we introduce a deployment checklist that considers cost, latency, required expertise, and integration with decision-support and optimization systems. By bridging the gap between laboratory-oriented models and real-world validation, our work advances prior reviews and aims to strengthen confidence in AI-driven wildfire management while guiding future research toward operational applicability.
Computer vision for wildfire detection: a critical brief review
In this critical brief review, we explore the pivotal role of computer vision in wildfire detection, following the PRISMA methodology and focusing on 35 key studies published between 2018 and 2023. Notably, convolutional neural networks, including models like YOLOv5, Inception v3, MobileNetV2, and Faster R-CNN, have emerged as the preferred choice for researchers in this field. Object detection emerges as the predominant computer vision task employed for wildfire identification. The review underscores a rising trend where researchers opt to utilize existing image datasets or create their own, incorporating various imaging modalities, from conventional RGB to thermal and infrared imagery. Unmanned aerial vehicles have gained increasing prominence for data collection, though they come with challenges such as limited battery life and data transmission bottlenecks. While alternative deployment methods like ground stations are considered, the review reveals a significant gap in the literature regarding the practical deployment of satellite systems and advanced monitoring systems for wildfire detection, pointing to a need for comprehensive studies on their operational viability and maintenance costs. Overall, this study aims to broaden the understanding of the complex interplay between wildfire detection and computer vision, highlighting the need for future solutions to be both technologically innovative and operationally viable.
Hybrid learning framework for synergistic fusion of SAR and optical UAV data in wildfire surveillance
Wildfires are a critical global threat, necessitating advanced early detection and monitoring systems. This research introduces a novel multi-modal framework that integrates wide-area Synthetic Aperture Radar (SAR) for all-weather surveillance with high-resolution UAV-based optical and thermal imagery for precise analysis. The proposed hybrid learning framework utilizes FPANet, a Vision Transformer-based architecture that captures both local textures and global spatial dependencies to achieve robust segmentation from SAR data under cloudy or smoky conditions. For fine-grained analysis, the system employs DualSegFormer, a model designed for the synergistic multi-modal fusion of thermal and RGB UAV images, ensuring high-fidelity fire front delineation even when visibility is compromised. Additionally, a Vision-Language Model (VLM) is integrated to translate complex sensor data into actionable, human-readable insights for effective disaster response. Experimental results demonstrate a significant improvement over conventional methods: the SAR-based FPANet achieves an F1-score of 0.830 and an Intersection over Union (IoU) of 0.750, the UAV-based DualSegFormer attains a superior F1-score of 0.946, and the VLM component shows strong semantic alignment with a BERTScore of 0.953. These results confirm the ability of the proposed hybrid learning approach to provide more effective and reliable wildfire monitoring, thereby advancing ecosystem resilience and facilitating timely disaster response in alignment with international SDG initiatives.
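F1 and IoU, the two segmentation metrics reported for FPANet and DualSegFormer above, are straightforward to compute from binary masks. A self-contained sketch on toy data (the masks are made up; real evaluation would use fire-front annotations):

```python
import numpy as np

def seg_scores(pred, truth):
    """IoU (intersection over union) and F1 (Dice) for a pair of
    binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union
    f1 = 2 * inter / (pred.sum() + truth.sum())
    return iou, f1

pred = np.array([[1, 1], [0, 0]])   # predicted fire pixels
truth = np.array([[1, 0], [0, 0]])  # ground-truth fire pixels
iou, f1 = seg_scores(pred, truth)
print(round(float(iou), 3), round(float(f1), 3))  # prints 0.5 0.667
```

Note that F1 is always at least as large as IoU for the same masks (F1 = 2·IoU/(1+IoU)), which is consistent with the abstract reporting a higher F1 (0.830) than IoU (0.750) for the same model.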