Catalogue Search | MBRL
1 result for "DualSegFormer"
Hybrid learning framework for synergistic fusion of SAR and optical UAV data in wildfire surveillance
2025
Wildfires are a critical global threat, necessitating advanced early detection and monitoring systems. This research introduces a novel multi-modal framework that integrates wide-area Synthetic Aperture Radar (SAR) for all-weather surveillance with high-resolution UAV-based optical and thermal imagery for precise analysis. The proposed hybrid learning framework utilizes FPANet, a Vision Transformer-based architecture that captures both local textures and global spatial dependencies to achieve robust segmentation from SAR data under cloudy or smoky conditions. For fine-grained analysis, the system employs DualSegFormer, a model designed for the synergistic multi-modal fusion of thermal and RGB UAV images, ensuring high-fidelity fire front delineation even when visibility is compromised. Additionally, a Vision-Language Model (VLM) is integrated to translate complex sensor data into actionable, human-readable insights for effective disaster response. Experimental results demonstrate a significant improvement over conventional methods: the SAR-based FPANet achieves an F1-score of 0.830 and an Intersection over Union (IoU) of 0.750, the UAV-based DualSegFormer attains a superior F1-score of 0.946, and the VLM component shows strong semantic alignment with a BERTScore of 0.953. These results confirm the ability of the proposed hybrid learning approach to provide more effective and reliable wildfire monitoring, thereby advancing ecosystem resilience and facilitating timely disaster response in alignment with international SDG initiatives.
Journal Article
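The abstract reports segmentation quality as F1-score and Intersection over Union (IoU). As an illustration only (the masks below are toy data, not from the paper), these two metrics are conventionally computed from binary prediction/ground-truth masks like this:

```python
# Sketch of the standard segmentation metrics named in the abstract
# (F1 and IoU). Toy 1-D binary masks; not the paper's data or code.

def confusion_counts(pred, truth):
    """Count true positives, false positives, and false negatives
    between two flat binary masks of equal length."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp, fp, fn

def iou(pred, truth):
    """Intersection over Union: overlap divided by combined area."""
    tp, fp, fn = confusion_counts(pred, truth)
    return tp / (tp + fp + fn)

def f1(pred, truth):
    """F1-score: harmonic mean of precision and recall."""
    tp, fp, fn = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

# Toy flattened masks (hypothetical values for illustration):
pred  = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
print(iou(pred, truth))  # 0.6  (3 overlapping pixels / 5 in the union)
print(f1(pred, truth))   # 0.75
```

Note that F1 is always at least as large as IoU for the same masks, which is consistent with the paper's reported pairs (e.g. F1 0.830 vs. IoU 0.750 for FPANet).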