Catalogue Search | MBRL
84 result(s) for "runway detection"
CR-Mask RCNN: An Improved Mask RCNN Method for Airport Runway Detection and Segmentation in Remote Sensing Images
2025
Airport runways, as the core part of airports, belong to vital national infrastructure, and the target detection and segmentation of airport runways in remote sensing images using deep learning methods have significant research value. Most existing deep learning-based airport target detection methods rely on horizontal bounding boxes for localization, which often contain irrelevant background information. Moreover, when detecting multiple intersecting airport runways in a single remote sensing image, false positives and false negatives often occur. To address these challenges, this study proposes an end-to-end remote sensing image airport runway detection and segmentation method based on an improved Mask RCNN (CR-Mask RCNN). The proposed method uses a rotated region generation network instead of a non-rotated one, allowing it to generate rotated bounding boxes that fit the shape of the airport runway more closely, thus avoiding the large amount of irrelevant background information introduced by horizontal bounding boxes. Furthermore, the method incorporates an attention mechanism into the backbone feature extraction network to allocate attention across airport runway feature map scales, which enhances the extraction of local feature information, captures detail more effectively, and reduces false positives and false negatives when detecting airport runway targets. The results indicate that rotated bounding boxes are more precise than horizontal ones for detecting and segmenting airport runways against complex backgrounds, and that incorporating an attention mechanism further improves runway recognition accuracy, making the method effective and practical.
Journal Article
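The gain from rotated boxes described in the abstract above can be made concrete with a minimal sketch (illustrative only, not the paper's code): parameterize a box as (cx, cy, w, h, θ) and compare the area of its axis-aligned enclosure with its own area.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, theta):
    """Return the 4 corners of a box given as (cx, cy, w, h, theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])               # 2-D rotation matrix
    half = np.array([[-w/2, -h/2], [w/2, -h/2],
                     [w/2,  h/2], [-w/2,  h/2]])  # corners about the origin
    return half @ R.T + np.array([cx, cy])        # rotate, then translate

def axis_aligned_area_ratio(cx, cy, w, h, theta):
    """Area of the axis-aligned box enclosing the rotated box, over w*h."""
    pts = rotated_box_corners(cx, cy, w, h, theta)
    spans = pts.max(axis=0) - pts.min(axis=0)
    return (spans[0] * spans[1]) / (w * h)
```

For a 100 px × 10 px runway rotated 45°, the enclosing horizontal box covers 6.05 times the runway's own area; the excess is exactly the irrelevant background a rotated region proposal avoids.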
Monocular-Vision-Based Precise Runway Detection Applied to State Estimation for Carrier-Based UAV Landing
2022
Improving the level of autonomy during the landing phase helps promote the full-envelope autonomous flight capability of unmanned aerial vehicles (UAVs). Aiming at the identification of potential landing sites, an end-to-end state estimation method for the autonomous landing of carrier-based UAVs based on monocular vision is proposed in this paper, which allows them to discover landing sites in flight using onboard optical sensors and to avoid a crash or damage during normal and emergency landings. This scheme addresses two problems: the accuracy required for runway detection and the precision required for UAV state estimation. First, we design a robust runway detection framework on the basis of YOLOv5 (you only look once, ver. 5) with four modules: a data augmentation layer, a feature extraction layer, a feature aggregation layer and a target prediction layer. Then, a corner prediction method based on geometric features is introduced into the prediction model of the detection framework, which enables the predicted landing field to fit the runway appearance more precisely. For the simulation experiments, we developed monocular-vision datasets for carrier-based UAV landing. Our method was implemented with the help of the PyTorch deep learning framework, which supports the dynamic and efficient construction of a detection network. Results showed that the proposed method achieved higher precision and better state estimation performance during carrier-based UAV landings.
Journal Article
Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data
2025
Advancements in aviation technology have made intelligent navigation systems essential for improving flight safety and efficiency, particularly in low-visibility conditions. Radar and GPS systems face limitations in bad weather, making visible–infrared sensor fusion a promising alternative. This study proposes a salient object detection (SOD) method that integrates visible and infrared sensors for robust airport runway detection in complex environments. We introduce a large-scale visible–infrared runway dataset (RDD5000) and develop a SOD algorithm capable of detecting salient targets from unaligned visible and infrared images. To enable real-time processing, we design a lightweight dual-modal fusion network (DCFNet) with an independent–shared encoder and a cross-layer attention mechanism to enhance feature extraction and fusion. Experimental results show that the MobileNetV2-based lightweight version achieves 155 FPS on a single GPU, significantly outperforming previous methods such as DCNet (4.878 FPS) and SACNet (27 FPS), making it suitable for real-time deployment on airborne systems. This work offers a novel and efficient solution for intelligent navigation in aviation.
Journal Article
Optical Navigation Sensor for Runway Relative Positioning of Aircraft during Final Approach
by Manecy, Augustin; Hiba, Antal; Gáti, Attila
in automatic landing; Engineering Sciences; on-board vision system
2021
Precise navigation is often performed by fusing data from several sensors. Among these, optical sensors use image features to obtain the position and attitude of the camera. Runway relative navigation during final approach is a special case in which robust and continuous detection of the runway is required. This paper presents a robust threshold marker detection method for monocular cameras and introduces an on-board real-time implementation with flight test results. Results with narrow and wide field-of-view optics are compared. The image processing approach is also evaluated on image data captured by a different on-board system. The purely optical approach of this paper increases sensor redundancy because, unlike most robust runway detectors, it does not require input from an inertial sensor.
Journal Article
YOMO-Runwaynet: A Lightweight Fixed-Wing Aircraft Runway Detection Algorithm Combining YOLO and MobileRunwaynet
2024
The runway detection algorithm for fixed-wing aircraft is a hot topic in the field of aircraft visual navigation. High accuracy, high fault tolerance, and lightweight design are the core requirements in the domain of runway feature detection. This paper aims to address these needs by proposing a lightweight runway feature detection algorithm named YOMO-Runwaynet, designed for edge devices. The algorithm features a lightweight network architecture that follows the YOMO inference framework, combining the advantages of YOLO and MobileNetV3 in feature extraction and operational speed. Firstly, a lightweight attention module is introduced into MnasNet, and the improved MobileNetV3 is employed as the backbone network to enhance the feature extraction efficiency. Then, PANet and SPPnet are incorporated to aggregate the features from multiple effective feature layers. Subsequently, to reduce latency and improve efficiency, YOMO-Runwaynet generates a single optimal prediction for each object, eliminating the need for non-maximum suppression (NMS). Finally, experimental results on embedded devices demonstrate that YOMO-Runwaynet achieves a detection accuracy of over 89.5% on the ATD (Aerovista Runway Dataset), with a pixel error rate of less than 0.003 for runway keypoint detection, and an inference speed exceeding 90.9 FPS. These results indicate that the YOMO-Runwaynet algorithm offers high accuracy and real-time performance, providing effective support for the visual navigation of fixed-wing aircraft.
Journal Article
Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection
by Eberle, Henry; Vaidyanathan, Ravi; Fadhil, Ahmed F.
in Aircraft; Algorithms; Computer engineering
2019
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runway and horizons as well as enhancing the awareness of surrounding terrain is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step to align EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
Journal Article
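The sub-band fusion idea in the abstract above can be sketched with a single-level Haar DWT in plain NumPy. This is a toy stand-in for the paper's four DWT fusion rules; the rule shown here (average the approximation band, keep the stronger detail coefficient) is one common choice, not necessarily the authors'. Inputs are assumed registered and of even dimensions.

```python
import numpy as np

def haar2(x):
    """One-level 2-D orthonormal Haar DWT -> (LL, LH, HL, HH)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # transform along rows
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)       # then along columns
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1])); hi = np.empty_like(lo)
    lo[0::2] = (ll + lh) / np.sqrt(2); lo[1::2] = (ll - lh) / np.sqrt(2)
    hi[0::2] = (hl + hh) / np.sqrt(2); hi[1::2] = (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = (lo + hi) / np.sqrt(2); x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def fuse(a, b):
    """Fuse two registered images: average the approximation band,
    keep the larger-magnitude detail coefficient per location."""
    A, B = haar2(a), haar2(b)
    ll = (A[0] + B[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(A[1:], B[1:])]
    return ihaar2(ll, *details)
```

Because the transform is orthonormal, `ihaar2(*haar2(x))` reconstructs `x` exactly, and fusing an image with itself returns the image unchanged.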
Infrared-Inertial Navigation for Commercial Aircraft Precision Landing in Low Visibility and GPS-Denied Environments
2019
This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low visibility and Global Position System (GPS)-denied environments. Within a Square-root Unscented Kalman Filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images and airport geo-information are integrated to estimate the position, velocity and attitude of the aircraft during landing. The homography between the synthetic image and the real image, which encodes the camera pose deviation, serves as the vision measurement. To accurately extract real runway features, the current runway detection results are used as prior knowledge for detection in the next frame. To avoid the ambiguity of multiple homography decomposition solutions, the homography is converted directly to a vector and fed to the SR_UKF. Moreover, the proposed navigation system is proven to be observable by nonlinear observability analysis. Finally, a general-aviation aircraft was equipped with vision and inertial sensors to collect flight data for algorithm verification. The experimental results demonstrate that the proposed method can be used for the precise landing of commercial aircraft in low visibility and GPS-denied environments.
Journal Article
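The "homography as a vector measurement" step above can be illustrated as follows. The normalization convention (dividing by H[2,2] and dropping the now-constant last entry) is an assumption for illustration; the paper does not specify its convention in this abstract.

```python
import numpy as np

def homography_measurement(H):
    """Flatten a 3x3 homography into an 8-vector measurement.

    A homography is defined only up to scale, so it is first normalized
    by its bottom-right element; the remaining 8 entries then form a
    scale-invariant measurement vector suitable for a filter update.
    """
    H = np.asarray(H, dtype=float)
    H = H / H[2, 2]            # fix the projective scale
    return H.reshape(-1)[:8]   # drop the constant last entry
```

Normalizing first makes the measurement invariant to the arbitrary scale of the estimated homography, so two homographies describing the same pose deviation map to the same vector.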
Deep Learning-Based Navigation System for Automatic Landing Approach of Fixed-Wing UAVs in GNSS-Denied Environments
2025
The Global Navigation Satellite System (GNSS) is widely used in various applications of UAVs (unmanned aerial vehicles) that require precise positioning or navigation. However, GNSS signals can be blocked in specific environments and are susceptible to jamming and spoofing, which will degrade the performance of navigation systems. In this study, a deep learning-based navigation system for the automatic landing of fixed-wing UAVs in GNSS-denied environments is proposed to serve as an alternative navigation system. Most visual-based runway landing systems are typically focused on runway detection and localization while neglecting the issue of integrating the localization solution into flight control and guidance laws to become a complete real-time automatic landing system. This study addresses these problems by combining runway detection and localization methods, YOLOv8 and CNN (convolutional neural network) regression, to demonstrate the robustness of deep learning approaches. Moreover, a line detection method is employed to accurately align the UAV with the runway, effectively resolving issues related to runway contours. In the control phase, the guidance law and controller are designed to ensure the stable flight of the UAV. Based on a deep learning model framework, this study conducts experiments within the simulation environment, verifying system stability under various assumed conditions, thereby avoiding the risks associated with real-world testing. The simulation results demonstrate that the UAV can achieve automatic landing on 3-degree and 5-degree glide slopes, whether it is directly aligned with the runway or deviating from it, with trajectory tracking errors within 10 m.
Journal Article
YOLO-RWY: A Novel Runway Detection Model for Vision-Based Autonomous Landing of Fixed-Wing Unmanned Aerial Vehicles
2024
In scenarios where global navigation satellite systems (GNSSs) and radio navigation systems are denied, vision-based autonomous landing (VAL) for fixed-wing unmanned aerial vehicles (UAVs) becomes essential. Accurate and real-time runway detection in VAL is vital for providing precise positional and orientational guidance. However, existing research faces significant challenges, including insufficient accuracy, inadequate real-time performance, poor robustness, and high susceptibility to disturbances. To address these challenges, this paper introduces a novel single-stage, anchor-free, and decoupled vision-based runway detection framework, referred to as YOLO-RWY. First, an enhanced data augmentation (EDA) module is incorporated to perform various augmentations, enriching image diversity, and introducing perturbations that improve generalization and safety. Second, a large separable kernel attention (LSKA) module is integrated into the backbone structure to provide a lightweight attention mechanism with a broad receptive field, enhancing feature representation. Third, the neck structure is reorganized as a bidirectional feature pyramid network (BiFPN) module with skip connections and attention allocation, enabling efficient multi-scale and across-stage feature fusion. Finally, the regression loss and task-aligned learning (TAL) assigner are optimized using efficient intersection over union (EIoU) to improve localization evaluation, resulting in faster and more accurate convergence. Comprehensive experiments demonstrate that YOLO-RWY achieves AP50:95 scores of 0.760, 0.611, and 0.413 on synthetic, real nominal, and real edge test sets of the landing approach runway detection (LARD) dataset, respectively. Deployment experiments on an edge device show that YOLO-RWY achieves an inference speed of 154.4 FPS under FP32 quantization with an image size of 640. 
The results indicate that the proposed YOLO-RWY model possesses strong generalization and real-time capabilities, enabling accurate runway detection in complex and challenging visual environments, and providing support for the onboard VAL systems of fixed-wing UAVs.
Journal Article
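The EIoU loss mentioned in the abstract above penalizes, beyond 1 − IoU, the normalized center distance and the width and height gaps relative to the smallest enclosing box. A minimal single-pair sketch in plain Python (an illustration of the published EIoU formulation, not the YOLO-RWY code):

```python
def eiou_loss(box, gt, eps=1e-9):
    """Efficient-IoU loss for two (x1, y1, x2, y2) boxes:
    L = 1 - IoU + rho^2(centers)/diag^2 + dw^2/cw^2 + dh^2/ch^2,
    where cw, ch, diag describe the smallest enclosing box."""
    x1, y1, x2, y2 = box
    gx1, gy1, gx2, gy2 = gt
    # intersection and union
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + eps)
    # smallest enclosing box
    cw = max(x2, gx2) - min(x1, gx1)
    ch = max(y2, gy2) - min(y1, gy1)
    # squared center distance and width/height differences
    rho2 = ((x1 + x2) / 2 - (gx1 + gx2) / 2) ** 2 \
         + ((y1 + y2) / 2 - (gy1 + gy2) / 2) ** 2
    dw2 = ((x2 - x1) - (gx2 - gx1)) ** 2
    dh2 = ((y2 - y1) - (gy2 - gy1)) ** 2
    return (1 - iou + rho2 / (cw**2 + ch**2 + eps)
            + dw2 / (cw**2 + eps) + dh2 / (ch**2 + eps))
```

A perfect match gives a loss of essentially zero, while disjoint boxes score above 1; the separate width and height terms are what distinguishes EIoU from CIoU's aspect-ratio term.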
Real-Time Runway Detection for Infrared Aerial Image Using Synthetic Vision and an ROI Based Level Set Method
by Liu, Changjiang; Basu, Anup; Cheng, Irene
in level set method; runway detection; synthetic vision
2018
We present a new method for real-time runway detection that combines synthetic vision with an ROI (Region of Interest) based level set method. A virtual runway from synthetic vision provides a rough region containing an infrared runway. A three-thresholding segmentation following Otsu’s binarization method is proposed to extract a runway subset from this region, which is used to construct an initial level set function. The virtual runway also gives a reference area for the actual runway in an infrared image, which helps us design a stopping criterion for the level set method. To meet real-time processing requirements, an ROI based level set evolution framework is implemented. Experimental results show that the proposed algorithm is efficient and accurate.
Journal Article
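The three-threshold scheme in the abstract above extends Otsu's classical binarization. As a reference point, the textbook single-threshold Otsu criterion (choose the threshold maximizing between-class variance) can be sketched in NumPy; this is the classical method the paper builds on, not its three-threshold variant.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Classical Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(gray, bins=bins)
    p = hist.astype(float) / hist.sum()        # per-bin probability
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # weight of class 0 per cut
    m = np.cumsum(p * centers)                 # cumulative mean per cut
    mg = m[-1]                                 # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mg * w0 - m) ** 2 / (w0 * (1 - w0))
    between = np.nan_to_num(between)           # empty classes score 0
    return centers[np.argmax(between)]
```

On a bimodal image the returned threshold falls between the two modes, which is what makes it a natural starting point for the runway/background separation described above.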