Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
26 result(s) for "local saliency detection"
Spectral–Spatial Feature Fusion for Hyperspectral Anomaly Detection
2024
Hyperspectral anomaly detection is used to recognize unusual patterns or anomalies in hyperspectral data. Many spectral–spatial detection methods have been proposed in a cascaded manner; however, they often neglect the complementary characteristics of the spectral and spatial dimensions, which easily leads to a high false alarm rate. To alleviate this issue, a spectral–spatial information fusion (SSIF) method is designed for hyperspectral anomaly detection. First, an isolation forest is exploited to obtain the spectral anomaly map, in which the object-level feature is constructed with an entropy rate segmentation algorithm. Then, a local spatial saliency detection scheme is proposed to produce the spatial anomaly result. Finally, the spectral and spatial anomaly scores are integrated, followed by domain transform recursive filtering, to generate the final detection result. Experiments on five hyperspectral datasets covering ocean and airport scenes prove that the proposed SSIF produces superior detection results over other state-of-the-art detection techniques.
Journal Article
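The final fusion step described in this abstract could be sketched as below. This is a minimal illustration with hypothetical function names; it omits the paper's isolation forest, entropy rate segmentation, and domain transform recursive filtering, and simply combines two pre-computed anomaly maps.

```python
import numpy as np

def fuse_anomaly_scores(spectral, spatial, eps=1e-8):
    """Fuse a spectral and a spatial anomaly map multiplicatively
    after min-max normalising each to [0, 1]."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + eps)
    return norm(spectral) * norm(spatial)
```

Multiplicative fusion keeps a pixel only when both maps agree it is anomalous, which is one simple way to exploit the complementarity of the two dimensions.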
A novel method for vehicle headlights detection using salient region segmentation and PHOG feature
2021
In this paper, we address the problem of detecting vehicle headlights in nighttime traffic surveillance images that contain strong reflections. At night, reflections on the road (water) surface, on vehicle bodies, or on reflective objects (such as lane markings and traffic signs) seriously interfere with headlight detection. Although existing methods have achieved good results, most of them fail to detect headlights that are far from the camera. To solve this issue, we propose a novel method for vehicle headlight detection. The proposed method makes full use of the brightness and gradient information of headlights at night. First, we propose an effective region-of-interest (ROI) segmentation method based on multi-scale local saliency detection. The method preserves faint or small objects and retains the original shape of each object to the greatest extent. Then, we compute pyramid histogram of oriented gradients (PHOG) features, which are used to train a support vector machine (SVM) classifier. Finally, the extracted bright blocks are classified by the pre-trained SVM classifier. Experimental results and quantitative evaluations in different scenes demonstrate that our proposed method achieves better results than previous methods.
Journal Article
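The multi-scale local saliency idea mentioned in this abstract might be sketched as centre-versus-neighbourhood contrast computed at several window sizes. This naive version (illustrative names, brute-force loops, suitable only for small images) is a stand-in for the paper's actual segmentation method.

```python
import numpy as np

def local_saliency(img, scales=(3, 7, 15)):
    """Average, over several window sizes, the absolute difference
    between each pixel and the mean of its local neighbourhood."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    sal = np.zeros((H, W))
    for k in scales:
        r = k // 2
        for y in range(H):
            for x in range(W):
                patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                sal[y, x] += abs(img[y, x] - patch.mean())
    return sal / len(scales)
```

Small bright blobs such as distant headlights stand out against their neighbourhood at the finer scales, which is why a multi-scale contrast can preserve them where a single coarse scale would not.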
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
by Yang, Tao; Zhang, Yanning; He, Zhannan
in local saliency map, motion heat map, moving vehicle detection
2016
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic patterns. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects, based on feature extraction. However, moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection in satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is built dynamically. Then, a novel saliency-based background model that intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously.
Journal Article
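The first stage in this abstract, accumulating foreground motion into a heat map, can be sketched as below. Names and the simple background-subtraction rule are illustrative; the paper's own segmentation and trajectory accumulation are more elaborate.

```python
import numpy as np

def motion_heat_map(frames, background, thresh=20):
    """Accumulate per-frame foreground masks (|frame - background| > thresh)
    into a heat map normalised by the number of frames."""
    heat = np.zeros(background.shape, dtype=float)
    for f in frames:
        heat += (np.abs(f.astype(float) - background) > thresh)
    return heat / len(frames)
```

Pixels that are foreground in many frames accumulate high heat, so thresholding the heat map yields the "hot regions" in which the saliency-based background model then searches for vehicles.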
High-Speed Spatial–Temporal Saliency Model: A Novel Detection Method for Infrared Small Moving Targets Based on a Vectorized Guided Filter
2024
Infrared (IR) imaging-based detection systems are of vital significance in the domains of early warning and security, necessitating high precision and efficiency in infrared small moving target detection. IR targets often appear dim and small relative to the background, are easily buried in noise, and are difficult to detect. A novel high-speed spatial–temporal saliency model (HS-STSM) based on a guided filter (GF) is proposed, which introduces the GF into IR target detection to extract local anisotropy saliency in the spatial domain and substantially suppresses the background region as well as bright clutter false alarms in the background. Moreover, the proposed model extracts the motion saliency of the target in the temporal domain through vectorization of IR image sequences. Additionally, the model significantly improves detection efficiency through a vectorized filtering process and effectively suppresses edge components in the background by integrating a prior weight. Experiments conducted on five real infrared image sequences demonstrate the superior performance of the model compared to existing algorithms in terms of detection rate, noise suppression, real-time processing, and robustness to the background.
Journal Article
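The temporal side of such a model, motion saliency over a vectorised frame stack, could be approximated as deviation from a per-pixel temporal median. This is a stand-in for the paper's method, not its actual formulation; names are illustrative.

```python
import numpy as np

def temporal_saliency(stack):
    """Motion saliency of the last frame in a (T, H, W) sequence:
    per-pixel absolute deviation from the temporal median background."""
    stack = np.asarray(stack, dtype=float)
    return np.abs(stack[-1] - np.median(stack, axis=0))
```

Because the whole stack is processed with array operations rather than per-pixel loops, this kind of formulation is what makes a vectorised saliency model fast enough for real-time use.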
A2G-SRNet: An Adaptive Attention-Guided Transformer and Super-Resolution Network for Enhanced Aircraft Detection in Satellite Imagery
2025
Accurate aircraft detection in remote sensing imagery is critical for aerospace surveillance, military reconnaissance, and aviation security but remains fundamentally challenged by extreme scale variations, arbitrary orientations, and dense spatial clustering in high-resolution scenes. This paper presents an adaptive attention-guided super-resolution network that integrates multi-scale feature learning with saliency-aware processing to address these challenges. Our architecture introduces three key innovations: (1) A hierarchical coarse-to-fine detection pipeline that first identifies potential regions in downsampled imagery before applying precision refinement, (2) A saliency-aware tile selection module employing learnable attention tokens to dynamically localize aircraft-dense regions without manual thresholds, and (3) A local tile refinement network combining transformer-based super-resolution for target regions with efficient upsampling for background areas. Extensive experiments on DIOR and FAIR1M benchmarks demonstrate state-of-the-art performance, achieving 93.1% AP50 (DIOR) and 83.2% AP50 (FAIR1M), significantly outperforming existing super-resolution-enhanced detectors. The proposed framework offers an adaptive sensing solution for satellite-based aircraft detection, effectively mitigating scale variations and background clutter in real-world operational environments.
Journal Article
Target Detection in High-Resolution SAR Image via Iterating Outliers and Recursing Saliency Depth
2021
In target detection for high-resolution Synthetic Aperture Radar (SAR) images, segmenting before detecting is the most common approach. After the image is segmented by a superpixel method, each segmented region is usually a mixture of target and background, but existing regional feature models do not take this into account and cannot accurately reflect the features of the SAR image. Therefore, we propose a target detection method based on iterating outliers and recursing saliency depth. First, we use conditional entropy to model the features of a superpixel region, which is more in line with actual SAR image features. Then, through iterative anomaly detection, we achieve effective background selection and detection threshold design. After that, recursing saliency depth is used to enhance the effective outliers and suppress background false alarms, correcting the superpixel saliency values. Finally, a local graph model is used to optimize the detection results. Compared with Constant False Alarm Rate (CFAR) and Weighted Information Entropy (WIE) methods, the results show that our method performs better and is more in line with the actual situation.
Journal Article
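The iterative anomaly detection step, selecting the background by repeatedly excluding outliers, might be sketched on scalar scores as follows. This is a generic sigma-clipping sketch with illustrative names, not the paper's conditional-entropy formulation.

```python
import numpy as np

def iterative_outliers(values, k=3.0, max_iter=10):
    """Iteratively flag values above mean + k*std of the remaining
    (background) set until the outlier mask stabilises."""
    vals = np.asarray(values, dtype=float)
    mask = np.zeros(vals.shape, dtype=bool)
    for _ in range(max_iter):
        bg = vals[~mask]
        new = vals > bg.mean() + k * bg.std()
        if (new == mask).all():
            break
        mask = new
    return mask
```

Re-estimating the background statistics after each exclusion keeps strong outliers from inflating the threshold, which is the point of iterating rather than thresholding once.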
Dense Connectivity Based Two-Stream Deep Feature Fusion Framework for Aerial Scene Classification
by Yu, Yunlong; Liu, Fuxian
in aerial scene classification, Architecture, Artificial intelligence
2018
Aerial scene classification is an active and challenging problem in high-resolution remote sensing imagery understanding. Deep learning models, especially convolutional neural networks (CNNs), have achieved prominent performance in this field, and the extraction of deep features from the layers of a CNN model is widely used in these methods. Although CNN-based approaches have obtained great success, there is still plenty of room to further increase the classification accuracy; in fact, fusion with other features has great potential to lead to better performance. Therefore, we propose two effective architectures based on the idea of feature-level fusion. The first architecture, the texture coded two-stream deep architecture, uses the raw RGB network stream and the mapped local binary patterns (LBP) coded network stream to extract two different sets of features and fuses them using a novel deep feature fusion model. In the second architecture, the saliency coded two-stream deep architecture, we employ the saliency coded network stream as the second stream and fuse it with the raw RGB network stream using the same feature fusion model. For the sake of validation and comparison, our proposed architectures are evaluated via comprehensive experiments on three publicly available remote sensing scene datasets. The classification accuracies of the saliency coded two-stream architecture with our feature fusion model reach 97.79%, 98.90%, 94.09%, 95.99%, 85.02%, and 87.01% on the UC-Merced dataset (50% and 80% training samples), the Aerial Image Dataset (AID) (20% and 50% training samples), and the NWPU-RESISC45 dataset (10% and 20% training samples), respectively, outperforming state-of-the-art methods.
Journal Article
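The mapped LBP stream starts from plain local binary pattern codes; a basic 8-neighbour version is shown below. This is an illustrative sketch, not the paper's exact mapping into the network input.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for interior pixels:
    each neighbour >= centre contributes one bit to an 8-bit code."""
    img = np.asarray(img)
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    H, W = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

The code image encodes local texture independently of absolute brightness, which is why it complements the raw RGB stream in a feature-level fusion.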
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map
2015
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. First, candidate regions that may contain aircraft are detected from the apron area. Second, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, targets are detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate.
Journal Article
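CFAR-type segmentation of a saliency map can be sketched as thresholding at a quantile of assumed background statistics. The toy version below assumes a single global Gaussian background, a simplification of practical CFAR detectors, which estimate statistics locally; names are illustrative.

```python
from statistics import NormalDist

import numpy as np

def cfar_detect(saliency, pfa=1e-3):
    """Threshold a saliency map at mean + q*std, where q is the Gaussian
    quantile matching the desired probability of false alarm (pfa)."""
    q = NormalDist().inv_cdf(1.0 - pfa)
    return saliency > saliency.mean() + q * saliency.std()
```

Fixing the probability of false alarm, rather than a raw threshold, is what makes the detector adaptive across scenes with different background levels.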
Saliency guided local and global descriptors for effective action recognition
by Lai, Yu-Kun; Sun, Xianfang; Abdulmunem, Ashwan
in Algorithms, Artificial Intelligence, Computer Graphics
2016
This paper presents a novel framework for human action recognition based on salient object detection and a new combination of local and global descriptors. We first detect salient objects in video frames and extract features only for those objects. We then use a simple strategy to identify and process only the video frames that contain salient objects. Processing salient objects instead of all frames not only makes the algorithm more efficient but, more importantly, also suppresses the interference of background pixels. We combine this approach with a new combination of local and global descriptors, namely 3D-SIFT and histograms of oriented optical flow (HOOF), respectively. The resulting saliency guided 3D-SIFT–HOOF (SGSH) feature is used along with a multi-class support vector machine (SVM) classifier for human action recognition. Experiments conducted on the standard KTH and UCF-Sports action benchmarks show that our new method outperforms competing state-of-the-art spatiotemporal feature-based human action recognition methods.
Journal Article
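The HOOF descriptor named in this abstract is, in its standard form, a magnitude-weighted histogram of flow directions. A minimal sketch, assuming flow fields are already computed (names are illustrative):

```python
import numpy as np

def hoof(flow_u, flow_v, bins=8):
    """Histogram of oriented optical flow: bin flow vectors by angle,
    weight each by its magnitude, then L1-normalise."""
    ang = np.arctan2(flow_v, flow_u)                      # in [-pi, pi]
    mag = np.hypot(flow_u, flow_v)
    idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    h = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return h / (h.sum() + 1e-8)
```

The L1 normalisation makes the descriptor comparable across frames with different amounts of motion, which matters when it is concatenated with 3D-SIFT before the SVM.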
Building Detection from VHR Remote Sensing Imagery Based on the Morphological Building Index
by Ma, Yuanxu; You, Yongfa; Chen, Guangsheng
in building detection, built-up areas extraction, local feature points
2018
Automatic detection of buildings from very high resolution (VHR) satellite images is a current research hotspot in remote sensing and computer vision. However, many irrelevant objects with spectral characteristics similar to buildings interfere heavily with building detection, making accurate detection a challenging task, especially for images captured in complex environments. It is therefore crucial to develop a method that can effectively eliminate these interferences and accurately detect buildings in complex image scenes. To this end, a new building detection method based on the morphological building index (MBI) is proposed in this study. First, local feature points are detected in the VHR remote sensing imagery and optimized by the saliency index proposed in this study. Second, a voting matrix is calculated from these optimized local feature points to extract built-up areas. Finally, buildings are detected within the extracted built-up areas using the MBI algorithm. Experiments confirm that our proposed method can effectively and accurately detect buildings in VHR remote sensing images captured in complex environments.
Journal Article
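The voting-matrix step in this abstract, turning scattered feature points into built-up-area evidence, could be sketched as below. The neighbourhood voting rule and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def vote_built_up(points, shape, radius=2):
    """Accumulate one vote in a square neighbourhood around each
    (row, col) feature point; dense clusters of points yield high votes."""
    votes = np.zeros(shape)
    for y, x in points:
        votes[max(0, y - radius):y + radius + 1,
              max(0, x - radius):x + radius + 1] += 1
    return votes
```

Thresholding the resulting matrix keeps only regions where feature points cluster, which is what separates built-up areas from isolated false responses.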