185 result(s) for "Hidden objects"
Terahertz video-based hidden object detection using YOLOv5m and mutation-enabled salp swarm algorithm for enhanced accuracy and faster recognition
In public spaces, security checks to detect objects concealed on the human body are crucial to global anti-terrorist measures. Terahertz imaging has recently played a pivotal role in concealed object detection, but previous studies have struggled to achieve high accuracy and performance. To address these issues, we propose a YOLOv5m model for detecting hidden objects beneath human clothing. We employ the CSPDarknet53 block to reduce noise and enhance discriminative power. Object location and size are identified using a PANet and the prediction head. To reduce computational complexity and obtain highly relevant features, we utilize multi-convolutional layers. Duplicate boxes are eliminated and high-quality bounding boxes are retained using the NMS block. Hyperparameter tuning is performed with the mutation-enabled salp swarm algorithm, improving detection accuracy and reducing processing time. Our proposed model achieves a precision of 98.99%, recall of 97.80%, F1 score of 98.05%, detection rate of 96.50%, and execution time of 135 s, outperforming existing approaches such as CNN, YOLOv3, AC-SDBSCAN, YOLOv2, RaadNet, and SPFAN. We train and test the proposed method on a terahertz video dataset, demonstrating excellent results with high precision.
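The NMS block this abstract mentions is the standard non-maximum suppression step used in YOLO-family detectors. As a minimal illustrative sketch (not the authors' implementation), greedy NMS keeps the highest-scoring box and discards overlapping duplicates:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box and
    drop every box that overlaps it above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two near-duplicate detections plus one distant box: the lower-scoring
# duplicate is suppressed.
kept = nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
           [0.9, 0.8, 0.7])
# → [0, 2]
```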
An optimal deep learning model for recognition of hidden hazardous weapons in terahertz and millimeter wave images
In video surveillance, real-time detection of objects hidden or concealed under human clothing is a challenging task. Very little research has addressed hidden object detection and recognition with coherent imaging technologies, namely millimeter wave (MMW), infrared (IR), and terahertz (THz) imaging systems. Hidden weapons such as axes, knives, guns, bombs, and pistols are a major threat to security surveillance, as they must be recognized within a few seconds irrespective of their size. This paper proposes an efficient hidden object recognition method that detects and recognizes weapons concealed on humans using a Modified Weighted You Only Look Once v5 (MWYOLOv5) model. The risk of inaccurate predictions due to low confidence scores is mitigated by the weighted boxes fusion (WBF) method: higher-confidence boxes contribute more to the fused box coordinates than lower-confidence boxes. It is also important to select optimal hyperparameter values when training the YOLOv5 model, which uses a CSPDarknet53-based feature extractor and Path Aggregation Network (PANet)-based feature aggregation, to predict concealed objects. To achieve this, a new crossover salp swarm algorithm (CSSA) is developed to tune YOLO hyperparameters such as learning rate, momentum, weight decay, and batch size. This yields higher accuracy in recognizing hidden objects in THz and MMW images compared with existing methods. The proposed hazardous weapon recognition model is trained and tested on both MMW and THz imagery datasets and shows good results with high mAP@.5 and mAP@.5:95.
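The core idea of weighted boxes fusion described above, that higher-confidence boxes pull the fused coordinates toward themselves, can be sketched as a confidence-weighted average over one cluster of overlapping boxes (a simplified illustration; the full WBF algorithm also clusters boxes across models):

```python
def fuse_boxes(boxes, scores):
    """Confidence-weighted average of box coordinates (x1, y1, x2, y2).

    Each coordinate of the fused box is the score-weighted mean of the
    corresponding coordinates, so a box with a higher score contributes
    more to the result than a low-confidence one.
    """
    total = sum(scores)
    fused = tuple(
        sum(s * b[k] for b, s in zip(boxes, scores)) / total
        for k in range(4)
    )
    return fused, sum(scores) / len(scores)

# The box scored 3.0 dominates the one scored 1.0, so the fused box
# lies three-quarters of the way toward it.
box, conf = fuse_boxes([(0, 0, 10, 10), (2, 2, 12, 12)], [3.0, 1.0])
# → box == (0.5, 0.5, 10.5, 10.5), conf == 2.0
```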
Bird detection based on object-background conversion in substation
Bird objects in substations are small and highly similar to the background, which leads to low segmentation accuracy. To address this problem, a bird detection method based on object-background conversion in substations, named OBCNet, is proposed. Specifically, a global information generator captures global information, expanding the receptive field while preserving original detail. A boundary map extractor combines top-level and bottom-level features to obtain boundary maps, enhancing top-level features while fully exploiting bottom-level information. A boundary conversion module then integrates multiple guidance cues by alternately focusing on the target boundaries and their surrounding background, amplifying their differences and progressively refining the segmentation results. Experimental results on a self-constructed dataset show that the proposed method improves the Intersection over Union metric by 2.43% over the compared camouflaged object detection algorithms, providing a reliable basis for repelling small birds in substations.
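The Intersection over Union metric cited in this and several of the following abstracts is, for segmentation, the ratio of overlapping to combined foreground pixels. A minimal sketch over flat binary masks:

```python
def iou_mask(pred, target):
    """Intersection over Union for binary segmentation masks.

    pred and target are flat sequences of 0/1 pixel labels; IoU is the
    overlap of the two foregrounds divided by their union. Two empty
    masks are treated as a perfect match.
    """
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

# One pixel overlaps out of three foreground pixels in total.
score = iou_mask([1, 1, 0, 0], [1, 0, 1, 0])
# → 0.333...
```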
Context in object detection: a systematic literature review
Context is an important factor in computer vision as it offers valuable information to clarify and analyze visual data. Utilizing the contextual information inherent in an image or a video can improve the precision and effectiveness of object detectors. For example, where recognizing an isolated object might be challenging, context information can improve comprehension of the scene. This study explores the impact of various context-based approaches to object detection. Initially, we investigate the role of context in object detection and survey it from several perspectives. We then review and discuss the most recent context-based object detection approaches and compare them. Finally, we conclude by addressing research questions and identifying gaps for further studies. More than 265 publications are included in this survey, covering different aspects of context in different categories of object detection, including general object detection, video object detection, small object detection, camouflaged object detection, zero-shot, one-shot, and few-shot object detection. This literature review presents a comprehensive overview of the latest advancements in context-based object detection, providing valuable contributions such as a thorough understanding of contextual information and effective methods for integrating various context types into object detection, thus benefiting researchers.
Where’s Wanda? The influence of visual imagery vividness on visual search speed measured by means of hidden object pictures
Previous research demonstrated effects of visual imagery on search speed in visual search paradigms. However, these effects were rather small, questioning their ecological validity. Thus, our present study aimed to generalize these effects to more naturalistic material (i.e., a paradigm that allows for top-down strategies in highly complex visual search displays that include overlapping stimuli while simultaneously avoiding possibly confounding search instructions). One hundred and four participants with aphantasia (= absence of voluntary mental imagery) and 104 gender and age-matched controls were asked to find hidden objects in several hidden object pictures with search times recorded. Results showed that people with aphantasia were significantly slower than controls, even when controlling for age and general processing speed. Thus, effects of visual imagery might be strong enough to influence the perception of our real-life surroundings, probably because of the involvement of visual imagery in several top-down strategies.
Depth alignment interaction network for camouflaged object detection
Many animals actively change their characteristics, such as color and texture, through camouflage, a natural defense mechanism that makes them difficult to detect in the natural environment; this makes the task of camouflaged object detection extremely challenging. Biological research shows that animal eyes have three-dimensional perception, and the resulting depth information can provide useful positioning clues for finding camouflaged objects. However, almost no current camouflaged object detection studies combine depth maps with RGB images, so combining depth maps with traditional unimodal RGB images is of great research significance for improving detection accuracy. In this paper, we propose a depth alignment interaction network for camouflaged object detection in which the depth maps are generated by existing monocular depth estimation networks. Because the quality of the generated depth maps varies, we propose a depth alignment index to evaluate their quality; it dynamically assigns each depth map's weight in the fusion process according to its alignment with the RGB image. To fully exploit the fused features, we design an expanded pyramid interaction module that first expands the receptive field of the features at each layer, then lets higher-level features interact with lower-level features through step-by-step connections to further refine the predicted camouflaged area. Extensive experiments on four camouflaged object detection datasets demonstrate the effectiveness of our solution.
SF-CODnet: Spatial-Frequency Framework with Weak Sample Learning Strategy for Detecting Camouflaged Wildlife Objects
The problem of camouflaged object detection (COD) is to identify objects hidden in their environment, a problem that remained unsolved for many years until the advent of deep learning methods. The style of a camouflaged object's invisibility plays a significant role and is difficult to capture in the spatial domain alone, while the structural properties of camouflage patterns are highly discriminative for distinguishing camouflaged objects from the background. The spatial-frequency model SF-CODnet achieves accurate structural segmentation, realizing not only semantic but also instance segmentation of wildlife camouflage objects. Unlike other popular learning strategies that improve samples as much as possible before training, our weak sample synthesis learning strategy helps generalize the base model's COD ability. This augmentation strategy and the proposed SF-CODnet model were tested on three publicly available datasets, CAMO, COD10K, and NC4K, with good results, outperforming some existing COD models.
Hidden Dangerous Object Recognition in Terahertz Images Using Deep Learning Methods
As a harmless detection method, terahertz imaging has become a new trend in security screening. However, the images collected by terahertz equipment are of inherently low quality, and detection accuracy for dangerous goods is insufficient. This work introduces a BiFPN at the neck of the YOLOv5 deep learning model as a mechanism to cope with low resolution. We also perform transfer learning, fine-tuning the pre-trained weights of the backbone in our model. Experimental results show that mAP@0.5 and mAP@0.5:0.95 increase by 0.2% and 1.7%, respectively, attesting to the superiority of the proposed model over YOLOv5, the state-of-the-art model in object detection.
Research on Camouflage Target Detection Method Based on Edge Guidance and Multi-Scale Feature Fusion
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves Fβ by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in Fβ. On CAMO and CHAMELEON, it obtains 4.5-point increases in Fβ. These consistent gains substantiate the superiority and robustness of EGMFNet.
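The Fβ score reported in this and other COD abstracts is the weighted harmonic mean of precision and recall, with β controlling how much recall counts relative to precision. A small sketch of the formula (the metric itself, not any particular benchmark implementation):

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: (1 + b^2) * P * R / (b^2 * P + R).

    beta > 1 weights recall more heavily, beta < 1 weights precision;
    beta = 1 gives the ordinary F1 score. COD benchmarks commonly use a
    weighted variant of this measure.
    """
    b2 = beta * beta
    denom = b2 * precision + recall
    return (1 + b2) * precision * recall / denom if denom else 0.0

# With equal precision and recall, F-beta equals that common value
# regardless of beta.
score = f_beta(0.5, 0.5, beta=0.3)
# → 0.5
```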
Hidden Object Detection and Recognition in Passive Terahertz and Mid-wavelength Infrared
The study compares the detection and recognition of concealed objects covered with various types of clothing using passive imagers operating in the terahertz (THz) range at 1.2 mm (250 GHz) and the mid-wavelength infrared (MWIR) at 3–6 μm (50–100 THz). For this study, a large dataset of images of various items covered with various types of clothing was collected. The detection and classification algorithms were designed to operate robustly at high processing speed across the two spectral ranges. The properties of both spectra, theoretical limitations, imager performance, and the physical properties of fabrics in both spectral domains are described. The paper presents a comparison of two deep learning-based processing methods and of the original results of various experiments in the two spectra.