2,813 result(s) for "moving objects"
Background subtraction for moving object detection: explorations of recent developments and challenges
Background subtraction, although a very well-established field, has required significant research effort to tackle unsolved challenges and to accelerate progress toward a generalized moving object detection framework for real-time applications. The performance of subsequent steps in higher-level video analysis tasks depends entirely on the performance of background subtraction. Recent years have witnessed remarkable performance from deep neural networks for background subtraction. Deep learning has paved the way for improving background subtraction to counter the major challenges in this area, and the fusion of multiple features has also improved conventional background subtraction methods. In this context, we provide a comprehensive review of conventional as well as recent developments in background subtraction to analyze the successes and current challenges in this field. First, this paper gives an overview of the background subtraction process along with the challenges and the benchmark video datasets released for evaluation purposes. Then, we briefly summarize background subtraction methods and report a comparison of the most promising state-of-the-art algorithms. Moreover, we comprehensively investigate some of the recent methods in order to find out how they achieved their reported performances. Finally, we conclude with the shortcomings of current developments and outline promising research directions for background subtraction.
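As context for the family of conventional methods this review covers, a per-pixel running-Gaussian model is about the simplest background subtractor. The sketch below is illustrative only; the learning rate and deviation threshold are arbitrary choices, not parameters from any surveyed method:

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model: a classic
    background-subtraction baseline (illustrative sketch, not any
    specific published algorithm)."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha = alpha      # learning rate for mean/variance updates
        self.k = k              # foreground threshold, in std deviations
        self.mean = None
        self.var = None

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.mean is None:                 # first frame bootstraps the model
            self.mean = frame.copy()
            self.var = np.full_like(frame, 25.0)
            return np.zeros(frame.shape, dtype=bool)
        diff = frame - self.mean
        mask = diff ** 2 > (self.k ** 2) * self.var   # foreground pixels
        bg = ~mask                            # adapt only where background is seen
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return mask
```

After the model has settled on a static scene, any pixel deviating by more than k standard deviations is flagged as moving.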
Real-Time Moving Object Detection in High-Resolution Video Sensing
This paper addresses real-time moving object detection with high accuracy in high-resolution video frames. A previously developed framework for moving object detection is modified to enable real-time processing of high-resolution images. First, a computationally efficient method is employed, which detects moving regions on a resized image while mapping their coordinates back to the original image. Second, a light backbone deep neural network is utilized in place of a more complex one. Third, the focal loss function is employed to alleviate the imbalance between positive and negative samples. The results of the extensive experiments conducted indicate that the modified framework developed in this paper achieves a processing rate of 21 frames per second with 86.15% accuracy on the dataset SimitMovingDataset, which contains high-resolution images of size 1920 × 1080.
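The focal loss and coordinate-mapping steps described above can be sketched in a few lines. Both functions are illustrative assumptions about how such a framework might be wired, not the authors' implementation; the focal-loss form follows the standard binary formulation:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples so the rare
    positive class dominates the gradient. p = predicted
    probabilities, y = binary labels. Sketch of the loss used to
    counter positive/negative sample imbalance."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)           # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)    # class-balancing weight
    return -np.mean(w * (1 - pt) ** gamma * np.log(pt))

def map_box_to_original(box, scale_x, scale_y):
    """Map a detection box from the resized frame back to original
    image coordinates (hypothetical helper)."""
    x1, y1, x2, y2 = box
    return (x1 * scale_x, y1 * scale_y, x2 * scale_x, y2 * scale_y)
```

A confidently correct prediction incurs a much smaller focal loss than a confidently wrong one, which is the point of the (1 - pt)^gamma modulating factor.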
A New RGB-D SLAM Method with Moving Object Detection for Dynamic Indoor Scenes
Simultaneous localization and mapping (SLAM) methods based on an RGB-D camera have been studied and used in robot navigation and perception. So far, most such SLAM methods have been applied to static environments. However, these methods are incapable of avoiding the drift errors caused by moving objects such as pedestrians, which limits their practical performance in real-world applications. In this paper, a new RGB-D SLAM with moving object detection for dynamic indoor scenes is proposed. The proposed detection method for moving objects is based on mathematical models and geometric constraints, and it can be incorporated into the SLAM process as a data filtering step. To verify the proposed method, we conducted extensive experiments on the public TUM RGB-D dataset and a sequence image dataset from our Kinect V1 camera, both acquired in common dynamic indoor scenes. The detailed experimental results of our improved RGB-D SLAM are summarized and demonstrate its effectiveness in dynamic indoor scenes.
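The geometric-constraint filtering idea reads as a reprojection-consistency check: points that do not move as the estimated camera motion predicts are treated as dynamic. A minimal sketch, assuming known camera motion (R, t) and already-matched 3-D points; the threshold is an arbitrary illustration, not a value from the paper:

```python
import numpy as np

def flag_dynamic_points(pts_prev, pts_curr, R, t, thresh=0.05):
    """Geometric-constraint check in the spirit of the paper's data
    filtering: 3-D points from the previous frame are transformed by
    the estimated camera motion (R, t); points whose predicted and
    observed positions disagree beyond `thresh` are flagged as moving
    and would be excluded from SLAM. Illustrative sketch only."""
    predicted = pts_prev @ R.T + t            # where static points should land
    residual = np.linalg.norm(predicted - pts_curr, axis=1)
    return residual > thresh                  # True = likely dynamic point
```

Static scene points yield near-zero residuals, while independently moving points such as pedestrians stand out.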
Artificial compound eye: a survey of the state-of-the-art
An artificial compound eye system is a bionic counterpart of the natural compound eye, with a much wider field of view, better capacity to detect moving objects, and higher sensitivity to light intensity than ordinary single-aperture eyes. In recent years, renewed attention has been paid to artificial compound eyes because they inherit characteristics from insect compound eyes that ordinary optical imaging systems lack. This paper provides a comprehensive survey of the state-of-the-art work on artificial compound eyes. The review proceeds from natural compound eyes to artificial compound eyes, covering their system design, theoretical development, and applications. The survey of artificial compound eyes is organized around two main types: planar and curved artificial compound eyes. Finally, the most promising future research developments are highlighted.
A Method to Detect and Track Moving Airplanes from a Satellite Video
In recent years, satellites capable of capturing videos have been developed and launched to provide high-definition satellite videos that enable applications far beyond the capabilities of remotely sensed imagery. Moving object detection and moving object tracking are among the most essential and challenging tasks, but existing studies have mainly focused on vehicles. To accurately detect and then track more complex moving objects, specifically airplanes, we need to address the challenges posed by the new data. First, slow-moving airplanes may cause the foreground aperture problem during detection. Second, various disturbances, especially parallax motion, may cause false detections. Third, airplanes may perform complex motions, which requires a rotation-invariant and scale-invariant tracking algorithm. To tackle these difficulties, we first develop an Improved Gaussian-based Background Subtractor (IPGBBS) algorithm for moving airplane detection. This algorithm adopts a novel strategy for background and foreground adaptation, which can effectively deal with the foreground aperture problem. Then, the detected moving airplanes are tracked by a Primary Scale Invariant Feature Transform (P-SIFT) keypoint matching algorithm. The P-SIFT keypoint of an airplane exhibits high distinctiveness and repeatability. More importantly, it provides a highly rotation-invariant and scale-invariant feature vector that can be used in the matching process to determine the new locations of the airplane in the frame sequence. The method was tested on a satellite video with eight moving airplanes. Compared with state-of-the-art algorithms, our IPGBBS algorithm achieved the best detection accuracy with the highest F1 score of 0.94 and also demonstrated its superiority in parallax motion suppression. The P-SIFT keypoint matching algorithm successfully tracked seven out of the eight airplanes. Based on the tracking results, movement trajectories of the airplanes and their dynamic properties were also estimated.
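Keypoint matching of the kind used for tracking here typically relies on nearest-neighbour descriptor search with a ratio test. A generic sketch follows; the P-SIFT descriptor itself is not reproduced, and the ratio value is a conventional default, not taken from the paper:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe-style ratio
    test, the kind of step used to locate a tracked keypoint in the
    next frame. desc_a, desc_b: (n, d) descriptor arrays. Returns a
    list of (index_in_a, index_in_b) matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        order = np.argsort(dist)
        # accept only if the best match clearly beats the second best
        if len(order) > 1 and dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test discards ambiguous matches, which is what makes a highly distinctive, repeatable keypoint (as claimed for P-SIFT) valuable.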
Research on Moving Object Detection Based On Probability Statistics Adaptive Background Model of VR Tech
The detection of moving objects has high practical value in real life, but current detection methods still have problems and shortcomings that need further optimization and improvement. On this basis, this paper first analyses common moving object detection methods, then studies a probability-statistics adaptive background threshold selection method based on VR, and finally gives a construction strategy for a background model based on pixel statistics and probability.
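A plain reading of a "probability statistics adaptive background model" is a per-pixel mean and standard deviation fitted over a stack of frames, with the threshold adapting to each pixel's variability. This sketch is a guess at that idea, with illustrative parameters, not the paper's method:

```python
import numpy as np

def fit_background(frames):
    """Fit per-pixel statistics over a list of frames: returns the
    per-pixel mean and standard deviation (illustrative sketch)."""
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0), stack.std(axis=0)

def detect_moving(frame, mean, std, k=2.5):
    """Adaptive per-pixel threshold: a pixel is foreground if it
    deviates from its mean by more than k standard deviations
    (floored at 1.0 so flat regions still get a usable threshold)."""
    return np.abs(frame - mean) > k * np.maximum(std, 1.0)
```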
Sparse-Gated RGB-Event Fusion for Small Object Detection in the Wild
Detecting small moving objects under challenging lighting conditions, such as overexposure and underexposure, remains a critical challenge in computer vision applications including surveillance, autonomous driving, and anti-UAV systems. Traditional RGB-based detectors often suffer from degraded object visibility and highly dynamic illumination, leading to suboptimal performance. To address these limitations, we propose a novel RGB-Event fusion framework that leverages the complementary strengths of RGB and event modalities for enhanced small object detection. Specifically, we introduce a Temporal Multi-Scale Attention Fusion (TMAF) module to encode motion cues from event streams at multiple temporal scales, thereby enhancing the saliency of small object features. Furthermore, we design a Sparse Noisy Gated Attention Fusion (SNGAF) module, inspired by the mixture-of-experts paradigm, which employs a sparse gating mechanism to adaptively combine multiple fusion experts based on input characteristics, enabling flexible and robust RGB-Event feature integration. Additionally, we present RGBE-UAV, which is a new RGB-Event dataset tailored for small moving object detection under diverse exposure conditions. Extensive experiments on our RGBE-UAV and public DSEC-MOD datasets demonstrate that our method outperforms existing state-of-the-art RGB-Event fusion approaches, validating its effectiveness and generalization under complex lighting conditions.
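The sparse gating the SNGAF module draws on follows the noisy top-k mixture-of-experts pattern: a gate scores each fusion expert, optional noise encourages load balancing, and only the top-k experts are mixed. A minimal sketch with an assumed linear gate and assumed shapes; none of these details come from the paper:

```python
import numpy as np

def sparse_gate_fusion(features, expert_outputs, W, k=2, noise_std=0.0, rng=None):
    """Noisy top-k gating over fusion experts. features: (d,) gate
    input; W: (d, n_experts) gate weights (hypothetical); only the
    k highest-scoring experts contribute to the output."""
    logits = features @ W
    if noise_std > 0:                          # optional load-balancing noise
        logits = logits + (rng or np.random.default_rng()).normal(
            0.0, noise_std, logits.shape)
    topk = np.argsort(logits)[-k:]             # indices of the k best experts
    masked = np.full_like(logits, -np.inf)     # non-selected experts get -inf
    masked[topk] = logits[topk]
    w = np.exp(masked - masked[topk].max())
    w = w / w.sum()                            # softmax over surviving experts
    return sum(w[i] * expert_outputs[i] for i in topk)
```

Because non-selected experts receive zero weight, only k expert branches need to be evaluated at inference time, which is the efficiency argument behind sparse gating.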
MOD-IR: moving objects detection from UAV-captured video sequences based on image registration
Moving object detection from a freely moving camera, such as one mounted on an Unmanned Aerial Vehicle (UAV), stands as an important and challenging problem. This paper introduces a new MOD-IR method for moving object detection from UAV-captured video sequences. The proposed method consists of four steps: (1) feature extraction and matching, (2) frame registration, (3) moving object detection, and (4) moving object detection post-processing. Our method stands out from those in the literature in a number of ways. First, we enhanced effectiveness and robustness by handling the constraints of this field through extracting robust features, on the one hand, and automatically defining the optimum threshold, on the other. Second, we proposed an efficient method able to deal with real-time applications by extracting keypoint features instead of pixel-to-pixel model estimation, and by conducting the search for matching features over multiple trees. Finally, we run quick-shift segmentation in parallel with the first three steps in order to enhance and accelerate the moving object detection task. Relying on quantitative and qualitative evaluations of the proposed method on a variety of sequences extracted from several datasets (such as DARPA VIVID-EgTest05, Hopkins 155, UCF Aerial Action, etc.), we assessed the performance of our method against state-of-the-art reference methods. Furthermore, the time-cost evaluation enabled us to show that our MOD-IR method is the optimal choice for real-time applications, owing to its lower computational time requirement compared to the reference methods.
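Steps (1)-(3) amount to registering consecutive frames against the camera motion and differencing them. The sketch below substitutes a pure translation for the paper's registration model to stay short; the matched keypoints and threshold are illustrative assumptions:

```python
import numpy as np

def register_and_diff(prev, curr, matches_prev, matches_curr, thresh=30):
    """Camera-motion compensation in the spirit of MOD-IR: a global
    translation is estimated from matched keypoints (the paper uses a
    full registration model; a translation keeps this sketch short),
    the previous frame is warped onto the current one, and the
    residual difference exposes independently moving objects.
    matches_*: (n, 2) arrays of (row, col) keypoint coordinates."""
    shift = np.round(np.mean(matches_curr - matches_prev, axis=0)).astype(int)
    warped = np.roll(prev, shift, axis=(0, 1))   # align prev with curr
    return np.abs(curr.astype(int) - warped.astype(int)) > thresh
```

After alignment, background pixels cancel out and only objects with independent motion survive the threshold.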
Identification of Trajectory Anomalies on Video Surveillance Systems
Recently, CCTV surveillance applications have developed remarkably for public welfare. However, the investigation of different techniques for online implementation remains significantly restricted. Numerous implementations have been proposed for detecting irregularities of moving objects in video. Fuzzy-based trajectory anomaly detection is among the most robust detection procedures. In this paper, the authors propose a fuzzy-implemented trajectory anomaly detection technique based on parameters such as velocity, path deviation, and size of the moving objects. The critical aspect of the framework is a compact set of highly descriptive features extracted from a novel cell structure that helps define support regions in a coarse-to-fine fashion. This paper also gives a brief outline of different detection techniques. The authors also present the results of experiments on the Queen Mary University of London (QMUL) junction dataset.
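The velocity, path-deviation, and size cues can be combined in a toy fuzzy fashion: each cue is mapped to a [0, 1] membership degree and the degrees are combined with a max (fuzzy OR). The normalising constants below are illustrative assumptions, not values from the paper:

```python
def anomaly_score(velocity, path_deviation, size_change,
                  v_norm=5.0, d_norm=10.0, s_norm=0.3):
    """Toy fuzzy-style trajectory scoring from the three cues the
    paper names. Each cue yields a membership degree of 'abnormal'
    (0 = normal, 1 = fully abnormal), capped at 1; the final score
    is their fuzzy OR (max). All constants are hypothetical."""
    mu = lambda x, n: min(abs(x) / n, 1.0)   # ramp membership function
    return max(mu(velocity, v_norm),
               mu(path_deviation, d_norm),
               mu(size_change, s_norm))
```

A trajectory is flagged when any single cue is strongly abnormal, which matches the OR-style aggregation common in fuzzy rule bases.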
A hybrid framework combining background subtraction and deep neural networks for rapid person detection
Currently, the number of surveillance cameras is rapidly increasing in response to security concerns, but constructing an intelligent detection system is not easy because it requires high computing performance. This study aims to construct a real-world video surveillance system that can effectively detect moving persons using limited resources. To this end, we propose a simple framework to detect and recognize moving objects in outdoor CCTV video footage by combining background subtraction and Convolutional Neural Networks (CNNs). A background subtraction algorithm is first applied to each video frame to find the regions of interest (ROIs). A CNN classification is then carried out to classify the obtained ROIs into one of the predefined classes. Our approach greatly reduces computational complexity in comparison to other object detection algorithms. For the experiments, new datasets were constructed by filming alleys and playgrounds, places where crimes are likely to occur. Different image sizes and experimental settings were tested to construct the best classifier for detecting people. The best classification accuracy was 0.85 for a test set from the same camera as the training set, and 0.82 for different cameras.
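The ROI-extraction stage between background subtraction and the CNN can be sketched as connected-components labelling of the foreground mask. A simple BFS flood fill stands in for a library routine here, and the CNN classification step is omitted; the minimum-area filter is an illustrative assumption:

```python
from collections import deque
import numpy as np

def extract_rois(mask, min_area=25):
    """ROI extraction for the hybrid pipeline: bounding boxes of
    connected foreground blobs, which would then be cropped and fed
    to the CNN classifier. Returns (y1, x1, y2, x2) boxes for blobs
    of at least `min_area` pixels (smaller blobs treated as noise)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    rois = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        q, pix = deque([(sy, sx)]), []        # BFS over 4-connected pixels
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(pix) >= min_area:
            ys, xs = zip(*pix)
            rois.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return rois
```

Classifying only these few crops, instead of scanning the whole frame with a detector, is what gives the hybrid pipeline its speed advantage.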