Catalogue Search | MBRL
12,055 result(s) for "Optical flow"
Video Enhancement with Task-Oriented Flow
2019
Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation is, however, intractable, and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
Journal Article
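The frame registration that TOFlow builds on can be illustrated with a plain backward-warping step: given a dense flow field, each target pixel is sampled from the reference frame at the position the flow points to. A minimal NumPy sketch; the function name and the bilinear-sampling details are illustrative, not from the paper:

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp `frame` using a dense flow field of shape (H, W, 2).

    flow[y, x] = (dx, dy) points from a target pixel to its source in
    `frame`; bilinear interpolation, border pixels clamped.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    src_x = np.clip(xs + flow[..., 0], 0, w - 1)
    src_y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(src_x).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(src_y).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    wx = src_x - x0; wy = src_y - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A frame shifted left by one pixel is recovered by a constant flow of (1, 0).
frame = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0
warped = warp_with_flow(frame, flow)
```

Registering a neighboring frame this way is what lets enhancement networks aggregate information across time; TOFlow's point is that the flow feeding this step can be learned per task rather than estimated generically.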
Learning to Reconstruct HDR Images from Events, with Applications to Depth and Flow Prediction
2021
Event cameras have numerous advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range (HDR). We initially investigate the potential of creating intensity images/videos from an adjustable portion of the event data stream via event-based conditional generative adversarial networks (cGANs). Using the proposed framework, we further show the versatility of our method in directly handling similar supervised tasks, such as optical flow and depth prediction. Stacks of space-time coordinates of events are used as the inputs while the proposed framework is trained to predict either the intensity images, optical flows, or depth outputs according to the target task. We further demonstrate the unique capability of our approach in generating HDR images even under extreme illumination conditions, creating non-blurred images under rapid motion, and generating very high frame rate videos up to the temporal resolution of event cameras. The proposed framework is evaluated using a publicly available real-world dataset and a synthetic dataset we prepared by utilizing an event camera simulator.
Journal Article
GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical Flow Learning
2024
Existing homography and optical flow methods are erroneous in challenging scenes, such as fog, rain, night, and snow, because basic assumptions such as brightness and gradient constancy are broken. To address this issue, we present an unsupervised learning approach that fuses gyroscope data into homography and optical flow learning. Specifically, we first convert gyroscope readings into motion fields named gyro fields. Second, we design a self-guided fusion module (SGF) to fuse the background motion extracted from the gyro field with the optical flow and guide the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) to combine the gyro field and intermediate results of SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms the state-of-the-art methods in both regular and challenging scenes. The code and dataset are available at https://github.com/lhaippp/GyroFlowPlus.
Journal Article
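The "gyro field" idea rests on a standard fact: a pure camera rotation induces a pixel motion field x' ~ K R K⁻¹ x that can be computed from gyroscope rates alone, with no image content. A minimal NumPy sketch under a small-angle rotation approximation; the function name, identity intrinsics in the demo, and the first-order rotation matrix are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def gyro_field(omega, dt, K, h, w):
    """Pixel motion field induced by a pure camera rotation.

    Builds a small-angle rotation R from gyro rates `omega` integrated
    over `dt`, maps each pixel as x' ~ K R K^-1 x in homogeneous
    coordinates, and returns the displacement field x' - x, shape (h, w, 2).
    """
    wx, wy, wz = (o * dt for o in omega)
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])          # I + [omega*dt]_x (first order)
    H = K @ R @ np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pts = np.stack([xs, ys, np.ones_like(xs)])   # homogeneous pixel grid (3, h, w)
    proj = np.tensordot(H, pts, axes=1)
    proj = proj[:2] / proj[2]
    return np.stack([proj[0] - xs, proj[1] - ys], axis=-1)

# Pure roll about the optical axis with identity intrinsics:
# the field is tangential, (-theta*y, theta*x) to first order.
ys_grid, xs_grid = np.mgrid[0:4, 0:4].astype(float)
field = gyro_field((0.0, 0.0, 0.01), 1.0, np.eye(3), 4, 4)
```

Such a field captures only camera-induced background motion, which is exactly why the paper fuses it with image-based flow to recover the independently moving foreground.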
Joint Self-supervised Depth and Optical Flow Estimation towards Dynamic Objects
2023
Significant attention has been attracted to deep learning-based depth estimation. Dynamic objects pose the hardest problem in inter-frame-supervised depth estimation due to the uncertainty in adjacent frames. Thus, integrating optical flow information with depth estimation is a feasible solution, as optical flow is an essential motion representation. In this work, we construct a joint inter-frame-supervised depth and optical flow estimation framework, which predicts depths in various motions by minimizing pixel warp errors in bilateral photometric re-projections and optical vectors. For motion segmentation, we adaptively segment the preliminary estimated optical flow map with large areas of connectivity. In self-supervised depth estimation, different motion regions are predicted independently and then composited into a complete depth map. Further, the pose and depth estimations re-synthesize the optical flow maps, serving to compute reconstruction errors with the preliminary predictions. Our proposed joint depth and optical flow estimation outperforms existing depth estimators on the KITTI Depth dataset, both with and without Cityscapes pretraining. Additionally, our optical flow results demonstrate competitive performance on the KITTI Flow 2015 dataset.
Journal Article
Optical Tracking Velocimetry (OTV): Leveraging Optical Flow and Trajectory-Based Filtering for Surface Streamflow Observations
by Piscopia, Rodolfo; Tosi, Fabio; Grimaldi, Salvatore
in Accelerated tests; Algorithms; Artificial intelligence
2018
Nonintrusive image-based methods have the potential to advance hydrological streamflow observations by providing spatially distributed data at high temporal resolution. Due to their simplicity, correlation-based approaches have until recently been preferred to alternative image-based approaches, such as optical flow, for camera-based surface flow velocity estimation. In this work, we introduce a novel optical flow scheme, optical tracking velocimetry (OTV), that entails automated feature detection, tracking through the differential sparse Lucas-Kanade algorithm, and then a posteriori filtering to retain only realistic trajectories that pertain to the transit of actual objects in the field of view. The method requires minimal input on the flow direction and camera orientation. Tested on two image data sets collected in diverse natural conditions, the approach proved suitable for rapid and accurate surface flow velocity estimation. Five different feature detectors were compared, and features from accelerated segment test (FAST) resulted in the best balance between the number of features identified and successfully tracked as well as computational efficiency. OTV was relatively insensitive to reduced image resolution but was impacted by acquisition frequencies lower than 7–8 Hz. Compared to traditional correlation-based techniques, OTV was less affected by noise and surface seeding. In addition, the scheme is foreseen to be applicable to real-time gauge-cam implementations.
Journal Article
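The differential Lucas-Kanade step at the heart of OTV-style sparse trackers reduces to a small least-squares problem: spatial and temporal gradients in a window around a feature constrain the displacement through Ix·dx + Iy·dy = -It. A minimal NumPy sketch; the window size and solver choice are illustrative, not the paper's implementation:

```python
import numpy as np

def lucas_kanade_step(img0, img1, y, x, win=2):
    """One differential Lucas-Kanade update for the feature at (y, x).

    Solves Ix*dx + Iy*dy = -It in the least-squares sense over a
    (2*win+1) x (2*win+1) window; returns the estimated (dx, dy).
    """
    p0 = img0[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    p1 = img1[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    Iy, Ix = np.gradient(p0)   # spatial gradients (row axis, column axis)
    It = p1 - p0               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d

# A horizontal intensity ramp shifted right by half a pixel: expect dx = 0.5.
img0 = np.tile(np.arange(9.0), (9, 1))
img1 = img0 - 0.5
dx, dy = lucas_kanade_step(img0, img1, 4, 4)
```

OTV's trajectory-based filtering then operates downstream of steps like this one, discarding tracks whose motion is inconsistent with the transit of real floating objects.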
Optical flow for video super-resolution: a survey
2022
Video super-resolution is currently one of the most active research topics in computer vision, as it plays an important role in many visual applications. Generally, video super-resolution contains a significant component, i.e., motion compensation, which is used to estimate the displacement between successive video frames for temporal alignment. Optical flow, which can supply dense and sub-pixel motion between consecutive frames, is among the most common ways to perform this task. To obtain a good understanding of the role that optical flow plays in video super-resolution, in this work we conduct a comprehensive review on this subject for the first time. This investigation covers the following major topics: the function of super-resolution (i.e., why we require super-resolution); the concept of video super-resolution (i.e., what video super-resolution is); the description of evaluation metrics (i.e., how (video) super-resolution performs); the introduction of optical flow based video super-resolution; and the investigation of using optical flow to capture temporal dependency for video super-resolution. Prominently, we give an in-depth study of deep learning based video super-resolution methods, where some representative algorithms are analyzed and compared. Additionally, we highlight some promising research directions and open issues that should be further addressed.
Journal Article
Deep Learning‐Based Optical Flow in Fine‐Scale Deformation Mapping of Sea Ice Dynamics
2025
Optical methods deployed for studying motion and deformation of objects often struggle to distinguish small displacements hidden behind observational noise. In geophysical applications, this has limited analysis to lower spatial and temporal resolutions, while reliable extraction of high‐resolution data is required for understanding material deformation and failure. In this work, we propose a novel method for determining deformation for noisy observational data using deep learning‐based optical flow. To enable higher estimate accuracy, we introduce a novel initialization technique considering contextual information. This allows an unprecedentedly high‐resolution description of motion in radar imagery. We use the proposed technique on verification cases to compare with the currently used methodologies and on ship radar observations of sea ice deformation. The outcome of our work is an open‐source end‐to‐end tool for determining full‐field Lagrangian deformation fields for data sets with small pixel displacements and high observational noise.

Plain Language Summary: Estimating motion and deformation from radar imagery is a common task in the geophysical sciences. High‐resolution descriptions of material dynamics are required for accurate analytical solutions and models. Current methods for determining deformation from radar data have relied on traditional optical methods, resulting in lower resolutions and decreased accuracy. We develop a deep learning‐based tool to provide a highly accurate full‐field description of deformation in radar data. We further introduce a method to increase the tool's accuracy with small displacements and intensified noise. We verify the accuracy of the tool against the current methods used for sea ice radar imagery as well as against state‐of‐the‐art deep learning methods. To highlight the abilities of the novel tool, we apply it to ship radar imagery for description of sea ice deformation.
Key Points: A deep learning‐based tool is developed for determining motion and deformation from radar imagery in geophysical applications. A novel temporal multiresolution tree is introduced to enhance accuracy in optical flow applications. The method outperforms previously used approaches and is tested with high‐resolution ship radar observations on sea ice.
Journal Article
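The deformation fields discussed here are derived from the estimated motion by differentiating it: divergence and total shear are the standard strain-rate invariants reported for sea ice drift. A minimal NumPy sketch assuming unit grid spacing and unit time step; the function name and exact invariant definitions are standard but chosen here for illustration, not taken from the paper:

```python
import numpy as np

def deformation_invariants(flow):
    """Divergence and total shear of a dense drift field of shape (H, W, 2).

    Uses finite differences of the flow components, assuming unit grid
    spacing and unit time step.
    """
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)   # axis 0 varies y, axis 1 varies x
    dv_dy, dv_dx = np.gradient(v)
    divergence = du_dx + dv_dy
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    return divergence, shear

# Uniform expansion (u = x, v = y) has divergence 2 and no shear.
ys, xs = np.mgrid[0:5, 0:5].astype(float)
flow = np.stack([xs, ys], axis=-1)
divergence, shear = deformation_invariants(flow)
```

Because these invariants are spatial derivatives of the flow, small errors in the estimated motion amplify in the deformation fields, which is why the paper emphasizes flow accuracy at small displacements under noise.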
Learning deep facial expression features from image and optical flow sequences using 3D CNN
by Zhang, Jian; Mao, Xia; Zhao, Jianfeng
in Artificial Intelligence; Artificial neural networks; Computer Graphics
2018
Facial expression is highly correlated with facial motion. According to whether the temporal information of facial motion is used or not, facial expression features can be classified as static and dynamic features. The former, which mainly include the geometric features and appearance features, can be extracted by convolution or other learning filters; the latter, which aim to model the dynamic properties of facial motion, can be calculated through optical flow or other methods. When 3D convolutional neural networks (CNNs) are introduced, the extraction of the two different types of features mentioned above becomes easy. In this paper, one 3D CNN architecture is presented to learn the static and dynamic features from facial image sequences and extract high-level dynamic features from optical flow sequences. Two types of dense optical flow, which contain the tracking information of facial muscle movement, are calculated according to different image pair construction methods. One is the common optical flow, and the other is an enhanced optical flow which is called accumulative optical flow. Four components of each type of optical flow are used in experiments. Three databases, two acted databases and one nearly realistic database, are selected to conduct the experiments. The experiments on the two acted databases achieve state-of-the-art accuracy, and indicate that the vertical component of optical flow has an advantage over other components in recognizing facial expression. The experimental results on the three selected databases show that more discriminative features can be learned from image sequences than from optical flow or accumulative optical flow sequences, and that accumulative optical flow contains more motion information than optical flow if the frame distance of the image pairs used to calculate them is not too large.
Journal Article
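The two flow variants contrasted in the abstract differ only in which frames are paired before the flow is computed. A minimal sketch of the pair construction, assuming "common" flow pairs consecutive frames and "accumulative" flow pairs every frame with the first one; this reading of "accumulative" is an assumption, not the paper's stated definition:

```python
def flow_pairs(n_frames, accumulative=False):
    """Frame-index pairs fed to the optical flow computation."""
    if accumulative:
        # Every frame paired with the first: motion accumulates over time.
        return [(0, t) for t in range(1, n_frames)]
    # Consecutive frames: the common optical flow construction.
    return [(t, t + 1) for t in range(n_frames - 1)]
```

Under this reading, accumulative pairs span a growing frame distance, which matches the abstract's caveat that the scheme helps only while that distance stays small.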
Study on video key frame extraction in different scenes based on optical flow
2023
Key frame extraction is an important component of video analysis, and has gradually become a research hotspot in the computer vision community in recent years. Early key frame extraction algorithms were mostly based on fixed time intervals, and their accuracy could not meet practical application requirements. Thanks to the rapid development of machine learning technology, key frame extraction algorithms based on image quality, motion analysis, and deep learning are gradually becoming mature. Although the above methods significantly improve the accuracy of key frame extraction, few works pay attention to the extraction effect in different scenarios. In this article, we evaluate the optical-flow-based key frame extraction algorithm in detail across different video scenarios. Extensive experimental results demonstrate the robustness of the optical-flow-based key frame extraction algorithm.
Journal Article
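Motion-based key frame selection of the kind studied here can be reduced to thresholding per-pair flow magnitudes. A minimal NumPy sketch, assuming dense flow fields are already computed and using a mean-plus-k-sigma threshold; the selection rule is illustrative, not the surveyed algorithm's exact criterion:

```python
import numpy as np

def select_key_frames(flows, k=1.0):
    """Pick key frames from dense flow fields of shape (T, H, W, 2).

    A frame qualifies when the mean flow magnitude of its pair exceeds
    the sequence mean by k standard deviations; returns indices of the
    later frame in each qualifying pair.
    """
    mags = np.linalg.norm(flows, axis=-1).mean(axis=(1, 2))  # motion per pair
    thresh = mags.mean() + k * mags.std()
    return (np.nonzero(mags > thresh)[0] + 1).tolist()

# Five frame pairs, one with a burst of motion between frames 2 and 3.
flows = np.zeros((5, 4, 4, 2))
flows[2, ..., 0] = 5.0
keys = select_key_frames(flows)
```

A statistic-relative threshold like this adapts to scenes with different overall motion levels, which is the scenario sensitivity the article investigates.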
Aerial Images-Based Forest Fire Detection for Firefighting Using Optical Remote Sensing Techniques and Unmanned Aerial Vehicles
2017
Due to their fast response capability, low cost, and lack of risk to personnel safety since there is no human pilot on board, unmanned aerial vehicles (UAVs) with vision-based systems have great potential for monitoring and detecting forest fires. This paper proposes a novel forest fire detection method using both color and motion features for processing images captured from the camera mounted on a UAV which is moving during the whole mission period. First, a color-based fire detection algorithm with light computational demand is designed to extract fire-colored pixels as fire candidate regions by making use of the chromatic features of fire, obtaining fire candidate regions for further analysis. As the pose variations and low-frequency vibrations of the UAV cause all objects and the background in the images to move, it is challenging to identify fires using a single motion-based method. Two types of optical flow algorithms, a classical optical flow algorithm and an optimal mass transport optical flow algorithm, are then combined to compute motion vectors of the fire candidate regions. Fires are thereby expected to be distinguished from other fire analogues based on their motion features. Several groups of experiments are conducted to validate that the proposed method can effectively extract and track fire pixels in aerial video sequences. The good performance is anticipated to significantly improve the accuracy of forest fire detection and reduce false alarm rates without substantially increasing computational effort.
Journal Article
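The color-plus-motion gating described above can be sketched by intersecting a chromatic mask with a flow-magnitude mask. A minimal NumPy sketch, using a classic red-dominance rule in place of the paper's color model and a single magnitude threshold in place of its two combined flow algorithms; all thresholds are illustrative assumptions:

```python
import numpy as np

def fire_candidates(rgb, flow, r_min=180, motion_min=1.0):
    """Pixels that are both fire-colored and moving.

    Combines a red-dominant, bright chromatic rule (R >= r_min and
    R > G > B) with a flow-magnitude threshold; returns a boolean mask.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    color = (r >= r_min) & (r > g) & (g > b)          # chromatic fire heuristic
    motion = np.linalg.norm(flow, axis=-1) > motion_min
    return color & motion

# One flame-colored, moving pixel among static dark pixels.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 120, 30)
flow = np.zeros((2, 2, 2))
flow[0, 0, 0] = 2.0
mask = fire_candidates(rgb, flow)
```

Intersecting the two cues is what suppresses fire analogues: sun glints pass the color test but not the motion test, while swaying vegetation passes motion but not color.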