Catalogue Search | MBRL
Explore the vast range of titles available.
3,598 result(s) for "Optical flow (image analysis)"
Image registration for accurate electrode deformation analysis in operando microscopy of battery materials
by Sun, Tianxiao; Peng, Robert; Li, Wenlong
in battery degradation, chemomechanical coupling, Data acquisition
2025
Operando imaging techniques have become increasingly valuable in both battery research and manufacturing. However, the reliability of these methods can be compromised by instabilities in the imaging setup and operando cells, particularly when utilizing high‐resolution imaging systems. The acquired imaging data often include features arising from both undesirable system vibrations and drift, as well as the scientifically relevant deformations occurring in the battery sample during cell operation. For meaningful analysis, it is crucial to distinguish and separately evaluate these two factors. To address these challenges, we employ a suite of advanced image‐processing techniques. These include fast Fourier transform analysis in the frequency domain, power spectrum‐based assessments for image quality, as well as rigid and non‐rigid image‐registration methods. These techniques allow us to identify and exclude blurred images, correct for displacements caused by motor vibrations and sample holder drift and, thus, prevent unwanted image artifacts from affecting subsequent analyses and interpretations. Additionally, we apply optical flow analysis to track the dynamic deformation of battery electrode materials during electrochemical cycling. This enables us to observe and quantify the evolving mechanical responses of the electrodes, offering deeper insights into battery degradation. Together, these methods ensure more accurate image analysis and enhance our understanding of the chemomechanical interplay in battery performance and longevity.
We applied advanced image‐processing techniques, including fast Fourier transform analysis, image registration and optical flow, to mitigate artifacts caused by system instabilities and accurately track battery electrode deformations during operation. This approach improves the reliability of high‐resolution operando imaging, providing deeper insights into battery degradation and enhancing our understanding of chemomechanical interactions in battery performance.
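The processing chain the abstract describes (quality gating, rigid drift correction, then optical flow on the residual motion) maps onto standard OpenCV primitives. Below is a minimal sketch under that reading, with illustrative parameters and a Laplacian-variance blur gate standing in for the authors' power-spectrum assessment; it is not the paper's code:

```python
import cv2
import numpy as np

def drift_corrected_flow(ref, frame, blur_thresh=100.0):
    """ref, frame: single-channel uint8 operando images of equal size."""
    # Simple sharpness gate (stand-in for a power-spectrum quality check):
    # low Laplacian variance suggests a blurred frame worth excluding.
    if cv2.Laplacian(frame, cv2.CV_64F).var() < blur_thresh:
        return None  # reject blurred acquisition

    # Rigid drift estimate via sub-pixel phase correlation (FFT domain).
    (dx, dy), _ = cv2.phaseCorrelate(ref.astype(np.float32),
                                     frame.astype(np.float32))

    # Undo the rigid drift so only sample deformation remains.
    h, w = frame.shape
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    aligned = cv2.warpAffine(frame, M, (w, h))

    # Dense optical flow on the drift-corrected pair tracks electrode
    # deformation; flow[y, x] = (u, v) displacement in pixels.
    return cv2.calcOpticalFlowFarneback(
        ref, aligned, None, pyr_scale=0.5, levels=3, winsize=31,
        iterations=5, poly_n=7, poly_sigma=1.5, flags=0)
```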
Journal Article
A vision-based approach for fall detection using multiple cameras and convolutional neural networks: A case study using the UP-Fall detection dataset
by Espinosa, Ricardo; Gutiérrez, Sebastián; Brieva, Jorge
in Artificial intelligence, Artificial neural networks, Cameras
2019
The automatic recognition of human falls is currently an important topic of research for the computer vision and artificial intelligence communities. In image analysis, it is common to use a vision-based approach for fall detection and classification systems due to the recent exponential increase in the use of cameras. Moreover, deep learning techniques have revolutionized vision-based approaches. These techniques are considered robust and reliable solutions for detection and classification problems, mostly using convolutional neural networks (CNNs). Recently, our research group released a public multimodal dataset for fall detection called the UP-Fall Detection dataset, and studies of individual modality approaches for fall detection and classification are still needed. Focusing only on a vision-based approach, in this paper we present a fall detection system based on a 2D CNN inference method and multiple cameras. This approach analyzes images in fixed time windows and extracts features using an optical flow method that obtains information on the relative motion between two consecutive images. We tested this approach on our public dataset, and the results showed that our proposed multi-camera vision-based approach detects human falls with an accuracy of 95.64%, competitive with state-of-the-art methods, while using a simple CNN architecture.
• A human fall detection system based on multiple cameras and a CNN is proposed.
• This fall detection system achieves an accuracy of 95.64% using only two cameras.
• This fall detection system competes with state-of-the-art methods using images.
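The feature-extraction step described above (dense optical flow over a fixed time window, fed to a 2D CNN) can be sketched as follows; the window handling, flow parameters, and channel-stacking layout are assumptions for illustration, not the released implementation:

```python
import cv2
import numpy as np

def flow_window_features(frames):
    """Stack per-pixel (u, v) optical flow between consecutive grayscale
    frames of one fixed time window into an (H, W, 2*(N-1)) CNN input."""
    maps = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        maps.append(flow)  # relative motion between the two images
    return np.concatenate(maps, axis=2)
```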
Journal Article
On Biases in Displacement Estimation for Image Registration, with a Focus on Photomechanics
by Sur, Frédéric; Blaysat, Benoît; Grédiac, Michel
in Applications of Mathematics, Bias, Computer Science
2021
Image registration under small displacements is the keystone of several image analysis tasks such as optical flow estimation, stereoscopic imaging, or full-field displacement estimation in photomechanics. A popular approach consists in locally modeling the displacement field between two images by a parametric transformation and performing least-squares estimation afterward. This procedure is known as “digital image correlation” in several domains as in photomechanics. The present article is part of this approach. First, the estimated displacement is shown to be impaired by biases related to the interpolation scheme needed to reach subpixel accuracy, the image gradient distribution, as well as the difference between the hypothesized parametric transformation and the true displacement. A quantitative estimation of the difference between the estimated value and the actual one is of importance in application domains such as stereoscopy or photomechanics, which have metrological concerns. Second, we question the extent to which these biases could be eliminated or reduced. We also present numerical assessments of our predictive formula in the context of photomechanics. Software codes are freely available to reproduce our results. Although this paper is focused on a particular application field, namely photomechanics, it is relevant to various scientific areas concerned by image registration.
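For readers unfamiliar with the estimation procedure under discussion, this is the least-squares core in its simplest form: a pure translation model, linearized around zero displacement. It is a generic sketch, not the article's code; iterating it to sub-pixel accuracy requires interpolating the reference image, which is exactly one of the bias sources the article quantifies:

```python
import numpy as np

def lsq_translation(f, g):
    """One Gauss-Newton step for min_u || g(x) - f(x + u) ||^2 over a
    patch, linearized at u = 0 (f, g: float arrays of the same shape)."""
    gy, gx = np.gradient(f)                    # reference image gradients
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = (g - f).ravel()
    u, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves A^T A u = A^T b
    return u  # (ux, uy) in pixels, valid for small displacements
```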
Journal Article
Registration of Large Optical and SAR Images with Non-Flat Terrain by Investigating Reliable Sparse Correspondences
by Ni, Weiping; Kuang, Gangyao; Zhang, Han
in Ablation, Affine transformations, Comparative analysis
2023
Optical and SAR image registration is the primary procedure for exploiting the complementary information of the two image modalities. Although extensive research has been conducted to narrow the vast radiometric and geometric gaps and extract homogeneous characteristics for feature point matching, few works have considered the registration issue for non-flat terrain, which brings additional difficulties not only for sparse feature point matching but also for outlier removal and geometric relationship estimation. This article addresses these issues with a novel and effective optical-SAR image registration framework. Firstly, sparse feature points are detected based on the phase congruency moment map of the textureless SAR image (SAR-PC-Moment), which helps to identify salient local regions. Then a template matching process using very large local image patches is conducted, which increases the matching accuracy by a significant margin. Secondly, a mutual verification-based initial outlier removal method is proposed, which takes advantage of the different mechanisms of sparse and dense matching and requires no geometric consistency assumption within the inliers. These two procedures produce a putative correspondence feature point (CP) set with a low outlier ratio and high reliability. In the third step, the putative CPs are used to segment the large input image of non-flat terrain into dozens of locally flat areas using a recursive random sample consensus (RANSAC) method, with each locally flat area co-registered using an affine transformation. For the mountainous areas with sharp elevation variations, anchor CPs are first identified, and then optical flow-based pixelwise dense matching is conducted. In the experimental section, ablation studies using four precisely co-registered optical-SAR image pairs of flat terrain quantitatively verify the effectiveness of the proposed SAR-PC-Moment feature point detector, the large-template matching strategy, and the mutual verification-based outlier removal method. Registration results on four 1 m-resolution non-flat image pairs show that the proposed framework produces robust and accurate registration results.
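The per-area geometric step (each locally flat region co-registered by its own affine transform, estimated robustly from putative correspondences) looks roughly like this in OpenCV; a minimal sketch assuming Nx2 float32 point arrays, not the authors' recursive implementation:

```python
import cv2
import numpy as np

def local_affine(pts_opt, pts_sar, reproj_px=3.0):
    """Robust affine estimate for one locally flat area from putative
    optical/SAR correspondences; returns 2x3 matrix and inlier mask."""
    M, inliers = cv2.estimateAffine2D(
        pts_opt, pts_sar, method=cv2.RANSAC,
        ransacReprojThreshold=reproj_px, maxIters=2000, confidence=0.99)
    # A low inlier ratio signals non-flat terrain: split the area and
    # recurse, as in the paper's recursive RANSAC segmentation.
    return M, inliers.ravel().astype(bool)
```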
Journal Article
Performance Testing of Optical Flow Time Series Analyses Based on a Fast, High-Alpine Landslide
by Krautblatter, Michael; Gaeta, Michele; Hermle, Doris
in Acceleration, Algorithms, Alpine environments
2022
Accurate remote analyses of high-alpine landslides are a key requirement for future alpine safety. In critical stages of alpine landslide evolution, UAS (unmanned aerial system) data can be employed with image registration to derive ground motion at high temporal and spatial resolution. However, classical area-based algorithms suffer from dynamic surface alterations, and their limited velocity range restricts detection, resulting in noise from decorrelation and hindering their application to fast landslides. Here, to reduce these limitations, we apply optical flow time series analysis to landslides for the first time, analyzing one of the fastest and most critical debris-flow source zones in Austria. The benchmark site Sattelkar (2130–2730 m asl), a steep, high-alpine cirque in Austria, is highly sensitive to rainfall and melt-water events, which led to a 70,000 m³ debris slide event after two days of heavy precipitation in summer 2014. We use a UAS data set of five acquisitions (2018–2020) over a temporal range of three years with 0.16 m spatial resolution. Our new methodology is to employ optical flow for landslide monitoring, which, along with phase correlation, is incorporated into the software IRIS. For performance testing, we compared the two algorithms by applying them to the UAS image stacks to calculate time series displacement curves and ground motion maps. These maps allow the exact identification of compartments of the complex landslide body and reveal distinct displacement patterns, with displacement curves reflecting an increased acceleration. Visually traceable boulders in the UAS orthophotos provide independent validation of the methodology. Here, we demonstrate that UAS optical flow time series analysis yields better signal extraction, and thus less noise and a wider observable velocity range, highlighting its applicability to a fast, accelerating high-alpine landslide.
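IRIS itself is not public in source form, but the phase-correlation branch of such a time-series analysis reduces to repeated sub-pixel displacement estimates between consecutive epochs. A generic sketch under that reading (fixed image patch, float32 grayscale orthophotos), not the software's implementation:

```python
import cv2
import numpy as np

def patch_displacement_series(epochs, y0, y1, x0, x1):
    """Cumulative (x, y) displacement of one orthophoto patch across a
    UAS time series, via sub-pixel phase correlation between epochs."""
    pos = np.zeros(2)
    track = [pos.copy()]
    for a, b in zip(epochs, epochs[1:]):
        (dx, dy), _ = cv2.phaseCorrelate(a[y0:y1, x0:x1],
                                         b[y0:y1, x0:x1])
        pos += (dx, dy)
        track.append(pos.copy())
    return np.array(track)  # one displacement curve per tracked patch
```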
Journal Article
Development of an Image Analysis Method for Pepperpot Emittance Monitors
by Morita, Y; Nagatomo, T; Nakashima, Y
in Emittance, Image analysis, Optical flow (image analysis)
2024
At the RIKEN Nishina Center for Accelerator-Based Science, we developed a pepperpot emittance monitor using a method that can change the distance between the pepperpot mask and the screen. Accuracy can be improved by identifying the beam position correctly at close range and then increasing the distance; however, if the distance is increased too far, position matching becomes difficult. To solve this problem, we developed a method for tracking continuous changes on the screen using optical flow. With this method, the distance between the pepperpot mask and the screen could be extended without plotting points in the wrong region of phase space, and the emittance measurement accuracy was improved by more than 10%. This development will enable beam transport simulations to be performed more accurately.
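The tracking idea, following each beamlet spot continuously as the mask-screen distance grows, corresponds to sparse optical flow. A minimal sketch with pyramidal Lucas-Kanade tracking; image ordering, seed detection, and parameters are assumptions, not the RIKEN implementation:

```python
import cv2
import numpy as np

def track_beamlets(screens, seeds):
    """screens: uint8 images at increasing mask-screen distance;
    seeds: (N, 1, 2) float32 beamlet centers found at close range."""
    pts, history = seeds.copy(), [seeds.reshape(-1, 2)]
    for prev, curr in zip(screens, screens[1:]):
        pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev, curr, pts, None, winSize=(21, 21), maxLevel=3)
        history.append(pts.reshape(-1, 2))
    # Each beamlet's position change over the distance increase gives its
    # divergence, hence the phase-space (emittance) plot.
    return history
```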
Journal Article
LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
2024
With the development of simultaneous localization and mapping (SLAM) technology in the field of automated driving, current SLAM schemes are no longer limited to a single sensor and are developing in the direction of multi-sensor fusion to enhance robustness and accuracy. In this study, a localization and mapping scheme named LVI-Fusion, based on multi-sensor fusion of camera, lidar, and IMU, is proposed. Different sensors have different data acquisition frequencies; to solve the problem of time inconsistency in the tight coupling of heterogeneous sensor data, a time alignment module is used to align the timestamps of the lidar, camera, and IMU. An image segmentation algorithm is used to segment dynamic targets in the image and extract static key points. At the same time, optical flow tracking based on the static key points is carried out, and a robust feature point depth recovery model is proposed to realize robust estimation of feature point depth. Finally, the lidar constraint factor, the IMU pre-integration constraint factor, and the visual constraint factor together construct the error equation, which is processed by a sliding-window-based optimization module. Experimental results show that the proposed algorithm has competitive accuracy and robustness.
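The visual front end described, masking out segmented dynamic objects and tracking only static key points with optical flow, can be sketched with OpenCV; the segmentation model, depth recovery, and sliding-window backend are omitted, and all names are illustrative:

```python
import cv2
import numpy as np

def track_static_keypoints(prev_img, curr_img, dynamic_mask):
    """dynamic_mask: uint8, 255 where the segmenter found moving objects."""
    static_mask = cv2.bitwise_not(dynamic_mask)   # keep static scene only
    pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=300,
                                  qualityLevel=0.01, minDistance=10,
                                  mask=static_mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, pts, None)
    ok = status.ravel() == 1
    return pts[ok], nxt[ok]  # static correspondences for the visual factor
```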
Journal Article
Decomposition of Submesoscale Ocean Wave and Current Derived from UAV-Based Observation
by Jeong, Youchul; Lee, Jong-Seok; Kim, Sin-Young
in aerial imagery, Airborne observation, Algorithms
2024
Consecutive submesoscale sea surface processes observed by an unmanned aerial vehicle (UAV) were decomposed into spatial wave and current features. For the image decomposition, the Fast and Adaptive Multidimensional Empirical Mode Decomposition (FA-MEMD) method was employed to separate the multicomponent signals identified in sea surface optical images into modulated signals characterized by their amplitudes and frequencies. These signals, referred to as Bidimensional Intrinsic Mode Functions (BIMFs), represent the inherent two-dimensional oscillatory patterns within the sea surface optical data. The BIMFs, separated into seven modes and a residual component, were subsequently reconstructed based on their physical frequencies. A two-dimensional Fast Fourier Transform (2D FFT) of each high-frequency mode was used for surface wave analysis to illustrate the wave characteristics. Wavenumbers (Kx, Ky) ranging between 0.01 and 0.1 rad m⁻¹ and wave directions predominantly toward the northeast were identified from the spectral peak ranges. The Optical Flow (OF) algorithm was applied to the remaining consecutive low-frequency modes, treated as the current signal below 0.1 Hz, for surface current analysis and to estimate a current field with 1 m spatial resolution. The accuracy of the currents over the whole region was validated against in situ drifter measurements, showing an R-squared (R²) value of 0.80 and an average root-mean-square error (RMSE) of 0.03 m s⁻¹. This study proposes a novel framework for analyzing individual sea surface dynamical processes from high-resolution UAV imagery using a multidimensional signal decomposition method specialized for nonlinear and nonstationary data analysis.
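The surface-wave step, a 2D FFT of each high-frequency mode to read off dominant wavenumbers, is straightforward to reproduce in outline. A sketch assuming a single mode image and square pixels; the FA-MEMD decomposition itself requires specialized code and is omitted:

```python
import numpy as np

def wavenumber_spectrum(mode, pixel_m=1.0):
    """2D FFT power spectrum of one high-frequency BIMF mode image;
    returns wavenumber axes (rad/m) and the shifted spectrum."""
    h, w = mode.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(mode - mode.mean()))) ** 2
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(w, d=pixel_m))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(h, d=pixel_m))
    return kx, ky, spec  # spectral peaks give wave direction and length
```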
Journal Article
Feature Scalar Field Grid-Guided Optical-Flow Image Matching for Multi-View Images of Asteroid
2023
Images captured by deep space probes exhibit large-scale variations, irregular overlap, and remarkable differences in field of view. These issues present considerable challenges for the registration of multi-view asteroid sensor images. To obtain accurate, dense, and reliable matching of homonymous points in asteroid images, this paper proposes a new method combining scale-invariant feature matching with displacement-scalar-field-guided optical-flow tracking. The method first uses scale-invariant feature matching to obtain the geometric correspondence between two images. Scalar fields of the coordinate differences in the x and y directions are then constructed from this correspondence, interim images are generated using the scalar field grid, and finally optical-flow tracking is performed on these interim images. Additionally, to ensure the reliability of the matching results, the paper introduces three methods for eliminating mismatched points: bidirectional optical-flow tracking, vector field consensus, and epipolar geometry constraints. Experimental results demonstrate that the proposed method achieves a 98% matching correctness rate and a root-mean-square error of 0.25 pixels. By combining the advantages of feature matching and optical-flow methods, the approach yields precise and dense matching of homonymous points, and it is robust and widely applicable to asteroid images with cross-scale differences, large displacements, and large rotation angles.
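Of the three mismatch-elimination methods listed, the bidirectional optical-flow check is the simplest to illustrate: track forward, track back, and keep points that return to where they started. A generic sketch, not the paper's code; vector field consensus and epipolar constraints are omitted:

```python
import cv2
import numpy as np

def forward_backward_check(img1, img2, pts1, max_err=0.5):
    """pts1: (N, 1, 2) float32. Keep matches whose backward-tracked
    position returns within max_err pixels of the starting point."""
    fwd, st1, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(img2, img1, fwd, None)
    err = np.linalg.norm((pts1 - back).reshape(-1, 2), axis=1)
    ok = (st1.ravel() == 1) & (st2.ravel() == 1) & (err < max_err)
    return pts1[ok], fwd[ok]
```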
Journal Article
Video Enhancement with Task-Oriented Flow
2019
Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation, however, is intractable, and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as on our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
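The registration step such pipelines build on, warping one frame toward another with a dense flow field, is shown below as a minimal backward-warping sketch; TOFlow's contribution is to learn the flow jointly with the task network rather than fixing it, and this sketch is not the paper's model:

```python
import cv2
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp `frame` using a dense flow field, where
    flow[y, x] = (u, v) says where reference pixel (x, y) samples from."""
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```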
Journal Article