Catalogue Search | MBRL
Explore the vast range of titles available.
816 result(s) for "Event streams"
STCA-SNN: self-attention-based temporal-channel joint attention for spiking neural networks
2023
Spiking Neural Networks (SNNs) have shown great promise in processing spatio-temporal information compared to Artificial Neural Networks (ANNs). However, a performance gap remains between SNNs and ANNs, which impedes the practical application of SNNs. With their intrinsic event-triggered property and temporal dynamics, SNNs have the potential to extract spatio-temporal features from event streams effectively. To leverage the temporal potential of SNNs, we propose a self-attention-based temporal-channel joint attention SNN (STCA-SNN) with end-to-end training, which infers attention weights along the temporal and channel dimensions concurrently. It models global temporal and channel information correlations with self-attention, enabling the network to learn ‘what’ and ‘when’ to attend to simultaneously. Our experimental results show that STCA-SNNs achieve better performance on N-MNIST (99.67%), CIFAR10-DVS (81.6%), and N-Caltech 101 (80.88%) than state-of-the-art SNNs. Our ablation study further demonstrates that STCA-SNNs improve the accuracy of event stream classification tasks.
Journal Article
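The joint temporal-channel attention idea in the abstract above can be illustrated with a minimal NumPy sketch. This is an illustrative toy only, not the authors' STCA-SNN architecture: the (T, C) feature-map shape, single-head dot-product attention, and the averaging of the two attended maps are all assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_channel_attention(x):
    """x: (T, C) feature map. Apply dot-product self-attention once
    along the temporal axis ('when') and once along the channel axis
    ('what'), then average the two attended maps."""
    a_t = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (T, T) temporal weights
    a_c = softmax(x.T @ x / np.sqrt(x.shape[0]))  # (C, C) channel weights
    return 0.5 * (a_t @ x + (a_c @ x.T).T)
```

Each attention map is a row-stochastic matrix, so the output stays in the span of the input features and keeps the (T, C) shape.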
MSF: Multi-Level Spatiotemporal Filtering for Event Denoising via Motion Estimation
2026
Event cameras provide microsecond-level temporal resolution, low latency, and high dynamic range, enabling robust perception under fast motion and challenging lighting conditions. Nevertheless, event streams are susceptible to background activity, thermal noise, and hot pixels. Their sparse and irregular patterns can corrupt event structures and degrade downstream tasks. We propose MSF, a multi-level spatiotemporal filtering framework that couples motion-compensated aggregation with neighborhood-level verification. In each temporal window, MSF estimates a constant 2D optical flow by maximizing a robust, density-normalized contrast objective on the image of warped events (IWE). We further incorporate polarity–gradient decorrelation to suppress mixed-polarity noise and an explicit peak-suppression regularizer to avoid hot-pixel-induced degeneracy. The motion parameters are optimized via coarse grid initialization followed by gradient-ascent refinement. Based on the estimated motion, MSF performs hierarchical event selection: central events are extracted from high-confidence aggregated regions, local events are recovered through joint spatial–temporal–directional–polarity consistency, and weak border events are identified using a density-normalized probabilistic support model that rewards support from reliable structures while penalizing self-clustering. Experiments on four public benchmarks (DVSNOISE20, DVSMOTION20, DVSCLEAN, and E-MLB) show that MSF consistently improves the Event Structural Ratio (ESR) and outperforms representative baselines across diverse motion regimes and severe low-light noise.
Journal Article
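The motion-estimation step — maximizing a contrast objective on the image of warped events (IWE) — can be sketched in a few lines of NumPy. This toy version performs only the coarse grid search over constant 2D flows, without the paper's density normalization, polarity-gradient decorrelation, or peak-suppression terms; the function names and candidate-flow grid are assumptions.

```python
import numpy as np

def iwe_contrast(xs, ys, ts, flow, shape):
    """Warp events back to t=0 with a constant 2D flow and return the
    variance (contrast) of the resulting image of warped events."""
    wx = np.round(xs - flow[0] * ts).astype(int)
    wy = np.round(ys - flow[1] * ts).astype(int)
    ok = (wx >= 0) & (wx < shape[1]) & (wy >= 0) & (wy < shape[0])
    iwe = np.zeros(shape)
    np.add.at(iwe, (wy[ok], wx[ok]), 1.0)  # unbuffered accumulation
    return iwe.var()

def estimate_flow(xs, ys, ts, shape, candidates):
    """Coarse grid search: pick the flow that maximizes IWE contrast."""
    return max(candidates, key=lambda f: iwe_contrast(xs, ys, ts, f, shape))
```

The correct flow collapses each edge's events onto a few pixels, producing a sharp, high-variance IWE; wrong flows smear the counts and lower the contrast.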
Event stream-based process discovery using abstract representations
by Wil M P van der Aalst; van Dongen, Boudewijn F; van Zelst, Sebastiaan J
in Business; Data mining; Information systems
2018
The aim of process discovery, originating from the area of process mining, is to discover a process model based on business process execution data. Most process discovery techniques rely on an event log as input: a static source of historical data capturing the execution of a business process. In this paper, we focus on process discovery relying on online streams of business process execution events. Learning process models from event streams poses both challenges and opportunities: we need to handle unlimited amounts of data using finite memory and, preferably, constant time. We propose a generic architecture that allows several classes of existing process discovery techniques to be adopted in the context of event streams. Moreover, we provide several instantiations of the architecture, accompanied by implementations in the process mining toolkit ProM (http://promtools.org). Using these instantiations, we evaluate several dimensions of stream-based process discovery. The evaluation shows that the proposed architecture allows us to lift process discovery to the streaming domain.
Journal Article
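The finite-memory requirement described above can be illustrated with a toy streaming directly-follows abstraction — the kind of intermediate representation many discovery algorithms build on. This is a sketch under assumed names, not the ProM implementation: it keeps only the last activity per case in a bounded LRU map and counts directly-follows pairs as events arrive.

```python
from collections import Counter, OrderedDict

class StreamingDF:
    """Toy streaming directly-follows abstraction with finite memory:
    remember the last activity per case in a bounded LRU map and count
    (predecessor, successor) pairs as events stream in."""
    def __init__(self, max_cases=1000):
        self.last = OrderedDict()   # case id -> last activity seen
        self.df = Counter()         # (a, b) -> directly-follows frequency
        self.max_cases = max_cases

    def observe(self, case, activity):
        if case in self.last:
            self.df[(self.last.pop(case), activity)] += 1
        elif len(self.last) >= self.max_cases:
            self.last.popitem(last=False)  # evict least recently seen case
        self.last[case] = activity
```

Memory stays bounded by `max_cases` regardless of stream length; the cost is that evicted cases lose their pending predecessor, a typical approximation in stream-based discovery.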
Research and Implementation of Local Spatiotemporal Event Quantities Denoising Algorithm Based on Event-Based Vision Sensors
2026
Event-based vision sensors (EVSs), with the core advantages of low latency, high dynamic range, and low data volume, have become one of the research hotspots in the field of computer vision. However, the characteristic of detecting changes in light intensity makes EVSs particularly sensitive to noise, so the large number of noise events in the event stream significantly limits the practical application of EVSs. To address this critical issue, considering the types and characteristics of noise, this paper proposes an event stream denoising algorithm based on local spatiotemporal event quantities and implements it in hardware. To comprehensively evaluate the algorithm’s performance, two metrics based on the probability of real events, Noise Event Ratio (NER) and Event Noise Ratio (ENR), are used to quantify the denoising effect, while hardware resource overhead is assessed in terms of event processing latency and memory usage. Experimental results show that the proposed algorithm achieves an NER of 8.37% and an ENR of 25.10%. Compared to existing denoising algorithms, such as the DWF algorithm, the NER and ENR of this algorithm are reduced by 27.72% and 22.89%, respectively, demonstrating superior denoising performance. On the hardware side, the latency for processing a single event is approximately 110 ns, with a total resource usage of N² memory units. Although the hardware consumption is slightly higher, the algorithm exhibits significant advantages in denoising performance, providing effective support for the engineering application of EVSs.
Journal Article
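A minimal version of the local spatiotemporal-count idea — keep an event only when enough recent events support it in a small neighborhood — looks like this. It is an illustrative software sketch with assumed parameter names, not the paper's hardware algorithm or its NER/ENR evaluation.

```python
import numpy as np

def denoise(events, shape, dt=5000, k=2, r=1):
    """Keep an event only if at least k events occurred within its
    (2r+1)x(2r+1) neighborhood during the last dt time units.
    events: list of (t, x, y) with t non-decreasing."""
    last = np.full(shape, -np.inf)   # per-pixel timestamp memory
    kept = []
    for t, x, y in events:
        y0, y1 = max(0, y - r), min(shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(shape[1], x + r + 1)
        support = np.sum(t - last[y0:y1, x0:x1] <= dt)
        if support >= k:
            kept.append((t, x, y))
        last[y, x] = t               # update memory after the decision
    return kept
```

Real events generated by moving edges arrive in spatiotemporal clusters and accumulate support quickly, while isolated background-activity events fail the threshold and are dropped.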
Event Stream Denoising Method Based on Spatio-Temporal Density and Time Sequence Analysis
2024
An event camera is a neuromimetic sensor inspired by the imaging principle of the human retina, offering high dynamic range, high temporal resolution, and low power consumption. Due to interference from hardware, software, and other factors, the event stream output by the event camera usually contains a large amount of noise, and traditional denoising algorithms cannot be applied to it. To better handle different kinds of noise and enhance the robustness of the denoising algorithm, we propose an event stream noise reduction and visualization algorithm based on the spatio-temporal distribution characteristics of effective events and noise. The event stream enters fine filtering after BA noise is filtered out based on spatio-temporal density. The fine filtering performs time sequence analysis on each event pixel and its neighboring pixels to filter out hot noise. The proposed visualization algorithm adaptively overlaps the events of the previous frame according to the event density difference to obtain clear and coherent event frames. We conducted denoising and visualization experiments on real scenes and public datasets, and the experiments show that our algorithm is effective in filtering noise and obtaining clear and coherent event frames under different event stream densities and noise backgrounds.
Journal Article
Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras
by Rakhimzhanova, Tomiris; Kenzhebalin, Daulet; Bissarinova, Ulzhan
in Access control; Algorithms; Annotations
2024
The use of event-based cameras in computer vision is a growing research direction. However, despite the existing research on face detection using event cameras, a substantial gap persists in the availability of a large dataset with annotations for faces and facial landmarks on event streams, hampering the development of applications in this direction. In this work, we address this issue by publishing the first large and varied dataset (Faces in Event Streams), with a duration of 689 min, for face and facial landmark detection in direct event-based camera outputs. In addition, this article presents 12 models trained on our dataset to predict bounding-box and facial-landmark coordinates with an mAP50 score of more than 90%. We also demonstrated real-time detection with an event-based camera using our models.
Journal Article
Adaptive Slicing Method of the Spatiotemporal Event Stream Obtained from a Dynamic Vision Sensor
2022
The dynamic vision sensor (DVS) asynchronously measures brightness changes per pixel and outputs an asynchronous, discrete stream of spatiotemporal event information that encodes the time, location, and sign of the brightness changes. Compared with the sensors of traditional cameras, the DVS has outstanding properties: very high dynamic range, high temporal resolution, low power consumption, and no motion blur. Hence, dynamic vision sensors have considerable potential for computer vision in scenarios that are challenging for traditional cameras. However, the spatiotemporal event stream is difficult to visualize and is incompatible with existing image processing algorithms. To solve this problem, this paper proposes a new adaptive slicing method for the spatiotemporal event stream. The resulting slices of the spatiotemporal event stream contain complete object information with no motion blur. The slices can be processed either with event-based algorithms or by constructing slices into virtual frames and processing them with traditional image processing algorithms. We tested our slicing method on public data sets as well as our own. The difference between the object information entropy of a slice and the ideal object information entropy is less than 1%.
Journal Article
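The adaptive-slicing idea can be caricatured with a duplicate-pixel heuristic: close a slice once incoming events mostly revisit pixels that are already active, i.e. the object outline is complete and further accumulation would only add blur. This is a crude stand-in for the paper's information-entropy criterion; all names and thresholds below are assumptions.

```python
def adaptive_slices(events, min_events=100, dup_ratio=0.5, window=50):
    """Cut a new slice once the last `window` events are mostly
    duplicates (revisits of already-active pixels), which signals
    that the slice already carries the complete object structure.
    events: time-ordered list of (t, x, y)."""
    slices, cur, seen, recent_dup = [], [], set(), []
    for t, x, y in events:
        dup = (x, y) in seen
        cur.append((t, x, y))
        seen.add((x, y))
        recent_dup.append(dup)
        if len(recent_dup) > window:
            recent_dup.pop(0)
        if len(cur) >= min_events and sum(recent_dup) / len(recent_dup) > dup_ratio:
            slices.append(cur)
            cur, seen, recent_dup = [], set(), []
    if cur:
        slices.append(cur)
    return slices
```

Because the cut is driven by the event content rather than a fixed time window, fast motion yields short slices and slow motion yields long ones, which is the behaviour an adaptive slicer is after.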
Runtime verification of real-time event streams under non-synchronized arrival
by
Scheffel Torben
,
Sánchez César
,
Schmitz, Malte
in
Algorithms
,
Real time
,
Run time (computers)
2020
We study the problem of online runtime verification of real-time event streams. Our monitors can observe concurrent systems with a shared clock, but where each component reports observations as signals that arrive at the monitor at different speeds and with different and varying latencies. We start from specifications in a fragment of the TeSSLa specification language, where streams (including inputs and final verdicts) are not restricted to Booleans but can be data from richer domains, including integers and reals with arithmetic operations and aggregations. Specifications can be used both for checking logical properties and for computing statistics and general numeric temporal metrics (and properties on these richer metrics). We present an online evaluation algorithm for the specification language and a concurrent implementation of the evaluation algorithm. The algorithm can tolerate and exploit the asynchronous arrival of events without synchronizing the inputs. Then, we introduce a theory of asynchronous transducers and give a formal proof of correctness: every possible run of the monitor implements the semantics. Finally, we report an empirical evaluation of a highly concurrent Erlang implementation of the monitoring algorithm.
Journal Article
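The non-synchronized-arrival problem can be illustrated with a tiny watermark computation: given per-stream time-ordered buffers that fill at different speeds, a monitor may only safely emit events up to the minimum timestamp every stream has reached, since later events could still be overtaken by an in-flight observation. This is a toy sketch of that progress rule, not TeSSLa's evaluation algorithm; the names are assumptions and every buffer is assumed non-empty.

```python
def watermark_merge(buffers):
    """buffers: one time-ordered [(timestamp, value), ...] list per
    input stream, all assumed non-empty. Returns the globally ordered
    prefix that is safe to emit, plus the current watermark (the
    minimum of the latest timestamps the streams have reached)."""
    wm = min(buf[-1][0] for buf in buffers)
    safe = [e for buf in buffers for e in buf if e[0] <= wm]
    return sorted(safe, key=lambda e: e[0]), wm
```

Events beyond the watermark stay buffered; as the slowest stream advances, the watermark moves forward and more of the merged stream becomes safe to process.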
Infrared Temporal Differential Perception for Space-Based Aerial Targets
2025
Space-based infrared (IR) detection, with wide coverage, all-time operation, and stealth, is crucial for aerial target surveillance. Under low signal-to-noise ratio (SNR) conditions, however, small target size, limited features, and strong clutter often lead to missed detections and false alarms, reducing stability and real-time performance. To overcome these limitations of energy-integration imaging in perceiving dim targets, this paper proposes a biomimetic vision-inspired Infrared Temporal Differential Detection (ITDD) method. The ITDD method generates sparse event streams by triggering on pixel-level radiation variations and establishes an irradiance-based sensitivity model with optimized threshold voltage, spectral bands, and optical aperture parameters. IR sequences are converted into differential event streams with inherent noise, upon which a lightweight multi-modal fusion detection network is developed. Simulation experiments demonstrate that ITDD reduces data volume by three orders of magnitude and improves the SNR by a factor of 4.21. On the SITP-QLEF dataset, the network achieves a detection rate of 99.31% and a false alarm rate of 1.97×10⁻⁵, confirming its effectiveness and application potential under complex backgrounds. As the current findings are based on simulated data, future work will focus on building an ITDD demonstration system to validate the approach with real-world IR measurements.
Journal Article