Search Results

176 results for "Neuromorphic vision"
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collected and labeled from existing sources. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversions of two popular image datasets (MNIST and Caltech101), which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future work can be compared. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
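To make the conversion idea concrete, here is a minimal sketch of generating DVS-style events by "saccading" over a static image, assuming a simple log-intensity threshold model. The function name, shift offsets, and contrast threshold are illustrative assumptions; the authors' actual pipeline used a physical pan-tilt sensor whose output is continuous in time rather than stepped.

```python
import numpy as np

def saccade_to_events(img, shifts=((1, 0), (0, 1), (-1, -1)), threshold=0.15):
    """Emit DVS-style events by 'saccading' over a static image.

    img: 2D float array in [0, 1]; shifts: per-saccade (dy, dx) offsets.
    Returns (t, y, x, polarity) tuples, one timestep per saccade. Toy
    model of the paper's pan-tilt recording: a real sensor responds to
    log-intensity changes continuously, not at discrete steps.
    """
    eps = 1e-3                       # avoid log(0) on black pixels
    ref = np.log(img + eps)          # per-pixel reference log intensity
    events = []
    for t, (dy, dx) in enumerate(shifts, start=1):
        view = np.roll(img, (dy, dx), axis=(0, 1))   # stand-in for sensor motion
        cur = np.log(view + eps)
        diff = cur - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        ref[ys, xs] = cur[ys, xs]    # reset only pixels that fired
    return events
```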
CIFAR10-DVS: An Event-Stream Dataset for Object Classification
Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since they must be recorded with neuromorphic cameras, and few such datasets are currently available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named "CIFAR10-DVS." The conversion was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions that move the camera, moving the image itself is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time, which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification.
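The quantization step described above can be illustrated with a toy per-pixel model: as the RCLS movement sweeps intensity past a pixel, an event fires each time the log intensity drifts a fixed contrast threshold away from its level at the previous event. The sketch below assumes this standard DVS pixel model; the function name, `theta`, and the sampling are illustrative, not the recording parameters used for CIFAR10-DVS.

```python
import numpy as np

def dvs_quantize(times, trace, theta=0.2):
    """Quantize one pixel's continuous intensity trace into DVS events.

    times, trace: 1D arrays of timestamps and positive intensities seen
    by the pixel as the image moves. An event (+1 or -1) fires whenever
    log intensity drifts more than `theta` from its level at the
    previous event. Toy model; `theta` is an assumed contrast threshold.
    """
    events = []
    ref = np.log(trace[0])
    for t, v in zip(times[1:], trace[1:]):
        level = np.log(v)
        while level - ref > theta:    # brightness rose past threshold: ON event
            ref += theta
            events.append((t, +1))
        while ref - level > theta:    # brightness fell past threshold: OFF event
            ref -= theta
            events.append((t, -1))
    return events
```

Feeding this a smooth periodic trace (as the closed-loop movement produces) yields the alternating ON/OFF bursts characteristic of DVS recordings.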
Progress of Materials and Devices for Neuromorphic Vision Sensors
Highlights: Neuromorphic vision sensors for near-sensor and in-sensor computing of visual information are implemented using optoelectronic synaptic circuits and single-device optoelectronic synapses, respectively. The latest developments in bio-inspired neuromorphic vision sensors can be summarized in three keywords: smaller, faster, and smarter. (1) Smaller: devices are becoming more compact by integrating previously separate components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Faster: owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
Hardware, Algorithms, and Applications of the Neuromorphic Vision Sensor: A Review
Event-based (neuromorphic) cameras depart from frame-based sensing by reporting asynchronous per-pixel brightness changes. This produces sparse, low-latency data streams with extreme temporal resolution but demands new processing paradigms. In this survey, we systematically examine neuromorphic vision along three main dimensions. First, we highlight the technological evolution and distinctive hardware features of neuromorphic cameras from their inception to recent models. Second, we review image-processing algorithms developed explicitly for event-based data, covering works on feature detection, tracking, optical flow, depth and pose estimation, and object recognition. These techniques, drawn from classical computer vision and modern data-driven approaches, illustrate the breadth of applications enabled by event-based cameras. Third, we present practical application case studies demonstrating how event cameras have been successfully used across various scenarios. Distinct from prior reviews, our survey provides a broader overview by uniquely integrating hardware developments, algorithmic progressions, and real-world applications into a structured, cohesive framework. This explicitly addresses the needs of researchers entering the field or those requiring a balanced synthesis of foundational and recent advancements, without overly specializing in niche areas. Finally, we analyze the challenges limiting widespread adoption, identify research gaps compared to standard imaging techniques, and outline promising directions for future developments.
Plasmonic Optoelectronic Memristor Enabling Fully Light‐Modulated Synaptic Plasticity for Neuromorphic Vision
Exploration of optoelectronic memristors that combine sensing and processing functions is required to promote the development of efficient neuromorphic vision. In this work, the authors develop a plasmonic optoelectronic memristor that relies on the effects of localized surface plasmon resonance (LSPR) and optical excitation in an Ag-TiO2 nanocomposite film. Fully light-induced synaptic plasticity (e.g., potentiation and depression) under visible and ultraviolet light stimulation is demonstrated, which enables the functional combination of visual sensing and low-level image pre-processing (including contrast enhancement and noise reduction) in a single device. Furthermore, light-gated and electrically-driven synaptic plasticity can be performed in the same device, in which the spike-timing-dependent plasticity (STDP) learning functions can be reversibly modulated by visible and ultraviolet light illumination. As a result, a high-level image processing function, i.e., image recognition, can also be performed in this memristor, whose recognition rate and accuracy are markedly enhanced by the image pre-processing and light-gated STDP enhancement. Experimental analysis shows that the memristive switching mechanism under optical stimulation can be attributed to the oxidation/reduction of Ag nanoparticles due to the effects of LSPR and optical excitation. The authors' work proposes a new type of plasmonic optoelectronic memristor with fully light-modulated capability that may promote the future development of efficient neuromorphic vision. In summary, a novel plasmonic optoelectronic memristor relying on the LSPR effect is demonstrated for the first time; both fully light-modulated and light-gated electrically-driven synaptic modulation can be implemented in a single device, and the combination of visual sensing, low-level processing (contrast enhancement and noise reduction), and high-level image processing (image recognition) promotes the development of efficient neuromorphic vision.
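As a rough illustration of the light-gated STDP described above, the sketch below applies a standard pair-based STDP rule with a multiplicative gate standing in for the optical modulation; the function name and all constants are assumptions, not measured device values.

```python
import numpy as np

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau=20e-3, light_gate=1.0):
    """Pair-based STDP weight update with a multiplicative optical gate.

    dt = t_post - t_pre in seconds: positive dt (pre before post) gives
    potentiation, negative dt gives depression, each decaying over `tau`.
    `light_gate` crudely stands in for the paper's light-modulated STDP
    amplitude (e.g., scaled differently under visible vs. ultraviolet
    illumination); the constants are illustrative, not device data.
    """
    if dt >= 0:
        return light_gate * a_plus * np.exp(-dt / tau)
    return -light_gate * a_minus * np.exp(dt / tau)
```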
Event Encryption for Neuromorphic Vision Sensors: Framework, Algorithm, and Evaluation
Our lives now benefit from various vision-based applications, such as video surveillance, human identification, and aided driving. Unauthorized access to vision-related data greatly threatens users' privacy, and many encryption schemes have been proposed to secure images and videos in those conventional scenarios. The neuromorphic vision sensor (NVS) is a new kind of bio-inspired sensor that generates a stream of impulse-like events rather than synchronized image frames, which reduces the sensor's latency and broadens its applications in surveillance and identification. However, the privacy issues related to NVS remain a significant challenge: for example, image reconstruction and human identification approaches may expose privacy-related information from NVS events. This work is the first to investigate the privacy of NVS. We first analyze possible security attacks on NVS, including grayscale image reconstruction and privacy-related classification. We then propose a dedicated encryption framework for NVS, which incorporates a 2D chaotic mapping to scramble the positions of events and flip their polarities. In addition, an updating score is designed to control the frequency of execution, which supports efficient encryption on different platforms. Finally, extensive experiments demonstrate that the proposed encryption framework can effectively protect NVS events against grayscale image reconstruction and human identification while achieving high efficiency on various platforms, including resource-constrained devices.
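The scramble-and-flip construction lends itself to a short sketch. The one below uses Arnold's cat map as the 2D chaotic mapping for event addresses and a logistic-map keystream for polarity flips; the paper does not specify these particular maps, and the key schedule, `iters`, and `seed` here are illustrative assumptions (the updating-score mechanism is omitted).

```python
def encrypt_events(events, n=128, iters=3, seed=0.7):
    """Scramble event addresses with a 2D chaotic map and flip polarities.

    events: iterable of (t, x, y, p) with x, y in [0, n) and p in {-1, +1}.
    Arnold's cat map permutes each (x, y) address; a keyed logistic
    sequence decides which polarities to flip. Illustrative sketch of the
    scramble-and-flip idea, not the paper's exact algorithm.
    """
    state, r = seed, 3.99            # logistic-map keystream parameters (the key)
    out = []
    for t, x, y, p in events:
        for _ in range(iters):       # cat map: (x, y) -> (x + y, x + 2y) mod n
            x, y = (x + y) % n, (x + 2 * y) % n
        state = r * state * (1 - state)      # chaotic keystream value in (0, 1)
        out.append((t, x, y, -p if state > 0.5 else p))
    return out
```

Because the cat map matrix has determinant 1 modulo n, the address scrambling is invertible, so an authorized receiver holding the key can recover the original events by running the inverse map and regenerating the keystream.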
Integrated In‐Memory Sensor and Computing of Artificial Vision Based on Full‐vdW Optoelectronic Ferroelectric Field‐Effect Transistor
The development and application of artificial intelligence have driven the pursuit of low-power, compact intelligent information-processing systems that integrate sensing, memory, and neuromorphic computing functions. 2D van der Waals (vdW) materials, which offer an abundant reservoir of candidates for function-driven arbitrary stacking and enable continued device downscaling, are an attractive platform for advancing artificial intelligence. In this study, full-2D SnS2/h-BN/CuInP2S6 (CIPS)-based ferroelectric field-effect transistors (Fe-FETs) are designed, and light-induced ferroelectric polarization reversal is utilized to achieve excellent memory properties and multi-functional sensing-memory-computing vision simulations. The device exhibits a high on/off current ratio of over 10^5, a long retention time (>10^4 s), stable cyclic endurance (>350 cycles), and 128 multilevel current states (7-bit). In addition, fundamental synaptic plasticity characteristics are emulated, including paired-pulse facilitation (PPF), short-term plasticity (STP), long-term plasticity (LTP), long-term potentiation, and long-term depression. A ferroelectric optoelectronic reservoir computing system achieved a high accuracy of 93.62% on Modified National Institute of Standards and Technology (MNIST) handwritten digit recognition. Furthermore, retina-like light adaptation and Pavlovian conditioning are successfully mimicked. These results provide a strategy for developing multilevel memories and novel neuromorphic vision systems with integrated sensing-memory-processing. In summary, a novel multi-functional neuromorphic visual system with optoelectronic synergy based on a SnS2/BN/CuInP2S6 full van der Waals ferroelectric field-effect transistor is reported; the device demonstrates a high switching ratio of 10^5, 128 multilevel storage states (7 bits), excellent synaptic plasticity, and an image recognition accuracy of 93.62% based on reservoir computing.
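For context on the reservoir computing evaluation mentioned above, here is a minimal sketch of the only trained component of such a system: a linear readout fitted by ridge regression on fixed device responses. The function name, array shapes, and penalty are assumptions, not the paper's actual MNIST setup.

```python
import numpy as np

def train_readout(states, labels, n_classes=10, lam=1e-3):
    """Train the linear readout of a reservoir computing system.

    states: (n_samples, n_features) array of reservoir responses, e.g.
    device conductance states read out after optical pulse sequences;
    labels: integer class ids. Only this layer is trained; the reservoir
    itself stays fixed, which is the defining trait of reservoir
    computing. Shapes and the ridge penalty `lam` are illustrative.
    """
    targets = np.eye(n_classes)[labels]            # one-hot encoding
    gram = states.T @ states + lam * np.eye(states.shape[1])
    W = np.linalg.solve(gram, states.T @ targets)  # ridge-regression solution
    return W                                       # predict: (states @ W).argmax(axis=1)
```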
A Noise Filtering Algorithm for Event-Based Asynchronous Change Detection Image Sensors on TrueNorth and Its Implementation on TrueNorth
Asynchronous event-based sensors, or "silicon retinae," are a new class of vision sensors inspired by biological vision systems. The output of these sensors often contains a significant number of noise events along with the signal. Filtering these noise events is a common preprocessing step before using the data for tasks such as tracking and classification. This paper presents a novel spiking neural network-based approach to filtering noise events from data captured by an Asynchronous Time-based Image Sensor on a neuromorphic processor, the IBM TrueNorth Neurosynaptic System. The significant contribution of this work is demonstrating that the proposed filtering algorithm outperforms the traditional nearest-neighbor noise filter, achieving a higher signal-to-noise ratio (~10 dB higher) and retaining ~3X more signal-related events. In addition, for our envisioned application of object tracking and classification, under some parameter settings it can also generate some of the missing events in the spatial neighborhood of the signal for all classes of moving objects in the data, which is unattainable using the nearest-neighbor filter.
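For reference, the nearest-neighbor baseline that the spiking-network filter is compared against can be sketched as follows, under an assumed (t, x, y, p) event layout and time window; this is a simplified software version, not the paper's TrueNorth implementation.

```python
import numpy as np

def nearest_neighbor_filter(events, width, height, dt=5e-3):
    """Classical nearest-neighbor (background-activity) noise filter.

    Keeps an event only if one of its 8 spatial neighbors fired within
    the previous `dt` seconds; isolated events are treated as sensor
    noise. `dt` and the event layout are assumptions.
    """
    last = np.full((height, width), -np.inf)       # last event time per pixel
    kept = []
    for t, x, y, p in events:                      # assumed sorted by timestamp
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        recent = (t - last[y0:y1, x0:x1]) < dt     # neighbors active recently?
        recent[y - y0, x - x0] = False             # ignore the pixel's own history
        if recent.any():
            kept.append((t, x, y, p))
        last[y, x] = t
    return kept
```

Note that a filter like this can only discard events; it cannot regenerate missing signal events, which is exactly the limitation the paper's spiking-network approach addresses.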
Event-Based Face Detection and Tracking Using the Dynamics of Eye Blinks
We present the first purely event-based method for face detection, using the high temporal resolution of an event-based camera to detect the presence of a face in a scene via eye blinks. Eye blinks are a unique and stable natural dynamic temporal signature of human faces across the population that can be fully captured by event-based sensors. We show that eye blinks have a unique temporal signature over time that can be easily detected by correlating the acquired local activity with a generic temporal model of eye blinks generated from a wide population of users. In a second stage, once a face has been located, it becomes possible to apply a probabilistic framework to track its spatial location for each incoming event, while using eye blinks to correct for drift and tracking errors. Results are shown for several indoor and outdoor experiments. We also release an annotated dataset that can be used for future work on the topic.
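The blink-correlation step can be sketched as a normalized cross-correlation between a region's binned event activity and a generic blink template; the function name, binning, and normalization below are assumptions, not the paper's exact detector.

```python
import numpy as np

def blink_scores(activity, template):
    """Score a region's event-activity trace against a blink template.

    activity: 1D array of event counts per time bin for one candidate
    eye region; template: a generic temporal blink signature (built from
    a wide population of users, per the paper). Peaks in the normalized
    correlation mark candidate blinks. Bin width is an assumption.
    """
    a = (activity - activity.mean()) / (activity.std() + 1e-9)   # z-score trace
    k = (template - template.mean()) / (template.std() + 1e-9)   # z-score template
    return np.correlate(a, k, mode="same") / len(k)   # peaks => blink candidates
```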