20,951 results for "vision sensor"
EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames
We present EKLT, a feature tracking method that leverages the complementarity of event cameras and standard cameras to track visual features with high temporal resolution. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide updates with high temporal resolution. In contrast to previous works, which are based on heuristics, this is the first principled method that uses intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are more accurate than the state of the art, across a wide variety of scenes.
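The generative event model the abstract builds on is simple to state: a pixel fires an event whenever its log intensity drifts by more than a contrast threshold from the level recorded at its last event. A minimal single-pixel sketch (function name and threshold value are illustrative, not from the paper):

```python
def simulate_events(log_intensities, contrast_threshold=0.15):
    """Fire (timestamp, polarity) events whenever the log intensity at one
    pixel moves by at least `contrast_threshold` from the reference level
    set at the previous event. Illustrative sketch of the standard
    generative event model, not the authors' code."""
    events = []
    ref = log_intensities[0]  # reference log intensity at the last event
    for t, log_i in enumerate(log_intensities[1:], start=1):
        # a large change may trigger several events at one timestamp
        while abs(log_i - ref) >= contrast_threshold:
            polarity = 1 if log_i > ref else -1
            events.append((t, polarity))
            ref += polarity * contrast_threshold
    return events
```

Because the reference level updates per event, a slow brightness drift produces sparse events while a fast change produces a burst, which is what makes the correspondence problem motion-dependent.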
A robust weld seam detection method based on particle filter for laser welding by using a passive vision sensor
Vision sensor systems with an auxiliary light source, such as structured-light-based vision sensors, have been widely used for weld seam detection. However, the main drawback of this approach is the preview distance between the welding position and the sensing position, which introduces unavoidable detection errors. Laser welding of small workpieces or narrow butt joints is especially vulnerable to detection error caused by the preview distance. Meanwhile, only one point of the weld seam can be measured at a time, which makes the corresponding image processing very sensitive to light noise. Consequently, a seam measurement method based on a passive vision sensor for narrow-gap butt joints is proposed in this article. The weld pool is observed directly by this vision sensor, so there is no preview distance. An appropriately adjusted telecentric lens is used to increase the contrast between the seam feature and the background welding noise. In this manner, a long and noticeable weld seam feature, as well as the weld pool, can be obtained in the captured images. A corresponding image processing algorithm based on a particle filter is designed to extract the captured long weld seam. The particle filter tracks the slope and intercept of the weld seam, combining the advantages of both the particle filter and the Hough transform. Because the particle filter generates its result by evaluating both the previous and the current measurements, the image processing remains robust even when a few images are of poor quality. Finally, laser welding experiments were carried out, and the results revealed that the proposed method achieves a detection accuracy of 0.08 mm when welding a 0.1 mm-wide narrow butt joint at a 2000 mm/min welding speed.
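Tracking the slope and intercept of a line with a bootstrap particle filter, as the abstract describes, can be sketched as follows. All tuning values, the random-walk state model, and the Gaussian likelihood are illustrative guesses, not the authors' implementation:

```python
import math
import random

def track_seam_line(frames, n_particles=400, sigma=5.0, seed=1):
    """Track the weld seam line y = m*x + b across image frames with a
    bootstrap particle filter over (slope, intercept). A hedged sketch:
    noise levels, likelihood, and priors are illustrative assumptions."""
    rng = random.Random(seed)
    # prior: particles spread over plausible slopes and intercepts
    particles = [(rng.uniform(-1.0, 1.0), rng.uniform(0.0, 50.0))
                 for _ in range(n_particles)]
    estimates = []
    for points in frames:  # each frame: list of (x, y) edge detections
        # predict: random walk on slope and intercept
        particles = [(m + rng.gauss(0, 0.02), b + rng.gauss(0, 1.0))
                     for m, b in particles]
        # update: weight by mean squared distance of the points to the line
        weights = []
        for m, b in particles:
            err = sum((y - (m * x + b)) ** 2 for x, y in points) / len(points)
            weights.append(math.exp(-err / (2.0 * sigma * sigma)))
        total = sum(weights)
        weights = [w / total for w in weights]
        # estimate: weighted mean of the particle states
        m_est = sum(w * p[0] for w, p in zip(weights, particles))
        b_est = sum(w * p[1] for w, p in zip(weights, particles))
        estimates.append((m_est, b_est))
        # resample particles in proportion to their weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Because each frame's estimate is conditioned on the resampled particles from previous frames, a few noisy frames only perturb the track rather than breaking it, which is the robustness property the abstract claims.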
Real-time image processing for vision-based weld seam tracking in robotic GMAW
Image capturing and processing are important when using a vision sensor to effectively track the weld seam and control weld quality in robotic gas metal arc welding (GMAW). When using vision techniques to track the weld seam, the key is to acquire clear weld images and process them accurately. In this paper, a method for real-time image capturing and processing is presented for application in robotic seam tracking. By analyzing the characteristics of robotic GMAW, real-time weld images are captured clearly by the passive vision sensor. Utilizing the main characteristics of the gray gradient in the weld image, an improved Canny edge detection algorithm is proposed to detect the edges of the weld image and extract the seam and pool characteristic parameters. The image processing precision was further verified through random welding experiments. Results showed that the precision of the image processing can be controlled to within ±0.3 mm in robotic GMAW, which meets the requirement of real-time seam tracking.
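The gradient computation at the core of any Canny-style detector can be sketched as below. The paper's improved algorithm adds further stages (smoothing, non-maximum suppression, hysteresis thresholding) that are omitted here, and the threshold value is illustrative:

```python
import math

def sobel_edges(img, threshold):
    """Sobel gradient-magnitude thresholding, the gradient core of a
    Canny-style edge detector, on a 2-D grayscale image given as nested
    lists. Only a sketch of the gradient stage; the paper's improved
    Canny adds smoothing, non-maximum suppression, and hysteresis."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]    # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if math.hypot(gx, gy) >= threshold:
                edges[y][x] = 1
    return edges
```

On a weld image, the gray-gradient characteristic the abstract mentions is exactly this magnitude map; the seam shows up as a connected ridge of high-gradient pixels.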
Autonomous seam acquisition and tracking system for multi-pass welding based on vision sensor
Automatic welding technology is a solution for increasing welding productivity and improving welding quality, especially in thick-plate welding. To obtain high-quality multi-pass welds, it is necessary to maintain a stable welding bead in each pass. In multi-pass welding, it is difficult to obtain a stable weld bead with a traditional teach-and-playback arc welding robot. To overcome these limitations, an automatic seam tracking system for an arc welding robot is proposed for multi-pass welding. The developed system includes an image acquisition module, an image processing module, a tracking control unit, and their software interfaces. The vision sensor, which includes a CCD camera, is mounted on the welding torch. To minimize the inevitable misalignment between the center line of the welding seam and the welding torch for each welding pass, a robust welding image processing algorithm is proposed, which proved suitable for the root pass, the filling passes, and the cap passes. To accurately track the welding seam, a Fuzzy-P controller is designed to control the arc welding robot to adjust the torch. Microsoft Visual C++ 6.0 is used to develop the application programs and user interface. Welding experiments were carried out to verify the validity of the multi-pass welding tracking system.
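The abstract does not detail the Fuzzy-P controller, but a common form is a proportional correction whose gain is blended between a small-error setting and a large-error setting by fuzzy memberships on the measured seam offset. A hedged sketch, where the gains, the membership shape, and the error band are all illustrative assumptions:

```python
def fuzzy_p_correction(error, kp_small=0.3, kp_large=0.8, band=2.0):
    """Fuzzy-P torch correction: blend the proportional gain between a
    small-error gain and a large-error gain using triangular memberships
    on |error| (mm). Gains and band are illustrative, not the paper's
    tuning."""
    mu_large = min(abs(error) / band, 1.0)   # membership of "error is large"
    mu_small = 1.0 - mu_large                # membership of "error is small"
    kp = mu_small * kp_small + mu_large * kp_large
    return kp * error                        # torch offset command
```

The design intent is that small residual misalignments are corrected gently (avoiding oscillation around the seam center line), while large misalignments get an aggressive correction.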
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated by the photodiode of a single pixel among different exposure taps and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
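The classical three-image recovery the abstract refers to reduces, per pixel, to a 3x3 linear solve: for a Lambertian surface, each intensity is i_k = albedo * (l_k · n), so stacking the light directions L gives L g = i with g = albedo * n. A minimal pure-Python sketch (function names are illustrative; real pipelines run this solve over whole images):

```python
import math

def surface_normal(intensities, light_dirs):
    """Classical photometric stereo at one pixel: solve the 3x3 system
    L g = i for g = albedo * n (Cramer's rule), then split g into the
    albedo (its length) and the unit normal. Assumes a Lambertian surface
    and three linearly independent, known lighting directions."""
    def det3(a):  # determinant of a 3x3 matrix given as nested lists
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    def with_col(a, col, v):  # copy of `a` with column `col` replaced by `v`
        return [[v[r] if c == col else a[r][c] for c in range(3)]
                for r in range(3)]
    d = det3(light_dirs)  # nonzero iff the lighting directions are independent
    g = [det3(with_col(light_dirs, c, intensities)) / d for c in range(3)]
    albedo = math.sqrt(sum(x * x for x in g))
    return [x / albedo for x in g], albedo
```

The multi-tap sensor's contribution is orthogonal to this math: it supplies the three differently lit intensities with almost identical timing, so the static-scene assumption holds within one capture.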
EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
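The space-sweep idea behind EMVS can be illustrated in a toy 2-D setting: each event back-projects to a ray from the known camera position, rays vote into a discretised space, and scene edges appear as vote maxima. This flat-land sketch illustrates only the voting principle, not the paper's 3-D implementation:

```python
import math

def emvs_vote(events, grid_w, grid_h, max_depth=20.0, step=0.2):
    """Toy 2-D EMVS: back-project each event as a ray from its known
    camera position on the x-axis and accumulate votes in a discretised
    grid. Cells that many rays pass through (scene edges) collect vote
    maxima. A flat-land illustration of space-sweep voting only."""
    grid = [[0] * grid_w for _ in range(grid_h)]
    for cam_x, bearing in events:  # bearing: angle of the back-projected ray
        voted = set()              # vote each cell at most once per ray
        d = 0.0
        while d <= max_depth:      # march along the ray in depth steps
            ix = round(cam_x + d * math.sin(bearing))
            iy = round(d * math.cos(bearing))
            if 0 <= ix < grid_w and 0 <= iy < grid_h and (ix, iy) not in voted:
                grid[iy][ix] += 1
                voted.add((ix, iy))
            d += step
    return grid
```

No data association is needed: rays from events caused by the same scene edge intersect at its true location, so the maximum emerges from accumulation alone.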
Optoelectronic resistive random access memory for neuromorphic vision sensors
Neuromorphic visual systems have considerable potential to emulate basic functions of the human visual system even beyond the visible light region. However, the complex circuitry of artificial visual systems based on conventional image sensors, memory and processing units presents serious challenges in terms of device integration and power consumption. Here we show simple two-terminal optoelectronic resistive random access memory (ORRAM) synaptic devices for an efficient neuromorphic visual system that exhibit non-volatile optical resistive switching and light-tunable synaptic behaviours. The ORRAM arrays enable image sensing and memory functions as well as neuromorphic visual pre-processing with an improved processing efficiency and image recognition rate in the subsequent processing tasks. The proof-of-concept device provides the potential to simplify the circuitry of a neuromorphic visual system and contribute to the development of applications in edge computing and the internet of things.
Progress of Materials and Devices for Neuromorphic Vision Sensors
Highlights:
  • The neuromorphic vision sensors for near-sensor and in-sensor computing of visual information are implemented using optoelectronic synaptic circuits and single-device optoelectronic synapses, respectively.
  • This review focuses on the recent progress, working mechanisms, and image pre-processing techniques of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
The latest developments in bio-inspired neuromorphic vision sensors can be summarized in three keywords: smaller, faster, and smarter. (1) Smaller: devices are becoming more compact by integrating previously separate components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Faster: owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
A survey of depth and inertial sensor fusion for human action recognition
A number of review or survey articles have previously appeared on human action recognition where either vision sensors or inertial sensors are used individually. Considering that each sensor modality has its own limitations, a number of previously published papers have shown that the fusion of vision and inertial sensor data improves the accuracy of recognition. This survey article provides an overview of the recent investigations where both vision and inertial sensors are used together and simultaneously to perform human action recognition more effectively. The thrust of this survey is on the utilization of depth cameras and inertial sensors, as these two types of sensors are cost-effective, commercially available, and, more significantly, both provide 3D human action data. An overview of the components necessary to achieve fusion of data from depth and inertial sensors is provided. In addition, a review of the publicly available datasets that include depth and inertial data simultaneously captured via depth and inertial sensors is presented.
A Vision-Based Sensor for Noncontact Structural Displacement Measurement
Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement.
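The subpixel principle can be sketched in 1-D: locate the integer cross-correlation peak, then refine it with a local fit. The paper's upsampled cross correlation refines by upsampling the correlation surface instead, so this three-point parabolic fit is a simplified stand-in, and all names are illustrative:

```python
def subpixel_shift(template, signal):
    """Estimate the 1-D shift of `template` inside `signal` to subpixel
    precision: find the integer cross-correlation peak, then refine it
    with a parabola through the peak and its two neighbours. A simplified
    stand-in for the paper's upsampled cross correlation."""
    n, m = len(signal), len(template)
    # integer-pixel cross-correlation scores for every candidate shift
    scores = [sum(template[i] * signal[k + i] for i in range(m))
              for k in range(n - m + 1)]
    k = max(range(len(scores)), key=scores.__getitem__)
    if 0 < k < len(scores) - 1:
        # vertex of the parabola through (k-1, c0), (k, c1), (k+1, c2)
        c0, c1, c2 = scores[k - 1], scores[k], scores[k + 1]
        denom = c0 - 2 * c1 + c2
        if denom != 0:
            return k + 0.5 * (c0 - c2) / denom
    return float(k)
```

Tracking the same template in every video frame and differencing the recovered shifts yields the displacement time history; the upsampling factor in the paper plays the role the parabolic fit plays here, trading computation for finer resolution.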