7,913 result(s) for "camera evaluation"
Coral Reef Monitoring by Scuba Divers Using Underwater Photogrammetry and Geodetic Surveying
Underwater photogrammetry is increasingly being used by marine ecologists because of its ability to produce accurate, spatially detailed, non-destructive measurements of benthic communities, coupled with affordability and ease of use. However, independent quality control, rigorous imaging system set-up, optimal geometry design and a strict modeling of the imaging process are essential to achieving a high degree of measurable accuracy and resolution. If a proper photogrammetric approach that enables the formal description of the propagation of measurement error and modeling uncertainties is not undertaken, statements regarding the statistical significance of the results are limited. In this paper, we tackle these critical topics, based on the experience gained in the Moorea Island Digital Ecosystem Avatar (IDEA) project, where we have developed a rigorous underwater photogrammetric pipeline for coral reef monitoring and change detection. Here, we discuss the need for a permanent, underwater geodetic network, which serves to define a temporally stable reference datum and a check for the time series of photogrammetrically derived three-dimensional (3D) models of the reef structure. We present a methodology to evaluate the suitability of several underwater camera systems for photogrammetric and multi-temporal monitoring purposes and stress the importance of camera network geometry to minimize the deformations of photogrammetrically derived 3D reef models. Finally, we incorporate the measurement and modeling uncertainties of the full photogrammetric process into a simple and flexible framework for detecting statistically significant changes among a time series of models.
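As a hedged illustration of the change-detection idea described above (not the authors' actual pipeline), a per-point change between two photogrammetric epochs can be flagged as statistically significant only when it exceeds its propagated measurement uncertainty:

```python
import math

def significant_change(z1, sigma1, z2, sigma2, k=1.96):
    """Flag a per-point change between two survey epochs as significant
    at roughly 95% confidence.

    Assumes independent, Gaussian measurement errors in each epoch, so
    the standard deviation of the difference propagates as
    sqrt(sigma1**2 + sigma2**2).  All quantities in the same units
    (e.g. metres).  Illustrative sketch only.
    """
    diff = z2 - z1
    threshold = k * math.sqrt(sigma1**2 + sigma2**2)
    return abs(diff) > threshold, diff, threshold
```

For example, a 3 cm change with 1 cm per-epoch uncertainty clears the ~2.8 cm threshold, while the same change with 2 cm uncertainties does not — which is why the abstract stresses that uncertainty modeling limits what can be claimed as real reef change.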
Evaluations of Speed Camera Interventions Can Deliver a Wide Range of Outcomes: Causes and Policy Implications
Speeding (travelling at speeds above the speed limit) is proven to be a major contributor to serious crashes, and speed management interventions including speed cameras are shown to reduce speeds, crashes, and trauma. However, the present review identifies that the range of outcomes reported in evaluations of speed cameras is large, complicating the understanding of effects, and inviting scepticism about the value of speed cameras despite the large numbers of reported successes, as well as systematic reviews and meta-analyses that demonstrate their life- and injury-saving value. Therefore, this review is focused on the factors that contribute to the large range of findings, including reasons for genuine differences in the outcomes delivered by different camera programs and variations in evaluation methodology that influence the extent to which real benefits are detected. Finally, recommendations are offered to maximise the safety benefits of speed-camera programs (including ensuring the full chain of requirements for general deterrence is met; strong communications about new programs and expansions at least several weeks in advance of implementation; and unpredictability of enforcement versus signposted cameras) and to improve evaluation methods (especially around determining the road lengths/locations assumed to be treated by the cameras and use of control locations).
COMPARISON OF DIVER-OPERATED UNDERWATER PHOTOGRAMMETRIC SYSTEMS FOR CORAL REEF MONITORING
Underwater photogrammetry is a well-established technique for measuring and modelling the subaquatic environment in fields ranging from archaeology to marine ecology. While for simple tasks the acquisition and processing of images have become straightforward, applications requiring relative accuracy better than 1:1000 are still considered challenging. This study focuses on the metric evaluation of different off-the-shelf camera systems for making high-resolution, high-accuracy measurements of coral reefs monitored through time, where the variations to be measured are in the range of a few centimetres per year. High-quality and low-cost systems (reflex and mirrorless vs. action cameras, e.g. GoPro) with multiple lenses (prime and zoom), different fields of view (from fisheye to moderate wide angle), pressure housing materials and lens ports (dome and flat) are compared. Tests are repeated at different camera-to-object distances to investigate distance-dependent errors and assess the accuracy of the photogrammetrically derived models. An extensive statistical analysis of the different systems is performed, and comparisons against reference control points measured through a high-precision underwater geodetic network are reported.
Evaluation of Average Quantum Efficiency of Industrial Digital Camera
Quantum efficiency (QE) is a critical metric for assessing the performance of industrial digital cameras. The current EMVA1288 standard relies on monochromatic light for QE measurements. Comprehensive QE tests across the visible spectrum often involve elaborate setups and extensive data acquisition. Additionally, such tests may not fully capture camera performance under broadband illumination, which is frequently encountered in industrial applications. This study introduces the concept of average quantum efficiency (AQE) using white light sources and proposes a novel testing method. Systematic experiments and data analyses were performed on two industrial digital cameras under white light sources with different spectral distributions. The results suggest that AQE testing offers a practical and efficient means to evaluate camera performance under broadband illumination, complementing existing monochromatic QE measurement methods.
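As a sketch of the idea behind averaging QE over a broadband source (an assumed definition for illustration, not the paper's exact formulation), the average quantum efficiency can be taken as the monochromatic QE weighted by the source's relative spectral distribution:

```python
def average_quantum_efficiency(qe, source_spectrum):
    """Spectrally weighted average quantum efficiency (illustrative).

    qe[i] is the monochromatic quantum efficiency at the i-th sample
    wavelength; source_spectrum[i] is the relative photon flux of the
    white light source at the same wavelength.  Assumes a uniform
    wavelength grid, so simple rectangular integration suffices:
        AQE = sum(QE * S) / sum(S)
    """
    num = sum(q * s for q, s in zip(qe, source_spectrum))
    den = sum(source_spectrum)
    return num / den
```

Under this weighting, two sources with different spectral distributions yield different AQE values for the same sensor, which is why the study tests cameras under multiple white light sources.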
MEDIUM FORMAT CAMERA EVALUATION BASED ON THE LATEST PHASE ONE TECHNOLOGY
In early 2016, Phase One Industrial launched a new high resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions, exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each of the sensor types based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which enable a perfect scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to show any blurring caused by forward motion. The result of the comparison showed that both cameras offer high-accuracy photogrammetric results with post processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and a higher ISO range of the CMOS-based camera. The results showed no significant differences between cameras.
MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking
Standardized benchmarks have been crucial in pushing the performance of computer vision algorithms, especially since the advent of deep learning. Although leaderboards should not be over-claimed, they often provide the most objective measure of performance and are therefore important guides for research. We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT) launched in late 2014, to collect existing and new data and create a framework for the standardized evaluation of multiple object tracking methods. The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community, with applications ranging from robot navigation to self-driving cars. This paper collects the first three releases of the benchmark: (i) MOT15, along with numerous state-of-the-art results submitted over recent years, (ii) MOT16, which contains new challenging videos, and (iii) MOT17, which extends the MOT16 sequences with more precise labels and evaluates tracking performance on three different object detectors. The second and third releases not only offer a significant increase in the number of labeled boxes, but also provide labels for multiple object classes besides pedestrians, as well as the level of visibility of every single object of interest. We finally provide a categorization of state-of-the-art trackers and a broad error analysis. This will help newcomers understand the related work and research trends in the MOT community, and hopefully shed some light on potential future research directions.
An evaluation of platforms for processing camera‐trap data using artificial intelligence
Camera traps have quickly transformed the way in which many ecologists study the distribution of wildlife species, their activity patterns and interactions among members of the same ecological community. Although they provide a cost‐effective method for monitoring multiple species over large spatial and temporal scales, the time required to process the data can limit the efficiency of camera‐trap surveys. Thus, there has been considerable attention given to the use of artificial intelligence (AI), specifically deep learning, to help process camera‐trap data. Using deep learning for these applications involves training algorithms, such as convolutional neural networks (CNNs), to use particular features in the camera‐trap images to automatically detect objects (e.g. animals, humans, vehicles) and to classify species. To help overcome the technical challenges associated with training CNNs, several research communities have recently developed platforms that incorporate deep learning in easy‐to‐use interfaces. We review key characteristics of four AI platforms—Conservation AI, MegaDetector, MLWIC2: Machine Learning for Wildlife Image Classification and Wildlife Insights—and two auxiliary platforms—Camelot and Timelapse—that incorporate AI output for processing camera‐trap data. We compare their software and programming requirements, AI features, data management tools and output format. We also provide R code and data from our own work to demonstrate how users can evaluate model performance. We found that species classifications from Conservation AI, MLWIC2 and Wildlife Insights generally had low to moderate recall. Yet, the precision for some species and higher taxonomic groups was high, and MegaDetector and MLWIC2 had high precision and recall when classifying images as either ‘blank’ or ‘animal’. 
These results suggest that most users will need to review AI predictions, but that AI platforms can improve efficiency of camera‐trap‐data processing by allowing users to filter their dataset into subsets (e.g. of certain taxonomic groups or blanks) that can be verified using bulk actions. By reviewing features of popular AI‐powered platforms and sharing an open‐source GitBook that illustrates how to manage AI output to evaluate model performance, we hope to facilitate ecologists' use of AI to process camera‐trap data.
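The review itself ships R code for this evaluation; as a minimal Python sketch of the same per-species precision/recall calculation (the parallel-list label format here is a hypothetical simplification — each reviewed platform exports its own schema):

```python
def precision_recall(true_labels, predicted_labels, species):
    """Per-species precision and recall for camera-trap classifications.

    true_labels / predicted_labels are parallel lists of labels, one
    entry per image (hypothetical format for illustration).
    """
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == species and p == species)
    fp = sum(1 for t, p in pairs if t != species and p == species)
    fn = sum(1 for t, p in pairs if t == species and p != species)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

High precision with low recall — the pattern the review reports for several platforms — means a species prediction can be trusted when it appears, but many images of that species are missed, so manual review of the remainder is still needed.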
Specim IQ: Evaluation of a New, Miniaturized Handheld Hyperspectral Camera and Its Application for Plant Phenotyping and Disease Detection
Hyperspectral imaging sensors are promising tools for monitoring crop plants or vegetation in different environments. Information on physiology, architecture or biochemistry of plants can be assessed non-invasively and on different scales. For instance, hyperspectral sensors are implemented for stress detection in plant phenotyping processes or in precision agriculture. Up to date, a variety of non-imaging and imaging hyperspectral sensors is available. The measuring process and the handling of most of these sensors is rather complex. Thus, during the last years the demand for sensors with easy user operability arose. The present study introduces the novel hyperspectral camera Specim IQ from Specim (Oulu, Finland). The Specim IQ is a handheld push broom system with integrated operating system and controls. Basic data handling and data analysis processes, such as pre-processing and classification routines are implemented within the camera software. This study provides an introduction into the measurement pipeline of the Specim IQ as well as a radiometric performance comparison with a well-established hyperspectral imager. Case studies for the detection of powdery mildew on barley at the canopy scale and the spectral characterization of Arabidopsis thaliana mutants grown under stressed and non-stressed conditions are presented.
Evaluation of the Pose Tracking Performance of the Azure Kinect and Kinect v2 for Gait Analysis in Comparison with a Gold Standard: A Pilot Study
Gait analysis is an important tool for the early detection of neurological diseases and for the assessment of the risk of falling in elderly people. The availability of low-cost camera hardware on the market today and recent advances in machine learning enable a wide range of clinical and health-related applications, such as patient monitoring or exercise recognition at home. In this study, we evaluated the motion tracking performance of the latest generation of the Microsoft Kinect camera, Azure Kinect, compared to its predecessor Kinect v2, during treadmill walking, using a gold-standard Vicon multi-camera motion capturing system and the 39-marker Plug-in Gait model. Five young and healthy subjects walked on a treadmill at three different velocities while data were recorded simultaneously with all three camera systems. An easy-to-administer camera calibration method developed here was used to spatially align the 3D skeleton data from both Kinect cameras and the Vicon system. With this calibration, the spatial agreement of joint positions between the two Kinect cameras and the reference system was evaluated. In addition, we compared the accuracy of certain spatio-temporal gait parameters, i.e., step length, step time, step width, and stride time calculated from the Kinect data, with the gold-standard system. Our results showed that the improved hardware and the motion tracking algorithm of the Azure Kinect camera led to a significantly higher accuracy of the spatial gait parameters than the predecessor Kinect v2, while no significant differences were found between the temporal parameters. Furthermore, we explain in detail how this experimental setup could be used to continuously monitor progress during gait rehabilitation in older people.
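To make the temporal gait parameters concrete, here is an illustrative sketch (not the study's actual extraction from Kinect skeleton data, which is more involved): step time is the interval between successive heel strikes of alternating feet, and stride time the interval between successive strikes of the same foot.

```python
def step_times(heel_strikes):
    """Step times from an ordered list of alternating left/right
    heel-strike timestamps, in seconds (illustrative only)."""
    return [t2 - t1 for t1, t2 in zip(heel_strikes, heel_strikes[1:])]

def stride_times(heel_strikes):
    """Stride times: intervals between every other heel strike,
    i.e. successive strikes of the same foot."""
    return [t2 - t1 for t1, t2 in zip(heel_strikes, heel_strikes[2:])]
```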
Quality Evaluation for Colored Point Clouds Produced by Autonomous Vehicle Sensor Fusion Systems
Perception systems for autonomous vehicles (AVs) require various types of sensors, including light detection and ranging (LiDAR) and cameras, to ensure their robustness in driving scenarios and weather conditions. The data from these sensors are fused together to generate maps of the surrounding environment and provide information for the detection and tracking of objects. Hence, evaluation methods are necessary to compare existing and future sensor systems through quantifiable measurements given the wide range of sensor models and design choices. This paper presents an evaluation method to compare colored point clouds, a common fused data type, among two LiDAR–camera fusion systems and a stereo camera setup. The evaluation approach uses a test artifact measured by the fusion system’s colored point cloud through the spread, area coverage, and color difference of the colored points within the computed space. The test results showed the evaluation approach was able to rank the sensor fusion systems based on its metrics and complement the experimental observations. The proposed evaluation methodology is, therefore, suitable towards the comparison of generated colored point clouds by sensor fusion systems.