659 result(s) for "RGB camera"
An Aerosol Extinction Coefficient Retrieval Method and Characteristics Analysis of Landscape Images
RGB pixel values from landscape images were used to measure the extinction coefficient of aerosols suspended in the atmosphere. The pixel values of an imaged object depend on, among other factors, the object's reflectance, reflection direction, type, and distance, the illumination intensity, the atmospheric particle extinction coefficient, and the scattering angle between the sun and the optical axis of the camera. Image intensity therefore cannot directly provide the aerosol concentration or the aerosol extinction coefficient. This study proposes simple methods to solve this problem, which yield reasonable extinction coefficients at the three effective RGB wavelengths. Aerosol size information was inferred from the RGB Ångström exponent measured at the three wavelengths for clean, dusty, rainy, Asian dust storm, and foggy days. Additionally, long-term measurements over four months agreed reasonably with existing PM2.5 measurements, showing that the proposed method yields useful results.
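The Ångström-exponent step the abstract mentions amounts to a log-log fit of extinction against wavelength. A minimal sketch, assuming hypothetical effective RGB wavelengths and retrieved extinction values (the paper does not publish code):

```python
import numpy as np

# Hypothetical sketch: fit an Angstrom exponent from extinction
# coefficients retrieved at three effective RGB wavelengths.
# Wavelengths (nm) and extinction values (1/km) are assumed numbers.
wavelengths_nm = np.array([612.0, 537.0, 474.0])   # assumed effective R, G, B
sigma_ext = np.array([0.21, 0.26, 0.31])           # assumed retrieved values

# Extinction follows sigma(lambda) ~ lambda**(-alpha); a line fit in
# log-log space gives the Angstrom exponent as the negated slope.
slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(sigma_ext), 1)
alpha = -slope
print(f"RGB Angstrom exponent: {alpha:.2f}")  # larger alpha -> smaller particles
```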
Augmenting Microsoft's HoloLens with vuforia tracking for neuronavigation
Major hurdles for Microsoft's HoloLens as a tool in medicine have been access to tracking data and a relatively high localisation error in the displayed information, cumulatively resulting in its limited use and minimal quantification. The following work investigates augmenting the HoloLens with the proprietary image-processing SDK Vuforia, integrating data from its front-facing RGB camera to provide more spatially stable holograms for neuronavigational use. Continuous camera tracking maintained hologram registration with a mean perceived drift of 1.41 mm and a mean sub-2-mm surface point localisation accuracy of 53%, all while allowing the researcher to walk about the test area. This represents a 68% improvement for the latter and a 34% improvement for the former compared with a typical HoloLens deployment used as a control. Both represent a significant improvement in hologram stability over the current state of the art and, to the best of the authors' knowledge, are the first example of quantified measurements when augmenting hologram stability using data from the RGB sensor.
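As an illustration of the kind of metrics reported here, a minimal sketch of computing mean perceived drift and sub-2-mm localisation accuracy, assuming hypothetical rendered and surveyed point coordinates (not the paper's evaluation code):

```python
import numpy as np

# Hypothetical sketch: mean perceived drift as the average Euclidean
# distance between where hologram anchor points are rendered and where
# they were surveyed to be. All coordinates (mm) are assumed values.
rendered = np.array([[0.8, 0.4, 1.0], [10.5, 0.2, 0.9], [20.1, 1.3, 0.5]])
surveyed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])

errors = np.linalg.norm(rendered - surveyed, axis=1)
print(f"mean perceived drift: {errors.mean():.2f} mm")
# Fraction of points localised within 2 mm, in the style of the paper's
# "sub 2-mm surface point localisation accuracy" metric.
print(f"sub 2-mm accuracy: {100 * (errors < 2.0).mean():.0f}%")
```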
Using Ordinary Digital Cameras in Place of Near-Infrared Sensors to Derive Vegetation Indices for Phenology Studies of High Arctic Vegetation
To remotely monitor vegetation at temporal and spatial resolutions unobtainable with satellite-based systems, near remote sensing systems must be employed. To this end, we used Normalized Difference Vegetation Index (NDVI) sensors and ordinary digital cameras to monitor the greenness of six different but common and widespread High Arctic plant species/groups (graminoid/Salix polaris; Cassiope tetragona; Luzula spp.; Dryas octopetala/S. polaris; C. tetragona/D. octopetala; graminoid/bryophyte) during an entire growing season in central Svalbard. Of the three greenness indices (2G_RBi, Channel G% and GRVI) derived from digital camera images, only GRVI showed significant correlations with NDVI in all vegetation types. The GRVI (Green-Red Vegetation Index) is calculated as (GDN − RDN)/(GDN + RDN), where GDN is the green digital number and RDN is the red digital number. Both NDVI and GRVI successfully recorded the timing of green-up, the plant growth period, and senescence in all six plant species/groups. Some differences in phenology between plant species/groups occurred: the mid-season growing period reached a sharp peak in NDVI and GRVI values where graminoids were present, but a prolonged period of higher values occurred with the other plant species/groups. Unlike the other plant species/groups, C. tetragona showed increased NDVI and GRVI values towards the end of the season. NDVI values measured with active and passive sensors were strongly correlated (r² > 0.70) for the same plant species/groups. Although NDVI recorded by the active sensor was consistently lower than that of the passive sensor for the same plant species/groups, the differences were small and likely due to the differing light sources used. Thus, it is evident that GRVI and NDVI measured with active and passive sensors captured similar vegetation attributes of High Arctic plants. Hence, inexpensive digital cameras can be used alongside passive and active NDVI devices to establish a near remote sensing network for monitoring changing vegetation dynamics in the High Arctic.
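Since the abstract gives the GRVI formula explicitly, here is a minimal sketch of computing it per pixel from a camera image; the plot-averaging step is an assumption:

```python
import numpy as np

# Sketch of the Green-Red Vegetation Index from a digital camera image,
# following GRVI = (GDN - RDN) / (GDN + RDN) as defined in the abstract.
# `image` is assumed to be an RGB array of digital numbers.
def grvi(image: np.ndarray) -> np.ndarray:
    red = image[..., 0].astype(float)    # RDN: red digital numbers
    green = image[..., 1].astype(float)  # GDN: green digital numbers
    denom = green + red
    # Guard against division by zero on dark pixels.
    return np.where(denom > 0, (green - red) / np.maximum(denom, 1e-9), 0.0)

# Assumed usage: a per-plot greenness value is the mean GRVI over the plot.
plot = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))
print(f"mean plot GRVI: {grvi(plot).mean():.3f}")
```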
Quantifying Soil Particle Settlement Characteristics through Machine Vision Analysis Utilizing an RGB Camera
Soil particle size distribution is a crucial factor in determining soil properties and classifying soil types. Traditional methods, such as hydrometer tests, have limitations in the time required, the labor involved, and their operator dependency. In this paper, we propose a novel approach to soil particle size analysis using machine vision with an RGB camera. The method aims to overcome the limitations of traditional techniques by providing an efficient and automated analysis of fine-grained soils. It uses a digital camera to capture the settling behavior of soil particles, eliminating the need for a hydrometer. Experimental results demonstrate the effectiveness of the machine vision-based approach in accurately determining soil particle size distribution. A comparison between the proposed method and traditional hydrometer tests reveals strong agreement, with an average deviation of only 2.3% in particle size measurements, validating the reliability and accuracy of the approach. The proposed machine vision-based analysis thus offers a promising, precise, efficient, and cost-effective alternative to traditional techniques for assessing the particle size distribution of fine-grained soils.
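Sedimentation-based particle sizing of this kind ultimately rests on Stokes' law. A minimal sketch under assumed material constants, with the settling velocity standing in for whatever the camera tracking measures (the paper's own pipeline is not published):

```python
import math

# Hypothetical sketch: Stokes' law relates the terminal settling velocity
# of a small sphere to its diameter, the basis of hydrometer-style
# sedimentation analysis: d = sqrt(18 * mu * v / ((rho_s - rho_w) * g)).
MU = 1.002e-3      # water viscosity at 20 C (Pa*s)
RHO_S = 2650.0     # assumed soil particle density (kg/m^3)
RHO_W = 998.2      # water density at 20 C (kg/m^3)
G = 9.81           # gravitational acceleration (m/s^2)

def stokes_diameter(settling_velocity_m_s: float) -> float:
    """Equivalent spherical diameter (m) for a measured settling velocity."""
    return math.sqrt(18 * MU * settling_velocity_m_s / ((RHO_S - RHO_W) * G))

# A particle observed (e.g. via camera-tracked suspension depth over time)
# to settle 0.1 m in 600 s:
v = 0.1 / 600
print(f"equivalent diameter: {stokes_diameter(v) * 1e6:.1f} micrometres")
```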
Tableware Tidying-Up Robot System for Self-Service Restaurant–Detection and Manipulation of Leftover Food and Tableware
In this study, an automated tableware tidying-up robot system was developed to tidy up the large amount of tableware in a self-service restaurant. The study focused on sorting and collecting tableware placed on trays detected by an RGB-D camera; leftover food was also handled by the system. The RGB-D camera efficiently detected the position and height of the tableware, and whether leftover food was present, by image processing. A parallel-arm and robot-hand mechanism was designed to achieve low cost and high processing speed, and two types of rotation mechanisms were designed to throw away leftover food. The effectiveness of the camera detection system was verified through experiments on tableware and leftover-food detection. The effectiveness of the prototype robot and the rotation-assist mechanism was verified through experiments on grasping tableware, throwing away leftover food with the two rotating mechanisms, collecting multiple pieces of tableware, and sorting overlapping tableware with multiple robots.
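A minimal sketch of the depth-based leftover check the abstract describes, assuming a known empty-dish reference depth and a hypothetical dish mask (not the authors' implementation):

```python
import numpy as np

# Hypothetical sketch of an RGB-D check: estimate tableware content from
# a depth image and flag leftover food when the dish interior sits closer
# to the camera than its empty-dish depth. Regions/thresholds are assumed.
def has_leftovers(depth_mm: np.ndarray, dish_mask: np.ndarray,
                  empty_depth_mm: float, tol_mm: float = 8.0) -> bool:
    """True if the median depth inside the dish is closer to the camera
    than the known empty-dish depth by more than `tol_mm`."""
    dish_depth = np.median(depth_mm[dish_mask])
    return (empty_depth_mm - dish_depth) > tol_mm

# Assumed 480x640 depth frame with a dish region 15 mm shallower than empty.
depth = np.full((480, 640), 900.0)
mask = np.zeros_like(depth, dtype=bool)
mask[200:280, 300:380] = True
depth[mask] = 885.0
print(has_leftovers(depth, mask, empty_depth_mm=900.0))  # True
```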
Analysis of Relationship between Natural Standing Behavior of Elderly People and a Class of Standing Aids in a Living Space
As the world’s population ages, technology-based support for the elderly is becoming increasingly important. This study analyzes the relationship between the natural standing behavior of elderly people measured in a living space and classes of standing aids, as well as the physical and cognitive abilities contributing to the prevention of household fall injuries. In total, 24 standing behaviors of elderly people rising from chairs, sofas, and nursing beds, recorded in an RGB-D elderly behavior library, were analyzed. Differences in standing behavior were analyzed by focusing on intrinsic and common standing-aid characteristics among the seat types, including the armrests of chairs or sofas and nursing-bed handrails. The standing behaviors fell into two types: leaning the trunk forward without using an armrest as a standing aid, and standing without leaning the trunk forward by using an armrest or handrail as a standing aid. The standing behavior clusters were distributed in a two-dimensional map according to seat type rather than physical or cognitive abilities. Therefore, to reduce the risk of falling, it would be necessary to provide a seat type that the elderly can unconsciously and naturally use as a standing aid, even with impaired physical and cognitive abilities.
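A minimal sketch of the kind of two-dimensional cluster map described, using PCA and k-means on randomly generated stand-in features (the study's actual features and clustering method are not specified in the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical sketch: project per-trial standing-behavior features onto
# a 2D map and cluster them. The 24 trials and their features (e.g. peak
# trunk-lean angle, hand-support usage, rise duration) are stand-ins.
rng = np.random.default_rng(1)
features = rng.normal(size=(24, 6))          # 24 trials, 6 assumed features

coords = PCA(n_components=2).fit_transform(features)   # 2D behavior map
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
for trial, (xy, k) in enumerate(zip(coords, labels)):
    print(f"trial {trial:2d}: map position {xy.round(2)}, cluster {k}")
```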
A Review of Embedded Machine Learning Based on Hardware, Application, and Sensing Scheme
Machine learning is an expanding field with an ever-increasing role in everyday life, and its utility in the industrial, agricultural, and medical sectors is undeniable. Recently, this utility has taken the form of machine learning implemented on embedded devices. While there have been steady advances in the performance, memory, and power consumption of embedded devices, most machine learning algorithms still have very high power consumption and computational demands, making embedded machine learning somewhat difficult to implement. However, different devices can be deployed for different applications based on their overall processing power and performance. This paper presents an overview of several implementations of machine learning on embedded systems, organized by device, application, machine learning algorithm, and sensors. We mainly focus on NVIDIA Jetson and Raspberry Pi devices, along with a few less commonly used embedded computers, and on which of these devices were most often used for specific applications in different fields. We also briefly analyze the ML models most commonly implemented on these devices and the sensors used to gather input from the field. All of the papers included in this review were selected using Google Scholar and the IEEE Xplore database. The selection criterion was the use of embedded computing systems in either a theoretical study or a practical implementation of machine learning models. The papers needed to provide one or, preferably, all of the following results: the overall accuracy of the models on the system, the overall power consumption of the embedded machine learning system, and the inference time of the models on the embedded system. Embedded machine learning is experiencing an explosion in both scale and scope, driven by advances in system performance and machine learning models as well as by the greater affordability and accessibility of both. Improvements are noted in quality, power usage, and effectiveness.
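Of the three metrics the reviewed papers were required to report, inference time is the easiest to sketch generically. A minimal timing loop, assuming an arbitrary callable model (power draw would need an external meter):

```python
import time
import statistics

# Hypothetical sketch: measure per-inference latency of any callable
# model on an embedded board. `model` and `sample` are stand-ins; power
# consumption would be logged separately with an external meter.
def benchmark(model, sample, warmup: int = 10, runs: int = 100) -> float:
    for _ in range(warmup):          # let caches and clocks settle
        model(sample)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        model(sample)
        times.append(time.perf_counter() - start)
    return statistics.median(times) * 1000.0  # ms per inference

dummy_model = lambda x: sum(v * v for v in x)   # stand-in workload
print(f"median latency: {benchmark(dummy_model, list(range(10_000))):.2f} ms")
```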
High Throughput Field Phenotyping for Plant Height Using UAV-Based RGB Imagery in Wheat Breeding Lines: Feasibility and Validation
Plant height (PH) is an essential trait in the screening of most crops. While in crops such as wheat a medium stature helps reduce lodging, tall plants are preferred to increase total above-ground biomass. PH is an easy trait to measure manually, although it can be labor-intensive depending on the number of plots. There is an increasing demand for alternative approaches that estimate PH in a higher-throughput mode. Crop surface models (CSMs) derived from dense point clouds generated via aerial imagery can be used to estimate PH. This study evaluates PH estimation at different phenological stages using plot-level information from aerial imaging-derived 3D CSMs in wheat inbred lines during two consecutive years. Multi-temporal, high-spatial-resolution images were collected by fixed-wing (PlatFW) and multi-rotor (PlatMR) unmanned aerial vehicle (UAV) platforms over two wheat populations (50 and 150 lines). PH was measured and compared at four growth stages (GS) using ground-truth measurements (PHground) and UAV-based estimates (PHaerial). The CSMs generated from the aerial imagery were validated using ground control points (GCPs) as fixed reference targets at different heights. The results show that PH estimates from PlatFW were consistent with those from PlatMR, with slight differences due to image-processing settings. The GCP heights derived from the CSMs showed a high correlation and low error compared with their actual heights (R² ≥ 0.90, RMSE ≤ 4 cm). The coefficient of determination (R²) between PHground and PHaerial at the different GS ranged from 0.35 to 0.88, and the root mean square error (RMSE) from 0.39 to 4.02 cm for both platforms. In general, similar or higher heritability was obtained using PHaerial across the different GS and years, ranging from 0.06 to 0.97 according to the variability and environmental error observed in PHground. Finally, we also observed high Spearman rank correlations (0.47–0.91) and R² values (0.63–0.95) between PHaerial adjusted and predicted values and PHground values. This study provides an example of the use of UAV-based high-resolution RGB imagery to obtain time-series estimates of PH, scalable to tens of thousands of plots and thus suitable for plant wheat breeding trials.
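A minimal sketch of plot-level PH extraction from a CSM, assuming a ground model is subtracted and a high percentile is taken per plot (a common convention; the paper's exact extraction settings are not given in the abstract):

```python
import numpy as np

# Hypothetical sketch: subtract a ground model from the crop surface
# model and take a high percentile inside each plot as the canopy height.
# The rasters and plot mask here are randomly generated stand-ins.
rng = np.random.default_rng(2)
csm = rng.uniform(100.0, 101.2, size=(200, 200))   # surface elevation (m)
dtm = np.full((200, 200), 100.0)                   # assumed ground model (m)
plot_mask = np.zeros((200, 200), dtype=bool)
plot_mask[50:90, 50:90] = True

heights = csm - dtm                                # per-pixel canopy height
ph_aerial = np.percentile(heights[plot_mask], 95)  # plot-level PH estimate
print(f"estimated plot PH: {ph_aerial * 100:.1f} cm")
```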
RGB-D face recognition using LBP with suitable feature dimension of depth image
This study proposes a robust method for face recognition from images acquired by low-resolution red, green, blue-depth (RGB-D) cameras, which exhibit a wide range of variations in head pose, illumination, facial expression, and, in some cases, occlusion. The local binary pattern (LBP) of the RGB-D images, with a suitable feature dimension for the depth image, is employed to extract facial features. On the basis of error-correcting output codes, the features are fed to multiclass support vector machines (MSVMs) for offline training and validation, and then online classification. The proposed method is called LBP-RGB-D-MSVM with a suitable feature dimension of the depth image. Its effectiveness is evaluated on four databases: the Indraprastha Institute of Information Technology, Delhi (IIIT-D) RGB-D database, the visual analysis of people (VAP) RGB-D-T database, EURECOM, and the authors' own database. In addition, an extended database merging the first three is used to compare the proposed method with some existing two-dimensional (2D) and 3D face recognition algorithms. The proposed method achieves satisfactory performance (as high as 99.10 ± 0.52% Rank-5 recognition rate on the authors' database) with low computation (62 ms for feature extraction), which is desirable for real-time applications.
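A minimal sketch of the basic 8-neighbour LBP operator underlying the features, applied to a stand-in grayscale or depth patch; the feature-dimension selection and MSVM stages are omitted:

```python
import numpy as np

# Sketch of the classic local binary pattern: compare each pixel's eight
# neighbours with the centre and pack the results into an 8-bit code,
# then histogram the codes into a feature vector.
def lbp_codes(img: np.ndarray) -> np.ndarray:
    c = img[1:-1, 1:-1]                      # centre pixels
    # Eight neighbour offsets around the centre, one bit each.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes += (neighbour >= c).astype(np.int32) * (1 << bit)
    return codes

# Stand-in 64x64 face (or depth) patch; real input would be a face crop.
face = np.random.default_rng(3).integers(0, 256, size=(64, 64), dtype=np.uint8)
hist, _ = np.histogram(lbp_codes(face), bins=256, range=(0, 256))
print(f"LBP feature vector length: {hist.size}")   # 256-bin histogram
```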
Real-Time Human Action Recognition with a Low-Cost RGB Camera and Mobile Robot Platform
Human action recognition is an important research area in computer vision, with applications in surveillance, assisted living, and robotic systems that interact with people. Although various approaches have been widely used, recent studies have mainly focused on deep-learning networks using the Kinect camera, which can easily generate skeleton-joint data from depth data, and have achieved satisfactory performance. However, these models are deep and complex in order to achieve higher recognition scores and therefore cannot be applied to a mobile robot platform using a Kinect camera. To overcome these limitations, we suggest a method to classify human actions in real time using a single RGB camera, which can also be applied to a mobile robot platform. We integrated two open-source libraries, OpenPose and 3D-baseline, to extract skeleton joints from RGB images, and classified the actions using convolutional neural networks. Finally, we set up a mobile robot platform, including an NVIDIA Jetson Xavier embedded board and a tracking algorithm, to monitor a person continuously. We achieved an accuracy of 70% on the NTU-RGBD training dataset, and the whole process ran at an average of 15 frames per second (FPS) on the embedded board.
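A minimal sketch of the pipeline shape described (skeleton coordinates stacked over time and fed to a small CNN), with assumed layer sizes, joint count, and window length rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: 2D skeletons from OpenPose (lifted to 3D via
# 3D-baseline) are stacked over a time window and classified by a small
# CNN. 25 joints, a 32-frame window, and all layer sizes are assumptions.
class SkeletonCNN(nn.Module):
    def __init__(self, num_actions: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 = x, y, z coords
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (batch, num_actions) action logits

# One clip: coordinate channels x frames x joints -> action scores.
clip = torch.randn(1, 3, 32, 25)   # batch, xyz, 32 frames, 25 joints
print(SkeletonCNN()(clip).shape)   # torch.Size([1, 10])
```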