109 result(s) for "viewpoint selection"
Active Perception Fruit Harvesting Robots — A Systematic Review
This paper surveys the state of the art in active perception solutions for manipulation in agriculture and suggests a possible architecture for an active perception system for agricultural harvesting. Researching and developing robots for agricultural contexts is challenging, particularly for harvesting and pruning applications. These applications normally involve mobile manipulators, whose cognitive components face many challenges. Active perception systems appear to be a reasonable approach for assessing fruit robustly and economically. This systematic literature review focuses on active perception for fruit-harvesting robots. The search was performed in five different databases and yielded 1034 publications, of which only 195 were considered for inclusion in this review after analysis. We conclude that most research concerns fruit detection and segmentation in two-dimensional space, using both classic computer vision strategies and deep learning models. For harvesting, multiple-viewpoint and visual-servoing strategies are the most commonly used. Research on these last topics does not yet look robust, and it requires further analysis and improvement for better results in fruit harvesting.
Automatic Viewpoint Selection for Teleoperation Assistance in Unmanned Environments Using Rail-Mounted Observation Robots
In irradiated environments that are inaccessible to human workers, operations are often conducted via teleoperation. Consequently, robot operators must maintain continuous situational awareness of a previously unknown working environment. Visual information regarding task targets and robot manipulators is of utmost importance. The proposed method employs rail-mounted observation robots to position easily replaceable cameras capable of long-term deployment in such environments. To reduce the cognitive load on teleoperators, the automatic viewpoint selection system eliminates the need for direct control of observation robots. This research presents a method for using a single rail-mounted observation robot to gather information on an unknown environment and automatically determine an optimal viewpoint. A key contribution of this study is the viewpoint presentation system, which can adapt to occlusions caused by robots and adjust its positions accordingly. The proposed method was validated through computer simulation using a hybrid model consisting of a static environment and a dynamic robot arm, which moves within the environment and may obstruct views. Furthermore, the feasibility of the approach was demonstrated in a real-world experiment involving a robot arm performing a teleoperation task.
Viewpoint Selection for 3D Scenes in Map Narratives
Narrative mapping, an advanced geographic information visualization technology, presents spatial information episodically, enhancing readers’ spatial understanding and event cognition. However, during 3D scene construction, viewpoint selection is heavily reliant on the cartographer’s subjective interpretation of the event. Even with fixed-angle settings, the task of ensuring that selected viewpoints align with the narrative theme remains challenging. To address this, an automated viewpoint selection method constrained by narrative relevance and visual information is proposed. Narrative relevance is determined by calculating spatial distances between each element and the thematic element within the scene. Visual information is quantified by assessing the visual salience of elements as the ratio of their projected area on the view window to their total area. Pearson’s correlation coefficient is used to evaluate the relationship between visual salience and narrative relevance, serving as a constraint to construct a viewpoint fitness function that integrates the visual salience of the convex polyhedron enclosing the scene. The chaotic particle swarm optimization (CPSO) algorithm is utilized to locate the viewpoint position while maximizing the fitness function, identifying a viewpoint meeting narrative and visual salience requirements. Experimental results indicate that, compared to the maximum projected area method and fixed-value method, a higher viewpoint fitness is achieved by this approach. The narrative views generated by this method were positively recognized by approximately two-thirds of invited professionals. This process aligns effectively with narrative visualization needs, enhances 3D narrative map creation efficiency, and offers a robust strategy for viewpoint selection in 3D scene-based narrative mapping.
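The salience and fitness quantities described in this abstract can be sketched in a few lines. Note that the inverse-distance relevance and the salience-weighted combination below are illustrative assumptions, not the paper's exact formulation; only the projected-to-total-area salience ratio and the use of Pearson's correlation come from the abstract itself:

```python
import numpy as np

def visual_salience(projected_area, total_area):
    """Salience of a scene element: ratio of its projected area on the
    view window to its total area (as stated in the abstract)."""
    return projected_area / total_area

def narrative_relevance(element_pos, theme_pos):
    """Relevance via spatial distance to the thematic element.
    The inverse-distance form here is an assumption for illustration."""
    d = np.linalg.norm(np.asarray(element_pos, float) - np.asarray(theme_pos, float))
    return 1.0 / (1.0 + d)

def viewpoint_fitness(saliences, relevances):
    """Illustrative fitness: Pearson correlation between salience and
    relevance (the abstract's constraint) weighted by mean salience."""
    r = np.corrcoef(saliences, relevances)[0, 1]
    return r * float(np.mean(saliences))
```

In the paper, a fitness function of this kind is maximized over candidate viewpoint positions with chaotic particle swarm optimization; the optimizer itself is omitted here.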
Automatic Inspection of Aeronautical Mechanical Assemblies by Matching the 3D CAD Model and Real 2D Images
In the aviation industry, automated inspection is essential for ensuring production quality, as it accelerates quality-control procedures for parts and mechanical assemblies. As a result, demand for intelligent visual inspection systems that ensure high quality on production lines is increasing. In this work, we address a very common problem in quality control: verifying that the correct part is present and correctly positioned. We address the problem in two parts: first, the automatic selection of informative viewpoints before the inspection process starts (offline preparation of the inspection); second, the automatic processing of the images acquired from those viewpoints by matching them against information in 3D CAD models. We apply this inspection system to detect defects in aeronautical mechanical assemblies, checking whether all subparts are present and correctly mounted. The system can be used during manufacturing or maintenance operations. Its accuracy is evaluated on two kinds of platform: an autonomous navigation robot and a handheld tablet. The experimental results show that our proposed approach is accurate and promising for industrial applications, with the possibility of real-time inspection.
Viewpoint Selection for 3D-Games with f-Divergences
In this paper, we present a novel approach to optimal camera selection in video games. The approach explores the use of information-theoretic metrics, the f-divergences, to measure the correlation between the objects as viewed in the camera frustum and the ideal or target view. The f-divergences considered are the Kullback–Leibler divergence (relative entropy), the total variation, and the χ2 divergence. Shannon entropy is also used for comparison purposes. Visibility is measured using the differential form factors from the camera to the objects and is computed by casting rays with importance-sampling Monte Carlo. Our method allows a very fast dynamic selection of the best viewpoints, which can take into account changes in the scene, in the ideal or target view, and in the objectives of the game. Our prototype is implemented in the Unity engine, and our results show an efficient selection of the camera and an improved visual quality. The most discriminating results are obtained with the Kullback–Leibler divergence.
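The core scoring idea can be sketched as follows: each candidate camera yields a discrete per-object visibility distribution, which is compared against the target distribution with a divergence. This is a minimal sketch assuming precomputed visibility vectors; the paper obtains them by Monte Carlo ray casting against form factors, which is not reproduced here:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, smoothed with eps to avoid log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def best_viewpoint(visibilities, target):
    """Pick the camera whose per-object visibility distribution is
    closest (lowest KL divergence) to the ideal/target view.
    `visibilities` is a list of per-camera visibility vectors
    (a hypothetical input shape for this sketch)."""
    scores = [kl_divergence(v, target) for v in visibilities]
    return int(np.argmin(scores))
```

Swapping `kl_divergence` for total variation or the χ2 divergence changes only the scoring function, which is what makes the f-divergence family convenient for comparison.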
Dynamic Viewpoint Selection for Sweet Pepper Maturity Classification Using Online Economic Decisions
This paper presents a rule-based methodology for dynamic viewpoint selection for maturity classification of red and yellow sweet peppers. The method makes an online decision to capture an additional next-best viewpoint based on an economic analysis that considers potential misclassification and robot operational costs. The next-best viewpoint is selected based on color variations on the pepper. Peppers were classified as mature or immature using a random forest classifier based on principal components of various color features derived from an RGB-D camera. The method first attempts to classify maturity from a single viewpoint; an additional viewpoint is acquired and added to the point cloud only when it is deemed profitable. The methodology was evaluated using leave-one-out cross-validation on datasets of 69 red and 70 yellow sweet peppers from three different maturity stages. Dynamic viewpoint selection increased classification accuracy by 6% and 5% and decreased economic costs by 52% and 12% for red and yellow peppers, respectively, compared to using a single viewpoint. Sensitivity analyses were performed for misclassification and robot operational costs.
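The "capture only when profitable" decision can be illustrated with a simple expected-cost comparison. The cost model below (expected saving from reduced misclassification probability versus the robot's cost of capturing another view) is a plausible reading of the abstract, not the paper's exact economic analysis; all names and numbers are illustrative:

```python
def should_capture_next_view(p_err_now, p_err_after, c_misclass, c_capture):
    """Online economic rule: acquire an extra viewpoint only when the
    expected saving from fewer misclassifications exceeds the robot's
    operational cost of capturing it.

    p_err_now   -- estimated misclassification probability from current views
    p_err_after -- estimated probability after the extra view (assumed known)
    c_misclass  -- cost of one misclassification
    c_capture   -- robot operational cost of one extra capture
    """
    expected_saving = (p_err_now - p_err_after) * c_misclass
    return expected_saving > c_capture

# Illustrative call: a large expected error reduction justifies the capture.
should_capture_next_view(0.3, 0.1, 10.0, 1.0)
```

Sensitivity analysis as described in the abstract amounts to sweeping `c_misclass` and `c_capture` and observing how often the rule fires.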
A method of generating depth images for view-based shape retrieval of 3D CAD models from partial point clouds
Laser scanners can easily acquire the geometric data of physical environments in the form of point clouds. Industrial 3D reconstruction processes generally recognize objects from point clouds, which should include both geometric and semantic data. However, the recognition process is often a bottleneck in 3D reconstruction because it is labor-intensive and requires domain expertise. To address this problem, various methods have been developed to recognize objects by retrieving their corresponding models from a database via geometric input queries. In recent years, converting geometric data to images for view-based 3D shape retrieval has demonstrated high accuracy. Depth images, which encode depth values as pixel intensities, are frequently used for view-based 3D shape retrieval. However, geometric data collected from objects are often incomplete owing to occlusions and line-of-sight limitations, and images generated from occluded point clouds lower view-based 3D object retrieval performance owing to the loss of information. In this paper, we propose a viewpoint and image-resolution estimation method for view-based 3D shape retrieval from point cloud queries. The viewpoint and image resolution are selected automatically using data-acquisition-rate and density calculations over sampled viewpoints and image resolutions. The retrieval performance for images generated by the proposed method is investigated experimentally and compared across various datasets. Additionally, view-based 3D shape retrieval performance with a deep convolutional neural network was investigated using the proposed method.
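The depth-image representation the abstract relies on (depth values encoded as pixel intensities) can be sketched in a few lines. The 8-bit quantization and the nearest-is-brightest convention below are assumptions for illustration; the paper's exact encoding is not specified in the abstract:

```python
import numpy as np

def depth_to_image(depth, d_min=None, d_max=None):
    """Encode a depth map as an 8-bit grayscale image for view-based
    retrieval. Nearer points map to brighter pixels (an assumed
    convention); d_min/d_max default to the map's own range."""
    depth = np.asarray(depth, float)
    d_min = depth.min() if d_min is None else d_min
    d_max = depth.max() if d_max is None else d_max
    norm = (depth - d_min) / max(d_max - d_min, 1e-9)  # 0 = near, 1 = far
    return (255 * (1.0 - norm)).astype(np.uint8)
```

With occluded point clouds, pixels with no returned points would need a sentinel value (e.g. 0), which is exactly the information loss the paper's viewpoint and resolution selection tries to minimize.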
A Survey of Viewpoint Selection Methods for Polygonal Models
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now getting maturity with applications in fields such as scene navigation, scientific visualization, object recognition, mesh simplification, and camera placement. In this survey, we review and compare twenty-two measures to select good views of a polygonal 3D model, classify them using an extension of the categories defined by Secord et al., and evaluate them against the Dutagaci et al. benchmark. Eleven of these measures have not been reviewed in previous surveys. Three out of the five short-listed best viewpoint measures are directly related to information. We also present in which fields the different viewpoint measures have been applied. Finally, we provide a publicly available framework where all the viewpoint selection measures are implemented and can be compared against each other.
Automatic Representative View Selection of a 3D Cultural Relic Using Depth Variation Entropy and Depth Distribution Entropy
Automatically selecting a set of representative views of a 3D virtual cultural relic is crucial for constructing wisdom museums. There is no consensus in computer graphics on the definition of a good view, and the same is true of multiple views. View-based methods play an important role in 3D shape retrieval and classification, but it remains difficult to select views that both conform to subjective human preferences and provide a good feature description. In this study, we define two novel measures based on information entropy, named depth variation entropy and depth distribution entropy. These measures quantify the amount of information in the depth swings and the distinct depth values of each view. First, a canonical pose of the 3D cultural relic was generated using principal component analysis. A set of depth maps was then captured by orthographic cameras placed on the dense vertices of a geodesic unit sphere obtained by subdividing the regular unit octahedron. The two measures were then computed separately on the depth maps from these vertices, with the results on each one-eighth sphere forming a group. The views with maximum depth variation entropy and depth distribution entropy were selected, and further scattered viewpoints were added. Finally, a threshold word histogram derived from the vector quantization of salient local descriptors on the selected depth maps represented the 3D cultural relic. The viewpoints obtained by the proposed method apply to an arbitrary pose of the 3D model, which eliminates the step of manually adjusting the model's pose and provides acceptable display views. In addition, it was verified on several datasets that the proposed method, using the Bag-of-Words mechanism and a deep convolutional neural network, also performs well in retrieval and classification when using only four views.
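One plausible reading of "depth distribution entropy" is the Shannon entropy of a view's depth histogram: views whose depth values spread across many distinct levels score higher. This is a sketch under that assumption; the bin count and the histogram form are illustrative, since the abstract does not give the exact formula:

```python
import numpy as np

def depth_distribution_entropy(depth_map, bins=32):
    """Shannon entropy (bits) of the histogram of depth values in one
    rendered view. Bin count is an assumed parameter; a flat (constant)
    view scores 0, a view with widely spread depths scores high."""
    hist, _ = np.histogram(depth_map, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))
```

The companion measure, depth variation entropy, would instead operate on local depth changes (swings) rather than the raw value distribution; the selection step then keeps the maximum-entropy view within each one-eighth-sphere group.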
Optimizing Viewpoint Selection for Route-Based Experiences: Assessing the Role of Viewpoints on Viewshed Accuracy
A visual analysis is useful to assess potential impacts to our surroundings. There has been tremendous progress toward the optimization, accuracy, and techniques of these analyses. Viewshed analyses are a common type of visual analysis. The purpose of this study was to identify the optimal trade-off between the number of viewpoints needed to generate an accurate viewshed for a given route. In this study, we focused on identifying how a viewshed differs based on the sampling distance (interval) of viewpoints, topography, and distance of analysis. We employed the Geospatial Route Analysis and Visual Impact Assessment (GRAVIA) tool, a type of advanced viewshed that uses visual-magnitude measures. GRAVIA was applied across three different topographical environments (flat, hilly, and mountainous). We generated a one-mile-long segment for each environment and systematically discretized the route by varying the sampling-distance intervals from 1 m to 100 m. We also compared how the calculated results differed by distance from the route. The results showed a linear decrease in the correlation, though this was sensitive to the distance. When all distances were combined, a 30 m and 50 m sampling distance correlated to 0.9 and 0.7, respectively. However, when the results compared calculations beyond 300 m away from the route, the correlation values exceeded 97% for all the viewpoint-sampling distances. This suggests that for route-based analyses using visual magnitude, reducing the sampling rate can produce equivalent results with far less processing time while maintaining model precision.