8,042 result(s) for "point clouds"
Deep Learning on Point Clouds and Its Application: A Survey
Point clouds are a widely used 3D data format that can be produced by depth sensors such as Light Detection and Ranging (LiDAR) and RGB-D cameras. Because point clouds are unordered and irregular, many researchers have focused on feature engineering for them. Deep learning, which can learn complex hierarchical structures, has achieved great success with camera images, and recently many researchers have adapted it to point cloud applications. In this paper, existing point cloud feature learning methods are classified as point-based or tree-based. The former directly take the raw point cloud as the input for deep learning; the latter first employ a k-dimensional tree (Kd-tree) to give the point cloud a regular representation and then feed that representation into deep learning models. The advantages and disadvantages of both are analyzed. Applications of point cloud feature learning, including 3D object classification, semantic segmentation, and 3D object detection, are introduced, and the relevant datasets and evaluation metrics are collected. Finally, future research trends are predicted.
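The tree-based route this survey describes can be sketched in a few lines of NumPy. The following recursive build (function name and leaf size are illustrative, not from the paper) partitions an unordered cloud into the kind of regular hierarchy that tree-based models consume:

```python
import numpy as np

def build_kdtree(points, depth=0, leaf=8):
    """Recursively split the cloud along alternating axes until each
    leaf holds at most `leaf` points -- a regular representation of
    an otherwise unordered point set."""
    if len(points) <= leaf:
        return {"leaf": points}
    axis = depth % points.shape[1]            # cycle through x, y, z
    order = points[:, axis].argsort()
    mid = len(points) // 2
    return {
        "axis": axis,
        "split": float(points[order[mid], axis]),
        "left": build_kdtree(points[order[:mid]], depth + 1, leaf),
        "right": build_kdtree(points[order[mid:]], depth + 1, leaf),
    }

rng = np.random.default_rng(0)
tree = build_kdtree(rng.random((100, 3)))
```

A production pipeline would use a tuned library implementation (e.g. SciPy's `cKDTree`); this sketch only shows the partitioning idea.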
Feature Analysis of Scanning Point Cloud of Structure and Research on Hole Repair Technology Considering Space-Ground Multi-Source 3D Data Acquisition
As one of the best means of obtaining the geometric information of special-shaped structures, point cloud data acquisition can be achieved by laser scanning or photogrammetry. However, point clouds of the same structure collected by different methods differ in quantity, quality, and information type, owing to differences in sensor mechanisms and collection paths. This study therefore aimed to combine the complementary advantages of multi-source point cloud data and provide the high-quality basic data required for structure measurement and modeling. Specifically, hand-held laser scanners (HLS), terrestrial laser scanners (TLS), and low-altitude unmanned aerial system (UAS) photogrammetry were used to collect point cloud data of the same special-shaped structure along different paths. The advantages and disadvantages of the different acquisition methods were analyzed from the perspectives of the sensors' point cloud acquisition mechanisms, point cloud data integrity, and the single-point geometric characteristics of the point clouds. A point cloud void repair technology based on the TLS point cloud was then proposed according to the analysis results. After unifying the spatial position relationships of the three point clouds, the M3C2 distance algorithm was used to extract, from the same area of the structure, the point clouds with significant spatial position differences among the three clouds. The single-point geometric feature differences of the multi-source point clouds within the same neighborhood radius were also calculated. Using the kernel density distribution of these feature differences, feature points filtered from the HLS and TLS point clouds were fused to enrich the number of feature points in the TLS point cloud. In addition, TLS point cloud voids were located by raster projection, and the point clouds within each void range were extracted, or the closest points were retrieved from the other two heterologous point clouds, to repair the top-surface and façade voids of the TLS point cloud. Finally, high-quality basic point cloud data of the special-shaped structure were generated.
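The M3C2 comparison used in this abstract amounts to measuring displacement between two clouds along a locally estimated surface normal. A heavily simplified single-core-point sketch (function name, radius, and normal estimation via PCA are illustrative assumptions, not the full M3C2 algorithm):

```python
import numpy as np

def m3c2_like_distance(core, cloud_a, cloud_b, radius=0.5):
    """Simplified M3C2-style distance at one core point: estimate the
    local normal from cloud_a's neighbourhood, then compare the mean
    positions of the two clouds' neighbourhoods along that normal."""
    na = cloud_a[np.linalg.norm(cloud_a - core, axis=1) < radius]
    nb = cloud_b[np.linalg.norm(cloud_b - core, axis=1) < radius]
    # normal = eigenvector of the smallest covariance eigenvalue (PCA)
    w, v = np.linalg.eigh(np.cov((na - na.mean(0)).T))
    normal = v[:, 0]
    # signed displacement of cloud_b relative to cloud_a along the normal
    return float((nb.mean(0) - na.mean(0)) @ normal)
```

The real M3C2 algorithm additionally uses projection cylinders and per-point uncertainty; this sketch only conveys the along-normal distance idea.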
Research on a Point Cloud Registration Method of Mobile Laser Scanning and Terrestrial Laser Scanning
Mobile laser scanning can quickly and dynamically obtain a wide range of urban-scene point clouds. However, due to factors such as occlusion and field-of-view limitations, it needs to be supplemented by terrestrial laser scanning. The acquisition methods and data quality of mobile and terrestrial point clouds differ considerably, urban-scene targets are complex and diverse, and corresponding features are difficult to extract, all of which makes point cloud fusion difficult. To this end, a registration method for mobile and terrestrial laser scans based on the target features of artificial ground objects is proposed. First, the data features of mobile and terrestrial laser scanning point clouds are analyzed, and the point clouds are thinned to a uniform density. Then, artificial ground objects are extracted as registration primitives to reduce scene complexity, and the features of urban scenes, together with the eigenvalue and principal-curvature attributes of the point clouds, are analyzed. Combined with an octree voxel index, a multi-scale key point extraction method is constructed to extract multi-scale key points from the registration primitives. Finally, key point constraints are used to remedy the deficiencies of the 4PCS (4-Points Congruent Sets) and ICP (Iterative Closest Point) algorithms and complete the registration of mobile and terrestrial point clouds in different road scenes. Experiments show that the point cloud registration accuracy can reach 2.6 cm, providing a feasible method for high-precision fusion of multi-platform laser point clouds.
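The ICP algorithm this abstract builds on is worth a minimal sketch: each iteration matches every source point to its nearest destination point and then solves the best rigid transform in closed form (Kabsch/SVD). This is a generic textbook ICP, not the paper's key-point-constrained variant, and the function names are illustrative:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: brute-force nearest-neighbour matching,
    then the optimal rigid transform (Kabsch) onto the matches."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    cs, cm = src.mean(0), matched.mean(0)
    H = (src - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cm - R @ cs
    return R, t

def icp(src, dst, iters=10):
    """Iteratively apply icp_step until src is aligned to dst."""
    for _ in range(iters):
        R, t = icp_step(src, dst)
        src = src @ R.T + t
    return src
```

Plain ICP needs a reasonable initial alignment to converge, which is exactly why coarse methods such as 4PCS are run first.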
Low Overlapping Point Cloud Registration Using Line Features Detection
Modern robotic exploration strategies assume multi-agent cooperation, which raises the need for an effective exchange of acquired scans of the environment in the absence of a reliable global positioning system. In such situations, agents compare their scans of the outside world to determine whether they overlap in some region and, if they do, determine the right matching between them. The process of matching multiple point cloud scans is called point cloud registration. With existing approaches, a good match between any two point clouds is achieved only if there is a large overlap between them; however, this limits the advantage of using multiple robots, for instance for time-effective 3D mapping. Hence, a point cloud registration approach that can work with low-overlap scans is highly desirable. This work proposes a novel solution for the point cloud registration problem with a very low overlapping area between the two scans, assuming no initial relative positions of the point clouds. Most state-of-the-art approaches iteratively match keypoints in the scans, which is computationally expensive. In contrast, a more efficient line-features-based point cloud registration approach is proposed in this work. Besides reducing the computational cost, this approach avoids the high false-positive rate of existing keypoint detection algorithms, which becomes especially significant in low-overlap point cloud registration. The effectiveness of the proposed approach is demonstrated experimentally.
Three-dimensional reconstruction and phenotype measurement of maize seedlings based on multi-view image sequences
As an important method for crop phenotype quantification, three-dimensional (3D) reconstruction is of critical importance for exploring the phenotypic characteristics of crops. In this study, maize seedlings were subjected to imaging-based 3D reconstruction, and their phenotypic characters were analyzed. In the first stage, a multi-view image sequence was acquired via an RGB camera and a video frame extraction method, followed by 3D reconstruction of the maize based on the structure-from-motion algorithm. The original point cloud data were then preprocessed with a Euclidean clustering algorithm, a color filtering algorithm, and a point cloud voxel filtering algorithm to obtain a point cloud model of the maize. In the second stage, the phenotypic parameters of developing maize seedlings were analyzed: plant height, leaf length, relative leaf area, and leaf width measured from the point cloud were compared with the corresponding manually measured values, and the two were highly correlated, with coefficients of determination (R²) of 0.991, 0.989, 0.926, and 0.963, respectively. The errors between the two were also analyzed, and the results showed that the proposed method was capable of rapid, accurate, and nondestructive extraction. In the third stage, maize stems and leaves were segmented and identified with the region growing segmentation algorithm, achieving the expected segmentation effect. In general, the proposed method can accurately reconstruct the 3D morphology of maize plants, segment maize leaves, and nondestructively and accurately extract the phenotypic parameters of maize plants, thus providing data support for research on maize phenotypes.
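Of the preprocessing steps listed above, voxel filtering is the most mechanical: bucket points into cubic cells and keep one centroid per occupied cell. A minimal NumPy sketch (function name and default cell size are illustrative, not from the paper):

```python
import numpy as np

def voxel_filter(points, voxel=0.05):
    """Voxel-grid downsampling: assign each point to a cubic cell of
    side `voxel` and replace each occupied cell by its centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    inv = inv.reshape(-1)             # keep 1-D across NumPy versions
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inv, points)      # accumulate points per cell
    return sums / counts[:, None]
```

This reduces density uniformly while preserving the overall shape, which is what makes downstream measurement and segmentation tractable.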
Systematic and Comprehensive Review of Clustering and Multi-Target Tracking Techniques for LiDAR Point Clouds in Autonomous Driving Applications
Autonomous vehicles (AVs) rely on advanced sensory systems, such as Light Detection and Ranging (LiDAR), to function seamlessly in intricate and dynamic environments. LiDAR produces highly accurate 3D point clouds, which are vital for the detection, classification, and tracking of multiple targets. A systematic review and classification of clustering and Multi-Target Tracking (MTT) techniques is necessary due to the inherent challenges posed by LiDAR data, such as density, noise, and varying sampling rates. In this study, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was employed to examine the challenges and advancements in clustering and MTT techniques for LiDAR point clouds in the context of autonomous driving. Searches were conducted in major databases, including IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, and Google Scholar, using customized search strategies. We identified and critically reviewed 76 relevant studies through rigorous screening and evaluation, assessing their methodological quality, data handling adequacy, and reporting compliance. This comprehensive review and classification provides a detailed overview of the current challenges, research gaps, and advancements in clustering and MTT techniques for LiDAR point clouds, thus contributing to the field of autonomous driving. Researchers and practitioners working in autonomous driving will benefit from this study, which was conducted with systematic transparency and reproducibility.
Contrastive Learning for 3D Point Clouds Classification and Shape Completion
In this paper, we present the idea of self-supervised learning for the shape completion and classification of point clouds. Most 3D shape completion pipelines utilize autoencoders to extract features from point clouds for downstream tasks such as classification, segmentation, detection, and other related applications. Our idea is to add contrastive learning to autoencoders to encourage global feature learning of the point cloud classes, performed by optimizing a triplet loss. Furthermore, local feature representation learning of the point cloud is performed by adding the Chamfer distance function. To evaluate the performance of our approach, we utilize the PointNet classifier. We also extend the number of evaluation classes from 4 to 10 to show the generalization ability of the learned features. Based on our results, embeddings generated by the contrastive autoencoder enhance shape completion and improve point cloud classification performance from 84.2% to 84.9%, achieving state-of-the-art results with 10 classes.
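The local reconstruction objective named here, the Chamfer distance, is compact enough to state directly. A NumPy sketch of the common squared-distance variant (the paper may use a different normalization):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3):
    mean squared distance from each point to its nearest neighbour in
    the other set, summed over both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(1).mean() + d2.min(0).mean()
```

Because it needs no point-to-point correspondence, the Chamfer distance is the standard reconstruction loss for unordered point sets.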
A Registration Method Based on Ordered Point Clouds for Key Components of Trains
Point cloud registration is pivotal across various applications, yet traditional methods rely on unordered point clouds, leading to significant challenges in terms of computational complexity and feature richness. These methods often use k-nearest neighbors (KNN) or neighborhood ball queries to access local neighborhood information, which is not only computationally intensive but also confines the analysis within the object’s boundary, making it difficult to determine if points are precisely on the boundary using local features alone. This indicates a lack of sufficient local feature richness. In this paper, we propose a novel registration strategy utilizing ordered point clouds, which are now obtainable through advanced depth cameras, 3D sensors, and structured light-based 3D reconstruction. Our approach eliminates the need for computationally expensive KNN queries by leveraging the inherent ordering of points, significantly reducing processing time; extracts local features by utilizing 2D coordinates, providing richer features compared to traditional methods, which are constrained by object boundaries; compares feature similarity between two point clouds without keypoint extraction, enhancing efficiency and accuracy; and integrates image feature-matching techniques, leveraging the coordinate correspondence between 2D images and 3D-ordered point clouds. Experiments on both synthetic and real-world datasets, including indoor and industrial environments, demonstrate that our algorithm achieves an optimal balance between registration accuracy and efficiency, with registration times consistently under one second.
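The core saving described above, replacing KNN queries with direct indexing into an ordered cloud, can be illustrated in a few lines (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def grid_neighborhood(cloud, r, c, k=1):
    """For an ordered (H, W, 3) point cloud, the k-ring neighbourhood
    of pixel (r, c) is a constant-time array slice -- no KNN query."""
    h, w, _ = cloud.shape
    return cloud[max(r - k, 0):min(r + k + 1, h),
                 max(c - k, 0):min(c + k + 1, w)].reshape(-1, 3)
```

An unordered cloud would need a spatial index or an O(N) scan to answer the same query; the 2D lattice also tells us directly when a point sits on the image border.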
Research on the Method for Recognizing Bulk Grain-Loading Status Based on LiDAR
Grain is a common bulk cargo. To ensure optimal utilization of transportation space and prevent overflow accidents, it is necessary to observe the grain's shape and determine the loading status during the loading process. Traditional methods often rely on manual judgment, which results in high labor intensity, poor safety, and low loading efficiency. Therefore, this paper proposes a method for recognizing the bulk grain-loading status based on Light Detection and Ranging (LiDAR). The method uses LiDAR to obtain point cloud data and constructs a deep learning network to perform target recognition and component segmentation on loading vehicles, extract vehicle positions and grain shapes, and recognize and report the bulk grain-loading status. On measured point cloud data of bulk grain loading, the overall accuracy in the point cloud classification task is 97.9% and the mean accuracy is 98.1%; in the vehicle component segmentation task, the overall accuracy is 99.1% and the mean Intersection over Union is 96.6%. The results indicate that the method performs reliably in extracting vehicle positions, detecting grain shapes, and recognizing the loading status.
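The segmentation metric reported here, mean Intersection over Union, has a short standard definition; a NumPy sketch (the paper's exact class handling is an assumption):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean Intersection over Union for per-point labels:
    average, over classes, of |pred ∩ gt| / |pred ∪ gt|."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:                     # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```

Unlike overall accuracy, mIoU weights every class equally, so it is not dominated by large components such as the truck bed.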
Three-Dimensional Point Cloud Applications, Datasets, and Compression Methodologies for Remote Sensing: A Meta-Survey
This meta-survey provides a comprehensive review of 3D point cloud (PC) applications in remote sensing (RS), essential datasets available for research and development purposes, and state-of-the-art point cloud compression methods. It explores the diverse applications of point clouds in remote sensing, including specialized tasks within the field, precision-agriculture-focused applications, and broader general uses. Furthermore, datasets commonly used in remote-sensing-related research and development are surveyed, including urban, outdoor, and indoor environment datasets; vehicle-related datasets; object datasets; agriculture-related datasets; and other more specialized datasets. Owing to their importance in practical applications, this article also surveys point cloud compression technologies, from widely used tree- and projection-based methods to more recent deep learning (DL)-based technologies. This study synthesizes insights from previous reviews and original research to identify emerging trends, challenges, and opportunities, serving as a valuable resource for advancing the use of point clouds in remote sensing.