193 result(s) for "Remondino, F."
A REVIEW OF POINT CLOUDS SEGMENTATION AND CLASSIFICATION ALGORITHMS
Today 3D models and point clouds are very popular, being currently used in several fields, shared through the internet and even accessed on mobile phones. Despite their broad availability, there is still a pressing need for methods, preferably automatic, to enrich 3D data with meaningful attributes that characterize and give significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties, whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms for segmenting and classifying 3D point clouds. Strong and weak points of the different solutions presented in the literature or implemented in commercial software will be listed and briefly explained. For some algorithms, the results of the segmentation and classification are shown using real examples at different scales in the Cultural Heritage field. Finally, open issues and research topics will be discussed.
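The segmentation step described in this abstract, grouping points into homogeneous regions with similar properties, can be illustrated with a minimal Euclidean region-growing sketch. This is a generic textbook formulation, not an algorithm from the paper; the distance threshold and the synthetic data are assumptions for illustration only:

```python
import numpy as np

def region_grow(points, radius=0.5):
    """Group 3D points into regions: a point joins a region when it lies
    within `radius` of any point already in that region (flood fill)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            # unlabelled neighbours within `radius` of point i
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated synthetic clusters -> two regions
pts = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5]])
print(region_grow(pts))  # -> [0 0 1 1]
```

Real pipelines replace the plain distance test with richer homogeneity criteria (normals, curvature, colour), but the grow-from-seed control flow stays the same.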
A CRITICAL REVIEW OF AUTOMATED PHOTOGRAMMETRIC PROCESSING OF LARGE DATASETS
The paper reports comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each one featuring a different number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of (photogrammetric) terms is also provided, in order to establish rigorous terms of reference for comparisons and critical analyses.
IMAGE ORIENTATION WITH A HYBRID PIPELINE ROBUST TO ROTATIONS AND WIDE-BASELINES
The extraction of reliable and repeatable interest points among images is a fundamental step for automatic image orientation (Structure from Motion). Despite recent progress, open issues remain in challenging conditions, such as wide baselines and strong light variations. Over the years, traditional hand-crafted methods have been paired with learning-based approaches, progressively updating the state of the art according to recent benchmarks. Notwithstanding these advancements, learning-based methods are often not suitable for real photogrammetric surveys due to their lack of rotation invariance, a fundamental requirement for these specific applications. This paper proposes a novel hybrid image matching pipeline which employs both hand-crafted and deep-based components to extract reliable rotation-invariant keypoints optimized for wide-baseline scenarios. The proposed hybrid pipeline was compared with other hand-crafted and learning-based state-of-the-art approaches on several photogrammetric datasets using metric ground-truth data. Results show that the proposed hybrid matching pipeline achieves high accuracy and was the only method among those evaluated able to register images in the most challenging wide-baseline scenarios.
APPLICATION OF MACHINE AND DEEP LEARNING STRATEGIES FOR THE CLASSIFICATION OF HERITAGE POINT CLOUDS
The use of heritage point clouds for documentation and dissemination purposes is nowadays increasing. The association of semantic information to 3D data by means of automated classification methods can help characterize, describe and better interpret the object under study. In the last decades, machine learning methods have brought significant progress to classification procedures. However, the topic of cultural heritage has not been fully explored yet. This paper presents research on the classification of heritage point clouds using different supervised learning approaches (machine and deep learning). The classification is aimed at automatically recognizing architectural components such as columns, facades or windows in large datasets. For each case study and employed classification method, different accuracy metrics are calculated and compared.
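The supervised classification this abstract describes, assigning labels such as "column" or "facade" to data described by feature vectors, can be sketched with the simplest possible supervised model, a nearest-centroid classifier. The 2D features, class names and values below are synthetic placeholders, not the paper's actual features or classes:

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector per class (nearest-centroid model)."""
    classes = np.unique(labels)
    return classes, np.array([features[labels == c].mean(axis=0) for c in classes])

def predict(features, classes, centroids):
    """Assign each sample to the class whose centroid is nearest."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic 2D features for two hypothetical classes: 0 = "column", 1 = "facade"
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([0, 0, 1, 1])
classes, cents = fit_centroids(X, y)
print(predict(np.array([[0.85, 0.15]]), classes, cents))  # -> [0]
```

The machine and deep learning methods evaluated in the paper (e.g. random forests or neural networks) learn far more flexible decision boundaries, but the train-on-labelled-features / predict-on-new-points workflow is the same.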
VIDEOGRAMMETRY VS PHOTOGRAMMETRY FOR HERITAGE 3D RECONSTRUCTION
In recent years we have witnessed an increasing quality (and quantity) of video streams and a growing capability of SLAM-based methods to derive 3D data from video. Video sequences can be easily acquired by non-expert surveyors and possibly used for 3D documentation purposes. The aim of the paper is to evaluate the possibility of performing 3D reconstructions of heritage scenarios using videos ("videogrammetry"), e.g. acquired with smartphones. Video frames are extracted from the sequence using a fixed-time interval and two advanced methods. Frames are then processed applying automated image orientation / Structure from Motion (SfM) and dense image matching / Multi-View Stereo (MVS) methods. The obtained dense 3D point clouds are then visually validated as well as compared with photogrammetric ground truth, achieved by acquiring images with a reflex camera, or by analysing the noise of the 3D data on flat surfaces.
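The fixed-time-interval frame extraction mentioned in this abstract amounts to sampling frame indices at a regular spacing derived from the video frame rate. A minimal sketch of that index computation (the function name and parameters are illustrative, not from the paper):

```python
def frame_indices(fps, duration_s, interval_s):
    """Indices of the frames to extract from a video when sampling one
    frame every `interval_s` seconds (fixed-time-interval strategy)."""
    step = max(1, round(fps * interval_s))  # frames between extractions
    total = int(fps * duration_s)           # total frames in the sequence
    return list(range(0, total, step))

# 30 fps, 10 s clip, one frame every 2 s -> frames 0, 60, 120, 180, 240
print(frame_indices(30, 10, 2))
```

The two "advanced" extraction methods the paper refers to would instead select frames adaptively (e.g. by sharpness or overlap), which this fixed-step scheme does not capture.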
DEEP-IMAGE-MATCHING: A TOOLBOX FOR MULTIVIEW IMAGE MATCHING OF COMPLEX SCENARIOS
Finding corresponding points between images is a fundamental step in photogrammetry and computer vision tasks. Traditionally, image matching has relied on hand-crafted algorithms such as SIFT or ORB. However, these algorithms face challenges when dealing with multi-temporal images, varying radiometry and contents, as well as significant viewpoint differences. Recently, the computer vision community has proposed several deep learning-based approaches that are trained for challenging illumination and wide viewing angle scenarios. However, they suffer from certain limitations, such as sensitivity to rotations, and they are not applicable to high-resolution images due to computational constraints. In addition, they are not widely used by the photogrammetric community due to limited integration with standard photogrammetric software packages. To overcome these challenges, this paper introduces Deep-Image-Matching, an open-source toolbox designed to match images using different matching strategies, ranging from traditional hand-crafted to deep-learning methods (https://github.com/3DOM-FBK/deep-image-matching). The toolbox accommodates high-resolution datasets, e.g. data acquired with full-frame or aerial sensors, and addresses known rotation-related problems of the learned features. The toolbox provides image correspondence outputs that are directly compatible with commercial and open-source software packages, such as COLMAP and openMVG, for bundle adjustment. The paper also includes a series of cultural heritage case studies that present challenging conditions where traditional hand-crafted approaches typically fail.
GEOMETRIC FEATURES ANALYSIS FOR THE CLASSIFICATION OF CULTURAL HERITAGE POINT CLOUDS
In recent years, the application of artificial intelligence (machine learning and deep learning methods) to the classification of 3D point clouds has become an important task in modern 3D documentation and modelling applications. The identification of proper geometric and radiometric features becomes fundamental to classify 2D/3D data correctly. While many studies have been conducted in the geospatial field, the cultural heritage sector is still partly unexplored. In this paper we analyse the efficacy of geometric covariance features as a support for the classification of Cultural Heritage point clouds. To analyse the impact of the different features calculated on spherical neighbourhoods at various radius sizes, we present results obtained on four different heritage case studies using different feature configurations.
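The geometric covariance features this abstract refers to are commonly derived from the eigenvalues of the covariance matrix of a point's local neighbourhood. A hedged sketch of the standard eigenvalue-based formulation (linearity, planarity, sphericity), using a synthetic planar neighbourhood; the exact feature set and radii used in the paper may differ:

```python
import numpy as np

def covariance_features(neighbourhood):
    """Eigenvalue-based geometric features of a local 3D neighbourhood
    (e.g. a spherical one): linearity, planarity, sphericity."""
    cov = np.cov(neighbourhood.T)
    # eigenvalues sorted in decreasing order, l1 >= l2 >= l3
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "sphericity": l3 / l1,
    }

# Points spread on the z = 0 plane -> planarity dominates
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
f = covariance_features(plane)
print(f["planarity"] > f["linearity"])  # True: the patch is flat
```

Computed at several neighbourhood radii, such features discriminate flat walls from edges and clutter, which is what makes them useful as classifier input.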
PHOTOGRAMMETRY NOW AND THEN – FROM HAND-CRAFTED TO DEEP-LEARNING TIE POINTS
Historical images provide a valuable source of information exploited by several kinds of applications, such as the monitoring of cities and territories and the reconstruction of destroyed buildings, and are increasingly being shared for cultural promotion projects through virtual reality or augmented reality applications. Finding reliable and accurate matches between historical and present-day images is a fundamental step for such tasks, since they require co-registering the present 3D scene with the past one. Classical image matching solutions are sensitive to strong radiometric variations within the images, which are particularly relevant in these multi-temporal contexts due to the different types of sensitive media (film/sensors) employed for the image acquisitions, different lighting conditions and viewpoint angles. In this work, we investigate the actual improvement provided by recent deep learning approaches to match historical and present-day images. As learning-based methods have been trained to find reliable matches in challenging scenarios, including large viewpoint and illumination changes, they could overcome the limitations of classic hand-crafted methods such as SIFT and ORB. The most relevant approaches proposed by the research community in recent years are analysed and compared using pairs of multi-temporal images.
A BENCHMARK FOR LARGE-SCALE HERITAGE POINT CLOUD SEMANTIC SEGMENTATION
The lack of benchmarking data for the semantic segmentation of digital heritage scenarios is hampering the development of automatic classification solutions in this field. Heritage 3D data feature complex structures and uncommon classes that prevent the simple deployment of available methods developed in other fields and for other types of data. The semantic classification of heritage 3D data would support the community in better understanding and analysing digital twins, facilitate restoration and conservation work, etc. In this paper, we present the first benchmark with millions of manually labelled 3D points belonging to heritage scenarios, realised to facilitate the development, training, testing and evaluation of machine and deep learning methods and algorithms in the heritage field. The proposed benchmark, available at http://archdataset.polito.it/, comprises datasets and classification results for better comparisons and insights into the strengths and weaknesses of different machine and deep learning approaches for heritage point cloud semantic segmentation, in addition to promoting a form of crowdsourcing to enrich the already annotated database.
EVALUATING HAND-CRAFTED AND LEARNING-BASED FEATURES FOR PHOTOGRAMMETRIC APPLICATIONS
The image orientation (or Structure from Motion, SfM) process needs well-localized, repeatable and stable tie points in order to derive camera poses and a sparse 3D representation of the surveyed scene. The accurate identification of tie points in large image datasets is still an open research topic in the photogrammetric and computer vision communities. Tie points are established by first extracting keypoints using hand-crafted feature detector and descriptor methods. In recent years, new solutions based on convolutional neural network (CNN) methods were proposed to let a deep network discover which feature extraction process and representation are most suitable for the processed images. In this paper we compare state-of-the-art hand-crafted and learning-based methods for the establishment of tie points in various and diverse image datasets. The investigation highlights the actual challenges for feature matching and evaluates the selected methods under different acquisition conditions (network configurations, image overlap, UAV vs terrestrial, strip vs convergent) and scene characteristics. Remarks and lessons learned, constrained to the used datasets and methods, are provided.
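The hand-crafted keypoint detection that this abstract contrasts with learned features can be illustrated with the classic Harris corner response, one of the oldest hand-crafted detectors. This is a generic textbook formulation with a crude 3x3 box window, not one of the detectors evaluated in the paper; the synthetic test image is an assumption for illustration:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    per-pixel structure tensor built from image gradients (3x3 window)."""
    iy, ix = np.gradient(img.astype(float))      # gradients along rows, cols
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    def box(a):                                  # 3x3 box filter via padding
        p = np.pad(a, 1)
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3))
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

# Synthetic image with one bright square: the response peaks near its corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
r = harris_response(img)
peak = np.unravel_index(np.argmax(r), r.shape)
```

Detectors like SIFT refine this idea with scale-space extrema and descriptors for matching; learned detectors replace the whole hand-designed response with a trained network.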