4,368 result(s) for "3d reconstruction"
A Comprehensive Review of Vision-Based 3D Reconstruction Methods
With the emergence of algorithms such as NeRF and 3DGS, 3D reconstruction has become a popular research topic in recent years. 3D reconstruction technology provides crucial support for training large computer vision models and for advancing general artificial intelligence. With the progress of deep learning and GPU technology, the demand for high-precision, high-efficiency 3D reconstruction is growing, especially in unmanned systems, human-computer interaction, virtual reality, and medicine. This survey categorizes the methods and technologies used in 3D reconstruction, classifying, comparing, and discussing them along three lines: traditional static, dynamic, and machine learning approaches. The survey closes with a detailed analysis of trends and challenges in 3D reconstruction, aiming to give readers who are conducting, or planning to conduct, research on 3D reconstruction a comprehensive introduction to the relevant knowledge.
MoReLab: A Software for User-Assisted 3D Reconstruction
We present MoReLab, a tool for user-assisted 3D reconstruction. This reconstruction requires an understanding of the shapes of the desired objects. Our experiments demonstrate that existing Structure from Motion (SfM) software packages fail to estimate accurate 3D models from low-quality videos due to issues such as low resolution, featureless surfaces, and poor lighting. In such scenarios, which are common for industrial utility companies, user assistance becomes necessary to create reliable 3D models. In our system, the user first adds features and correspondences manually on multiple video frames. Then, classic camera calibration and bundle adjustment are applied. At this point, MoReLab provides several primitive shape tools, such as rectangles, cylinders, and curved cylinders, to model different parts of the scene and export 3D meshes. These shapes are essential for modeling industrial equipment whose videos are typically captured by utility companies with old video cameras (low resolution, compression artifacts, etc.) and in disadvantageous lighting conditions (low lighting, torchlight attached to the video camera, etc.). We evaluate our tool on real industrial case scenarios and compare it against existing approaches. Visual comparisons and quantitative results show that MoReLab achieves superior results relative to other user-interactive 3D modeling tools.
Effects of Different Parameter Settings for 3D Data Smoothing and Mesh Simplification on Near Real-Time 3D Reconstruction of High Resolution Bioceramic Bone Void Filling Medical Images
Three-dimensional reconstruction plays a vital role in assisting doctors and surgeons in diagnosing the healing progress of bone defects. Common three-dimensional reconstruction methods include surface and volume rendering. As the focus is on the shape of the bone, this study omits volume rendering methods. Many improvements have been made to surface rendering methods like Marching Cubes and Marching Tetrahedra, but few target real-time or near real-time surface rendering for large medical images, or study the effects of different parameter settings on those improvements. Hence, this study attempts near real-time surface rendering for large medical images. Different parameter values are tested to study their effects on reconstruction accuracy, reconstruction and rendering time, and the number of vertices and faces. The proposed improvement, combining three-dimensional data smoothing with a Gaussian convolution kernel of size 5 and mesh simplification with a reduction factor of 0.1, is the best parameter combination for balancing high reconstruction accuracy, low total execution time, and a low number of vertices and faces. It increased reconstruction accuracy by 0.0235%, decreased the total execution time by 69.81%, and decreased the number of vertices and faces by 86.57% and 86.61%, respectively.
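The smoothing step this abstract describes (a size-5 3D Gaussian convolution applied to the voxel volume before surface extraction) can be sketched in NumPy as a separable 1D convolution along each axis; the toy volume, sigma, and noise model below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def gaussian_kernel_1d(size=5, sigma=1.0):
    """Discrete 1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_volume(vol, size=5, sigma=1.0):
    """Separable 3D Gaussian smoothing of a voxel volume.

    Convolving the 1D kernel along each axis in turn is equivalent
    to convolving once with the full size**3 Gaussian kernel.
    """
    k = gaussian_kernel_1d(size, sigma)
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

# A noisy binary "bone" volume: a solid ball plus salt noise.
rng = np.random.default_rng(0)
z, y, x = np.mgrid[:32, :32, :32]
ball = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 100).astype(float)
noisy = np.clip(ball + (rng.random(ball.shape) < 0.02), 0.0, 1.0)

smoothed = smooth_volume(noisy, size=5, sigma=1.0)
```

In a full pipeline, the smoothed volume would then feed a Marching Cubes surface extraction and a mesh decimation step (reduction factor 0.1, i.e. keeping roughly 10% of the faces), which this sketch does not reproduce.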
Patients with trochlear dysplasia have dysplastic medial femoral epiphyseal plates
Purpose To investigate the growth of the epiphyseal plate in patients with trochlear dysplasia using a 3D computed tomography (CT)-based reconstruction of the bony structure of the distal femur. The epiphyseal plate was divided into a medial part and a lateral part to compare their differences in patients with trochlear dysplasia. Methods This retrospective study included 50 patients with trochlear dysplasia in the study group and 50 age- and sex-matched patients in the control group. Based on the CT images, MIMICS was used to reconstruct the bony structure of the distal femur. Measurements included the surface area and volume of the growth plate (both medial and lateral), the surface area and capacity of the proximal trochlea, the trochlea-physis distance (TPD) (both medial and lateral), and the height of the medial and lateral condyle. Results The surface area of the medial epiphyseal plate (1339.8 ± 202.4 mm² vs. 1596.6 ± 171.8 mm²), medial TPD (4.9 ± 2.8 mm vs. 10.6 ± 3.0 mm), height of the medial condyle (1.1 ± 2.5 mm vs. 4.9 ± 1.3 mm), and capacity of the proximal trochlear groove (821.7 ± 230.9 mm³ vs. 1520.0 ± 498.0 mm³) were significantly smaller in the study group than in the control group. A significant positive correlation was found among the area of the medial epiphyseal plate, the medial TPD, the height of the medial condyle, and the capacity of the proximal trochlear groove (r = 0.502–0.638). Conclusion The medial epiphyseal plate was dysplastic in patients with trochlear dysplasia. There is a significant positive correlation between the surface area of the medial epiphyseal plate, medial TPD, height of the medial condyle, and capacity of the proximal trochlear groove, which can be used to evaluate the developmental stage of the trochlea in clinical practice and to guide targeted treatment of trochlear dysplasia. Level of evidence III.
Joint estimation of imaging plane and three‐dimensional structure based on inverse synthetic aperture radar image sequences
It is difficult to estimate the effective rotation velocity of non-triaxially stabilized space targets, which leads to deviations in the estimation of imaging plane vectors and azimuth resolution. This makes it hard to reconstruct the three-dimensional (3D) structure of space targets from inverse synthetic aperture radar (ISAR) image sequences. To solve this problem, a joint estimation method for the imaging plane vector and 3D structure based on ISAR image sequences is proposed. First, a compact form of the imaging plane vector is defined. Then, the 3D structure of a target is characterized by a fully connected deep network. Volume rendering for ISAR images is redesigned based on an analysis of the projection formula for ISAR imaging. The network is trained on the discrepancy between the modulus of rendered and observed ISAR images in a self-supervised manner, without 3D supervision. The 3D structure can then be obtained by querying all points in 3D space. The method can therefore optimize imaging planes and produce more complete and accurate results for complex space targets, and simulation experiments verify its superiority. In summary, a compact form of the imaging plane vectors and azimuth resolution is constructed, an MLP network is used to characterize the 3D structure of the target, 2D ISAR images supervise the training that updates the compact form, and finally 3D space point coordinates are constructed to obtain the 3D mesh of the target.
The Joint Calibration Method of Multi-line Laser and Tracking System based on Conjugate Gradient Iteration
The multi-line laser 3D reconstruction system mainly relies on marking points to acquire 3D data. To simplify the acquisition of 3D data for objects, we use a binocular tracking method to achieve unmarked point stitching of the multi-line laser reconstructed 3D data. The key challenge with this system is the joint calibration between the multi-line laser system and the tracking ball cage. Traditionally, planar calibration plates are used for calibration. However, due to the extensive calibration field, the production of large calibration plates incurs high costs and compromises machining accuracy. As a result, significant joint calibration errors occur between the tracking ball and the multi-line laser system, making high-precision calibration impossible. To solve these problems, an iterative method based on multi-position attitude and conjugate gradient is proposed to achieve high-precision joint calibration. A simple and convenient cross pole with multiple coding points is used as a calibrator. The 3D data of these coding points are determined beforehand using a coordinate measuring machine (CMM). First, the internal and external parameters of the binocular tracking system are calibrated using this cross pole. During the joint calibration process, in which both the multi-line laser system and the tracking ball cage are involved, the cross pole is imaged at different positions simultaneously with the binocular tracking system and the multi-line laser system. This allows us to determine the positions and orientations of both systems relative to each other and relative to the cross pole. The transformation relationship between the multi-line laser system and the tracking ball cage is calibrated using an iterative conjugate gradient optimization algorithm based on these positions and orientations, which completes the entire system calibration and eventually achieves three-dimensional reconstruction of the unmarked points. 
Compared to conventional planar calibration plate-based methods, our proposed approach requires only one cross pole to perform two crucial calibration steps, improving the joint calibration accuracy. While the final reconstruction accuracy of conventional methods is about 0.1 mm, our proposed method can achieve an accuracy of about 0.02 mm.
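The conjugate gradient iteration at the core of the proposed calibration can be illustrated on a toy least-squares problem; the textbook CG solver and the hypothetical 6-parameter design matrix `J` below are a sketch of the optimizer only, not the paper's joint-calibration model:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=200):
    """Textbook conjugate gradient for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # conjugate new direction
        rs_old = rs_new
    return x

# Toy calibration-style problem: recover parameters t from noiseless
# observations y = J t by solving the normal equations (J^T J) t = J^T y.
rng = np.random.default_rng(1)
J = rng.standard_normal((100, 6))  # hypothetical design matrix
t_true = np.array([0.1, -0.2, 0.05, 1.0, -1.5, 0.3])
y = J @ t_true
t_est = conjugate_gradient(J.T @ J, J.T @ y)
```

For a 6-dimensional system like this, CG converges in at most six iterations in exact arithmetic; the paper applies the same optimizer to the pose parameters linking the multi-line laser system and the tracking system.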
Plane Fitting in 3D Reconstruction to Preserve Smooth Homogeneous Surfaces
Photogrammetric reconstruction of weakly-textured surfaces, which carry little distinctive information in the R (red), G (green), and B (blue) color channels, is challenging. Considering that most urban or indoor object surfaces follow simple geometric shapes, a novel method for reconstructing smooth homogeneous planar surfaces based on MVS (Multi-View Stereo) is proposed. The idea is to extract enough features to describe the images, and to refine the dense points generated from per-pixel depth values with plane fitting, favoring the alignment of the surface to the detected planes. The SIFT (Scale Invariant Feature Transform) and AKAZE (Accelerated-KAZE) feature extraction algorithms are combined to ensure robustness and to help retrieve connections in small samples. The smoothness of the enclosed watertight Poisson surface can be enhanced by projecting the 3D points onto the absolute planes detected by a RANSAC (Random Sample Consensus)-based approach. Experimental evaluations, both cloud-to-mesh comparisons of per-vertex distances against ground truth models and visual comparisons with a popular mesh-filtering-based post-processing method, indicate that the proposed method considerably preserves the integrity and smoothness of the reconstruction results. Combined with other primitive fittings, the coverage of homogeneous surface reconstruction can be further extended, serving as primitive models for 3D building reconstruction and providing guidance for future work in photogrammetry and 3D surface reconstruction.
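The RANSAC plane detection step used in this pipeline can be sketched as follows; the threshold, iteration count, and synthetic z = 0 plane are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through points: returns a unit normal n and
    offset d such that n . p + d ~ 0 for points p on the plane."""
    centroid = pts.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n @ centroid

def ransac_plane(pts, n_iter=200, thresh=0.01, rng=None):
    """RANSAC plane detection: fit a plane to 3 random points, keep the
    hypothesis with the most inliers, then refit on all its inliers."""
    rng = np.random.default_rng(rng)
    best_mask = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n, d = fit_plane(sample)
        mask = np.abs(pts @ n + d) < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return fit_plane(pts[best_mask]), best_mask

# Synthetic scene: 300 points on the z = 0 plane plus 60 scattered outliers.
rng = np.random.default_rng(2)
plane_pts = np.column_stack([rng.uniform(-1, 1, (300, 2)), np.zeros(300)])
outliers = rng.uniform(-1, 1, (60, 3))
pts = np.vstack([plane_pts, outliers])

(n, d), inliers = ransac_plane(pts, rng=3)
```

In the paper's setting, the dense MVS points belonging to each detected plane would then be projected onto that plane to enforce smoothness before Poisson surface reconstruction.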
Can 3D imaging modeling recognize functional tissue and predict liver failure? A retrospective study based on 3D modelling of the major hepatectomies after hepatic modulation
Background Thanks to the introduction of radiomics, 3D reconstruction is able to analyse tissues and distinguish true hypertrophy from non-functioning tissue in patients treated with major hepatectomies after hepatic modulation. The aim of this study is to evaluate the performance of 3D imaging modelling in predicting liver failure. Methods Patients submitted to major hepatectomies after hepatic modulation at Sanchinarro University Hospital from May 2015 to October 2019 were analysed. Three-dimensional reconstruction was performed before and after surgical treatment. The volumetry of the Future Liver Remnant was calculated, distinguishing the Functional Future Liver Remnant (FRFx), i.e. truly hypertrophic tissue, from the Anatomic Future Liver Remnant (FRL), i.e. hypertrophy plus non-functional tissue (oedema/congestion). These volumes were analysed in patients with and without post-hepatectomy liver failure. Results Twenty-four procedures were performed (11 ALPPS and 13 PVE followed by major hepatectomy). Post-hepatectomy liver failure grade B or C occurred in 6 patients. The ROC curve showed a better AUC for FRFxV (74%) than for FRLV (54%) in predicting PHLF > B. The increase of anatomical FRL (iFRL) was greater in the ALPPS group (120%) than in the PVE group (73%) (p = 0.041), while the increase of functional FRFx (iFRFx) was 35% in the ALPPS group and 46% in the PVE group (p > 0.05), showing no difference between the two groups. Conclusion The 3D reconstruction model allows optimal surgical planning and, through the use of specific algorithms, can help differentiate functioning liver parenchyma within the FLR.
The presentation of a semi-supervised deep learning platform for 3D face reconstruction from 2D images
In recent years, 3D face reconstruction approaches based on deep learning have achieved good results, performing well in terms of both quality and efficiency. In this paper, we present a semi-supervised deep learning platform for 3D reconstruction from 2D images, in which two pre-trained unsupervised segments reduce the need for 3D and 2D labels. By using these trained parts for the training of the entire network, less labeled data is needed. The goal of the presented platform is to find a map between two-dimensional and three-dimensional representation spaces of lower dimensionality. The proposed platform therefore includes unsupervised parts mapping from the 2D and 3D spaces to the low-dimensional representations, and a supervised part mapping between the low-dimensional representations. In summary, we present a method that: 1) uses a robust, combined loss function for weakly supervised learning that considers both low-level information and perceptual-level data; and 2) applies multi-image face reconstruction, using supplementary data from different pictures to consolidate the shape. To demonstrate the effectiveness of the proposed method, various tests were performed on three datasets. The results show that the proposed method significantly reduces the reconstruction error compared to similar methods.
Vehicle Localization in a Completed City-Scale 3D Scene Using Aerial Images and an On-Board Stereo Camera
Simultaneous Localization and Mapping (SLAM) forms the foundation of vehicle localization in autonomous driving. Utilizing high-precision 3D scene maps as prior information greatly assists the navigation of autonomous vehicles within large-scale 3D scene models. However, generating high-precision maps is complex and costly, posing challenges to commercialization. As a result, a global localization system that employs low-precision, city-scale 3D scene maps reconstructed by unmanned aerial vehicles (UAVs) is proposed to optimize visual positioning for vehicles. To address the discrepancies in image information caused by differing aerial and ground perspectives, this paper introduces a wall complementarity algorithm based on the geometric structure of buildings to refine the city-scale 3D scene. A 3D-to-3D feature registration algorithm is developed to determine the vehicle location by aligning the optimized city-scale 3D scene with the local scene generated by an onboard stereo camera. In simulation experiments conducted in a computer graphics (CG) simulator, the completed low-precision scene model achieves vehicle localization with an average error of 3.91 m, close to the 3.27 m error obtained with the high-precision map, validating the effectiveness of the proposed algorithm. The system demonstrates the feasibility of using low-precision city-scale 3D scene maps generated by UAVs for vehicle localization in large-scale scenes.
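A standard building block for 3D-to-3D registration of the kind this abstract mentions is the closed-form Kabsch alignment of matched point pairs; the sketch below assumes noiseless, known correspondences and a hypothetical vehicle pose, and is not the paper's full registration pipeline:

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form (Kabsch) rigid transform: returns R, t minimizing
    ||R @ src_i + t - dst_i|| over matched 3D point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Local stereo point cloud vs. its (hypothetical) pose in the city frame.
rng = np.random.default_rng(4)
local = rng.uniform(-5, 5, (50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, -40.0, 1.5])  # illustrative vehicle position
city = local @ R_true.T + t_true

R, t = rigid_align(local, city)
```

Recovering `R` and `t` here amounts to recovering the vehicle's heading and position in the map frame; a real system would first have to establish the 3D-3D correspondences and reject outliers.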