751 result(s) for "surface registration"
Cortical surface registration using unsupervised learning
Non-rigid cortical registration is an important and challenging task due to the geometric complexity of the human cortex and the high degree of inter-subject variability. A conventional solution is to use a spherical representation of surface properties and perform registration by aligning cortical folding patterns in that space. This strategy produces accurate spatial alignment, but often requires high computational cost. Recently, convolutional neural networks (CNNs) have demonstrated the potential to dramatically speed up volumetric registration. However, due to distortions introduced by projecting a sphere to a 2D plane, a direct application of recent learning-based methods to surfaces yields poor results. In this study, we present SphereMorph, a diffeomorphic registration framework for cortical surfaces using deep networks that addresses these issues. SphereMorph uses a UNet-style network associated with a spherical kernel to learn the displacement field and warps the sphere using a modified spatial transformer layer. We propose a resampling weight in computing the data fitting loss to account for distortions introduced by polar projection, and demonstrate the performance of our proposed method on two tasks: cortical parcellation and group-wise functional area alignment. The experiments show that the proposed SphereMorph is capable of modeling the geometric registration problem in a CNN framework and demonstrate superior registration accuracy and computational efficiency. The source code of SphereMorph will be released at https://github.com/voxelmorph/spheremorph upon acceptance of this manuscript.
  • Non-rigid cortical registration is an important and challenging task.
  • Convolutional neural networks (CNNs) have demonstrated the potential to dramatically speed up volumetric registration.
  • We present SphereMorph, a diffeomorphic registration framework for cortical surfaces using deep networks.
  • Our experiments demonstrate superior registration accuracy and computational efficiency.
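The distortion-compensating loss weight mentioned in the abstract can be illustrated with a short sketch (a toy under stated assumptions, not SphereMorph's actual code): on an equirectangular latitude-longitude grid, the true surface area of each cell scales with the sine of its polar angle, so rows near the poles should contribute less to a data-fitting loss.

```python
import numpy as np

def polar_resampling_weights(n_lat: int, n_lon: int) -> np.ndarray:
    """Per-pixel loss weights for an equirectangular (latitude-longitude)
    projection of a sphere.

    Cells near the poles cover far less true surface area than cells near
    the equator; weighting each row by sin(theta) compensates, so a
    weighted loss approximates an integral over the sphere.
    """
    # Polar angle theta in (0, pi), sampled at row centers.
    theta = (np.arange(n_lat) + 0.5) * np.pi / n_lat
    weights = np.repeat(np.sin(theta)[:, None], n_lon, axis=1)
    # Normalize to a proper averaging kernel (weights sum to 1).
    return weights / weights.sum()

w = polar_resampling_weights(64, 128)
# Equator rows carry far more weight than rows touching the poles.
```

Multiplying a per-pixel squared-error map by these weights before summing is one way such a resampling weight can enter the loss.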
CIVET-Macaque: An automated pipeline for MRI-based cortical surface generation and cortical thickness in macaques
The MNI CIVET pipeline for automated extraction of cortical surfaces and evaluation of cortical thickness from in-vivo human MRI has been extended for processing macaque brains. Processing uses the NIMH Macaque Template (NMT) as the reference template, with the anatomical parcellation of the surface following the D99 and CHARM atlases. The modifications needed to adapt CIVET to the macaque brain are detailed. CIVET-macaque was used to process the anatomical scans of the 31 macaques used to generate the NMT and of another 95 macaques from the PRIME-DE initiative. It is anticipated that open usage of CIVET-macaque will promote collaborative efforts in data collection, processing, sharing, and automated analysis, from which the non-human primate brain imaging field will advance.
Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation
Standard registration algorithms need to be independently applied to each surface to register, following careful pre-processing and hand-tuning. Recently, learning-based approaches have emerged that reduce the registration of new scans to running inference with a previously-trained model. The potential benefits are manifold: inference is typically orders of magnitude faster than solving a new instance of a difficult optimization problem, deep learning models can be made robust to noise and corruption, and the trained model may be re-used for other tasks, e.g. through transfer learning. In this paper, we cast the registration task as a surface-to-surface translation problem, and design a model to reliably capture the latent geometric information directly from raw 3D face scans. We introduce Shape-My-Face (SMF), a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model that we smoothly integrate with the mesh convolutions. Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template. Additionally, our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets. We extensively evaluate the quality of our registrations on diverse data. We demonstrate the robustness and generalizability of our model with in-the-wild face scans across different modalities, sensor types, and resolutions. Finally, we show that, by learning to register scans, SMF produces a hybrid linear and non-linear morphable model. Manipulation of the latent space of SMF allows for shape generation, and morphing applications such as expression transfer in-the-wild. We train SMF on a dataset of human faces comprising 9 large-scale databases on commodity hardware.
A Scale Independent Selection Process for 3D Object Recognition in Cluttered Scenes
In recent years, a wide range of algorithms and devices has become available for easily acquiring range images. The increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Locating and fitting a model to a scene are very important tasks in many scenarios such as industrial inspection, scene understanding, medical imaging, and even gaming. For this reason, these problems have been addressed extensively in the literature. Several of the proposed methods adopt local descriptor-based approaches, while a number of hurdles still hinder the use of global techniques. In this paper we offer a different perspective on the topic: we adopt an evolutionary selection algorithm that seeks global agreement among surface points while operating at a local level. The approach effectively extends the scope of local descriptors by actively selecting correspondences that satisfy global consistency constraints, allowing us to attack a more challenging scenario in which model and scene have different, unknown scales. This leads to a novel and very effective pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent state-of-the-art techniques.
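The scale-invariant global-consistency idea can be sketched in a few lines (a simplified stand-in; the paper uses an evolutionary game-theoretic selection, and the function name and tolerance here are illustrative assumptions): every pair of putative correspondences votes for the unknown model-to-scene scale via a distance ratio, and correspondences that disagree with the consensus are rejected.

```python
import numpy as np

def pairwise_scale_consistency(model_pts, scene_pts, tol=0.1):
    """Score how well putative correspondences agree on one global scale.

    For each pair of correspondences (i, j), the ratio of the scene
    distance to the model distance estimates the unknown scale; a
    consistent set produces tightly clustered ratios. Returns the
    consensus (median) scale and the fraction of agreeing pairs.
    """
    m = np.asarray(model_pts, dtype=float)
    s = np.asarray(scene_pts, dtype=float)
    ratios = []
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            dm = np.linalg.norm(m[i] - m[j])
            ds = np.linalg.norm(s[i] - s[j])
            if dm > 1e-9:  # skip degenerate pairs
                ratios.append(ds / dm)
    ratios = np.array(ratios)
    scale = float(np.median(ratios))
    inlier_frac = float(np.mean(np.abs(ratios - scale) < tol * scale))
    return scale, inlier_frac
```

Because only intra-set distance ratios are used, the score is invariant to the absolute scale of either point set, which is the property the paper exploits.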
A comparison of voxel- and surface-based cone-beam computed tomography mandibular superimposition in adult orthodontic patients
Objective: To evaluate the accuracy, reliability, and efficiency of voxel- and surface-based registration for cone-beam computed tomography (CBCT) mandibular superimposition in adult orthodontic patients. Methods: Pre- and post-treatment CBCT scans of 27 adult orthodontic patients were obtained. Voxel- and surface-based CBCT mandibular superimpositions were performed using the mandibular basal bone as a reference. Accuracy was evaluated using the absolute mean distance between superimposed surfaces, and the time required for each method was compared. Statistical differences were determined using paired t-tests, and inter-observer reliability was assessed with intraclass correlation coefficients (ICCs). Results: The absolute mean distances on seven mandibular surface areas did not differ significantly between the two methods. ICC values for surface-based registration (0.918 to 0.990) were slightly lower than those for voxel-based registration (0.984 to 0.996). Voxel-based registration took 44.6 ± 2.5 s, versus 252.3 ± 7.1 s for surface-based registration. Conclusions: Both methods are accurate and reliable, with no significant difference in accuracy; however, voxel-based registration is more efficient than surface-based registration for CBCT mandibular superimposition.
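The paired t-test used to compare the two methods on the same 27 patients reduces to a one-line statistic; a minimal stdlib-only sketch (the sample data in the usage example are invented, not the study's measurements):

```python
import math

def paired_t_statistic(a, b):
    """Paired t-test statistic for two measurements on the same subjects:
    t = mean(d) / (sd(d) / sqrt(n)), where d_i = a_i - b_i.
    """
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Toy usage with invented timing data (seconds) for two methods:
t = paired_t_statistic([2.0, 4.0, 6.0, 8.0], [1.0, 1.0, 1.0, 1.0])
```

The statistic is then compared against a t-distribution with n - 1 degrees of freedom to obtain the p-value; in practice `scipy.stats.ttest_rel` does both steps.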
Multi-contrast multi-scale surface registration for improved alignment of cortical areas
The position of cortical areas can be approximately predicted from cortical surface folding patterns. However, there is extensive inter-subject variability in cortical folding patterns, prohibiting a one-to-one mapping of cortical folds in certain areas. In addition, the relationship between cortical area boundaries and the shape of the cortex is variable, and weaker for higher-order cortical areas. Current surface registration techniques align cortical folding patterns using sulcal landmarks or cortical curvature, for instance. The alignment of cortical areas by these techniques is thus inherently limited by the sole use of geometric similarity metrics. Magnetic resonance imaging T1 maps show intra-cortical contrast that reflects myelin content, and thus can be used to improve the alignment of cortical areas. In this article, we present a new symmetric diffeomorphic multi-contrast multi-scale surface registration (MMSR) technique that works with partially inflated surfaces in the level-set framework. MMSR generates a more precise alignment of cortical surface curvature in comparison to two widely recognized surface registration algorithms. The resulting overlap in gyrus labels is comparable to FreeSurfer. Most importantly, MMSR improves the alignment of cortical areas further by including T1 maps. As a first application, we present a group average T1 map at a uniquely high resolution and at multiple cortical depths, which reflects the myeloarchitecture of the cortex. MMSR can also be applied to other MR contrasts, such as functional and connectivity data.
  • MMSR is a novel multi-contrast multi-scale surface registration algorithm.
  • MMSR generates a symmetric diffeomorphic transformation in native 3D space.
  • MMSR performs a more precise alignment in comparison to FreeSurfer.
  • MMSR can use multiple contrasts, such as T1, to improve cortical alignment.
  • We present a 0.5 mm isotropic group average T1 map at multiple cortical depths.
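The multi-contrast idea, combining a geometric term with a T1 intensity term in one similarity, can be sketched as a weighted sum (the SSD metric and the weighting below are assumptions for illustration, not MMSR's actual cost function):

```python
import numpy as np

def multi_contrast_similarity(curv_a, curv_b, t1_a, t1_b, w_t1=0.5):
    """Blend a geometric term (curvature agreement) with an intensity
    term (T1 agreement) into a single registration dissimilarity;
    lower values mean better alignment.
    """
    def nssd(a, b):
        # Normalized sum of squared differences between z-scored signals.
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean((a - b) ** 2))

    return (1 - w_t1) * nssd(curv_a, curv_b) + w_t1 * nssd(t1_a, t1_b)
```

Z-scoring each channel before differencing keeps curvature (dimensionless) and T1 (milliseconds) on comparable scales, which is the practical prerequisite for mixing contrasts in one cost.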
A Novel Stretch Energy Minimization Algorithm for Equiareal Parameterizations
Surface parameterizations have been widely applied to computer graphics and digital geometry processing. In this paper, we propose a novel stretch energy minimization (SEM) algorithm for computing equiareal parameterizations of simply connected open surfaces with very small area distortion and greatly improved computational efficiency. In addition, the existence of nontrivial limit points of the SEM algorithm is guaranteed under mild assumptions on the mesh quality. Numerical experiments indicate that the proposed SEM algorithm outperforms other state-of-the-art algorithms in accuracy, effectiveness, and robustness. Applications of SEM to surface remeshing, registration, and morphing for simply connected open surfaces are then demonstrated. Thanks to the SEM algorithm, the computation for these applications can be carried out efficiently and reliably.
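Equiareal quality is commonly measured by per-triangle area ratios between the surface and its flattening; a minimal sketch of such a metric (illustrative only, not the SEM stretch energy itself):

```python
import numpy as np

def triangle_area(p0, p1, p2):
    """Area of a triangle from 2D or 3D vertex coordinates."""
    u = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    v = np.asarray(p2, dtype=float) - np.asarray(p0, dtype=float)
    if u.shape[0] == 2:
        return 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    return 0.5 * float(np.linalg.norm(np.cross(u, v)))

def area_ratios(verts3d, verts2d, faces):
    """Per-triangle area ratios between a 3D mesh and its planar
    parameterization, after normalizing total areas. An equiareal
    parameterization drives every ratio toward 1.
    """
    a3 = np.array([triangle_area(verts3d[i], verts3d[j], verts3d[k])
                   for i, j, k in faces])
    a2 = np.array([triangle_area(verts2d[i], verts2d[j], verts2d[k])
                   for i, j, k in faces])
    return (a2 / a2.sum()) / (a3 / a3.sum())
```

For an already-flat mesh parameterized by its own coordinates, every ratio is exactly 1; curved surfaces necessarily deviate, and an equiareal method minimizes that deviation.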
Comparison of radiation exposure and surgery time between an intraoperative CT with automatic surface registration and a preoperative CT with manual surface registration in navigated spinal surgeries
Purpose: This retrospective matched case–control study compared two CT-based techniques for navigated screw placement in spinal surgery to determine whether radiation exposure and surgery time could be reduced. Methods: Cases treated with an intraoperative CT (iCT) were matched, by type and number of implants, with cases treated with a preoperative CT (pCT), all operated on by one main surgeon. Outcome measures were the radiation exposure from intraoperative control X-rays, the radiation exposure from CT imaging, and the duration of surgery. Results: Radiation exposure was significantly lower in the iCT group: by 69% for the intraoperative control X-rays (median (MED) 88.50/standard deviation (SD) 107.84 vs. MED 286.00/SD 485.04 Gy·cm² for iCT and pCT, respectively; p < 0.001) and by 25% for the CT examinations (MED 317.00/SD 158.62 vs. MED 424.50/SD 225.04 mGy·cm for iCT and pCT, respectively; p < 0.001), with no significant change in surgery time. The correlation between the number of segments fused and the required surgery time was significantly weaker in the iCT group (Pearson product-moment correlation: r = 0.569 vs. r = 0.804 for iCT and pCT, respectively; p < 0.05). Conclusion: Spinal navigation using an intraoperative CT with automatic registration significantly reduces radiation exposure compared to a preoperative CT with intraoperative manual surface registration, without prolonging surgery time. For surgeries of a larger scale, a significant benefit in cut-to-suture time can be gained.
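The reported percentage reductions follow directly from the median doses; a quick arithmetic check:

```python
# The study's reported reductions follow from the median doses:
# intraoperative control X-rays (Gy*cm^2) and CT examinations (mGy*cm).
xray_reduction = 1 - 88.50 / 286.00   # iCT vs. pCT control X-rays
ct_reduction = 1 - 317.00 / 424.50    # iCT vs. pCT CT examinations
print(round(xray_reduction * 100), round(ct_reduction * 100))  # prints: 69 25
```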
Increase in tibial internal rotation due to weight-bearing is a key feature to diagnose early-stage knee osteoarthritis: a study with upright computed tomography
Background: The classification of knee osteoarthritis is an essential clinical issue, particularly in terms of diagnosing early knee osteoarthritis. However, the evaluation of three-dimensional limb alignment on two-dimensional radiographs is limited. This study evaluated the three-dimensional changes induced by weight-bearing in lower-limb alignment at various stages of knee osteoarthritis. Methods: Forty-five knees of 25 patients (69.9 ± 8.9 years) with knee osteoarthritis were examined. CT images of the entire leg were obtained in the supine and standing positions using conventional CT and 320-row detector upright CT, respectively. The differences in the three-dimensional alignment of the entire leg between the supine and standing positions were then obtained using a 3D-3D surface registration technique and compared for each Kellgren–Lawrence grade. Results: Greater flexion, adduction, and tibial internal rotation were observed in the standing position than in the supine position. Kellgren–Lawrence grades 1 and 4 showed significant differences in flexion, adduction, and tibial internal rotation between the two postures. Grades 2 and 4 showed significant differences in adduction, while grades 1 and 2, and 1 and 3, showed significant differences in tibial internal rotation between the standing and supine positions. Conclusions: Weight-bearing increases the three-dimensional deformities of osteoarthritic knees. In particular, greater tibial internal rotation was observed in patients with grades 2 and 3 than in those with grade 1. Increased tibial internal rotation due to weight-bearing is a key pathologic feature for detecting early osteoarthritic change.
Learning the shape of female breasts: an open-access 3D statistical shape model of the female breast built from 110 breast scans
We present the Regensburg Breast Shape Model (RBSM)—a 3D statistical shape model of the female breast built from 110 breast scans acquired in a standing position, and the first to be made publicly available. Together with the model, a fully automated, pairwise surface registration pipeline used to establish dense correspondence among 3D breast scans is introduced. Our method is computationally efficient and requires only four landmarks to guide the registration process. A major challenge when modeling female breasts from surface-only 3D breast scans is the non-separability of breast and thorax. In order to weaken the strong coupling between the breast and surrounding areas, we propose to minimize the variance outside the breast region as much as possible. To achieve this goal, a novel concept called breast probability masks (BPMs) is introduced. A BPM assigns to each point of a 3D breast scan the probability that it belongs to the breast area. During registration, we use BPMs to align the template to the target as accurately as possible inside the breast region and only roughly outside. This simple yet effective strategy significantly reduces the unwanted variance outside the breast region, leading to better statistical shape models in which breast shapes are well decoupled from the thorax. The RBSM is thus able to produce a variety of breast shapes as independently as possible from the shape of the thorax. Our systematic experimental evaluation reveals a generalization ability of 0.17 mm and a specificity of 2.8 mm. To underline the expressiveness of the proposed model, we finally demonstrate in two showcase applications how the RBSM can be used for surgical outcome simulation and for predicting a missing breast from the remaining one. Our model is available at https://www.rbsm.re-mic.de/.
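A statistical shape model like the RBSM is conventionally built by PCA over registered scans in dense correspondence; a generic sketch of that construction follows (not the RBSM code; the array layout and function names are assumptions):

```python
import numpy as np

def build_ssm(shapes):
    """Build a PCA shape model from registered shapes in dense
    correspondence. `shapes` is (n_subjects, n_points * dims): each row
    is one registered scan, flattened. Returns the mean shape,
    orthonormal modes of variation (rows), and per-mode variances.
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    # Principal modes via SVD of the centered data matrix.
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = S ** 2 / (X.shape[0] - 1)
    return mean, Vt, variances

def synthesize(mean, modes, variances, coeffs):
    """Generate a shape from mode coefficients given in units of
    standard deviation (e.g. +/-3 SD along the first mode)."""
    c = np.asarray(coeffs, dtype=float)
    k = c.shape[0]
    return mean + (c * np.sqrt(variances[:k])) @ modes[:k]
```

Sampling `coeffs` from a standard normal yields new plausible shapes, which is the mechanism behind applications like the paper's shape generation and missing-breast prediction; dense correspondence from the registration pipeline is what makes the flattened rows comparable in the first place.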