Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
7,473 result(s) for "Image registration."
Image-to-Image Subpixel Registration Based on Template Matching of Road Network Extracted by Deep Learning
2022
The vast digital archives collected by optical remote sensing observations over a long period of time can be used to determine changes in the land surface, and this information can be very useful in a variety of applications. However, accurate change extraction requires highly accurate image-to-image registration, especially when the target is urban areas in high-resolution remote sensing images. In this paper, we propose a new method for automatic registration between images that can be applied to noisy images, such as old aerial photographs taken with analog film, in the case where changes in man-made objects such as buildings in urban areas are extracted from multitemporal high-resolution remote sensing images. The proposed method performs image-to-image registration by applying template matching to road masks extracted from images using a two-step deep learning model. We applied the proposed method to multitemporal images, including images taken more than 36 years before the reference image. As a result, the proposed method achieved registration accuracy at the subpixel level, more accurate than conventional area-based and feature-based methods, even for image pairs with the most distant acquisition times. The proposed method is expected to be more robust to differences in sensor characteristics, acquisition time, resolution and color tone between two remote sensing images, as well as to temporal variations in vegetation and the effects of building shadows. These results were obtained with a road extraction model trained on images from a single area, single time period and single platform, demonstrating the high versatility of the model. Furthermore, the performance is expected to improve and stabilize when images from different areas, time periods and platforms are used for training.
Journal Article
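The template-matching step described in the abstract above is easy to illustrate in plain NumPy: normalized cross-correlation gives an integer-pixel peak, and fitting a parabola through the peak and its neighbours refines the match to subpixel precision. This is a generic sketch of the technique, not the authors' implementation; the brute-force search and function names are our own.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image` and return the normalized
    cross-correlation score map (valid positions only)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = tnorm * np.sqrt((p * p).sum())
            scores[y, x] = (t * p).sum() / denom if denom > 0 else 0.0
    return scores

def subpixel_peak(scores):
    """Refine the integer peak to subpixel accuracy by fitting a
    parabola through the peak and its two neighbours on each axis."""
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    dy = dx = 0.0
    if 0 < y < scores.shape[0] - 1:
        c, l, r = scores[y, x], scores[y - 1, x], scores[y + 1, x]
        if (l - 2 * c + r) != 0:
            dy = 0.5 * (l - r) / (l - 2 * c + r)
    if 0 < x < scores.shape[1] - 1:
        c, l, r = scores[y, x], scores[y, x - 1], scores[y, x + 1]
        if (l - 2 * c + r) != 0:
            dx = 0.5 * (l - r) / (l - 2 * c + r)
    return y + dy, x + dx
```

In practice the paper applies this kind of matching to binary road masks rather than raw intensities, which is what makes it robust to radiometric differences between old film scans and recent imagery.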
Multimodal Remote Sensing Image Registration Methods and Advancements: A Survey
2021
With rapid advancements in remote sensing image registration algorithms, comprehensive imaging applications are no longer limited to single-modal remote sensing images. Instead, multi-modal remote sensing (MMRS) image registration has become a research focus in recent years. However, considering multi-source, multi-temporal, and multi-spectrum input introduces significant nonlinear radiation differences in MMRS images, for which researchers need to develop novel solutions. At present, comprehensive reviews and analyses of MMRS image registration methods are inadequate in related fields. Thus, this paper introduces three theoretical frameworks: namely, area-based, feature-based and deep learning-based methods. We present a brief review of traditional methods and focus on more advanced methods for MMRS image registration proposed in recent years. Our review and comprehensive analysis is intended to provide researchers in related fields with an advanced understanding to achieve further breakthroughs and innovations.
Journal Article
Unsupervised Image Registration towards Enhancing Performance and Explainability in Cardiac and Brain Image Analysis
by Yang, Guang; Papanastasiou, Giorgos; Wang, Chengjia
in Algorithms; Brain - diagnostic imaging; deep learning
2022
Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers can be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model operates in a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registrations in the clinical setting.
Journal Article
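The inverse-consistency property discussed above, that composing the forward and backward deformations should return every point to where it started, can be expressed as a loss over two displacement fields. A minimal NumPy sketch under our own simplifying assumptions ((2, H, W) fields, nearest-neighbour resampling); the paper's model is a full deep network, and the names here are ours:

```python
import numpy as np

def warp_field(field, disp):
    """Resample a (2, H, W) vector field at positions shifted by `disp`,
    using nearest-neighbour interpolation for simplicity."""
    _, h, w = field.shape
    gy, gx = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(gy + disp[0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(gx + disp[1]).astype(int), 0, w - 1)
    return field[:, sy, sx]

def inverse_consistency_loss(fwd, bwd):
    """Mean squared norm of fwd(x) + bwd(x + fwd(x)); zero when the
    backward field exactly undoes the forward field."""
    residual = fwd + warp_field(bwd, fwd)
    return float((residual ** 2).mean())
```

During training such a term is added to the image-similarity objective, penalizing forward/backward field pairs that are not mutual inverses.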
MDReg‐Net: Multi‐resolution diffeomorphic image registration using fully convolutional networks with deep self‐supervision
2022
We present a diffeomorphic image registration algorithm to learn spatial transformations between pairs of images to be registered using fully convolutional networks (FCNs) under a self‐supervised learning setting. Particularly, a deep neural network is trained to estimate diffeomorphic spatial transformations between pairs of images by maximizing an image‐wise similarity metric between fixed and warped moving images, similar to those adopted in conventional image registration algorithms. The network is implemented in a multi‐resolution image registration framework to optimize and learn spatial transformations at different image resolutions jointly and incrementally with deep self‐supervision in order to better handle large deformation between images. A spatial Gaussian smoothing kernel is integrated with the FCNs to yield sufficiently smooth deformation fields for diffeomorphic image registration. The spatial transformations learned at coarser resolutions are utilized to warp the moving image, which is subsequently used as input to the network for learning incremental transformations at finer resolutions. This procedure proceeds recursively to the full image resolution and the accumulated transformations serve as the final transformation to warp the moving image at the finest resolution. Experimental results for registering high‐resolution 3D structural brain magnetic resonance (MR) images have demonstrated that image registration networks trained by our method obtain robust, diffeomorphic image registration results within seconds with improved accuracy compared with state‐of‐the‐art image registration algorithms.
Journal Article
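Two ingredients of the approach above are easy to illustrate: Gaussian smoothing of a displacement field, and checking that the map x → x + u(x) is diffeomorphic by verifying its Jacobian determinant stays positive (no folding). A standalone 2D NumPy sketch under our own simplifying assumptions (separable convolution with zero padding), not the paper's network:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_field(disp, sigma=2.0):
    """Separable Gaussian smoothing of a (2, H, W) displacement field."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    out = disp.copy()
    for c in range(2):
        out[c] = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode='same'), 1, out[c])
        out[c] = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode='same'), 0, out[c])
    return out

def jacobian_determinant(disp):
    """Jacobian determinant of x -> x + disp(x), via finite differences."""
    dy_dy, dy_dx = np.gradient(disp[0])
    dx_dy, dx_dx = np.gradient(disp[1])
    return (1 + dy_dy) * (1 + dx_dx) - dy_dx * dx_dy
```

A rough random displacement field typically folds space, while its Gaussian-smoothed version keeps the determinant positive everywhere, which is the property the paper's smoothing kernel is designed to encourage.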
An anatomically detailed and personalizable head injury model: Significance of brain and white matter tract morphological variability on strain
2021
Finite element (FE) head models are important numerical tools to study head injuries and develop protection systems. The generation of anatomically accurate and subject-specific head models with conforming hexahedral meshes remains a significant challenge. The focus of this study is to present two developmental works: first, an anatomically detailed FE head model with conforming hexahedral meshes that has smooth interfaces between the brain and the cerebrospinal fluid, embedded with white matter (WM) fiber tracts; second, a morphing approach for subject-specific head model generation via a new hierarchical image registration pipeline integrating the Demons and DRAMMS deformable registration algorithms. The performance of the head model is evaluated by comparing model predictions with experimental data of brain–skull relative motion, brain strain, and intracranial pressure. To demonstrate the applicability of the head model and the pipeline, six subject-specific head models of largely varying intracranial volume and shape are generated, incorporating subject-specific WM fiber tracts. DICE similarity coefficients for cranial, brain mask, local brain regions, and lateral ventricles are calculated to evaluate personalization accuracy, demonstrating the efficiency of the pipeline in generating detailed subject-specific head models that achieve satisfactory element quality without further mesh repairing. The six head models are then subjected to the same concussive loading to study the sensitivity of brain strain to inter-subject variability of the brain and WM fiber morphology. The simulation results show significant differences in maximum principal strain and axonal strain in local brain regions (one-way ANOVA test, p < 0.001), and the locations of these strains also vary among the subjects, demonstrating the need to further investigate the significance of subject-specific models. The techniques developed in this study may contribute to better evaluation of individual brain injury and the development of individualized head protection systems in the future. This study also contains general aspects the research community may find useful: the use of experimental brain strain close to or at injury level for head model validation, and a hierarchical image registration pipeline that can be used to morph other head models, such as smoothed-voxel models.
Journal Article
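The DICE similarity coefficient used above to score personalization accuracy is straightforward for binary masks: twice the overlap divided by the total size of the two masks. A minimal sketch (the return value for two empty masks is our own convention):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DICE similarity between two binary masks:
    2 * |A intersect B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total
```

A value of 1.0 means the morphed model's region exactly matches the subject's segmentation; values near 0 indicate little overlap.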
Australian Society of Medical Imaging and Radiation Therapy Image Registration in Radiation Therapy Position Paper
2026
The report from the American Association of Physicists in Medicine (AAPM) Task Group 132, published in 2017, established a framework and recommendations for the safe implementation of rigid image registration (RIR) and deformable image registration (DIR) into radiation oncology clinical practice. The Medical Image Registration Special Interest Group (MRSIG) of the Australasian College of Physical Scientists and Engineers in Medicine further built on these recommendations by publishing best practice guidelines for DIR warping, based on increased accessibility to such tools in the clinical environment. There remains an increasing responsibility on radiation therapists, critical members of the radiation oncology multidisciplinary team, to safely embed RIR and DIR processes into their routine clinical practice, along all steps of the patient's radiation therapy treatment journey. This position paper, authored by the Australian Society of Medical Imaging and Radiation Therapy (ASMIRT) Image Registration Working Party and endorsed by the ASMIRT Executive, (i) details the role of radiation therapists in the application of RIR and DIR in day‐to‐day clinical practice and (ii) delivers a series of recommendations to support the safe implementation of RIR and DIR into radiation therapist workflows.
Journal Article
Fast Diffeomorphic Image Registration via Fourier-Approximated Lie Algebras
2019
This paper introduces Fourier-approximated Lie algebras for shooting (FLASH), a fast geodesic shooting algorithm for diffeomorphic image registration. We approximate the infinite-dimensional Lie algebra of smooth vector fields, i.e., the tangent space at the identity of the diffeomorphism group, with a low-dimensional, bandlimited space. We show that most of the computations for geodesic shooting can be carried out entirely in this low-dimensional space. Our algorithm results in dramatic savings in time and memory over traditional large-deformation diffeomorphic metric mapping algorithms, which require dense spatial discretizations of vector fields. To validate the effectiveness of FLASH, we run pairwise image registration on both 2D synthetic data and real 3D brain images and compare with the state-of-the-art geodesic shooting methods. Experimental results show that our algorithm dramatically reduces the computational cost and memory footprint of diffeomorphic image registration with little or no loss of accuracy.
Journal Article
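The core idea in FLASH, representing velocity fields in a low-dimensional bandlimited space, can be illustrated by truncating a field's Fourier spectrum. A toy NumPy sketch (the `keep` cutoff and function name are ours; the actual method carries out the geodesic-shooting computations inside the truncated space rather than round-tripping through full resolution):

```python
import numpy as np

def bandlimit(velocity, keep):
    """Project a (2, H, W) velocity field onto a bandlimited space by
    zeroing every Fourier coefficient whose frequency index exceeds
    `keep` on either axis."""
    out = np.empty_like(velocity)
    for c in range(velocity.shape[0]):
        f = np.fft.fft2(velocity[c])
        # integer frequency indices 0, 1, ..., n/2, ..., -1 per axis
        fy = np.abs(np.fft.fftfreq(f.shape[0], d=1.0 / f.shape[0]))
        fx = np.abs(np.fft.fftfreq(f.shape[1], d=1.0 / f.shape[1]))
        mask = (fy[:, None] <= keep) & (fx[None, :] <= keep)
        out[c] = np.fft.ifft2(f * mask).real
    return out
```

Low-frequency components pass through unchanged while high-frequency detail is discarded, so a field living in the retained band is described by far fewer coefficients, which is where the paper's time and memory savings come from.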
ConKeD: multiview contrastive descriptor learning for keypoint-based retinal image registration
by Rouco, José; Hervella, Álvaro S.; Novo, Jorge
in Algorithms; Artificial neural networks; Bifurcations
2024
Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that enables the utilization of additional information from the available training samples. This makes it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, that are detected using a deep neural network. Our experimental results demonstrate the benefits of the novel multi-positive multi-negative strategy, as it outperforms the widely used triplet loss technique (single-positive and single-negative) as well as the single-positive multi-negative alternative. Additionally, the combination of ConKeD with the domain-specific keypoints produces results comparable to the state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, utilizing fewer training samples, and requiring fewer detected keypoints. Therefore, ConKeD shows promising potential for facilitating the development and application of deep learning-based methods for retinal image registration.
Journal Article
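The multi-positive multi-negative idea can be sketched as an InfoNCE-style loss in which each positive is scored against a shared pool of negatives and the per-positive losses are averaged. A generic NumPy illustration; the cosine similarity, temperature value, and names are our assumptions, not ConKeD's exact formulation:

```python
import numpy as np

def multi_pos_neg_loss(anchor, positives, negatives, temperature=0.1):
    """Average over positives of -log softmax(positive vs. all negatives),
    using cosine similarity scaled by a temperature."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    neg_terms = np.array([np.exp(sim(anchor, n) / temperature)
                          for n in negatives])
    losses = []
    for p in positives:
        pos_term = np.exp(sim(anchor, p) / temperature)
        losses.append(-np.log(pos_term / (pos_term + neg_terms.sum())))
    return float(np.mean(losses))
```

The loss is small when every positive descriptor is much closer to the anchor than any negative, and large otherwise, so minimizing it pulls all positives toward the anchor at once instead of one at a time as in a triplet loss.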
2D-3D deformable image registration of histology slide and micro-CT with DISA-based initialization
by Ronchetti, Matteo; Gedara, Mahesh Thalwaththe; Lölkes, Claudia
in 2D-3D Image Registration
2025
Recent developments in the registration of histology and micro-computed tomography (µCT) have broadened the scope of pathological applications such as µCT-based virtual histology. This topic remains challenging because of the low image quality of soft-tissue CT. Additionally, soft-tissue samples usually deform during histology slide preparation, making it difficult to correlate structures between the histology slide and the µCT. In this work, we propose a novel 2D-3D multi-modal deformable image registration method. The method utilizes an initial global 2D-3D registration using an ML-based differentiable similarity measure, which is then finalized by an analytical out-of-plane deformation refinement. The method is evaluated on datasets acquired from tonsil and tumor tissues; µCTs of both phase-contrast and conventional absorption modalities are investigated. The registration results from the proposed method are compared with those from intensity- and keypoint-based methods, using both visual and fiducial-based evaluations. The proposed method demonstrates superior performance compared to the other two methods.
Journal Article