48 results for "Intensity normalization"
Impact of Preprocessing and Harmonization Methods on the Removal of Scanner Effects in Brain MRI Radiomic Features
In brain MRI radiomics studies, the non-biological variations introduced by different image acquisition settings, namely scanner effects, affect the reliability and reproducibility of the radiomics results. This paper assesses how preprocessing methods (including N4 bias field correction and image resampling) and harmonization methods (either six intensity normalization methods working on brain MRI images or the ComBat method working on radiomic features) help to remove scanner effects and improve radiomic feature reproducibility in brain MRI radiomics. The analyses were based on in vitro datasets (homogeneous and heterogeneous phantom data) and in vivo datasets (brain MRI images collected from healthy volunteers and clinical patients with brain tumors). The results show that the ComBat method is essential for removing scanner effects in brain MRI radiomic studies. Moreover, the intensity normalization methods, while not able to remove scanner effects at the radiomic feature level, still yield more comparable MRI images and improve the robustness of the harmonized features to the choice among ComBat implementations.
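A minimal sketch of the location/scale idea behind feature-level ComBat harmonization (without the empirical Bayes shrinkage of the full method); the array shapes and scanner labels are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def combat_like_harmonize(features, scanner_ids):
    """Simplified location/scale harmonization of radiomic features.

    features    : (n_samples, n_features) array of radiomic features
    scanner_ids : (n_samples,) array of scanner labels
    Returns features with per-scanner mean/variance aligned to the pooled data.
    Note: real ComBat additionally shrinks the per-scanner estimates via empirical Bayes.
    """
    features = np.asarray(features, dtype=float)
    scanner_ids = np.asarray(scanner_ids)
    pooled_mean = features.mean(axis=0)
    pooled_std = features.std(axis=0) + 1e-8
    harmonized = np.empty_like(features)
    for scanner in np.unique(scanner_ids):
        idx = scanner_ids == scanner
        site_mean = features[idx].mean(axis=0)
        site_std = features[idx].std(axis=0) + 1e-8
        # remove scanner-specific location/scale, restore pooled location/scale
        harmonized[idx] = (features[idx] - site_mean) / site_std * pooled_std + pooled_mean
    return harmonized
```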
Intensity normalization methods in brain FDG-PET quantification
The lack of standardization of intensity normalization methods and its unknown effect on the quantification output is recognized as a major drawback for the harmonization of brain FDG-PET quantification protocols. The aim of this work is the ground truth-based evaluation of different intensity normalization methods on brain FDG-PET quantification output. Realistic FDG-PET images were generated using Monte Carlo simulation from activity and attenuation maps directly derived from 25 healthy subjects (adding theoretical relative hypometabolisms on 6 regions of interest and for 5 hypometabolism levels). Single-subject statistical parametric mapping (SPM) was applied to compare each simulated FDG-PET image with a healthy database after intensity normalization based on reference region methods, such as the brain stem (RRBS), cerebellum (RRC) and the temporal lobe contralateral to the lesion (RRTL), and data-driven methods, such as proportional scaling (PS), a histogram-based method (HN) and iterative versions of both methods (iPS and iHN). The performance of these methods was evaluated in terms of the recovery of the introduced theoretical hypometabolic pattern and the appearance of unspecific hypometabolic and hypermetabolic findings. Detected hypometabolic patterns had significantly lower volumes than the introduced hypometabolisms for all intensity normalization methods, particularly for milder reductions in metabolism. Among the intensity normalization methods, RRC and HN provided the largest recovered hypometabolic volumes, while RRBS showed the smallest recovery. In general, data-driven methods outperformed reference region methods, and among them the iterative methods outperformed the non-iterative ones. Unspecific hypermetabolic volumes were similar for all methods, with the exception of PS, for which they became a major limitation (up to 250 cm³) for extended and intense hypometabolism. On the other hand, unspecific hypometabolism was similar for all methods and was usually resolved with appropriate clustering. Our findings show that the inappropriate use of intensity normalization methods can introduce substantial bias in the detected hypometabolism and represents a serious concern in terms of false positives. Based on our findings, we recommend the use of histogram-based intensity normalization methods. Reference region methods performed equivalently to data-driven methods only when the selected reference region was large and stable.
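A minimal sketch contrasting proportional scaling with reference-region intensity normalization for a PET volume; the mask names and array shapes are illustrative assumptions:

```python
import numpy as np

def proportional_scaling(pet, brain_mask):
    """Scale voxel intensities by the global mean within the brain mask (PS)."""
    global_mean = pet[brain_mask].mean()
    return pet / global_mean

def reference_region_normalization(pet, reference_mask):
    """Scale voxel intensities by the mean uptake of a reference region,
    e.g. brain stem (RRBS), cerebellum (RRC) or contralateral temporal lobe (RRTL)."""
    ref_mean = pet[reference_mask].mean()
    return pet / ref_mean
```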
Data-driven identification of intensity normalization region based on longitudinal coherency of 18F-FDG metabolism in the healthy brain
In brain 18F-FDG PET data, intensity normalization is usually applied to control for unwanted factors confounding brain metabolism. However, it can be difficult to determine a proper intensity normalization region as a reference for the identification of abnormal metabolism in diseased brains. In neurodegenerative disorders, differentiating disease-related changes in brain metabolism from age-associated natural changes remains challenging. This study proposes a new data-driven method to identify proper intensity normalization regions in order to improve the separation of age-associated natural changes from disease-related changes in brain metabolism. 127 female and 128 male healthy subjects (age: 20 to 79) with brain 18F-FDG PET/CT in the course of a whole-body cancer screening were included. Brain PET images were processed using SPM8 and were parcellated into 116 anatomical regions according to the AAL template. It is assumed that normal brain 18F-FDG metabolism has longitudinal coherency and that this coherency leads to better model fitting. The coefficient of determination R² was proposed as the coherence coefficient, and the total coherence coefficient (overall fitting quality) was employed as an index to assess proper intensity normalization strategies on single subjects and age-cohort averaged data. Age-associated longitudinal changes of normal subjects were derived using the identified intensity normalization method correspondingly. In addition, 15 subjects with clinically diagnosed Parkinson's disease were assessed to evaluate the clinical potential of the proposed new method. Intensity normalizations by the paracentral lobule and the cerebellar tonsil, both regions derived from the new data-driven coherency method, showed significantly better coherence coefficients than other intensity normalization regions, and especially better than the most widely used global mean normalization. Intensity normalization by the paracentral lobule was the most consistent method within both analysis strategies (subject-based and age-cohort averaging). In addition, the proposed new intensity normalization method using the paracentral lobule generates significantly higher differentiation from the age-associated changes than other intensity normalization methods. Proper intensity normalization can enhance the longitudinal coherency of normal brain glucose metabolism. The paracentral lobule followed by the cerebellar tonsil are shown to be the two most stable intensity normalization regions concerning age-dependent brain metabolism. This may provide the potential to better differentiate disease-related changes from age-related changes in brain metabolism, which is of relevance in the diagnosis of neurodegenerative disorders.
• A method to differentiate disease-related metabolic changes from age-associated natural changes.
• A concept of longitudinal coherency to identify proper age-associated metabolic changes.
• A data-driven method to find the optimal intensity normalization enhancing longitudinal coherency.
• Normalization using the paracentral lobule can best describe age-dependent brain metabolism.
• Development on normal subjects and preliminary verification on PD patients.
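A minimal sketch of the coherence-coefficient idea: normalize regional uptake by a candidate reference region, fit an age model per anatomical region, and score the candidate by the summed R². The polynomial age model and the AAL-style region means are illustrative assumptions, since the abstract does not specify the fitting function:

```python
import numpy as np

def coherence_score(region_means, ages, candidate_idx, degree=2):
    """Total coherence coefficient for one candidate normalization region.

    region_means  : (n_subjects, n_regions) mean uptake per AAL-style region
    ages          : (n_subjects,) subject ages
    candidate_idx : column index of the candidate normalization region
    Returns the sum of R^2 values over all regions after normalization.
    """
    region_means = np.asarray(region_means, dtype=float)
    ages = np.asarray(ages, dtype=float)
    normalized = region_means / region_means[:, [candidate_idx]]
    total_r2 = 0.0
    for j in range(normalized.shape[1]):
        y = normalized[:, j]
        coeffs = np.polyfit(ages, y, degree)   # simple polynomial age model (assumption)
        y_hat = np.polyval(coeffs, ages)
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2) + 1e-12
        total_r2 += 1.0 - ss_res / ss_tot
    return total_r2
```

The candidate region with the highest total coherence (the paracentral lobule and cerebellar tonsil in the study above) would then be selected as the normalization reference.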
Intensity warping for multisite MRI harmonization
In multisite neuroimaging studies there is often unwanted technical variation across scanners and sites. These “scanner effects” can hinder detection of biological features of interest, produce inconsistent results, and lead to spurious associations. We propose mica (multisite image harmonization by cumulative distribution function alignment), a tool to harmonize images taken on different scanners by identifying and removing within-subject scanner effects. Our goals in the present study were to (1) establish a method that removes scanner effects by leveraging multiple scans collected on the same subject, and, building on this, (2) develop a technique to quantify scanner effects in large multisite studies so these can be reduced as a preprocessing step. We illustrate scanner effects in a brain MRI study in which the same subject was measured twice on seven scanners, and assess our method’s performance in a second study in which ten subjects were scanned on two machines. We found that unharmonized images were highly variable across site and scanner type, and our method effectively removed this variability by aligning intensity distributions. We further studied the ability to predict image harmonization results for a scan taken on an existing subject at a new site using cross-validation.
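The abstract does not spell out mica's exact procedure; below is a generic quantile-mapping sketch of the underlying idea of aligning an image's intensity CDF to a reference CDF, with illustrative variable names:

```python
import numpy as np

def cdf_align(moving, reference, n_quantiles=256):
    """Warp the intensities of `moving` so its empirical CDF matches `reference`.

    Both inputs are 1-D arrays of (masked) voxel intensities from the same subject
    scanned on two machines; returns the warped copy of `moving`.
    """
    q = np.linspace(0.0, 1.0, n_quantiles)
    moving_q = np.quantile(moving, q)
    reference_q = np.quantile(reference, q)
    # piecewise-linear map from the moving image's quantiles to the reference quantiles
    return np.interp(moving, moving_q, reference_q)
```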
Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds
Lane markings are one of the essential elements of road information, which is useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, strategies based on original intensity thresholding and on deep learning with manually established labels are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by an MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and for reporting lane marking gaps along various highways. The normalized intensity thresholding leads to better lane marking extraction, with an F1-score of 78.9%, in comparison to the original intensity thresholding, with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score of 85.9% than the one trained on manually established labels, with an F1-score of 75.1%. In concrete pavement areas, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway have been extracted) than the original intensity thresholding approach. For lane width estimation, the two deep learning models produce more estimates than the intensity thresholding strategies, especially in areas with poor edge lane markings, owing to their higher recall rates. The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can subsequently be visualized in RGB imagery to identify their cause.
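A minimal sketch of intensity thresholding on normalized LiDAR returns, here using per-scanner min-max normalization and an Otsu threshold from scikit-image; the normalization scheme and argument names are illustrative assumptions, not the paper's exact strategy:

```python
import numpy as np
from skimage.filters import threshold_otsu

def extract_lane_marking_points(intensity, scanner_id):
    """Return a boolean mask of candidate lane-marking points.

    intensity  : (n_points,) raw LiDAR intensity values
    scanner_id : (n_points,) scanner label per point (MMS with multiple scanners)
    """
    intensity = np.asarray(intensity, dtype=float)
    scanner_id = np.asarray(scanner_id)
    normalized = np.empty_like(intensity)
    for s in np.unique(scanner_id):
        idx = scanner_id == s
        lo, hi = intensity[idx].min(), intensity[idx].max()
        normalized[idx] = (intensity[idx] - lo) / (hi - lo + 1e-8)
    # bright (retro-reflective) lane markings sit above the Otsu threshold
    return normalized > threshold_otsu(normalized)
```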
Multisite reproducibility and test-retest reliability of the T1w/T2w-ratio: A comparison of processing methods
• Reproducibility of regional T1w/T2w-ratio distributions was overall good, with the exception of one dataset.
• Bias correction improved reproducibility of regional distributions in this dataset, while partial volume and outlier corrections had limited effects.
• Test-retest reliability of the T1w/T2w-ratio was poor both regionally and across the cortex.
• Intensity normalisation with WhiteStripe and Z-Score improved test-retest reliability.
The ratio of T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) images is often used as a proxy measure of cortical myelin. However, the T1w/T2w-ratio is based on signal intensities that are inherently non-quantitative and known to be affected by extrinsic factors. To account for this, a variety of processing methods have been proposed, but a systematic evaluation of their efficacy is lacking. Given the dependence of the T1w/T2w-ratio on scanner hardware and T1w and T2w protocols, it is important to ensure that processing pipelines also perform well across different sites. We assessed a variety of processing methods for computing cortical T1w/T2w-ratio maps, including correction methods for nonlinear field inhomogeneities, local outliers, and partial volume effects, as well as intensity normalisation. These were implemented in 33 processing pipelines which were applied to four test-retest datasets, with a total of 170 pairs of T1w and T2w images acquired on four different MRI scanners. We assessed processing pipelines across datasets in terms of their reproducibility of expected regional distributions of cortical myelin, lateral intensity biases, and test-retest reliability regionally and across the cortex. Regional distributions were compared both qualitatively with histology and quantitatively with two reference datasets, YA-BC and YA-B1+, from the Human Connectome Project. Reproducibility of raw T1w/T2w-ratio distributions was overall high, with the exception of one dataset. For this dataset, Spearman rank correlations increased from 0.27 to 0.70 after N3 bias correction relative to the YA-BC reference and from -0.04 to 0.66 after N4ITK bias correction relative to the YA-B1+ reference. Partial volume and outlier corrections had only marginal effects on the reproducibility of T1w/T2w-ratio maps and test-retest reliability. Before intensity normalisation, we found large coefficients of variation (CVs) and low intraclass correlation coefficients (ICCs), with a total whole-cortex CV of 10.13% and a whole-cortex ICC of 0.58 for the raw T1w/T2w-ratio. Intensity normalisation with WhiteStripe, RAVEL, and Z-Score improved total whole-cortex CVs to 5.91%, 5.68%, and 5.19% respectively, whereas Z-Score and Least Squares improved whole-cortex ICCs to 0.96 and 0.97 respectively. In the presence of large intensity nonuniformities, bias field correction is necessary to achieve acceptable correspondence with known distributions of cortical myelin, but it can be detrimental in datasets with less intensity inhomogeneity. Intensity normalisation can improve test-retest reliability and inter-subject comparability. However, both bias field correction and intensity normalisation methods vary greatly in their efficacy and may affect the interpretation of results. The choice of T1w/T2w-ratio processing method must therefore be informed by both scanner and acquisition protocol as well as the given study objective. Our results highlight limitations of the T1w/T2w-ratio, but also suggest concrete ways to enhance its usefulness in future studies.
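A minimal sketch of Z-Score intensity normalisation of a cortical T1w/T2w-ratio map within a mask, one of the normalisation schemes evaluated above; the variable names and masking choice are illustrative:

```python
import numpy as np

def zscore_normalize(t1w_t2w_ratio, cortex_mask):
    """Z-Score normalisation: centre and scale the ratio map using statistics
    computed inside the cortical mask, so values become comparable across
    subjects, scanners, and sessions."""
    values = t1w_t2w_ratio[cortex_mask]
    return (t1w_t2w_ratio - values.mean()) / (values.std() + 1e-8)
```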
LiDAR Intensity Completion: Fully Exploiting the Message from LiDAR Sensors
Light Detection and Ranging (LiDAR) systems are novel sensors that provide robust distance and reflection strength by active pulsed laser beams. They have significant advantages over visual cameras by providing active depth and intensity measurements that are robust to ambient illumination. However, existing systems still pay limited attention to intensity measurements, since the output intensity maps of LiDAR sensors differ from those of conventional cameras and are too sparse. In this work, we propose exploiting the information from both intensity and depth measurements simultaneously to complete the LiDAR intensity maps. With the completed intensity maps, mature computer vision techniques can work well on the LiDAR data without any specific adjustment. We propose an end-to-end convolutional neural network named LiDAR-Net to jointly complete the sparse intensity and depth measurements by exploiting their correlations. For network training, an intensity fusion method is proposed to generate the ground truth. Experiment results indicate that intensity–depth fusion can benefit the task and improve performance. We further apply an off-the-shelf object (lane) segmentation algorithm to the completed intensity maps, which delivers performance that is consistently robust to ambient illumination. We believe that the intensity completion method allows LiDAR sensors to cope with a broader range of practical applications.
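A toy sketch of the joint completion idea, assuming PyTorch: a small encoder-decoder that takes projected sparse intensity and depth maps as a two-channel input and predicts dense versions of both. This is an illustrative stand-in, not the LiDAR-Net architecture:

```python
import torch
import torch.nn as nn

class TinyIntensityCompletionNet(nn.Module):
    """Illustrative joint intensity/depth completion network: consumes sparse
    intensity and depth maps, each shaped (N, 1, H, W), and predicts dense
    intensity (channel 0) and depth (channel 1)."""

    def __init__(self, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 2, 3, padding=1),  # channel 0: intensity, channel 1: depth
        )

    def forward(self, sparse_intensity, sparse_depth):
        # exploit intensity-depth correlations by processing both channels jointly
        x = torch.cat([sparse_intensity, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))
```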
A comparison of FreeSurfer-generated data with and without manual intervention
This paper examined whether FreeSurfer-generated data differed between a fully-automated, unedited pipeline and an edited pipeline that included the application of control points to correct errors in white matter segmentation. In a sample of 30 individuals, we compared the summary statistics of surface area, white matter volumes, and cortical thickness derived from edited and unedited datasets for the 34 regions of interest (ROIs) that FreeSurfer (FS) generates. To determine whether applying control points would alter the detection of significant differences between patient and typical groups, effect sizes for differences between individuals with the genetic disorder 22q11.2 deletion syndrome (22q11DS) and neurotypical controls were compared across the edited and unedited conditions. Analyses were conducted with data that were generated from both a 1.5 tesla and a 3 tesla scanner. For 1.5 tesla data, mean area, volume, and thickness measures did not differ significantly between edited and unedited regions, with the exception of rostral anterior cingulate thickness, lateral orbitofrontal white matter, superior parietal white matter, and precentral gyral thickness. Results were similar for surface area and white matter volumes generated from the 3 tesla scanner. For cortical thickness measures, however, seven edited ROI measures, primarily in frontal and temporal regions, differed significantly from their unedited counterparts, and three additional ROI measures approached significance. Mean effect sizes for edited ROIs did not differ from those of most unedited ROIs for either 1.5 or 3 tesla data. Taken together, these results suggest that although the application of control points may increase the validity of intensity normalization and, ultimately, segmentation, it may not affect the final, extracted metrics that FS generates. Potential exceptions to and limitations of these conclusions are discussed.
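A minimal sketch of the kind of comparison described above: a between-group effect size (Cohen's d) for an ROI measure, computed separately for the edited and unedited pipelines; the variable names are illustrative:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Between-group effect size using a pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Compare the patient-vs-control effect size for one ROI measure under both pipelines:
# d_edited   = cohens_d(patients_edited_thickness, controls_edited_thickness)
# d_unedited = cohens_d(patients_unedited_thickness, controls_unedited_thickness)
```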
Improved lung nodule segmentation with a squeeze excitation dilated attention based residual UNet
The diverse types, varying sizes, proximity to non-nodule structures, and similar shape characteristics of nodules make them challenging for segmentation methods. Although many efforts have been made in automatic lung nodule segmentation, most of them have not sufficiently addressed the challenges related to the type and size of nodules, such as juxta-pleural and juxta-vascular nodules. The current research introduces a Squeeze-Excitation Dilated Attention-based Residual U-Net (SEDARU-Net) with a robust intensity normalization technique to address the challenges related to different types and sizes of lung nodules and to achieve improved lung nodule segmentation. After preprocessing the images with the intensity normalization method and extracting the Regions of Interest with YOLOv3, they are fed into the SEDARU-Net with dilated convolutions in the encoder part. Then, the extracted features are given to the decoder part, which involves transposed convolutions, Squeeze-Excitation Dilated Residual blocks, and skip connections equipped with an Attention Gate, to decode the feature maps and construct the segmentation mask. The proposed model was evaluated using the publicly available Lung Nodule Analysis 2016 (LUNA16) dataset, achieving a Dice Similarity Coefficient of 97.86%, an IoU of 96.40%, a sensitivity of 96.54%, and a precision of 98.84%. Finally, it was shown that each component added to the U-Net's structure, as well as the intensity normalization technique, increased the Dice Similarity Coefficient by more than 2%. The proposed method suggests a potential clinical tool to address challenges related to the segmentation of lung nodules of different types located in proximity to non-nodule structures.
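A minimal sketch of a squeeze-excitation dilated residual block of the kind named above, assuming PyTorch; the channel count, reduction ratio, and dilation rate are illustrative assumptions, since the abstract does not give the exact layer configuration:

```python
import torch
import torch.nn as nn

class SEDilatedResidualBlock(nn.Module):
    """Residual block with dilated convolutions and channel-wise
    squeeze-excitation recalibration (toy configuration)."""

    def __init__(self, channels=64, dilation=2, reduction=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.se = nn.Sequential(                      # squeeze-excitation gate
            nn.AdaptiveAvgPool2d(1),                  # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv(x)
        out = out * self.se(out)          # excitation: reweight feature channels
        return self.relu(out + x)         # residual connection
```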
Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions
Background: Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used for scanning different subjects or the same subject at a different time, which can result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. Methods: In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. Results: We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built without normalization. Conclusions: We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher-quality Chinese brain template.
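A minimal sketch of the two-step procedure described above: rescale the high-quality reference image to the [LIR, HIR] range, then stretch the input image's histogram onto the reference via quantile mapping; the quantile-based matching is an illustrative choice, since the abstract does not give the exact stretching rule:

```python
import numpy as np

def intensity_scaling(reference, lir, hir):
    """Step 1 (IS): linearly rescale the reference image to [LIR, HIR]."""
    lo, hi = reference.min(), reference.max()
    return (reference - lo) / (hi - lo + 1e-8) * (hir - lir) + lir

def histogram_normalization(low_quality, scaled_reference, n_quantiles=256):
    """Step 2 (HN): stretch the low-quality image's histogram to match the
    reference histogram, so its intensities also lie between LIR and HIR."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(low_quality, q)
    ref_q = np.quantile(scaled_reference, q)
    return np.interp(low_quality, src_q, ref_q)
```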