Search Results

143 result(s) for "Noise normalisation"
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis – LDA, Support Vector Machine – SVM, Weighted Robust Distance – WeiRD, Gaussian Naïve Bayes – GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA.
• We provide Python and MATLAB tutorials for key analysis steps.
• We compared dissimilarity measures and preprocessing choices for MEG MVPA.
• Multivariate noise normalisation is a key preprocessing step.
• LDA, SVM and WeiRD are recommended classifiers for decoding.
• The cross-validated Euclidean distance is a reliable and unbiased choice for RSA.
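The key preprocessing step this abstract recommends, multivariate noise normalization, amounts to whitening the sensor patterns with a shrinkage-regularized estimate of the noise covariance. Below is a minimal Python sketch of that idea; the function name, the epochs/labels layout, and the use of scikit-learn's Ledoit-Wolf estimator are illustrative choices of mine, not the paper's tutorial code.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def noise_normalize(epochs, labels):
    """Whiten MEG patterns with a shrinkage estimate of the noise covariance.

    epochs: array (n_trials, n_channels, n_times); labels: array (n_trials,).
    The noise covariance is estimated from within-class residuals pooled
    over time points, then each pattern is multiplied by its inverse
    matrix square root.
    """
    residuals = np.concatenate([
        epochs[labels == c] - epochs[labels == c].mean(axis=0)
        for c in np.unique(labels)
    ])
    # Pool residuals over time: (n_trials * n_times, n_channels)
    X = residuals.transpose(0, 2, 1).reshape(-1, residuals.shape[1])
    cov = LedoitWolf().fit(X).covariance_
    # Inverse matrix square root via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.einsum('ij,njt->nit', W, epochs)
```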
Reliability of dissimilarity measures for multi-voxel pattern analysis
Representational similarity analysis of activation patterns has become an increasingly important tool for studying brain representations. The dissimilarity between two patterns is commonly quantified by the correlation distance or the accuracy of a linear classifier. However, there are many different ways to measure pattern dissimilarity and little is known about their relative reliability. Here, we compare the reliability of three classes of dissimilarity measure: classification accuracy, Euclidean/Mahalanobis distance, and Pearson correlation distance. Using simulations and four real functional magnetic resonance imaging (fMRI) datasets, we demonstrate that continuous dissimilarity measures are substantially more reliable than the classification accuracy. The difference in reliability can be explained by two characteristics of classifiers: discretization and susceptibility of the discriminant function to shifts of the pattern ensemble between imaging runs. Reliability can be further improved through multivariate noise normalization for all measures. Finally, unlike conventional distance measures, cross-validated distances provide unbiased estimates of pattern dissimilarity on a ratio scale, thus providing an interpretable zero point. Overall, our results indicate that the cross-validated Mahalanobis distance is preferable to both the classification accuracy and the correlation distance for characterizing representational geometries.
• We compare the reliability of dissimilarity measures and classifiers in fMRI.
• We examine the effect of noise normalizations and cross-validation on reliability.
• Multivariate noise normalization makes the dissimilarity measures more reliable.
• Cross-validation makes the dissimilarity measures more reliable and unbiased.
• Dissimilarities measure brain representations more reliably than classifiers.
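The recommended measure, the cross-validated Mahalanobis distance, can be sketched compactly: on noise-normalized patterns it reduces to averaging inner products of condition differences across independent imaging runs, which cancels the additive noise bias. The Python sketch below assumes run-wise patterns that have already been whitened; the function name and argument layout are illustrative.

```python
import numpy as np
from itertools import combinations

def crossnobis(pattern_a, pattern_b):
    """Cross-validated (squared) Mahalanobis distance between two conditions.

    pattern_a, pattern_b: arrays (n_runs, n_voxels) of noise-normalized
    (whitened) run-wise activation patterns, so the plain inner product
    here corresponds to a Mahalanobis metric. Averaging products across
    independent run pairs cancels the noise bias, yielding an unbiased
    distance with an interpretable zero point.
    """
    diffs = pattern_a - pattern_b           # (n_runs, n_voxels)
    prods = [diffs[i] @ diffs[j]            # independent partitions only
             for i, j in combinations(range(len(diffs)), 2)]
    return np.mean(prods) / diffs.shape[1]  # optional scaling by pattern size
```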
SimpleSTORM: a fast, self-calibrating reconstruction algorithm for localization microscopy
Although there are many reconstruction algorithms for localization microscopy, their use is hampered by the difficulty of adjusting a possibly large number of parameters correctly. We propose SimpleSTORM, an algorithm that determines appropriate parameter settings directly from the data in an initial self-calibration phase. The algorithm is based on a carefully designed yet simple model of the image acquisition process, which allows us to standardize each image such that the background has zero mean and unit variance. This standardization makes it possible to detect spots by a true statistical test (instead of hand-tuned thresholds) and to de-noise the images with an efficient matched filter. By reducing the strength of the matched filter, SimpleSTORM also performs reasonably on data with high spot density, trading off localization accuracy for improved detection performance. Extensive validation experiments on the ISBI Localization Challenge Dataset, as well as real image reconstructions, demonstrate the good performance of our algorithm.
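The detection step can be illustrated as follows: once an image has been standardized to a zero-mean, unit-variance background, a matched filter plus a Gaussian quantile give a principled per-pixel test. This Python sketch assumes the standardization has already been done; the function name and defaults are mine, not SimpleSTORM's actual interface.

```python
import numpy as np
from scipy import ndimage, stats

def detect_spots(img, psf_sigma=1.2, alpha=1e-3):
    """Spot detection on a standardized image via a per-pixel statistical test.

    Assumes img was already standardized so the background is ~N(0, 1);
    SimpleSTORM derives the camera gain/offset needed for this in its
    self-calibration phase, which is taken as given here.
    """
    # Matched filter: correlate with the (approximately Gaussian) PSF
    filt = ndimage.gaussian_filter(img, psf_sigma)
    # Filtered unit-variance noise has std = L2 norm of the kernel weights
    size = int(8 * psf_sigma) | 1
    impulse = np.zeros((size, size))
    impulse[size // 2, size // 2] = 1.0
    kernel = ndimage.gaussian_filter(impulse, psf_sigma)
    noise_std = np.sqrt((kernel ** 2).sum())
    # Keep local maxima exceeding the (1 - alpha) Gaussian quantile
    threshold = stats.norm.ppf(1 - alpha) * noise_std
    local_max = filt == ndimage.maximum_filter(filt, size=3)
    return np.argwhere(local_max & (filt > threshold))
```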
Performance analysis of a noise-normalized FFH/MFSK receiver over Rayleigh fading channels with partial-band noise jamming
Fast frequency hopping can reduce the degradation caused by interference. The performance of noise-normalized fast frequency-hopped M-ary orthogonal frequency-shift-keying (FFH/MFSK) receivers over frequency-nonselective Rayleigh fading channels with partial-band noise jamming is analyzed. Instead of numerical evaluation, a closed-form error probability expression is given based on the theory of complex variables. The simulation results validate the analytical results. It is shown that there is an optimum diversity order and that the system performance improves with an increase in the modulation order.
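The noise-normalization idea in such a receiver is easy to state: each hop's square-law detector outputs are divided by an estimate of the noise-plus-jamming power in that hop before diversity combining, so jammed hops carry less weight in the symbol decision. A minimal numpy sketch of this decision rule, with illustrative names and shapes:

```python
import numpy as np

def noise_normalized_decision(energies, noise_power):
    """Diversity combining for a noise-normalized FFH/MFSK receiver (sketch).

    energies: array (L_hops, M_tones) of square-law detector outputs for
    each of the L frequency hops and M candidate tones.
    noise_power: array (L_hops,) estimating the noise-plus-jamming power
    in each hop (e.g. from tones known to be unused).
    Each hop is divided by its own noise estimate before summing, so hops
    falling in the jammed part of the band are de-emphasized.
    """
    combined = (energies / noise_power[:, None]).sum(axis=0)
    return int(np.argmax(combined))  # index of the decided MFSK symbol
```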
Enhanced CNN for image denoising
Owing to their flexible architectures, deep convolutional neural networks (CNNs) are successfully used for image denoising. However, they suffer from the following drawbacks: (i) deep network architectures are very difficult to train, and (ii) deeper networks face the challenge of performance saturation. In this study, the authors propose a novel method called the enhanced convolutional neural denoising network (ECNDNet). Specifically, they use residual learning and batch normalisation techniques to address the problem of training difficulties and accelerate the convergence of the network. In addition, dilated convolutions are used in the proposed network to enlarge the context information and reduce the computational cost. Extensive experiments demonstrate that the ECNDNet outperforms the state-of-the-art methods for image denoising.
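To make the three ingredients concrete (residual learning, batch normalisation, dilated convolutions), here is a simplified PyTorch sketch in the spirit of ECNDNet; the layer count, feature width, and dilation schedule are illustrative placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedDenoiser(nn.Module):
    """Residual-learning denoiser with batch norm and dilated convolutions.

    The network predicts the noise map; the clean image is input - output.
    Dilated 3x3 convolutions enlarge the receptive field (context) without
    extra parameters, and batch norm eases training of the deep stack.
    """
    def __init__(self, channels=1, features=64,
                 dilations=(1, 2, 3, 4, 3, 2, 1)):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for d in dilations:
            layers += [nn.Conv2d(features, features, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: subtract the predicted noise from the input
        return noisy - self.body(noisy)

# Example: denoised = DilatedDenoiser()(torch.randn(1, 1, 64, 64))
```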
Dynamic Data Filtering of Long-Range Doppler LiDAR Wind Speed Measurements
Doppler LiDARs have become flexible and versatile remote sensing devices for wind energy applications. The ability to measure radial wind speed components simultaneously at multiple distances is an advantage over meteorological masts. However, these measurements must be filtered due to the measurement geometry, hard targets and atmospheric conditions. To ensure maximum data availability while keeping measurement errors low, we introduce a dynamic data filter approach that conditionally decouples data availability from increasing range. The new filter approach is based on the assumption of self-similarity, which has not previously been used for LiDAR data filtering. We tested the accuracy of the dynamic data filter approach against other filter approaches commonly used in research and industry, using data from a long-range pulsed LiDAR installed at the offshore wind farm 'alpha ventus', where an ultrasonic anemometer located approximately 2.8 km from the LiDAR served as reference. The analysis of around 1.5 weeks of data shows that the error of the mean radial velocity can be minimised for both wake and free-stream conditions.
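The abstract does not spell out the self-similarity criterion, so the Python sketch below only illustrates the general shape of conditional, range-dependent filtering: flag samples that deviate from a robust local estimate within a sliding range window. All names and thresholds are illustrative, not the paper's method.

```python
import numpy as np

def filter_radial_speeds(v, k=5, n_sigma=3.0):
    """Generic range-gate outlier filter for LiDAR radial wind speeds (sketch).

    v: array (n_times, n_ranges) of radial velocities. For each sample, the
    local median and MAD over a (2k+1)-gate window along range flag
    outliers, which are replaced by NaN.
    """
    v = np.asarray(v, dtype=float)
    filtered = v.copy()
    for r in range(v.shape[1]):
        lo, hi = max(0, r - k), min(v.shape[1], r + k + 1)
        window = v[:, lo:hi]
        med = np.nanmedian(window, axis=1)
        mad = 1.4826 * np.nanmedian(np.abs(window - med[:, None]), axis=1)
        bad = np.abs(v[:, r] - med) > n_sigma * np.maximum(mad, 1e-6)
        filtered[bad, r] = np.nan
    return filtered
```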
Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features
In the present study, a new algorithm for automatic target recognition (ATR) in synthetic aperture radar (SAR) images is proposed. First, Moving and Stationary Target Acquisition and Recognition (MSTAR) image chips are segmented and then passed through a number of preprocessing stages such as histogram equalisation and position and size normalisation. Second, feature extraction based on Zernike moments (ZMs), which have linear-transformation invariance properties and robustness in the presence of noise, is introduced for the first time. Third, genetic-algorithm-based feature selection with a support vector machine classifier is presented to select the optimal subset of ZM features and decrease the computational complexity. Experimental results demonstrate the efficiency of the proposed approach for target recognition in SAR imagery. The obtained results show that a small number of ZM features is sufficient to achieve recognition rates that rival other established methods, so ZM features can be regarded as a powerful discriminatory feature for ATR applications in SAR imagery. Furthermore, the classifier performs fairly well until the signal-to-noise ratio falls beneath 5 dB for noisy images.
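For a feel of the feature-extraction stage, Zernike-moment magnitudes can be computed with an off-the-shelf implementation such as mahotas; this is purely an illustration, not the authors' code, and the preprocessing (segmentation, equalisation, centring) is assumed to have been done already.

```python
import numpy as np
import mahotas

def zernike_features(chip, degree=8):
    """Extract Zernike-moment features from a preprocessed SAR image chip.

    Zernike moment magnitudes are rotation invariant, which is what makes
    them attractive as SAR target descriptors. The moments are computed
    over the disk inscribed in the chip.
    """
    radius = min(chip.shape) // 2
    return mahotas.features.zernike_moments(chip, radius, degree=degree)

# Example: features = zernike_features(np.random.rand(64, 64))
# -> 1-D feature vector, e.g. input to an SVM after feature selection
```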
Pooling across cells to normalize single-cell RNA sequencing data with many zero counts
Normalization of single-cell RNA sequencing data is necessary to eliminate cell-specific biases prior to downstream analyses. However, this is not straightforward for noisy single-cell data where many counts are zero. We present a novel approach where expression values are summed across pools of cells, and the summed values are used for normalization. Pool-based size factors are then deconvolved to yield cell-based factors. Our deconvolution approach outperforms existing methods for accurate normalization of cell-specific biases in simulated data. Similar behavior is observed in real data, where deconvolution improves the relevance of results of downstream analyses.
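The deconvolution idea can be sketched as a linear system: each pool's size factor (a median ratio against an average pseudo-cell) equals, approximately, the sum of the factors of its member cells, and solving over many overlapping pools recovers per-cell factors. The Python sketch below is a strongly simplified version; the published method (implemented as computeSumFactors in the scran package) adds ring-ordering of cells, multiple pool sizes and robustification that are omitted here.

```python
import numpy as np

def deconvolve_size_factors(counts, pool_size=20):
    """Simplified pool-and-deconvolve normalization (sketch of the idea).

    counts: array (n_genes, n_cells). Cells are pooled in sliding windows;
    pooling sums out most zeros, so the pool-level median ratio against an
    average pseudo-cell is stable even when single-cell counts are sparse.
    """
    n_cells = counts.shape[1]
    ref = counts.mean(axis=1)                  # average pseudo-cell
    order = np.argsort(counts.sum(axis=0))     # order cells by library size
    A, b = [], []
    for start in range(n_cells):
        pool = order[np.arange(start, start + pool_size) % n_cells]
        pooled = counts[:, pool].sum(axis=1)
        ratios = pooled[ref > 0] / ref[ref > 0]
        row = np.zeros(n_cells)
        row[pool] = 1.0                        # pool factor = sum of cell factors
        A.append(row)
        b.append(np.median(ratios))            # pool-based size factor
    factors = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
    return factors / factors.mean()            # scale to unit mean
```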
Strategies for Accurate Cell Type Identification in CODEX Multiplexed Imaging Data
Multiplexed imaging is a recently developed and powerful single-cell biology research tool. However, it presents new sources of technical noise that are distinct from other types of single-cell data, necessitating new practices for single-cell multiplexed imaging processing and analysis, particularly regarding cell-type identification. Here we created single-cell multiplexed imaging datasets by performing CODEX on four sections of the human colon (ascending, transverse, descending, and sigmoid) using a panel of 47 oligonucleotide-barcoded antibodies. After cell segmentation, we implemented five different normalization techniques crossed with four unsupervised clustering algorithms, resulting in 20 unique cell-type annotations for the same dataset. We generated two standard annotations: hand-gated cell types and cell types produced by over-clustering with spatial verification. We then compared these annotations at four levels of cell-type granularity. First, increasing cell-type granularity led to decreased labeling accuracy; therefore, subtle phenotype annotations should be avoided at the clustering step. Second, accuracy in cell-type identification varied more with normalization choice than with clustering algorithm. Third, unsupervised clustering better accounted for segmentation noise during cell-type annotation than hand-gating. Fourth, Z-score normalization was generally effective in mitigating the effects of noise from single-cell multiplexed imaging. Variation in cell-type identification will lead to significant differential spatial results such as cellular neighborhood analysis; consequently, we also make recommendations for accurately assigning cell-type labels to CODEX multiplexed imaging.
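The normalization the paper found generally effective, per-marker Z-scoring, is simple to state; below is a minimal Python sketch (array shapes and the function name are illustrative, and any upstream intensity transform is out of scope here).

```python
import numpy as np

def zscore_markers(expression):
    """Z-score each marker across cells prior to clustering (sketch).

    expression: array (n_cells, n_markers) of segmented single-cell mean
    intensities. Z-scoring per marker puts all antibody channels on a
    comparable scale, mitigating channel-specific imaging noise.
    """
    mu = expression.mean(axis=0)
    sd = expression.std(axis=0)
    return (expression - mu) / np.where(sd > 0, sd, 1.0)
```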
DespNet: A residual learning based deep convolutional neural network for the despeckling of optical coherence tomography images
OCT (optical coherence tomography) is a non-invasive diagnostic tool for detecting and treating a wide range of retinal diseases. However, the OCT image formation process produces speckle noise, significantly degrading image quality, and these low-quality images negatively impact subsequent disease diagnosis. Traditional approaches to removing speckle noise include spatial/transform-domain filtering, dictionary learning, and hybrids of these methods. By adopting a hierarchical network topology, deep convolutional neural networks (CNNs) have expanded the capacity to harness spatial correlations and extract information at multiple resolutions, making image denoising algorithms more robust. This paper proposes a residual-learning-based despeckling CNN architecture (DespNet) for removing speckle noise from OCT images. Trained on 1440 augmented OCT images, DespNet generates residual images that contain the detailed noise pattern of the input, which, when subtracted from the noisy images, yields the denoised version. Quantitative and qualitative analyses show that images despeckled with the DespNet architecture have substantially reduced speckle noise while preserving the texture and structure that aid retinal layer segmentation and subsequent disease diagnosis.
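The residual-learning inference step described in the abstract is just a subtraction; a minimal PyTorch sketch (function name and typing are illustrative, and `model` stands for any residual CNN trained to predict the noise pattern):

```python
import torch
import torch.nn as nn

def despeckle(model: nn.Module, noisy: torch.Tensor) -> torch.Tensor:
    """Residual-learning inference (sketch): the network predicts the
    speckle pattern, which is subtracted from the noisy OCT image.
    A matching training loss would compare model(x) to (x - clean).
    """
    with torch.no_grad():
        residual = model(noisy)  # predicted speckle/noise map
    return noisy - residual      # despeckled image
```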