Catalogue Search | MBRL
Explore the vast range of titles available.
75 result(s) for "noise correction techniques"
Physiological noise modeling in fMRI based on the pulsatile component of photoplethysmograph
2021
The blood oxygenation level-dependent (BOLD) contrast mechanism allows the noninvasive monitoring of changes in deoxyhemoglobin content. As such, it is commonly used in functional magnetic resonance imaging (fMRI) to study brain activity since levels of deoxyhemoglobin are indirectly related to local neuronal activity through neurovascular coupling mechanisms. However, the BOLD signal is severely affected by physiological processes as well as motion. Due to this, several noise correction techniques have been developed to correct for the associated confounds. The present study focuses on cardiac pulsatility fMRI confounds, aiming to refine model-based techniques that utilize the photoplethysmograph (PPG) signal. Specifically, we propose a new technique based on convolution filtering, termed cardiac pulsatility model (CPM) and compare its performance with the cardiac-related RETROICOR (Card-RETROICOR), which is a technique commonly used to model fMRI fluctuations due to cardiac pulsatility. Further, we investigate whether variations in the amplitude of the PPG pulses (PPG-Amp) covary with variations in amplitude of pulse-related fMRI fluctuations, as well as with the systemic low frequency oscillations (SLFOs) component of the fMRI global signal (GS – defined as the mean signal across all gray matter voxels). Capitalizing on 3T fMRI data from the Human Connectome Project, CPM was found to explain a significantly larger fraction of the fMRI signal variance compared to Card-RETROICOR, particularly for subjects with larger heart rate variability during the scan. The amplitude of the fMRI pulse-related fluctuations did not covary with PPG-Amp; however, PPG-Amp explained significant variance in the GS that was not attributed to variations in heart rate or breathing patterns. Our results suggest that the proposed approach can model high-frequency fluctuations due to pulsation as well as low-frequency physiological fluctuations more accurately compared to model-based techniques commonly employed in fMRI studies.
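For orientation, a minimal sketch (not the paper's code) of the Card-RETROICOR baseline regressors referenced above, assuming PPG pulse peaks have already been detected; the proposed CPM convolution-filtering technique is not reproduced here:

import numpy as np

def cardiac_retroicor_regressors(peak_times, scan_times, order=2):
    # Card-RETROICOR nuisance regressors (after Glover et al., 2000).
    # peak_times: sorted times of detected PPG pulse peaks (s);
    # scan_times: acquisition time of each fMRI volume (s).
    # Edge handling outside the first/last peak is simplified.
    phase = np.zeros(len(scan_times))
    for i, t in enumerate(scan_times):
        k = np.clip(np.searchsorted(peak_times, t) - 1, 0, len(peak_times) - 2)
        t0, t1 = peak_times[k], peak_times[k + 1]
        phase[i] = 2 * np.pi * (t - t0) / (t1 - t0)  # cardiac phase at scan time
    cols = []
    for m in range(1, order + 1):  # Fourier expansion of the cardiac phase
        cols += [np.cos(m * phase), np.sin(m * phase)]
    return np.column_stack(cols)  # shape: (n_scans, 2 * order)

These regressors are appended to the fMRI design matrix so that pulse-locked variance can be regressed out.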
Journal Article
Relative intensity noise management and thermal/shot noise control for high speed ultra high bandwidth fiber reach transmission performance
by Vijayakumar, Sundararaju; Prabu, Ramachandran Thandaiah; Divya, Nune
in Algorithms, Ambient temperature, Bandwidths
2025
This work addresses relative intensity noise management and thermal/shot noise control for high-speed, ultra-high-bandwidth, long-reach fiber transmission. Total link loss variations are characterized against ambient temperature and relative refractive index difference at a 1,550 nm wavelength, a 20 km fiber link length, and a 45% fluoride dopant ratio in the fiber cable material. Total link loss variations are also demonstrated versus fiber link length and relative refractive index difference at 1,550 nm, room temperature, and the same 45% fluoride dopant ratio. The required optical signal-to-noise ratio (OSNR), electronic signal-to-noise ratio (SNR), and optimum gain variations are measured and analyzed versus fiber link length and relative refractive index difference under the same operating conditions.
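As a rough companion to the SNR analysis described above, a textbook-level sketch of the electrical SNR at a PIN photodiode receiver combining shot and thermal noise; the paper's full link model (RIN, fiber loss, amplifier gain, dopant-dependent attenuation) is not reproduced, and all parameter values below are illustrative assumptions:

import numpy as np

q = 1.602e-19    # electron charge (C)
kB = 1.381e-23   # Boltzmann constant (J/K)

def receiver_snr_db(p_rx_w, responsivity=0.8, bandwidth=10e9,
                    temp_k=300.0, r_load=50.0, i_dark=1e-9):
    i_ph = responsivity * p_rx_w                    # photocurrent (A)
    shot = 2 * q * (i_ph + i_dark) * bandwidth      # shot-noise variance (A^2)
    thermal = 4 * kB * temp_k * bandwidth / r_load  # thermal-noise variance (A^2)
    return 10 * np.log10(i_ph ** 2 / (shot + thermal))

print(receiver_snr_db(100e-6))  # e.g. 100 uW received, 10 GHz bandwidth -> ~32 dB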
Journal Article
Quantum Communication with Zero-Capacity Channels
by Yard, Jon; Smith, Graeme
in Applied sciences, Communication channels, communications technology
2008
Communication over a noisy quantum channel introduces errors in the transmission that must be corrected. A fundamental bound on quantum error correction is the quantum capacity, which quantifies the amount of quantum data that can be protected. We show theoretically that two quantum channels, each with a transmission capacity of zero, can have a nonzero capacity when used together. This unveils a rich structure in the theory of quantum communications, implying that the quantum capacity does not completely specify a channel's ability to transmit quantum information.
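For reference, standard background rather than material from the abstract: the quantum capacity in question is the regularized coherent information,

\[
Q(\mathcal{N}) = \lim_{n\to\infty} \frac{1}{n} \max_{\rho} I_c\!\left(\rho, \mathcal{N}^{\otimes n}\right),
\qquad
I_c(\rho, \mathcal{N}) = S\!\left(\mathcal{N}(\rho)\right) - S\!\left((\mathcal{N}\otimes\mathrm{id})(\psi_\rho)\right),
\]

where \(\psi_\rho\) purifies \(\rho\) and \(S\) is the von Neumann entropy. The paper's superactivation result is the statement that \(Q(\mathcal{N}_1) = Q(\mathcal{N}_2) = 0\) while \(Q(\mathcal{N}_1 \otimes \mathcal{N}_2) > 0\).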
Journal Article
Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging
2022
Significance: Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover the optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect.
Aim: Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium.
Approach: Data-driven models were trained on two-dimensional (2D) blood vessel and three-dimensional (3D) numerical breast phantoms with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, including U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (or fluence compensation) with in-silico and in-vivo datasets.
Results: The results indicated that FD U-Net-based deconvolution improves on the reconstructed optoacoustic images by about 10% in terms of peak signal-to-noise ratio. Further, the DL models were observed to highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction.
Conclusions: The DL methods were able to compensate for nonlinear optical fluence distribution more effectively and improve the optoacoustic image quality.
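For reference, the peak signal-to-noise ratio used in the Results is the standard definition; a minimal sketch (the paper's exact evaluation pipeline and data ranges are not reproduced):

import numpy as np

def psnr(reference, estimate):
    # Peak signal-to-noise ratio in dB, taking the reference's maximum as peak.
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse)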
Journal Article
Degree of Hearing Loss Affects Bilateral Hearing Aid Benefits in Ecologically Relevant Laboratory Conditions
2019
Purpose: Previous evidence supports benefits of bilateral hearing aids, relative to unilateral hearing aid use, in laboratory environments using audio-only (AO) stimuli and relatively simple tasks. The purpose of this study was to evaluate bilateral hearing aid benefits in ecologically relevant laboratory settings, with and without visual cues. In addition, we evaluated the relationship between bilateral benefit and clinically viable predictive variables. Method: Participants included 32 adult listeners with hearing loss ranging from mild-moderate to severe-profound. Test conditions varied by hearing aid fitting type (unilateral, bilateral) and modality (AO, audiovisual). We tested participants in complex environments that evaluated the following domains: sentence recognition, word recognition, behavioral listening effort, gross localization, and subjective ratings of spatialization. Signal-to-noise ratio was adjusted to provide similar unilateral speech recognition performance in both modalities and across procedures. Results: Significant and similar bilateral benefits were measured for both modalities on all tasks except listening effort, where bilateral benefits were not identified in either modality. Predictive variables were related to bilateral benefits in some conditions. With audiovisual stimuli, increasing hearing loss, unaided speech recognition in noise, and unaided subjective spatial ability were significantly correlated with increased benefits for many outcomes. With AO stimuli, these same predictive variables were not significantly correlated with outcomes. No predictive variables were correlated with bilateral benefits for sentence recognition in either modality. Conclusions: Hearing aid users can expect significant bilateral hearing aid advantages for ecologically relevant, complex laboratory tests. Although future confirmatory work is necessary, these data indicate the presence of vision strengthens the relationship between bilateral benefits and degree of hearing loss.
Journal Article
Rate compatible reconciliation for continuous-variable quantum key distribution using Raptor-like LDPC codes
by Zhou, Chao; Wang, XiangYu; Zhang, ZhiGuo
in Astronomy, Classical and Continuum Physics, Codes
2021
In the practical continuous-variable quantum key distribution (CV-QKD) system, the postprocessing process, particularly the error correction part, significantly impacts the system performance. Multi-edge type low-density parity-check (MET-LDPC) codes are suitable for CV-QKD systems because of their Shannon-limit-approaching performance at a low signal-to-noise ratio (SNR). However, the process of designing a low-rate MET-LDPC code with good performance is extremely complicated. Thus, we introduce Raptor-like LDPC (RL-LDPC) codes into the CV-QKD system, exhibiting both the rate compatible property of the Raptor code and capacity-approaching performance of MET-LDPC codes. Moreover, this technique can significantly reduce the cost of constructing a new matrix. We design the RL-LDPC matrix with a code rate of 0.02 and easily and effectively adjust this rate from 0.016 to 0.034. Simulation results show that we can achieve more than 98% reconciliation efficiency in a range of code rate variation using only one RL-LDPC code that can support high-speed decoding with an SNR less than −16.45 dB. This code allows the system to maintain a high key extraction rate under various SNRs, paving the way for practical applications of CV-QKD systems with different transmission distances.
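The reconciliation efficiency quoted above is conventionally the ratio of the code rate to the AWGN capacity at the operating SNR; a minimal sketch with illustrative numbers, not the paper's simulation values:

import numpy as np

def reconciliation_efficiency(code_rate, snr_linear):
    # beta = R / C(SNR); C is the AWGN capacity per channel use for Gaussian signals.
    return code_rate / (0.5 * np.log2(1.0 + snr_linear))

snr = 10 ** (-15.3 / 10)                     # an illustrative low SNR (~0.0295 linear)
print(reconciliation_efficiency(0.02, snr))  # ~0.95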
Journal Article
A Tale of Two Time Scales
2005
It is a common practice in finance to estimate volatility from the sum of frequently sampled squared returns. However, market microstructure poses challenges to this estimation approach, as evidenced by recent empirical studies in finance. The present work attempts to lay out theoretical grounds that reconcile continuous-time modeling and discrete-time samples. We propose an estimation approach that takes advantage of the rich sources in tick-by-tick data while preserving the continuous-time assumption on the underlying returns. Under our framework, it becomes clear why and where the "usual" volatility estimator fails when the returns are sampled at the highest frequencies. If the noise is asymptotically small, our work provides a way of finding the optimal sampling frequency. A better approach, the "two-scales estimator," works for any size of the noise.
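A minimal sketch of the two-scales estimator the abstract introduces, assuming a 1-D array of log prices sampled at the highest (tick) frequency; the paper's constants and finite-sample adjustments are omitted:

import numpy as np

def tsrv(log_prices, K=5):
    # Realized variance on the full grid is dominated by microstructure noise.
    n = len(log_prices) - 1
    rv_all = np.sum(np.diff(log_prices) ** 2)
    # Average realized variance over K sparse subgrids (the slow time scale).
    rv_sub = np.mean([np.sum(np.diff(log_prices[k::K]) ** 2) for k in range(K)])
    n_bar = (n - K + 1) / K  # average number of returns per subgrid
    # Combine the two scales: bias-correct the slow scale using the fast one.
    return rv_sub - (n_bar / n) * rv_all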
Journal Article
Efficiency of two decoders based on hash techniques and syndrome calculation over a Rayleigh channel
2023
The explosive growth of connected devices demands high quality and reliability in data transmission and storage. Error correction codes (ECCs) contribute to this in ways that are rarely apparent to the end user, yet are indispensable and effective at the most basic level of transmission. This paper investigates and analyzes the performance of two decoders based on hash techniques and syndrome calculation over a Rayleigh channel. The decoders under study share two main features: reduced complexity compared to their competitors and good error correction performance over an additive white Gaussian noise (AWGN) channel. When applied to linear block codes such as Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and quadratic residue (QR) codes over a Rayleigh channel, the experimental comparisons show the decoders' efficiency in terms of guaranteed performance measured in bit error rate (BER). For example, the coding gain obtained by syndrome decoding and hash techniques (SDHT) applied to BCH (31, 11, 11) equals 34.5 dB, i.e., a reduction rate of 75% compared to transmission without any coding and decoding process.
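To make the syndrome-calculation side concrete, a minimal GF(2) syndrome-lookup sketch; the paper's hash indexing and Rayleigh-channel handling are not reproduced, and the table below covers single-bit errors only:

import numpy as np

def build_syndrome_table(H):
    # Map each weight-1 error pattern to its syndrome for binary parity-check H.
    n = H.shape[1]
    table = {}
    for i in range(n):
        e = np.zeros(n, dtype=int)
        e[i] = 1
        table[tuple(H @ e % 2)] = e
    return table

def decode(H, y, table):
    s = tuple(H @ y % 2)              # syndrome of the received hard-decision word
    if not any(s):
        return y                      # zero syndrome: y is already a codeword
    return (y + table.get(s, 0)) % 2  # flip the matching bit if the syndrome is known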
Journal Article
Enhancement of dark and low-contrast images using dynamic stochastic resonance
by Chouhan, Rajlaxmi; Biswas, Prabir Kumar; Jha, Rajib Kumar
in adaptive computation, adaptive histogram equalisation, Applied sciences
2013
In this study, a dynamic stochastic resonance (DSR)-based technique in the spatial domain is proposed for the enhancement of dark and low-contrast images. Stochastic resonance (SR) is a phenomenon in which the performance of a system (here, a low-contrast image) can be improved by the addition of noise. In the proposed work, however, the internal noise of an image is utilised to produce a noise-induced transition of a dark image from a state of low contrast to one of high contrast. DSR is applied in an iterative fashion by correlating the bistable system parameters of a double-well potential with the intensity values of the low-contrast image. Optimum output is ensured by adaptive computation of performance metrics: the relative contrast enhancement factor (F), perceptual quality measures, and the colour enhancement factor. When compared with existing enhancement techniques such as adaptive histogram equalisation, gamma correction, single-scale retinex, multi-scale retinex, modified high-pass filtering, edge-preserving multi-scale decomposition, and the automatic controls of popular imaging tools, the proposed technique performs significantly better in terms of contrast and colour enhancement as well as perceptual quality. A comparison with a spatial-domain SR-based technique is also illustrated.
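A minimal sketch of the iterative double-well update the abstract describes, treating normalized pixel intensities as the forcing of an overdamped bistable system; the parameter values and the adaptive stopping rule based on the metrics above are assumptions, not the paper's tuned settings:

import numpy as np

def dsr_enhance(img, a=2.0, b=1.0, dt=0.01, n_iter=30):
    # img: grayscale image scaled to [0, 1], used as the input forcing term.
    x = np.zeros_like(img, dtype=float)
    for _ in range(n_iter):
        # Discretized overdamped bistable dynamics: dx/dt = a*x - b*x^3 + input.
        x = x + dt * (a * x - b * x ** 3 + img)
    return np.clip(x, 0.0, 1.0)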
Journal Article
A Comprehensive Review of Quantum Circuit Optimization: Current Trends and Future Directions
by Puram, Varun; Johnson, Stevens; Karuppasamy, Krishnageetha
in Algorithms, Circuits, Efficiency
2025
Optimizing quantum circuits is critical for enhancing computational speed and mitigating errors caused by quantum noise. Effective optimization must be achieved without compromising the correctness of the computations. This survey explores recent advancements in quantum circuit optimization, encompassing both hardware-independent and hardware-dependent techniques. It reviews state-of-the-art approaches, including analytical algorithms, heuristic strategies, machine learning-based methods, and hybrid quantum-classical frameworks. The paper highlights the strengths and limitations of each method, along with the challenges they pose. Furthermore, it identifies potential research opportunities in this evolving field, offering insights into the future directions of quantum circuit optimization.
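To ground the survey's subject in a concrete example, a toy hardware-independent pass that cancels adjacent self-inverse gates; the real optimizers reviewed above add commutation rules, template matching, and resynthesis, and the circuit encoding here is an assumption:

SELF_INVERSE = {"H", "X", "Z", "CNOT"}

def cancel_pairs(circuit):
    # circuit: list of (gate_name, qubit_tuple) pairs, applied left to right.
    out = []
    for gate in circuit:
        if out and gate == out[-1] and gate[0] in SELF_INVERSE:
            out.pop()        # G followed by an identical G is the identity
        else:
            out.append(gate)
    return out

circ = [("H", (0,)), ("H", (0,)), ("CNOT", (0, 1)), ("X", (1,))]
print(cancel_pairs(circ))    # -> [('CNOT', (0, 1)), ('X', (1,))]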
Journal Article