Catalogue Search | MBRL
Explore the vast range of titles available.
18 result(s) for "data coherence constraints"
DECO: DIMM controller efficient for ECC operations
2014
An error-correcting code (ECC) that protects against bit errors can nonetheless degrade memory performance severely, since incomplete-word ECC write requests lead to inefficient operations on a dual in-line memory module (DIMM). A DIMM controller efficient for such ECC operations is proposed. The key idea is that read-to-write and write-to-read operations caused by incomplete-word ECC write requests are split into independent read and write operations, and then the read and write operations are individually scheduled under data coherence constraints. Experimental results show that the proposed DIMM controller achieves 11% shorter memory latency and 9.3% higher memory utilisation, on average, than the latest conventional DIMM controller in industrial multimedia applications. Moreover, it achieves up to 2.1 times higher memory performance on synthetic benchmarks.
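The split-and-schedule idea in this abstract can be sketched in a few lines. All names and the greedy batching policy below are our own illustration (not the paper's implementation): each incomplete-word ECC write becomes an independent read (fetch the rest of the ECC word) plus a write (the merged word), and reads and writes are then batched to cut read-write bus turnarounds while preserving per-address ordering, i.e. the data coherence constraint.

```python
from collections import namedtuple

Op = namedtuple("Op", "kind addr tag")

def split_requests(requests):
    """requests: list of ('R' | 'W' | 'W_partial', addr) tuples."""
    ops = []
    for tag, (kind, addr) in enumerate(requests):
        if kind == "W_partial":
            ops.append(Op("R", addr, tag))  # read the untouched bytes
            ops.append(Op("W", addr, tag))  # write back the merged word
        else:
            ops.append(Op(kind, addr, tag))
    return ops

def schedule(ops):
    """Greedy batching: issue every eligible read, then every eligible
    write; repeat. An op is eligible once all earlier ops to the same
    address have issued, so same-address order is never violated."""
    done = [False] * len(ops)
    order = []
    while not all(done):
        for kind in ("R", "W"):
            for i, op in enumerate(ops):
                if done[i] or op.kind != kind:
                    continue
                if all(done[j] for j in range(i) if ops[j].addr == op.addr):
                    done[i] = True
                    order.append(op)
    return order
```

For `[("W_partial", 0xA0), ("R", 0xB0), ("W_partial", 0xA0)]` this issues the two independent reads back-to-back before the first write-back, which is where the latency saving the abstract reports comes from.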
Journal Article
Exploiting a centrally powered coherent microcomb for lightweight optical transmission
by
Wang, Jian
,
Wang, Xingjun
,
Li, Kang
in
Carrier to noise ratios
2025
The exponential growth of data capacity in intelligent terminals drives higher data traffic toward network edges. Compact I/O systems are essential to support space-constrained infrastructures at the computing edges or modular data centers. However, scaling high-capacity transmission via increasing physical channels is constrained by limited source coherence and low optical carrier-to-noise ratios (OCNR), hindering lightweight, efficient applications such as distributed edge computing. Here, we exploit an integrated self-injection-locked dark-pulse microcomb to achieve 1 Tbps/λ/core transmission and characterize the constraints among OCNR, linewidth, and transmission rate. Furthermore, a multi-dimensional transmission architecture for multi-node aggregation is explored, boosting the transmission rate to 200 Tbps with 16 comblines at 70 Gbaud. Combined with integrated waveshapers and semiconductor optical amplifiers, a chip-level parallel carrier generator reduces system size a hundredfold while delivering 5 Tbps. Our results highlight significant potential for compact and resource-conserving transmission systems in data centers and distributed high-performance computing applications.
The exponential growth of data traffic demands efficient transmission systems. Here, the authors exploit a self-injection-locked microcomb to achieve high-capacity optical transmission, demonstrating a compact, lightweight system with potential for data centers and edge computing applications.
Journal Article
Coherent chaos in a recurrent neural network with structured connectivity
by
Landau, Itamar Daniel
,
Sompolinsky, Haim
in
Biology and Life Sciences
,
Broken symmetry
,
Coherence
2018
We present a simple model for coherent, spatially correlated chaos in a recurrent neural network. Networks of randomly connected neurons exhibit chaotic fluctuations and have been studied as a model for capturing the temporal variability of cortical activity. The dynamics generated by such networks, however, are spatially uncorrelated and do not generate coherent fluctuations, which are commonly observed across spatial scales of the neocortex. In our model we introduce a structured component of connectivity, in addition to random connections, which effectively embeds a feedforward structure via unidirectional coupling between a pair of orthogonal modes. Local fluctuations driven by the random connectivity are summed by an output mode and drive coherent activity along an input mode. The orthogonality between input and output mode preserves chaotic fluctuations by preventing feedback loops. In the regime of weak structured connectivity we apply a perturbative approach to solve the dynamic mean-field equations, showing that in this regime coherent fluctuations are driven passively by the chaos of local residual fluctuations. When we introduce a row balance constraint on the random connectivity, stronger structured connectivity puts the network in a distinct dynamical regime of self-tuned coherent chaos. In this regime the coherent component of the dynamics self-adjusts intermittently to yield periods of slow, highly coherent chaos. The dynamics display longer time-scales and switching-like activity. We show how in this regime the dynamics depend qualitatively on the particular realization of the connectivity matrix: a complex leading eigenvalue can yield coherent oscillatory chaos while a real leading eigenvalue can yield chaos with broken symmetry. The level of coherence grows with increasing strength of structured connectivity until the dynamics are almost entirely constrained to a single spatial mode. 
We examine the effects of network-size scaling and show that these results are not finite-size effects. Finally, we show that in the regime of weak structured connectivity, coherent chaos emerges also for a generalized structured connectivity with multiple input-output modes.
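The model described in this abstract is simple enough to sketch directly. The simulation below is our illustrative reconstruction (parameter names and values are our choices, not the paper's code): a random rate network plus a rank-one structured term in which an output mode `v` unidirectionally drives an orthogonal input mode `u`, so coherent activity appears along `u` without creating a feedback loop.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, beta, dt, steps = 200, 1.5, 1.0, 0.05, 2000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random connectivity
u = np.ones(N) / np.sqrt(N)                        # input (coherent) mode
v = rng.normal(size=N)
v -= (v @ u) * u                                   # enforce u orthogonal to v
v /= np.linalg.norm(v)                             # (prevents feedback loops)

x = rng.normal(size=N)
coherent = np.empty(steps)
for t in range(steps):
    r = np.tanh(x)
    # structured term: the readout along v unidirectionally drives mode u
    x += dt * (-x + J @ r + beta * np.sqrt(N) * u * (v @ r))
    coherent[t] = u @ x                            # coherent component
```

With `beta = 0` the projection `u @ x` carries only residual spatially uncorrelated fluctuations; increasing `beta` drives coherent fluctuations along `u`, matching the weak-structured-connectivity regime discussed above.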
Journal Article
Total Survey Error: Design, Implementation, and Evaluation
2010
The total survey error (TSE) paradigm provides a theoretical framework for optimizing surveys by maximizing data quality within budgetary constraints. In this article, the TSE paradigm is viewed as part of a much larger design strategy that seeks to optimize surveys by maximizing total survey quality; i.e., quality more broadly defined to include user-specified dimensions of quality. Survey methodology, viewed within this larger framework, alters our perspective on survey design, implementation, and evaluation. As an example, although a major objective of survey design is to maximize accuracy subject to cost and timeliness constraints, the survey budget must also accommodate additional objectives related to relevance, accessibility, interpretability, comparability, coherence, and completeness that are critical to a survey's “fitness for use.” The article considers how the total survey quality approach can be extended beyond survey design to include survey implementation and evaluation. In doing so, the “fitness for use” perspective is shown to influence decisions regarding how to reduce survey error during design implementation and what sources of error should be evaluated in order to assess survey quality today and to prepare for the surveys of the future.
Journal Article
AI-Enabled Frequency Diverse Array Spaceborne Surveillance Radar for Space Debris and Threat Detection Under Resource Constraints
2026
Ensuring space environment security through the detection of space debris and non-cooperative threat objects has become a critical mission for next-generation spaceborne surveillance systems. Frequency diverse array (FDA) radar, with its unique range angle-dependent beampattern, offers a transformative capability to distinguish closely spaced space threats from intense background clutter. However, the operational deployment of spaceborne FDA is inherently hindered by stringent platform resource constraints, including limited power supply, high hardware complexity, and restricted data transmission bandwidth. These physical limitations inevitably lead to incomplete signal observations, resulting in elevated sidelobes that can obscure small, high-speed space debris. To bridge the gap between hardware constraints and high-fidelity surveillance, this paper proposes an AI-enabled data recovery framework based on deep matrix factorization. Specifically designed to process the complex-valued nature of radar echoes, the proposed framework introduces two specialized architectures: a real-valued representation-based method (DMF-Rr) and a native complex-valued deep matrix factorization (CDMF) network that preserves vital phase coherence. By leveraging deep learning to “enable” sparse-sampled systems, the proposed method effectively reconstructs missing observations without requiring prior knowledge of the signal rank. Numerical results demonstrate that the AI-powered CDMF significantly suppresses the high sidelobes induced by resource-limited sampling, enabling the reliable identification and localization of weak threat objects. This study demonstrates the power of AI in overcoming the physical bottlenecks of spaceborne hardware, providing a robust solution for enhancing space situational awareness in an increasingly crowded orbital environment.
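The core mechanism here can be illustrated with a minimal sketch of complex-valued deep matrix factorization (our illustration, not the paper's CDMF network): fit a product of full-size factors `W1 @ W2 @ W3` to the observed entries of a sparsely sampled low-rank complex matrix by gradient descent, so no rank needs to be supplied.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 30, 2

def crandn(*shape, scale=1.0):
    return scale * (rng.normal(size=shape) + 1j * rng.normal(size=shape))

X = crandn(m, r) @ crandn(r, n)            # low-rank "echo" matrix
mask = rng.random((m, n)) < 0.5            # resource-limited sampling

# depth-3 factorization with full-size factors and small initialization
W = [crandn(m, m, scale=0.1), crandn(m, n, scale=0.1), crandn(n, n, scale=0.1)]
lr = 2e-4
losses = []
for _ in range(1000):
    pred = W[0] @ W[1] @ W[2]
    E = mask * (pred - X)                  # error on observed entries only
    losses.append(float(np.sum(np.abs(E) ** 2)))
    # Wirtinger gradients of the masked squared error
    g = [E @ (W[1] @ W[2]).conj().T,
         W[0].conj().T @ E @ W[2].conj().T,
         (W[0] @ W[1]).conj().T @ E]
    for Wi, gi in zip(W, g):
        Wi -= lr * gi
```

The reported advantage of depth is that gradient descent from small initialization is implicitly biased toward low-rank solutions, which is what allows recovery without specifying the rank; the native complex arithmetic keeps the phase information a real-valued reformulation would split apart.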
Journal Article
Hyperspectral Sensors as a Management Tool to Prevent the Invasion of the Exotic Cordgrass Spartina densiflora in the Doñana Wetlands
by
Castellanos, Eloy
,
Afán, Isabel
,
Bustamante, Javier
in
adaptive coherence estimator
,
Adaptive filters
,
Airborne sensing
2016
We test the use of hyperspectral sensors for the early detection of the invasive dense-flowered cordgrass (Spartina densiflora Brongn.) in the Guadalquivir River marshes, Southwestern Spain. We flew a CASI-1500 (368–1052 nm) sensor and an AHS (430–13,000 nm) sensor in tandem over an area with presence of S. densiflora. We simplified the processing of hyperspectral data (no atmospheric correction and no data-reduction techniques) to test whether these treatments were necessary for accurate S. densiflora detection in the area. We tested several statistical signal detection algorithms implemented in ENVI software as spectral target detection techniques (matched filtering, constrained energy minimization, orthogonal subspace projection, target-constrained interference minimized filter, and adaptive coherence estimator) and compared them to the well-known spectral angle mapper, using spectra extracted from ground-truth locations in the images. The target S. densiflora was easy to detect in the marshes by all algorithms in images of both sensors. The best methods (adaptive coherence estimator and target-constrained interference minimized filter) on the best sensor (AHS) produced 100% discrimination (Kappa = 1, AUC = 1) at the study site and showed only some decline in performance when extrapolated to a new nearby area. AHS outperformed CASI despite its coarser spatial resolution (4 m vs. 1 m) and lower spectral resolution in the visible and near-infrared range, thanks to a better signal-to-noise ratio. The larger spectral range of AHS in the short-wave and thermal infrared was of no particular advantage. Our conclusions are that it is possible to use hyperspectral sensors to map the early spread of S. densiflora in the Guadalquivir River marshes.
AHS is the most suitable airborne hyperspectral sensor for this task and the signal processing techniques target-constrained interference minimized filter (TCIMF) and adaptive coherence estimator (ACE) are the best performing target detection techniques that can be employed operationally with a simplified processing of hyperspectral images.
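The adaptive coherence estimator (ACE) named above has a compact closed form. For a target signature s, background covariance C, and pixel spectrum x, ACE(x) = (sᵀC⁻¹x)² / ((sᵀC⁻¹s)(xᵀC⁻¹x)): the squared cosine between s and x in background-whitened space, so it is invariant to pixel brightness. The sketch below uses synthetic spectra (our illustration, not the study's data).

```python
import numpy as np

def ace(x, s, C):
    """Adaptive coherence estimator for one pixel spectrum x."""
    Ci = np.linalg.inv(C)
    return (s @ Ci @ x) ** 2 / ((s @ Ci @ s) * (x @ Ci @ x))

rng = np.random.default_rng(2)
bands = 5
background = rng.normal(size=(500, bands))     # background pixel spectra
C = np.cov(background, rowvar=False)           # scene covariance estimate
s = rng.normal(size=bands)                     # target signature

target_pixel = 3.0 * s + 0.05 * rng.normal(size=bands)  # bright target
clutter_pixel = rng.normal(size=bands)                   # background pixel
```

A detection map then thresholds `ace(...)` per pixel; values near 1 flag target-like spectra regardless of illumination, which is the brightness invariance that makes ACE robust in these comparisons.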
Journal Article
Topology-aware non-rigid point set registration via global–local topology preservation
2019
We propose a new topology-aware point set registration algorithm which can cope with multi-part articulated and non-rigid deformations. Point set registration is formulated as a maximum likelihood (ML) estimation problem where two topologically complementary constraints are jointly optimized in a probabilistic framework. The first is coherent point drift that keeps the overall spatial connectivity and associativity by moving the point set collectively and coherently. The second is local linear embedding that preserves the local topological structure during registration. Hence, the new algorithm is called global–local topology preservation (GLTP). Without any pre-segmentation and correspondence initialization, GLTP is particularly useful and effective in dealing with complex shape matching with non-coherent and non-rigid local deformations at different parts of a point set. We have derived the expectation maximization algorithm for the ML optimization constrained with both regularization terms. Experimental results on a large set of 2D and 3D examples show the advantages and robustness of GLTP over existing algorithms in the presence of outliers, noise and missing data, especially in the case of articulated non-rigid transformations.
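The local-linear-embedding (LLE) constraint that GLTP combines with coherent point drift can be sketched as follows (a toy illustration, not the paper's code): each point is written as an affine combination of its k nearest neighbours, and registration penalizes deviation from those weights.

```python
import numpy as np

def lle_weights(P, k=4, reg=1e-3):
    """Affine reconstruction weights of each point from its k nearest
    neighbours (rows sum to one, diagonal is zero)."""
    n = len(P)
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(P - P[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]           # k nearest, excluding self
        Z = P[nbrs] - P[i]                      # centred neighbourhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)      # regularize degenerate cases
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                # affine: weights sum to 1
    return W

rng = np.random.default_rng(4)
P = rng.normal(size=(50, 2))
W = lle_weights(P)
residual = np.linalg.norm(P - W @ P)
```

Because each row of `W` sums to one, the reconstruction residual is unchanged when the whole set is rigidly translated, which is why this term constrains local topology while leaving the global (coherent) motion to the other regularizer.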
Journal Article
Blockchain Transaction Fee Forecasting: A Comparison of Machine Learning Methods
2023
Gas is the transaction-fee metering system of the Ethereum network. Users of the network must select a gas price to submit with their transaction, and this selection carries a risk of overpaying or of delayed/unprocessed transactions. In this work, we investigate data in the aftermath of the London Hard Fork and shed light on the transaction dynamics of the network after this major fork. As such, this paper provides an update on work prior to 2019 on the link between EthUSD/BitUSD and gas price. For forecasting, we compare a novel combination of machine learning methods such as Direct-Recursive Hybrid LSTM, CNN-LSTM, and Attention-LSTM. These are combined with wavelet threshold denoising and matrix profile data processing toward the forecasting of block minimum gas price, on a 5-min timescale, over multiple lookaheads. As the first application of the matrix profile to gas price data and forecasting that we are aware of, this study demonstrates that matrix profile data can enhance attention-based models; however, given the hardware constraints, hybrid models outperformed attention and CNN-LSTM models. The wavelet coherence of inputs demonstrates correlation in multiple variables on a 1-day timescale, which is a deviation of base fee from gas price. A Direct-Recursive Hybrid LSTM strategy is found to outperform other models, with an average RMSE of 26.08 and R2 of 0.54 over a 50-min lookahead window compared to an RMSE of 26.78 and R2 of 0.452 in the best-performing attention model. Hybrid models are shown to have favorable performance up to a 20-min lookahead, with performance comparable to attention models when forecasting 25–50 min ahead. Forecasts over a range of lookaheads allow users to make an informed decision on gas price selection and the optimal window in which to submit their transaction without fear of it being rejected.
This, in turn, gives more detailed insight into gas price dynamics than existing recommenders, oracles and forecasting approaches, which provide simple heuristics or limited lookahead horizons.
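The matrix profile used above as a forecasting input can be computed by brute force in a few lines: for every length-m window of the series, take the z-normalized Euclidean distance to its nearest non-trivial match elsewhere. (Production pipelines use fast algorithms such as STOMP; this sketch is for clarity only.)

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile of a 1-D series for subsequence length m."""
    ts = np.asarray(ts, dtype=float)
    subs = np.lib.stride_tricks.sliding_window_view(ts, m)
    # z-normalize every subsequence so matches are shape-based
    z = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    n = len(z)
    excl = m // 2                                 # trivial-match exclusion zone
    mp = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(z - z[i], axis=1)
        d[max(0, i - excl): i + excl + 1] = np.inf
        mp[i] = d.min()
    return mp
```

A low profile value marks a repeated regime (motif), a high value a discord; fed alongside the price series, these are the features the study reports as enhancing the attention-based models.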
Journal Article
Balanced and Coherent Climate Estimation by Combining Data with a Biased Coupled Model
2014
Given a biased coupled model and the atmospheric and oceanic observing system, maintaining a balanced and coherent climate estimation is of critical importance for producing accurate climate analysis and prediction initialization. However, because of limitations of the observing system (e.g., most of the oceanic measurements are only available for the upper ocean), directly evaluating climate estimation with real observations is difficult. With two coupled models that are biased with respect to each other, a biased twin experiment is designed to simulate the problem. To do that, the atmospheric and oceanic observations drawn from one model based on the modern climate observing system are assimilated into the other. The model that produces observations serves as the truth, and the degree to which an assimilation recovers the truth steadily and coherently is an assessment of the impact of the data constraint scheme on climate estimation. Given the assimilation model bias of a warmer atmosphere and colder ocean, where the atmospheric-only (oceanic-only) data constraint produces an overcooling (overwarming) ocean through the atmosphere–ocean interaction, constraining with both atmospheric and oceanic data creates a balanced and coherent ocean estimate consistent with the observational model. Moreover, the consistent atmosphere–ocean constraint produces the most accurate estimate for North Atlantic Deep Water (NADW), whereas NADW is too strong (weak) if the system is only constrained by atmospheric (oceanic) data. These twin experiment results provide the insight that consistent data constraints across multiple components are very important when a coupled model is combined with the climate observing system for climate estimation and prediction initialization.
Journal Article
Sharp and laterally constrained multitrace impedance inversion based on blocky coordinate descent
2018
Seismic impedance inversion is a well-known method used to obtain the image of subsurface geological structures. Utilizing the spatial coherence among seismic traces, laterally constrained multitrace impedance inversion (LCI) is superior to trace-by-trace inversion and can produce a more realistic image of the subsurface structures. However, when the traces are numerous, solving the large-scale matrix in the multitrace inversion incurs great computational cost and memory usage, which restricts the efficiency and applicability of existing multitrace inversion algorithms. In addition, multitrace inversion methods must consider not only the lateral correlation but also constraints in the temporal dimension. Typically, these vertical constraints represent the stratigraphic characteristics of the reservoir. For instance, total-variation regularization is adopted to obtain a blocky structure; however, it limits the magnitude of model parameter variation and therefore somewhat distorts the real image. In this paper, we propose two schemes to solve these issues. First, we introduce a fast algorithm called blocky coordinate descent (BCD) to derive a new framework for laterally constrained multitrace impedance inversion. This new BCD-based inversion approach is fast and uses less memory. Next, we introduce a minimum gradient support regularization into the BCD-based laterally constrained inversion. This new approach can adapt to sharp layer boundaries while keeping spatial coherence. The feasibility of the proposed method is illustrated by numerical tests on both synthetic data and field seismic data.
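The blocky-coordinate-descent idea can be sketched on a toy problem (operator `G`, data `D`, and weight `lam` below are our illustrative stand-ins, and a simple lateral-smoothness penalty replaces the paper's minimum-gradient-support term): instead of assembling one huge multitrace system, update a single trace at a time with its lateral neighbours held fixed, so only small per-trace systems are ever solved.

```python
import numpy as np

rng = np.random.default_rng(3)
ns, nt = 40, 10                                  # samples per trace, traces
G = rng.normal(size=(ns, ns)) / np.sqrt(ns)      # forward (wavelet) operator
M_true = np.cumsum(rng.normal(size=(ns, nt)), axis=0)   # impedance model
D = G @ M_true + 0.01 * rng.normal(size=(ns, nt))       # observed data
lam = 0.5                                        # lateral-coherence weight

def objective(M):
    lateral = np.sum((M[:, 1:] - M[:, :-1]) ** 2)
    return np.sum((G @ M - D) ** 2) + lam * lateral

M = np.zeros((ns, nt))
A = G.T @ G
for sweep in range(20):
    for j in range(nt):                          # one small solve per trace
        nbrs = [M[:, k] for k in (j - 1, j + 1) if 0 <= k < nt]
        rhs = G.T @ D[:, j] + lam * sum(nbrs)
        M[:, j] = np.linalg.solve(A + lam * len(nbrs) * np.eye(ns), rhs)
```

Each inner solve exactly minimizes the convex objective over trace j, so the objective decreases monotonically across sweeps; swapping the smoothness penalty for minimum gradient support is what lets the published method keep sharp layer boundaries.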
Journal Article