Catalogue Search | MBRL
Explore the vast range of titles available.
153 result(s) for "efficient coding"
Low Computational Coding-Efficient Distributed Video Coding: Adding a Decision Mode to Limit Channel Coding Load
2023
Distributed video coding (DVC) is based on distributed source coding (DSC) concepts, in which video statistics are used partially or completely at the decoder rather than the encoder. The rate-distortion (RD) performance of distributed video codecs substantially lags that of conventional predictive video coding. Several techniques and methods are employed in DVC to overcome this performance gap and achieve high coding efficiency while maintaining low encoder computational complexity. However, it remains challenging to achieve high coding efficiency while limiting the computational complexity of the encoding and decoding processes. The deployment of distributed residual video coding (DRVC) improves coding efficiency, but significant enhancements are still required to reduce these gaps. This paper proposes the QUAntized Transform ResIdual Decision (QUATRID) scheme, which improves coding efficiency by deploying the Quantized Transform Decision Mode (QUAM) at the encoder. The proposed QUATRID scheme's main contribution is the design and integration of a novel QUAM method into DRVC that effectively skips the zero quantized transform (QT) blocks, thus limiting the number of input bit planes to be channel encoded and consequently reducing both the channel encoding and decoding computational complexity. Moreover, an online correlation noise model (CNM) is specifically designed for the QUATRID scheme and implemented at its decoder. This online CNM improves the channel decoding process and contributes to the bit rate reduction. Finally, a methodology for the reconstruction of the residual frame (R̂) is developed that utilizes the decision mode information passed by the encoder, the decoded quantized bins, and the transformed estimated residual frame. The Bjøntegaard delta analysis of experimental results shows that QUATRID outperforms DISCOVER, attaining PSNR gains between 0.06 dB and 0.32 dB and coding-efficiency improvements ranging from 5.4 to 10.48 percent.
In addition, the results show that, for all types of motion videos, the proposed QUATRID scheme outperforms DISCOVER in reducing both the number of input bit planes to be channel encoded and the overall encoder computational complexity. The bit-plane reduction exceeds 97%, while the overall Wyner-Ziv encoder and channel coding computational complexities are reduced more than nine-fold and 34-fold, respectively.
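The zero-block skipping idea at the heart of the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the quantizer step, block contents, and the decision interface are all assumptions.

```python
import numpy as np

def quantize_transform(block, step=8):
    """Uniform quantizer for a transformed residual block (illustrative only;
    the paper's actual transform and quantizer are not reproduced here)."""
    return np.round(block / step).astype(int)

def quam_skip_decision(qt_blocks):
    """Hypothetical QUAM-style decision: all-zero quantized-transform (QT)
    blocks are skipped, so fewer bit planes reach the channel encoder."""
    decisions = []
    for blk in qt_blocks:
        if np.all(blk == 0):
            decisions.append(("skip", None))   # decoder only needs the skip flag
        else:
            decisions.append(("code", blk))    # block proceeds to channel coding
    return decisions

# A small residual block quantizes to all zeros and is skipped;
# a large one survives quantization and is channel encoded.
blocks = [np.full((4, 4), 1.0), np.full((4, 4), 20.0)]
decs = quam_skip_decision([quantize_transform(b) for b in blocks])
print([d[0] for d in decs])   # ['skip', 'code']
```

Signalling only a per-block flag instead of channel-encoding zero blocks is what reduces the number of input bit planes in this sketch.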
Journal Article
Toward a unified theory of efficient, predictive, and sparse coding
by Marre, Olivier; Tkačik, Gašper; Chalk, Matthew
in Animals; Biological Sciences; Biophysics and Computational Biology
2018
A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. To this end, “efficient coding” posits that sensory neurons encode maximal information about their inputs given internal constraints. There exist, however, many variants of efficient coding (e.g., redundancy reduction, different formulations of predictive coding, robust coding, sparse coding, etc.), differing in their regimes of applicability, in the relevance of signals to be encoded, and in the choice of constraints. It is unclear how these types of efficient coding relate or what is expected when different coding objectives are combined. Here we present a unified framework that encompasses previously proposed efficient coding models and extends to unique regimes. We show that optimizing neural responses to encode predictive information can lead them to either correlate or decorrelate their inputs, depending on the stimulus statistics; in contrast, at low noise, efficiently encoding the past always predicts decorrelation. Later, we investigate coding of naturalistic movies and show that qualitatively different types of visual motion tuning and levels of response sparsity are predicted, depending on whether the objective is to recover the past or predict the future. Our approach promises a way to explain the observed diversity of sensory neural responses, as due to multiple functional goals and constraints fulfilled by different cell types and/or circuits.
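The decorrelation regime mentioned above (efficiently encoding the past at low noise) is classically realized by a whitening transform built from the input covariance. A minimal sketch, with a toy 2-D Gaussian stimulus whose covariance values are assumed for illustration:

```python
import numpy as np

# At low noise, efficient coding of the past predicts decorrelation; the
# classic way to decorrelate is whitening by the inverse square root of the
# input covariance.
rng = np.random.default_rng(1)
C = np.array([[1.0, 0.8], [0.8, 1.0]])            # correlated stimulus statistics
x = rng.multivariate_normal([0.0, 0.0], C, size=10000)

# ZCA whitening: W = S^{-1/2} for the sample covariance S, so responses
# r = x @ W have exactly identity sample covariance.
evals, evecs = np.linalg.eigh(np.cov(x.T))
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
r = x @ W

print(np.round(np.cov(r.T), 3))   # identity: the responses are decorrelated
```

The abstract's point is that this decorrelation outcome is objective-dependent: optimizing for predictive information can instead favour correlating the inputs.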
Journal Article
Lawful relation between perceptual bias and discriminability
2017
Perception of a stimulus can be characterized by two fundamental psychophysical measures: how well the stimulus can be discriminated from similar ones (discrimination threshold) and how strongly the perceived stimulus value deviates on average from the true stimulus value (perceptual bias). We demonstrate that perceptual bias and discriminability, as functions of the stimulus value, follow a surprisingly simple mathematical relation. The relation, which is derived from a theory combining optimal encoding and decoding, is well supported by a wide range of reported psychophysical data including perceptual changes induced by contextual modulation. The large empirical support indicates that the proposed relation may represent a psychophysical law in human perception. Our results imply that the computational processes of sensory encoding and perceptual decoding are matched and optimized based on identical assumptions about the statistical structure of the sensory environment.
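Writing perceptual bias as \(b(\theta)\) and discrimination threshold as \(D(\theta)\) (my notation, an assumption), the simple relation this line of work reports for an observer with efficient encoding and optimal decoding takes the form:

```latex
b(\theta) \;\propto\; \frac{\mathrm{d}}{\mathrm{d}\theta}\, D(\theta)^{2}
```

That is, bias is proportional to the slope of the squared discrimination threshold as a function of the stimulus value.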
Journal Article
A Bayesian and efficient observer model explains concurrent attractive and repulsive history biases in visual perception
2020
Human perceptual decisions can be repelled away from (repulsive adaptation) or attracted towards recent visual experience (attractive serial dependence). It is currently unclear whether and how these repulsive and attractive biases interact during visual processing and what computational principles underlie these history dependencies. Here we disentangle repulsive and attractive biases by exploring their respective timescales. We find that perceptual decisions are concurrently attracted towards the short-term perceptual history and repelled from stimuli experienced up to minutes into the past. The temporal pattern of short-term attraction and long-term repulsion cannot be captured by an ideal Bayesian observer model alone. Instead, it is well captured by an ideal observer model with efficient encoding and Bayesian decoding of visual information in a slowly changing environment. Concurrent attractive and repulsive history biases in perceptual decisions may thus be the consequence of the need for visual processing to simultaneously satisfy constraints of efficiency and stability.
Journal Article
Divisive normalization is an efficient code for multivariate Pareto-distributed environments
by Bucher, Stefan F.; Brandenburger, Adam M.
in Biological Sciences; Empirical analysis; Histograms
2022
Divisive normalization is a canonical computation in the brain, observed across neural systems, that is often considered to be an implementation of the efficient coding principle. We provide a theoretical result that makes the conditions under which divisive normalization is an efficient code analytically precise: We show that, in a low-noise regime, encoding an n-dimensional stimulus via divisive normalization is efficient if and only if its prevalence in the environment is described by a multivariate Pareto distribution. We generalize this multivariate analog of histogram equalization to allow for arbitrary metabolic costs of the representation, and show how different assumptions on costs are associated with different shapes of the distributions that divisive normalization efficiently encodes. Our result suggests that divisive normalization may have evolved to efficiently represent stimuli with Pareto distributions. We demonstrate that this efficiently encoded distribution is consistent with stylized features of naturalistic stimulus distributions such as their characteristic conditional variance dependence, and we provide empirical evidence suggesting that it may capture the statistics of filter responses to naturalistic images. Our theoretical finding also yields empirically testable predictions across sensory domains on how the divisive normalization parameters should be tuned to features of the input distribution.
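The canonical computation the abstract analyzes can be sketched as below. The functional form is the standard divisive-normalization equation; the parameter values and stimulus numbers are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0, gamma=1.0):
    """Canonical divisive normalization: each input is divided by a signal
    pooled over the whole population. sigma, n, and gamma are the free
    parameters the abstract predicts should be tuned to the input statistics."""
    x = np.asarray(x, dtype=float)
    return gamma * x ** n / (sigma ** n + np.sum(x ** n))

stim = np.array([1.0, 2.0, 4.0])     # toy stimulus drive, values assumed
print(np.round(divisive_normalization(stim), 3))
```

Because every response shares the pooled denominator, strong inputs suppress the representation of weaker ones, which is what makes the operation a candidate efficient code for heavy-tailed (Pareto-like) input distributions.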
Journal Article
Prediction error and repetition suppression have distinct effects on neural representations of visual information
2018
Predictive coding theories argue that recent experience establishes expectations in the brain that generate prediction errors when violated. Prediction errors provide a possible explanation for repetition suppression, where evoked neural activity is attenuated across repeated presentations of the same stimulus. The predictive coding account argues repetition suppression arises because repeated stimuli are expected, whereas non-repeated stimuli are unexpected and thus elicit larger neural responses. Here, we employed electroencephalography in humans to test the predictive coding account of repetition suppression by presenting sequences of visual gratings with orientations that were expected either to repeat or change in separate blocks of trials. We applied multivariate forward modelling to determine how orientation selectivity was affected by repetition and prediction. Unexpected stimuli were associated with significantly enhanced orientation selectivity, whereas selectivity was unaffected for repeated stimuli. Our results suggest that repetition suppression and expectation have separable effects on neural representations of visual feature information.
Journal Article
From likely to likable
2020
Humans readily form social impressions, such as attractiveness and trustworthiness, from a stranger’s facial features. Understanding the provenance of these impressions has clear scientific importance and societal implications. Motivated by the efficient coding hypothesis of brain representation, as well as Claude Shannon’s theoretical result that maximally efficient representational systems assign shorter codes to statistically more typical data (quantified as log likelihood), we suggest that social “liking” of faces increases with statistical typicality. Combining human behavioral data and computational modeling, we show that perceived attractiveness, trustworthiness, dominance, and valence of a face image linearly increase with its statistical typicality (log likelihood). We also show that statistical typicality can at least partially explain the role of symmetry in attractiveness perception. Additionally, by assuming that the brain focuses on a task-relevant subset of facial features and assessing log likelihood of a face using those features, our model can explain the “ugliness-in-averageness” effect found in social psychology, whereby otherwise attractive, intercategory faces diminish in attractiveness during a categorization task.
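The typicality score described above (log likelihood under the population distribution) can be sketched with a Gaussian model. Everything here is invented for illustration: the "facial features" are random numbers, and the Gaussian fit is only one possible density model.

```python
import numpy as np

# "Liking increases with statistical typicality": score a face by the log
# likelihood of its feature vector under a density fit to the population.
rng = np.random.default_rng(2)
population = rng.normal(0.0, 1.0, size=(500, 3))   # hypothetical facial features
mu, cov = population.mean(axis=0), np.cov(population.T)

def log_likelihood(f, mu, cov):
    """Gaussian log density: higher means more statistically typical."""
    d = f - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + d.size * np.log(2 * np.pi))

average_face = mu            # the population mean is the most typical face
atypical_face = mu + 3.0     # three units off the mean on every feature
print(log_likelihood(average_face, mu, cov) > log_likelihood(atypical_face, mu, cov))
```

Under a Gaussian model the mean maximizes the log likelihood, which is consistent with the classic "averageness" account of attractiveness the abstract builds on.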
Journal Article
Origin of information-limiting noise correlations
by Coen-Cagli, Ruben; Pouget, Alexandre; Kanitscheider, Ingmar
in Action Potentials; Animals; Biological Sciences
2015
The ability to discriminate between similar sensory stimuli relies on the amount of information encoded in sensory neuronal populations. Such information can be substantially reduced by correlated trial-to-trial variability. Noise correlations have been measured across a wide range of areas in the brain, but their origin is still far from clear. Here we show analytically and with simulations that optimal computation on inputs with limited information creates patterns of noise correlations that account for a broad range of experimental observations while at the same time causing information to saturate in large neural populations. With the example of a network of V1 neurons extracting orientation from a noisy image, we present what is, to our knowledge, the first generative model of noise correlations that is consistent both with neurophysiology and with behavioral thresholds, without invoking suboptimal encoding or decoding or internal sources of variability such as stochastic network dynamics or cortical state fluctuations. We further show that when information is limited at the input, both suboptimal connectivity and internal fluctuations could similarly reduce the asymptotic information, but they have qualitatively different effects on correlations, leading to specific experimental predictions. Our study indicates that noise at the sensory periphery could have a major effect on cortical representations in widely studied discrimination tasks. It also provides an analytical framework to understand the functional relevance of different sources of experimentally measured correlations.
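The saturation effect described above is commonly illustrated with differential ("information-limiting") correlations, where a rank-one term along the tuning-curve derivative is added to the noise covariance. The sketch below uses toy numbers (eps and the derivatives f' are assumed, not taken from the paper) to show linear Fisher information capping at 1/eps as the population grows.

```python
import numpy as np

# Differential noise correlations: adding eps * f' f'^T to the noise
# covariance makes linear Fisher information saturate at 1/eps no matter
# how many neurons are added.
def linear_fisher_info(fprime, cov):
    return fprime @ np.linalg.solve(cov, fprime)

eps = 0.01
infos = {}
for n in (10, 100, 1000):
    fprime = np.ones(n)                               # toy tuning-curve derivatives
    cov = np.eye(n) + eps * np.outer(fprime, fprime)  # independent noise + differential term
    infos[n] = linear_fisher_info(fprime, cov)        # analytically n / (1 + eps * n)
print({n: round(v, 1) for n, v in infos.items()})     # saturates toward 1/eps = 100
```

Without the rank-one term, information would grow linearly with n; with it, adding neurons yields diminishing returns, which is the signature behavior the abstract explains from limited input information.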
Journal Article
Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
2019
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30–80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma synchronization in macaque V1, particularly when they were colored. In contrast, chromatic mismatches between center and surround, breaking predictability, strongly reduced gamma synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
Journal Article
Local dendritic balance enables learning of efficient representations in networks of spiking neurons
by Rudelt, Lucas; Mikulasch, Fabian A.; Priesemann, Viola
in Animals; Biological Sciences; Computer Simulation
2021
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and thereby can modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory–inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo and suggest that both are crucial for representation learning.
Journal Article