Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
7,634 result(s) for "Visual discrimination learning"
The underestimated giants: operant conditioning, visual discrimination and long-term memory in giant tortoises
by Weissenbacher, Anton; Kuba, Michael J.; Gutnick, Tamar
in Animal cognition, Animals, Aquatic animals
2020
Relatively little is known about cognition in turtles, and most studies have focused on aquatic animals. Almost nothing is known about the giant land tortoises. These are visual animals that travel large distances in the wild, interact with each other and with their environment, and live extremely long lives. Here, we show that Galapagos and Seychelle tortoises, housed in a zoo environment, readily underwent operant conditioning and we provide evidence that they learned faster when trained in the presence of a group rather than individually. The animals readily learned to distinguish colours in a two-choice discrimination task. However, since each animal was assigned its own individual colour for this task, the presence of the group had no obvious effect on the speed of learning. When tested 95 days after the initial training, all animals remembered the operant task. When tested in the discrimination task, most animals relearned the task up to three times faster than naïve animals. Remarkably, animals that were tested 9 years after the initial training still retained the operant conditioning. That the animals remembered the operant task but needed to relearn the discrimination task constitutes the first evidence for a differentiation between implicit and explicit memory in tortoises. Our study is a first step towards a wider appreciation of the cognitive abilities of these unique animals.
Journal Article
Visual discrimination and amodal completion in zebrafish
by Vicidomini, Sofia; Miletto Petrazzini, Maria Elena; Rosa-Salva, Orsola
in Adaptation, Analysis, Animals
2022
While zebrafish represent an important model for the study of the visual system, visual perception in this species is still less investigated than in other teleost fish. In this work, we validated for zebrafish two versions of a visual discrimination learning task, which is based on the motivation to reach food and companions. Using this task, we investigated the ability of zebrafish to discriminate between two different shape pairs (i.e., disk vs. cross and full vs. amputated disk). Once zebrafish were successfully trained to discriminate a full from an amputated disk, we also tested their ability to visually complete partially occluded objects (amodal completion). After training, animals were presented with two amputated disks. In these test stimuli, another shape was either exactly juxtaposed or only placed close to the missing sectors of the disk. Only the former stimulus should elicit amodal completion. In human observers, this stimulus causes the impression that the other shape is occluding the missing sector of the disk, which is thus perceived as a complete, although partially hidden, disk. In line with our predictions, fish reinforced on the full disk chose the stimulus eliciting amodal completion, while fish reinforced on the amputated disk chose the other stimulus. This represents the first demonstration of amodal completion perception in zebrafish. Moreover, our results also indicated that a specific shape pair (disk vs. cross) might be particularly difficult for this species to discriminate, confirming previous reports obtained with different procedures.
Journal Article
A large-scale examination of inductive biases shaping high-level visual representation in brains and machines
by Conwell, Colin; Konkle, Talia; Kay, Kendrick N.
2024
The rapid release of high-performing computer vision models offers new potential to study the impact of different inductive biases on the emergent brain alignment of learned representations. Here, we perform controlled comparisons among a curated set of 224 diverse models to test the impact of specific model properties on visual brain predictivity – a process requiring over 1.8 billion regressions and 50.3 thousand representational similarity analyses. We find that models with qualitatively different architectures (e.g. CNNs versus Transformers) and task objectives (e.g. purely visual contrastive learning versus vision-language alignment) achieve near-equivalent brain predictivity when other factors are held constant. Instead, variation across visual training diets yields the largest, most consistent effect on brain predictivity. Many models achieve similarly high brain predictivity despite clear variation in their underlying representations – suggesting that standard methods used to link models to brains may be too flexible. Broadly, these findings challenge common assumptions about the factors underlying emergent brain alignment, and outline how we can leverage controlled model comparison to probe the common computational principles underlying biological and artificial visual systems.
Through controlled model-to-brain comparisons across a large-scale survey of deep neural networks, the authors show that the data models are trained on matters far more for downstream brain prediction than design factors such as architecture and training task.
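As a rough illustration of the representational similarity analyses the abstract counts, the sketch below (illustrative NumPy code, not the authors' pipeline; all names and sizes are invented) builds a dissimilarity matrix from stimulus-by-unit response patterns and scores model-brain alignment by rank-correlating the RDMs' upper triangles:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(patterns)

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

def rsa_score(model_patterns, brain_patterns):
    """Rank-correlate the upper triangles of the two RDMs -- one standard
    way to score model-brain representational alignment."""
    iu = np.triu_indices(model_patterns.shape[0], k=1)
    return spearman(rdm(model_patterns)[iu], rdm(brain_patterns)[iu])

rng = np.random.default_rng(0)
stim, units = 20, 100
model = rng.normal(size=(stim, units))
brain = model + 0.2 * rng.normal(size=(stim, units))  # noisy "aligned brain"
aligned = rsa_score(model, brain)
shuffled = rsa_score(model, rng.permutation(brain))   # broken correspondence
```

Here `aligned` comes out high while `shuffled` hovers near zero, which is the basic contrast such analyses exploit at scale.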
Journal Article
Neural representational geometry underlies few-shot concept learning
by Sorscher, Ben; Sompolinsky, Haim; Ganguli, Surya
in Artificial neural networks, Biological Sciences, Cognitive ability
2022
Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments.
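The "single plastic readout neuron" can be caricatured with a prototype rule: assuming the plasticity rule reduces to averaging the few exemplar representations of each concept (a deliberate simplification of the paper's mechanism; every name below is illustrative), a linear few-shot classifier falls out directly:

```python
import numpy as np

def few_shot_readout(examples_a, examples_b):
    """Learn a linear readout from a few examples per concept.

    Prototype rule: the weight vector points from the mean of concept
    B's examples toward the mean of concept A's examples, with the
    decision boundary placed halfway between the two prototypes.
    """
    proto_a = examples_a.mean(axis=0)
    proto_b = examples_b.mean(axis=0)
    w = proto_a - proto_b
    b = -w @ (proto_a + proto_b) / 2
    return w, b

def classify(x, w, b):
    """Positive score -> concept A, negative score -> concept B."""
    return x @ w + b > 0

rng = np.random.default_rng(0)
dim = 50
mu_a, mu_b = rng.normal(0, 1, dim), rng.normal(0, 1, dim)  # concept centers
train_a = mu_a + 0.3 * rng.normal(size=(5, dim))  # 5-shot training set
train_b = mu_b + 0.3 * rng.normal(size=(5, dim))
w, b = few_shot_readout(train_a, train_b)

test_a = mu_a + 0.3 * rng.normal(size=(200, dim))
acc = classify(test_a, w, b).mean()  # fraction correctly labeled A
```

With tightly circumscribed clusters, as posited for higher-order sensory manifolds, even five examples per concept suffice for near-perfect readout.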
Journal Article
Hippocampal sharp-wave ripples linked to visual episodic recollection in humans
2019
What are the brain mechanisms responsible for episodic memory retrieval? Norman et al. investigated epilepsy patients who had electrodes implanted in the hippocampus and a variety of cortical areas. Using a visual learning paradigm, they examined the temporal relationship between the incidence of hippocampal sharp-wave ripples and recall. Effective encoding of visual information was associated with a higher incidence of ripples. Successful recall was preceded by an increased probability of ripples, which were also associated with transient reemergence of activation patterns in higher visual cortical areas. Hippocampal ripples may thus boost recollections during episodic memory retrieval. Science, this issue p. eaax1030. Ripples reinstate human memory during free recall. Hippocampal sharp-wave ripples (SWRs) constitute one of the most synchronized activation events in the brain and play a critical role in offline memory consolidation. Yet their cognitive content and function during awake, conscious behavior remain unclear. We directly examined this question using intracranial recordings in human patients engaged in episodic free recall of previously viewed photographs. Our results reveal a content-selective increase in hippocampal ripple rate emerging 1 to 2 seconds prior to recall events. During recollection, high-order visual areas showed pronounced SWR-coupled reemergence of activation patterns associated with recalled content. Finally, the SWR rate during encoding predicted subsequent free-recall performance. These results point to a role for hippocampal SWRs in triggering spontaneous recollections and orchestrating the reinstatement of cortical representations during free episodic memory retrieval.
Journal Article
Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex
2019
Goal-driven and feedforward-only convolutional neural networks (CNN) have been shown to be able to predict and decode cortical responses to natural images or videos. Here, we explored an alternative deep neural network, variational auto-encoder (VAE), as a computational model of the visual cortex. We trained a VAE with a five-layer encoder and a five-layer decoder to learn visual representations from a diverse set of unlabeled images. Using the trained VAE, we predicted and decoded cortical activity observed with functional magnetic resonance imaging (fMRI) from three human subjects passively watching natural videos. Compared to CNN, VAE could predict the video-evoked cortical responses with comparable accuracy in early visual areas, but relatively lower accuracy in higher-order visual areas. The distinction between CNN and VAE in terms of encoding performance was primarily attributed to their different learning objectives, rather than their different model architecture or number of parameters. Despite lower encoding accuracies, VAE offered a more convenient strategy for decoding the fMRI activity to reconstruct the video input, by first converting the fMRI activity to the VAE's latent variables, and then converting the latent variables to the reconstructed video frames through the VAE's decoder. This strategy was more advantageous than alternative decoding methods, e.g. partial least squares regression, for being able to reconstruct both the spatial structure and color of the visual input. Such findings highlight VAE as an unsupervised model for learning visual representation, as well as its potential and limitations for explaining cortical responses and reconstructing naturalistic and diverse visual experiences.
•Variational auto-encoder implements an unsupervised model of “Bayesian brain”.
•Variational auto-encoder explains and predicts fMRI responses to natural videos.
•Variational auto-encoder decodes fMRI responses to directly reconstruct visual input.
•Convolutional neural networks trained for image classification better predict fMRI responses than variational auto-encoders trained for image reconstruction.
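The two-stage decoding strategy described above (regress fMRI activity onto latent variables, then run the decoder) can be sketched with purely linear stand-ins. This is illustrative code only: the study's encoder and decoder were five-layer networks, and all shapes, names, and the synthetic forward model here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_latent, n_pix, n_train = 120, 8, 64, 300

decode_mat = rng.normal(size=(n_latent, n_pix))  # linear stand-in for the decoder
true_map = rng.normal(size=(n_voxels, n_latent))  # latent -> fMRI forward model

# Synthetic training data: latents drive voxel responses plus noise.
z_train = rng.normal(size=(n_train, n_latent))
fmri_train = z_train @ true_map.T + 0.1 * rng.normal(size=(n_train, n_voxels))

# Stage 1: ridge regression from fMRI activity to latent variables.
lam = 1.0
A = fmri_train.T @ fmri_train + lam * np.eye(n_voxels)
W = np.linalg.solve(A, fmri_train.T @ z_train)  # (n_voxels, n_latent)

# Stage 2: push the estimated latents through the (fixed) decoder.
z_test = rng.normal(size=(n_latent,))
fmri_test = true_map @ z_test
z_hat = fmri_test @ W
image_hat = z_hat @ decode_mat
image_true = z_test @ decode_mat
corr = np.corrcoef(image_hat, image_true)[0, 1]  # reconstruction quality
```

Because the decoder is fixed and only the fMRI-to-latent map is fit, the reconstruction inherits whatever structure the latent space already encodes, which is the convenience the abstract highlights.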
Journal Article
Distinct learning-induced changes in stimulus selectivity and interactions of GABAergic interneuron classes in visual cortex
2018
How learning enhances neural representations for behaviorally relevant stimuli via activity changes of cortical cell types remains unclear. We simultaneously imaged responses of pyramidal cells (PYR) along with parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) inhibitory interneurons in primary visual cortex while mice learned to discriminate visual patterns. Learning increased selectivity for task-relevant stimuli of PYR, PV and SOM subsets but not VIP cells. Strikingly, PV neurons became as selective as PYR cells, and their functional interactions reorganized, leading to the emergence of stimulus-selective PYR–PV ensembles. Conversely, SOM activity became strongly decorrelated from the network, and PYR–SOM coupling before learning predicted selectivity increases in individual PYR cells. Thus, learning differentially shapes the activity and interactions of multiple cell classes: while SOM inhibition may gate selectivity changes, PV interneurons become recruited into stimulus-specific ensembles and provide more selective inhibition as the network becomes better at discriminating behaviorally relevant stimuli.
Journal Article
Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning
2021
•We combined a dual-VAE structure with GAN to build a D-Vae/Gan framework.
•Gan-based inter-modality knowledge distillation was introduced for feature learning.
•Model training process was divided into cascade stages with a three-stage strategy.
•Reconstructions on four fMRI datasets were objectively and subjectively identifiable.
Reconstructing a perceived stimulus (image) only from human brain activity measured with functional Magnetic Resonance Imaging (fMRI) is a significant task in brain decoding. However, the inconsistent distribution and representation between fMRI signals and visual images cause a great ‘domain gap’. Moreover, the limited fMRI data instances generally suffer from the issues of a low signal-to-noise ratio (SNR), extremely high dimensionality, and limited spatial resolution. Existing methods are often affected by these issues, so that a satisfactory reconstruction is still an open problem. In this paper, we show that it is possible to obtain a promising solution by learning visually-guided latent cognitive representations from the fMRI signals and inversely decoding them to the image stimuli. The resulting framework is called Dual-Variational Autoencoder/Generative Adversarial Network (D-Vae/Gan), which combines the advantages of adversarial representation learning with knowledge distillation. In addition, we introduce a novel three-stage learning strategy which enables the (cognitive) encoder to gradually distill useful knowledge from the paired (visual) encoder during the learning process. Extensive experimental results on both artificial and natural images have demonstrated that our method could achieve surprisingly good results and outperform the available alternatives.
Journal Article
A bionic self-driven retinomorphic eye with ionogel photosynaptic retina
2024
Bioinspired bionic eyes should be self-driven, repairable, and conformal to arbitrary geometries. Such an eye would enable wide-field detection and efficient visual signal processing without requiring external energy, along with retinal transplantation by replacing dysfunctional photoreceptors with healthy ones for vision restoration. A variety of artificial eyes have been constructed with hemispherical silicon, perovskite and heterostructure photoreceptors, but creating a zero-powered retinomorphic system with transplantable, conformal features remains elusive. By combining neuromorphic principles with retinal and ionoelastomer engineering, we demonstrate a self-driven hemispherical retinomorphic eye with an elastomeric retina made of ionogel heterojunctions as photoreceptors. The receptor, driven by the photothermoelectric effect, shows photoperception with broadband light detection (365 to 970 nm), a wide field-of-view (180°) and photosynaptic (paired-pulse facilitation index, 153%) behaviors for biosimilar visual learning. The retinal photoreceptors are transplantable and conformal to any complex surface, enabling visual restoration for dynamic optical imaging and motion tracking.
Luo et al. report a self-driven hemispherical retinomorphic eye that employs ionogel heterojunctions as photoreceptors. This photoreceptor exhibits broadband photosynaptic behavior, high conformability, and transplantability, enabling visual restoration for real-time optical imaging and motion tracking.
Journal Article