Catalogue Search | MBRL
Explore the vast range of titles available.
5,743 result(s) for "Pattern Recognition, Visual - physiology"
The bilingual effect on Boston Naming Test performance
by MONTOYA, ROSA I.; GOLLAN, TAMAR H.; FENNEMA-NOTESTINE, CHRISTINE
in Adults; Aged; Aged, 80 and over
2007
The present study aimed to determine how older bilingual subjects' naming performance is affected by their knowledge of two languages. Twenty-nine aging (mean age = 74.0; SD = 7.1) Spanish–English bilinguals were asked to name all pictures in the Boston Naming Test (BNT) first in their dominant language and then in their less-dominant language. Bilinguals with similar naming scores in each language, or relatively balanced bilinguals, named more pictures correctly when credited for producing a correct name in either language. Balanced bilinguals also named fewer pictures in their dominant language than unbalanced bilinguals, and named more pictures correctly in both languages if the pictures had cognate names (e.g., dart is dardo in Spanish). Unbalanced bilinguals did not benefit from the alternative (either-language) scoring procedure and showed cognate effects only in their nondominant language. These findings may help to guide the interpretation of neuropsychological data for the purpose of determining cognitive status in older bilinguals and can be used to develop models of bilingual language processing. Bilinguals' ability to name pictures reflects their experience with word forms in both languages. (JINS, 2007, 13, 197–208.)
Journal Article
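To make the scoring contrast described in the abstract above concrete, here is a minimal sketch of single-language versus either-language credit. The function names and the per-picture response data are hypothetical illustrations, not the authors' materials.

```python
# Illustrative sketch of the two BNT scoring rules contrasted in the study:
# single-language credit vs. either-language ("correct in either") credit.
# The response data below are hypothetical, not from the published study.

def score_single_language(responses):
    """Count pictures named correctly in one language."""
    return sum(1 for correct in responses if correct)

def score_either_language(dominant, nondominant):
    """Credit a picture if it was named correctly in either language."""
    return sum(1 for d, n in zip(dominant, nondominant) if d or n)

# One True/False entry per BNT picture (hypothetical 10-item example).
dominant    = [True, True, False, True, False, True, True, False, True, False]
nondominant = [False, True, True, False, False, True, False, True, True, False]

print("Dominant-language score:", score_single_language(dominant))               # 6
print("Either-language score:  ", score_either_language(dominant, nondominant))  # 8
```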
ISCEV standard for clinical visual evoked potentials: (2016 update)
by Mizota, Atsushi; Brigell, Mitchell; Bach, Michael
in Electrophysiology - standards; Evoked Potentials, Visual; Humans
2016
Visual evoked potentials (VEPs) can provide important diagnostic information regarding the functional integrity of the visual system. This document updates the ISCEV standard for clinical VEP testing and supersedes the 2009 standard. The main changes in this revision are the acknowledgment that pattern stimuli can be produced using a variety of technologies with an emphasis on the need for manufacturers to ensure that there is no luminance change during pattern reversal or pattern onset/offset. The document is also edited to bring the VEP standard into closer harmony with other ISCEV standards. The ISCEV standard VEP is based on a subset of stimulus and recording conditions that provide core clinical information and can be performed by most clinical electrophysiology laboratories throughout the world. These are: (1) Pattern-reversal VEPs elicited by checkerboard stimuli with large 1 degree (°) and small 0.25° checks. (2) Pattern onset/offset VEPs elicited by checkerboard stimuli with large 1° and small 0.25° checks. (3) Flash VEPs elicited by a flash (brief luminance increment) which subtends a visual field of at least 20°. The ISCEV standard VEP protocols are defined for a single recording channel with a midline occipital active electrode. These protocols are intended for assessment of the eye and/or optic nerves anterior to the optic chiasm. Extended, multi-channel protocols are required to evaluate postchiasmal lesions.
Journal Article
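The three standard stimulus conditions listed in the abstract above lend themselves to a simple configuration structure. The sketch below encodes only the parameters the abstract states (1° and 0.25° checks, a flash field of at least 20°, a single midline occipital channel); all field names, and anything beyond those values, are illustrative assumptions rather than the full ISCEV specification.

```python
# Minimal sketch of the three ISCEV standard VEP stimulus conditions named in
# the 2016 update. Field names are illustrative; only the check sizes, the
# >= 20 degree flash field, and the single midline occipital channel are taken
# from the abstract. Consult the standard itself for the complete stimulus,
# recording, and reporting requirements.

ISCEV_STANDARD_VEP_PROTOCOLS = [
    {
        "name": "pattern_reversal",
        "stimulus": "checkerboard",
        "check_sizes_deg": [1.0, 0.25],      # large and small checks
    },
    {
        "name": "pattern_onset_offset",
        "stimulus": "checkerboard",
        "check_sizes_deg": [1.0, 0.25],
    },
    {
        "name": "flash",
        "stimulus": "brief luminance increment",
        "min_field_diameter_deg": 20.0,      # flash subtends at least 20 degrees
    },
]

RECORDING = {
    "channels": 1,
    "active_electrode": "midline occipital",
    "intended_use": "eye and/or optic nerve anterior to the optic chiasm",
}

for protocol in ISCEV_STANDARD_VEP_PROTOCOLS:
    print(protocol["name"], protocol)
```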
Unsupervised neural network models of the ventral visual stream
2021
Deep neural networks currently provide the best quantitative models of the response patterns of neurons throughout the primate ventral visual stream. However, such networks have remained implausible as a model of the development of the ventral stream, in part because they are trained with supervised methods requiring many more labels than are accessible to infants during development. Here, we report that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods and that the mapping of these neural network models’ hidden layers is neuroanatomically consistent across the ventral stream. Strikingly, we find that these methods produce brain-like representations even when trained solely with real human child developmental data collected from head-mounted cameras, despite the fact that these datasets are noisy and limited. We also find that semisupervised deep contrastive embeddings can leverage small numbers of labeled examples to produce representations with substantially improved error-pattern consistency to human behavior. Taken together, these results illustrate a use of unsupervised learning to provide a quantitative model of a multiarea cortical brain system and present a strong candidate for a biologically plausible computational theory of primate sensory learning.
Journal Article
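As a concrete picture of the "deep unsupervised contrastive embedding" methods the abstract above refers to, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss over paired augmented views. It is a generic illustration of the technique, not the authors' specific architectures or training setup.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss for paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two augmented views of the same
    images; matching rows are positives, all other rows are negatives.
    """
    # L2-normalise embeddings so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    logits = z_a @ z_b.T / temperature           # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # The correct "partner" for row i is column i.
    return -np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
z_view_1 = rng.normal(size=(8, 128))
z_view_2 = z_view_1 + 0.05 * rng.normal(size=(8, 128))  # second view: slight perturbation
print("contrastive loss:", info_nce_loss(z_view_1, z_view_2))
```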
Individual differences in visual salience vary along semantic dimensions
by Schwarzkopf, D. Samuel; de Haas, Benjamin; Iakovidis, Alexios L.
in Adult; Attention - physiology; Biological Sciences
2019
What determines where we look? Theories of attentional guidance hold that image features and task demands govern fixation behavior, while differences between observers are interpreted as a “noise-ceiling” that strictly limits predictability of fixations. However, recent twin studies suggest a genetic basis of gaze-trace similarity for a given stimulus. This leads to the question of how individuals differ in their gaze behavior and what may explain these differences. Here, we investigated the fixations of >100 human adults freely viewing a large set of complex scenes containing thousands of semantically annotated objects. We found systematic individual differences in fixation frequencies along six semantic stimulus dimensions. These differences were large (>twofold) and highly stable across images and time. Surprisingly, they also held for first fixations directed toward each image, commonly interpreted as “bottom-up” visual salience. Their perceptual relevance was documented by a correlation between individual face salience and face recognition skills. The set of reliable individual salience dimensions and their covariance pattern replicated across samples from three different countries, suggesting they reflect fundamental biological mechanisms of attention. Our findings show stable individual differences in salience along a set of fundamental semantic dimensions and that these differences have meaningful perceptual implications. Visual salience reflects features of the observer as well as the image.
Journal Article
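The core analysis described above, per-observer fixation frequencies along semantic object dimensions and their stability, can be sketched as follows. The dimension labels, the simulated data, and the split-half stability check are hypothetical stand-ins, not the study's dataset or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: for each observer and image, the fraction of fixations
# landing on objects annotated with each semantic dimension.
dimensions = ["faces", "text", "food", "touched objects", "moving objects", "background"]
n_observers, n_images = 100, 200

# Simulated observer-specific biases plus image-level variability (illustration only).
bias = rng.gamma(shape=2.0, scale=1.0, size=(n_observers, 1, len(dimensions)))
freq = rng.dirichlet(np.ones(len(dimensions)), size=(n_observers, n_images)) * bias
freq = freq / freq.sum(axis=2, keepdims=True)   # renormalise per image

# Stability: correlate each observer's mean tendency across two image halves.
half_a = freq[:, :n_images // 2].mean(axis=1)
half_b = freq[:, n_images // 2:].mean(axis=1)
for d, name in enumerate(dimensions):
    r = np.corrcoef(half_a[:, d], half_b[:, d])[0, 1]
    print(f"{name:>15s}: split-half r = {r:.2f}")
```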
Atoms of recognition in human and computer vision
by Ullman, Shimon; Assif, Liav; Harari, Daniel
in Biological Sciences; Brain; Brain - physiology
2016
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Journal Article
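A "minimal recognizable image" in the sense of the abstract above is a patch that is still recognized, while every slightly reduced descendant is not. The conceptual search can be sketched as below; the recognizer here is a toy stand-in for the human recognition rates measured psychophysically in the paper, and the whole procedure is only an illustration of the idea, not the authors' pipeline.

```python
import numpy as np

def find_minimal_crops(image, recognizer, threshold=0.5, step=2):
    """Find crops the recognizer accepts whose slightly smaller descendants it rejects.

    `recognizer(patch) -> probability of correct recognition` is a hypothetical
    stand-in for measured human recognition rates.
    """
    minimal = []
    h, w = image.shape[:2]
    for size in range(min(h, w), step, -step):
        for top in range(0, h - size + 1, step):
            for left in range(0, w - size + 1, step):
                patch = image[top:top + size, left:left + size]
                if recognizer(patch) < threshold:
                    continue
                # The four slightly reduced descendants (crops shrunk by `step`).
                children = [patch[step:, :], patch[:-step, :],
                            patch[:, step:], patch[:, :-step]]
                if all(recognizer(c) < threshold for c in children):
                    minimal.append((top, left, size))
    return minimal

# Toy demo: a "recognizer" that needs the full 10x10 bright target inside the crop.
rng = np.random.default_rng(0)
img = rng.random((40, 40)) * 0.2
img[15:25, 15:25] = 1.0
toy_recognizer = lambda p: float(p.shape[0] >= 12 and p.max() > 0.9 and (p > 0.9).sum() >= 100)
print(find_minimal_crops(img, toy_recognizer)[:5])
```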
Mid-level visual features underlie the high-level categorical organization of the ventral stream
by Long, Bria; Yu, Chen-Ping; Konkle, Talia
in Adult; Artificial neural networks; Biological Sciences
2018
Human object-selective cortex shows a large-scale organization characterized by the high-level properties of both animacy and object size. To what extent are these neural responses explained by primitive perceptual features that distinguish animals from objects and big objects from small objects? To address this question, we used a texture synthesis algorithm to create a class of stimuli—texforms—which preserve some mid-level texture and form information from objects while rendering them unrecognizable. We found that unrecognizable texforms were sufficient to elicit the large-scale organizations of object-selective cortex along the entire ventral pathway. Further, the structure in the neural patterns elicited by texforms was well predicted by curvature features and by intermediate layers of a deep convolutional neural network, supporting the mid-level nature of the representations. These results provide clear evidence that a substantial portion of ventral stream organization can be accounted for by coarse texture and form information without requiring explicit recognition of intact objects.
Journal Article
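The claim that neural patterns elicited by texforms are "well predicted by intermediate layers of a deep convolutional neural network" is the kind of result typically assessed with representational similarity analysis. The sketch below shows that analysis shape with simulated stand-ins for the CNN features and voxel patterns; it is not the authors' data or exact method.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Hypothetical stand-ins: feature vectors for each texform stimulus from an
# intermediate CNN layer, and voxel response patterns from object-selective
# cortex. Real analyses would use actual activations and fMRI data.
n_stimuli = 60
model_features = rng.normal(size=(n_stimuli, 512))
neural_patterns = (model_features[:, :100] @ rng.normal(size=(100, 300))
                   + 0.5 * rng.normal(size=(n_stimuli, 300)))

# Representational dissimilarity matrices (condensed form): one entry per stimulus pair.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# How well does the model layer's representational geometry predict the neural one?
rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-to-brain RDM correlation (Spearman rho): {rho:.2f}")
```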
Natural images are reliably represented by sparse and variable populations of neurons in visual cortex
2020
Natural scenes sparsely activate neurons in the primary visual cortex (V1). However, how sparsely active neurons reliably represent complex natural images and how the information is optimally decoded from these representations have not been revealed. Using two-photon calcium imaging, we recorded visual responses to natural images from several hundred V1 neurons and reconstructed the images from neural activity in anesthetized and awake mice. A single natural image is linearly decodable from a surprisingly small number of highly responsive neurons, and the remaining neurons even degrade the decoding. Furthermore, these neurons reliably represent the image across trials, regardless of trial-to-trial response variability. Based on our results, diverse, partially overlapping receptive fields ensure sparse and reliable representation. We suggest that information is reliably represented while the corresponding neuronal patterns change across trials and collecting only the activity of highly responsive neurons is an optimal decoding strategy for the downstream neurons.
Natural scenes sparsely activate V1 neurons. Here, the authors show that a small number of active cells reliably represent visual contents of a natural image across trials regardless of response variability, due to the diverse and partially overlapping representations of individual cells.
Journal Article
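The decoding idea in the abstract above, that a single image is linearly decodable from a small number of highly responsive neurons, can be sketched with a ridge-regularised linear decoder. Everything below (the simulated responses, the choice of 20 neurons, the decoder itself) is an illustrative assumption, not the paper's recorded data or exact analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in data: responses of V1 neurons to natural images and the
# images themselves (flattened pixels). Real data came from two-photon imaging.
n_images, n_neurons, n_pixels = 200, 500, 256
images = rng.normal(size=(n_images, n_pixels))
encoding = rng.normal(size=(n_pixels, n_neurons)) * (rng.random((n_pixels, n_neurons)) < 0.05)
responses = np.maximum(images @ encoding + 0.3 * rng.normal(size=(n_images, n_neurons)), 0)

def linear_decoder(R, X, ridge=1.0):
    """Fit a ridge-regularised linear map from neural responses R to images X."""
    return np.linalg.solve(R.T @ R + ridge * np.eye(R.shape[1]), R.T @ X)

# Decode a held-out image from only its most responsive neurons, echoing the
# observation that a small, highly responsive subset suffices (illustration only).
train, test = slice(0, 150), slice(150, None)
target = responses[test][0]
top = np.argsort(target)[::-1][:20]                # 20 most responsive neurons for this image
W = linear_decoder(responses[train][:, top], images[train])
reconstruction = responses[test][0, top] @ W
corr = np.corrcoef(reconstruction, images[test][0])[0, 1]
print(f"pixel correlation, 20-neuron linear reconstruction: {corr:.2f}")
```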
Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons
2021
In order to better understand how the brain perceives faces, it is important to know what objective drives learning in the ventral visual stream. To answer this question, we model neural responses to faces in the macaque inferotemporal (IT) cortex with a deep self-supervised generative model, β-VAE, which disentangles sensory data into interpretable latent factors, such as gender or age. Our results demonstrate a strong correspondence between the generative factors discovered by β-VAE and those coded by single IT neurons, beyond that found for the baselines, including the handcrafted state-of-the-art model of face perception, the Active Appearance Model, and deep classifiers. Moreover, β-VAE is able to reconstruct novel face images using signals from just a handful of cells. Together our results imply that optimising the disentangling objective leads to representations that closely resemble those in the IT at the single unit level. This points at disentangling as a plausible learning objective for the visual brain.
Little is known about the brain’s computations that enable the recognition of faces. Here, the authors use unsupervised deep learning to show that the brain disentangles faces into semantically meaningful factors, like age or the presence of a smile, at the single neuron level.
Journal Article
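The β-VAE objective named in the abstract above is a reconstruction term plus a β-weighted KL divergence between the approximate posterior and an isotropic Gaussian prior. Below is a minimal sketch of that generic objective; the batch of "faces" is random placeholder data, and nothing here reproduces the authors' trained model.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-VAE objective: reconstruction error plus a beta-weighted KL term.

    Larger beta pushes the latent posterior q(z|x) towards the N(0, I) prior,
    which is what encourages disentangled latent factors (e.g. age, gender).
    """
    recon = np.sum((x - x_recon) ** 2, axis=-1)                            # per-sample reconstruction error
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=-1)   # KL(q(z|x) || N(0, I))
    return np.mean(recon + beta * kl)

rng = np.random.default_rng(4)
x = rng.normal(size=(16, 784))                      # a batch of flattened face images (hypothetical)
x_recon = x + 0.1 * rng.normal(size=x.shape)        # imperfect reconstructions
mu, log_var = 0.1 * rng.normal(size=(16, 10)), np.zeros((16, 10))
print("beta-VAE loss:", beta_vae_loss(x, x_recon, mu, log_var))
```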
Mapping the emotional face. How individual face parts contribute to successful emotion recognition
2017
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth made it possible to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
Journal Article
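The per-tile contribution measure described in the abstract above can be sketched as a simple contrast between how often a tile was visible on correct versus incorrect trials. The trial data, the tile indexing, and the contrast itself are hypothetical illustrations; the paper's exact measure may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trial data mimicking the paradigm: on each trial, a random subset
# of the 48 face tiles was uncovered when the observer stopped the sequence, and
# the response was either correct or incorrect.
n_trials, n_tiles = 500, 48
uncovered = rng.random((n_trials, n_tiles)) < rng.uniform(0.1, 0.6, size=(n_trials, 1))
# Toy assumption: the first 10 tiles (standing in for eye/mouth tiles) help most.
correct = rng.random(n_trials) < (0.2 + 0.6 * uncovered[:, :10].mean(axis=1))

# Diagnostic value of each tile: how much more often it was visible on correct
# than on incorrect trials (a simple contrast for illustration).
p_visible_correct = uncovered[correct].mean(axis=0)
p_visible_incorrect = uncovered[~correct].mean(axis=0)
diagnostic_value = p_visible_correct - p_visible_incorrect

top_tiles = np.argsort(diagnostic_value)[::-1][:5]
print("most diagnostic tiles (toy data):", top_tiles, diagnostic_value[top_tiles].round(3))
```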