Catalogue Search | MBRL
38 result(s) for "Denison, Rachel N"
Suboptimality in perceptual decision making
2018
Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior – rather than assessing optimality per se – should be among the major goals of the science of perceptual decision making.
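One class of suboptimality discussed above is improper placement of perceptual criteria. As background, the standard equal-variance Gaussian signal detection result gives the optimal criterion against which such misplacement is measured; the sketch below is illustrative, with hypothetical parameter names for priors and payoffs, and is not code from the reviewed studies.

```python
import math

def optimal_criterion(d_prime, p_signal=0.5, v_hit=1.0, v_cr=1.0, c_fa=1.0, c_miss=1.0):
    """Optimal decision criterion on the internal-response axis for
    equal-variance Gaussian signal detection.

    The optimal likelihood-ratio threshold beta combines the prior odds of
    noise vs. signal with the payoff structure; converting it to a criterion
    on the response axis gives c = d'/2 + ln(beta)/d'.
    """
    beta = ((1 - p_signal) / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))
    return d_prime / 2 + math.log(beta) / d_prime
```

With equal priors and symmetric payoffs, beta = 1 and the optimal criterion sits at the midpoint d'/2; observers who fail to shift the criterion when priors or payoffs change exhibit the criterion-placement suboptimality the review describes.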
Journal Article
A dynamic spatiotemporal normalization model captures perceptual and neural effects of spatial and temporal context
by Denison, Rachel N; Chapman, Angus F
in Analysis; Computer simulation; Computer-generated environments
2025
How does the visual system process dynamic inputs? Perception and neural activity are shaped by the spatial and temporal context of sensory input, which has been modeled by divisive normalization over space or time. However, theoretical work has largely treated normalization separately within these dimensions and has not explained how future stimuli can suppress past ones. Here, we introduce a dynamic spatiotemporal normalization model (DSTN) with a unified spatiotemporal receptive field structure that implements normalization across both space and time and ask whether this model captures the bidirectional effects of temporal context on neural responses and behavior. DSTN implements temporal normalization through excitatory and suppressive drives that depend on the recent history of stimulus input, controlled by separate temporal windows. We found that biphasic temporal receptive fields emerged from this normalization computation, consistent with empirical observations. The model also reproduced several neural response properties, including surround suppression, nonlinear response dynamics, subadditivity, response adaptation, and backwards masking. Further, spatiotemporal normalization captured bidirectional temporal suppression that depended on stimulus contrast, consistent with human behavior. Thus, DSTN captured a wide range of neural and behavioral effects, demonstrating that a unified spatiotemporal normalization computation could underlie dynamic stimulus processing and perception.
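The abstract describes temporal normalization as excitatory and suppressive drives computed over the recent stimulus history with separate temporal windows. The following is a minimal one-dimensional sketch of that general computation, with illustrative time constants and exponents; it is not the published DSTN model, which additionally spans space.

```python
import numpy as np

def temporal_normalization(stimulus, tau_exc=5.0, tau_sup=20.0, sigma=0.1, n=2.0):
    """Minimal 1D sketch of divisive normalization over time.

    Excitatory and suppressive drives are each a causal running average of
    recent input, but with different time constants (separate temporal
    windows); the response is the excitatory drive divided by the
    suppressive pool plus a semisaturation constant.
    """
    t = np.arange(100)
    w_exc = np.exp(-t / tau_exc)
    w_exc /= w_exc.sum()
    w_sup = np.exp(-t / tau_sup)
    w_sup /= w_sup.sum()
    drive = np.convolve(stimulus, w_exc)[: len(stimulus)] ** n
    suppression = np.convolve(stimulus, w_sup)[: len(stimulus)] ** n
    return drive / (sigma ** n + suppression)
```

Because the excitatory window is faster than the suppressive one, a sustained input produces an initial transient that then adapts toward a lower steady state, qualitatively matching the response-adaptation behavior the abstract mentions.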
Journal Article
Studying the neural representations of uncertainty
by Lee, Jennifer; Walker, Edgar Y.; Pohl, Stephan
in 631/378/116/2394; 631/378/116/2395; 631/378/2649/1723
2023
The study of the brain’s representations of uncertainty is a central topic in neuroscience. Unlike most quantities of which the neural representation is studied, uncertainty is a property of an observer’s beliefs about the world, which poses specific methodological challenges. We analyze how the literature on the neural representations of uncertainty addresses those challenges and distinguish between ‘code-driven’ and ‘correlational’ approaches. Code-driven approaches make assumptions about the neural code for representing world states and the associated uncertainty. By contrast, correlational approaches search for relationships between uncertainty and neural activity without constraints on the neural representation of the world state that this uncertainty accompanies. To compare these two approaches, we apply several criteria for neural representations: sensitivity, specificity, invariance and functionality. Our analysis reveals that the two approaches lead to different but complementary findings, shaping new research questions and guiding future experiments.
This Review explains how the neural coding of uncertainty is theoretically conceived and empirically tested. It compares the approaches of two largely separate research communities and proposes goals for the field that combine these approaches.
Journal Article
Anticipatory and evoked visual cortical dynamics of voluntary temporal attention
by Denison, Rachel N.; Heeger, David J.; Tian, Karen J.
in 631/378/2649/1310; 631/378/2649/1723; Adult
2024
We can often anticipate the precise moment when a stimulus will be relevant for our behavioral goals. Voluntary temporal attention, the prioritization of sensory information at task-relevant time points, enhances visual perception. However, the neural mechanisms of voluntary temporal attention have not been isolated from those of temporal expectation, which reflects timing predictability rather than relevance. Here we use time-resolved steady-state visual evoked responses (SSVER) to investigate how temporal attention dynamically modulates visual activity when temporal expectation is controlled. We recorded magnetoencephalography while participants directed temporal attention to one of two sequential grating targets with predictable timing. Meanwhile, a co-localized SSVER probe continuously tracked visual cortical modulations both before and after the target stimuli. We find that in the pre-target period, the SSVER gradually ramps up as the targets approach, reflecting temporal expectation. Furthermore, we find a low-frequency modulation of the SSVER, which shifts approximately half a cycle in phase according to which target is attended. In the post-target period, temporal attention to the first target transiently modulates the SSVER shortly after target onset. Thus, temporal attention dynamically modulates visual cortical responses via both periodic pre-target and transient post-target mechanisms to prioritize sensory information at precise moments.
People can direct attention to specific moments that they anticipate will be relevant to their goals. Here, the authors show that voluntary temporal attention engages both periodic and transient modulations of visual cortical activity to improve perception at precise time points.
Journal Article
Feature reliability determines specificity and transfer of perceptual learning in orientation search
2017
Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. 
These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.
Journal Article
Attention flexibly trades off across points in time
2017
Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has been barely investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.
Journal Article
Functional mapping of the magnocellular and parvocellular subdivisions of human LGN
by Vu, An T.; Denison, Rachel N.; Yacoub, Essa
in Adult; Biological and medical sciences; Brain Mapping
2014
The magnocellular (M) and parvocellular (P) subdivisions of primate LGN are known to process complementary types of visual stimulus information, but a method for noninvasively defining these subdivisions in humans has proven elusive. As a result, the functional roles of these subdivisions in humans have not been investigated physiologically. To functionally map the M and P subdivisions of human LGN, we used high-resolution fMRI at high field (7T and 3T) together with a combination of spatial, temporal, luminance, and chromatic stimulus manipulations. We found that stimulus factors that differentially drive magnocellular and parvocellular neurons in primate LGN also elicit differential BOLD fMRI responses in human LGN and that these responses exhibit a spatial organization consistent with the known anatomical organization of the M and P subdivisions. In test–retest studies, the relative responses of individual voxels to M-type and P-type stimuli were reliable across scanning sessions on separate days and across sessions at different field strengths. The ability to functionally identify magnocellular and parvocellular regions of human LGN with fMRI opens possibilities for investigating the functions of these subdivisions in human visual perception, in patient populations with suspected abnormalities in one of these subdivisions, and in visual cortical processing streams arising from parallel thalamocortical pathways.
• Functional mapping of the M and P subdivisions of human LGN with 7T and 3T fMRI
• Stimuli based on electrophysiology in non-human primates allow human LGN mapping
• Spatial organization of M-like and P-like voxels consistent with known LGN anatomy
• Reliable M/P maps across test–retest sessions for individual subjects
Journal Article
Suboptimal but intact integration of Bayesian components during perceptual decision-making in autism
2025
Background
Alterations in sensory perception, a core phenotype of autism, are attributed to imbalanced integration of sensory information and prior knowledge during perceptual statistical (Bayesian) inference. This hypothesis has gained momentum in recent years, partly because it can be implemented both at the computational level, as in Bayesian perception, and at the level of canonical neural microcircuitry, as in predictive coding. However, empirical investigations have yielded conflicting results with evidence remaining limited. Critically, previous studies did not assess the independent contributions of priors and sensory uncertainty to the inference.
Method
We addressed this gap by quantitatively assessing both the independent and interdependent contributions of priors and sensory uncertainty to perceptual decision-making in autistic and non-autistic individuals (N = 126) during an orientation categorization task.
Results
Contrary to common views, autistic individuals integrated the two Bayesian components into their decision behavior, and did so indistinguishably from non-autistic individuals. Both groups adjusted their decision criteria in a suboptimal manner.
Limitations
This study focuses on explicit priors in a perceptual categorization task and high-functioning adults. Thus, although the findings provide strong evidence against a general and basic alteration in prior integration in autism, they cannot rule out more specific cases of reduced prior effect – such as due to implicit prior learning, particular level of decision making (e.g., social), and level of functioning of the autistic person.
Conclusions
These results reveal intact inference for autistic individuals during perceptual decision-making, challenging the notion that Bayesian computations are fundamentally altered in autism.
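The study's central quantity is the combination of a category prior with sensory uncertainty during orientation categorization. As a point of reference, the toy sketch below shows the standard Bayesian form of that computation; category means, widths, and noise levels are hypothetical, and this is not the study's actual task or model.

```python
import math

def bayesian_category_choice(x, sigma_sensory, p_cat1=0.5,
                             mu1=-5.0, mu2=5.0, sigma_cat=3.0):
    """Toy Bayesian orientation categorization.

    The observer receives a noisy measurement x of the true orientation and
    combines the likelihood of x under each Gaussian category with the
    prior category probability, choosing the category with higher posterior.
    """
    var = sigma_sensory ** 2 + sigma_cat ** 2  # measurement variance under a category

    def loglik(mu):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

    log_posterior_odds = (loglik(mu1) - loglik(mu2)) + math.log(p_cat1 / (1 - p_cat1))
    return 1 if log_posterior_odds > 0 else 2
```

The interdependence the study tests falls out of this form: as sensory uncertainty grows, the likelihood flattens, so the same prior pulls the decision more strongly.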
Journal Article
An auditory-visual tradeoff in susceptibility to clutter
2021
Sensory cortical mechanisms combine auditory or visual features into perceived objects. This is difficult in noisy or cluttered environments. Knowing that individuals vary greatly in their susceptibility to clutter, we wondered whether there might be a relation between an individual's auditory and visual susceptibilities to clutter. In auditory masking, background sound makes spoken words unrecognizable. When masking arises due to interference at central auditory processing stages, beyond the cochlea, it is called informational masking. A strikingly similar phenomenon in vision, called visual crowding, occurs when nearby clutter makes a target object unrecognizable, despite being resolved at the retina. We here compare susceptibilities to auditory informational masking and visual crowding in the same participants. Surprisingly, across participants, we find a negative correlation (R = –0.7) between susceptibility to informational masking and crowding: Participants who have low susceptibility to auditory clutter tend to have high susceptibility to visual clutter, and vice versa. This reveals a tradeoff in the brain between auditory and visual processing.
Journal Article
Visual temporal attention from perception to computation
2024
Visual attention unfolds across space and time to prioritize a subset of incoming visual information. Distinct in key ways from spatial attention, temporal attention is a growing research area with its own conceptual and mechanistic territory. Here I review key conceptual issues, data and models in the field of visual temporal attention, with an emphasis on voluntary temporal attention. I first situate voluntary temporal attention in the broader domains of temporal attention and attentional dynamics, with the goal of organizing concepts and findings related to dynamic attention. Next, I review findings that voluntary temporal attention affects visual perception in a selective fashion — prioritizing certain time points at the expense of other time points. Selectivity is a hallmark of attention and implies a limitation in computational resources that prevents sustained maximal processing of all time points. I discuss a computational model of temporal attention that captures limited resources across time and review other models of attentional dynamics. Finally, I discuss productive future directions for the study of temporal attention.
Visual temporal attention involves the prioritization of certain points in time at the expense of others. In this Review, Denison synthesizes experimental results and computational models of voluntary temporal attention and distinguishes it from related phenomena.
Journal Article