Catalogue Search | MBRL
33 result(s) for "Smithson, Hannah E."
How do (perceptual) distracters distract?
by Summerfield, Christopher; Dumbalska, Tsvetomira; Rudzka, Katarzyna
in Analysis; Biology and Life Sciences; Computer applications
2022
When a target stimulus occurs in the presence of distracters, decisions are less accurate. But how exactly do distracters affect choices? Here, we explored this question using measurement of human behaviour, psychophysical reverse correlation and computational modelling. We contrasted two models: one in which targets and distracters had independent influence on choices (independent model) and one in which distracters modulated choices in a way that depended on their similarity to the target (interaction model). Across three experiments, participants were asked to make fine orientation judgments about the tilt of a target grating presented adjacent to an irrelevant distracter. We found strong evidence for the interaction model, in that decisions were more sensitive when target and distracter were consistent relative to when they were inconsistent. This consistency bias occurred in the frame of reference of the decision, that is, it operated on decision values rather than on sensory signals, and surprisingly, it was independent of spatial attention. A normalization framework, where target features are normalized by the expectation and variability of the local context, successfully captures the observed pattern of results.
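The normalization framework summarized above can be sketched in a few lines. The functional form, the parameter names, and the numbers below are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def normalized_decision_value(target_tilt, distracter_tilt, sigma=1.0):
    """Toy sketch of contextual normalization: the target feature is
    normalized by the expectation (mean) and variability (spread) of the
    local context formed by the target and distracter together. The
    semi-saturation constant sigma prevents division by zero."""
    context = np.array([target_tilt, distracter_tilt])
    expectation = context.mean()
    variability = context.std()
    return (target_tilt - expectation) / (variability + sigma)

# Decision values for a +2 deg target paired with a consistent (+1 deg)
# or an inconsistent (-1 deg) distracter.
consistent = normalized_decision_value(2.0, 1.0)
inconsistent = normalized_decision_value(2.0, -1.0)
```

Because the abstract notes that the consistency bias operates on decision values rather than raw sensory signals, a sketch like this would be applied after orientation is mapped onto the decision axis, not to the sensory input itself.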
Journal Article
Low level visual features support robust material perception in the judgement of metallicity
2021
The human visual system is able to rapidly and accurately infer the material properties of objects and surfaces in the world. Yet an inverse optics approach—estimating the bi-directional reflectance distribution function of a surface, given its geometry and environment, and relating this to the optical properties of materials—is both intractable and computationally unaffordable. Rather, previous studies have found that the visual system may exploit low-level spatio-chromatic statistics as heuristics for material judgement. Here, we present results from psychophysics and modelling that support the use of image statistics heuristics in the judgement of metallicity—the quality of appearance that suggests an object is made from metal. Using computer graphics, we generated stimuli that varied along two physical dimensions: the smoothness of a metal object, and the evenness of its transparent coating. This allowed for the exploration of low-level image statistics, whilst ensuring that each stimulus was a naturalistic, physically plausible image. A conjoint-measurement task decoupled the contributions of these dimensions to the perception of metallicity. Low-level image features, as represented in the activations of oriented linear filters at different spatial scales, were found to correlate with the dimensions of the stimulus space, and decision-making models using these activations replicated observer performance in perceiving differences in metal smoothness and coating bumpiness, and judging metallicity. Importantly, the performance of these models did not deteriorate when objects were rotated within their simulated scene, with corresponding changes in image properties. We therefore conclude that low-level image features may provide reliable cues for the robust perception of metallicity.
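As a rough illustration of the kind of low-level feature named above (activations of oriented linear filters at different spatial scales), the sketch below builds a derivative-of-Gaussian filter at a given orientation and scale and summarizes its rectified response over an image. The filter shape and the summary statistic are assumptions for illustration, not the study's actual model:

```python
import numpy as np

def oriented_energy(image, theta, scale):
    """Mean absolute response of a crude oriented derivative filter.
    theta is the filter orientation in radians; scale sets the spatial
    extent of the underlying Gaussian envelope."""
    n = 3 * scale
    y, x = np.mgrid[-n:n + 1, -n:n + 1].astype(float)
    u = x * np.cos(theta) + y * np.sin(theta)    # coordinate along theta
    g = np.exp(-(x ** 2 + y ** 2) / (2 * scale ** 2))
    kernel = -u * g                              # derivative of Gaussian
    kernel /= np.abs(kernel).sum()
    # Valid-region 2-D convolution via explicit summation (no SciPy needed).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return np.abs(out).mean()

# A small feature vector over orientations and scales for one image.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
features = [oriented_energy(img, t, s)
            for t in (0, np.pi / 4, np.pi / 2) for s in (1, 2)]
```

In a decision-making model of the kind described, such feature vectors would be the inputs that correlate with the smoothness and coating dimensions of the stimulus space.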
Journal Article
Adaptive optics scanning laser ophthalmoscopy in a heterogenous cohort with Stargardt disease
2024
Image based cell-specific biomarkers will play an important role in monitoring treatment outcomes of novel therapies in patients with Stargardt (STGD1) disease and may provide information on the exact mechanism of retinal degeneration. This study reports retinal image features from conventional clinical imaging and from corresponding high-resolution imaging with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO) in a heterogeneous cohort of patients with Stargardt (STGD1) disease. This is a prospective observational study in which 16 participants with clinically and molecularly confirmed STGD1 and 7 healthy controls underwent clinical assessment and confocal AOSLO imaging. Clinical assessment included short-wavelength and near-infrared fundus autofluorescence, spectral-domain optical coherence tomography, and macular microperimetry. AOSLO images were acquired over a range of retinal eccentricities (0°–20°) and mapped to areas of interest from the clinical images. A regular photoreceptor mosaic was identified in areas of normal or near normal retinal structure on clinical images. Where clinical imaging indicated areas of retinal degeneration, the photoreceptor mosaic was disorganised and lacked unambiguous cones. Discrete hyper-reflective foci were identified in 9 participants with STGD1 within areas of retinal degeneration. A continuous RPE cell mosaic at the fovea was identified in one participant with an optical gap phenotype. The clinical heterogeneity observed in STGD1 is reflected in the findings on confocal AOSLO imaging.
Journal Article
Emulated retinal image capture (ERICA) to test, train and validate processing of retinal images
2021
High-resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system nor to human participants is necessary for development.
Journal Article
Automated cone photoreceptor detection using synthetic data and deep learning in confocal adaptive optics scanning laser ophthalmoscope images
by Young, Laura K.; Shah, Mital; Namburete, Ana I. L.
in 639/705; 692/308; Adaptive optics scanning laser ophthalmoscope
2026
Adaptive optics scanning laser ophthalmoscope (AOSLO) imaging enables the cone photoreceptor mosaic to be visualised in the living human eye. Performing quantitative analysis of these images requires identification of individual photoreceptors. This is typically performed by manual labelling, which is subjective, time consuming and not feasible on a large scale. Automated algorithms to replace manual labelling are required and deep learning-based methods provide an effective way of achieving this. However, this approach requires large volumes of annotated training data that are difficult to acquire. Synthetic data may help to bridge this lack of annotated training data. A U-Net configuration was trained using a large synthetic dataset of confocal AOSLO images generated using ERICA alongside a smaller dataset of real confocal AOSLO images (Milwaukee dataset). Model performance was assessed by calculating the Dice coefficient, a metric quantifying segmentation overlap, on both a real held-out test set and an independent real dataset (Oxford dataset). Results from this evaluation were benchmarked against expert labelling and two automated cone detection methods: a confocal convolutional neural network (CNN) (1), and a combined graph-theory and dynamic programming approach (2). The mean Dice coefficient compared to manual labelling was 0.989 (U-Net), 0.989 (confocal CNN), and 0.985 (graph-theory and dynamic programming) on the held-out test set. On the independent Oxford dataset, the U-Net achieved a mean Dice coefficient of 0.962 compared to manual labelling. Results show performance that is comparable to the gold standard of manual labelling and two automated cone detection methods. Furthermore, we demonstrate generalisability of this approach on an independent real dataset with images from higher retinal eccentricities.
This approach may be useful for quantitative analysis of the photoreceptor mosaic in patients with retinal disease to provide cell-specific imaging biomarkers from AOSLO images.
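The Dice coefficient used for evaluation above is simple to compute for binary segmentation masks. This sketch assumes cone labels are encoded as binary arrays, an illustrative choice rather than the paper's exact pipeline:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect agreement)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Two 4-pixel masks sharing 3 pixels: Dice = 2 * 3 / (4 + 4) = 0.75
a = np.array([[1, 1, 0], [1, 1, 0]])
b = np.array([[1, 1, 0], [1, 0, 1]])
```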
Journal Article
Modulation of the face- and body-selective visual regions by the motion and emotion of point-light face and body stimuli
by Atkinson, Anthony P.; Smithson, Hannah E.; Vuong, Quoc C.
in Adaptation, Physiological - physiology; Adult; Affect - physiology
2012
Neural regions selective for facial or bodily form also respond to facial or bodily motion in highly form-degraded point-light displays. Yet it is unknown whether these face-selective and body-selective regions are sensitive to human motion regardless of stimulus type (faces and bodies) or to the specific motion-related cues characteristic of their proprietary stimulus categories. Using fMRI, we show that facial and bodily motions activate selectively those populations of neurons that code for the static structure of faces and bodies. Bodily (vs. facial) motion activated body-selective EBA bilaterally and right but not left FBA, irrespective of whether observers judged the emotion or color-change in point-light angry, happy and neutral stimuli. Facial (vs. bodily) motion activated face-selective right and left FFA, but only during emotion judgments for right FFA. Moreover, the strength of responses to point-light bodies vs. faces positively correlated with voxelwise selectivity for static bodies but not faces, whereas the strength of responses to point-light faces positively correlated with voxelwise selectivity for static faces but not bodies. Emotional content carried by point-light form-from-motion cues was sufficient to enhance the activity of several regions, including bilateral EBA and right FFA and FBA. However, although the strength of emotional modulation in right and left EBA by point-light body movements was related to the degree of voxelwise selectivity to static bodies but not static faces, there was no evidence that emotional modulation in fusiform cortex occurred in a similarly stimulus category-selective manner. This latter finding strongly constrains the claim that emotionally expressive movements modulate precisely those neuronal populations that code for the viewed stimulus category.
Highlights:
- Point-light body vs. face motion activates body-selective but not face-selective regions.
- Point-light face vs. body motion activates left fusiform face area (FFA).
- Right FFA activation to point-light faces for emotion but not color judgments.
- Emotional modulation of body and face areas by point-light body but not face motion.
Journal Article
Self-organising coordinate transformation with peaked and monotonic gain modulation in the primate dorsal visual pathway
by Mender, Bedeho M. W.; Stringer, Simon M.; Smithson, Hannah E.
in Algorithms; Animals; Artificial intelligence
2018
We study a self-organising neural network model of how visual representations in the primate dorsal visual pathway are transformed from an eye-centred to head-centred frame of reference. The model has previously been shown to robustly develop head-centred output neurons with a standard trace learning rule, but only under limited conditions. Specifically, it fails when incorporating visual input neurons with monotonic gain modulation by eye position. Since eye-centred neurons with monotonic gain modulation are so common in the dorsal visual pathway, it is an important challenge to show how efferent synaptic connections from these neurons may self-organise to produce head-centred responses in a subpopulation of postsynaptic neurons. We show for the first time how a variety of modified, yet still biologically plausible, versions of the standard trace learning rule enable the model to perform a coordinate transformation from eye-centred to head-centred reference frames when the visual input neurons have monotonic gain modulation by eye position.
Journal Article
A Modeling Study of the Emergence of Eye Position Gain Fields Modulating the Responses of Visual Neurons in the Brain
by Smithson, Hannah E.; Navarro, Daniel M.; Stringer, Simon M.
in Coordinate transformations; Eye movements; eye-position
2020
The responses of many cortical neurons to visual stimuli are modulated by the position of the eye. This form of gain modulation by eye position does not change the retinotopic selectivity of the responses, but only changes the amplitude of the responses. Particularly in the case of cortical responses, this form of eye position gain modulation has been observed to be multiplicative. Multiplicative gain modulated responses are crucial to encode information that is relevant to high-level visual functions, such as stable spatial awareness, eye movement planning, visual-motor behaviours, and coordinate transformation. Here we first present a hardwired model of different functional forms of gain modulation, including peaked and monotonic modulation by eye position. We use a biologically realistic Gaussian function to model the influence of the position of the eye on the internal activation of visual neurons. Next we show how different functional forms of gain modulation by eye position may develop in a self-organising neural network model of visual neurons. A further contribution of our work is the investigation of the influence of the width of the eye position tuning curve on the development of a variety of forms of eye position gain modulation. Our simulation results show how the width of the eye position tuning curve affects the development of different forms of gain modulation of visual responses by the position of the eye.
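The hardwired model idea described above, a Gaussian retinotopic tuning curve multiplicatively scaled by a gain field over eye position, can be sketched as follows. All parameter values and names are illustrative assumptions, not those of the study:

```python
import numpy as np

def gaussian(x, center, width):
    """Unnormalized Gaussian profile."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def visual_response(retinal_pos, eye_pos, pref_retinal=0.0,
                    pref_eye=0.0, tuning_width=5.0, gain_width=20.0):
    """Multiplicative eye-position gain modulation: a Gaussian
    retinotopic tuning curve scaled by a Gaussian (peaked) gain
    field over eye position."""
    tuning = gaussian(retinal_pos, pref_retinal, tuning_width)
    gain = gaussian(eye_pos, pref_eye, gain_width)
    return tuning * gain

# Changing eye position rescales the response amplitude but leaves the
# retinotopic selectivity unchanged: the peak stays at pref_retinal.
x = np.linspace(-20, 20, 401)
r1 = visual_response(x, eye_pos=0.0)
r2 = visual_response(x, eye_pos=15.0)
```

A monotonic gain field, the other functional form discussed in the abstract, could be modelled by swapping the Gaussian gain for, say, a sigmoid of eye position while keeping the multiplication unchanged.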
Journal Article
How do (perceptual) distracters distract?
by Smithson, Hannah E.; Summerfield, Christopher; Dumbalska, Tsvetomira
in Analysis; Computer simulation; Computer-generated environments
2022
Journal Article
S-cone psychophysics
2014
We review the features of the S-cone system that appeal to the psychophysicist and summarize the celebrated characteristics of S-cone mediated vision. Two factors are emphasized: First, the fine stimulus control that is required to isolate putative visual mechanisms and second, the relationship between physiological data and psychophysical approaches. We review convergent findings from physiology and psychophysics with respect to asymmetries in the retinal wiring of S-ON and S-OFF visual pathways, and the associated treatment of increments and decrements in the S-cone system. Beyond the retina, we consider the lack of S-cone projections to superior colliculus and the use of S-cone stimuli in experimental psychology, for example to address questions about the mechanisms of visually driven attention. Careful selection of stimulus parameters enables psychophysicists to produce entirely reversible, temporary “lesions”, and to assess behavior in the absence of specific neural subsystems.
Journal Article