Catalogue Search | MBRL
Explore the vast range of titles available.
17,077 result(s) for "Visual field"
Luminance and thresholding limitations of virtual reality headsets for visual field testing
by Lee, Changseok; Eng, Vivian; Redden, Liam
in Automation; Brightness (Photometry); Computer applications
2025
To investigate the luminance capacity and achievable threshold levels of commercially employed virtual reality (VR) devices for visual field testing.
This two-part study included (1) a literature review of VR headsets used for perimetry, with luminance data extracted from publications and manufacturers' technical specifications, and (2) an empirical evaluation of the three most frequently used VR headsets in the literature using a custom virtual testing environment.
The three most frequently used VR devices for visual field testing were the Pico Neo, Oculus Quest, and HTC Vive. The maximum reported luminance was 250 cd/m2, for the HTC Vive Pro. Information on how luminance was measured was not consistently available; where reported, only handheld luminance meters were used. Empirical measurements showed that handheld luminance meters significantly overestimate luminance compared with standard spectroradiometers. Measured luminance varied significantly with aperture size and decreased for peripheral stimuli out to 30 degrees of eccentricity. Assuming a conventional background of 10 cd/m2, the lowest achievable threshold was 16 dB, with the HTC Vive, corresponding to a central luminance of 80 cd/m2. The Oculus Quest 2 and Pico Neo 3 had a minimum threshold of 20 dB.
Commercially available VR devices do not meet the luminance requirements or threshold sensitivities for visual field testing. Current VR technology is neither designed for, nor capable of, thresholding at mid-to-low dB ranges, which limits accuracy in diagnosing and monitoring defects seen in glaucoma. Translational Relevance: This study highlights the technical limitations of current commercially available VR devices for visual field testing and significant variables in evaluating the luminance performance of these devices.
Journal Article
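The correspondence between a 16 dB threshold and a central luminance of 80 cd/m2 follows from the conventional perimetric decibel scale, in which stimulus luminance is expressed as attenuation from a fixed maximum. A minimal sketch, assuming the common Humphrey maximum of 10,000 apostilbs (≈3183 cd/m2), a convention the abstract does not state explicitly:

```python
import math

# Conventional SAP decibel scale: dB is attenuation relative to the
# maximum stimulus luminance. The 10,000 asb maximum is an assumption
# (common Humphrey convention), not taken from the abstract.
L_MAX_CDM2 = 10_000 / math.pi  # apostilbs to cd/m^2, ~3183 cd/m^2

def db_to_luminance(db: float) -> float:
    """Stimulus luminance (cd/m^2) for a given attenuation in dB."""
    return L_MAX_CDM2 * 10 ** (-db / 10)

def luminance_to_db(luminance: float) -> float:
    """Attenuation in dB for a given stimulus luminance (cd/m^2)."""
    return -10 * math.log10(luminance / L_MAX_CDM2)

# A 16 dB stimulus corresponds to roughly 80 cd/m^2, matching the
# HTC Vive floor reported above.
print(round(db_to_luminance(16)))  # → 80
```

Under this convention, a headset whose display tops out near 80 cd/m2 simply cannot present the dimmer stimuli (higher dB values) that mid-to-low-sensitivity thresholding requires.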
Evaluation of visual function within the central 10 degrees using IMOvifa™ 24plus (1-2)
2025
The IMOvifa™ perimeter with a 24plus (1-2) testing mode has additional measurement points within the central 10 degrees, which may help evaluate the visual field within this area. Here, we comparatively evaluated the IMOvifa™ 24plus (1-2) and HFA 10-2 for the first time.
We included 30 patients (48 eyes) who underwent HFA 24-2 Swedish Interactive Threshold Algorithm Standard and IMOvifa™ 24plus (1-2) Ambient Interactive Zippy Estimated tests on the same day and HFA 10-2 within six months. We used Spearman's rank correlation coefficient to analyze the mean deviation (MD) and pattern standard deviation (PSD) between HFA 10-2 and IMOvifa™. The central 10-degree visual field was divided into four sectors, and concordance of visual field defects between IMOvifa™ 24plus (1-2) and HFA 10-2 was evaluated using kappa analysis. Additionally, all sectors showing a sensitivity of 0 dB on the HFA 24-2 were assessed for the presence and agreement of residual visual field in HFA 10-2 and IMOvifa™ 24plus (1-2).
The MD (r = 0.843/0.804) and PSD (r = 0.852/0.763) of IMOvifa™ 24plus (1-2) and HFA 24-2, respectively, correlated strongly with those of HFA 10-2. Regarding the ability to detect visual field defects within the central 10 degrees, agreement with HFA 10-2 was κ = 0.715 (0.611, 0.819) for IMOvifa™ 24plus (1-2) and κ = 0.754 (0.654, 0.854) for HFA 24-2. In the evaluation of residual visual field, IMOvifa™ 24plus (1-2) detected residual visual function in 100% of cases where HFA 10-2 indicated residual function.
The IMOvifa™ 24plus (1-2) may have a higher ability to detect defects in certain areas of the visual field than HFA 24-2 and may also detect residual visual function. However, it cannot fully substitute for the 10-2 test, which remains necessary for evaluating visual field defects within the central 10 degrees.
Journal Article
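The kappa analysis above measures agreement between the two perimeters' per-sector defect calls, corrected for chance agreement. A minimal sketch for binary ratings; the sector labels below are hypothetical, not the study's data:

```python
import numpy as np

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's kappa for two binary raters: observed agreement
    corrected for the agreement expected by chance alone."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = np.mean(a == b)
    # Chance agreement from each rater's marginal positive rate
    p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical per-sector defect calls (1 = defect) for two perimeters
imo = np.array([1, 1, 0, 0, 1, 0, 1, 0])
hfa = np.array([1, 1, 0, 0, 1, 0, 0, 0])
print(round(cohens_kappa(imo, hfa), 3))  # → 0.75
```

Values in the 0.7-0.75 range, as reported above, indicate substantial but imperfect sector-level agreement between the devices.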
Visual Field Prediction using Recurrent Neural Network
2019
Artificial intelligence capabilities have greatly improved in recent years. One deep learning algorithm, the recurrent neural network (RNN), has shown an outstanding ability in sequence labeling and prediction tasks for sequential data. We built a reliable visual field prediction algorithm using an RNN and evaluated its performance against the conventional pointwise ordinary linear regression (OLR) method. A total of 1,408 eyes were used as a training dataset, and another dataset, comprising 281 eyes, was used as a test dataset. Five consecutive visual field tests were provided to the constructed RNN as input, and a 6th visual field test was compared with the output of the RNN. The performance of the RNN was compared with that of OLR by predicting the 6th visual field in the test dataset. The overall prediction performance of the RNN was significantly better than that of OLR. The pointwise prediction error of the RNN was significantly smaller than that of the OLR in most areas known to be vulnerable to glaucomatous damage. The RNN was also more robust and reliable with regard to worsening in the visual field examination. In clinical practice, the RNN model can therefore assist in decision-making for further treatment of glaucoma.
Journal Article
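The prediction setup above (five consecutive fields in, a sixth field out) can be sketched as a toy Elman-style RNN forward pass. The layer sizes and the untrained random weights below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N_POINTS = 54   # 24-2 test locations (illustrative; the abstract does
                # not specify the exact input encoding)
HIDDEN = 32

# Untrained toy weights; in practice these would be learned from the
# 1,408-eye training set
Wx = rng.normal(0, 0.1, (HIDDEN, N_POINTS))
Wh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
Wo = rng.normal(0, 0.1, (N_POINTS, HIDDEN))

def predict_next_vf(vf_sequence: np.ndarray) -> np.ndarray:
    """Elman-style RNN: consume 5 consecutive fields, emit the 6th."""
    h = np.zeros(HIDDEN)
    for vf in vf_sequence:              # one step per visual field test
        h = np.tanh(Wx @ vf + Wh @ h)   # recurrent state update
    return Wo @ h                       # linear readout of predicted dBs

five_fields = rng.uniform(0, 35, (5, N_POINTS))  # synthetic dB values
sixth = predict_next_vf(five_fields)
print(sixth.shape)  # → (54,)
```

The recurrent state lets the model weight the whole test history when extrapolating, which is what gives it an edge over fitting an independent linear trend at each of the 54 points.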
Synaptic organization of visual space in primary visual cortex
by Iacaruso, M. Florencia; Gasler, Ioana T.; Hofer, Sonja B.
2017
Mapping the organization of excitatory inputs onto the dendritic spines of individual mouse visual cortex neurons reveals how inputs representing features from the extended visual scene are organized and establishes a computational unit suited to amplify contours and elongated edges.
Synapses set the scene
Processing a visual stimulus requires various connections between neurons, each encoding particular features that integrate to generate the overall representation of a scene. The precise logic of this connectivity and what information an individual neuron receives regarding various parts of the visual field are unknown. Here, Sonja Hofer and colleagues mapped the organization of excitatory inputs onto the dendritic spines of individual mouse visual cortex neurons. Inputs representing similar visual features in similar visual field positions were more likely to cluster on neighbouring spines, and inputs beyond the receptive field of the observed neuron were located on higher-order dendritic branches. Connections between neurons with dissimilar receptive fields were more likely when these fields were spatially displaced. These arrangements establish a computational unit suited to amplify contours and elongated edges, features that are common elements of our visual space.
How a sensory stimulus is processed and perceived depends on the surrounding sensory scene. In the visual cortex, contextual signals can be conveyed by an extensive network of intra- and inter-areal excitatory connections that link neurons representing stimulus features separated in visual space [1-4]. However, the connectional logic of visual contextual inputs remains unknown; it is not clear what information individual neurons receive from different parts of the visual field, nor how this input relates to the visual features that a neuron encodes, defined by its spatial receptive field. Here we determine the organization of excitatory synaptic inputs responding to different locations in the visual scene by mapping spatial receptive fields in dendritic spines of mouse visual cortex neurons using two-photon calcium imaging. We find that neurons receive functionally diverse inputs from extended regions of visual space. Inputs representing similar visual features from the same location in visual space are more likely to cluster on neighbouring spines. Inputs from visual field regions beyond the receptive field of the postsynaptic neuron often synapse on higher-order dendritic branches. These putative long-range inputs are more frequent and more likely to share the preference for oriented edges with the postsynaptic neuron when the receptive field of the input is spatially displaced along the axis of the receptive field orientation of the postsynaptic neuron. Therefore, the connectivity between neurons with displaced receptive fields obeys a specific rule, whereby they connect preferentially when their receptive fields are co-oriented and co-axially aligned. This organization of synaptic connectivity is ideally suited for the amplification of elongated edges, which are enriched in the visual environment, and thus provides a potential substrate for contour integration and object grouping.
Journal Article
Predicting eyes at risk for rapid glaucoma progression based on an initial visual field test using machine learning
2021
To assess whether machine learning algorithms (MLA) can predict eyes that will undergo rapid glaucoma progression based on an initial visual field (VF) test.
Retrospective analysis of longitudinal data.
175,786 VFs (22,925 initial VFs) from 14,217 patients who completed ≥5 reliable VFs at academic glaucoma centers were included.
Summary measures and reliability metrics from the initial VF, together with age, were used to train MLA designed to predict the likelihood of rapid progression. The neural network model was additionally trained with point-wise threshold data alongside the summary measures, reliability metrics, and age. 80% of eyes were used as a training set and 20% as a test set. MLA test set performance was assessed using the area under the receiver operating characteristic curve (AUC). The performance of models trained on initial VF data alone was compared with that of models trained on data from the first two VFs.
Accuracy in predicting future rapid progression defined as MD worsening more than 1 dB/year.
1,968 eyes (8.6%) underwent rapid progression. The support vector machine model (AUC 0.72 [95% CI 0.70-0.75]) most accurately predicted rapid progression when trained on initial VF data. Artificial neural network, random forest, logistic regression, and naïve Bayes classifiers produced AUCs of 0.72, 0.70, 0.69, and 0.68, respectively. Models trained on data from the first two VFs performed no better than the top models trained on the initial VF alone. Based on the odds ratio (OR) from logistic regression and variable importance plots from the random forest model, older age (OR: 1.41 per 10-year increment [95% CI: 1.34 to 1.08]) and higher pattern standard deviation (OR: 1.31 per 5-dB increment [95% CI: 1.18 to 1.46]) were the variables in the initial VF most strongly associated with rapid progression.
MLA can be used to predict eyes at risk for rapid progression with modest accuracy based on an initial VF test. Incorporating additional clinical data to the current model may offer opportunities to predict patients most likely to rapidly progress with even greater accuracy.
Journal Article
ISCEV standard for clinical visual evoked potentials: (2016 update)
by
Mizota, Atsushi
,
Brigell, Mitchell
,
Bach, Michael
in
Electrophysiology - standards
,
Evoked Potentials, Visual
,
Humans
2016
Visual evoked potentials (VEPs) can provide important diagnostic information regarding the functional integrity of the visual system. This document updates the ISCEV standard for clinical VEP testing and supersedes the 2009 standard. The main changes in this revision are the acknowledgment that pattern stimuli can be produced using a variety of technologies with an emphasis on the need for manufacturers to ensure that there is no luminance change during pattern reversal or pattern onset/offset. The document is also edited to bring the VEP standard into closer harmony with other ISCEV standards. The ISCEV standard VEP is based on a subset of stimulus and recording conditions that provide core clinical information and can be performed by most clinical electrophysiology laboratories throughout the world. These are: (1) Pattern-reversal VEPs elicited by checkerboard stimuli with large 1 degree (°) and small 0.25° checks. (2) Pattern onset/offset VEPs elicited by checkerboard stimuli with large 1° and small 0.25° checks. (3) Flash VEPs elicited by a flash (brief luminance increment) which subtends a visual field of at least 20°. The ISCEV standard VEP protocols are defined for a single recording channel with a midline occipital active electrode. These protocols are intended for assessment of the eye and/or optic nerves anterior to the optic chiasm. Extended, multi-channel protocols are required to evaluate postchiasmal lesions.
Journal Article
Visual Field Testing with Head-Mounted Perimeter ‘imo’
2016
We developed a new portable head-mounted perimeter, "imo", which performs visual field (VF) testing under flexible conditions without a dark room. Besides the monocular eye test, imo can present a test target randomly to either eye without occlusion (a binocular random single eye test). The performance of imo was evaluated.
Using full HD transmissive LCD and high intensity LED backlights, imo can display a test target under the same test conditions as the Humphrey Field Analyzer (HFA). The monocular and binocular random single eye tests by imo and the HFA test were performed on 40 eyes of 20 subjects with glaucoma. VF sensitivity results by the monocular and binocular random single eye tests were compared, and these test results were further compared to those by the HFA. The subjects were asked whether they noticed which eye was being tested during the test.
The mean sensitivity (MS) obtained with the HFA highly correlated with the MS by the imo monocular test (R: r = 0.96, L: r = 0.94, P < 0.001) and the binocular random single eye test (R: r = 0.97, L: r = 0.98, P < 0.001). The MS values by the monocular and binocular random single eye tests also highly correlated (R: r = 0.96, L: r = 0.95, P < 0.001). No subject could detect which eye was being tested during the examination.
The perimeter imo can obtain VF sensitivity highly consistent with that of the standard automated perimeter. The binocular random single eye test provides a non-occlusion test condition without the examinee being aware of which eye is being tested.
Journal Article
Virtual reality perimetry compared to standard automated perimetry in adults with glaucoma: A systematic review
2025
The purpose of this systematic review was to consolidate and summarize available data comparing virtual reality perimetry (VRP) with standard automated perimetry (SAP) in adults with glaucoma. Understanding the utility and diagnostic performance of emerging VRP technology may expand access to visual field testing but requires evidence-based validation.
A systematic literature search was conducted in 3 databases (PubMed Central, Embase, and Cochrane Central Register of Controlled Trials) from the date of inception to 10/09/2024. Eligibility criteria included randomized controlled trials or prospective or retrospective cohort studies that compared different modalities of VRP to SAP in adults >18 years of age with glaucoma. Studies were excluded if they were review articles, letters, case reports, or abstract-only papers, or if the full text was unavailable or not in English. Identified studies were formally evaluated for risk of bias using the Newcastle-Ottawa tool. The study protocol was prospectively registered with PROSPERO in May 2023 (registration number: CRD42023429071).
The literature search yielded 1657 results. After deduplication and title and abstract screening, 14 studies met the inclusion criteria and were included in the final systematic review. Ten different VRP devices, each compared against the Humphrey Field Analyzer or Octopus 900, were included in our study: Oculus Quest, Smartphone-based Campimetry, Toronto Portable Perimeter, VirtualEye, Advance Vision Analyzer, VisuALL, Vivid Vision Perimeter, C3 fields visual field analyzer, Radius, and Virtual Field. Overall, published studies of VRP are promising; however, more work is required to better evaluate these devices, notably their test-retest repeatability.
VRP holds strong potential to evaluate visual fields in adults with glaucoma, though further data are needed to validate emerging technologies and testing protocols. Eye care providers may consider using these devices to monitor certain adults with glaucoma.
Journal Article
Topographic connectivity reveals task-dependent retinotopic processing throughout the human brain
2021
The human visual system is organized as a hierarchy of maps that share the topography of the retina. Known retinotopic maps have been identified using simple visual stimuli under strict fixation, conditions different from everyday vision which is active, dynamic, and complex. This means that it remains unknown how much of the brain is truly visually organized. Here I demonstrate widespread stable visual organization beyond the traditional visual system, in default-mode network and hippocampus. Detailed topographic connectivity with primary visual cortex during movie-watching, resting-state, and retinotopic-mapping experiments revealed that visual–spatial representations throughout the brain are warped by cognitive state. Specifically, traditionally visual regions alternate with default-mode network and hippocampus in preferentially representing the center of the visual field. This visual role of default-mode network and hippocampus would allow these regions to interface between abstract memories and concrete sensory impressions. Together, these results indicate that visual–spatial organization is a fundamental coding principle that structures the communication between distant brain regions.
Journal Article
Forecasting future Humphrey Visual Fields using deep learning
2019
To determine if deep learning networks could be trained to forecast future 24-2 Humphrey Visual Fields (HVFs).
All data points from consecutive 24-2 HVFs from 1998 to 2018 were extracted from a university database. Ten-fold cross-validation with a held-out test set was used across the three main phases of model development: model architecture selection, dataset combination selection, and time-interval model training with transfer learning, to train a deep learning artificial neural network capable of generating a point-wise visual field prediction. The point-wise mean absolute error (PMAE) and the difference in Mean Deviation (MD) between predicted and actual future HVFs were calculated.
More than 1.7 million perimetry points were extracted, to the hundredth of a decibel, from 32,443 24-2 HVFs. The best-performing model, CascadeNet-5, with 20 million trainable parameters, was selected. The overall point-wise PMAE for the test set was 2.47 dB (95% CI: 2.45 dB to 2.48 dB), and deep learning showed a statistically significant improvement over linear models. The 100 fully trained models successfully predicted future HVFs in glaucomatous eyes up to 5.5 years in the future, with a correlation of 0.92 between the MD of predicted and actual future HVFs and an average difference of 0.41 dB.
Using unfiltered real-world datasets, deep learning networks show the ability to not only learn spatio-temporal HVF changes but also to generate predictions for future HVFs up to 5.5 years, given only a single HVF.
Journal Article
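The point-wise mean absolute error (PMAE) reported above is simply the absolute dB error averaged over all test locations of the predicted field. A minimal sketch; the 54-location grid and the synthetic fields are illustrative assumptions:

```python
import numpy as np

def pmae(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Point-wise mean absolute error (dB) between a predicted and an
    actual HVF, averaged over all test locations."""
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical fields over 54 locations of a 24-2 grid
rng = np.random.default_rng(1)
actual = rng.uniform(0, 35, 54)                  # synthetic dB values
predicted = actual + rng.normal(0, 2.5, 54)      # ~2.5 dB pointwise noise
print(round(pmae(predicted, actual), 2))
```

A PMAE of 2.47 dB means the model's typical per-location error is smaller than the test-retest variability often seen in perimetry, which is why it outperformed the linear baselines.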