1,168 results for "Eye Movement Measurements"
Reconsidering the role of temporal order in spoken word recognition
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
Gaze and Movement Assessment (GaMA): Inter-site validation of a visuomotor upper limb functional protocol
Successful hand-object interactions require precise hand-eye coordination with continual movement adjustments. Quantitative measurement of this visuomotor behaviour could provide valuable insight into upper limb impairments. The Gaze and Movement Assessment (GaMA) was developed to provide protocols for simultaneous motion capture and eye tracking during the administration of two functional tasks, along with data analysis methods to generate standard measures of visuomotor behaviour. The objective of this study was to investigate the reproducibility of the GaMA protocol across two independent groups of non-disabled participants, with different raters using different motion capture and eye tracking technology. Twenty non-disabled adults performed the Pasta Box Task and the Cup Transfer Task. Upper body and eye movements were recorded using motion capture and eye tracking, respectively. Measures of hand movement, angular joint kinematics, and eye gaze were compared to those from a different sample of twenty non-disabled adults who had previously performed the same protocol with different technology, a different rater, and at a different site. Participants took longer to perform the tasks than those in the earlier study, although the relative time of each movement phase was similar. Measures that differed between the groups included hand distances travelled, hand trajectories, number of movement units, eye latencies, and peak angular velocities. Similarities included all hand velocity and grip aperture measures, eye fixations, and most peak joint angle and range of motion measures. This study confirmed the reproducibility of GaMA, despite a few differences introduced by learning effects, variation in task demonstration, and limitations of the kinematic model. GaMA accurately quantifies the typical behaviours of a non-disabled population, producing precise quantitative measures of hand function, trunk and angular joint kinematics, and associated visuomotor behaviour. This work supports the use of GaMA in populations with upper limb sensorimotor impairment.
Diagnostic accuracy of a smartphone bedside test to assess the fixation suppression of the vestibulo-ocular reflex: when nothing else matters
Objective: To validate a bedside test for objectively assessing fixation suppression of the vestibulo-ocular reflex (FS-VOR) in patients with a cerebellar syndrome and in healthy controls. Methods: The vestibulo-ocular reflex and its fixation suppression were assessed by video-nystagmography (VNG) in 20 healthy subjects (mean age 56 ± 15 years) and 19 patients with a cerebellar syndrome (mean age 70 ± 11 years). The statistical cutoff delineating normal from pathological FS-VOR was determined at the 2.5th percentile of the distribution in the healthy cohort. VNG was then compared to a bedside test in which eye movements were recorded with a smartphone while patients were rotated on a swivel chair at a defined speed and amplitude. Six blinded raters rated these videos as showing normal or pathological FS-VOR, and the ratings were compared to VNG. Results: VNG in healthy controls showed FS-VOR with a reduction of nystagmus beats by 95.0% ± 7.2 (mean ± SD); the statistical cutoff was set at 80.6%. Cerebellar patients reduced nystagmus beats by only 26.3% ± 25.1. Inter-rater agreement of the smartphone video ratings was 85%. The sensitivity of the video ratings in detecting impaired FS-VOR was 99%, the specificity 92%, and inter-test agreement 91%. Conclusion: The smartphone bedside test is an easily performed, reliable, sensitive, specific, and inexpensive alternative for assessing FS-VOR.
A Screening Tool to Measure Eye Contact Avoidance in Boys with Fragile X Syndrome
We examined the reliability, validity and factor structure of the Eye Contact Avoidance Scale (ECAS), a new 15-item screening tool designed to measure eye contact avoidance in individuals with fragile X syndrome (FXS). Internal consistency of the scale was acceptable to excellent and convergent validity with the Social Responsiveness Scale, Second Edition (SRS-2) and the Anxiety, Depression, and Mood Scale (ADAMS) was good. Boys with a comorbid ASD diagnosis obtained significantly higher scores on the ECAS compared to boys without ASD, when controlling for communication ability. A confirmatory factor analysis indicated that a two-factor model (avoidance and aversion) provided an excellent fit to the data. The ECAS appears to be a promising reliable and valid tool that could be employed as an outcome measure in future pharmacological/behavioral treatment trials for FXS.
Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection
Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could be a potential camouflage assessment tool. In this study, we quantify the relationship between the camouflage similarity index and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms, and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. Because it accounts for the human visual system and image structure, the method is an objective and effective camouflage performance evaluation tool, consistent with the subjective evaluation results.
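The UIQI referenced in the abstract above has a standard closed form (Wang and Bovik's index). The following is a minimal illustrative re-implementation under that assumption, not the study's own code; names and sample values are ours:

```python
# Hedged sketch of the Universal Image Quality Index (UIQI): a single-window
# version of Q = (4 * cov(x,y) * mean(x) * mean(y)) /
#               ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)).
# In practice the index is computed over sliding windows and averaged;
# this sketch treats the whole pixel sequence as one window.

def uiqi(x, y):
    """UIQI for two equal-length pixel sequences (grayscale intensities)."""
    n = len(x)
    assert n == len(y) and n > 1
    mx = sum(x) / n
    my = sum(y) / n
    # Sample (co)variances with Bessel's correction.
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    # Q jointly penalises loss of correlation, luminance distortion,
    # and contrast distortion; Q = 1 only for identical signals.
    return (4 * cxy * mx * my) / ((vx + vy) * (mx * mx + my * my))

target = [10, 12, 14, 200, 13, 11]        # toy "target vs background" patch
print(uiqi(target, target))               # ≈ 1.0 for identical patches
print(uiqi(target, [11, 13, 15, 90, 14, 12]))  # < 1 for a distorted patch
```

Under the camouflage-assessment reading, a target patch that scores a high Q against its background is well camouflaged (hard for human observers to segregate), which is how an image-quality index can stand in for detectability.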
How robust are wearable eye trackers to slow and fast head and body movements?
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
Facial perception of conspecifics: chimpanzees (Pan troglodytes) preferentially attend to proper orientation and open eyes
This paper reports on the use of an eye-tracking technique to examine how chimpanzees look at facial photographs of conspecifics. Six chimpanzees viewed a sequence of pictures presented on a monitor while their eye movements were measured by an eye tracker. The pictures presented conspecific faces with open or closed eyes in an upright or inverted orientation in a frame. The results demonstrated that chimpanzees looked at the eyes, nose, and mouth more frequently than would be expected on the basis of random scanning of faces. More specifically, they looked at the eyes longer than they looked at the nose and mouth when photographs of upright faces with open eyes were presented, suggesting that particular attention to the eyes represents a spontaneous face-scanning strategy shared among monkeys, apes, and humans. In contrast to the results obtained for upright faces with open eyes, the viewing times for the eyes, nose, and mouth of inverted faces with open eyes did not differ from one another. The viewing times for the eyes, nose, and mouth of faces with closed eyes did not differ when faces with closed eyes were presented in either an upright or inverted orientation. These results suggest the possibility that open eyes play an important role in the configural processing of faces and that chimpanzees perceive and process open and closed eyes differently.
Accelerating eye movement research via accurate and affordable smartphone eye tracking
Eye tracking has been widely used for decades in vision research, language and usability. However, most prior research has focused on large desktop displays using specialized eye trackers that are expensive and cannot scale. Little is known about eye movement behavior on phones, despite their pervasiveness and the large amount of time spent on them. We leverage machine learning to demonstrate accurate smartphone-based eye tracking without any additional hardware. We show that the accuracy of our method is comparable to state-of-the-art mobile eye trackers that are 100x more expensive. Using data from over 100 opted-in users, we replicate key findings from previous eye movement research on oculomotor tasks and saliency analyses during natural image viewing. In addition, we demonstrate the utility of smartphone-based gaze for detecting reading comprehension difficulty. Our results show the potential for scaling eye movement research by orders of magnitude to thousands of participants (with explicit consent), enabling advances in vision research, accessibility and healthcare.
Gaze Amplifies Value in Decision Making
When making decisions, people tend to choose the option they have looked at more. An unanswered question is how attention influences the choice process: whether it amplifies the subjective value of the looked-at option or instead adds a constant, value-independent bias. To address this, we examined choice data from six eye-tracking studies (Ns = 39, 44, 44, 36, 20, and 45, respectively) to characterize the interaction between value and gaze in the choice process. We found that the summed values of the options influenced response times in every data set and the gaze-choice correlation in most data sets, in line with an amplifying role of attention in the choice process. Our results suggest that this amplifying effect is more pronounced in tasks using large sets of familiar stimuli, compared with tasks using small sets of learned stimuli.
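The distinction this last study draws, attention amplifying value versus adding a constant bias, can be sketched as two toy evidence-accumulation drift rules. Function names and parameter values here are illustrative assumptions, not the study's fitted model:

```python
# Hedged sketch: momentary drift in favour of the currently looked-at option
# under the two accounts contrasted in the abstract above.

def drift_multiplicative(v_looked, v_other, theta=0.3):
    # Amplification account (as in attentional drift-diffusion models):
    # the unattended option's value is discounted by theta, so the gaze
    # advantage grows with the options' values.
    return v_looked - theta * v_other

def drift_additive(v_looked, v_other, bias=1.0):
    # Additive account: gaze adds a constant, value-independent bump.
    return (v_looked - v_other) + bias

# With two equally valued options, amplification predicts a gaze advantage
# that scales with overall value, while the additive account predicts a
# fixed advantage regardless of value.
for v in (1.0, 5.0):
    print(drift_multiplicative(v, v), drift_additive(v, v))
```

This is why the summed value of the options affecting response times and the gaze-choice correlation, as reported across the six data sets, is diagnostic of amplification rather than a constant bias.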