8,923 results for "Visual fixation"
Investigating Consumer Preferences for Production Process Labeling Using Visual Attention Data
A second-price auction with eye-movement recordings was used to investigate consumer preferences for labels disclosing the presence and absence of specific types of insecticides, and to explore the relationship between visual attention and consumer purchasing behavior. The findings contribute to the literature in three ways. First, visual attention patterns were endogenously determined by personal knowledge and pollinator conservation activities: less knowledgeable or less engaged participants fixated more often, and for longer durations, on the product as a whole rather than on other information. Second, the first- and last-gaze cascade effect was confirmed by identifying a significant negative effect of participants' first and last gaze visits to neonicotinoid labels on their bid values. Third, new evidence was added to the existing literature that the link between visual attention and consumer valuation and preference may be weak. Our results suggest that visual attention can provide useful information for understanding participants' bidding behavior; however, the evidence indicates that visual attention measures may not be directly linked with decision making.
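
As a point of reference for the elicitation mechanism named above, the sketch below implements the standard second-price (Vickrey) auction rule: the highest bidder wins but pays only the second-highest bid, which makes bidding one's true valuation the dominant strategy and is why such auctions are used to elicit willingness to pay. This is a generic illustration with invented bids, not code or data from the study.

    def second_price_auction(bids):
        """Return (winner_index, price) for a sealed-bid second-price auction.

        The highest bidder wins but pays only the second-highest bid; under
        this rule, truthfully bidding one's valuation is the dominant strategy.
        """
        if len(bids) < 2:
            raise ValueError("need at least two bids")
        order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        return order[0], bids[order[1]]

    # Example: bidder 2 bids highest (4.50) but pays the second bid (3.25).
    print(second_price_auction([2.00, 3.25, 4.50, 1.10]))  # -> (2, 3.25)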
Two Fixations Suffice in Face Recognition
It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.
Head-Mounted Eye Tracking: A New Method to Describe Infant Looking
Despite hundreds of studies describing infants' visual exploration of experimental stimuli, researchers know little about where infants look during everyday interactions. The current study describes the first method for studying visual behavior during natural interactions in mobile infants. Six 14-month-old infants wore a head-mounted eye tracker that recorded gaze during free play with their mothers. Results revealed that infants' visual exploration is opportunistic and depends on the availability of information and the constraints of infants' own bodies. Looks to mothers' faces were rare following infant-directed utterances but were more likely if mothers were sitting at the infants' eye level. Gaze toward the destination of infants' hand movements was common during manual actions and crawling, but looks toward obstacles during leg movements were less frequent.
Look Here, Eye Movements Play a Functional Role in Memory Retrieval
Research on episodic memory has established that spontaneous eye movements occur to spaces associated with retrieved information even if those spaces are blank at the time of retrieval. Although it has been claimed that such looks to "nothing" can function as facilitatory retrieval cues, there is currently no conclusive evidence for such an effect. In the present study, we addressed this fundamental issue using four direct eye manipulations in the retrieval phase of an episodic memory task: (a) free viewing on a blank screen, (b) maintaining central fixation, (c) looking inside a square congruent with the location of the to-be-recalled objects, and (d) looking inside a square incongruent with the location of the to-be-recalled objects. Our results provide novel evidence of an active and facilitatory role of gaze position during memory retrieval and demonstrate that memory for the spatial relationship between objects is more readily affected than memory for intrinsic object features.
Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
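
To make the mechanism concrete, the sketch below simulates a single trinary trial of an attentional drift-diffusion model in the spirit of the one described: each option accumulates noisy evidence toward its value relative to its competitors, the momentarily unfixated options are discounted by a factor theta, and the first accumulator to reach a threshold determines the choice. All parameter values (theta, d, noise, threshold) and the exact drift rule are illustrative assumptions, not the authors' fitted model.

    import random

    def simulate_trinary_addm(values, fixations, theta=0.3, d=0.002,
                              noise=0.02, threshold=1.0):
        """Simulate one trial of a trinary attentional drift-diffusion model.

        values    : list of three subjective values.
        fixations : iterable of (item_index, duration_ms) fixations.
        theta     : attentional discount applied to unfixated items.
        d         : drift scaling constant per millisecond.
        noise     : per-millisecond Gaussian noise s.d.
        Returns (choice, rt_ms), or (None, t) if no accumulator crossed
        threshold before the fixation sequence ran out.
        """
        E = [0.0, 0.0, 0.0]  # one evidence accumulator per option
        t = 0
        for fixated, duration in fixations:
            for _ in range(duration):
                t += 1
                # Each option drifts toward its (possibly discounted) value
                # relative to the mean of its competitors.
                for i in range(3):
                    v_i = values[i] * (1.0 if i == fixated else theta)
                    others = [values[j] * (1.0 if j == fixated else theta)
                              for j in range(3) if j != i]
                    E[i] += d * (v_i - sum(others) / 2.0) + random.gauss(0.0, noise)
                best = max(range(3), key=lambda i: E[i])
                if E[best] >= threshold:
                    return best, t
        return None, t

    # Example: option 0 is the most valuable and the most fixated.
    random.seed(1)
    fixes = [(0, 400), (1, 250), (0, 300), (2, 200), (0, 400)] * 5
    print(simulate_trinary_addm([4.0, 2.0, 1.0], fixes))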
At 6–9 months, human infants know the meanings of many common nouns
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others’ goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.
Classification of Children With Autism and Typical Development Using Eye-Tracking Data From Face-to-Face Conversations: Machine Learning Model Development and Performance Evaluation
Background: Previous studies have shown promising results in identifying individuals with autism spectrum disorder (ASD) by applying machine learning (ML) to eye-tracking data collected while participants viewed varying images (ie, pictures, videos, and web pages). Although gaze behavior is known to differ between face-to-face interaction and image-viewing tasks, no study has investigated whether eye-tracking data from face-to-face conversations can also accurately identify individuals with ASD. Objective: The objective of this study was to examine whether eye-tracking data from face-to-face conversations could classify children with ASD and typical development (TD). We further investigated whether combining features on visual fixation and length of conversation would achieve better classification performance. Methods: Eye tracking was performed on children with ASD and TD while they were engaged in face-to-face conversations (including 4 conversational sessions) with an interviewer. By implementing forward feature selection, four ML classifiers were used to determine the maximum classification accuracy and the corresponding features: support vector machine (SVM), linear discriminant analysis, decision tree, and random forest. Results: A maximum classification accuracy of 92.31% was achieved with the SVM classifier by combining features on both visual fixation and session length. The classification accuracy of the combined features was higher than that obtained using visual fixation features (maximum classification accuracy 84.62%) or session length (maximum classification accuracy 84.62%) alone. Conclusions: Eye-tracking data from face-to-face conversations could accurately classify children with ASD and TD, suggesting that ASD might be objectively screened in everyday social interactions. However, these results will need to be validated with a larger sample of individuals with ASD (varying in severity and balanced sex ratio) using data collected from different modalities (eg, eye tracking, kinematic, electroencephalogram, and neuroimaging). In addition, individuals with other clinical conditions (eg, developmental delay and attention deficit hyperactivity disorder) should be included in similar ML studies for detecting ASD.
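
The pipeline described in the Methods, a greedy forward feature search wrapped around a classifier, can be sketched generically with scikit-learn. The data below are synthetic stand-ins for the fixation and session-length features; nothing here reproduces the study's actual feature set, parameters, or results.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 60
    # Synthetic feature matrix standing in for fixation and session-length
    # features; the column meanings are placeholders, not the paper's.
    X = rng.normal(size=(n, 8))
    y = (X[:, 0] + 0.8 * X[:, 5] + rng.normal(scale=0.8, size=n) > 0).astype(int)

    svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))

    # Greedy forward selection: repeatedly add whichever feature most
    # improves cross-validated accuracy.
    selector = SequentialFeatureSelector(
        svm, n_features_to_select=3, direction="forward", cv=5
    )
    selector.fit(X, y)
    selected = selector.get_support(indices=True)

    acc = cross_val_score(svm, X[:, selected], y, cv=5).mean()
    print("selected feature indices:", selected, "cv accuracy: %.3f" % acc)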
Eye Movements During Mindless Reading
Mindless reading occurs when the eyes continue moving across the page even though the mind is thinking about something unrelated to the text. Despite how commonly it occurs, very little is known about mindless reading. The present experiment examined eye movements during mindless reading. Comparisons of fixation-duration measures collected during intervals of normal reading and intervals of mindless reading indicate that fixations during the latter were longer and less affected by lexical and linguistic variables than fixations during the former. Also, eye movements immediately preceding self-caught mind wandering were especially erratic. These results suggest that the cognitive processes that guide eye movements during normal reading are not engaged during mindless reading. We discuss the implications of these findings for theories of eye movement control in reading, for the distinction between experiential awareness and meta-awareness, and for reading comprehension.
Simultaneous Control of Attention by Multiple Working Memory Representations
Working memory representations play a key role in controlling attention by making it possible to shift attention to task-relevant objects. Visual working memory has a capacity of three to four objects, but recent studies suggest that only one representation can guide attention at a given moment. We directly tested this proposal by monitoring eye movements while observers performed a visual search task in which they attempted to limit attention to objects drawn in two colors. When the observers were motivated to attend to one color at a time, they searched many consecutive items of one color (long run lengths) and exhibited a delay prior to switching gaze from one color to the other (switch cost). In contrast, when they were motivated to attend to both colors simultaneously, observers' gaze switched back and forth between the two colors frequently (short run lengths), with no switch cost. Thus, multiple working memory representations can concurrently guide attention.
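
The two gaze measures this study turns on, run length and switch cost, are straightforward to compute. The sketch below derives both from a toy sequence of (color, fixation-duration) pairs; the data format and this particular operationalization of switch cost are assumptions for illustration, not the study's definitions.

    def run_lengths(fixations):
        """fixations: list of (color, duration_ms). Returns the lengths of
        consecutive same-color fixation runs."""
        runs, count, prev = [], 0, None
        for color, _ in fixations:
            if color == prev:
                count += 1
            else:
                if prev is not None:
                    runs.append(count)
                count, prev = 1, color
        runs.append(count)
        return runs

    def switch_cost(fixations):
        """Mean duration of fixations just before a color switch, minus the
        mean of within-run fixations; positive values indicate a delay
        before switching gaze between colors."""
        pre_switch, within = [], []
        for (c1, d1), (c2, _) in zip(fixations, fixations[1:]):
            (pre_switch if c1 != c2 else within).append(d1)
        return sum(pre_switch) / len(pre_switch) - sum(within) / len(within)

    gaze = [("red", 210), ("red", 190), ("red", 260), ("blue", 200),
            ("blue", 220), ("red", 280), ("red", 200)]
    print(run_lengths(gaze))   # -> [3, 2, 2]
    print(switch_cost(gaze))   # -> 20.0 (extra dwell before switching)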
Uncertainty in learning, choice, and visual fixation
Uncertainty plays a critical role in reinforcement learning and decision making. However, exactly how it influences behavior remains unclear. Multiarmed-bandit tasks offer an ideal test bed, since computational tools such as approximate Kalman filters can closely characterize the interplay between trial-by-trial values, uncertainty, learning, and choice. To gain additional insight into learning and choice processes, we obtained data from subjects’ overt allocation of gaze. The estimated value and estimation uncertainty of options influenced what subjects looked at before choosing; these same quantities also influenced choice, as additionally did fixation itself. A momentary measure of uncertainty in the form of absolute prediction errors determined how long participants looked at the obtained outcomes. These findings affirm the importance of uncertainty in multiple facets of behavior and help delineate its effects on decision making.
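
The approximate Kalman filter mentioned here maintains, for every bandit arm, a posterior mean (the trial-by-trial value estimate) and a posterior variance (the estimation uncertainty), and updates the chosen arm after each outcome. The sketch below is a textbook Kalman-filter learner for a restless multiarmed bandit with assumed noise parameters and a simple uncertainty-bonus choice rule; it is not the authors' fitted model.

    import random

    N_ARMS = 4
    OBS_NOISE = 16.0   # variance of reward observations (assumed)
    DIFFUSION = 1.0    # per-trial drift variance in arm values (assumed)

    mean = [0.0] * N_ARMS    # posterior mean value of each arm
    var = [100.0] * N_ARMS   # posterior variance (large = very uncertain)

    def kalman_step(arm, reward):
        """One trial: uncertainty diffuses for all arms, then the chosen
        arm's estimate is updated toward its reward prediction error."""
        for a in range(N_ARMS):
            var[a] += DIFFUSION                    # unobserved values drift
        gain = var[arm] / (var[arm] + OBS_NOISE)   # Kalman gain in [0, 1]
        mean[arm] += gain * (reward - mean[arm])   # larger gain, bigger update
        var[arm] *= (1.0 - gain)                   # observing shrinks uncertainty

    def choose():
        """Illustrative uncertainty-bonus choice rule (value + 1 s.d.)."""
        return max(range(N_ARMS), key=lambda a: mean[a] + var[a] ** 0.5)

    random.seed(0)
    true_values = [2.0, 5.0, 1.0, 3.0]
    for _ in range(200):
        arm = choose()
        kalman_step(arm, random.gauss(true_values[arm], 4.0))
    print([round(m, 1) for m in mean])  # estimates converge toward true values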