Search Results

Filters:
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location
19,138 results for "visual attention"
Involvement of the dorsal and ventral attention networks in visual attention span
Visual attention span (VAS), which refers to the window size of multielement parallel processing in a short time, plays an important role in higher-level cognition (e.g., reading), which requires encoding large amounts of incoming information. However, the neural mechanism underlying VAS remains a matter of debate. In the present study, a modified visual 1-back task was designed using nonverbal stimuli and nonverbal responses, with possible influences of target presence and position taken into account in order to isolate VAS processing more cleanly. A task-driven functional magnetic resonance imaging (fMRI) experiment was then performed with 30 healthy adult participants. Results of confirmatory and exploratory analyses consistently revealed that both the dorsal attention network (DAN) and the ventral attention network (VAN) were significantly activated during this visual simultaneous processing. In particular, stronger activation in the left superior parietal lobule (LSPL), compared with the bilateral inferior frontal gyri (IFGs), suggested greater involvement of the DAN than of the VAN in VAS-related processing. In addition, activation in the temporoparietal junctions (TPJs) was suppressed during multielement processing only in the target-absent condition. These results suggest the recruitment of the LSPL in covert attentional shifts and in the top-down control of VAS resource distribution during rapid visual simultaneous processing, as well as the involvement of the bilateral IFGs (especially the right IFG) in both VAS processing and inhibitory control. In summary, a visual 1-back task administered during fMRI scanning was designed to examine the neural mechanism of VAS, a basic cognitive ability that supports higher-level cognition such as reading by encoding large amounts of information. Confirmatory and exploratory analyses consistently revealed greater involvement of the DAN (e.g., the left superior parietal lobule) than of the VAN (e.g., the bilateral inferior frontal gyri) in this visual simultaneous processing. These findings may inform the diagnosis of atypical attention and of reading difficulties.
Neural dissociation of visual attention span and phonological deficits in developmental dyslexia: A hub‐based white matter network analysis
It has been suggested that developmental dyslexia may have two dissociable causes: a phonological deficit and a visual attention span (VAS) deficit. Yet neural evidence for such a dissociation is still lacking. This study adopted a data-driven approach to white matter network analysis to explore hubs and hub-related networks corresponding to VAS and phonological accuracy in a group of French dyslexic children aged 9 to 14 years. A double dissociation in brain-behavior relations was observed. Structural connectivity of the occipital-parietal network surrounding the left superior occipital gyrus hub accounted for individual differences in dyslexic children's VAS, but not in their phonological processing accuracy. In contrast, structural connectivity of two other networks, the temporal-parietal-occipital network surrounding the left middle temporal gyrus hub and the frontal network surrounding the left medial orbital superior frontal gyrus hub, accounted for individual differences in phonological processing accuracy, but not in VAS. These findings provide evidence for distinct neural circuits corresponding to VAS and phonological deficits in developmental dyslexia, and they point to connectivity-constrained white matter subnetwork dysfunction as a key principle for understanding individual differences in the cognitive deficits of developmental dyslexia. The study takes seriously the possibility of multiple causes of dyslexia (phonological vs. visual attention span) and provides the first evidence that these two types of cognitive deficit in dyslexic children are associated with distinct white-matter networks. It thereby offers a tentative correspondence between cognitive and neuroanatomical subtypes of dyslexia, enhancing our understanding of the relation between structure and function.
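The hub identification procedure above is specific to that study. Purely as an illustration of the general idea of finding hub regions in a structural connectivity network, the sketch below applies a common degree-based criterion to a synthetic connectivity matrix; the threshold rule, variable names, and data are assumptions, not the authors' pipeline.

    # Illustrative sketch only: a common degree-based heuristic for flagging "hub"
    # nodes in a structural (white-matter) connectivity network. The synthetic
    # matrix, names, and threshold rule are assumptions, not the paper's pipeline.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n_regions = 90                               # hypothetical parcellation size
    conn = rng.random((n_regions, n_regions))    # stand-in connectivity matrix
    conn = (conn + conn.T) / 2                   # symmetrize
    np.fill_diagonal(conn, 0)

    G = nx.from_numpy_array(conn)                # weighted, undirected graph
    strength = dict(G.degree(weight="weight"))   # weighted degree ("nodal strength")

    values = np.array(list(strength.values()))
    threshold = values.mean() + values.std()     # one common hub criterion
    hubs = [node for node, s in strength.items() if s > threshold]
    print("candidate hub regions:", hubs)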
Assessment of the Autism Spectrum Disorder Based on Machine Learning and Social Visual Attention: A Systematic Review
The assessment of autism spectrum disorder (ASD) is based on semi-structured procedures addressed to children and caregivers. Such methods rely on the evaluation of behavioural symptoms rather than on the objective evaluation of psychophysiological underpinnings. Advances in research have provided evidence for modern procedures for the early assessment of ASD that involve both machine learning (ML) techniques and biomarkers such as eye movements (EM) towards social stimuli. This systematic review provides a comprehensive discussion of 11 papers on the early assessment of ASD based on ML techniques and children's social visual attention (SVA). The evidence suggests that ML is a relevant technique for the early assessment of ASD and might support a valid biomarker-based procedure for objective diagnosis. Limitations and future directions are discussed.
Sustained attention, attentional selectivity, and attentional capacity across the lifespan
Changes in sustained attention, attentional selectivity, and attentional capacity were examined in a sample of 113 participants between the ages of 12 and 75. To measure sustained attention, we employed the sustained-attention-to-response task (Robertson, Manly, Andrade, Baddeley, & Yiend, Neuropsychologia 35:747–58, 1997), a short continuous-performance test designed to capture fluctuations in sustained attention. To measure attentional selectivity and capacity, we employed a paradigm based on the theory of visual attention (Bundesen, Psychological Review 97:523–547, 1990), which enabled the estimation of parameters related to attentional selection, perceptual threshold, visual short-term memory capacity, and processing capacity. We found evidence of age-related decline in each of the measured variables, but the declines varied markedly in terms of magnitude and lifespan trajectory. Variables relating to attentional capacity showed declines of very large effect sizes, while variables relating to attentional selectivity and sustained attention showed declines of medium to large effect sizes, suggesting that attentional control is relatively preserved in older adults. The variables relating to sustained attention followed a U-shaped, curvilinear trend, and the variables relating to attentional selectivity and capacity showed linear decline from early adulthood, providing further support for the differentiation of attentional functions.
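The attentional parameters listed above (selection, perceptual threshold, visual short-term memory capacity, processing capacity) are estimated by fitting Bundesen's (1990) theory of visual attention to report accuracy; for orientation, TVA's core rate equation in its standard form is reproduced below (standard notation, not taken from this abstract).

    % Core rate equation of Bundesen's (1990) TVA: the rate v(x, i) at which
    % element x in the visual field S is encoded as a member of category i.
    \[
      v(x, i) = \eta(x, i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
      \qquad
      w_x = \sum_{j \in R} \eta(x, j)\,\pi_j
    \]
    % \eta(x, i): sensory evidence that element x belongs to category i
    % \beta_i:    decision bias for reporting category i
    % \pi_j:      pertinence (task relevance) of category j;  R: the category set
    % Processing capacity is C = \sum_x \sum_i v(x, i); VSTM capacity K limits
    % how many elements can be encoded in parallel.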
Look at what I can do: Object affordances guide visual attention while speakers describe potential actions
As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps—which capture informativeness and grasping object affordances in scenes, respectively—we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of 5 eyetracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.
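As a rough sketch of the kind of mixed-effects analysis described above, relating fixation measures to meaning, grasp, and interact map values with participant as a grouping factor, one might fit something like the following; the data, column names, and model structure are hypothetical, not the study's actual analysis.

    # Illustrative sketch only: a linear mixed-effects model relating fixation
    # density to meaning, grasp, and interact map values, with a random intercept
    # per participant. All data and column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_participants, n_patches = 20, 50
    n_rows = n_participants * n_patches
    data = pd.DataFrame({
        "participant": np.repeat(np.arange(n_participants), n_patches),
        "meaning": rng.random(n_rows),
        "grasp": rng.random(n_rows),
        "interact": rng.random(n_rows),
    })
    # Fake outcome: fixation density loosely driven by the meaning map plus noise.
    data["fixation_density"] = (
        0.6 * data["meaning"] + 0.1 * data["grasp"] + rng.normal(0, 0.1, n_rows)
    )

    model = smf.mixedlm(
        "fixation_density ~ meaning + grasp + interact",  # fixed effects: the three maps
        data,
        groups=data["participant"],                       # random intercept per participant
    )
    print(model.fit().summary())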
Top-down suppression of negative features applies flexibly contingent on visual search goals
Visually searching for a frequently changing target is assumed to be guided by flexible working memory representations of specific features necessary to discriminate targets from distractors. Here, we tested if these representations allow selective suppression or always facilitate perception based on search goals. Participants searched for a target (i.e., a horizontal bar) defined by one of two different negative features (e.g., not red vs. not blue; Experiment 1) or a positive (e.g., blue) versus a negative feature (Experiments 2 and 3). A prompt informed participants about the target identity, and search tasks alternated or repeated randomly. We used different peripheral singleton cues presented at the same (valid condition) or a different (invalid condition) position as the target to examine if negative features were suppressed depending on current instructions. In all experiments, cues with negative features elicited slower search times in valid than invalid trials, indicating suppression. Additionally, suppression of negative color cues tended to be selective when participants searched for the target by different negative features but generalized to negative and non-matching cue colors when switching between positive and negative search criteria was required. Nevertheless, when the same color – red – was used in positive and negative search tasks, red cues captured attention or were suppressed depending on whether red was positive or negative (Experiment 3). Our results suggest that working memory representations flexibly trigger suppression or attentional capture contingent on a task-relevant feature's functional meaning during visual search, but top-down suppression operates at different levels of specificity depending on current task demands.
Intervention targeting different visual attention span components in Chinese children with developmental dyslexia: a study based on Bundesen’s theory of visual attention
Within the framework of the theory of visual attention (TVA), the visual attention span (VAS) deficit in individuals with developmental dyslexia has been ascribed to problems in bottom-up (BotU) and top-down (TopD) attentional processes. The former involves two VAS subcomponents, visual short-term memory storage and perceptual processing speed; the latter consists of the spatial bias of attentional weights and inhibitory control. How do the BotU and TopD components influence reading, and do the two types of attentional processes play different roles? This study addressed these questions using two types of training tasks, corresponding separately to the BotU and TopD attentional components. Three groups of Chinese children with dyslexia were recruited, with 15 children each in the BotU training, TopD training, and non-trained active control groups. Participants completed reading measures and a CombiTVA task, used to estimate the VAS subcomponents, before and after the training procedure. Results showed that BotU training improved both the within-category and between-category VAS subcomponents as well as sentence reading performance, whereas TopD training enhanced character reading fluency by improving spatial attention capacity. Moreover, the benefits to attentional capacities and reading skills in the two training groups were generally maintained three months after the intervention. The present findings reveal diverse patterns in the influence of VAS on reading within the TVA framework, enriching our understanding of the VAS-reading relation.
Image caption generation using Visual Attention Prediction and Contextual Spatial Relation Extraction
Automatic caption generation with attention mechanisms aims to generate more descriptive captions that capture coarse-to-fine semantic content in an image. In this work, we use an encoder-decoder framework employing a Wavelet transform based Convolutional Neural Network (WCNN) with two-level discrete wavelet decomposition to extract visual feature maps highlighting the spatial, spectral, and semantic details of the image. The Visual Attention Prediction Network (VAPN) computes both channel and spatial attention to obtain visually attentive features. In addition, local features are taken into account by considering the contextual spatial relationships between the different objects. The probability of the appropriate word prediction is obtained by combining the aforementioned architecture with a Long Short-Term Memory (LSTM) decoder network. Experiments are conducted on three benchmark datasets (Flickr8K, Flickr30K, and MSCOCO), and the evaluation results demonstrate the improved performance of the proposed model, with a CIDEr score of 124.2.
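The WCNN and VAPN components are specific to this paper; as a generic, simplified sketch of the pattern they build on (spatial attention over encoder feature maps feeding an LSTM decoder step), the PyTorch code below uses assumed module names and sizes and is not the authors' architecture.

    # Simplified sketch of one attention-based caption-decoding step (generic
    # pattern only, not the paper's WCNN/VAPN model). Names and sizes are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentiveDecoderStep(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=512, vocab_size=10000):
            super().__init__()
            self.attn = nn.Linear(feat_dim + hidden_dim, 1)   # scores each spatial location
            self.lstm = nn.LSTMCell(feat_dim + hidden_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, feats, word_emb, state):
            # feats: (batch, locations, feat_dim) encoder feature maps, flattened spatially
            # word_emb: (batch, hidden_dim) embedding of the previously generated word
            h, c = state
            h_exp = h.unsqueeze(1).expand(-1, feats.size(1), -1)
            scores = self.attn(torch.cat([feats, h_exp], dim=-1)).squeeze(-1)
            alpha = F.softmax(scores, dim=-1)                   # spatial attention weights
            context = (alpha.unsqueeze(-1) * feats).sum(dim=1)  # attended visual context
            h, c = self.lstm(torch.cat([context, word_emb], dim=-1), (h, c))
            return self.out(h), (h, c), alpha                   # next-word logits

    # Example call with random tensors (shapes are illustrative):
    feats = torch.randn(2, 49, 512)                 # e.g. a 7x7 grid of encoder features
    word = torch.randn(2, 512)
    state = (torch.zeros(2, 512), torch.zeros(2, 512))
    logits, state, alpha = AttentiveDecoderStep()(feats, word, state)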
A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection
Image captioning has gained increasing attention in recent years. Visual characteristics found in input images play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant image regions at each step of caption generation. However, providing image captioning models with the capability of selecting the most relevant visual features from the input image and attending to them can significantly improve the utilization of these features. Consequently, this leads to enhanced captioning network performance. In light of this, we present an image captioning framework that efficiently exploits the extracted representations of the image. Our framework comprises three key components: the Visual Feature Detector module (VFD), the Visual Feature Visual Attention module (VFVA), and the language model. The VFD module is responsible for detecting a subset of the most pertinent features from the local visual features, creating an updated visual features matrix. Subsequently, the VFVA directs its attention to the visual features matrix generated by the VFD, resulting in an updated context vector employed by the language model to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby contributing to enhancing the image captioning model’s performance. Using the MS-COCO dataset, our experiments show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code can be found here: (accessed on 30 July 2024).
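One possible reading of the VFD + VFVA idea, sketched generically below: score local features for relevance, keep the top-k, and attend over the kept subset conditioned on the language-model state. This is an illustrative assumption, not the authors' implementation, and all names and sizes are hypothetical.

    # Generic sketch of "select the most relevant visual features, then attend over
    # them" (one reading of the VFD + VFVA idea; not the authors' implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelectThenAttend(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=512, k=20):
            super().__init__()
            self.k = k
            self.relevance = nn.Linear(feat_dim, 1)             # scores each local feature
            self.attn = nn.Linear(feat_dim + hidden_dim, 1)     # attention over kept features

        def forward(self, feats, h):
            # feats: (batch, num_features, feat_dim); h: (batch, hidden_dim) decoder state
            rel = self.relevance(feats).squeeze(-1)             # (batch, num_features)
            top_idx = rel.topk(self.k, dim=-1).indices
            kept = torch.gather(
                feats, 1, top_idx.unsqueeze(-1).expand(-1, -1, feats.size(-1))
            )                                                   # (batch, k, feat_dim)
            h_exp = h.unsqueeze(1).expand(-1, self.k, -1)
            scores = self.attn(torch.cat([kept, h_exp], dim=-1)).squeeze(-1)
            alpha = F.softmax(scores, dim=-1)
            context = (alpha.unsqueeze(-1) * kept).sum(dim=1)   # (batch, feat_dim)
            return context                                      # context vector for the language model

    # Example call with random tensors (shapes are illustrative):
    feats = torch.randn(2, 100, 512)                # e.g. region features from a detector
    context = SelectThenAttend()(feats, torch.zeros(2, 512))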