Catalogue Search | MBRL
66 result(s) for "Visual world paradigm"
Moving visual world experiments online? A web-based replication of Dijkgraaf, Hartsuiker, and Duyck (2017) using PCIbex and WebGazer.js
by Hartsuiker, Robert J.; Slim, Mieke Sarah
in Behavioral Science and Psychology; Cognitive Psychology; Psychology
2023
The visual world paradigm is one of the most influential paradigms to study real-time language processing. The present study tested whether visual world studies can be moved online, using PCIbex software (Zehr & Schwarz, 2018) and the WebGazer.js algorithm (Papoutsaki et al., 2016) to collect eye-movement data. Experiment 1 was a fixation task in which the participants looked at a fixation cross in multiple positions on the computer screen. Experiment 2 was a web-based replication of a visual world experiment by Dijkgraaf et al. (2017). Firstly, both experiments revealed that the spatial accuracy of the data allowed us to distinguish looks across the four quadrants of the computer screen. This suggests that the spatial resolution of WebGazer.js is fine-grained enough for most visual world experiments (which typically involve a two-by-two quadrant-based set-up of the visual display). Secondly, both experiments revealed a delay of roughly 300 ms in the time course of the eye movements, possibly caused by the internal processing speed of the browser or WebGazer.js. This delay can be problematic for research questions that require a fine-grained temporal resolution and warrants further investigation.
Journal Article
Analysing data from the psycholinguistic visual-world paradigm: Comparison of different analysis methods
by Ito, Aine; Knoeferle, Pia
in Behavioral Science and Psychology; Cognitive Psychology; Psychology
2023
In this paper, we discuss key characteristics and typical experimental designs of the visual-world paradigm and compare different methods of analysing eye-movement data. We discuss the nature of the eye-movement data from a visual-world study and provide data analysis tutorials on ANOVA, t-tests, linear mixed-effects models, growth curve analysis, cluster-based permutation analysis, bootstrapped differences of timeseries, generalised additive modelling, and divergence point analysis to enable psycholinguists to apply each analytical method to their own data. We discuss the advantages and disadvantages of each method and offer recommendations on how to select an appropriate method depending on the research question and the experimental design.
Journal Article
Predicting upcoming information in native-language and non-native-language auditory word recognition
2017
Monolingual listeners continuously predict upcoming information. Here, we tested whether predictive language processing occurs to the same extent when bilinguals listen to their native language vs. a non-native language. Additionally, we tested whether bilinguals use prediction to the same extent as monolinguals. Dutch–English bilinguals and English monolinguals listened to constraining and neutral sentences in Dutch (bilinguals only) and in English, and viewed target and distractor pictures on a display while their eye movements were measured. There was a bias of fixations towards the target object in the constraining condition, relative to the neutral condition, before information from the target word could affect fixations. This prediction effect occurred to the same extent in native processing by bilinguals and monolinguals, but also in non-native processing. This indicates that unbalanced, proficient bilinguals can quickly use semantic information during listening to predict upcoming referents to the same extent in both of their languages.
Journal Article
I’m not sure that curve means what you think it means: Toward a more realistic understanding of the role of eye-movement generation in the Visual World Paradigm
The Visual World Paradigm (VWP) is a powerful experimental paradigm for language research. Listeners respond to speech in a "visual world" containing potential referents of the speech. Fixations to these referents provide insight into the preliminary states of language processing as decisions unfold. The VWP has become the dominant paradigm in psycholinguistics and has been extended to every level of language, development, and disorders. Part of its impact comes from impressive data visualizations which reveal the millisecond-by-millisecond time course of processing, and advances have been made in developing new analyses that precisely characterize this time course. All theoretical and statistical approaches make the tacit assumption that the time course of fixations is closely related to the underlying activation in the system. However, given the serial nature of fixations and their long refractory period, it is unclear how closely the observed dynamics of the fixation curves are actually coupled to the underlying dynamics of activation. I investigated this assumption with a series of simulations. Each simulation starts with a set of true underlying activation functions and generates simulated fixations using a simple stochastic sampling procedure that respects the sequential nature of fixations. I then analyzed the results to determine the conditions under which the observed fixation curves match the underlying functions, the reliability of the observed data, and the implications for Type I error and power. These simulations demonstrate that even under the simplest fixation-based models, observed fixation curves are systematically biased relative to the underlying activation functions, and they are substantially noisier, with important implications for reliability and power. I then present a potential generative model that may ultimately overcome many of these issues.
Journal Article
Different effects of verbal and visual working memory loads on language prediction
2025
Mounting studies suggest that working memory (WM) plays a crucial role in language prediction, but how varying types of WM loads influence language prediction remains unclear. This study investigated whether verbal and visual WM loads differentially impact language prediction during speech comprehension. Using a dual-task paradigm combined with eye-tracking in a visual world setting, we asked 48 participants to complete a sentence comprehension task under concurrent WM load conditions. Participants were divided into two groups, one of which performed a visual dots memory task while the other completed a verbal words memory task, with memory load being applied in half of the trials. Results revealed anticipatory gaze towards target objects, suggesting the prediction of upcoming linguistic information. Notably, early fixations during the tonal cue window indicated tonal prediction in spoken sentence processing. Furthermore, WM load significantly disrupted participants' language prediction effects, highlighting the involvement of working memory resources in this process. Importantly, the verbal memory task imposed a more severe disruption to language prediction than the visual memory task, suggesting differential roles of WM subtypes in linguistic prediction. This offers novel insights into how verbal WM and visual-spatial WM differentially influence predictive language processing.
Journal Article
Verbal working memory capacity modulates semantic and phonological prediction in spoken comprehension
by Qu, Qingqing; Li, Xinjing
in Behavioral Science and Psychology; Brief Report; Chinese languages
2024
Mounting evidence suggests that people may use multiple cues to predict different levels of representation (e.g., semantic, syntactic, and phonological) during language comprehension. One question that has been less investigated is the relationship between general cognitive processing and the efficiency of prediction at various linguistic levels, such as the semantic and phonological levels. To address this research gap, the present study investigated how working memory capacity (WMC) modulates different kinds of prediction behavior (i.e., semantic prediction and phonological prediction) in the visual world. Chinese speakers listened to sentences that contained a highly predictable target word while viewing a visual display of objects. The display contained a target object corresponding to the predictable word, a semantic or a phonological competitor that was semantically or phonologically related to the predictable word, and an unrelated object. We conducted a Chinese version of the reading span task to measure verbal WMC and grouped participants into high- and low-span groups. Both groups showed semantic and phonological prediction of comparable size during language comprehension, with earlier semantic prediction in the high-span group and a similar time course of phonological prediction in both groups. These results suggest that verbal working memory modulates predictive processing in language comprehension.
Journal Article
The validation of online webcam-based eye-tracking: The replication of the cascade effect, the novelty preference, and the visual world paradigm
by Cabooter, Quinn; Ben-Shakhar, Gershon; Verschuere, Bruno
in Adolescent; Adult; Attention - physiology
2024
The many benefits of online research and the recent emergence of open-source eye-tracking libraries have sparked an interest in transferring time-consuming and expensive eye-tracking studies from the lab to the web. In the current study, we validate online webcam-based eye-tracking by conceptually replicating three robust eye-tracking studies (the cascade effect, n = 134; the novelty preference, n = 45; and the visual world paradigm, n = 32) online using the participant's webcam as eye-tracker with the WebGazer.js library. We successfully replicated all three effects, although the effect sizes of all three studies shrank by 20–27%. The visual world paradigm was conducted both online and in the lab, using the same participants and a standard laboratory eye-tracker. The results showed that replication per se could not fully account for the effect size shrinkage, but that the shrinkage was also due to the use of online webcam-based eye-tracking, which is noisier. In conclusion, we argue that eye-tracking studies with relatively large effects that do not require extremely high precision (e.g., studies with four or fewer large regions of interest) can be done online using the participant's webcam. We also make recommendations for how the quality of online webcam-based eye-tracking could be improved.
Journal Article
Cross-modal and cross-language activation in bilinguals reveals lexical competition even when words or signs are unheard or unseen
by Giezen, Marcel; Carreiras, Manuel; Villameriel, Saúl
in Bilingualism; Biological Sciences; Eye movements
2022
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish–Spanish Sign Language) and unimodal bilinguals (Spanish/Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, how hearing a word could coactivate signs, and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes in different modalities. Spoken word activation follows the temporal structure of that word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation), even when the sign is seen. We provide evidence that, instead, this pattern of activation is motivated by how common in the lexicon the sublexical units of the signs are. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
Journal Article
Peekbank: An open, large-scale repository for developmental eye-tracking data of children’s word recognition
by Uner, Sarp; Saleh, Annissa N.; Lewis, Molly
in Auditory Perception; Behavioral Science and Psychology; Cognitive Psychology
2023
The ability to rapidly recognize words and link them to referents is central to children’s early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants’ fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We first present how we created the database, its features and structure, and associated tools for processing and accessing infant eye-tracking datasets. Using these tools, we then work through two illustrative examples to show how researchers can use Peekbank to interrogate theoretical and methodological questions about children’s developing word recognition ability.
Journal Article
Webcams as Windows to the Mind? A Direct Comparison Between In-Lab and Web-Based Eye-Tracking Methods
by Snedeker, Jesse; Yacovone, Anthony; Kandel, Margaret
in eye-tracking; language processing; psycholinguistics
2024
There is a growing interest in the use of webcams to conduct eye-tracking experiments over the internet. We assessed the performance of two webcam-based eye-tracking techniques for behavioral research: manual annotation of webcam videos and the automated WebGazer eye-tracking algorithm. We compared these methods to a traditional infrared eye-tracker and assessed their performance in both lab and web-based settings. In both lab and web experiments, participants completed the same battery of five tasks, selected to trigger effects of various sizes: two visual fixation tasks and three visual world tasks testing real-time (psycholinguistic) processing effects. In the lab experiment, we simultaneously collected infrared eye-tracking, manual eye-tracking, and WebGazer data; in the web experiment, we simultaneously collected manual eye-tracking and WebGazer data. We found that the two webcam-based methods are suited to capture different types of eye-movement patterns. Manual eye-tracking, similar to infrared eye-tracking, detected both large and small effects. WebGazer, however, showed less accuracy in detecting short, subtle effects. There was no notable effect of setting for either method. We discuss the trade-offs researchers face when choosing eye-tracking methods and offer advice for conducting eye-tracking experiments over the internet.
Journal Article