Catalogue Search | MBRL
7,497 result(s) for "Language prediction"
I know how you’ll say it: evidence of speaker-specific speech prediction
by Sala, Marco; Casalino, Laura; Vespignani, Francesco
in Adult; Behavioral Science and Psychology; Brief Report
2024
Most models of language comprehension assume that the linguistic system can pre-activate phonological information. However, the evidence for phonological prediction is mixed and controversial. In this study, we implement a paradigm that capitalizes on the fact that foreign speakers usually make phonological errors. We investigate whether speaker identity (native vs. foreign) is used to make specific phonological predictions. Fifty-two participants read sentence frames followed by a final spoken word uttered by either a native or a foreign speaker. They performed a lexical decision on this final word, which could be either semantically predictable or not. Speaker identity (native vs. foreign) was or was not cued by the face of the speaker. We observed that the face cue speeds up the lexical decision when the word is predictable, but not when it is unpredictable. This result shows that speech prediction takes the phonological variability between speakers into account, suggesting that the phonological representation of a predictable word can be pre-activated in a detailed and speaker-specific way.
Journal Article
Different effects of verbal and visual working memory loads on language prediction
2025
Mounting evidence suggests that working memory (WM) plays a crucial role in language prediction, but how different types of WM load influence it remains unclear. This study investigated whether verbal and visual WM loads differentially impact language prediction during speech comprehension. Using a dual-task paradigm combined with eye-tracking in a visual-world setting, we asked 48 participants to complete a sentence comprehension task under concurrent WM load. Participants were divided into two groups: one performed a visual dot-pattern memory task and the other a verbal word memory task, with memory load applied in half of the trials. Results revealed anticipatory gaze towards target objects, indicating prediction of upcoming linguistic information. Notably, early fixations during the tonal cue window indicated tonal prediction in spoken sentence processing. Furthermore, WM load significantly disrupted participants' language prediction effects, highlighting the involvement of WM resources in this process. Importantly, the verbal memory task disrupted language prediction more severely than the visual memory task, suggesting differential roles of WM subtypes in linguistic prediction. This offers novel insights into how verbal WM and visual-spatial WM differentially influence predictive language processing.
Journal Article
Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies
2025
Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI–human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations—where LLMs achieved approximately 25% accuracy compared to just 1% for humans—suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system.
Journal Article
Developing a hyperparameter optimization method for classification of code snippets and questions of Stack Overflow: HyperSCC
2023
Although various machine learning and text mining techniques exist for identifying the programming language of complete code files, multi-label prediction for code snippets had not been considered by the research community. This work devises a tuner for multi-label programming language prediction of Stack Overflow posts. To that end, a Hyper Source Code Classifier (HyperSCC) is devised along with rule-based automatic labeling, taking the bottlenecks of multi-label classification into account. The proposed method is evaluated on seven multi-label predictors to conduct an extensive analysis, and is further compared with three competitive alternatives on one-label programming language prediction. HyperSCC outperformed the other methods in terms of F1 score. Preprocessing yields a large (50%) reduction in training time when ensemble multi-label predictors are employed. In one-label programming language prediction, the Gradient Boosting Machine (gbm) achieves the highest accuracy (0.99) in predicting R posts, which contain many distinctive label-determining words. The findings support the hypothesis that multi-label predictors can be strengthened with sophisticated feature selection and labeling approaches.
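The rule-based automatic labeling step described in this abstract can be sketched in a few lines: match known language keywords against a snippet and emit every language that fires, which is what makes the task multi-label. The keyword lists below are illustrative assumptions, not the rules used by HyperSCC.

```python
# Illustrative keyword rules; a real rule set would be far richer.
KEYWORD_RULES = {
    "python": ["def ", "import ", "self.", "print("],
    "java": ["public class", "System.out", "void main"],
    "r": ["<-", "library(", "data.frame"],
}

def label_snippet(snippet):
    """Return every language whose keyword rules match the snippet.

    A snippet can receive several labels at once, which is what makes
    the downstream classification task multi-label.
    """
    return {
        lang
        for lang, keywords in KEYWORD_RULES.items()
        if any(kw in snippet for kw in keywords)
    }

print(label_snippet("df <- data.frame(x = 1)"))  # {'r'}
```

Snippets matching no rule come back with an empty label set, which a real pipeline would route to a statistical fallback classifier.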
Journal Article
Sign2Pose: A Pose-Based Approach for Gloss Prediction Using a Transformer Model
2023
Word-level sign language recognition (WSLR) is the backbone of continuous sign language recognition (CSLR), which infers glosses from sign videos. Finding the relevant gloss in a sign sequence and detecting explicit gloss boundaries in sign videos remain persistent challenges. In this paper, we propose a systematic approach to gloss prediction in WSLR using the Sign2Pose gloss prediction transformer model. The primary goal of this work is to enhance WSLR's gloss prediction accuracy with reduced time and computational overhead. The proposed approach uses hand-crafted features rather than automated feature extraction, which is computationally expensive and less accurate. A modified key frame extraction technique is proposed that uses histogram difference and Euclidean distance metrics to select informative frames and drop redundant ones. To enhance the model's generalization ability, pose vector augmentation using perspective transformation along with joint angle rotation is performed. Further, for normalization, we employed YOLOv3 (You Only Look Once) to detect the signing space and track the signers' hand gestures across frames. In experiments on the WLASL datasets, the proposed model achieved top-1 recognition accuracy of 80.9% on WLASL100 and 64.21% on WLASL300, surpassing state-of-the-art approaches. The integration of key frame extraction, augmentation, and pose estimation improved the gloss prediction model by increasing its precision in locating minor variations in body posture. We observed that introducing YOLOv3 improved gloss prediction accuracy and helped prevent model overfitting. Overall, the proposed model showed a 17% performance improvement on the WLASL100 dataset.
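The key frame selection idea in this abstract (compare frame histograms and drop near-duplicates by Euclidean distance) can be sketched as follows. The flat-pixel frame format, bin count, and threshold are illustrative assumptions, not the paper's actual parameters.

```python
import math

def histogram(frame, bins=8, max_val=256):
    """Coarse intensity histogram of a frame given as a flat pixel list."""
    hist = [0] * bins
    for px in frame:
        hist[px * bins // max_val] += 1
    return hist

def select_key_frames(frames, threshold=2.0):
    """Keep a frame only when its histogram moves far enough (Euclidean
    distance) from the last kept frame; near-duplicate frames are dropped."""
    kept = [frames[0]]
    last_hist = histogram(frames[0])
    for frame in frames[1:]:
        h = histogram(frame)
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(last_hist, h)))
        if dist > threshold:
            kept.append(frame)
            last_hist = h
    return kept

# Two identical dark frames collapse to one; the bright frame survives.
frames = [[0] * 10, [0] * 10, [255] * 10]
print(len(select_key_frames(frames)))  # 2
```

Comparing against the last kept frame, rather than the immediately preceding one, prevents a slow drift across many redundant frames from escaping the threshold.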
Journal Article
Towards a robust out-of-the-box neural network model for genomic data
by Solis-Lemus, Claudia; Zhang, Zhaoyi; Cheng, Songyang
in Accuracy; Algorithms; Artificial neural networks
2022
Background
The accurate prediction of biological features from genomic data is paramount for precision medicine and sustainable agriculture. For decades, neural network models have been widely popular in fields like computer vision, astrophysics and targeted marketing given their prediction accuracy and their robust performance under big data settings. Yet neural network models have not made a successful transition into the medical and biological world due to the ubiquitous characteristics of biological data such as modest sample sizes, sparsity, and extreme heterogeneity.
Results
Here, we investigate the robustness, generalization potential and prediction accuracy of widely used convolutional neural network and natural language processing models on a variety of heterogeneous genomic datasets. Overall, recurrent neural network models outperform convolutional neural network models in terms of prediction accuracy, overfitting and transferability across the datasets under study.
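Both model families compared above consume genomic sequences as numeric tensors rather than raw strings. A common first encoding step, assumed here purely for illustration (the paper's actual pipelines are not shown), is one-hot encoding each base:

```python
BASES = "ACGT"

def one_hot(seq):
    """Map a DNA string to per-base indicator vectors.

    Unknown bases such as N map to all-zero vectors, a common
    convention for ambiguous calls in genomic data.
    """
    return [[1 if base == b else 0 for b in BASES] for base in seq.upper()]

print(one_hot("ac"))  # [[1, 0, 0, 0], [0, 1, 0, 0]]
```

A convolutional model would slide filters over this length-by-4 matrix, while a recurrent model would consume the indicator vectors one position at a time.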
Conclusions
While the perspective of a robust out-of-the-box neural network model is out of reach, we identify certain model characteristics that translate well across datasets and could serve as a baseline model for translational researchers.
Journal Article
Specification-driven predictive business process monitoring
2020
Predictive analysis in business process monitoring aims at forecasting future information about a running business process. The prediction is typically based on a model extracted from historical process execution logs (event logs). In practice, different business domains may require different kinds of predictions, so it is important to have a means of properly specifying the desired prediction tasks and a mechanism for handling them. Although there have been many studies in this area, they mostly focus on a single, specific prediction task. This work introduces a language for specifying prediction tasks that can express many different kinds of them, together with a mechanism for automatically creating the corresponding prediction model from a given specification. Unlike previous studies, which focus on one particular task, our approach handles whichever prediction tasks the specification describes. We also provide an implementation of the approach, which we use to conduct experiments on real-life event logs.
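The specification-driven idea described in this abstract can be sketched as a tiny compiler from a declarative task specification to a predictor over historical traces. The spec keys, the event-log format, and the majority-vote prediction below are illustrative assumptions, not the paper's actual specification language or model.

```python
def build_predictor(spec):
    """Compile a task specification into a predictor over historical traces.

    spec["target"] names the trace attribute to predict; spec["given"]
    is the prefix length used to match the running trace against history.
    """
    def predict(history, running_prefix):
        prefix = running_prefix[: spec["given"]]
        matches = [t[spec["target"]] for t in history
                   if t["events"][: spec["given"]] == prefix]
        if not matches:
            return None
        # Majority vote over the matching historical traces.
        return max(set(matches), key=matches.count)
    return predict

history = [
    {"events": ["register", "check", "approve"], "outcome": "accepted"},
    {"events": ["register", "check", "approve"], "outcome": "accepted"},
    {"events": ["register", "check", "reject"], "outcome": "rejected"},
    {"events": ["register", "audit", "reject"], "outcome": "rejected"},
]
predict_outcome = build_predictor({"target": "outcome", "given": 2})
print(predict_outcome(history, ["register", "check"]))  # accepted
```

Changing only the spec (say, targeting a remaining-time attribute instead of the outcome) yields a different prediction task from the same builder, which is the point of a specification-driven design.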
Journal Article
Investigation of the achievement scores of the people learning Turkish as a foreign language according to linguistic distance
2018
In this study, predictor variables (age, gender, region and language family) affecting the scores of Turkish language learners are examined through multiple regression. The study group consisted of 280 international students registered at the Turkish Language Teaching Centers of Gazi and Hacettepe Universities. The research data were obtained from Turkish course completion exam papers and personal information forms. According to the results, the average scores of students from the Afro-Asiatic, Indo-European, Bantu, Sino-Tibetan and Austronesian language families were lower than those of students from the Altaic language family. Specifically, the writing scores of students from the Afro-Asiatic and Austronesian families; the speaking scores of students from the Afro-Asiatic and Indo-European families; the reading comprehension scores of students from the Afro-Asiatic, Indo-European, Bantu and Sino-Tibetan families; and the grammar scores of students from the Sino-Tibetan and Austronesian families were lower than those of the Altaic family. While age had a positive effect on speaking scores, region and gender were not significant predictors of scores. Findings are discussed in light of the literature, and suggestions for further research are provided.
Journal Article
Bioinformatics Tools for Gene Function Prediction
by Cui, Yan
in Bioinformatics Tools for Gene Function Prediction; CHEMISTRY; Function Prediction Using Integrated Data
2011
This chapter contains sections titled:
Gene Ontology: Description of Gene Function with Controlled and Structured Vocabulary
Sequence‐Based Function Prediction
Structure‐Based Function Prediction
Function Prediction Using Integrated Data
Questions and Answers
References
Book Chapter