Catalogue Search | MBRL
Explore the vast range of titles available.
3,869 result(s) for "Semantic complexity"
Linguistic complexity in high-school students’ EFL writing
by JAHIĆ JAŠIĆ, Alma; Delić, Amer
in Foreign languages learning; grammatical metaphor; lexical density
2017
This study examined the syntactic and semantic complexity of L2 English writing in a Bosnian-Herzegovinian high school. Forty texts written by individual students, ten per grade, were quantitatively analyzed by applying methods established in previous research. The syntactic portion of the analysis, based on the t-unit analysis introduced by Hunt (1965), was done using the Web-based L2 Syntactic Complexity Analyzer (Lu, 2010), while the semantic portion, largely based on the theory laid out in systemic functional linguistics (Halliday & Matthiessen, 2014), was done using the Web-based Lexical Complexity Analyzer (Ai & Lu, 2010) as well as manual identification of grammatical metaphors. The statistical analysis included tests of variance, correlation, and effect size. It was found that the syntactic and semantic complexity of writing increases in later grades; however, this increase is not consistent across all grades.
Journal Article
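The lexical-density measure named in this record's subject terms is conventionally computed as the ratio of content words to total words. A minimal sketch under that assumption, using a hypothetical pre-tagged token list (the tag labels are illustrative, not taken from the analyzer the study used):

```python
# Lexical density: proportion of content words (nouns, verbs,
# adjectives, adverbs) among all tokens in a text.
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}

def lexical_density(tagged_tokens):
    """tagged_tokens: list of (word, pos_tag) pairs."""
    if not tagged_tokens:
        return 0.0
    content = sum(1 for _, tag in tagged_tokens if tag in CONTENT_TAGS)
    return content / len(tagged_tokens)

sample = [("students", "NOUN"), ("often", "ADV"), ("write", "VERB"),
          ("in", "ADP"), ("a", "DET"), ("complex", "ADJ"), ("style", "NOUN")]
print(round(lexical_density(sample), 3))  # 5 content words / 7 tokens
```

A real pipeline would obtain the tags from a POS tagger rather than hand-labeling them; the ratio itself is the same.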
A survey of semi- and weakly supervised semantic segmentation of images
2020
Image semantic segmentation is one of the most important tasks in the field of computer vision, and it has made great progress in many applications. Many fully supervised deep learning models are designed to implement complex semantic segmentation tasks, and the experimental results are remarkable. However, the acquisition of pixel-level labels in fully supervised learning is time-consuming and laborious, so semi-supervised and weakly supervised learning are gradually replacing it, achieving good results at a lower cost. Building on commonly used models such as convolutional neural networks, fully convolutional networks, and generative adversarial networks, this paper focuses on the core methods and reviews semi- and weakly supervised semantic segmentation models from recent years. In the following chapters, existing evaluations and data sets are summarized in detail, and the experimental results are analyzed according to the data set. The last part of the paper is an objective summary; it also points out possible directions of research and offers suggestions for future work.
Journal Article
Examining the role of stimulus complexity in item and associative memory
2025
Episodic memory comprises memory for individual information units (item memory) and for the connections among them (associative memory). In two experiments using an object pair learning task, we examined the effect of visual stimulus complexity on memory encoding and retrieval mechanisms and on item and associative memory performance. Subjects encoded pairs of black monochrome object images (low complexity, LC condition) or color photographs of objects (high complexity, HC condition) via interactive imagery, and subsequently item and associative recognition were tested. In Experiment 1, event-related potentials (ERPs) revealed an enhanced frontal N2 during encoding and an enhanced late posterior negativity (LPN) during item recognition in the HC condition, suggesting that memory traces containing visually more complex objects elicited a stronger effort in reconstructing the past episode. Item memory was consistently superior in the HC compared to the LC condition. Associative memory was either statistically unaffected by complexity (Experiment 1) or improved (Experiment 2) in the HC condition, speaking against a tradeoff between resources allocated to item versus associative memory, and hence contradicting results of some prior studies. In Experiment 2, in both young and older adults, both item and associative memory benefitted from stimulus complexity, such that the magnitude of the age-related associative deficit was not influenced by stimulus complexity. Together, these results suggest that if familiar objects are presented in a form that exhibits a higher visual complexity, which may support semantic processing, complexity can benefit both item and associative memory. Stimulus properties that enhance item memory can scaffold associative memory in this situation.
Journal Article
A Multimodal Fusion Framework for Early Non‐Invasive Screening of Cognitive Impairment Using Language Digital Biomarkers
2025
Background: Alzheimer's disease (AD) is a prevalent neurodegenerative condition, and its early diagnosis is critical for timely intervention and treatment. Current diagnostic methods, such as biomarker detection and neuroimaging, are costly and reliant on specialized resources, limiting their accessibility. Non-invasive cognitive screening, while promising, is often influenced by subjective and environmental factors, reducing its accuracy in practical use. Language biomarker analysis has emerged as a stable and convenient alternative. Advancements in machine learning, particularly the Bidirectional Encoder Representations from Transformers (BERT) model, provide robust support for speech-based AD screening and hold promise for breakthroughs in early diagnosis. Method: This study adopted a systematic method to distinguish between people with cognitive impairment and healthy controls (HC). With the approval of the hospital ethics committee, 300 subjects were selected from the C-PAS cohort and the data sets were divided accordingly. Semantic features were obtained through the Shanghai cognitive screening (SCS) test, and audio features were extracted using BERT and OpenSMILE. During model training, an SCS score of 84.75 was used as the classification boundary. For text, a CNN model was constructed; for audio, five models, including RF and XGBoost, were trained. Hard voting was used for result fusion, and indicators such as specificity were used for evaluation to ensure the reliability and validity of the study. Result: The proposed framework achieved an accuracy of 91.80%, surpassing the 77.17% accuracy of the MoCA-Basic test for identifying people with cognitive impairment. It also attained an F1-score of 91.85%. Feature importance analysis revealed key biomarkers linked to cognitive impairment, including increased pause time and spectral changes in acoustic features, along with reduced semantic complexity in translated text. Conclusion: The proposed multimodal framework offers a highly accurate, cost-effective, and non-invasive method for the early detection of cognitive impairment. The identified biomarkers provide valuable insights into early functional deficits associated with cognitive decline, advancing our understanding of the disease and enabling the development of more effective screening tools.
Journal Article
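The hard-voting fusion step described in this abstract combines the discrete class predictions of several classifiers by majority. A minimal sketch under that assumption (the vote values below are illustrative, not taken from the study):

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote over class labels from several classifiers.
    predictions: list of labels, one per model (e.g. 0 = HC, 1 = impaired)."""
    counts = Counter(predictions)
    # most_common is ordered by count; ties resolve by first insertion.
    return counts.most_common(1)[0][0]

# Six models (e.g. a text CNN plus five audio models), each emitting a label:
votes = [1, 1, 0, 1, 0, 1]
print(hard_vote(votes))  # prints the majority label, 1
```

Unlike soft voting, this ignores each model's confidence, so it works even when the underlying classifiers do not expose calibrated probabilities.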
Transferring, Translating, and Transforming: An Integrative Framework for Managing Knowledge Across Boundaries
2004
The paper examines managing knowledge across boundaries in settings where innovation is desired. Innovation is a useful context because it allows us to explore the negative consequences of the path-dependent nature of knowledge. A framework is developed that describes three progressively complex boundaries (syntactic, semantic, and pragmatic) and three progressively complex processes (transfer, translation, and transformation). The framework is used to specify the practical and political mismatches that occur when innovation is desired and how this relates to the common knowledge that actors use to share and assess each other's domain-specific knowledge. The development and use of a collaborative engineering tool in the early stages of a vehicle's development is presented to illustrate the conceptual and prescriptive value of the framework. The implications of this framework for key topics in the organization theory and strategy literatures are then discussed.
Journal Article
FULL-Names: a contribution to the Predicativist approach of proper names
2023
While the predicate view of proper names is popular among linguists, it is not unanimously accepted. This paper contributes to the discussion by considering some linguistic data exemplified by phrases such as “Operation Valkyrie” and “Operation Desert Storm”. These examples offer clues about the structure of the phrases that help us understand the procedure involved in naming individuals. One is the gap between the first constituent (“operation”) and the second constituent (“Valkyrie”), which is filled by an abstract functional structure, as will be argued in this paper. These clues also lead us to two consequences: a) the difference between a definite description and a proper name is not so clear; b) the naming procedure is enabled by a complex syntactic-semantic mechanism within this gap. Our analysis shows that the predicate view provides accurate results for the data under analysis.
Journal Article
Assessing Financial Reporting Complexity and Economic Value: A Computational Intelligence Approach
2026
As computational intelligence and information technology advance, financial reports have grown more complex in structure, semantics, and logic. This research investigates the relationship between such complexity and the statistical value of economic information, proposing a three-dimensional assessment framework that integrates language models, information theory, and model interpretability. Experiments on heterogeneous texts and metadata showed that moderate complexity enriches informational content, whereas excessive complexity undermines statistical utility. An attention-based fusion model successfully captured nonlinear interactions between semantic features and market responses. Theoretically, the study extends financial text analysis into the realm of intelligent computation; practically, it offers regulators and investors a functional complexity threshold and enables real-time monitoring systems.
Journal Article
I see what you mean: Semantic but not lexical factors modulate image processing in bilingual adults
by Mendelson, Olivia; Titone, Debra; Furlani, Noah
in Adult; Adults; Behavioral Science and Psychology
2022
Bilinguals frequently juggle competing representations from their two languages when they interact with their environment (i.e., nonselective activation). As a result, both first (L1) and second language (L2) communication may be impeded when words share orthographic form but not meaning (i.e., interlingual homographs; e.g., CRANE, a machine in English, a skull in French). Similarly, bilinguals’ reduced exposure to each known language makes bilingual lexical processing more vulnerable to larger frequency effects. While much is known about processes within the language system, less is known about how the bilingual language system interacts with the visual system, specifically in the context of image processing. We investigated this by testing whether commonly observed semantic (homograph interference) and lexical (frequency) effects extend to a visual word–image matching task. We tested 48 bilinguals, who were asked to determine whether an image corresponded to a written word that was presented immediately beforehand. By modulating the complexity of visual referents and the semantic (Analysis 1) or lexical (Analysis 2) complexity of word cues, we simultaneously burdened the visual and language systems. The results showed that both semantic and lexical factors modulated response accuracy and correct reaction time on the word–image matching task. Crucially, we observed an interaction between the image factor (visual complexity) with the semantic (homograph status) but not the lexical factor (word frequency). We conclude that it is possible for the language and image processing systems to interact, although the extent to which this occurs depends on the degree of linguistic processing involved.
Journal Article
Not all arguments are processed equally
by Huang, Chu-Ren; Lenci, Alessandro; Chersoni, Emmanuele
in Argumentation; Artificial Intelligence; Cognitive science
2021
This work addresses some questions about language processing: what does it mean that natural language sentences are semantically complex? What semantic features can determine different degrees of difficulty for human comprehenders? Our goal is to introduce a framework for argument semantic complexity, in which the processing difficulty depends on the typicality of the arguments in the sentence, that is, their degree of compatibility with the selectional constraints of the predicate. We postulate that complexity depends on the difficulty of building a semantic representation of the event or the situation conveyed by a sentence. This representation can be either retrieved directly from the semantic memory or built dynamically by solving the constraints included in the stored representations. To support this postulation, we built a Distributional Semantic Model to compute a compositional cost function for the sentence unification process. Our evaluation on psycholinguistic datasets reveals that the model is able to account for semantic phenomena such as the context-sensitive update of argument expectations and the processing of logical metonymies.
Journal Article
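The argument-typicality idea in this abstract (an argument's degree of compatibility with a predicate's selectional constraints) can be illustrated with a distributional sketch: score an argument by the cosine similarity between its vector and a prototype built from the predicate's typical fillers. The 3-dimensional vectors below are toy values, not drawn from the authors' model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def prototype(vectors):
    """Centroid of the vectors of a predicate's typical argument fillers."""
    n = len(vectors)
    return [sum(vals) / n for vals in zip(*vectors)]

# Toy embeddings for typical direct objects of a predicate like "eat":
typical_fillers = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.7, 0.3, 0.0]]
proto = prototype(typical_fillers)
print(cosine(proto, [0.85, 0.15, 0.05]))  # food-like argument: high typicality
print(cosine(proto, [0.05, 0.10, 0.95]))  # atypical argument: low typicality
```

In the paper's terms, a low typicality score would correspond to a higher cost for building the semantic representation of the event.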
Effects of syntactic structure on the processing of lexical repetition during sentence reading
by Lowder, Matthew W.; Cardoso, Antonio; Pittman, Michael
in Acknowledgment; Behavioral Science and Psychology; Clauses
2023
Previous research has demonstrated that the ease or difficulty of processing complex semantic expressions depends on sentence structure: Processing difficulty emerges when the constituents that create the complex meaning appear in the same clause, whereas difficulty is reduced when the constituents appear in separate clauses. The goal of the current eye-tracking-while-reading experiments was to determine how changes to sentence structure affect the processing of lexical repetition, as this manipulation enabled us to isolate processes involved in word recognition (repetition priming) from those involved in sentence interpretation (felicity of the repetition). When repetition of the target word was felicitous (Experiment 1), we observed robust effects of repetition priming with some evidence that these effects were weaker when repetition occurred within a clause versus across a clause boundary. In contrast, when repetition of the target word was infelicitous (Experiment 2), readers experienced an immediate repetition cost when repetition occurred within a clause, but this cost was eliminated entirely when repetition occurred across clause boundaries. The results have implications for word recognition during reading, processes of semantic integration, and the role of sentence structure in guiding these linguistic representations.
Journal Article