Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
238 result(s) for "Cognitive Contrastive"
An Analysis of the Application of Computer-based Multimodal Discourse Analysis in English Teaching Reform
2020
Metaphor is ubiquitous in daily life, and color is one of the ways human beings cognize the outside world. When the basic category of color is used to express other cognitive domains, a cognitive metaphor of color is formed. Owing to shared cognitive characteristics, English and Chinese speakers show a certain degree of similarity in their understanding of color words; influenced by various external factors, however, each language also exhibits its own distinctive patterns. By comparing the conceptual metaphors of color idioms in English and Chinese, this paper reveals the similarities and differences between Chinese and English color-word metaphors in a near-natural language environment and identifies the characteristics of color-word metaphors. The development of computer technology makes it possible to analyze such specific problems, and this article examines the application of computer-based multimodal discourse analysis in English teaching reform.
Journal Article
Semantic Extension of HEAD in Korean and Thai: A Contrastive Perspective
2025
Modern linguistic traditions have witnessed increasing attention to embodied cognition, culture, and typology. Embodiment is responsible for the universalities of human experience as reflected in language, due to similarities in human cognitive processes such as metaphor and metonymy. Many crosslinguistic studies corroborate the claims of similar cognitive mechanisms operating behind ontological divisions, conceptual networks, construal of events, and semantico-functional change of lexemes, among others. Closer scrutiny, however, reveals an array of differences in their operations, largely attributable to culture-specific idiosyncrasies and/or typological dissimilarities, hence the significance of crosslinguistic and comparative investigations. Based on established cognitive-linguistic theoretical frameworks, this research analyzes the semantic extension of a conceptually prominent body-part term HEAD in Korean (meli and taykali) and in Thai (hud) from a comparative perspective. HEAD is among the most perceptually prominent and functionally essential body parts, and thus it constitutes a convenient and effective reference point for analyzing embodiment. A comparative investigation reveals commonalities and differences in the conceptualization of HEAD and its semantic extension scenarios, which merit the attention of researchers of cognitive linguistics, linguistic typology, and comparative/contrastive linguistics. This study shows that, most notably, Korean favors contour-based conceptualizations (HEAD is ROUND; HEAD is UNIT), whereas Thai favors horizontal-axis-based conceptualizations (HEAD is FRONT; HEAD is BEGINNING). Furthermore, Korean has multiple lexemes with specialized meanings, including one for animal heads, a reflection of the monosemy strategy of lexicalization, in contrast with the polysemy strategy in Thai. The multiplicity of HEAD lexemes in Korean is partly responsible for the emergence of pejorative meanings.
Index Terms: semantic extension, metaphor, Korean, Thai, HEAD
Journal Article
Block-Wise Domain Adaptation for Workload Prediction from fNIRS Data
2025
Functional near-infrared spectroscopy (fNIRS) is a non-intrusive way to measure cortical hemodynamic activity, and a diverse set of methods has been applied to predicting cognitive workload from fNIRS data. To be applicable in real-world settings, models are needed that perform well across different sessions as well as different subjects. However, most existing works assume that training and testing data come from the same subjects and/or do not generalize well to never-before-seen subjects. fNIRS data pose additional challenges: not only high variation in inter-subject data, but also variation in intra-subject data collected across different blocks of sessions. To address these challenges, we propose an effective method, referred to as block-wise domain adaptation (BWise-DA), which explicitly minimizes intra-session variance by treating different blocks from the same subject and session as different domains; the intra-class domain discrepancy is minimized and the inter-class domain discrepancy is maximized accordingly. In addition, we propose an MLPMixer-based model for workload prediction. Experimental results demonstrate that the proposed model provides better performance than three different baseline models on three publicly available workload datasets; two of the datasets are collected from n-back tasks and one from finger-tapping. Moreover, the experimental results show that our proposed contrastive learning method can also be leveraged to improve the performance of the baseline models. We also present a visualization study showing that the models attend to the regions of the brain known to be involved in the respective tasks.
Journal Article
Datasets, tasks, and training methods for large-scale hypergraph learning
2023
Relations among multiple entities are prevalent in many fields, and hypergraphs are widely used to represent such group relations. Hence, machine learning on hypergraphs has received considerable attention, and particular effort has been devoted to neural network architectures for hypergraphs (a.k.a. hypergraph neural networks). However, existing studies mostly focused on small datasets for a few single-entity-level downstream tasks and overlooked scalability issues, although most real-world group relations are large-scale. In this work, we propose new tasks, datasets, and scalable training methods for addressing these limitations. First, we introduce two pair-level hypergraph-learning tasks to formulate a wide range of real-world problems. Then, we build and publicly release two large-scale hypergraph datasets with tens of millions of nodes, rich features, and labels. After that, we propose PCL, a scalable learning method for hypergraph neural networks. To tackle scalability issues, PCL splits a given hypergraph into partitions and trains a neural network via contrastive learning. Our extensive experiments demonstrate that hypergraph neural networks can be trained for large-scale hypergraphs by PCL while outperforming 16 baseline models. Specifically, performance is comparable to, or surprisingly even better than, that achieved by training hypergraph neural networks on the entire hypergraphs without partitioning.
Journal Article
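The partition step described in the PCL abstract above can be illustrated with a small sketch: each hyperedge is restricted to the nodes of one partition, so a large hypergraph can be processed one partition at a time. This is an illustrative reading of the abstract, not the authors' released code; the function and argument names are hypothetical.

```python
def partition_hypergraph(hyperedges, node_part, num_parts):
    """Restrict each hyperedge to the nodes of each partition.

    hyperedges: list of hyperedges, each a list of node ids.
    node_part:  mapping from node id to partition index.
    num_parts:  number of partitions.

    Returns one sub-hypergraph per partition; an edge survives in a
    partition only if at least two of its nodes fall there, since a
    single node carries no group relation.
    """
    parts = [[] for _ in range(num_parts)]
    for edge in hyperedges:
        for p in range(num_parts):
            sub = [v for v in edge if node_part[v] == p]
            if len(sub) >= 2:  # keep only edges that still relate >1 node
                parts[p].append(sub)
    return parts
```

Each partition's sub-hypergraph is then small enough to feed to a hypergraph neural network in isolation, with contrastive learning tying the partitions together.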
Instance-dimension dual contrastive learning of visual representations
by Wang, Liantao; Liu, Qingrui; Wang, Qinxu
in Cognitive tasks; Communications Engineering; Computer Science
2023
Existing contrastive methods usually learn visual representations either by maximizing instance contrast or by minimizing dimension redundancy separately, and thus fail to make full use of the information in the data. In this paper, we propose an instance-dimension dual contrastive method named IDDCLR to thoroughly mine the intrinsic knowledge underlying the data. It jointly optimizes the instance contrast and the dimension redundancy to learn better visual representations. Specifically, we employ the normalized temperature-scaled cross entropy (NT-Xent) to formulate the instance contrast loss, and propose a dimension contrast loss function that also takes the form of NT-Xent, resulting in a symmetric form of the whole loss. The significance of minimizing the loss is twofold: on the one hand, it learns effective visual representations in the latent space, where the agreement between differently augmented views of the same instance is maximized; on the other hand, it minimizes the redundancy among feature dimensions, and is consequently capable of avoiding trivial embeddings. Experimental results show that IDDCLR outperforms state-of-the-art self-supervised methods on classification tasks, and performs comparably on transfer learning tasks.
Journal Article
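The NT-Xent loss named in the abstract above has a standard formulation that can be sketched in a few lines of NumPy; this is a generic illustration of the loss, not the authors' IDDCLR implementation.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Normalized temperature-scaled cross entropy (NT-Xent) over a batch.

    z1, z2: (N, D) arrays of embeddings for two augmented views.
    Each sample's positive is its counterpart view; the other 2N - 2
    embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize for cosine similarity
    sim = z @ z.T / tau                                # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The dimension contrast loss in IDDCLR reportedly takes the same form, applied across feature dimensions rather than across instances.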
Self-supervised group meiosis contrastive learning for EEG-based emotion recognition
2023
The progress of EEG-based emotion recognition has received widespread attention from the fields of human-machine interaction and cognitive science. However, recognizing emotions with limited labelled data is still challenging. To address this issue, this paper proposes a self-supervised group meiosis contrastive learning (SGMC) framework for EEG-based emotion recognition. First, to reduce the dependence on emotion labels, SGMC introduces a contrastive learning task based on the alignment of video clips, exploiting the similar EEG responses across subjects. Moreover, the model adopts a group projector to extract group-level representations from the group samples, further reducing subject differences and random effects in EEG signals. Finally, a novel genetics-inspired data augmentation method, named meiosis, is developed, which takes advantage of the alignment of video clips among a group of EEG samples to generate augmented groups by pairing, cross exchanging, and separating. The experiments show that SGMC exhibits competitive performance on the publicly available DEAP and SEED datasets. It is worth noting that SGMC shows a high ability to recognize emotion even when using limited labelled data. Moreover, the results of feature visualization suggest that the model might have learned video-level emotion-related feature representations to improve emotion recognition. A hyperparameter analysis further shows the effect of group size on emotion recognition. Finally, comparisons of both the symmetric function and the ablation models, along with an analysis of computational efficiency, are carried out to examine the rationality of the SGMC architecture. The code is provided publicly online.
Journal Article
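The pairing-and-crossover idea behind the meiosis augmentation described above can be sketched as a toy array operation. This assumes each sample is a (channels, time) array aligned to the same video clip; it is only an illustration of the stated idea, not the SGMC code.

```python
import numpy as np

def meiosis_pair(x_a, x_b, split):
    """Toy sketch of meiosis-style augmentation: two EEG samples aligned
    to the same video clip are paired, cross-exchanged at a split point
    along the time axis, and separated into two augmented samples."""
    head_a, tail_a = x_a[:, :split], x_a[:, split:]
    head_b, tail_b = x_b[:, :split], x_b[:, split:]
    aug_1 = np.concatenate([head_a, tail_b], axis=1)  # a's head + b's tail
    aug_2 = np.concatenate([head_b, tail_a], axis=1)  # b's head + a's tail
    return aug_1, aug_2
```

Because both parents watched the same clip, the crossover preserves stimulus alignment while mixing subject-specific signal, which is what makes the augmented group a valid positive for the contrastive task.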
GPS: graph contrastive learning via multi-scale augmented views from adversarial pooling
2025
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks. A large number of graph contrastive learning approaches have shown promising performance for representation learning on graphs, which train models by maximizing agreement between original graphs and their augmented views (i.e., positive views). Unfortunately, these methods usually involve pre-defined augmentation strategies based on the knowledge of human experts. Moreover, these strategies may fail to generate challenging positive views to provide sufficient supervision signals. In this paper, we present a novel approach named graph pooling contrast (GPS) to address these issues. Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views with varying emphasis on providing challenging positives and preserving semantics, i.e., strongly-augmented view and weakly-augmented view. Then, we incorporate both views into a joint contrastive learning framework with similarity learning and consistency learning, where our pooling module is adversarially trained with respect to the encoder for adversarial robustness. Experiments on twelve datasets on both graph classification and transfer learning tasks verify the superiority of the proposed method over its counterparts.
Journal Article
A Two-Stage Model for Predicting Mild Cognitive Impairment to Alzheimer’s Disease Conversion
2022
Early detection of Alzheimer's disease (AD), such as predicting progression from mild cognitive impairment (MCI) to AD, is critical for slowing disease progression and increasing quality of life. Although deep learning is a promising technique for structural magnetic resonance imaging (MRI)-based diagnosis, the paucity of training samples limits its power, especially for three-dimensional (3D) models. To this end, we propose a two-stage model combining transfer learning and contrastive learning that can achieve high accuracy in MRI-based early AD diagnosis even when the sample numbers are restricted. Specifically, a 3D CNN model was pre-trained using publicly available medical image data to learn common medical features, and contrastive learning was further utilized to learn more specific features of MCI images. The two-stage model outperformed each benchmark method. Compared with previous studies, we show that our model achieves superior performance in progressive MCI patients, with an accuracy of 0.82 and AUC of 0.84. We further enhance the interpretability of the model by using 3D Grad-CAM, which highlights brain regions with high predictive weights. Brain regions including the hippocampus, temporal lobe, and precuneus are associated with the classification of MCI, which is supported by various types of literature. Our approach provides a novel way to avoid overfitting due to a lack of medical data and enables the early detection of AD.
Journal Article
The Objective Dementia Severity Scale Based on MRI with Contrastive Learning: A Whole Brain Neuroimaging Perspective
by Li, Wei; Chen, Xi; Zhang, Yike
in Alzheimer Disease - diagnostic imaging; Alzheimer Disease - pathology; Alzheimer's disease
2023
In the clinical treatment of Alzheimer’s disease, one of the most important tasks is evaluating its severity for diagnosis and therapy. However, traditional testing methods are deficient: they are susceptible to subjective factors and suffer from incomplete evaluation, low accuracy, or insufficient granularity, resulting in unreliable evaluation scores. To address these issues, we propose an objective dementia severity scale based on MRI (ODSS-MRI) using contrastive learning to automatically evaluate the neurological function of patients. The approach utilizes a deep learning framework and a contrastive learning strategy to mine relevant information from structural magnetic resonance images and obtain the patient’s neurological function level score. Given that the model is driven by the patient’s whole-brain imaging data, without any possibly biased manual intervention or instruction from the physician or patient, it provides a comprehensive and objective evaluation of the patient’s neurological function. We conducted experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, and the results showed that the proposed ODSS-MRI correlated with the stages of AD at 88.55%, better than all existing methods. This demonstrates its efficacy in describing the neurological function changes of patients during AD progression. It also outperformed traditional psychiatric rating scales in discriminating different stages of AD, which is indicative of its superiority for neurological function evaluation.
Journal Article
FAGCL: frequency-based augmentation graph contrastive learning for recommendation
2025
Contrastive Learning (CL) has recently achieved remarkable performance in recommendation systems, especially in Graph Collaborative Filtering (GCF), due to its effective handling of data sparsity by comparing positive and negative sample pairs. In CL-based GCF models, those sample pairs can be created by various data augmentation methods, which typically fall into two main categories: graph-based and feature-based. However, those methods are either slow to train or ignore graph structure during data augmentation. To solve these issues, in this paper we propose a frequency-based augmentation graph contrastive learning model named FAGCL, which takes graph structure into account without a slow training process. Specifically, FAGCL consists of three key steps. First, we propose a frequency-based data augmentation method to reconstruct the user-item interaction graph and obtain sample pairs for contrastive learning; it is fast to operate and can filter out some high-frequency graph signals that may lower the model’s accuracy. Second, to improve efficiency, we propose an optimized GNN forward propagation process for CL-based GCF models based on the first step. Third, to avoid extra forward/backward propagation, we adopt a one-encoder framework that combines the recommendation and contrastive learning tasks in the same pipeline instead of separating them. Extensive experiments on three benchmark datasets demonstrate that the proposed FAGCL has the fastest data augmentation and training speed, and outperforms other CL-based GCF models in accuracy in most cases.
Journal Article