Catalogue Search | MBRL
6,281 result(s) for "Semantic relations"
Spatial working memory is critical for gesture processing: Evidence from gestures with varying semantic links to speech
by Göksun, Tilbe; Özer, Demet; Özyürek, Aslı
in Adolescent; Adult; Behavioral Science and Psychology
2025
Gestures express redundant or complementary information to the speech they accompany by depicting visual and spatial features of referents. In doing so, they recruit both spatial and verbal cognitive resources that underpin the processing of visual semantic information and its integration with speech. The relation between spatial and verbal skills and gesture comprehension, where gestures may serve different roles in relation to speech, is yet to be explored. This study examined the role of spatial and verbal skills in processing gestures that expressed redundant or complementary information to speech during the comprehension of spatial relations between objects. Turkish-speaking adults (N = 74) watched videos describing the spatial location of objects that involved perspective-taking (left-right) or not (on-under) with speech and gesture. Gestures either conveyed redundant information to speech (e.g., saying and gesturing “left”) or complemented the accompanying demonstrative in speech (e.g., saying “here,” gesturing “left”). We also measured participants’ spatial (the Corsi block span and the mental rotation tasks) and verbal skills (the digit span task). Our results revealed nuanced interactions between these skills and spatial language comprehension, depending on the modality in which the information was expressed. One insight emerged prominently: spatial skills, particularly spatial working memory capacity, were related to enhanced comprehension of visual semantic information conveyed through gestures, especially when this information was not present in the accompanying speech. This study highlights the critical role of spatial working memory in gesture processing and underscores the importance of examining the interplay among cognitive and contextual factors to understand the complex dynamics of multimodal language.
Journal Article
Neural correlates of semantic-driven syntactic parsing in sentence comprehension
2024
• Word order, case markers, and semantics are combined to decode a sentence structure.
• Semantic-driven parsing in unmarked non-canonical sentences activates Broca's area.
• Different types of syntactic parsing recruit a common neural substrate.
For sentence comprehension, information carried by semantic relations between constituents must be combined with other information to decode the constituent structure of a sentence, due to atypical and noisy situations of language use. The neural correlates of decoding sentence structure from semantic information have remained largely unexplored. In this functional MRI study, we examine the neural basis of semantic-driven syntactic parsing during sentence reading and compare it with that of other types of syntactic parsing driven by word order and case marking. Chinese transitive sentences of various structures were investigated, differing in word order, case marking, and agent-patient semantic relations (i.e., same vs. different in animacy). For the non-canonical unmarked sentences without usable case marking, a semantic-driven effect triggered by agent-patient ambiguity was found in the left inferior frontal gyrus opercularis (IFGoper) and left inferior parietal lobule, with the activity not being modulated by naturalness factors of the sentences. The comparison between each type of non-canonical sentence and canonical sentences revealed that the non-canonicity effect engaged the left posterior frontal and temporal regions, in line with previous studies. No additional neural activity responsive to case marking was found within the non-canonical sentences. A word order effect across all types of sentences was also found in the left IFGoper, suggesting a common neural substrate between different types of parsing. The semantic-driven effect was also observed for the non-canonical marked sentences but not for the canonical sentences, suggesting that semantic information is used in decoding sentence structure in addition to case marking. The current findings illustrate the neural correlates of syntactic parsing with semantics, and provide neural evidence of how semantics facilitates syntax together with other information.
Journal Article
Core Semantic Links or Lexical Associations: Assessing the Nature of Responses in Word Association Tasks
by García, Adolfo M; Manoiloff, Laura; Lizarralde, Francisco
in Associative Learning; Associative processes; Cognitive psychology
2019
The processes tapped by the widely-used word association (WA) paradigm remain a matter of debate: while some authors consider them as driven by lexical co-occurrences, others emphasize the role of meaning-based connections. To test these contrastive hypotheses, we analyzed responses in a WA task in terms of their normative defining features (those describing the object denoted by the cue word). Results indicate that 72.5% of the responses had medium-to-high coincidence with such defining semantic features. Moreover, 75.51% of responses had medium-to-high values of Relevance (a measure of the importance of the feature for construing a given concept). Furthermore, most responses (62.7%) referred to elements of the situation in which the concept usually appears, followed by sensory properties (e.g., color) of the denoted object (27.86%). These results suggest that the processes behind WA tasks involve a reactivation of the cue item’s semantic properties, particularly those most relevant to its core meaning.
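The coincidence analysis described above, checking whether a word-association response names defining features of the cue word, can be sketched in a few lines. This is a toy illustration only: the cue words, feature sets, and the proportion-of-overlap measure are invented for the sketch, not taken from the study's norms.

```python
# Toy norms: defining features for cue words (invented for illustration).
defining_features = {
    "dog": {"barks", "has fur", "is a pet", "has four legs"},
    "lemon": {"is yellow", "is sour", "is a fruit", "has peel"},
}

def feature_match(cue, response_features):
    # Proportion of a response's features that are also defining
    # features of the cue word -- a rough coincidence measure.
    defs = defining_features[cue]
    return len(defs & response_features) / len(response_features)

print(feature_match("lemon", {"is sour", "is yellow"}))  # full overlap
print(feature_match("dog", {"is loyal", "barks"}))       # partial overlap
```

A response scoring high on such a measure would count as semantically driven rather than a mere lexical co-occurrence.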
Journal Article
Rethinking cross-domain semantic relation for few-shot image generation
by He, Yujie; Li, Min; Zhang, Yusen
in Ablation; Generative adversarial networks; Image processing
2023
Training well-performing Generative Adversarial Networks (GANs) with limited data has always been challenging. Existing methods either require sufficient data (over 100 training images) for training or generate images of low quality and low diversity. To solve this problem, we propose a new Cross-domain Semantic Relation (CSR) loss. The CSR loss improves the performance of the generative model by maintaining the relationship between instances in the source domain and generated images. At the same time, a perceptual similarity loss and a discriminative contrastive loss are designed to further enrich the diversity of generated images and stabilize the training process of models. Experiments on nine publicly available few-shot datasets and comparisons with nine current methods show that our approach is superior to all baseline methods. Finally, we perform ablation studies on the three proposed loss functions and prove that these three loss functions are essential for few-shot image generation tasks. Code is available at https://github.com/gouayao/CSR.
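The idea of maintaining instance relationships across domains can be sketched, in heavily simplified form, as matching the pairwise similarity structure of source features against that of generated features. The feature vectors and the mean-squared matching below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pairwise_sims(feats):
    # Similarity of every ordered pair of distinct instances.
    return [cosine(feats[i], feats[j])
            for i in range(len(feats)) for j in range(len(feats)) if i != j]

def csr_like_loss(src_feats, gen_feats):
    # Penalise divergence between the two similarity structures
    # (mean squared difference; a stand-in for the paper's loss).
    s, g = pairwise_sims(src_feats), pairwise_sims(gen_feats)
    return sum((a - b) ** 2 for a, b in zip(s, g)) / len(s)

# Toy features: generated instances that preserve the source's pairwise
# structure incur a lower loss than ones that scramble it.
src = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
good = [[0.8, 0.1], [0.7, 0.2], [0.1, 0.9]]
bad = [[0.0, 1.0], [1.0, 0.0], [0.9, 0.1]]
print(csr_like_loss(src, good) < csr_like_loss(src, bad))
```

Preserving this relational structure is what lets a few-shot generator inherit diversity from the source domain rather than collapsing onto the handful of target images.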
Journal Article
Scene Changes Understanding Framework Based on Graph Convolutional Networks and Swin Transformer Blocks for Monitoring LCLU Using High-Resolution Remote Sensing Images
by Jeon, Gwanggil; Yang, Sihan; Sun, Rui
in Algorithms; Artificial neural networks; Change detection
2022
High-resolution remote sensing images with rich land surface structure can provide data support for accurately understanding more detailed change information of land cover and land use (LCLU) at different times. In this study, we present a novel scene change understanding framework for remote sensing, which includes scene classification and change detection. To enhance the feature representation of images in scene classification, a robust label semantic relation learning (LSRL) network based on EfficientNet is presented for scene classification. It consists of a semantic relation learning module based on graph convolutional networks and a joint expression learning framework based on similarity. Since bi-temporal remote sensing image pairs include spectral information in both temporal and spatial dimensions, land cover and land use change monitoring can be improved by using the relationship between different spatial and temporal locations. Therefore, a change detection method based on Swin Transformer blocks (STB-CD) is presented to obtain contextual relationships between targets. The experimental results on the LEVIR-CD, NWPU-RESISC45, and AID datasets demonstrate the superiority of LSRL and STB-CD over other state-of-the-art methods.
Journal Article
BertSRC: transformer-based semantic relation classification
2022
The relationships between biomedical entities are complex, and many of them have not yet been identified. For many biomedical research areas, including drug discovery, it is of paramount importance to identify the relationships that have already been established through a comprehensive literature survey. However, manually searching through the literature is difficult as the amount of biomedical publications continues to increase. Therefore, the relation classification task, which automatically mines meaningful relations from the literature, is spotlighted in the field of biomedical text mining. By applying relation classification techniques to the accumulated biomedical literature, existing semantic relations between biomedical entities, which can help to infer previously unknown relationships, can be efficiently identified. To develop semantic relation classification models, a type of supervised machine learning, it is essential to construct a training dataset that is manually annotated by biomedical experts with semantic relations among biomedical entities. Any advanced model must be trained on a dataset of reliable quality and meaningful scale to be deployed in the real world and assist biologists in their research. In addition, as the number of such public datasets increases, the performance of machine learning algorithms can be accurately revealed and compared by using those datasets as a benchmark for model development and improvement. In this paper, we aim to build such a dataset. Along with that, to validate the usability of the dataset as training data for relation classification models and to improve the performance of the relation extraction task, we built a relation classification model based on Bidirectional Encoder Representations from Transformers (BERT) trained on our dataset, applying our newly proposed fine-tuning methodology.
In experiments comparing performance among several models based on different deep learning algorithms, our model with the proposed fine-tuning methodology showed the best performance. The experimental results show that the constructed training dataset is an important information resource for the development and evaluation of semantic relation extraction models. Furthermore, relation extraction performance can be improved by integrating our proposed fine-tuning methodology. Therefore, this can lead to the promotion of future text mining research in the biomedical field.
Journal Article
Semantic Relation Model and Dataset for Remote Sensing Scene Understanding
by Zhang, Dezheng; Li, Peng; Liu, Xin
in attentional mechanism; Cognition; Cognition & reasoning
2021
A deep understanding of our visual world is more than an isolated perception of a series of objects; the relationships between them also contain rich semantic information. Especially for satellite remote sensing images, the span is so large that the various objects are always of different sizes and in complex spatial compositions. Therefore, the recognition of semantic relations is conducive to strengthening the understanding of remote sensing scenes. In this paper, we propose a novel multi-scale semantic fusion network (MSFN). In this framework, dilated convolution is introduced into a graph convolutional network (GCN) based on an attentional mechanism to fuse and refine multi-scale semantic context, which is crucial to strengthening the cognitive ability of our model. Besides, based on the mapping between visual features and semantic embeddings, we design a sparse relationship extraction module to remove meaningless connections among entities and improve the efficiency of scene graph generation. Meanwhile, to further promote research on scene understanding in the remote sensing field, this paper also proposes a remote sensing scene graph dataset (RSSGD). We carry out extensive experiments, and the results show that our model significantly outperforms previous methods on scene graph generation. In addition, RSSGD effectively bridges the huge semantic gap between low-level perception and high-level cognition of remote sensing images.
Journal Article
Socio-Cultural Perspectives on Color Semantics: A Semiotic Analysis of Color Symbolism in English and Arabic
2023
In the real world, each word within a language carries a simple referential meaning. Yet there exist intricate semantic relationships, all of which may vary depending on the various linguistic contexts. In this paper, I explore two main color terms, red and black, in terms of their semantic relationships between English and Arabic. This paper discusses different kinds of meanings (i.e., denotational and connotational) and semantic relations (i.e., paradigmatic and syntagmatic) of the two terms red and black in English. Afterward, a comparison of the findings is expounded, elucidating the semantic nuances and pragmatic applications of these terms within the Arabic language, my native tongue. The results revealed that in English, red and black carry some semantic meanings similar to Arabic. These similarities arise from universal beliefs. However, the diversity of linguistic and cultural influences results in distinct semantic relations of these color terms in the two languages.
Journal Article
Evaluating semantic relations in neural word embeddings with biomedical and general domain knowledge bases
by Chen, Zhiwei; He, Zhe; Bian, Jiang
in Biological Ontologies; Black boxes; Computational biology
2018
Background
In the past few years, neural word embeddings have been widely used in text mining. However, the vector representations of word embeddings mostly act as a black box in downstream applications using them, thereby limiting their interpretability. Even though word embeddings are able to capture semantic regularities in free text documents, it is not clear how different kinds of semantic relations are represented by word embeddings and how semantically-related terms can be retrieved from word embeddings.
Methods
To improve the transparency of word embeddings and the interpretability of the applications using them, in this study, we propose a novel approach for evaluating the semantic relations in word embeddings using external knowledge bases: Wikipedia, WordNet and Unified Medical Language System (UMLS). We trained multiple word embeddings using health-related articles in Wikipedia and then evaluated their performance in the analogy and semantic relation term retrieval tasks. We also assessed if the evaluation results depend on the domain of the textual corpora by comparing the embeddings of health-related Wikipedia articles with those of general Wikipedia articles.
Results
Regarding the retrieval of semantic relations, we were able to retrieve semantically related terms from the word embeddings. Meanwhile, the two popular word embedding approaches, Word2vec and GloVe, obtained comparable results on both the analogy retrieval task and the semantic relation retrieval task, while dependency-based word embeddings had much worse performance in both tasks. We also found that the word embeddings trained with health-related Wikipedia articles obtained better performance in the health-related relation retrieval tasks than those trained with general Wikipedia articles.
Conclusion
It is evident from this study that word embeddings can group terms with diverse semantic relations together. The domain of the training corpus does have an impact on the semantic relations represented by word embeddings. We thus recommend using a domain-specific corpus to train word embeddings for domain-specific text mining tasks.
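The semantic relation term retrieval task evaluated above boils down to nearest-neighbour search by cosine similarity in the embedding space. A minimal sketch follows; the vocabulary and three-dimensional vectors are toy values invented for illustration, not embeddings from the study.

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Toy 3-dimensional "embeddings" (hypothetical values for illustration).
embeddings = {
    "aspirin":   [0.9, 0.1, 0.0],
    "ibuprofen": [0.8, 0.2, 0.1],
    "analgesic": [0.7, 0.3, 0.0],
    "guitar":    [0.0, 0.1, 0.9],
}

def nearest(term, k=2):
    # Rank all other vocabulary terms by cosine similarity to `term`.
    others = [(w, cosine(embeddings[term], v))
              for w, v in embeddings.items() if w != term]
    return [w for w, _ in sorted(others, key=lambda p: p[1], reverse=True)[:k]]

print(nearest("aspirin"))  # related drug terms rank above "guitar"
```

Whether the neighbours returned this way correspond to named relations in a knowledge base (e.g., UMLS "may_treat") is exactly what the study's evaluation measures.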
Journal Article
Computing semantic similarity of texts by utilizing dependency graph
by Mohebbi, Majid; Razavi, Seyed Naser; Balafar, Mohammad Ali
in Computation; Correlation coefficients; Datasets
2023
The problem of Semantic Textual Similarity (STS) is a significant issue in Natural Language Processing (NLP). STS recognizes and measures semantic relations between two texts. Since the ability to determine the degree of the semantic relationship between sentence pairs is an integral part of machines that understand and infer natural language, we intend to improve the performance of the neural network systems computing the degree of the semantic relation. We propose a graph-U-Net model that operates on a dependency graph and is placed on top of a transformer. Our proposed model indicates the importance of the words in the sentence by assigning the words to several levels while a score as a degree of importance is computed for each level. These scores are used as a weighted average to produce the final result. The importance of the words is new information that our proposed model extracts from the STS and Paraphrase Identification (PI) datasets. We examine the effect of the proposed model on the performance of some transformers in computing semantic relation scores. We use STS2017 and MRPC datasets to evaluate our proposed model. Experimental evaluations show that compared to the transformers, our proposed model obtains a higher value of Pearson and Spearman correlation coefficients and also generates valuable representations for each input so that they improve the Pearson and Spearman values of the systems computing the degree of semantic equivalence between two texts.
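The Pearson coefficient used above to evaluate STS systems compares a model's predicted similarity scores against gold annotations. A self-contained sketch, with made-up gold scores on the usual 0-5 STS scale and made-up predictions:

```python
import math

def pearson(xs, ys):
    # Pearson correlation: covariance of the two score lists divided
    # by the product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical gold similarity scores (0-5, as in STS benchmarks) and
# model predictions for five sentence pairs; the values are made up.
gold = [0.0, 1.5, 2.5, 4.0, 5.0]
pred = [0.2, 1.0, 3.0, 3.8, 4.9]

print(round(pearson(gold, pred), 3))  # close to 1.0 for a good model
```

Spearman correlation is computed the same way over the ranks of the scores, which is why STS papers typically report both.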
Journal Article