Catalogue Search | MBRL
Explore the vast range of titles available.
11,788 result(s) for "Semantic features"
Multidimensional Latent Semantic Networks for Text Humor Recognition
by Xiong, Siqi; Chen, Zhiqun; Wang, Rongbo
in Acknowledgment; Ambiguity; Artificial intelligence
2022
Humor is a distinctive style of human expression and an important “lubricant” for daily communication; through humor, people can convey emotional messages that are hard to express directly. Artificial intelligence is currently a popular research domain, and discourse understanding is an important direction within it; making computers recognize and understand humorous expressions the way humans do has become a popular topic among natural language processing researchers. In this paper, a humor recognition model (MLSN) based on current humor theory and popular deep learning techniques is proposed for the humor recognition task. The model automatically identifies whether a sentence contains a humorous expression by capturing the inconsistency, phonetic features, and ambiguity of a joke as semantic features. The model was evaluated on three publicly available joke datasets and compared with state-of-the-art language models; the results demonstrate that the proposed model achieves better humor recognition accuracy and can contribute to research on discourse understanding.
Journal Article
Feats: A database of semantic features for early produced noun concepts
by Borovsky, Arielle; Peters, Ryan E.; Cox, Joseph I.
in Access; Adults; Behavioral Science and Psychology
2024
Semantic feature production norms have several desirable characteristics that have supported models of representation and processing in adults. However, several key challenges have limited the use of semantic feature norms in studies of early language acquisition. First, existing norms provide uneven and inconsistent coverage of early-acquired concepts that are typically produced and assessed in children under the age of three, a time of tremendous growth in early vocabulary skills. Second, it is difficult to assess the degree to which young children may be familiar with normed features derived from these adult-generated datasets. Third, it has been difficult to adopt standard methods to generate semantic network models of early noun learning. Here, we introduce Feats, a tool designed to make headway on these challenges by providing a database, the Language Learning and Meaning Acquisition (LLaMA) lab Noun Norms, that extends a widely used set of feature norms (McRae et al., Behavior Research Methods, 37, 547–559, 2005) to include full coverage of noun concepts on a commonly used early vocabulary assessment. Feats includes several tools to facilitate exploration of the features comprising early-acquired nouns, assess the developmental appropriateness of individual features using toddler-accessibility norms, and extract semantic network statistics for individual vocabulary profiles. We provide a tutorial overview of Feats. We additionally validate our approach by presenting an analysis of an overlapping set of concepts collected across prior and new data collection methods. Furthermore, using network graph analyses, we show that the extended set of norms provides novel, reliable results given their enhanced coverage.
Journal Article
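The semantic network statistics mentioned in the Feats abstract are typically derived from feature overlap between concepts. Below is a minimal sketch of that idea, assuming toy norms data and a shared-feature threshold; Feats' actual schema and graph construction may differ.

```python
# Sketch: build a feature-overlap semantic network from production norms.
# The norms below are invented toy data, not Feats' actual contents.
from itertools import combinations

norms = {
    "dog":   {"has_fur", "has_tail", "barks", "is_pet"},
    "cat":   {"has_fur", "has_tail", "meows", "is_pet"},
    "ball":  {"is_round", "bounces", "is_toy"},
    "apple": {"is_round", "is_red", "is_food"},
}

def build_network(norms, min_shared=2):
    """Connect two concepts when they share at least `min_shared` features."""
    edges = {c: set() for c in norms}
    for a, b in combinations(norms, 2):
        if len(norms[a] & norms[b]) >= min_shared:
            edges[a].add(b)
            edges[b].add(a)
    return edges

def degree(edges):
    """A simple network statistic: number of neighbors per concept."""
    return {c: len(nbrs) for c, nbrs in edges.items()}

net = build_network(norms)
print(degree(net))  # dog and cat share three features, so each has degree 1
```

Richer statistics (clustering, path length) follow the same pattern once the edge set is built.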
Salient semantics
2024
Semantic features are components of concepts. In philosophy, there is a predominant focus on those features that are necessary (and jointly sufficient) for the application of a concept. Consequently, the method of cases has been the paradigm tool among philosophers, including experimental philosophers. However, whether a feature is salient is often far more important for cognitive processes like memory, categorization, recognition and even decision-making than whether it is necessary. The primary objective of this paper is to emphasize the significance of researching salient features of concepts. I thereby advocate the use of semantic feature production tasks, which not only enable researchers to determine whether a feature is salient, but also provide a complementary method for studying ordinary language use. I will discuss empirical data on three concepts, conspiracy theory, female/male professor, and life, to illustrate that semantic feature production tasks can help philosophers (a) identify those salient features that play a central role in our reasoning about and with concepts, (b) examine socially relevant stereotypes, and (c) investigate the structure of concepts.
Journal Article
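The feature production tasks advocated in the abstract above are commonly scored by production frequency: a feature counts as salient if enough participants list it. A minimal sketch, with invented responses and an assumed 50% threshold (the paper's actual scoring criterion is not specified here):

```python
# Sketch: operationalize feature salience as production frequency across
# participants. Responses and the 0.5 threshold are illustrative assumptions.
from collections import Counter

responses = [
    ["secret plot", "powerful actors", "false"],
    ["secret plot", "powerful actors"],
    ["secret plot", "irrational"],
]  # features listed for one concept by three participants

counts = Counter(f for r in responses for f in r)
salient = [f for f, c in counts.items() if c / len(responses) >= 0.5]
print(salient)  # features produced by at least half the participants
```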
XSS Attack Detection Based on Multisource Semantic Feature Fusion
2025
Cross-site scripting (XSS) attacks can be implemented through various attack vectors, and the diversity of these vectors significantly increases the overhead required for detection systems. The existing XSS detection methods face issues such as insufficient feature extraction capabilities for XSS attacks, inadequate multisource feature fusion processes, and high resource consumption levels for their detection models. To address these problems, we propose a novel XSS detection approach based on multisource semantic feature fusion. First, we design a normalized tokenization rule based on the structural features of XSS code and use a word embedding model to generate the original feature vectors of XSS. Second, we propose a local semantic feature extraction network based on depthwise separable convolution (DSC) that extracts XSS text and syntactic features using convolution kernels with different sizes. Then, we use a bidirectional long short-term memory (Bi-LSTM) network to extract the global semantic features of XSS. Finally, we introduce a multihead attention fusion network that employs a saliency score and a dynamic weight adjustment mechanism to identify the key parts of the input sequence and dynamically adjust the weight of each head. This enables the deep fusion of local and global XSS semantic features. Experimental results demonstrate that the proposed approach achieves an F1 score of 99.92%, outperforming the existing detection methods.
Journal Article
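As a rough illustration of the first step described above, here is a hedged sketch of a normalizing tokenizer for XSS payloads. The paper's actual normalization rules are not given in the abstract, so the patterns below are illustrative assumptions only; the resulting tokens would then feed a word embedding model.

```python
# Sketch: normalize volatile literals (URLs, strings, numbers) to placeholder
# tokens before tokenizing an XSS payload. Patterns are assumptions.
import re

RULES = [
    (re.compile(r"https?://\S+"), "URL"),
    (re.compile(r'"[^"]*"|\'[^\']*\''), "STR"),  # string literals
    (re.compile(r"\d+"), "NUM"),
]
TOKEN = re.compile(r"</?\w+|[\w$]+|[<>=(){};/]")

def tokenize(payload: str):
    """Apply normalization rules, then split into tag/word/symbol tokens."""
    for pat, repl in RULES:
        payload = pat.sub(repl, payload)
    return TOKEN.findall(payload.lower())

print(tokenize('<script>alert("xss1")</script>'))
# ['<script', '>', 'alert', '(', 'str', ')', '</script', '>']
```

Normalizing literals this way shrinks the vocabulary, so structurally identical payloads map to the same token sequence regardless of their payload strings.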
Fishing Vessel Type Recognition Based on Semantic Feature Vector
by Yuan, Junfeng; Zhang, Jilin; Xue, Meiting
in Analysis; Artificial intelligence; Classification
2024
Identifying fishing vessel types with artificial intelligence has become a key technology in marine resource management. However, classical feature modeling cannot express time-series features, and its feature extraction is insufficient. Hence, this work focuses on identifying trawlers, gillnetters, and purse seiners based on semantic feature vectors. First, we extract trajectories from massive and complex historical Vessel Monitoring System data that contain a large amount of dirty data, and then extract the semantic features of the fishing vessel trajectories. Finally, we input the semantic feature vectors into the LightGBM classification model to classify fishing vessel types. In our experiments, the F1 measure of the proposed method on the East China Sea fishing vessel dataset reached 96.25%, which was 6.82% higher than that of the classical feature-modeling method based on fishing vessel trajectories. The experiments show that this method is accurate and effective for the classification of fishing vessels.
Journal Article
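The semantic-feature step described above amounts to turning a raw trajectory into a small numeric vector. A minimal sketch follows, with feature choices (mean speed, speed variability, low-speed ratio) that are assumptions rather than the paper's actual feature set; such a vector would then go to a classifier like LightGBM.

```python
# Sketch: derive simple semantic features from a VMS trajectory given as
# (lat, lon, speed_knots) points. Feature choices are illustrative only.
from statistics import mean, stdev

def trajectory_features(track):
    speeds = [p[2] for p in track]
    # Fraction of slow fixes: trawling-like behavior tends to be low-speed.
    low_speed_ratio = sum(s < 4.0 for s in speeds) / len(speeds)
    return {
        "mean_speed": mean(speeds),
        "speed_std": stdev(speeds) if len(speeds) > 1 else 0.0,
        "low_speed_ratio": low_speed_ratio,
    }

track = [(30.1, 122.5, 3.2), (30.2, 122.6, 3.5), (30.2, 122.7, 8.1)]
feats = trajectory_features(track)
# list(feats.values()) would be one row of the classifier's input matrix,
# e.g. for a LightGBM model predicting trawler / gillnetter / purse seiner.
print(feats)
```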
Against semantic features: the view from derivational affixes
2024
This paper builds a systematic argument against the existence of semantic features, although these would in principle conform with the understanding of features in Chomsky (1995) as instructions to the interfaces, in this case the Conceptual-Intentional Interface. I first lay out their superfluous character, as well as their redundancy in separationist/realisational approaches and in non-lexicalist models of grammar more generally. Under the assumption that lexical meaning in natural language is mediated by grammatical structure containing roots, (purely) semantic features would inevitably be restricted to “non-lexical” elements only, i.e. those derivational affixes that encode rich conceptual content. This makes the positing of semantic features methodologically suspect and, ultimately, redundant. Accordingly, the rich content of derivational affixes, which can involve pretty much any nominal concept (as in Acquaviva 2009) from ‘profession’, ‘tree’, and ‘place’ to body parts, will be argued not to be encoded in terms of semantic features. On the contrary, this paper makes the case that derivational affixes do not belong to a unitary syntactic category, with some derivational affixes actually being roots interpreted in particular structural contexts, as has been argued since De Belder (2011). The paper closes by offering a taxonomy of the elements that grammar manipulates and sketches the division of labour between root structures and formal features.
Journal Article
Chinese sentiment analysis model by integrating multi-granularity semantic features
2023
Purpose: In recent years, Chinese sentiment analysis has made great progress, but the characteristics of the language itself and downstream task requirements have not been explored thoroughly. It is not practical to directly transfer achievements from English sentiment analysis to Chinese because of the huge differences between the two languages.
Design/methodology/approach: In view of the particularity of Chinese text and the requirements of sentiment analysis, a Chinese sentiment analysis model integrating multi-granularity semantic features is proposed in this paper. The model introduces radical and part-of-speech features on top of character and word features, applying bidirectional long short-term memory, an attention mechanism and a recurrent convolutional neural network.
Findings: Comparative experiments showed that the F1 values of this model reach 88.28 and 84.80 per cent on the man-made dataset and the NLPECC dataset, respectively. Meanwhile, an ablation experiment verified the effectiveness of the attention mechanism and of the part-of-speech, radical, character and word factors in Chinese sentiment analysis. The performance of the proposed model exceeds that of existing models to some extent.
Originality/value: The academic contribution of this paper is as follows: first, in view of the particularity of Chinese texts and the requirements of sentiment analysis, this paper focuses on the deficiencies of Chinese sentiment analysis in the big-data context. Second, it borrows ideas from multiple interdisciplinary frontier theories and methods, such as information science, linguistics and artificial intelligence, which makes it innovative and comprehensive. Finally, it deeply integrates multi-granularity semantic features such as character, word, radical and part of speech, further complementing the theoretical framework and method system of Chinese sentiment analysis.
Journal Article
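The multi-granularity idea above (character, word, radical, part of speech) can be sketched as concatenating embeddings from several lookup tables before they enter the network. The tables, characters, and tiny dimensions below are toy assumptions, not the paper's model:

```python
# Sketch: fuse character, radical, and part-of-speech channels by
# concatenating their embeddings. All tables are invented toy data.
RADICALS = {"好": "女", "情": "忄"}            # character -> radical lookup
EMB = {                                        # toy embedding tables
    "char:好": [0.2, 0.1], "char:情": [0.4, 0.3],
    "rad:女": [0.9, 0.0],  "rad:忄": [0.0, 0.8],
    "pos:a": [1.0],        "pos:n": [0.0],
}

def fuse(char, pos):
    """Concatenate character, radical, and POS embeddings into one vector."""
    return (EMB[f"char:{char}"]
            + EMB[f"rad:{RADICALS[char]}"]
            + EMB[f"pos:{pos}"])

print(fuse("好", "a"))  # [0.2, 0.1, 0.9, 0.0, 1.0]
```

In the full model such fused vectors would feed the Bi-LSTM/attention layers; the point here is only that each granularity contributes its own slice of the input representation.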
Aspect-level sentiment classification with fused local and global context
2023
Sentiment analysis aims to determine the sentiment orientation of a text piece (sentence or document), but many practical applications require more in-depth analysis, which makes finer-grained sentiment classification the ideal solution. Aspect-level Sentiment Classification (ALSC) is the task of identifying the emotional polarity of aspect terms in a sentence. As the mainstream Transformer framework in sentiment classification, BERT-based models apply a self-attention mechanism that extracts global semantic information for a given aspect, while a certain proportion of local information is lost in the process. Although recent ALSC models have achieved good performance, they suffer from robustness issues. In addition, an uneven distribution of samples greatly hurts model performance. To address these issues, we present the PConvBERT (Prompt-ConvBERT) and PConvRoBERTa (Prompt-ConvRoBERTa) models, in which local context features learned by a Local Semantic Feature Extractor (LSFE) are fused with the BERT/RoBERTa global features. To deal with the robustness problem of many deep learning models, adversarial training is applied to increase model stability. Additionally, Focal Loss is applied to alleviate the impact of the unbalanced sample distribution. To fully exploit the ability of the pre-trained model itself, we also propose natural-language prompt approaches that better solve the ALSC problem, utilizing the masked-vector outputs of templates for sentiment classification. Extensive experiments on public datasets demonstrate the effectiveness of our models.
Journal Article
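The local/global fusion described above can be reduced to a toy example: pool token vectors in a window around the aspect term (local), pool over the whole sentence (global), and concatenate. Real PConvBERT uses BERT features and convolutions; the 2-d vectors and mean pooling here are assumptions made purely for illustration.

```python
# Sketch: concatenate a local (windowed) pooled feature with a global pooled
# feature for one aspect term. Token vectors are invented toy data.
def mean_pool(vectors):
    """Element-wise mean over a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fuse_local_global(token_vecs, aspect_idx, window=1):
    lo = max(0, aspect_idx - window)
    hi = min(len(token_vecs), aspect_idx + window + 1)
    local = mean_pool(token_vecs[lo:hi])   # context around the aspect term
    global_ = mean_pool(token_vecs)        # whole-sentence context
    return local + global_                 # concatenated classifier input

vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
print(fuse_local_global(vecs, aspect_idx=1))
```

A classifier head over this concatenation then sees both the aspect's immediate neighborhood and the sentence-level signal, which is the gap the LSFE is meant to close.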
The Effects of Implementing the Strategy of Semantic Feature Analysis (SFA) in Promoting Vocabulary in School-Aged Portuguese Children in Inclusive Schools
by Cruz-Santos, Anabela; Verde, Elisabete; Lima, Etelvina
in Academic achievement; Children; Classrooms
2024
Background: The purpose of this study was to apply and analyze the impact of the semantic feature analysis (SFA) strategy on vocabulary development and on the comprehension of texts and theoretical concepts in Portuguese school-age students with and without special educational needs (SEN) attending inclusive schools. Method: The research design was quasi-experimental. The SFA was administered in ten sessions of approximately 60 min each. The sample was a convenience sample consisting of three classes from each school: (i) in the first cycle of basic education, 65 students were divided into a control group, an experimental group and a structured-teaching group; (ii) in the second cycle of basic education, 55 students were divided into an experimental group, an online virtual school and a control group. Results: (1) the SFA strategy is motivating, appealing, inexpensive, flexible and easy to implement; (2) the SFA strategy is easy to learn and can be successfully taught in any classroom; (3) the performance of the students assigned to the experimental groups was significantly higher in both cycles compared with all the other groups; (4) the effect sizes were 0.87 in the first cycle and 0.88 in the second. Conclusion: The SFA strategy effectively promotes the development of vocabulary, concept knowledge and text comprehension in school-age children, and is more effective than regular teaching.
Journal Article
Semantic Feature Analysis Treatment for Anomia in Two Fluent Aphasia Syndromes
2004
The effect of semantic feature analysis (SFA) treatment on confrontation naming and discourse production was examined in 2 persons, 1 with anomic aphasia and 1 with Wernicke's aphasia. Results indicated that confrontation naming of treated nouns improved and generalized to untreated nouns for both participants, who appeared to have different lexical access impairments. Both participants demonstrated improvement in some aspects of discourse production associated with the confrontation naming SFA treatment. However, there was no change in most manifestations of lexical retrieval difficulty during discourse for either participant. These findings support previous work regarding improved and generalized naming associated with SFA treatment and indicate a need to examine effects of improved confrontation naming on more natural speaking situations.
Journal Article