Catalogue Search | MBRL
Explore the vast range of titles available.
13,246 result(s) for "Slang"
The A-Z of Gender and Sexuality
An A-Z glossary of trans and queer words and phrases that explains specific terminology and contextualises terms within transgender history. By dispelling myths about 'correct' language, this guide will serve as an accessible introduction to more informed conversations around gender and sexuality.
How Gen Z took over incel slang | Opinion
Adam Aleksic, a Gen Z linguist also known as the "Etymology Nerd" on social media, explains how Gen Z transforms and satirizes some incel terms.
Streaming Video
Slang feature extraction by analysing topic change on social media
by Matsumoto, Kazuyuki; Ren, Fuji; Matsuoka, Masaya
in Accuracy; analysing topic change; automatic information collection
2019
Recently, the authors often see words such as youth slang, neologisms and Internet slang on social networking sites (SNSs) that are not registered in dictionaries. Since the documents posted to SNSs include a lot of fresh information, they are thought to be useful for collecting information. It is important to analyse these words (hereinafter referred to as 'slang') and capture their features in order to improve the accuracy of automatic information collection. This study aims to analyse what features can be observed in slang by focusing on topics. The authors construct topic models from document groups on Twitter that include the target slang, using latent Dirichlet allocation. With these models, they chronologically analyse the change of topics during a certain period of time to find the differences in features between slang and general words. They then propose a slang classification method based on the change of features.
Journal Article
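The approach summarized in the abstract above can be illustrated with a small sketch. This is not the authors' code; it assumes scikit-learn and measures topic drift as the L1 distance between the mean LDA topic distributions of a word's contexts in two time periods, on the idea that slang terms shift topics more than general words.

```python
# Illustrative sketch (not the paper's implementation): fit an LDA topic model
# over posts containing a target word from two time periods, then compare how
# the word's topic distribution shifts between the periods.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_drift(period_a_docs, period_b_docs, n_topics=3, seed=0):
    """L1 distance between the mean topic distributions of two periods."""
    vec = CountVectorizer()
    X = vec.fit_transform(period_a_docs + period_b_docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    theta = lda.fit_transform(X)            # per-document topic proportions
    a = theta[:len(period_a_docs)].mean(axis=0)
    b = theta[len(period_a_docs):].mean(axis=0)
    return float(np.abs(a - b).sum())       # larger drift suggests slang-like behaviour

# Toy example: posts containing the word "lit" in two periods.
docs_a = ["the weather is lit today", "this party is lit", "lit candles glow"]
docs_b = ["that movie was lit", "lit fireworks tonight", "her outfit is lit"]
print(round(topic_drift(docs_a, docs_b), 3))
```

A classifier could then threshold (or learn over) such drift scores to separate slang from general vocabulary, as the paper proposes.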
It’s All Greek to Them: Challenges in Translating Greek Slang and Idioms via LLMs and NMT
2025
This thesis investigates the relatively understudied task of translating Greek (a medium-resource language) slang and idiomatic expressions. While Machine Translation (MT) has made significant advancements over the past decade, first through Neural Machine Translation (NMT) and later with Large Language Models (LLMs), its effectiveness remains underexplored for informal and culturally specific linguistic constructions, especially in under-resourced settings. This work addresses these challenges by creating two novel parallel Greek-English datasets, one for slang and one for idioms, and testing three LLMs (Gemma, Llama, Mistral) and one NMT model (Helsinki). To probe the models' knowledge, both datasets are also manually annotated with "informativeness" scores with the assistance of human participants, indicating how easily the meaning of a target expression can be inferred in a given context. An error analysis is also conducted to identify specific model weaknesses. Results show consistently poor translation performance across all models, for both datasets, with no statistically significant differences among them. However, localized differences in handling informativeness and substantial variation in error types are observed. These findings align with existing research highlighting the challenges of translating under-resourced languages and culturally embedded expressions.
Dissertation
The influence of preprocessing on text classification using a bag-of-words representation
2020
Text classification (TC) is the task of automatically assigning documents to a fixed number of categories. TC is an important component in many text applications, and many of these applications perform preprocessing. There are different types of text preprocessing, e.g., conversion of uppercase letters into lowercase letters, HTML tag removal, stopword removal, punctuation mark removal, lemmatization, correction of common misspelled words, and reduction of replicated characters. We hypothesize that applying different combinations of preprocessing methods can improve TC results. Therefore, we performed an extensive and systematic set of TC experiments (our main research contribution) to explore the impact of all possible combinations of five or six basic preprocessing methods on four benchmark text corpora (and not samples of them), using three machine learning methods with training and test sets. The general conclusion (at least for the datasets verified) is that it is always advisable to combine an extensive and systematic variety of preprocessing methods with TC experiments, because doing so contributes to improved TC accuracy. For all the tested datasets, there was always at least one combination of basic preprocessing methods that could be recommended to significantly improve TC using a BOW representation. For three datasets, stopword removal was the only single preprocessing method that enabled a significant improvement compared to the baseline result using a bag of 1,000-word unigrams. For some of the datasets, there was minimal improvement when we removed HTML tags, performed spelling correction, removed punctuation marks, or reduced replicated characters. However, for the fourth dataset, stopword removal was not beneficial. Instead, the conversion of uppercase letters into lowercase letters was the only single preprocessing method that demonstrated a significant improvement compared to the baseline result.
The best result for this dataset was obtained when we performed spelling correction and conversion into lowercase letters. In general, for all the datasets processed, there was always at least one combination of basic preprocessing methods that could be recommended to improve the accuracy results when using a bag-of-words representation.
Journal Article
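The preprocessing steps named in the abstract above can be combined as a small, self-contained sketch. This is not the paper's code; the stopword list and option names are illustrative, and the bag-of-words here is plain unigram counting.

```python
# Illustrative sketch (not the paper's code): combine several of the basic
# preprocessing methods named above, then build a bag-of-words representation.
import re
import string
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "was"}  # tiny example list

def preprocess(text, lowercase=True, strip_punct=True,
               drop_stopwords=True, reduce_repeats=True):
    if lowercase:
        text = text.lower()                          # uppercase -> lowercase
    if reduce_repeats:
        text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # "soooo" -> "soo"
    if strip_punct:
        text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = text.split()
    if drop_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

def bag_of_words(docs, **opts):
    """Unigram counts per document after preprocessing."""
    return [Counter(preprocess(d, **opts)) for d in docs]

bows = bag_of_words(["The movie was soooo GOOD!!!", "A good movie is rare."])
print(bows[0]["soo"], bows[0]["good"], bows[1]["good"])  # prints: 1 1 1
```

Toggling the keyword options on and off reproduces, in miniature, the kind of combination search the paper carries out across its four corpora.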