331 results for "Arabic language Data processing."
On the fractal patterns of language structures
Natural Language Processing (NLP) uses Artificial Intelligence algorithms to extract meaningful information from unstructured texts, i.e., content that lacks metadata and cannot easily be indexed or mapped onto standard database fields. It has several applications, from sentiment analysis and text summarization to automatic language translation. In this work, we use NLP to identify shared structural linguistic patterns among several different languages. We apply the word2vec algorithm, which creates a vector representation of words in a multidimensional space that preserves the meaning relationships between words. From a large corpus, we built this vector representation in a 100-dimensional space for English, Portuguese, German, Spanish, Russian, French, Chinese, Japanese, Korean, Italian, Arabic, Hebrew, Basque, Dutch, Swedish, Finnish, and Estonian. We then calculated the fractal dimensions of the structure that represents each language. The structures are multifractals with two distinct dimensions, which we use, together with the token-to-dictionary size ratio of each language, to represent the languages in a three-dimensional space. Finally, analyzing the distances among languages in this space, we conclude that languages close in this space also tend to be close in the phylogenetic tree that depicts their lines of evolutionary descent from a common ancestor.
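The fractal-dimension step described above can be illustrated with a standard box-counting estimate over a point cloud. This is a generic sketch, not the authors' exact multifractal procedure; the unit-square example is a synthetic sanity check (a filled square should come out near dimension 2):

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the fractal (box-counting) dimension of a point cloud.

    Counts the occupied grid boxes N(eps) at each box size eps and fits
    the slope of log N(eps) against log(1/eps).
    """
    points = np.asarray(points, dtype=float)
    # Normalize the cloud into the unit hypercube so box sizes are comparable.
    mins = points.min(axis=0)
    span = points.max(axis=0) - mins
    span[span == 0] = 1.0
    unit = (points - mins) / span

    counts = []
    for eps in epsilons:
        # Assign each point to a grid cell of side eps; count distinct cells.
        cells = np.floor(unit / eps).astype(int)
        counts.append(len({tuple(c) for c in cells}))

    # The slope of log N(eps) vs log(1/eps) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)
    return slope

# A densely sampled unit square should have dimension close to 2.
rng = np.random.default_rng(0)
square = rng.random((20000, 2))
dim = box_counting_dimension(square, epsilons=[0.2, 0.1, 0.05, 0.025])
```

For a word-embedding structure, `points` would be the 100-dimensional word vectors; in high dimensions, pairwise-distance (correlation-dimension) estimators are usually preferred over grids.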
Open source Arabic research paper dataset for natural language processing
Recent advancements in applications such as natural language processing (NLP), applied linguistics, indexing, data mining, information retrieval, and machine translation have emphasized the need for robust datasets and corpora. While many Arabic corpora exist, most are derived from social media platforms such as X or from news sources, leaving a significant gap in datasets tailored to academic research. To address this gap, the Arabic Research Papers Dataset (ARPD) was developed as a specialized resource for Arabic academic research papers. This paper explains the methodology used to construct the dataset, which consists of seven classes and is publicly available in several formats for the benefit of Arabic research. Experiments conducted on the ARPD demonstrate its performance in classification and clustering tasks. The results show that most classical clustering algorithms achieve low performance compared to bio-inspired algorithms such as Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO), as measured by the Davies–Bouldin index. For classification, the Support Vector Machine (SVM) algorithm outperformed the others, achieving the highest accuracy, with the compared classifiers ranging from 89% to 99%. These findings highlight the ARPD's potential to enhance Arabic academic research and support advanced NLP applications.
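The Davies–Bouldin index used above to compare clusterings can be sketched directly (a minimal NumPy implementation of the standard definition; the blob data is a made-up illustration, not the ARPD):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: lower values indicate better-separated clusters.

    For each cluster i, take the worst ratio of combined within-cluster
    scatter (S_i + S_j) to between-centroid distance, then average.
    """
    X = np.asarray(X, dtype=float)
    ids = np.unique(labels)
    centroids = np.array([X[labels == k].mean(axis=0) for k in ids])
    # S_k: mean distance of cluster members to their centroid.
    scatter = np.array([
        np.linalg.norm(X[labels == k] - c, axis=1).mean()
        for k, c in zip(ids, centroids)
    ])
    worst = []
    for i in range(len(ids)):
        ratios = [
            (scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(len(ids)) if j != i
        ]
        worst.append(max(ratios))
    return float(np.mean(worst))

# Two tight, well-separated blobs score lower (better) than the same
# points under a random label assignment.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(5.0, 0.3, size=(50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, size=100)
score_good = davies_bouldin(X, good)
score_bad = davies_bouldin(X, bad)
```

scikit-learn's `davies_bouldin_score` computes the same quantity for real experiments.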
BERT-Based Joint Model for Aspect Term Extraction and Aspect Polarity Detection in Arabic Text
Aspect-based sentiment analysis (ABSA) is a method used to identify the aspects discussed in a given text and determine the sentiment expressed towards each aspect, providing a more fine-grained understanding of the opinions the text contains. The majority of Arabic ABSA techniques in use today rely heavily on repeated pre-processing and feature-engineering operations, as well as on external resources (e.g., lexicons). There is thus a significant research gap in NLP regarding the use of transfer learning (TL) techniques and language models for aspect term extraction (ATE) and aspect polarity detection (APD) in Arabic text. While TL has proven effective for a variety of NLP tasks in other languages, its use in the context of Arabic has been relatively under-explored. This paper addresses this gap by presenting a TL-based approach for ATE and APD in Arabic, leveraging the knowledge and capabilities of pre-trained language models. The Arabic base version of the BERT model serves as the foundation for the proposed models, and different BERT implementations are compared. The experiments used a reference ABSA dataset (the HAAD dataset). The experimental results demonstrate that our models surpass the baseline model and previously proposed approaches.
Deep learning CNN–LSTM framework for Arabic sentiment analysis using textual information shared in social networks
Recently, the world has witnessed an exponential growth of social networks, which have opened a venue for online users to express and share their opinions on different aspects of life. Sentiment analysis has become a hot research topic in natural language processing due to its significant role in analyzing public opinion and deriving effective opinion-based decisions. Arabic is one of the most widely used languages across social networks. However, its morphological complexity and variety of dialects make it a challenging language for sentiment analysis. Therefore, inspired by the success of deep learning algorithms, in this paper we propose a novel deep learning model for Arabic sentiment analysis based on a one-layer CNN architecture for local feature extraction and two LSTM layers for maintaining long-term dependencies. The feature maps learned by the CNN and LSTM are passed to an SVM classifier to generate the final classification. The model is supported by the FastText word embedding model. Extensive experiments carried out on a multi-domain corpus demonstrate the outstanding classification performance of this model, with an accuracy of 90.75%. Furthermore, the proposed model is validated using different embedding models and classifiers. The results show that the FastText skip-gram model and the SVM classifier are the most valuable alternatives for Arabic sentiment analysis. The proposed model outperforms several well-established state-of-the-art approaches on relevant corpora, with up to +20.71% accuracy improvement.
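The CNN feature-extraction stage described above can be sketched in NumPy: filters slide over windows of consecutive word embeddings, followed by ReLU and max-over-time pooling. This is an illustrative toy (random embeddings and filters), not the paper's trained model, and it omits the LSTM layers that follow in the actual architecture:

```python
import numpy as np

def conv1d_features(embeddings, filters, window=3):
    """One-layer 1D convolution over a sequence of word embeddings.

    Slides each filter over every window of `window` consecutive word
    vectors, applies ReLU, then max-over-time pooling, yielding one
    fixed-length feature vector per sentence regardless of its length.
    """
    seq_len, dim = embeddings.shape
    n_filters = filters.shape[0]               # filters: (n_filters, window, dim)
    maps = np.empty((seq_len - window + 1, n_filters))
    for t in range(seq_len - window + 1):
        patch = embeddings[t:t + window]       # (window, dim) local n-gram window
        maps[t] = np.maximum((filters * patch).sum(axis=(1, 2)), 0.0)  # ReLU
    return maps.max(axis=0)                    # max-over-time pooling

# Toy sentence: 10 words with 8-dimensional embeddings, 4 filters.
rng = np.random.default_rng(2)
sentence = rng.normal(size=(10, 8))
filters = rng.normal(size=(4, 3, 8))
features = conv1d_features(sentence, filters)
```

In the proposed pipeline, vectors like `features` (after the LSTM layers) would be what the SVM classifier consumes.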
Deciphering Arabic question: a dedicated survey on Arabic question analysis methods, challenges, limitations and future pathways
This survey reviews research on Arabic question analysis, including comparative studies of question-analysis approaches and an evaluation of the NLP techniques used in question interpretation and categorization. Key findings include the high precision and accuracy of deep learning models such as M-BiGRU-CNN and M-TF-IDF in handling the complexities of the language. Mature machine learning algorithms such as SVM and logistic regression remain powerful, especially for classification, and thus continue to be relevant. The study further underlines the applicability of rule-based and hybrid methodologies in certain linguistic situations where custom-designed solutions are required. On this basis, we recommend directing future work towards the integration of such hybrid systems and towards more general evaluation methodologies that keep pace with the constant evolution of NLP technologies. The survey finds that the principal challenges and barriers in the domain are complex syntactic and dialectal variation, the unavailability of software tools, the lack of standardized Arabic datasets and benchmarks, the handling of translated data, and the integration of Large Language Models (LLMs). The paper also discusses the shortage of online systems for comparing how such structures are identified and processed. Overall, this comprehensive review highlights both the diverse capabilities of NLP techniques for refining question analysis and the promise of further enhancements in this progressing domain.
Arab2Vec: An Arabic word embedding model for use in Twitter NLP applications
The analysis of Arabic Twitter data sets is a highly active research topic, particularly since the outbreak of COVID-19 and subsequent attempts to understand public sentiment related to the pandemic. This activity is partly driven by the high number of Arabic Twitter users, around 164 million. Word embedding models are a vital tool for analysing Twitter data sets, as they are one of the essential methods of transforming words into numbers that can be processed by machine learning (ML) algorithms. In this work, we introduce a new model, Arab2Vec, for use in Twitter-based natural language processing (NLP) applications. Arab2Vec was constructed from a vast data set of approximately 186,000,000 tweets posted between 2008 and 2021 from all Arabic Twitter sources, making it the most up-to-date word embedding model available to researchers for Twitter-based applications. The model is compared with existing models from the literature. The reported results demonstrate superior performance in the number of recognised words and in F1 score on classification tasks with known data sets, as well as the ability to work with emojis. We also incorporate skip-grams with negative sampling, an approach that previous Arabic models have not used. Nine versions of Arab2Vec are produced; these models differ in available features, the number of words trained on, speed, etc. This paper provides Arab2Vec as an open-source project for use in research. It describes the data collection methods, the data pre-processing and cleaning step, the effort to build the nine models, and the experiments conducted to validate them qualitatively and quantitatively.
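The skip-gram-with-negative-sampling objective mentioned above can be sketched as a single gradient step: the center word's vector is pulled toward the true context word and pushed away from sampled negatives. This is a minimal didactic version of the word2vec-style update, not Arab2Vec's training code (all indices and sizes are arbitrary):

```python
import numpy as np

def sgns_step(W_in, W_out, center, context, negatives, lr=0.1):
    """One skip-gram-with-negative-sampling update.

    Updates only the rows involved: the center word's input vector and
    the output vectors of the true context word and the negative samples.
    """
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    v = W_in[center]
    ids = np.array([context] + list(negatives))
    targets = np.zeros(len(ids))
    targets[0] = 1.0                       # 1 for the true pair, 0 for negatives
    scores = sigmoid(W_out[ids] @ v)
    err = scores - targets                 # gradient of the logistic loss
    W_in[center] -= lr * err @ W_out[ids]
    W_out[ids] -= lr * np.outer(err, v)
    # Report the positive-pair probability to track training progress.
    return sigmoid(W_out[context] @ W_in[center])

rng = np.random.default_rng(3)
W_in = rng.normal(scale=0.1, size=(50, 16))    # input (word) vectors
W_out = rng.normal(scale=0.1, size=(50, 16))   # output (context) vectors
before = 1.0 / (1.0 + np.exp(-(W_out[7] @ W_in[3])))
for _ in range(20):
    prob = sgns_step(W_in, W_out, center=3, context=7, negatives=[11, 25, 40])
```

Repeating the step raises the model's probability for the observed (center, context) pair; libraries such as gensim expose the same objective via `Word2Vec(sg=1, negative=...)`.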
Arabic question answering system: a survey
Question answering is a subfield of information retrieval: the task of answering a question posed in a natural language. A question answering system (QAS) may be considered a good alternative to search engines, which return a set of related documents rather than a direct answer. A QAS is composed of three main modules: question analysis, passage retrieval, and answer extraction. Over the years, numerous QASs have been presented for different languages. However, the development of Arabic QASs has been slowed by linguistic challenges and the lack of resources and tools available to researchers. In this survey, we start with the challenges posed by the language and how they make the development of new Arabic QASs more difficult. Next, we provide a detailed review of several Arabic QASs, followed by an in-depth analysis of the techniques and approaches used in the three modules of a QAS. We present an overview of important recent tools developed to help researchers in this field, cover the available Arabic and multilingual datasets, and look at the different measures used to assess QASs. Finally, the survey delves into the future direction of Arabic QASs based on the current state-of-the-art techniques developed for question answering in other languages.
Arabic aspect sentiment polarity classification using BERT
Aspect-based sentiment analysis (ABSA) is a textual analysis methodology that determines the polarity of opinions on certain aspects of specific targets. The majority of ABSA research is in English, with only a small amount of work available in Arabic. Most previous Arabic research has relied on deep learning models built primarily on context-independent word embeddings (e.g., word2vec), where each word has a fixed representation regardless of its context. This article explores the modeling capabilities of contextual embeddings from pre-trained language models, such as BERT, and the use of sentence-pair input for the Arabic aspect sentiment polarity classification task. In particular, we develop a simple but effective BERT-based neural baseline to handle this task. Our BERT architecture with a simple linear classification layer surpassed the state-of-the-art works, according to the experimental results on three different Arabic datasets, achieving an accuracy of 89.51% on the Arabic hotel reviews dataset, 73.23% on the human-annotated book reviews dataset, and 85.73% on the Arabic news dataset.
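The sentence-pair input idea can be sketched as follows: each review is paired with one aspect per example, so the model conditions its polarity prediction on the aspect in question. This is a generic illustration of the "[CLS] sentence [SEP] aspect [SEP]" pattern (the review text and aspect names are invented; a real pipeline would use a BERT tokenizer rather than raw strings):

```python
def aspect_sentence_pairs(review, aspects):
    """Build BERT-style sentence-pair inputs for aspect polarity classification.

    One input per (review, aspect) pair; the classifier then predicts a
    polarity label for each pair rather than one label per review.
    """
    return [f"[CLS] {review} [SEP] {aspect} [SEP]" for aspect in aspects]

# Hypothetical hotel review with two aspects.
review = "The room was spotless but the staff were unhelpful."
pairs = aspect_sentence_pairs(review, ["room cleanliness", "staff attitude"])
```

With a tokenizer from a library such as Hugging Face Transformers, the same pairing is normally done by passing the review and aspect as the two text arguments, which also sets the segment (token type) IDs.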
Enhancing Arabic Dialect Detection on Social Media: A Hybrid Model with an Attention Mechanism
Recently, the widespread use of social media and easy access to the Internet have brought about a significant transformation in the type of textual data available on the Web. This change is particularly evident in Arabic language usage, as the growing number of users from diverse domains has led to a considerable influx of Arabic text in various dialects, each characterized by differences in morphology, syntax, vocabulary, and pronunciation. Consequently, researchers in language recognition and natural language processing have become increasingly interested in identifying Arabic dialects. Numerous methods have been proposed to recognize this informal data, owing to its crucial implications for applications such as sentiment analysis, topic modeling, text summarization, and machine translation. However, Arabic dialect identification remains a significant challenge due to the vast diversity of the Arabic language across its dialects. This study introduces a novel hybrid machine and deep learning model incorporating an attention mechanism for detecting and classifying Arabic dialects. Several experiments were conducted on a novel dataset of user-generated Twitter comments in four Arabic dialects, namely Egyptian, Gulf, Jordanian, and Yemeni, to evaluate the effectiveness of the proposed model. The dataset comprises 34,905 rows extracted from Twitter, with an unbalanced class distribution; annotation was performed by native speakers proficient in each dialect. The results demonstrate that the proposed model outperforms long short-term memory, bidirectional long short-term memory, and logistic regression models in dialect classification across different word representations: term frequency-inverse document frequency (TF-IDF), Word2Vec, and global vectors for word representation (GloVe).
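Of the word representations compared above, TF-IDF is the simplest to sketch: term frequency within a document, down-weighted by how many documents contain the term. A minimal pure-Python version (the toy corpus is invented for illustration; dialect-specific words score higher than words shared across all documents):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute sparse TF-IDF vectors for a list of tokenized documents.

    TF is the within-document relative term frequency; IDF = log(N / df)
    down-weights terms that appear in many documents.
    """
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * idf[t] for t in tf})
    return vectors

# "common" occurs in every document, so its IDF (and TF-IDF) is zero,
# while each city name is distinctive to one document.
docs = [["common", "cairo"], ["common", "amman"], ["common", "sanaa"]]
vecs = tfidf(docs)
```

In practice a vectorizer such as scikit-learn's `TfidfVectorizer` (which uses a smoothed IDF variant) would replace this, but the weighting idea is the same.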