Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
24,168 result(s) for "BERT"
Here comes the night : the dark soul of Bert Berns and the dirty business of rhythm & blues
Songwriter and record producer Bert Berns' meteoric career was fueled by his pending doom. His heart damaged by rheumatic fever as a youth, Berns was not expected to live to see 21. Although his name is little remembered today, Berns went from nobody to the top of the pops, producer of monumental r&b classics, songwriter of "Twist and Shout," "My Girl Sloopy," "Piece of My Heart," and others. His fury to succeed led Berns to use his Mafia associations to muscle Atlantic Records out of their partnership and intimidate new talents whom he had signed to his record label. Berns died at age 38 just when he was seeing his grandest plans frustrated and foiled.
DPAL-BERT: A Faster and Lighter Question Answering Model
2024
Recent advancements in natural language processing have given rise to numerous pre-trained language models for question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to higher training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a distillation model for PAL-BERT (DPAL-BERT) that employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. The research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results showed that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they reduce inference time to just 0.25% of the original, demonstrating the effectiveness of the proposed approach in balancing model performance and efficiency.
Journal Article
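The distillation objective this abstract describes can be illustrated with a short sketch. Below is a minimal PyTorch example of knowledge distillation with softened teacher logits; PAL-BERT and the DPAL-BERT students are not public, so the dummy logits, temperature, and weighting are illustrative assumptions, not the paper's configuration.

```python
# Minimal knowledge-distillation sketch in PyTorch. The dummy teacher/student
# logits stand in for PAL-BERT and its students; temperature and alpha are
# illustrative values, not the paper's settings.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend the soft-target KL loss with the ordinary hard-label loss."""
    # Soften both distributions; T^2 rescaling keeps gradient magnitudes stable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Dummy logits for a 3-class task, standing in for teacher/student outputs.
student = torch.randn(4, 3)
teacher = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
print(distillation_loss(student, teacher, labels))
```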
Database of dreams : the lost quest to catalog humanity
\"Just a few years before the dawn of the digital age, Harvard psychologist Bert Kaplan set out to build the largest database of sociological information ever assembled. It was the mid-1950s, and social scientists were entranced by the human insights promised by Rorschach tests and other innovative scientific protocols. Kaplan, along with anthropologist A. I. Hallowell and a team of researchers, sought out a varied range of non-European subjects-among remote and largely non-literate peoples around the globe. Recording their dreams, stories, and innermost thoughts in a vast database, Kaplan envisioned future researchers accessing the data through the cutting-edge Readex machine. Almost immediately, however, technological developments and the obsolescence of the theoretical framework rendered the project irrelevant, and eventually it was forgotten. Kaplan's story is a tale of the search for what it means to be human, or what it came to mean in an age of rapid change in technological and social conditions. His project--call it a database of consciousness--was intended as a repository of humankind's most elusive ways of being human, as an anthropological archive; through it a veritable sluice of social knowledge was expected to flow from seemingly unlikely encounters. This is a book about those encounters--between scientists and subjects, between knowledge and machines--as well as the data that flowed out of them and the ways these were preserved and not preserved.\"-- Book jacket.
BERT applications in natural language processing: a review
by Gardazi, Nadia Mushtaq; Alsahfi, Tariq; Malik, Muhammad Kamran
in Ability; Artificial Intelligence; Bidirectionality
2025
BERT (Bidirectional Encoder Representations from Transformers) has revolutionized Natural Language Processing (NLP) by significantly enhancing the capabilities of language models. This review examines BERT in depth, including its structure, its use in different NLP tasks, and the further development of its design through modifications. The study analyses the methodological aspects of the surveyed work, covering the planning process, the procedures implemented, and the criteria used to decide which data to include or exclude in the evaluation framework. In addition, it examines the influence of BERT on several NLP tasks, such as sentence boundary detection, tokenization, grammatical error detection and correction, dependency parsing, named entity recognition, part-of-speech tagging, question answering, machine translation, sentiment analysis, fake-review detection, and cross-lingual transfer learning. The review adds to the current literature by integrating ideas from multiple sources, explicitly emphasizing the problems and prospects of BERT-based models. The objective is to provide a comprehensive understanding of BERT and its implementations for both experienced researchers and newcomers to NLP. The study is expected to inspire further research, promote innovative adaptations of BERT, and deepen understanding of its capabilities across NLP applications, and its results are anticipated to influence the advancement of future language models and the ongoing discourse on enhancing technology for understanding natural language.
Journal Article
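As a concrete illustration of one task this review covers, here is a minimal sketch of applying a BERT-family model to sentiment classification with the Hugging Face transformers library; the checkpoint is a common public one chosen for the example, not one used by the review.

```python
# Applying a BERT-family model to a surveyed task (sentiment classification)
# via the Hugging Face pipeline API. The checkpoint name is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("BERT has revolutionized natural language processing."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```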
Taos Society of Artists
by Hassrick, Peter H., editor; Peters, Gerald P., editor; Speidel, Melissa W., 1956- editor
in Sharp, Joseph Henry, 1859-1953; Couse, E. Irving, 1866-1936; Phillips, Bert Geer, 1868-1956
2025
\"\"A lavishly illustrated two-volume study of the Taos Society of Artists. Essays on the TSA and its founding plus scholarly biographical and art historical essays on twelve TSA artists with exemplary works of the artists studied\"-Provided by publisher\"-- Provided by publisher.
Aspect-based sentiment-analysis using topic modelling and machine-learning
2024
This study addresses the critical need for an accurate aspect-based sentiment analysis (ABSA) model to understand sentiments effectively. Existing ABSA models often struggle to accurately extract aspects and determine sentiment polarity from textual data. We therefore propose a novel approach leveraging latent Dirichlet allocation (LDA) for aspect extraction and transformer-based bidirectional encoder representations from transformers (TF-BERT) for sentiment-polarity evaluation. Experiments were carried out on the SemEval 2014 laptop and restaurant datasets, and a multi-domain dataset was generated by combining SemEval 2014, Amazon, and hospital reviews. The results demonstrate the superiority of the LDA-TF-BERT model, which achieves 82.19% accuracy and a 79.52% Macro-F1 score on the laptop task, and 86.26% accuracy and an 81.27% Macro-F1 score on the restaurant task, showcasing the model's robustness and effectiveness in accurately analyzing textual data and extracting meaningful insights. The novelty of our work lies in combining LDA and TF-BERT, providing a comprehensive and accurate ABSA solution for various industries and contributing significantly to the advancement of sentiment-analysis techniques.
Journal Article
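The two-stage pipeline in this abstract, topic modelling for aspects followed by a BERT-based sentiment scorer, can be sketched as follows. This illustrates the general idea using scikit-learn's LDA and a public transformers checkpoint; the sample reviews, topic count, and models are assumptions, not the authors' LDA-TF-BERT implementation.

```python
# Sketch of the two-stage idea: LDA proposes aspect topics, then a BERT-based
# model scores sentiment. Reviews, topic count, and checkpoint are illustrative.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from transformers import pipeline

reviews = [
    "The battery life is excellent but the keyboard feels cheap.",
    "Great food, terrible service, and the waiter never came back.",
]

# Stage 1: LDA over bag-of-words counts to surface candidate aspect terms.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"aspect topic {k}:", [terms[i] for i in topic.argsort()[-3:]])

# Stage 2: a BERT-family sentiment model scores each review.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(sentiment(reviews))
```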
Research on Text Classification Based on BERT-BiGRU Model
2021
Text classification is a typical application of natural language processing, and the most commonly used approach at present is deep learning. Natural language processing nevertheless presents many difficulties, such as metaphorical expression, semantic diversity, and grammatical specificity. To address these problems, this paper proposes a BERT-BiGRU model structure. First, the BERT model is used instead of the traditional word2vec model to represent word vectors: each word representation is calculated from its context and can be adjusted according to the word's meaning as contextual information is fused in. Second, a BiGRU model is attached to the BERT model; the BiGRU can extract text features in both directions at the same time. Multiple sets of experiments were set up for comparison with the proposed model, and according to the final results, the BERT-BiGRU model achieved accuracy, recall, and F1 scores all above 0.9 in text classification, showing that it performs well on the Chinese text-classification task.
Journal Article
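A minimal PyTorch sketch of the architecture this abstract outlines, BERT feeding a bidirectional GRU and a linear classifier, is shown below. The checkpoint name, hidden size, and last-step pooling are common-practice assumptions rather than the paper's exact configuration.

```python
# Minimal BERT-BiGRU classifier: BERT supplies contextual token vectors, a
# bidirectional GRU reads them in both directions, a linear layer classifies.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertBiGRU(nn.Module):
    def __init__(self, num_classes, gru_hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.gru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * gru_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual word vectors from BERT instead of static word2vec.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        out, _ = self.gru(hidden)      # (batch, seq_len, 2 * gru_hidden)
        return self.fc(out[:, -1, :])  # classify from the final step

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
batch = tokenizer(["这部电影很好看"], return_tensors="pt", padding=True)
model = BertBiGRU(num_classes=2)
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 2])
```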
Survey of BERT (Bidirectional Encoder Representation Transformer) types
2021
Many algorithms are used in Natural Language Processing (NLP) to achieve good results, such as Machine Learning (ML), Deep Learning (DL), and many others. In NLP, the first challenge is to convert text to numbers for use by whatever algorithm a researcher chooses. How can text be converted to numbers? This is done using word-embedding algorithms such as skip-gram, bag-of-words, BERT, and others. Representing words as numerical vectors based on their contexts has become one of the most effective methods for analyzing text in machine learning: each word is represented by a vector that captures its meaning and how close to or distant it is from every other word. BERT (Bidirectional Encoder Representation Transformer) is one such embedding method. It is designed to pre-train deep bidirectional representations by conditioning on both left and right context in all layers, and it is a deep language model used for various tasks in natural language processing. In this paper we review the different versions and types of BERT.
Journal Article
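The contextual-embedding idea at the heart of this survey, one vector per word occurrence rather than per word type, can be demonstrated in a few lines. The sketch below uses the standard Hugging Face API; the example sentences are made up, and it assumes the probed word maps to a single WordPiece token.

```python
# Contextual embeddings in action: the same word gets different vectors in
# different contexts, unlike static word2vec or bag-of-words representations.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        states = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    idx = inputs.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    return states[idx]

v1 = word_vector("I deposited cash at the bank", "bank")
v2 = word_vector("We sat on the river bank", "bank")
print(torch.cosine_similarity(v1, v2, dim=0))  # well below 1.0: context matters
```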
Aspect based sentiment analysis using fine-tuned BERT model with deep context features
2024
Sentiment analysis is the task of analysing, processing, and drawing inferences and conclusions from subjective texts and their sentiment. By application, sentiment analysis is categorized into document-level, sentence-level, and aspect-level analysis. In the past, several studies have built solutions on the bidirectional encoder representations from transformers (BERT) model; however, the existing model does not understand the context of the aspect deeply, which leads to low metric scores. This work studies aspect-based sentiment analysis with deep context bidirectional encoder representations from transformers (DC-BERT); the main aim of the DC-BERT model is to improve context understanding for aspects and thereby improve the metrics. The DC-BERT model comprises a fine-tuned BERT model along with a deep-context feature layer, which enables the model to understand the context of targeted aspects deeply. A customized feature layer is introduced to extract two distinctive features, which are then integrated through an interaction layer. The DC-BERT model is evaluated on the laptop and restaurant review datasets from SemEval 2014 Task 4 using several metrics. In comparison with other models, DC-BERT achieves accuracies of 84.48% and 92.86% on the laptop and restaurant datasets respectively.
Journal Article
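DC-BERT's feature and interaction layers are not spelled out in the abstract, so the sketch below shows only the standard BERT sentence-aspect pair encoding that aspect-level models of this kind build on; the checkpoint, label count, and example text are illustrative assumptions.

```python
# Sentence-aspect pair encoding for aspect-level sentiment: the review and the
# target aspect are packed as [CLS] review [SEP] aspect [SEP]. DC-BERT's custom
# feature and interaction layers would sit on top of this encoder.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # negative / neutral / positive

inputs = tokenizer("The screen is sharp but the fans are loud.", "screen",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # untrained head: fine-tune on SemEval 2014 first
```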
Comprehensive study of pre-trained language models: detecting humor in news headlines
by Al-Ayyoub, Mahmoud; Hammad, Mahmoud; Abdullah, Malak
in Artificial Intelligence; Computational Intelligence; Control
2023
The ability to automatically understand and analyze human language has attracted researchers and practitioners in the Natural Language Processing (NLP) field. Detecting humor is an NLP task needed in many areas, including marketing, politics, and news, but it is challenging because of context, emotion, culture, and rhythm. To address this problem, we propose a robust model called BFHumor, a BERT-Flair-based humor detection model that detects humor in news headlines. It is an ensemble of different state-of-the-art pre-trained models utilizing various NLP techniques. We used public humor datasets from the SemEval-2020 workshop to evaluate the proposed model, which achieved outstanding performance: a Root Mean Squared Error (RMSE) of 0.51966 and an accuracy of 0.62291. In addition, we extensively investigated the underlying reasons for the BFHumor model's high accuracy in humor detection. To that end, we conducted two experiments on the BERT model, at the vocabulary level and at the linguistic-capturing level. Our investigation shows that BERT captures surface knowledge in its lower layers, syntactic knowledge in its middle layers, and semantic knowledge in its higher layers.
Journal Article
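The layer-wise probing in this record's second experiment rests on reading out BERT's per-layer hidden states, which the standard transformers API exposes. A minimal sketch follows, assuming bert-base-uncased and a made-up headline; the probe classifiers that would be trained on each layer's [CLS] vector are omitted.

```python
# Reading out BERT's per-layer hidden states, the raw material for layer-wise
# probing of surface, syntactic, and semantic knowledge.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The news headline was unexpectedly funny.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states  # tuple: embeddings + one per layer

for layer, states in enumerate(hidden):
    cls = states[0, 0]  # [CLS] vector for this layer
    print(f"layer {layer:2d}: CLS norm {cls.norm().item():.2f}")
```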