Catalogue Search | MBRL
Explore the vast range of titles available.
124,049 result(s) for "Language processing"
Learning to Prompt for Vision-Language Models
2022
Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Unlike traditional representation learning, which is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming: one must spend a significant amount of time on word tuning, since a slight change in wording can have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire set of pre-trained parameters is kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts by a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
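The core mechanism the abstract describes (learnable context vectors prepended to a frozen class embedding, scored against an image feature by cosine similarity) can be sketched minimally. This is an illustrative NumPy mock-up under stated assumptions, not the authors' CLIP-based implementation: the toy dimensions, the `text_feature` mean-pooling stand-in for the frozen text encoder, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ctx, n_classes = 8, 4, 3  # toy sizes, far smaller than CLIP's

# Frozen pieces: per-class name embeddings and one image feature.
class_tokens = rng.normal(size=(n_classes, dim))  # kept fixed
image_feat = rng.normal(size=dim)                 # kept fixed

# Learnable context vectors shared by all classes ("unified context").
# In CoOp these are the only parameters updated by gradient descent.
ctx = rng.normal(size=(n_ctx, dim))

def text_feature(ctx, cls_token):
    # Stand-in for the frozen text encoder: mean-pool [ctx; class] tokens.
    tokens = np.vstack([ctx, cls_token[None, :]])
    return tokens.mean(axis=0)

def logits(ctx):
    feats = np.stack([text_feature(ctx, c) for c in class_tokens])
    # Cosine similarity between the image and each class's text feature.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    img = image_feat / np.linalg.norm(image_feat)
    return feats @ img

print(logits(ctx).shape)  # one similarity score per class
```

Optimizing `ctx` to maximize the logit of the correct class, while everything else stays frozen, is what replaces hand-crafted prompt wording in the approach the abstract describes.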
Journal Article
Corpus Stylistics
2004
This book combines stylistic analysis with corpus linguistics to present an innovative account of the phenomenon of speech, writing and thought presentation - commonly referred to as 'speech reporting' or 'discourse presentation'. This new account is based on an extensive analysis of a quarter-of-a-million word electronic collection of written narrative texts, including both fiction and non-fiction. The book includes detailed discussions of:
The construction of this corpus of late twentieth-century written British narratives taken from fiction, newspaper news reports and (auto)biographies
The development of a manual annotation system for speech, writing and thought presentation and its application to the corpus.
The findings of a quantitative and qualitative analysis of the forms and functions of speech, writing and thought presentation in the three genres represented in the corpus.
The findings of the analysis of a range of specific phenomena, including hypothetical speech, writing and thought presentation, embedded speech, writing and thought presentation and ambiguities in speech, writing and thought presentation.
Two case studies concentrating on specific texts from the corpus.
Corpus Stylistics shows how stylistics, and text/discourse analysis more generally, can benefit from the use of a corpus methodology and the authors' innovative approach results in a more reliable and comprehensive categorisation of the forms of speech, writing and thought presentation than have been suggested so far. This book is essential reading for linguists interested in the areas of stylistics and corpus linguistics.
Elena Semino is Senior Lecturer in the Department of Linguistics and Modern English Language at Lancaster University. She is the author of Language and World Creation in Poems and Other Texts (1997), and co-editor (with Jonathan Culpeper) of Cognitive Stylistics: Language and Cognition in Text Analysis (2002). Mick Short is Professor of English Language and Literature at Lancaster University. He has written Exploring the Language of Poems, Plays and Prose (1997) and (with Geoffrey Leech) Style in Fiction (1997). He founded the Poetics and Linguistics Association and was the founding editor of its international journal, Language and Literature.
1. A Corpus-Based Approach to the Study of Discourse Presentation in Written Narratives
2. Methodology: The Construction and Annotation of the Corpus
3. A Revised Model of Speech, Writing and Thought Presentation
4. Speech Presentation in the Corpus: A Quantitative and Qualitative Analysis
5. Writing Presentation in the Corpus: A Quantitative and Qualitative Analysis
6. Thought Presentation in the Corpus: A Quantitative and Qualitative Analysis
7. Specific Phenomena in Speech, Writing and Thought Presentation
8. Case Studies of Specific Texts from the Corpus
9. Conclusion
The Chinese computer : a global history of the information age
"Exploration of the largely unknown history of Chinese-language computing systems, accessible to an audience unfamiliar with the Chinese language or the technical workings of personal computers" -- Provided by publisher.
A Comprehensive Study of ChatGPT: Advancements, Limitations, and Ethical Considerations in Natural Language Processing and Cybersecurity
by Mejri, Sami; Alawida, Moatsum; Chikhaoui, Belkacem
in Accuracy; Artificial intelligence; Automatic summarization
2023
This paper presents an in-depth study of ChatGPT, a state-of-the-art language model that is revolutionizing generative text. We provide a comprehensive analysis of its architecture, training data, and evaluation metrics and explore its advancements and enhancements over time. Additionally, we examine the capabilities and limitations of ChatGPT in natural language processing (NLP) tasks, including language translation, text summarization, and dialogue generation. Furthermore, we compare ChatGPT to other language generation models and discuss its applicability in various tasks. Our study also addresses the ethical and privacy considerations associated with ChatGPT and provides insights into mitigation strategies. Moreover, we investigate the role of ChatGPT in cyberattacks, highlighting potential security risks. Lastly, we showcase the diverse applications of ChatGPT in different industries and evaluate its performance across languages and domains. This paper offers a comprehensive exploration of ChatGPT’s impact on the NLP field.
Journal Article
Large language models (LLMs): survey, technical frameworks, and future challenges
Artificial intelligence (AI) has significantly impacted various fields. Large language models (LLMs) like GPT-4, BARD, PaLM, Megatron-Turing NLG, Jurassic-1 Jumbo etc., have contributed to our understanding and application of AI in these domains, along with natural language processing (NLP) techniques. This work provides a comprehensive overview of LLMs in the context of language modeling, word embeddings, and deep learning. It examines the application of LLMs in diverse fields including text generation, vision-language models, personalized learning, biomedicine, and code generation. The paper offers a detailed introduction and background on LLMs, facilitating a clear understanding of their fundamental ideas and concepts. Key language modeling architectures are also discussed, alongside a survey of recent works employing LLM methods for various downstream tasks across different domains. Additionally, it assesses the limitations of current approaches and highlights the need for new methodologies and potential directions for significant advancements in this field.
Journal Article
Introduction to natural language processing
"The book provides a technical perspective on the most contemporary data-driven approaches, focusing on techniques from supervised and unsupervised machine learning. It also includes background in the salient linguistic issues, as well as computational representations and algorithms. The first section of the book explores what can be done with individual words. The second section concerns structured representations such as sequences, trees, and graphs. The third section highlights different approaches to the representation and analysis of linguistic meaning. The final section describes three of the most transformative applications of natural language processing: information extraction, machine translation, and text generation. The book describes the technical foundations of the field, including the most relevant machine learning techniques, algorithms, and linguistic representations. From these foundations, it extends to contemporary research in areas such as deep learning. Each chapter contains exercises that include paper-and-pencil analysis of the computational algorithms and linguistic issues, as well as software implementations" -- Provided by publisher.
Prompt Engineering as an Important Emerging Skill for Medical Professionals: Tutorial
2023
Prompt engineering is a relatively new field of research that refers to the practice of designing, refining, and implementing prompts or instructions that guide the output of large language models (LLMs) to help with various tasks. With the emergence of LLMs, the most popular being ChatGPT, which attracted over 100 million users in only two months, artificial intelligence (AI), especially generative AI, has become accessible to the masses. This is an unprecedented paradigm shift, not only because the use of AI is becoming more widespread but also because of the possible implications of LLMs in health care. As more patients and medical professionals use AI-based tools, LLMs being the most popular representatives of that group, addressing the challenge of improving this skill seems inevitable. This paper summarizes the current state of research on prompt engineering and, at the same time, aims to provide practical recommendations for the wide range of health care professionals seeking to improve their interactions with LLMs.
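As a small illustration of the practice this tutorial describes, a prompt can be structured to separate role, task, constraints, and input. The template below is a hypothetical sketch, not a recommendation from the paper; the field names and the example clinical note are invented for illustration.

```python
# Hypothetical structured-prompt template illustrating common
# prompt-engineering elements: role, task, constraints, and input.
PROMPT_TEMPLATE = (
    "You are a {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Input: {text}"
)

def build_prompt(role, task, constraints, text):
    # Fill the template; each slot keeps one concern of the prompt explicit.
    return PROMPT_TEMPLATE.format(
        role=role, task=task, constraints=constraints, text=text
    )

prompt = build_prompt(
    role="clinical documentation assistant",
    task="Summarize the note in plain language for the patient.",
    constraints="Do not add information that is not in the note.",
    text="BP 128/82, HbA1c 6.9%; continue metformin 500 mg BID.",
)
print(prompt.splitlines()[0])  # prints "You are a clinical documentation assistant."
```

Keeping constraints explicit in a fixed slot, rather than rephrasing the whole prompt each time, is one simple way the "designing, refining, and implementing" loop the abstract mentions can be made repeatable.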
Journal Article