Catalogue Search | MBRL
415 result(s) for "Hugging"
Hug Machine
A young boy known as the Hug Machine is available to hug anyone, any time, whether they're square or long, spiky or soft.
A tutorial on open-source large language models for behavioral science
by Binz, Marcel; Wulff, Dirk U.; Mata, Rui
in Behavioral Research - methods; Behavioral Science and Psychology; Behavioral Sciences - methods
2024
Large language models (LLMs) have the potential to revolutionize behavioral science by accelerating and improving the research cycle, from conceptualization to data analysis. Unlike closed-source solutions, open-source frameworks for LLMs can enable transparency, reproducibility, and adherence to data protection standards, which gives them a crucial advantage for use in behavioral science. To help researchers harness the promise of LLMs, this tutorial offers a primer on the open-source Hugging Face ecosystem and demonstrates several applications that advance conceptual and empirical work in behavioral science, including feature extraction, fine-tuning of models for prediction, and generation of behavioral responses. Executable code is made available at github.com/Zak-Hussain/LLM4BeSci.git. Finally, the tutorial discusses challenges faced by research with (open-source) LLMs related to interpretability and safety and offers a perspective on future research at the intersection of language modeling and behavioral science.
Journal Article
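The feature extraction the tutorial mentions typically means turning each text into one fixed-size vector by pooling a transformer's per-token hidden states. A minimal NumPy sketch of mask-aware mean pooling (the array shapes and values here are toy stand-ins for a real model's outputs, not code from the tutorial):

```python
import numpy as np

def mean_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Mask-aware mean pooling: average each text's token vectors,
    ignoring padding positions, to get one feature vector per text."""
    mask = attention_mask[:, :, None].astype(float)  # (batch, tokens, 1)
    summed = (hidden_states * mask).sum(axis=1)      # sum over real tokens only
    counts = mask.sum(axis=1).clip(min=1e-9)         # number of real tokens
    return summed / counts                           # (batch, hidden_dim)

# Toy stand-in for a transformer's last hidden states: 2 texts, 4 tokens, 3 dims.
states = np.arange(24, dtype=float).reshape(2, 4, 3)
mask = np.array([[1, 1, 0, 0],   # first text: 2 real tokens, 2 padding
                 [1, 1, 1, 1]])  # second text: 4 real tokens
features = mean_pool(states, mask)
print(features.shape)  # (2, 3)
```

Masking matters because padded positions would otherwise drag the average toward zero and make feature vectors depend on batch padding length.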
Mom hugs
by Joosten, Michael, author
in Hugging Juvenile fiction.; Animals Juvenile fiction.; Hugging Fiction.
2017
Features twelve mama animals hugging their babies.
Does Hugging Provide Stress-Buffering Social Support? A Study of Susceptibility to Upper Respiratory Infection and Illness
2015
Perceived social support has been hypothesized to protect against the pathogenic effects of stress. How such protection might be conferred, however, is not well understood. Using a sample of 404 healthy adults, we examined the roles of perceived social support and received hugs in buffering against interpersonal stress-induced susceptibility to infectious disease. Perceived support was assessed by questionnaire, and daily interpersonal conflict and receipt of hugs were assessed by telephone interviews on 14 consecutive evenings. Subsequently, participants were exposed to a virus that causes a common cold and were monitored in quarantine to assess infection and illness signs. Perceived support protected against the rise in infection risk associated with increasing frequency of conflict. A similar stress-buffering effect emerged for hugging, which explained 32% of the attenuating effect of support. Among infected participants, greater perceived support and more-frequent hugs each predicted less-severe illness signs. These data suggest that hugging may effectively convey social support.
Journal Article
Bear hugs
by Walden, Libby, author; Riley, Vicky, illustrator
in Nicknames Juvenile fiction.; Hugging Juvenile fiction.; Bears Juvenile fiction.
2017
Celebrate the little special ones in your life with this sweet rhyming board book, filled with cute nicknames and bear hugs.
Elevating Offensive Language Detection: CNN-GRU and BERT for Enhanced Hate Speech Identification
by Ruprah, Taranpreet Singh; Chowdhary, Harish; Madhavi, M.
in Artificial neural networks; Contextual information; Cultural factors
2024
Upholding a secure and accepting digital environment is severely hindered by hate speech and inappropriate content on the internet. A novel approach that combines a convolutional neural network (CNN) with a gated recurrent unit (GRU) and BERT is proposed for enhancing the identification of offensive content, particularly hate speech. The method leverages the strengths of both the CNN-GRU and BERT models to capture the complex linguistic patterns and contextual information present in hate speech. The proposed model first uses the CNN-GRU to extract local and sequential features from textual data, allowing for effective representation learning of offensive language. Subsequently, BERT, an advanced transformer-based model, is employed to capture contextualized representations of the text, thereby enhancing the understanding of detailed linguistic nuances and cultural contexts associated with hate speech. The BERT model is fine-tuned using the Hugging Face Transformers library. Tests are executed on publicly available hate speech identification datasets to show how well the method identifies inappropriate content. By assisting the ongoing efforts to prevent the dissemination of hate speech and undesirable language online, the proposed framework promotes a more diverse and secure digital environment. The method, implemented in Python, achieves a competitive performance of 98% compared to existing approaches (LSTM and RNN, CNN, LSTM and GBAT), showcasing its potential for real-world applications in combating online hate speech. Furthermore, it provides insights into the interpretability of the model's predictions, highlighting key linguistic and contextual factors influencing offensive language detection. The study contributes to advancing hate speech detection by integrating CNN-GRU and BERT models, giving a robust solution for enhancing offensive-content identification on online platforms.
Journal Article
Do you want a hug?
by Lewis, Kevin (Children's author), author; Mosqueda, Olga T., illustrator
in Snowmen Juvenile fiction.; Snow Juvenile fiction.; Hugging Juvenile fiction.
2016
Olaf is a snowman who wants to play games with others and give lots of hugs.
Ascle—A Python Natural Language Processing Toolkit for Medical Text Generation: Development and Evaluation Study
2024
Medical texts present significant domain-specific challenges, and manually curating these texts is a time-consuming and labor-intensive process. To address this, natural language processing (NLP) algorithms have been developed to automate text processing. In the biomedical field, various toolkits for text processing exist, which have greatly improved the efficiency of handling unstructured text. However, these existing toolkits tend to emphasize different perspectives, and none of them offer generation capabilities, leaving a significant gap in the current offerings.
This study aims to describe the development and preliminary evaluation of Ascle, a toolkit tailored for biomedical researchers and clinical staff that offers an easy-to-use, all-in-one solution requiring minimal programming expertise. For the first time, Ascle provides 4 advanced and challenging generative functions: question-answering, text summarization, text simplification, and machine translation. In addition, Ascle integrates 12 essential NLP functions, along with query and search capabilities for clinical databases.
We fine-tuned 32 domain-specific language models and evaluated them thoroughly on 27 established benchmarks. In addition, for the question-answering task, we developed a retrieval-augmented generation (RAG) framework for large language models that incorporated a medical knowledge graph with ranking techniques to enhance the reliability of generated answers. Additionally, we conducted a physician validation to assess the quality of generated content beyond automated metrics.
The fine-tuned models and RAG framework consistently enhanced text generation tasks. For example, the fine-tuned models improved the machine translation task by 20.27 in terms of BLEU score. In the question-answering task, the RAG framework raised the ROUGE-L score by 18% over the vanilla models. Physician validation of generated answers showed high scores for readability (4.95/5) and relevancy (4.43/5), with a lower score for accuracy (3.90/5) and completeness (3.31/5).
This study introduces the development and evaluation of Ascle, a user-friendly NLP toolkit designed for medical text generation. All code is publicly available through the Ascle GitHub repository. All fine-tuned language models can be accessed through Hugging Face.
Journal Article
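The retrieval-augmented generation pattern the Ascle study describes works by fetching relevant knowledge first, then conditioning the generator on it. A toy sketch of the retrieval-and-prompt half (the corpus, scoring, and prompt format are illustrative stand-ins; Ascle itself uses a medical knowledge graph with ranking techniques):

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words term counts, lowercased and stripped of punctuation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, corpus, k=2):
    """Rank knowledge snippets by similarity to the question; keep top-k."""
    q = bow(question)
    ranked = sorted(corpus, key=lambda doc: cosine(q, bow(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    """Prepend retrieved context so the generator answers from evidence."""
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Hypertension is often managed with ACE inhibitors.",
    "Type 2 diabetes is associated with insulin resistance.",
]
print(build_prompt("What treats type 2 diabetes?", corpus))
```

Grounding the prompt in retrieved snippets is what lets the study report more reliable answers (the 18% ROUGE-L gain over vanilla models), since the generator no longer has to recall facts from its parameters alone.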