415 results for "Hugging"
Hug Machine
A young boy known as the Hug Machine is available to hug anyone, any time, whether they're square or long, spiky or soft.
A tutorial on open-source large language models for behavioral science
Large language models (LLMs) have the potential to revolutionize behavioral science by accelerating and improving the research cycle, from conceptualization to data analysis. Unlike closed-source solutions, open-source frameworks for LLMs can enable transparency, reproducibility, and adherence to data protection standards, which gives them a crucial advantage for use in behavioral science. To help researchers harness the promise of LLMs, this tutorial offers a primer on the open-source Hugging Face ecosystem and demonstrates several applications that advance conceptual and empirical work in behavioral science, including feature extraction, fine-tuning of models for prediction, and generation of behavioral responses. Executable code is made available at github.com/Zak-Hussain/LLM4BeSci.git. Finally, the tutorial discusses challenges faced by research with (open-source) LLMs related to interpretability and safety and offers a perspective on future research at the intersection of language modeling and behavioral science.
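The feature-extraction application mentioned in this abstract can be sketched briefly. In the Hugging Face ecosystem, an encoder returns one embedding vector per token, and a common step in downstream behavioral-science pipelines is to mean-pool those vectors into a single fixed-length feature per text. The snippet below is an illustrative sketch, not code from the tutorial itself: the model name in the comment is an assumption, and toy vectors stand in for real transformer output so the example runs without downloading a model.

```python
# Hedged sketch of mean-pooling token embeddings into one text-level feature.
# In a real pipeline the per-token vectors would come from a Hugging Face model:
#   from transformers import pipeline
#   extractor = pipeline("feature-extraction", model="distilbert-base-uncased")
#   tokens = extractor("I love hugs")[0]
# (model name here is an illustrative assumption, not from the tutorial)

def mean_pool(token_embeddings):
    """Average a list of per-token embedding vectors into one text-level vector."""
    n_tokens = len(token_embeddings)
    dim = len(token_embeddings[0])
    return [sum(vec[d] for vec in token_embeddings) / n_tokens for d in range(dim)]

# Toy stand-in for encoder output: 3 tokens, embedding dimension 4
tokens = [
    [0.2, 0.4, 0.0, 1.0],
    [0.4, 0.0, 0.6, 1.0],
    [0.0, 0.2, 0.0, 1.0],
]
features = mean_pool(tokens)  # one 4-dimensional feature vector for the text
```

The resulting fixed-length vector can then feed any standard predictive model, which is the bridge between LLM representations and conventional behavioral-science analyses.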
Does Hugging Provide Stress-Buffering Social Support? A Study of Susceptibility to Upper Respiratory Infection and Illness
Perceived social support has been hypothesized to protect against the pathogenic effects of stress. How such protection might be conferred, however, is not well understood. Using a sample of 404 healthy adults, we examined the roles of perceived social support and received hugs in buffering against interpersonal stress-induced susceptibility to infectious disease. Perceived support was assessed by questionnaire, and daily interpersonal conflict and receipt of hugs were assessed by telephone interviews on 14 consecutive evenings. Subsequently, participants were exposed to a virus that causes a common cold and were monitored in quarantine to assess infection and illness signs. Perceived support protected against the rise in infection risk associated with increasing frequency of conflict. A similar stress-buffering effect emerged for hugging, which explained 32% of the attenuating effect of support. Among infected participants, greater perceived support and more-frequent hugs each predicted less-severe illness signs. These data suggest that hugging may effectively convey social support.
Bear hugs
Celebrate the little special ones in your life with this sweet rhyming board book, filled with cute nicknames and bear hugs.
Elevating Offensive Language Detection: CNN-GRU and BERT for Enhanced Hate Speech Identification
Upholding a secure and accepting digital environment is severely hindered by hate speech and inappropriate content on the internet. A novel approach combining a Convolutional Neural Network (CNN) with a Gated Recurrent Unit (GRU) and BERT is proposed for enhancing the identification of offensive content, particularly hate speech. The method utilizes the strengths of both the CNN-GRU and BERT models to capture the complex linguistic patterns and contextual information present in hate speech. The proposed model first uses the CNN-GRU to extract local and sequential features from textual data, allowing effective representation learning of offensive language. Subsequently, BERT, an advanced transformer-based model, is employed to capture contextualized representations of the text, enhancing the understanding of the detailed linguistic nuances and cultural contexts associated with hate speech. The BERT model is fine-tuned using the Hugging Face Transformers library. Tests are executed on publicly available hate-speech identification datasets to show how well the method identifies inappropriate content. By assisting the ongoing efforts to prevent the dissemination of hate speech and undesirable language online, the proposed framework promotes a more diverse and secure digital environment. The method is implemented in Python and achieves a competitive 98% performance compared with existing approaches (LSTM and RNN, CNN, LSTM and GBAT), showcasing its potential for real-world applications in combating online hate speech. Furthermore, it provides insights into the interpretability of the model's predictions, highlighting the key linguistic and contextual factors influencing offensive language detection. The study contributes to advancing hate speech detection by integrating the CNN-GRU and BERT models, giving a robust solution for enhancing offensive-content identification on online platforms.
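The recurrent half of the CNN-GRU branch described above can be illustrated with a single GRU step. The sketch below is not the paper's implementation: it is a one-unit GRU with toy scalar weights, showing how the gate mechanism accumulates a sequential feature over a token sequence (the update gate interpolates between the old hidden state and a tanh-bounded candidate).

```python
import math

# Hedged sketch: a single-unit GRU cell with toy scalar weights (w, u are
# illustrative assumptions, not learned parameters from the paper).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w=0.5, u=0.5):
    z = sigmoid(w * x + u * h)                 # update gate
    r = sigmoid(w * x + u * h)                 # reset gate (shares toy weights here)
    h_tilde = math.tanh(w * x + u * (r * h))   # candidate hidden state
    return (1.0 - z) * h + z * h_tilde         # blend old state with candidate

# Run the cell over a short sequence of stand-in token features
h = 0.0
for x in [1.0, -0.5, 2.0]:
    h = gru_step(x, h)
# h is the sequential feature summarizing the whole sequence
```

In the full model, a vector-valued version of this recurrence runs over CNN feature maps, and its final state is combined with BERT's contextual representation for classification.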
Ascle—A Python Natural Language Processing Toolkit for Medical Text Generation: Development and Evaluation Study
Medical texts present significant domain-specific challenges, and manually curating these texts is a time-consuming and labor-intensive process. To address this, natural language processing (NLP) algorithms have been developed to automate text processing. In the biomedical field, various toolkits for text processing exist, which have greatly improved the efficiency of handling unstructured text. However, these existing toolkits tend to emphasize different perspectives, and none of them offer generation capabilities, leaving a significant gap in the current offerings. This study aims to describe the development and preliminary evaluation of Ascle. Ascle is tailored for biomedical researchers and clinical staff with an easy-to-use, all-in-one solution that requires minimal programming expertise. For the first time, Ascle provides 4 advanced and challenging generative functions: question-answering, text summarization, text simplification, and machine translation. In addition, Ascle integrates 12 essential NLP functions, along with query and search capabilities for clinical databases. We fine-tuned 32 domain-specific language models and evaluated them thoroughly on 27 established benchmarks. In addition, for the question-answering task, we developed a retrieval-augmented generation (RAG) framework for large language models that incorporated a medical knowledge graph with ranking techniques to enhance the reliability of generated answers. Additionally, we conducted a physician validation to assess the quality of generated content beyond automated metrics. The fine-tuned models and RAG framework consistently enhanced text generation tasks. For example, the fine-tuned models improved the machine translation task by 20.27 in terms of BLEU score. In the question-answering task, the RAG framework raised the ROUGE-L score by 18% over the vanilla models. 
Physician validation of generated answers showed high scores for readability (4.95/5) and relevancy (4.43/5), with a lower score for accuracy (3.90/5) and completeness (3.31/5). This study introduces the development and evaluation of Ascle, a user-friendly NLP toolkit designed for medical text generation. All code is publicly available through the Ascle GitHub repository. All fine-tuned language models can be accessed through Hugging Face.
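The retrieval-augmented generation (RAG) step that the Ascle study credits with the ROUGE-L gains can be sketched in miniature. The code below is not Ascle's API: it shows only the generic core of RAG under stated assumptions (toy two-dimensional embeddings stand in for encoder output, and the passage texts are invented examples), ranking candidate passages by cosine similarity to the question embedding and prepending the best match to the prompt handed to the language model.

```python
import math

# Hedged sketch of the retrieval step in a RAG pipeline (not Ascle's code).
# Embeddings are toy vectors; a real system would obtain them from an encoder.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(question_vec, passages):
    """Return the passage text whose embedding is most similar to the question."""
    return max(passages, key=lambda p: cosine(question_vec, p["vec"]))["text"]

# Invented example passages with toy embeddings
passages = [
    {"text": "Aspirin inhibits platelet aggregation.", "vec": [1.0, 0.0]},
    {"text": "Insulin lowers blood glucose.", "vec": [0.0, 1.0]},
]
question_vec = [0.9, 0.1]  # toy embedding of "How does aspirin work?"
context = retrieve(question_vec, passages)
prompt = f"Context: {context}\nQuestion: How does aspirin work?\nAnswer:"
```

Grounding the generator in retrieved context this way is what lets a RAG system trade free-form generation for answers anchored in a curated knowledge source, which is consistent with the reliability motivation the abstract describes.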