820 results for "Content analysis (Communication) Data processing."
Multilingual text analysis: challenges, models, and approaches
Text analytics (TA) covers a very wide research area. Its overarching goal is to discover and present knowledge - facts, rules, and relationships - that is otherwise hidden in the textual content. The authors of this book guide us in a quest to attain this knowledge automatically, by applying various machine learning techniques. This book describes recent developments in multilingual text analysis. It covers several specific examples of practical TA applications, including their problem statements, theoretical background, and implementation of the proposed solutions. The reader can see which preprocessing techniques and text representation models were used, how the evaluation process was designed and implemented, and how these approaches can be adapted to multilingual domains.
Quantitative Analysis of Culture Using Millions of Digitized Books
We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of 'culturomics,' focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. Culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities.
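The basic quantity behind this kind of trend analysis is the relative frequency of a word or n-gram per publication year. Below is a minimal sketch of that computation over a toy corpus; the data, tokenization, and target word are placeholders, not the authors' corpus or pipeline.

```python
# Illustrative sketch only: relative word frequency per year, the core quantity in
# culturomics-style trend analysis. The (year, text) pairs are toy stand-ins.
from collections import Counter, defaultdict

corpus = [
    (1900, "the influenza outbreak spread through the city"),
    (1900, "letters and telegrams carried the news"),
    (2000, "the internet carried the news of the outbreak"),
]

counts = defaultdict(Counter)
totals = Counter()
for year, text in corpus:
    tokens = text.lower().split()
    counts[year].update(tokens)
    totals[year] += len(tokens)

word = "outbreak"
for year in sorted(counts):
    freq = counts[year][word] / totals[year]  # share of all tokens in that year
    print(year, f"{freq:.4f}")
```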
Handbook of research on opinion mining and text analytics on literary works and social media
\"This book uses artificial intelligence and big data analytics to conduct opinion mining and text analytics on literary works and social media, focusing on theories, method, applications and approaches of data analytic techniques that can be used to extract and analyze data from literary books and social media, in a meaningful pattern\"-- Provided by publisher.
Advertising Content and Consumer Engagement on Social Media: Evidence from Facebook
We describe the effect of social media advertising content on customer engagement using data from Facebook. We content-code 106,316 Facebook messages across 782 companies, using a combination of Amazon Mechanical Turk and natural language processing algorithms. We use this data set to study the association of various kinds of social media marketing content with user engagement—defined as Likes, comments, shares, and click-throughs—with the messages. We find that inclusion of widely used content related to brand personality—like humor and emotion—is associated with higher levels of consumer engagement (Likes, comments, shares) with a message. We find that directly informative content—like mentions of price and deals—is associated with lower levels of engagement when included in messages in isolation, but higher engagement levels when provided in combination with brand personality–related attributes. Also, certain directly informative content, such as deals and promotions, drives consumers' path to conversion (click-throughs). These results persist after incorporating corrections for the nonrandom targeting of Facebook's EdgeRank (News Feed) algorithm and so reflect more closely user reaction to content than Facebook's behavioral targeting. Our results suggest that there are benefits to content engineering that combines informative characteristics that help in obtaining immediate leads (via improved click-throughs) with brand personality–related content that helps in maintaining future reach and branding on the social media site (via improved engagement). These results inform content design strategies. Separately, the methodology we apply to content-code text is useful for future studies utilizing unstructured data such as advertising content or product reviews. The online appendix is available at https://doi.org/10.1287/mnsc.2017.2902. This paper was accepted by Chris Forman, information systems.
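A hedged sketch of the general two-step approach described above: scale up human content codes with a supervised text classifier, then relate the coded attributes to engagement counts. The toy data, the TF-IDF/logistic-regression coder, and the Poisson engagement model are illustrative assumptions, not the paper's exact specification (which also corrects for EdgeRank targeting).

```python
# Sketch, not the authors' method: (1) learn a content code ("humor") from a small set of
# human-coded messages, (2) apply it to uncoded messages and relate attributes to Likes.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, PoissonRegressor

# Step 1: messages hand-coded (e.g., by Mechanical Turk workers) for a single attribute.
coded_msgs = ["friday fun: caption this photo!", "save 20% this weekend only",
              "we love our customers", "new store hours posted",
              "our mascot tried the new flavor... it did not go well", "holiday sale starts now"]
has_humor = [1, 0, 0, 0, 1, 0]
vec = TfidfVectorizer()
coder = LogisticRegression().fit(vec.fit_transform(coded_msgs), has_humor)

# Step 2: code new messages automatically, then model engagement counts on the attributes.
new_msgs = ["caption contest time!", "15% off all shoes today",
            "our mascot is at it again", "store closed for the holiday"]
humor = coder.predict(vec.transform(new_msgs))
deal = np.array([0, 1, 0, 0])          # a second, rule-coded attribute for illustration
likes = np.array([120, 45, 150, 30])   # placeholder engagement counts
X = np.column_stack([humor, deal])
model = PoissonRegressor().fit(X, likes)
print(dict(zip(["humor", "deal"], model.coef_)))  # coefficient signs sketch the association
```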
Numerical algorithms for personalized search in self-organizing information networks
\"This book lays out the theoretical groundwork for personalized search and reputation management, both on the Web and in peer-to-peer and social networks.\" The book develops scalable algorithms that exploit the graphlike properties underlying personalized search and reputation management, and delves into realistic scenarios regarding web-scale data.--[book cover]
Interoperability of heterogeneous health information systems: a systematic literature review
Background: The lack of interoperability between health information systems reduces the quality of care provided to patients and wastes resources. Accordingly, there is an urgent need to develop integration mechanisms among the various health information systems. The aim of this review was to investigate the interoperability requirements for heterogeneous health information systems and to summarize and present them. Methods: In accordance with the PRISMA guideline, a broad electronic search of the literature was conducted through six databases (PubMed, Web of Science, Scopus, MEDLINE, Cochrane Library, and Embase) up to 25 July 2022. The inclusion criteria were English-language articles available in full text with the closest objectives; 36 articles were selected for further analysis. Results: Interoperability has been discussed in the field of health information systems since 2003 and is now a topic of active research interest. Projects in this field are mostly national in scope and aim to achieve the electronic health record. HL7 FHIR, CDA, HIPAA, SNOMED-CT, SOA, RIM, XML, APIs, Java, and SQL are among the most important requirements for implementing interoperability. To guarantee meaningful data exchange, semantic interoperability is the best choice because systems can then recognize and process semantically similar information consistently. Conclusions: The health industry has become more complex and has new needs. Interoperability meets these needs by connecting the outputs and inputs of processing systems and making it easier to access data in the required formats.
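For orientation, HL7 FHIR (one of the requirements the review lists) exposes resources over a plain REST API. The sketch below queries Patient resources from a public HAPI test server; the endpoint is only an example and is not taken from the review.

```python
# Hedged sketch: a FHIR search request. hapi.fhir.org/baseR4 is a public test server
# used here purely as a placeholder endpoint.
import requests

base = "https://hapi.fhir.org/baseR4"   # example FHIR R4 endpoint (placeholder)
resp = requests.get(
    f"{base}/Patient",                  # standard FHIR search: GET [base]/[resourceType]
    params={"_count": 1},               # ask for a single matching resource
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()                    # search results come back as a FHIR Bundle
print(bundle["resourceType"], len(bundle.get("entry", [])), "entry returned")
```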
Issues and Best Practices in Content Analysis
This article discusses three issues concerning content analysis method and ends with a list of best practices in conducting and reporting content analysis projects. Issues addressed include the use of search and databases for sampling, the differences between content analysis and algorithmic text analysis, and which reliability coefficients should be calculated and reported. The “Best Practices” section provides steps to produce reliable and valid content analysis data and the appropriate reporting of those steps so the project can be properly evaluated and replicated.
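As a concrete illustration of the reliability reporting the article discusses, the sketch below computes percent agreement and Cohen's kappa for two coders' nominal codes. Kappa is just one commonly reported coefficient; the article weighs which coefficients to report, and the choice here is an assumption for illustration.

```python
# Sketch only: percent agreement and Cohen's kappa for two coders over the same items.
from collections import Counter

coder_a = ["pos", "neg", "pos", "neu", "pos", "neg", "pos", "neu"]
coder_b = ["pos", "neg", "neu", "neu", "pos", "pos", "pos", "neu"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n                 # percent agreement
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum(pa[c] / n * pb[c] / n for c in set(coder_a) | set(coder_b))   # chance agreement
kappa = (observed - expected) / (1 - expected)
print(f"agreement={observed:.2f}  kappa={kappa:.2f}")
```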
A performance comparison of supervised machine learning models for Covid-19 tweets sentiment analysis
The spread of Covid-19 has resulted in worldwide health concerns. Social media is increasingly used to share news and opinions about it. A realistic assessment of the situation is necessary to utilize resources optimally and appropriately. In this research, we perform Covid-19 tweet sentiment analysis using a supervised machine learning approach. Identification of Covid-19 sentiments from tweets would allow informed decisions for better handling of the current pandemic situation. The dataset is extracted from Twitter using IDs provided by the IEEE DataPort. Tweets are extracted by an in-house crawler built with the Tweepy library. The dataset is cleaned using preprocessing techniques, and sentiments are extracted using the TextBlob library. The contribution of this work is the performance evaluation of various machine learning classifiers using our proposed feature set, formed by concatenating the bag-of-words and term frequency-inverse document frequency representations. Tweets are classified as positive, neutral, or negative. Performance of the classifiers is evaluated on accuracy, precision, recall, and F1 score. For completeness, the dataset is further investigated using the Long Short-Term Memory (LSTM) deep learning architecture. The results show that the Extra Trees classifier outperforms all other models, achieving a 0.93 accuracy score with our proposed concatenated feature set. The LSTM achieves low accuracy compared to the machine learning classifiers. To demonstrate the effectiveness of our proposed feature set, the results are compared with the VADER sentiment analysis technique based on the GloVe feature extraction approach.
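A minimal sketch of the feature construction and classifier this abstract describes: bag-of-words and TF-IDF representations concatenated into one feature matrix, fed to an Extra Trees classifier. The toy tweets, labels, and hyperparameters are placeholders, not the authors' data or settings (their labels come from TextBlob polarity scores).

```python
# Sketch of the concatenated BoW + TF-IDF feature set with an Extra Trees classifier.
from scipy.sparse import hstack
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

tweets = [
    "vaccines rolling out everywhere, feeling hopeful",
    "hospitals overwhelmed again, this is awful",
    "new travel guidance published today",
    "so grateful for the frontline workers",
    "another lockdown, cannot take this anymore",
    "case numbers reported at the daily briefing",
]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

bow = CountVectorizer()
tfidf = TfidfVectorizer()
# Concatenate the two sparse representations column-wise into a single feature matrix.
X = hstack([bow.fit_transform(tweets), tfidf.fit_transform(tweets)]).tocsr()

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=0)
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))  # precision, recall, F1
```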
An exploratory content and sentiment analysis of The Guardian metaverse articles using Leximancer and natural language processing
The metaverse has become one of the most popular concepts of recent times. Companies and entrepreneurs are fiercely competing to invest and take part in this virtual world. Millions of people globally are anticipated to spend much of their time in the metaverse, regardless of their age, gender, ethnicity, or culture. There are few comprehensive studies on the positive/negative sentiment and effects of the newly identified, but not yet well defined, metaverse concept that is already rapidly reshaping the digital landscape. This study therefore aimed to better understand the metaverse concept by, firstly, identifying its positive and negative sentiment characteristics and, secondly, revealing the associations between the metaverse concept and other related concepts. To do so, the study used Natural Language Processing (NLP) methods, specifically Artificial Intelligence (AI) combined with computational qualitative analysis. The data comprised metaverse articles published on The Guardian website, a key global mainstream media outlet, from 2021 to 2022. Thematic content analysis of the qualitative data was performed with the Leximancer software, and the Natural Language Toolkit (NLTK) NLP library was used to identify sentiment. Further, the AI-based MonkeyLearn API was used to classify the main topics that emerged from the Leximancer analysis by sector. The key themes that emerged in the Leximancer analysis included "metaverse", "Facebook", "games", and "platforms". The sentiment analysis revealed that of all articles published about the metaverse in 2021–2022, 61% (n = 622) were positive, 30% (n = 311) were negative, and 9% (n = 90) were neutral. Positive discourse about the metaverse concerned key innovations that virtual experiences brought to users and companies with the support of the technological infrastructure of blockchain, algorithms, and NFTs, led by the gaming world. Negative discourse reflected various problems (misinformation, harmful content, algorithms, data, and equipment) that occur during the use of Facebook and other social media platforms, harms that individuals encountered in the metaverse, and new problems that the metaverse itself produces. The MonkeyLearn findings revealed the "marketing/advertising/PR" role, "Recreational" business, and "Science & Technology" events as the key content topics. This study's contribution is twofold: first, it showcases a novel way to triangulate qualitative analysis of large unstructured textual data as a method for exploring the metaverse concept; and second, it reveals the characteristics of the metaverse as a concept, as well as its associations with other related concepts. Given that the topic of the metaverse is new, this is, to our knowledge, the first study to do both.
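The abstract states that NLTK was used to identify sentiment but does not name the analyzer. A common choice is NLTK's VADER analyzer with the conventional compound-score cutoffs, sketched below; the analyzer, thresholds, and article texts are assumptions, not the authors' pipeline.

```python
# Hedged sketch: classify article texts as positive/negative/neutral with NLTK's VADER
# analyzer. The +/-0.05 compound-score thresholds are a common convention, assumed here.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

articles = [
    "The metaverse opens remarkable creative opportunities for games and platforms.",
    "Critics warn the metaverse repeats Facebook's misinformation and harmful-content problems.",
]  # placeholder article texts

def label(text: str) -> str:
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

for a in articles:
    print(label(a), ":", a[:60])
```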
Twitter as a Tool for Health Research: A Systematic Review
Background. Researchers have used traditional databases to study public health for decades. Less is known about the use of social media data sources, such as Twitter, for this purpose. Objectives. To systematically review the use of Twitter in health research, define a taxonomy to describe Twitter use, and characterize the current state of Twitter in health research. Search methods. We performed a literature search in PubMed, Embase, Web of Science, Google Scholar, and CINAHL through September 2015. Selection criteria. We searched for peer-reviewed original research studies that primarily used Twitter for health research. Data collection and analysis. Two authors independently screened studies and abstracted data related to the approach to analysis of Twitter data, methodology used to study Twitter, and current state of Twitter research by evaluating time of publication, research topic, discussion of ethical concerns, and study funding source. Main results. Of 1110 unique health-related articles mentioning Twitter, 137 met eligibility criteria. The primary approaches for using Twitter in health research that constitute a new taxonomy were content analysis (56%; n = 77), surveillance (26%; n = 36), engagement (14%; n = 19), recruitment (7%; n = 9), intervention (7%; n = 9), and network analysis (4%; n = 5). These studies collectively analyzed more than 5 billion tweets primarily by using the Twitter application program interface. Of 38 potential data features describing tweets and Twitter users, 23 were reported in fewer than 4% of the articles. The Twitter-based studies in this review focused on a small subset of data elements including content analysis, geotags, and language. Most studies were published recently (33% in 2015). Public health (23%; n = 31) and infectious disease (20%; n = 28) were the research fields most commonly represented in the included studies. Approximately one third of the studies mentioned ethical board approval in their articles. Primary funding sources included federal (63%), university (13%), and foundation (6%). Conclusions. We identified a new taxonomy to describe Twitter use in health research with 6 categories. Many data elements discernible from a user’s Twitter profile, especially demographics, have been underreported in the literature and can provide new opportunities to characterize the users whose data are analyzed in these studies. Twitter-based health research is a growing field funded by a diversity of organizations. Public health implications. Future work should develop standardized reporting guidelines for health researchers who use Twitter and policies that address privacy and ethical concerns in social media research.