31 results for "Shaikh, Samira"
Using reinforcement learning with external rewards for open-domain natural language generation
We propose a new approach to emotional natural language generation using a bidirectional seq2seq model. Our goal is to generate emotionally relevant language that accommodates the emotional tone of the prior context. To incorporate emotional information, we train our own embeddings, appended with emotion values drawn from valence, arousal and dominance scores. We use a reinforcement-learning framework tuned with a policy gradient method. Two of the internal rewards in our reinforcement learning framework, viz. Ease of Answering and Semantic Coherence, are based on prior state of the art. We propose a new internal reward, Emotional Intelligence, computed by minimizing the affective dissonance between the source and the generated text. We also train a separate external reward analyzer to predict the rewards as well as to maximize the expected rewards (both internal and external). We evaluate the system on two corpora commonly used for natural language generation tasks: the Cornell Movie Dialog corpus and the Yelp Restaurant Review corpus. We report standard evaluation metrics including BLEU, ROUGE-L and perplexity, as well as human evaluation, to validate our approach. We demonstrate the ability of the proposed model to generate emotionally appropriate responses on both corpora.
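As a rough illustration of the affective-dissonance idea behind the Emotional Intelligence reward described in this abstract, the Python sketch below scores a generated response by how closely its average valence-arousal-dominance (VAD) profile matches that of the source; the toy lexicon and function names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an "Emotional Intelligence"-style reward: the reward
# is higher when the affective (VAD) profile of the generated response is
# close to that of the source context. Lexicon and names are illustrative.
import numpy as np

# Toy VAD lexicon: word -> (valence, arousal, dominance), each in [0, 1].
VAD_LEXICON = {
    "happy": (0.89, 0.73, 0.77),
    "sad":   (0.10, 0.33, 0.25),
    "angry": (0.12, 0.83, 0.60),
    "calm":  (0.78, 0.20, 0.65),
}
NEUTRAL = np.array([0.5, 0.5, 0.5])  # fallback for out-of-lexicon words

def vad_profile(tokens):
    """Average VAD vector of a token sequence."""
    vecs = [np.array(VAD_LEXICON.get(t, NEUTRAL)) for t in tokens]
    return np.mean(vecs, axis=0) if vecs else NEUTRAL.copy()

def emotional_intelligence_reward(source_tokens, generated_tokens):
    """Negative affective dissonance: smaller VAD distance -> larger reward."""
    dissonance = np.linalg.norm(vad_profile(source_tokens) - vad_profile(generated_tokens))
    return -dissonance  # maximizing the reward minimizes the dissonance

# Example: a calm reply to a calm prompt scores higher than an angry one.
prompt = "stay calm".split()
print(emotional_intelligence_reward(prompt, "i feel calm".split()))
print(emotional_intelligence_reward(prompt, "i am angry".split()))
```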
Optimizing Recruitment for Qualitative Research: A Comparison of Social Media, Emails, and Offline Methods
Participant recruitment through social media platforms has been suggested as an effective method for sampling from specific populations; however, recent online recruitment attempts have been met with varying levels of success. In the current study, we targeted a specific social media population: those who advocate for the Black Lives Matter movement, delineating between those with high and low follower counts on Twitter. We compared the outcomes of our recruitment methods, which include Facebook ads; unpaid, personalized tweets; emails to groups involved in community advocacy; and offline methods. Included in our analysis is the amount of effort involved in each recruitment method, as well as advertising costs. Based on our comparison, Facebook advertising was the most effective form of social media recruitment for our study. In contrast, unpaid, personalized tweets were time-consuming and ineffective.
Can we generate shellcodes via natural language? An empirical study
Writing software exploits is an important practice for offensive security analysts to investigate and prevent attacks. In particular, shellcodes are especially time-consuming and technically challenging to write, as they are written in assembly language. In this work, we address the task of automatically generating shellcodes, starting purely from descriptions in natural language, by proposing an approach based on Neural Machine Translation (NMT). We then present an empirical study using a novel dataset (Shellcode_IA32), which consists of 3200 assembly code snippets of real Linux/x86 shellcodes from public databases, annotated with natural language. Moreover, we propose novel metrics to evaluate the accuracy of NMT at generating shellcodes. The empirical analysis shows that NMT can generate assembly code snippets from natural language with high accuracy and that, in many cases, it can generate entire shellcodes with no errors.
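The following sketch illustrates one simple way generated snippets could be scored against reference shellcode, using normalized exact-match accuracy; the normalization rules and names are assumptions and are not the paper's proposed metrics.

```python
# Illustrative sketch (not the paper's exact metrics): scoring generated
# assembly snippets against references by normalized exact match.
import re

def normalize_asm(snippet):
    """Lowercase, strip ';' comments, and collapse whitespace in an assembly snippet."""
    lines = []
    for line in snippet.strip().splitlines():
        line = re.sub(r";.*$", "", line)           # drop trailing comments
        line = re.sub(r"\s+", " ", line).strip().lower()
        if line:
            lines.append(line)
    return lines

def snippet_accuracy(predictions, references):
    """Fraction of snippets whose normalized form matches the reference exactly."""
    exact = sum(normalize_asm(p) == normalize_asm(r)
                for p, r in zip(predictions, references))
    return exact / len(references)

# Example with two toy snippets: only the first prediction matches.
refs  = ["xor eax, eax\nmov al, 0x0b", "push 0x68732f2f"]
preds = ["XOR EAX, EAX\nmov al, 0x0b   ; syscall number", "push 0x0b"]
print(snippet_accuracy(preds, refs))  # 0.5
```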
Dynamics of Health Agency Response and Public Engagement in Public Health Emergency: A Case Study of CDC Tweeting Patterns During the 2016 Zika Epidemic
Social media have been increasingly adopted by health agencies to disseminate information, interact with the public, and understand public opinion. Among them, the Centers for Disease Control and Prevention (CDC) is one of the first US government health agencies to adopt social media during health emergencies and crises. It was active on Twitter during the 2016 Zika epidemic, which caused 5168 domestic noncongenital cases in the United States. The aim of this study was to quantify the temporal variabilities in CDC's tweeting activities throughout the Zika epidemic, public engagement (defined as retweeting and replying), and Zika case counts. The study then compares the patterns of these 3 datasets to identify possible discrepancies among domestic Zika case counts, CDC's response on Twitter, and public engagement in this topic. All of the CDC-initiated tweets published in 2016, with corresponding retweets and replies, were collected from 67 CDC-associated Twitter accounts. Both univariate and multivariate time series analyses were performed in each quarter of 2016 for domestic Zika case counts, CDC tweeting activities, and public engagement with the CDC-initiated tweets. CDC sent out >84.0% (5130/6104) of its Zika tweets in the first quarter of 2016, when Zika case counts were low in the 50 US states and territories (only 560/5168, 10.8% and 662/38,885, 1.70% of cases, respectively). While Zika case counts increased dramatically in the second and third quarters, CDC efforts on Twitter substantially decreased. The time series of public engagement with the CDC-initiated tweets generally differed among quarters and from that of original CDC tweets, based on autoregressive integrated moving average (ARIMA) model results. Both original CDC tweets and public engagement had the highest mutual information with Zika case counts in the second quarter. Furthermore, public engagement with the original CDC tweets was substantially correlated with and preceded actual Zika case counts. Considerable discrepancies existed among CDC's original tweets regarding Zika, public engagement with these tweets, and the actual Zika epidemic. The patterns of these discrepancies also varied between quarters in 2016. CDC was much more active in the early warning of Zika, especially in the first quarter of 2016. Public engagement with CDC's original tweets served as a more prominent predictor of the actual Zika epidemic than the number of CDC's original tweets later in the year.
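As a minimal sketch of the kind of analysis described here, the following Python snippet fits a univariate ARIMA model to a toy weekly tweet-count series and estimates mutual information against a toy case-count series; the synthetic data and ARIMA order are assumptions, not the study's actual settings.

```python
# Rough sketch: ARIMA fit on a synthetic tweeting series plus mutual
# information between tweet activity and case counts. All data are toy values.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
weeks = 52
cases = np.cumsum(rng.poisson(5, weeks)).astype(float)                    # toy case counts
tweets = 200 * np.exp(-np.arange(weeks) / 10) + rng.normal(0, 5, weeks)   # early burst of tweets

# Univariate ARIMA fit on the tweeting series (the (1,1,1) order is an assumption).
model = ARIMA(tweets, order=(1, 1, 1)).fit()
print("AIC of ARIMA(1,1,1) fit:", round(model.aic, 1))

# Mutual information between weekly tweet activity and weekly case counts.
mi = mutual_info_regression(tweets.reshape(-1, 1), cases, random_state=0)
print("estimated mutual information:", round(mi[0], 3))
```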
Insights into codeswitching from online communication: Effects of language preference and conditions arising from vocabulary richness
Twitter data from a crisis that impacted many English–Spanish bilinguals show that the direction of codeswitches is associated with the statistically documented tendency of single speakers to prefer one language over another in their tweets, as gleaned from their tweeting history. Further, lexical diversity, a measure of vocabulary richness derived from information-theoretic measures of uncertainty in communication, is greater in proximity to a codeswitch than in productions remote from a switch. The prospects of a role for lexical diversity in characterizing the conditions for a language switch suggest that communicative precision may induce conditions that attenuate constraints against language mixing.
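A minimal sketch of an entropy-based lexical diversity measure of the kind referenced in this abstract appears below: Shannon entropy of the word distribution in a token window near versus far from a codeswitch point. The window size and example tweet are illustrative assumptions.

```python
# Sketch: unigram Shannon entropy as a lexical diversity measure, compared in
# a window near a codeswitch versus a window remote from it. Toy example only.
import math
from collections import Counter

def word_entropy(tokens):
    """Shannon entropy (bits) of the unigram distribution over tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window(tokens, center, size=5):
    """Tokens within `size` positions of `center`."""
    return tokens[max(0, center - size): center + size]

tweet = "we are ok pero no hay luz we are ok we are ok".split()
switch_index = 3  # position of "pero", where the English-to-Spanish switch occurs
print("near switch:    ", round(word_entropy(window(tweet, switch_index)), 2))
print("far from switch:", round(word_entropy(window(tweet, len(tweet) - 2)), 2))
```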
Persuasion in Online Communication - Automation and Counteraction
In this thesis, we studied persuasion in online communication and how to automate persuasive behavior in an autonomous chat agent. We implemented known persuasive strategies in the agent, based on the strength and evaluation of the beliefs expressed by participants in conversation, to induce belief change. The foundation of our persuasive strategies comes from the summative model of attitude, where belief change leads to attitude change and, ultimately, behavior change. Upon placing the agent in the midst of conversations, it is able to discern beliefs that are expressed by the participants in the group and use them to ascertain participants' opinions on topics of discussion. Using this information and drawing upon theories of influence and persuasion from social psychology, cognitive science, and communication, the agent aligns participants towards or against a particular issue. We organized the work in three phases. First, we conducted a belief elicitation study to obtain salient beliefs on a variety of social issues and used these salient beliefs to create survey instruments. Next, we programmed behaviors and strategies in the agent aimed at persuading individuals through online conversation as well as counteracting persuasion by the participants. The behaviors programmed in the agent are triggered, in part, by a variety of linguistic cues emerging from the conversation, such as dialogue acts, topic, polarity, and communication acts. The annotated context of conversation is used to inform the agent's models by updating the underlying beliefs of participants in real time. Third, we ran controlled experiments with human participants to validate the chat agent in a variety of settings, including Wizard-of-Oz and autonomous agent conditions, to determine the efficacy of its programmed strategies. In the validation experiments, we used pre-discussion and post-discussion surveys to determine changes in participants' attitudes prior to and after a discussion. We showed that the agent achieved statistically significant changes in participants' attitudes, thus demonstrating its effectiveness in being persuasive. Through our work, we have shown that specific persuasion strategies can be automated as well as counteracted using sophisticated communication models built upon sociolinguistic and psychological theories of social influence.
Modeling Sociocultural phenomena in discourse
In this paper, we describe a novel approach to computational modeling and understanding of social and cultural phenomena in multi-party dialogues. We developed a two-tier approach in which we first detect and classify certain sociolinguistic behaviors, including topic control, disagreement, and involvement, that serve as first-order models from which the presence of higher-level social roles, such as leadership, may be inferred.
A Survey on Artificial Intelligence for Source Code: A Dialogue Systems Perspective
In this survey paper, we overview major deep learning methods used in Natural Language Processing (NLP) and source code over the last 35 years. Next, we present a survey of the applications of Artificial Intelligence (AI) for source code, also known as Code Intelligence (CI) and Programming Language Processing (PLP). We survey over 287 publications and present a software-engineering-centered taxonomy for CI, placing each of the works into one category describing how it best assists the software development cycle. Then, we overview the field of conversational assistants and their applications in software engineering and education. Lastly, we highlight research opportunities at the intersection of AI for code and conversational assistants and provide future directions for research on conversational assistants with CI capabilities.
Emotional Neural Language Generation Grounded in Situational Contexts
Emotional language generation is one of the keys to human-like artificial intelligence. Humans use different types of emotions depending on the situation of the conversation. Emotions also play an important role in mediating the engagement level with conversational partners. However, current conversational agents do not effectively account for emotional content in the language generation process. To address this problem, we develop a language modeling approach that generates affective content when the dialogue is situated in a given context. We use the recently released Empathetic-Dialogues corpus to build our models. Through detailed experiments, we find that our approach outperforms the state-of-the-art method on the perplexity metric by about 5 points and achieves a higher BLEU score.
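For reference, the perplexity metric cited here is the exponential of the average negative log-likelihood a model assigns to held-out tokens; the short sketch below computes it from made-up token probabilities.

```python
# Perplexity = exp(mean negative log-probability per token).
# The probabilities below are invented purely to show the calculation.
import math

def perplexity(token_probs):
    """Perplexity of a sequence given per-token model probabilities."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model assigning higher probability to the reference tokens gets lower
# (better) perplexity; a ~5-point drop is the scale of improvement reported.
print(perplexity([0.20, 0.10, 0.25, 0.15]))  # weaker model
print(perplexity([0.30, 0.20, 0.35, 0.25]))  # stronger model
```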
Towards Best Experiment Design for Evaluating Dialogue System Output
To overcome the limitations of automated metrics (e.g. BLEU, METEOR) for evaluating dialogue systems, researchers typically use human judgments to provide convergent evidence. While it has been demonstrated that human judgments can suffer from inconsistent ratings, extant research has also found that the design of the evaluation task affects the consistency and quality of human judgments. We conduct a between-subjects study to understand the impact of four experiment conditions on human ratings of dialogue system output. In addition to discrete and continuous scale ratings, we also experiment with a novel application of Best-Worst scaling to dialogue evaluation. Through our systematic study with 40 crowdsourced workers in each task, we find that using continuous scales achieves more consistent ratings than Likert-scale or ranking-based experiment designs. Additionally, we find that factors such as the time taken to complete the task and no prior experience participating in similar studies of rating dialogue system output positively impact consistency and agreement among raters.
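As a small illustration of Best-Worst scaling applied to dialogue outputs, the sketch below converts a handful of hypothetical best/worst judgments into per-item scores using the common (#best - #worst) / #appearances formula; the judgment tuples are invented for the example.

```python
# Best-Worst Scaling sketch: each response's score is the number of times it
# was chosen best minus the times chosen worst, divided by its appearances.
# The judgments below are hypothetical annotations, not study data.
from collections import defaultdict

# Each judgment: (items shown as a 4-tuple, item picked best, item picked worst)
judgments = [
    (("A", "B", "C", "D"), "A", "D"),
    (("A", "B", "C", "E"), "B", "E"),
    (("A", "C", "D", "E"), "A", "D"),
]

best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
for items, b, w in judgments:
    for item in items:
        seen[item] += 1
    best[b] += 1
    worst[w] += 1

scores = {item: (best[item] - worst[item]) / seen[item] for item in seen}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, round(score, 2))
```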