Search Results

183,648 results for "chatbot"
Can you spot text from ChatGPT?
ChatGPT keeps trying to sound more human. Here are the latest signs to look for.
Is the A.I. Boom Just Vibes?
Are we living through an A.I. bubble? Or is it all just vibes? Jason Furman, a contributing Opinion writer and an economist at the Harvard Kennedy School, tells Ross Douthat that while it’s hard to put a number on it, “there’s something enormous going on here.”
Antecedents and consequences of chatbot initial trust
Purpose: Artificial intelligence chatbots are shifting the nature of online services by revolutionizing the interactions of service providers with consumers. This study therefore explores the antecedents (e.g. compatibility, perceived ease of use, performance expectancy and social influence) and consequences (e.g. chatbot usage intention and customer engagement) of chatbot initial trust.
Design/methodology/approach: A sample of 184 responses was collected in Lebanon using a questionnaire and analyzed using structural equation modeling (SEM) in AMOS 24.
Findings: The results revealed that, except for performance expectancy, the other three factors (compatibility, perceived ease of use and social influence) significantly boost customers' initial trust in chatbots. Further, initial trust in chatbots enhances the intention to use chatbots and encourages customer engagement.
Research limitations/implications: The study provides insights into some of the variables influencing initial chatbot trust. Future studies could extend the model with other variables (e.g. customer experience and attitude) and explore the dark side of artificial intelligence chatbots.
Practical implications: The study offers marketing managers key insights into how to build initial trust in chatbots, which, in turn, will increase customers' interactions with the brand.
Originality/value: The study makes substantial contributions to the artificial intelligence marketing literature by proposing and testing a novel conceptual model that examines, for the first time, the factors that shape chatbot initial trust and the key outcomes of that trust.
Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT
Artificial Intelligence (AI) is developing in a manner that blurs the boundaries between specific areas of application and expands its capability to be used in a wide range of applications. The public release of ChatGPT, a generative AI chatbot powered by a large language model (LLM), represents a significant step forward in this direction. Accordingly, professionals predict that this technology will affect education, including the role of teachers. However, despite some assumptions regarding its influence on education, how teachers may actually use the technology and the nature of its relationship with teachers remain under-investigated. Thus, in this study, the relationship between ChatGPT and teachers was explored with a particular focus on identifying the complementary roles of each in education. Eleven language teachers were asked to use ChatGPT for their instruction during a period of two weeks. They then participated in individual interviews regarding their experiences and provided interaction logs produced during their use of the technology. Through qualitative analysis of the data, four ChatGPT roles (interlocutor, content provider, teaching assistant, and evaluator) and three teacher roles (orchestrating different resources with quality pedagogical decisions, making students active investigators, and raising AI ethical awareness) were identified. Based on the findings, an in-depth discussion of teacher-AI collaboration is presented, highlighting the importance of teachers’ pedagogical expertise when using AI tools. Implications regarding the future use of LLM-powered chatbots in education are also provided.
Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings
Purpose: Chatbots are increasingly prevalent in the service frontline. Due to advancements in artificial intelligence, chatbots are often indistinguishable from humans. Regarding the question of whether firms should disclose their chatbots' nonhuman identity, previous studies find negative consumer reactions to chatbot disclosure. By considering the role of trust and service-related context factors, this study explores how the negative effects of chatbot disclosure on customer retention can be prevented.
Design/methodology/approach: This paper presents two experimental studies that examine the effect of disclosing the nonhuman identity of chatbots on customer retention. The first study examines the effect of chatbot disclosure at different levels of service criticality; the second considers different service outcomes. The authors employ analysis of covariance and mediation analysis to test their hypotheses.
Findings: Chatbot disclosure has a negative indirect effect on customer retention, through mitigated trust, for services with high criticality. In cases where a chatbot fails to handle the customer's service issue, disclosing the chatbot's identity not only lacks a negative impact but even elicits a positive effect on retention.
Originality/value: The authors provide evidence that customers react differently to chatbot disclosure depending on the service frontline setting. They show that chatbot disclosure does not only have the undesirable consequences that previous studies suspect but can lead to positive reactions as well. In doing so, the authors draw a more balanced picture of the consequences of chatbot disclosure.
Help! Is my chatbot falling into the uncanny valley? An empirical study of user experience in human-chatbot interaction
Advances in artificial intelligence strengthen chatbots' ability to resemble human conversational agents. For some application areas, it may be tempting not to be transparent about whether a conversational agent is a chatbot or a human. However, the uncanny valley theory suggests that such a lack of transparency may cause uneasy feelings in the user. In this study, we combined quantitative and qualitative methods to investigate this issue. First, we used a 2 x 2 experimental research design (n = 28) to investigate the effects of a lack of transparency on the perceived pleasantness of the conversation, in addition to perceived human likeness of and affinity for the conversational agent. Second, we conducted an exploratory analysis of qualitative participant reports on these conversations. We did not find that a lack of transparency negatively affected user experience, but we identified three factors important to participants' assessments. The findings are of theoretical and practical significance and motivate future research.