Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
183,648 result(s) for "chatbot"
Can you spot text from ChatGPT?
in Chatbots, 2025
ChatGPT keeps trying to sound more human. Here are the latest signs to look for.
Streaming Video
Is the A.I. Boom Just Vibes?
in Chatbots, 2025
Are we living through an A.I. bubble? Or is it all just vibes? Jason Furman, a contributing Opinion writer and an economist at the Harvard Kennedy School, tells Ross Douthat that while it’s hard to put a number on it, “there’s something enormous going on here.”
Streaming Video
P192/S4-P2 Use of ChatGPT in the practice of human nutrition. Does it work for clinical assessment and NOVA classification of food product processing?
by Licda Ana Lissette Guzmán, Sra Norma Carolina Alfaro Villatoro, Dr Wilton Pérez Corrales (INCAP)
in Chatbots, 2023
Journal Article
Antecedents and consequences of chatbot initial trust
2022
Purpose
Artificial intelligence chatbots are shifting the nature of online services by revolutionizing the interactions of service providers with consumers. Thus, this study aims to explore the antecedents (e.g. compatibility, perceived ease of use, performance expectancy and social influence) and consequences (e.g. chatbot usage intention and customer engagement) of chatbot initial trust.
Design/methodology/approach
A sample of 184 questionnaire responses was collected in Lebanon and analyzed using structural equation modeling (SEM) in AMOS 24.
Findings
The results revealed that except for performance expectancy, all the other three factors (compatibility, perceived ease of use and social influence) significantly boost customers’ initial trust toward chatbots. Further, initial trust in chatbots enhances the intention to use chatbots and encourages customer engagement.
Research limitations/implications
The study provides insights into some variables influencing initial chatbot trust. Future studies could extend the model by adding other variables (e.g. customer experience and attitude), in addition to exploring the dark side of artificial intelligence chatbots.
Practical implications
This study suggests key insights for marketing managers on how to build chatbot initial trust, which, in turn, will lead to an increase in customers’ interactions with the brand.
Originality/value
The current study makes substantial contributions to the artificial intelligence marketing literature by proposing and testing a novel conceptual model that, for the first time, examines the factors that shape chatbot initial trust and the key outcomes of that trust.
Journal Article
Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT
2023
Artificial Intelligence (AI) is developing in a manner that blurs the boundaries between specific application areas and expands its capability to serve a wide range of uses. The public release of ChatGPT, a generative AI chatbot powered by a large language model (LLM), represents a significant step in this direction. Accordingly, professionals predict that this technology will affect education, including the role of teachers. However, despite some assumptions regarding its influence on education, how teachers may actually use the technology and the nature of its relationship with teachers remain under-investigated. Thus, in this study, the relationship between ChatGPT and teachers was explored, with a particular focus on identifying the complementary roles of each in education. Eleven language teachers were asked to use ChatGPT in their instruction over a period of two weeks. They then participated in individual interviews about their experiences and provided interaction logs produced during their use of the technology. Through qualitative analysis of the data, four ChatGPT roles (interlocutor, content provider, teaching assistant, and evaluator) and three teacher roles (orchestrating different resources with quality pedagogical decisions, making students active investigators, and raising AI ethical awareness) were identified. Based on the findings, an in-depth discussion of teacher-AI collaboration is presented, highlighting the importance of teachers' pedagogical expertise when using AI tools. Implications regarding the future use of LLM-powered chatbots in education are also provided.
Journal Article
Trust me, I'm a bot – repercussions of chatbot disclosure in different service frontline settings
2022
Purpose
Chatbots are increasingly prevalent in the service frontline. Due to advancements in artificial intelligence, chatbots are often indistinguishable from humans. Regarding the question of whether firms should disclose their chatbots' nonhuman identity, previous studies find negative consumer reactions to chatbot disclosure. By considering the role of trust and service-related context factors, this study explores how the negative effects of chatbot disclosure on customer retention can be prevented.
Design/methodology/approach
This paper presents two experimental studies that examine the effect of disclosing the nonhuman identity of chatbots on customer retention. The first study examines the effect of chatbot disclosure at different levels of service criticality; the second considers different service outcomes. The authors employ analysis of covariance and mediation analysis to test their hypotheses.
Findings
Chatbot disclosure has a negative indirect effect on customer retention, through mitigated trust, for services with high criticality. In cases where a chatbot fails to handle the customer's service issue, disclosing the chatbot identity not only lacks a negative impact but even elicits a positive effect on retention.
Originality/value
The authors provide evidence that customers react differently to chatbot disclosure depending on the service frontline setting. They show that chatbot disclosure does not only have the undesirable consequences previous studies suspect but can lead to positive reactions as well. In doing so, the authors draw a more balanced picture of the consequences of chatbot disclosure.
Journal Article
Help! Is my chatbot falling into the uncanny valley? An empirical study of user experience in human-chatbot interaction
by Skjuve, Marita; Følstad, Asbjørn; Brandtzaeg, Petter Bae
in Agents (artificial intelligence), chatbot, Chatbots, 2019
Advances in artificial intelligence strengthen chatbots' ability to resemble human conversational agents. For some application areas, it may be tempting not to be transparent about whether a conversational agent is a chatbot or a human. However, the uncanny valley theory suggests that such a lack of transparency may cause uneasy feelings in the user. In this study, we combined quantitative and qualitative methods to investigate this issue. First, we used a 2 × 2 experimental research design (n = 28) to investigate the effects of a lack of transparency on the perceived pleasantness of the conversation, as well as on perceived human likeness of and affinity for the conversational agent. Second, we conducted an exploratory analysis of qualitative participant reports on these conversations. We did not find that a lack of transparency negatively affected user experience, but we identified three factors important to participants' assessments. The findings are of theoretical and practical significance and motivate future research.
Journal Article