Catalogue Search | MBRL
Explore the vast range of titles available.
908 result(s) for "BOT"
Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey
by Nadarzynski, Tom; West, Robert; Miles, Oliver
in Chatbots; Cross-sectional studies; Original Research
2021
Background
Chatbots and virtual voice assistants are increasingly common in primary care without sufficient evidence for their feasibility and effectiveness. We aimed to assess how perceived stigma and severity of various health issues are associated with the acceptability for three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both.
Methods
Between May and June 2019, we conducted an online study, advertised via Facebook, for UK citizens. It was a factorial simulation experiment with three within-subject factors (perceived health issue stigma, severity, and consultation source) and six between-subject covariates. The acceptability rating for each consultation source was the dependent variable. A single mixed-model ANOVA was performed.
Results
Amongst 237 participants (65% aged over 45 years, 73% women), GP consultations were seen as the most acceptable, followed by the combined GP-chatbot service. Chatbots were seen as the least acceptable consultation source for severe health issues, while their acceptability was significantly higher for stigmatised health issues. No associations between participants’ characteristics and acceptability were found.
Conclusions
Although healthcare professionals are perceived as the most desired sources of health information, chatbots may be useful for sensitive health issues in which disclosure of personal information is challenging. However, chatbots are less acceptable for health issues of higher severity and should not be recommended for use within that context. Policymakers and digital service designers need to recognise the limitations of health chatbots. Future research should establish a set of health topics most suitable for chatbot-led interventions and primary healthcare services.
Journal Article
Social media bot detection with deep learning methods: a systematic review
by Mathew, Sujith Samuel; Masud, Mohammad Mehedy; Hayawi, Kadhim
in Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering
2023
Social bots are automated social media accounts governed by software and controlled by humans at the backend. Some bots serve good purposes, such as automatically posting news updates and even providing help during emergencies. Nevertheless, bots have also been used for malicious purposes, such as posting fake news, spreading rumours, or manipulating political campaigns. Existing mechanisms allow malicious bots to be detected and removed automatically. However, the bot landscape changes as bot creators use more sophisticated methods to avoid detection. Therefore, new mechanisms for discerning between legitimate and bot accounts are much needed. Over the past few years, several review studies have contributed to social media bot detection research by presenting comprehensive surveys of various detection methods, including cutting-edge solutions like machine learning (ML) and deep learning (DL) techniques. This paper, to the best of our knowledge, is the first to focus solely on DL techniques and to compare the motivation and effectiveness of these techniques among themselves and against other methods, especially traditional ML ones. We present a refined taxonomy of the features used in DL studies and detail the associated pre-processing strategies required to produce suitable training data for a DL model. We summarize the gaps identified by the review papers that discussed DL/ML studies to provide future directions in this field. Overall, DL techniques turn out to be computation- and time-efficient for social bot detection, with performance better than or comparable to traditional ML techniques.
Journal Article
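The ML/DL detection pipelines surveyed above share a common core: turn account metadata into features and train a classifier. A minimal sketch of that idea, using plain logistic regression on two invented features (posting rate and follower/following ratio) rather than any model from the review:

```python
import math

# Toy accounts: [posts_per_day / 100, follower_following_ratio], label 1 = bot.
# Both the features and the numbers are invented for illustration.
accounts = [
    ([1.20, 0.05], 1), ([2.00, 0.02], 1), ([0.95, 0.10], 1),
    ([0.03, 1.20], 0), ([0.07, 0.90], 0), ([0.01, 2.50], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained by per-sample gradient descent; the deep
# models in the survey replace this linear scorer with neural networks.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(5000):
    for x, y in accounts:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def bot_probability(x):
    """Score a new account's feature vector; > 0.5 means bot-like."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)
```

The pre-processing step the paper's taxonomy covers (cleaning raw profiles and timelines into such feature vectors) is exactly what makes or breaks models like this.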
Detection and impact estimation of social bots in the Chilean Twitter network
by Mendoza, Marcelo; Providel, Eliana; Valenzuela, Sebastián
in 639/705/117; 639/705/258; Bot detection
2024
The rise of bots that mimic human behavior represents one of the most pressing threats to healthy information environments on social media. Many bots are designed to increase the visibility of low-quality content, spread misinformation, and artificially boost the reach of brands and politicians. These bots can also disrupt civic action coordination, such as by flooding a hashtag with spam and undermining political mobilization. Social media platforms have recognized the risks these malicious bots pose and implemented strict policies and protocols to block automated accounts. However, effective bot detection methods for Spanish are still in their early stages. Many studies and tools used for Spanish are based on English-language models and lack performance evaluations in Spanish. In response to this need, we have developed Botcheck, a method for detecting bots in Spanish. Botcheck was trained on a collection of Spanish-language accounts annotated in Twibot-20, a large-scale dataset featuring thousands of accounts annotated by humans in various languages. We evaluated Botcheck’s performance on a large set of labeled accounts and found that it outperforms other competitive methods, including deep learning-based ones. As a case study, we used Botcheck to analyze the 2021 Chilean presidential election and discovered evidence of bot account intervention during the electoral term. In addition, we conducted an external validation of the accounts detected by Botcheck in the case study and found our method to be highly effective. We also observed differences in behavior among the bots following the social media accounts of the official presidential candidates.
Journal Article
Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study
by Nadarzynski, Tom; Ridge, Damien; Cowie, Aimee
in Artificial intelligence; Mixed methods research; Original Research
2019
Background
Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants’ willingness to engage with AI-led health chatbots.
Methods
The study incorporated semi-structured interviews (N = 29) which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A survey of 24 items explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor.
Results
Three broad themes were identified: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), which correlated negatively with perceived poorer IT skills (OR = 0.32, 95% CI 0.13–0.78) and a dislike of talking to computers (OR = 0.77, 95% CI 0.60–0.99), and positively with perceived utility (OR = 5.10, 95% CI 3.08–8.43), positive attitude (OR = 2.71, 95% CI 1.77–4.16) and perceived trustworthiness (OR = 1.92, 95% CI 1.13–3.25).
Conclusion
Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients’ perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.
Journal Article
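The OR and CI figures above come from binary logistic regressions, but the arithmetic behind an odds ratio and its Wald confidence interval can be shown on a single 2×2 table. The counts below are invented for illustration, not taken from the study:

```python
import math

# Hypothetical 2x2 table (counts are made up):
#                 poor IT skills   good IT skills
# accepts chatbot       a=12             b=58
# rejects chatbot       c=40             d=106
a, b, c, d = 12, 58, 40, 106

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# An OR below 1 here means poorer IT skills go with lower acceptance,
# matching the direction (though not the numbers) of the reported OR = 0.32.
```

A logistic regression with a single categorical predictor, as the study describes, reduces to exactly this table-based calculation.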
Assessing Generative Pretrained Transformers (GPT) in Clinical Decision-Making: Comparative Analysis of GPT-3.5 and GPT-4
2024
Artificial intelligence, particularly chatbot systems, is becoming an instrumental tool in health care, aiding clinical decision-making and patient engagement.
This study aims to analyze the performance of ChatGPT-3.5 and ChatGPT-4 in addressing complex clinical and ethical dilemmas, and to illustrate their potential role in health care decision-making while comparing seniors' and residents' ratings, and specific question types.
Four specialized physicians formulated 176 real-world clinical questions. Eight senior physicians and residents then assessed the responses from GPT-3.5 and GPT-4 on a 1-5 scale across 5 categories: accuracy, relevance, clarity, utility, and comprehensiveness. Evaluations were conducted within internal medicine, emergency medicine, and ethics. Comparisons were made globally, between seniors and residents, and across question classifications.
Both GPT models received high mean scores (4.4, SD 0.8 for GPT-4 and 4.1, SD 1.0 for GPT-3.5). GPT-4 outperformed GPT-3.5 across all rating dimensions, with seniors consistently rating responses higher than residents for both models. Specifically, seniors rated GPT-4 as more beneficial and complete (mean 4.6 vs 4.0 and 4.6 vs 4.1, respectively; P<.001), and GPT-3.5 similarly (mean 4.1 vs 3.7 and 3.9 vs 3.5, respectively; P<.001). Ethical queries received the highest ratings for both models, with mean scores reflecting consistency across accuracy and completeness criteria. Distinctions among question types were significant, particularly for the GPT-4 mean scores in completeness across emergency, internal, and ethical questions (4.2, SD 1.0; 4.3, SD 0.8; and 4.5, SD 0.7, respectively; P<.001), and for GPT-3.5's accuracy, beneficial, and completeness dimensions.
ChatGPT's potential to assist physicians with medical issues is promising, with prospects to enhance diagnostics, treatments, and ethics. While integration into clinical workflows may be valuable, it must complement, not replace, human expertise. Continued research is essential to ensure safe and effective implementation in clinical environments.
Journal Article
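The mean (SD) summaries reported in the abstract can be reproduced for any rating set with the standard sample statistics. The ratings below are invented toy data, not the study's:

```python
import statistics

# Hypothetical 1-5 Likert ratings from one rater group for one question set.
gpt4_ratings  = [5, 4, 5, 4, 5, 4, 4, 5]
gpt35_ratings = [4, 4, 3, 5, 4, 3, 4, 4]

mean4  = statistics.mean(gpt4_ratings)    # sample mean
sd4    = statistics.stdev(gpt4_ratings)   # sample standard deviation
mean35 = statistics.mean(gpt35_ratings)
sd35   = statistics.stdev(gpt35_ratings)
```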
Interacting with educational chatbots: A systematic review
by Alhejori, Kholood; Kuhail, Mohammad Amin; Alturki, Nazik
in Chatbots; Computer Science Education; Cooperative Learning
2023
Chatbots hold the promise of revolutionizing education by engaging learners, personalizing learning activities, supporting educators, and developing deep insight into learners’ behavior. However, there is a lack of studies that analyze the recent evidence-based chatbot-learner interaction design techniques applied in education. This study presents a systematic review of 36 papers to understand, compare, and reflect on recent attempts to utilize chatbots in education using seven dimensions: educational field, platform, design principles, the role of chatbots, interaction styles, evidence, and limitations. The results show that the chatbots were mainly designed on a web platform to teach computer science, language, general education, and a few other fields such as engineering and mathematics. Further, more than half of the chatbots were used as teaching agents, while more than a third were peer agents. Most of the chatbots used a predetermined conversational path, and more than a quarter utilized a personalized learning approach that catered to students’ learning needs, while other chatbots used experiential and collaborative learning besides other design principles. Moreover, more than a third of the chatbots were evaluated with experiments, and the results primarily point to improved learning and subjective satisfaction. Challenges and limitations include inadequate or insufficient dataset training and a lack of reliance on usability heuristics. Future studies should explore the effect of chatbot personality and localization on subjective satisfaction and learning effectiveness.
Journal Article
Social bots spoil activist sentiment without eroding engagement
2024
Social bots are highly active on social media platforms, significantly affecting online discourse. We analyzed the dynamic nature of bot engagement related to Extinction Rebellion climate change protests in 2019. We found bots to impact human behavior more than the other way around during active discussions. To assess the causal impact of bot encounters, we compared communication histories of those who interacted with bots with matched users who did not. There is a consistent negative impact of bot encounters on subsequent sentiment. The impact on sentiment is conditional on the user’s original support level, with a more negative impact on those with a favourable or neutral stance towards climate activism. Political ‘astroturfing’ bots induce an increase in human communications, while encounters with other bots result in a decrease. Bot encounters do not change activists’ engagement levels in climate activism. Despite the seemingly minor impact of individual bot encounters, the cumulative effect is profound due to the large volume of bot communication. Our findings underscore the importance of monitoring the influence of social bots, as with new technological advancements distinguishing between bots and humans becomes ever more challenging.
Journal Article
Bot, or not? Comparing three methods for detecting social bots in five political discourses
2021
Social bots – partially or fully automated accounts on social media platforms – have not only been widely discussed, but have also entered political, media and research agendas. However, bot detection is not an exact science. Quantitative estimates of bot prevalence vary considerably and comparative research is rare. We show that findings on the prevalence and activity of bots on Twitter depend strongly on the methods used to identify automated accounts. We search for bots in political discourses on Twitter, using three different bot detection methods: Botometer, Tweetbotornot and “heavy automation”. We drew a sample of 122,884 unique user Twitter accounts that had produced 263,821 tweets contributing to five political discourses in five Western democracies. While all three bot detection methods classified accounts as bots in all our cases, the comparison shows that the three approaches produce very different results. We discuss why neither manual validation nor triangulation resolves the basic problems, and conclude that social scientists studying the influence of social bots on (political) communication and discourse dynamics should be careful with easy-to-use methods, and consider interdisciplinary research.
Journal Article
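The paper's central point, that prevalence estimates depend on the detector used, can be made concrete by comparing per-account labels across methods. The labels below are invented; Botometer, Tweetbotornot and the "heavy automation" rule are borrowed only as names:

```python
# Hypothetical bot labels (1 = bot) from three detectors over ten accounts.
botometer     = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
tweetbotornot = [1, 0, 0, 0, 1, 1, 0, 0, 0, 0]
heavy_auto    = [0, 0, 0, 0, 1, 0, 0, 0, 0, 1]

def prevalence(labels):
    # share of accounts a method classifies as bots
    return sum(labels) / len(labels)

def agreement(x, y):
    # share of accounts on which two methods give the same label
    return sum(int(p == q) for p, q in zip(x, y)) / len(x)
```

In this toy example the three prevalence estimates (40%, 30%, 20%) already diverge even though every pair of methods agrees on at least 60% of accounts, which is the pattern the comparison in the paper documents at scale.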
Faculty Assistant Bot-automation of administrative activities using robotic process automation
2024
In this paper, a process workflow for a Bot is created using robotic process automation (RPA) combined with artificial intelligence, to streamline administrative tasks and alleviate the stress of faculty handling administration while teaching in higher education. These activities are a must for National Assessment and Accreditation Council (NAAC) accreditation and All India Survey on Higher Education (AISHE) surveys, which strive to bring quality to higher-education teaching by shaping educational policies. Ensuring the accuracy of this data is therefore paramount to avoid misleading decisions. The Bot automatically gathers student results from the website and saves them into individual files, eliminating the need for human intervention. It is trained to find each student's file and update it with results from upcoming semesters or backlogs. The Bot efficiently manages folders during file saving to enhance retrieval. Additionally, it maintains pertinent student details such as community, caste, and religion, which are beneficial for educational policy surveys aiming at improved quality. Moreover, it generates and updates reports after each process execution, ensuring data integrity, and can be trained for statistical analysis to predict student outcomes. The UiPath tool is used in the design and testing of the developed Bot.
Journal Article
Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz’s Theory of Basic Values
by Haber, Yuval; Mizrachi, Yonathan; Hadar-Shoval, Dorit
in Allied Health Personnel; Artificial intelligence; Burnout
2024
Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics.
This study aimed to (1) evaluate whether the STBV can measure value-like constructs within leading LLMs and (2) determine whether LLMs exhibit distinct value-like patterns from humans and each other.
In total, 4 LLMs (Bard, Claude 2, Generative Pretrained Transformer [GPT]-3.5, GPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess value-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared to published data from a diverse sample of 53,472 individuals across 49 nations who had completed the PVQ-RR. This allowed us to assess whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests.
The PVQ-RR showed good reliability and validity for quantifying value-like infrastructure within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction, while de-emphasizing achievement, power, and security relative to humans. Successful discriminant analysis differentiated the 4 LLMs' distinct value profiles. Further examination found the biased value profiles strongly predicted the LLMs' responses when presented with mental health dilemmas requiring choosing between opposing values. This provided further validation for the models embedding distinct motivational value-like constructs that shape their decision-making.
This study leveraged the STBV to map the motivational value-like infrastructure underpinning leading LLMs. Although the study demonstrated the STBV can effectively characterize value-like infrastructure within LLMs, substantial divergence from human values raises ethical concerns about aligning these models with mental health applications. The biases toward certain cultural value sets pose risks if integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the LLMs underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental health care must account for their embedded biases and motivation mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and refinement of alignment techniques to instill comprehensive human values.
Journal Article
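One simple way to quantify how far an LLM's value profile diverges from a human benchmark is a similarity measure over the value scores. The sketch below uses cosine similarity on invented scores for four Schwartz values; neither the numbers nor the value subset come from the study, which used discriminant analysis over the full PVQ-RR:

```python
import math

# Hypothetical mean PVQ-RR-style scores over four Schwartz values:
# [universalism, self-direction, achievement, power]. Toy numbers that only
# mirror the reported direction of bias (universalism up, power down).
llm_profile   = [5.2, 5.0, 2.1, 1.4]
human_profile = [4.1, 4.3, 3.8, 2.9]

def cosine_similarity(u, v):
    # 1.0 = identical orientation; lower values = diverging value profiles
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```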