Catalogue Search | MBRL
Explore the vast range of titles available.
278 result(s) for "4000/4008"
The global landscape of AI ethics guidelines
2019
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
As AI technology develops rapidly, it is widely recognized that ethical guidelines are required for safe and fair implementation in society. But is it possible to agree on what is ‘ethical AI’? A detailed analysis of 84 AI ethics reports around the world, from national and international organizations, companies and institutes, explores this question, finding a convergence around core principles but substantial divergence on practical implementation.
Journal Article
When combinations of humans and AI are useful: A systematic review and meta-analysis
by Malone, Thomas; Vaccaro, Michelle; Almaatouq, Abdullah
in 4000/4008; 4014/4045; 4014/477/2811
2024
Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems.
Vaccaro et al. present a systematic review and meta-analysis of the performance of human–AI combinations, finding that on average, human–AI combinations performed significantly worse than the best of humans or AI alone. They also found performance losses in decision-making tasks and significantly greater gains in content creation tasks.
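For readers unfamiliar with the effect-size metric quoted in this abstract, here is a minimal sketch of how Hedges' g is computed for two independent samples. The function name and inputs are illustrative only; they are not taken from the study.

```python
import math

def hedges_g(sample1, sample2):
    """Hedges' g for two independent samples: Cohen's d scaled by
    a small-sample bias-correction factor J."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Unbiased sample variances and the pooled standard deviation.
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Small-sample correction J shrinks |d| slightly toward zero.
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j
```

A negative g, as reported above, means the first group (here, human–AI combinations) scored lower on average than the second (the best of humans or AI alone).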
Journal Article
Principles alone cannot guarantee ethical AI
2019
Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.
AI ethics initiatives have seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite this, Brent Mittelstadt highlights important differences between medical practice and AI development that suggest a principled approach may not work in the case of AI.
Journal Article
Improving personalized recommendations system using graph attention networks driven by perceived complexity and innovation
2026
This paper presents a product recommendation system (GAT-RS) based on perceived complexity and perceived innovation in product reviews. Perceived complexity refers to the usability of a product, while perceived innovation refers to the extent to which a product looks novel. Such aspects significantly impact consumer dynamics and business performance indicators, including user engagement and sales results. In this study, product reviews are manually annotated, and Explainable AI (XAI) is then used to improve the decision-making process of the proposed GAT-RS model. The model uses pre-trained SimCSE embeddings to obtain high-quality textual representations of product reviews, and Graph Attention Networks (GAT) to discover associations between product attributes and customers' perceptions of complexity and innovation. SMOTE oversampling and class-weighted loss functions are used to handle class imbalance among reviews during training. The proposed GAT-RS is evaluated on accuracy, precision, recall, F1 score and ROC AUC, achieving an accuracy of 94.61% and a ROC AUC of 98.94%, higher than the baseline approaches. Combining complexity and innovation enhances user satisfaction by aligning recommendations with preferred styles of cognition and novelty, and strengthens the accuracy of personalized recommendations based on customer interests.
Journal Article
Research on user satisfaction with AIGC assisted museum scenario design
2025
In the context of digital transformation, artificial intelligence generated content (AIGC) technology provides an innovative path for smart museum scene design, but existing research lacks a user-centered systematic framework. This study uses questionnaire surveys and structural equation modelling (SEM) to explore the mechanisms linking AIGC technology adaptability, user demand fit, scene design innovation, technology acceptance and user satisfaction. The results show that user demand fit has the strongest direct impact on satisfaction, highlighting the “user-centered” design core; AIGC technology adaptability improves satisfaction through both direct and indirect paths, verifying the mediating effect of the technology acceptance model (TAM); and scene design innovation must be converted into value through technology acceptance before it affects user satisfaction. The study constructs a closed-loop model of “demand drive–technology adaptation–scene innovation–acceptance conversion” and proposes a design strategy based on cognitive load balance. This provides a theoretical basis and practical path for museums to use AIGC technology to improve user experience, and promotes the paradigm shift of museums from “object-centered” to “people-centered”.
Journal Article
A large-scale audit of dataset licensing and attribution in AI
by Kabbara, Jad; Longpre, Shayne; Shippole, Enrico
in 4000/4008; 706/648/270; Artificial intelligence
2024
The race to train language models on vast, diverse and inconsistently documented datasets raises pressing legal and ethical concerns. To improve data transparency and understanding, we convene a multi-disciplinary effort between legal and machine learning experts to systematically audit and trace more than 1,800 text datasets. We develop tools and standards to trace the lineage of these datasets, including their source, creators, licences and subsequent use. Our landscape analysis highlights sharp divides in the composition and focus of data licenced for commercial use. Important categories including low-resource languages, creative tasks and new synthetic data all tend to be restrictively licenced. We observe frequent miscategorization of licences on popular dataset hosting sites, with licence omission rates of more than 70% and error rates of more than 50%. This highlights a crisis in misattribution and informed use of popular datasets driving many recent breakthroughs. Our analysis of data sources also explains the application of copyright law and fair use to finetuning data. As a contribution to continuing improvements in dataset transparency and responsible use, we release our audit, with an interactive user interface, the Data Provenance Explorer, to enable practitioners to trace and filter on data provenance for the most popular finetuning data collections:
www.dataprovenance.org.
The Data Provenance Initiative audits over 1,800 text artificial intelligence (AI) datasets, analysing trends, permissions of use and global representation. It exposes frequent errors on several major data hosting sites and offers tools for transparent and informed use of AI training data.
Journal Article
Pandemic publishing poses a new COVID-19 challenge
by Norgaard, Ole; Safreed-Harmon, Kelly; Rasmussen, Lauge Neimann
in 4000/4008/4046; 706/648; Behavioral Sciences
2020
The scientific community’s response to COVID-19 has resulted in a large volume of research moving through the publication pipeline at extraordinary speed, with a median time from receipt to acceptance of 6 days for journal articles. Although the nature of this emergency warrants accelerated publishing, measures are required to safeguard the integrity of scientific evidence.
Journal Article
Governing AI safety through independent audits
2021
Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.
As highly automated systems become pervasive in society, enforceable governance principles are needed to ensure safe deployment. This Perspective proposes a pragmatic approach where independent audit of AI systems is central. The framework would embody three AAA governance principles: prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements.
Journal Article
The mental health implications of artificial intelligence adoption: the crucial role of self-efficacy
2024
The rapid adoption of artificial intelligence (AI) in organizations has transformed the nature of work, presenting both opportunities and challenges for employees. This study utilizes several theories to investigate the relationships between AI adoption, job stress, burnout, and self-efficacy in AI learning. A three-wave time-lagged research design was used to collect data from 416 professionals in South Korea. Structural equation modeling was used to test the proposed mediation and moderation hypotheses. The results reveal that AI adoption does not directly influence employee burnout but exerts its impact through the mediating role of job stress. The results also show that AI adoption significantly increases job stress, thus increasing burnout. Furthermore, self-efficacy in AI learning was found to moderate the relationship between AI adoption and job stress, with higher self-efficacy weakening the positive relationship. These findings highlight the importance of considering the mediating and moderating mechanisms that shape employee experiences in the context of AI adoption. The results also suggest that organizations should proactively address the potential negative impact of AI adoption on employee well-being by implementing strategies to manage job stress and foster self-efficacy in AI learning. This study underscores the need for a human-centric approach to AI adoption that prioritizes employee well-being alongside technological advancement. Future research should explore additional factors that may influence the relationships between AI adoption, job stress, burnout, and self-efficacy across diverse contexts to inform the development of evidence-based strategies for supporting employees in AI-driven workplaces.
Journal Article
Problematic mobile phone and social media use among adolescents and its relationship with cyberbullying, cybervictimisation and social anxiety
by Delgado, Beatriz; Martínez-Monteagudo, María Carmen; Aparisi, David
in 4000/4008; 4014/4045; 4014/477
2026
The use of mobile phones and social media has become a global and unstoppable phenomenon, especially among adolescents, largely due to the ease of access to numerous applications that facilitate communication and social interaction via the Internet. This study examines the relationship between problematic mobile phone and social media use, cyberbullying, and social anxiety in a representative sample of secondary school adolescents. A total of 1164 students aged 12 to 18 years (M = 14.56; SD = 1.4) completed a battery of self-report measures assessing problematic mobile phone and social media use and social anxiety. The results indicate that students with high problematic use of mobile phones and social media have significantly higher levels of cyberbullying and social anxiety compared to those with low or medium problematic use. Furthermore, logistic regression analyses showed that cyberbullying, cybervictimisation and social anxiety, specifically fear of negative evaluation, were significant predictors of problematic mobile phone and social media use, indicating a higher probability of dependence as levels of cybervictimisation and social anxiety increase. In addition, women are more likely than men to experience problematic mobile phone use (PMPU) and problematic social media use (PSMU). The results suggest the need to implement interventions aimed at improving emotional management and reducing problematic behaviours related to technology use.
Journal Article