Catalogue Search | MBRL
Explore the vast range of titles available.
27,275 result(s) for "Artificial intelligence Ethics."
Applied artificial intelligence : a handbook for business leaders
This bestselling book gives business leaders and executives a foundational education on how to leverage artificial intelligence and machine learning solutions to deliver ROI for their businesses.
Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education
2024
The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper aims to introduce a comprehensive set of ethical principles for teaching AI ethics in medical education. This dynamic and principle-based approach is designed to be adaptive and comprehensive, addressing not only the current but also emerging ethical challenges associated with the use of AI in medicine. This study conducts a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of these anticipated challenges are illustrated through a focused discussion on “informed consent” in the context of AI in medicine and medical education. This paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics—autonomy, beneficence, nonmaleficence, and justice—and extending them by integrating 3 public health ethics principles—efficiency, common good orientation, and proportionality. The principle-based approach to teaching AI ethics in medical education proposed in this study offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, recommended in the current academic discourse. 
By incorporating the 3 principles of public health ethics, this principle-based approach ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach for teaching AI ethics in medical education provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for the proposed principle-based approach to teaching AI ethics in medical education.
Journal Article
Responsible AI : implement an ethical approach in your organization
"Responsible AI is a guide to how business leaders can develop and implement a robust and responsible AI strategy for their organizations. Responsible AI has rapidly transitioned to a strategic priority for leaders and organizations worldwide. Responsible AI guides readers step-by-step through the process of establishing robust yet manageable ethical AI initiatives for any size organization, outlining the three core pillars of building a responsible AI strategy: people, process and technology. It provides the insight and guidance needed to help leaders fully understand the technical and commercial potential of ethics in AI while also covering the operations and strategy needed to support implementation. Responsible AI breaks down what it means to use ethics and values as a modern-day decision-making tool in the design and development of AI. It conceptually covers both how ethics can be used to identify risks and establish safeguards in the development of AI and how to use ethics-by-design methods to stimulate AI innovation. It also covers the different considerations for large enterprises and SMEs and discusses the role of the AI ethicist. It is supported by practical case studies from organizations such as IKEA, Nvidia, IBM and NatWest Group" -- Provided by publisher.
Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care
2024
Background
In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance, among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies when it comes to their own individual care and identifies areas that may align with or threaten elements of patient-centered care.
Methods
An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10–21, 2023), and responses were collected to be representative of the state’s population based on age, gender, race/ethnicity, and political affiliation.
Results
Respondents were more comfortable with the use of AI in health-related tasks that were not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the ‘human touch’ associated with doctors was a common theme within qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and cost concerns. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care but only when used within limits.
Conclusion
The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence that AI will undermine patient-physician relationships at this time, there is concern on the part of patients regarding the application of AI within medical care and specifically as it relates to their interaction with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.
Journal Article
Ethical considerations and concerns in the implementation of AI in pharmacy practice: a cross-sectional study
by Alzoubi, Karem H., Jaber, Deema, Khabour, Omar F.
in Adult; Africa, Northern; Artificial intelligence
2024
Background
Integrating artificial intelligence (AI) into healthcare has raised significant ethical concerns. In pharmacy practice, AI offers promising advances but also poses ethical challenges.
Methods
A cross-sectional study was conducted in countries from the Middle East and North Africa (MENA) region on 501 pharmacy professionals. A 12-item online questionnaire assessed ethical concerns related to the adoption of AI in pharmacy practice. Demographic factors associated with ethical concerns were analyzed via SPSS v.27 software using appropriate statistical tests.
Results
Participants expressed concerns about patient data privacy (58.9%), cybersecurity threats (58.9%), potential job displacement (62.9%), and lack of legal regulation (67.0%). Tech-savviness and basic AI understanding were correlated with higher concern scores (p < 0.001). Ethical implications include the need for informed consent, beneficence, justice, and transparency in the use of AI.
Conclusion
The findings emphasize the importance of ethical guidelines, education, and patient autonomy in adopting AI. Collaboration, data privacy, and equitable access are crucial to the responsible use of AI in pharmacy practice.
Highlights
Pharmacy professionals in the MENA region express significant ethical concerns about integrating AI into pharmacy practice.
Key ethical considerations for AI highlighted in the current study include the privacy of patient data, AI replacing non-specialized pharmacists, and a lack of legal regulation.
Tech-savviness and basic understanding of AI are positively correlated with higher ethical concerns.
Informed consent (as a vital part of autonomy), beneficence, and justice are crucial ethical principles in the adoption of AI in pharmacy.
Collaboration, education, and ethical frameworks are essential for the responsible use of AI in pharmacy practice.
Journal Article
High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare
by Régis, Catherine, Martineau, Joé T., Corfmat, Maelenn
in Artificial intelligence; Artificial Intelligence - ethics; Artificial Intelligence - legislation & jurisprudence
2025
Background
Considering the disruptive potential of AI technology, its current and future impact in healthcare, as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help healthcare professionals better navigate the AI wave.
Methods
We analyzed the literature that specifically discusses ethics and law related to the development and implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.
Results
We identified six categories of issues related to AI development and implementation in healthcare: (1) privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work, professions and the job market. While each one raises different questions depending on perspective, we propose three main legal and ethical priorities: education and training of healthcare professionals, offering support and guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of the AI tools themselves.
Conclusions
By highlighting the main ethical and legal issues involved in the development and implementation of AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and trusted AI will be jeopardized.
Journal Article
On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence
by Farisco, Michele, Salles, Arleen, Evers, Kathinka
in Artificial intelligence; Ethics; Medical ethics
2022
Contemporary ethical analysis of Artificial Intelligence (AI) is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation and generally present a set of recommendations. Here we propose two things: first, regarding content, since some of the applied issues raised by AI are related to fundamental questions about topics like intelligence, consciousness, and the ontological and ethical status of humans, among others, the treatment of these issues would benefit from interfacing with neuroethics that has been addressing those same issues in the context of brain research. Second, the identification and management of some of the practical ethical challenges raised by AI would be enriched by embracing the methodological resources used in neuroethics. In particular, we focus on the methodological distinction between conceptual and action-oriented neuroethical approaches. We argue that the normative (often principles-oriented) discussion about AI will benefit from further integration of conceptual analysis, including analysis of some operative assumptions, their meaning in different contexts, and their mutual relevance in order to avoid misplaced or disproportionate concerns and achieve a more realistic and useful approach to identifying and managing the emerging ethical issues.
Journal Article
Ethical implications of using general-purpose LLMs in clinical settings: a comparative analysis of prompt engineering strategies and their impact on patient safety
2025
Background
The rapid integration of large language models (LLMs) into healthcare raises critical ethical concerns regarding patient safety, reliability, transparency, and equitable care delivery. Despite not being trained explicitly on medical data, individuals increasingly use general-purpose LLMs to address medical questions and clinical scenarios. While prompt engineering can optimize LLM performance, its ethical implications for clinical decision-making remain underexplored. This study aimed to evaluate the ethical dimensions of prompt engineering strategies in the clinical applications of LLMs, focusing on safety, bias, transparency, and their implications for the responsible implementation of AI in healthcare.
Methods
We conducted an ethics-focused analysis of three advanced and reasoning-capable LLMs (OpenAI O3, Claude Sonnet 4, Google Gemini 2.5 Pro) across six prompt engineering strategies and five clinical scenarios of varying ethical complexity. Six expert clinicians evaluated 90 responses using domains that included diagnostic accuracy, safety assessment, communication, empathy, and ethical reasoning. We specifically analyzed safety incidents, bias patterns, and transparency of reasoning processes.
Results
Significant ethical concerns emerged across all models and scenarios. Critical safety issues occurred in 12.2% of responses, with concentration in complex ethical scenarios (Level 5: 23.1% vs. Level 1: 2.3%, p < 0.001). Meta-cognitive prompting demonstrated superior ethical reasoning (mean ethics score: 78.3 ± 9.1), while safety-first prompting reduced safety incidents by 45% compared to zero-shot approaches (8.9% vs. 16.2%). However, all models showed concerning deficits in communication empathy (mean 54% of maximum) and exhibited potential bias in complex multi-cultural scenarios. Transparency varied significantly by prompt strategy, with meta-cognitive approaches providing the clearest reasoning pathways (4.2 vs. 1.8 explicit reasoning steps), which are essential for clinical accountability. The study highlighted critical gaps in ethical decision-making transparency, with meta-cognitive approaches providing 4.2 explicit reasoning steps compared to 1.8 in zero-shot methods (p < 0.001). Bias patterns disproportionately affected vulnerable populations, with systematic underestimation of treatment appropriateness in elderly patients and inadequate cultural considerations in end-of-life scenarios.
Conclusions
Current clinical applications of general-purpose LLMs present substantial ethical challenges requiring urgent attention. While structured prompt engineering demonstrated measurable improvements in some domains, with meta-cognitive approaches showing 13.0% performance gains and safety-first prompting reducing critical incidents by 45%, substantial limitations persist across all strategies. Even optimized approaches achieved inadequate performance in communication and empathy (≤ 54% of maximum), retained residual bias patterns (11.7% in safety-first conditions), and exhibited concerning safety deficits, indicating that current prompt engineering methods provide only marginal improvements, which are insufficient for reliable clinical deployment. These findings highlight significant ethical challenges that necessitate further investigation into the development of appropriate guidelines and regulatory frameworks for the clinical use of general-purpose AI models.
Journal Article
Proactive vs. passive algorithmic ethics practices in healthcare: the moderating role of healthcare engagement type in patients’ responses
2025
Background
Artificial intelligence (AI) is transforming healthcare, but concerns about algorithmic biases and ethical challenges hinder patient acceptance. This study examined the effects of proactive versus passive algorithmic ethics practices on patient responses across different healthcare engagement types (privacy-focused vs. utility-focused).
Methods
We conducted a 2 × 2 online experiment with 513 participants in China. The experiment manipulated the healthcare provider’s algorithmic ethics approach (proactive vs. passive) and the healthcare engagement type (privacy-focused vs. utility-focused). Participants were randomly assigned to view a scenario describing a hospital’s AI diagnostic system, then completed measures of attitudes, trust, and intentions to use the AI-enabled service.
Results
Proactive algorithmic ethics practices significantly increased positive attitudes, trust, and usage intentions compared to passive practices. The positive impact of proactive practices was stronger for privacy-focused healthcare (e.g., mental health services) compared to utility-focused services emphasizing care optimization.
Conclusions
This study underscores the critical role of proactive, context-specific algorithmic ethics practices in cultivating patient trust and engagement with AI-enabled healthcare. To optimize outcomes, healthcare providers must strategically adapt their ethical governance approaches to align with the unique privacy-utility considerations that are most salient to patients across different healthcare contexts and AI use cases.
Clinical trial number
Not applicable.
Journal Article
Deepfake Pornography and the Ethics of Non-Veridical Representations
2023
We investigate the question of whether (and if so why) creating or distributing deepfake pornography of someone without their consent is inherently objectionable. We argue that nonconsensually distributing deepfake pornography of a living person on the internet is inherently pro tanto wrong in virtue of the fact that nonconsensually distributing intentionally non-veridical representations about someone violates their right that their social identity not be tampered with, a right which is grounded in their interest in being able to exercise autonomy over their social relations with others. We go on to suggest that nonconsensual deepfakes are especially worrisome in connection with this right because they have a high degree of phenomenal immediacy, a property which corresponds inversely to the ease with which a representation can be doubted. We then suggest that nonconsensually creating and privately consuming deepfake pornography is worrisome but may not be inherently pro tanto wrong. Finally, we discuss the special issue of whether nonconsensually distributing deepfake pornography of a deceased person is inherently objectionable. We argue that the answer depends on how long it has been since the person died.
Journal Article