Catalogue Search | MBRL
Explore the vast range of titles available.
6,999 result(s) for "AI ethics"
Your face belongs to us : a secretive startup's quest to end privacy as we know it
by Hill, Kashmir, author
in Clearview AI (Software company) History.; Human face recognition (Computer science) Social aspects.; Data privacy.
2023
"In this riveting feat of reporting, Kashmir Hill illuminates the improbable rise of Clearview AI and how Hoan Ton-That, a computer engineer, and Richard Schwartz, a Giuliani associate, launched a terrifying facial recognition app with society-altering potential. They were assisted by a cast of controversial characters, including conservative provocateur Charles Johnson and billionaire Trump backer Peter Thiel. The app can scan a blurry portrait and, in just seconds, collect every instance of a person's online life. It can find your name, your social media profiles, your friends and family, even your home address (as well as photos of you that you may not even have known existed). The story of Clearview AI opens up a window into a larger, more urgent one about our tortured relationship to technology, the way it entertains and seduces us even as it steals our privacy and lays us bare to bad actors in politics, criminal justice, and tech. This technology has been quietly growing more powerful for decades. Ubiquitous in China and Russia, it was also developed by American companies, including Google and Facebook, who decided it was too radical to release. That did not stop Clearview. They gave demos of the tech to interested private investors and contracted it out to hundreds of law enforcement agencies around the country. American law enforcement, including the Department of Homeland Security, has already used it to arrest people for everything from petty theft to assault. Without regulation it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level"-- Provided by publisher.
A framework for AI ethics literacy: development, validation, and its role in fostering students’ self-rated learning competence
2025
This study investigates the relationship between AI ethics literacy and students’ self-rated learning competence using AI by developing a comprehensive framework of AI ethics literacy comprising knowledge, attitude, and competence dimensions. Data were collected from 482 college students through an online questionnaire and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). Key findings reveal that: (1) AI ethics knowledge is primarily characterized by four ethical principles: fairness and inclusivity, privacy protection, human-centricity, and responsibility and accountability; (2) AI ethics knowledge positively influences both AI ethics attitude and competence; and (3) AI ethics attitude and competence significantly enhance students’ self-rated learning competence using AI. This research contributes a novel theoretical framework for understanding AI ethics literacy while providing practical insights for cultivating students’ self-rated learning competence using AI.
Journal Article
Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms
2023
Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary in order to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from Western traditional philosophy. In particular, we claim that adherence to such a principle, as currently formalized, not only fails to address many ways in which people’s autonomy can be violated, but also fails to grasp a broader range of AI-empowered harms profoundly tied to the legacy of colonization, which particularly affect the already marginalized and most vulnerable on a global scale. To counter this, we advocate for a relational turn in AI ethics, starting from a relational rethinking of the AI ethics principle of autonomy that we propose by drawing on theories of relational autonomy developed both in moral philosophy and in Ubuntu ethics.
Journal Article
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
by Elhalal, Anat; Floridi, Luciano; Kinsey, Libby
in AI ethics, Artificial intelligence, Autonomy
2020
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Journal Article
A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle
by Dankwa-Mullan, Irene; Chapman, Wendy W; Matheny, Michael E
in AI ethics, Artificial intelligence, Bias
2021
The COVID-19 pandemic has created multiple opportunities to deploy artificial intelligence (AI)-driven tools and applied interventions to understand, mitigate, and manage the pandemic and its consequences. The disproportionate impact of COVID-19 on racial/ethnic minority and socially disadvantaged populations underscores the need to anticipate and address social inequalities and health disparities in AI development and application. Before the pandemic, there was growing optimism about AI's role in addressing inequities and enhancing personalized care. Unfortunately, ethical and social issues that are encountered in scaling, developing, and applying advanced technologies in health care settings have intensified during the rapidly evolving public health crisis. Critical voices concerned with the disruptive potentials and risk for engineered inequities have called for reexamining ethical guidelines in the development and application of AI. This paper proposes a framework to incorporate ethical AI principles into the development process in ways that intentionally promote racial health equity and social justice. Without centering on equity, justice, and ethical AI, these tools may exacerbate structural inequities that can lead to disparate health outcomes.
Journal Article
Artificial intelligence ethics has a black box problem
by Bélisle-Pipon, Jean-Christophe; Couture, Vincent; Roy, Marie-Christine
in AI ethics, Artificial Intelligence, Black boxes
2023
It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding their methodology and stakeholder engagement. Surprisingly, only 38% report some form of stakeholder engagement (with 9% involving citizens) and most do not report their methodology for developing normative insights (15%). Our results show that documents with stakeholder engagement develop more comprehensive ethical guidance with greater applicability, and that the private sector is least likely to engage stakeholders. We argue that the current trend for enunciating AI ethical guidance not only poses widely discussed challenges of applicability in practice, but also of transparent development (as it rather behaves as a black box) and of active engagement of diversified, independent and trustworthy stakeholders. While most of these documents consider people and the common good as central to their telos, engagement with the general public is significantly lacking. As AI ethics moves from the initial race for enunciating general principles to more sustainable, inclusive and practical guidance, stakeholder engagement and citizen involvement will need to be embedded into the framing of ethical and societal expectations towards this technology.
Journal Article
Provable AI Ethics and Explainability in Medical and Educational AI Agents: Trustworthy Ethical Firewall
2025
Rapid advances in artificial intelligence are transforming high-stakes fields like medicine and education while raising pressing ethical challenges. This paper introduces the Ethical Firewall Architecture—a comprehensive framework that embeds mathematically provable ethical constraints directly into AI decision-making systems. By integrating formal verification techniques, blockchain-inspired cryptographic immutability, and emotion-like escalation protocols that trigger human oversight when needed, the architecture ensures that every decision is rigorously certified to align with core human values before implementation. The framework also addresses emerging issues, such as biased value systems in large language models and the risks associated with accelerated AI learning. In addition, it highlights the potential societal impacts—including workforce displacement—and advocates for new oversight roles like the Ethical AI Officer. The findings suggest that combining rigorous mathematical safeguards with structured human intervention can deliver AI systems that perform efficiently while upholding transparency, accountability, and trust in critical applications.
Journal Article
Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education
2024
The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper aims to introduce a comprehensive set of ethical principles for teaching AI ethics in medical education. This dynamic and principle-based approach is designed to be adaptive and comprehensive, addressing not only current but also emerging ethical challenges associated with the use of AI in medicine. This study conducts a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of these anticipated challenges are illustrated through a focused discussion on “informed consent” in the context of AI in medicine and medical education. This paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics—autonomy, beneficence, nonmaleficence, and justice—and extending them by integrating 3 public health ethics principles—efficiency, common good orientation, and proportionality. The principle-based approach to teaching AI ethics in medical education proposed in this study offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, as recommended in the current academic discourse.
By incorporating the 3 principles of public health ethics, this principle-based approach ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach for teaching AI ethics in medical education provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for the proposed principle-based approach to teaching AI ethics in medical education.
Journal Article
Mapping AI-ethics' dilemmas in forensic case work: To trust AI or not?
2023
In this paper I discuss the challenges and ethical considerations surrounding the use of Artificial Intelligence (AI) in forensic science, particularly in criminal cases, and I emphasize the need for a comprehensive definition of AI systems within the context of forensic science and the importance of accountability and adherence to legal procedures. Human involvement and oversight are deemed crucial in forensic science to ensure accountability, transparency, and the ability to articulate and interpret nuances that AI systems may currently lack.
Journal Article
Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI
2020
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low-level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How to adhere to the ethics of AI to build ethical AI?
Journal Article