Catalogue Search | MBRL
Explore the vast range of titles available.
15,119 result(s) for "Technology Philosophy Research."
Enhancing Evolution
2010, 2007, 2011
In Enhancing Evolution, leading bioethicist John Harris dismantles objections to genetic engineering, stem-cell research, designer babies, and cloning and makes an ethical case for biotechnology that is both forthright and rigorous. Human enhancement, Harris argues, is a good thing--good morally, good for individuals, good as social policy, and good for a genetic heritage that needs serious improvement. Enhancing Evolution defends biotechnological interventions that could allow us to live longer, healthier, and even happier lives by, for example, providing us with immunity from cancer and HIV/AIDS. Further, Harris champions the possibility of influencing the very course of evolution to give us increased mental and physical powers--from reasoning, concentration, and memory to strength, stamina, and reaction speed. Indeed, he says, it's not only morally defensible to enhance ourselves; in some cases, it's morally obligatory.
In a new preface, Harris offers a glimpse at the new science and technology to come, equipping readers with the knowledge to assess the ethics and policy dimensions of future forms of human enhancement.
Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach
by Cath, Corinne; Mittelstadt, Brent; Wachter, Sandra
in Artificial intelligence; Development strategies; Ethics
2018
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: (a) the development of a ‘good AI society’; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, we conclude by suggesting a two-pronged approach.
Journal Article
How data happened : a history from the age of reason to the age of algorithms
"From facial recognition--capable of checking people into flights or identifying undocumented residents--to automated decision systems that inform who gets loans and who receives bail, each of us moves through a world determined by data-empowered algorithms. But these technologies didn't just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search. Expanding on the popular course they created at Columbia University, Chris Wiggins and Matthew L. Jones illuminate the ways in which data has long been used as a tool and a weapon in arguing for what is true, as well as a means of rearranging or defending power. They explore how data was created and curated, as well as how new mathematical and computational techniques developed to contend with that data serve to shape people, ideas, society, military operations, and economies. Although technology and mathematics are at its heart, the story of data ultimately concerns an unstable game among states, corporations, and people. How were new technical and scientific capabilities developed; who supported, advanced, or funded these capabilities or transitions; and how did they change who could do what, from what, and to whom? Wiggins and Jones focus on these questions as they trace data's historical arc, and look to the future. By understanding the trajectory of data--where it has been and where it might yet go--Wiggins and Jones argue that we can understand how to bend it to ends that we collectively choose, with intentionality and purpose." -- Publisher marketing.
Creative understanding
1990
"A pleasure to read. Gracefully written by a scholar well grounded in the relevant philosophical, historical, and technical background. . . . a helpfully clarifying review and analysis of some issues of importance to recent philosophy of science and a source of some illuminating insights."—Burke Townsend, Philosophy of Science
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
by Elhalal, Anat; Floridi, Luciano; Kinsey, Libby
in AI ethics; Artificial intelligence; Autonomy
2020
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how’. Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Journal Article
Definitions and Conceptual Dimensions of Responsible Research and Innovation: A Literature Review
2017
The aim of this study is to provide a discussion of the definitions and conceptual dimensions of Responsible Research and Innovation (RRI) based on findings from the literature. The study presents the outcomes of a literature review of 235 RRI-related articles, selected from the EBSCO and Google Scholar databases for their treatment of the definitions and dimensions of RRI. The results indicated that while administrative definitions were widely quoted in the reviewed literature, they were not substantially further elaborated. Academic definitions were mostly derived from the institutional definitions; however, more empirical studies should be conducted in order to give a broader empirical basis to the development of the concept. Four distinct conceptual dimensions of RRI emerged from the reviewed literature: inclusion, anticipation, responsiveness and reflexivity. Two emerging conceptual dimensions were also added: sustainability and care.
Journal Article
In AI We Trust: Ethics, Artificial Intelligence, and Reliability
by Ryan, Mark
in Artificial intelligence; Artificial intelligence ethics; Biomedical Engineering and Bioengineering
2020
One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions—requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all, but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using it.
Journal Article
Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences
2008
This book examines the life-science experiments that give rise to the dual-use dilemma. It therefore addresses a topic of tremendous contemporary importance. This is the first book-length treatment of the subject by professional ethicists.