Catalogue Search | MBRL
5 result(s) for "Bariach, Ben"
Artificial intelligence in support of the circular economy: ethical considerations and a path forward
by Juneja, Prathm; Zhang, Joyce; Taddeo, Mariarosaria
in Artificial Intelligence; Business models; Circular economy
2024
The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of using AI to achieve a transition to CE have been limited. This article addresses this gap. It outlines how AI is and can be used to transition towards CE, analyzes the ethical risks associated with using AI for this purpose, and offers recommendations to policymakers and industry on how to minimise these risks.
Journal Article
Towards a Harms Taxonomy of AI Likeness Generation
2024
Generative artificial intelligence models, when trained on a sufficient number of a person's images, can replicate their identifying features in a photorealistic manner. We refer to this process as 'likeness generation'. Likeness-featuring synthetic outputs often present a person's likeness without their control or consent, and may lead to harmful consequences. This paper explores philosophical and policy issues surrounding generated likeness. It begins by offering a conceptual framework for understanding likeness generation by examining the novel capabilities introduced by generative systems. The paper then establishes a definition of likeness by tracing its historical development in legal literature. Building on this foundation, we present a taxonomy of harms associated with generated likeness, derived from a comprehensive meta-analysis of relevant literature. This taxonomy categorises harms into seven distinct groups, unified by shared characteristics. Utilising this taxonomy, we raise various considerations that need to be addressed for the deployment of appropriate mitigations. Given the multitude of stakeholders involved in both the creation and distribution of likeness, we introduce concepts such as indexical sufficiency, a distinction between generation and distribution, and harms as having a context-specific nature. This work aims to serve industry, policymakers, and future academic researchers in their efforts to address the societal challenges posed by likeness generation.
Sociotechnical Safety Evaluation of Generative AI Systems
by Kay, Jackie; Isaac, William; Gabriel, Iason
in Artificial intelligence; Context; Generative artificial intelligence
2023
Generative AI systems produce a range of risks. To ensure the safety of generative AI systems, these risks must be evaluated. In this paper, we make two main contributions toward establishing such evaluations. First, we propose a three-layered framework that takes a structured, sociotechnical approach to evaluating these risks. This framework encompasses capability evaluations, which are the main current approach to safety evaluation. It then reaches further by building on system safety principles, particularly the insight that context determines whether a given capability may cause harm. To account for relevant context, our framework adds human interaction and systemic impacts as additional layers of evaluation. Second, we survey the current state of safety evaluation of generative AI systems and create a repository of existing evaluations. Three salient evaluation gaps emerge from this analysis. We propose ways forward to closing these gaps, outlining practical steps as well as roles and responsibilities for different actors. Sociotechnical safety evaluation is a tractable approach to the robust and comprehensive safety evaluation of generative AI systems.
Imagen 3
2024
We introduce Imagen 3, a latent diffusion model that generates high quality images from text prompts. We describe our quality and responsibility evaluations. Imagen 3 is preferred over other state-of-the-art (SOTA) models at the time of evaluation. In addition, we discuss issues around safety and representation, as well as methods we used to minimize the potential harm of our models.