821 result(s) for "Technological mediation"
Harmonizing Artificial Intelligence for Social Good
To become more broadly applicable, positions on AI ethics require perspectives from non-Western regions and cultures such as China and Japan. In this paper, we propose that adding the concept of harmony to the discussion on ethical AI would be highly beneficial due to its centrality in East Asian cultures and its applicability to the challenge of designing AI for social good. We first present a synopsis of different definitions of harmony in multiple contexts, such as music and society, which reveals that the concept is, at its core, about well-balanced relationships and appropriate actions which give rise to order, balance, and aesthetically pleasing phenomena. The mediator for these well-balanced relationships is Takt, the ability to act thoughtfully and sensibly in a specific situation and to put things into proportion and order. We propose that the central challenge of building harmonizing AI is to make intelligent systems tactful and also to design and use them tactfully. For an AI system to become tactful, it needs an advanced sensitivity to the specific contexts in which it operates and to their social and ethical implications, as well as the capability to approximately infer the emotional and cognitive states of the people with whom it interacts.
Ethics from Within
According to Collingridge's "control dilemma," influencing technological developments is easy while their implications are not yet manifest, yet once we know these implications, they are difficult to change. This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, when technologies affect both society and the value frameworks we use to evaluate them. Early in its development, we do not know how a technology will affect the value frameworks from which it will be evaluated, while later, when the implications for society and morality are clearer, it is more difficult to guide the development in a desirable direction. Present-day approaches to this dilemma either focus on methods to anticipate the ethical impacts of a technology ("technomoral scenarios"), which are too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), which discards anticipation of future implications. We present the approach of technological mediation as an alternative that focuses on the dynamics of the interaction between technologies and human values. By investigating online discussions about Google Glass, we examine how people articulate new meanings of the value of privacy. This study of "morality in the making" allows us to develop a modest and empirically informed form of anticipation.
Aligning artificial intelligence with human values: reflections from a phenomenological perspective
Artificial Intelligence (AI) must be directed at humane ends. The development of AI has produced great uncertainty about how to ensure that AI aligns with human values (AI value alignment) throughout AI operations, from design to use. To address this problem, we adopt the phenomenological theories of material values and technological mediation as a starting point. In this paper, we first discuss AI value alignment as treated in the relevant AI studies. Second, we briefly present what material values and technological mediation are and reflect on AI value alignment through the lenses of these theories. We conclude that a finite set of human values can be defined and adapted to the stable life tasks that AI systems will be called upon to accomplish. AI value alignment can also be fostered between designers and users through technological mediation. On that foundation, we propose a set of common principles for understanding AI value alignment through phenomenological theories. This paper contributes the perspective of phenomenological theories to the discourse on AI alignment with human values.
Anthropology and Voice
Voice is both a set of sonic, material, and literary practices shaped by culturally and historically specific moments and a category invoked in discourse about personal agency, communication and representation, and political power. This review focuses on scholarship produced since the 1990s in a variety of fields, addressing the status of the voice within Euro-Western modernity, voice as sound and embodied practice, technological mediation, and voicing. It then turns to the ways in which anthropology and related fields have framed the relationship between voice and identity, status, subjectivity, and publics. The review suggests that attending to voice in its multiple registers gives particular insight into the intimate, affective, and material/embodied dimensions of cultural life and sociopolitical identity. Questions of voice are implicated in many issues of concern to contemporary anthropology and can also lend theoretical acuity to concepts of more general concern to social theory.
Virtual Representations and Their Ethical Implications
This paper will address ethical concerns surrounding the representation of vulnerable groups as well as the methodological challenges inherent in using artificial intelligence and human-like computer-generated characters in human studies that involve representing such groups. Such concerns focus on consequences arising from the technological affordances of new systems for creating narratives, as well as graphical and audio representations that are capable of portraying beings with close resemblance to humans. Enacting such virtual representations of humans inevitably gives rise to important ethical questions: (1) Who has the right to tell certain stories? (2) Is it ethical to change the medium of a narrative and the identity of a protagonist? (3) Do such changes, or technological mediations, affect whether a vulnerable group will be fairly and accurately portrayed? (4) And what are the implications, either way? While the backdrop of the paper involves discussing the potential of virtual representation as a mediative tool for moral and social change, the ethical implications inherent in the use of new cutting-edge technologies, such as OpenAI’s ChatGPT and Unreal Engine’s MetaHuman, to create human-like virtual character narratives call for theoretical scrutiny from a methodological perspective.
Technological Environmentality: Conceptualizing Technology as a Mediating Milieu
After several technological revolutions in which technologies became ever more present in our daily lives, the digital technologies currently being developed are actually fading from sight. Information and Communication Technologies (ICTs) are not only embedded in devices that we explicitly “use” but increasingly become an intrinsic part of the material environment in which we live. How should we conceptualize the role of these new technological environments in human existence? And how can we anticipate the ways in which these technologies will mediate our everyday lives? To answer these questions, we draw on two approaches, each of which offers a framework for conceptualizing these new technological environments: Postphenomenology and Material Engagement Theory. As we will show, on its own each of these approaches fails to do justice to the new environmental role of technology and its implications for human existence. But by bringing together Postphenomenology’s account of technological mediation and Material Engagement Theory’s account of engaging with environments, it becomes possible to sufficiently account for the new environmental workings of technology. To do justice to these workings of environmental technologies, we introduce and develop the concept of “Technological Environmentality.”
To-Do Is to Be: Foucault, Levinas, and Technologically Mediated Subjectivation
The theory of technological mediation aims to take technological artifacts seriously, recognizing the constitutive role they play in how we experience the world, act in it, and how we are constituted as (moral) subjects. Its quest for a compatible ethics has led it to Foucault’s “care of the self,” i.e., a transformation of the self by oneself through self-discipline. In this regard, technologies have been interpreted as power structures to which one can relate through Foucaultian “technologies of the self” or ascetic practices. However, this leaves unexplored how concrete technologies can actually support the process of self-care. This paper explores this possibility by examining one such technology: a gamified To-Do list app. In doing so, it first shows that despite the apparent straightforwardness of gamification, confrontation and shame play an important role in how the app motivates me to do better. Second, inspired by Ihde’s schema of human-technology relations, it presents different ways in which the app may confront me with myself. Subsequently, it accounts for the motivation and shame that this technologically mediated confrontation with myself invokes through a Levinasian account of ethical subjectivity. In so doing, it also shows how Levinas’ phenomenology implies a responsibility for self-care and how nonhuman, technological others may still call me to responsibility. It concludes with a reflection on the role of gamification in technologically mediated subjectivation and some implications for design.
AI as a Political Artefact: Technological Mediation in Constructivist International Relations
This article advances a constructivist account of artificial intelligence (AI) as a political artefact in international relations (IR). Drawing on technopolitics, Actor–Network Theory (ANT), and postphenomenology, it argues that AI systems should be examined not only through the meanings stakeholders ascribe to them, but also through how design choices materialize normative commitments that preconfigure perception, judgment, and action. ANT highlights the agency of non-human actants and the affordances inscribed in technical artefacts, while postphenomenology specifies the mediation modes – embodiment, hermeneutic, alterity, and background – through which increasingly autonomous and data-driven systems reshape IR decision environments. The article is conceptual in nature, supported by illustrative evidence from decision-support systems (DSS) used in NATO and the European Union, including AI-enabled tools for operational command and conflict early warning. These cases demonstrate how DSS redistribute epistemic authority and responsibility across technical and institutional layers, accelerating sense-making under time pressure while introducing new risks. The contribution addresses a research gap by integrating insights from postphenomenology and the sociology of technology into IR, proposing a permissive ontology that treats (semi-)autonomous systems as analytical actants without assuming personhood, and offering conceptual tools for examining mediation in practice and the political implications of disruptive technologies.
Understanding science-in-the-making by letting scientific instruments speak
Latour encourages us to use science-in-the-making as an entry point to understanding science, because it allows us to see how scientific knowledge is constituted and through which processes the ‘absolute certainties’ of ready-made science appear. He approaches science-in-the-making from the perspective of semiotics because it enables him (1) to attribute equal importance to humans and nonhumans, and (2) to let the actors in scientific practices speak for themselves. We argue that Latour’s semiotic approach to science-in-the-making and his understanding of scientific instruments as inscription devices do not fulfill these desiderata. This, in turn, prevents him from understanding the crucial role that scientific instruments play in science-in-the-making. As an alternative to Latour’s semiotic approach, we present a postphenomenological approach to studying science-in-the-making. Using the notion of technological mediation, we argue that scientific instruments actively mediate how reality becomes present to – and is treated by – scientists. Focusing on how intentional relations between scientists and the world are mediated by scientific instruments makes it possible to turn them into genuine actors that speak for themselves, thereby recognizing their constitutive role in the development of the interpretational frameworks of scientists. We then show how a postphenomenological approach can be understood as an ethnomethodology of human-technology relations that meets both of Latour’s requirements when studying science-in-the-making.
Philosophical Potencies of Postphenomenology
As a distinctive voice in the current philosophy of technology, postphenomenology elucidates various ways in which technologies “shape” both the world (or objectivity) and humans (or subjectivity) in it. Distancing itself from more speculative approaches, postphenomenology advocates the so-called empirical turn in philosophy of technology: It focuses on the diverse effects of particular technologies instead of speculating on the essence of technology and its general impact. Critics of postphenomenology argue that by turning to particularities and emphasizing that technologies are always open to different uses and interpretations, postphenomenology becomes unable to realize how profoundly technology determines our being in the world. Seeking to evaluate the postphenomenological (in)ability to radically reflect on the human being conditioned by technology, I discuss the two most pertinent criticisms of postphenomenology: an “existential” one by Robert C. Scharff and an “ontological” one by Jochem Zwier, Vincent Blok, and Pieter Lemmens. Assessing the ontological alternative, I point to the incapacity of Heidegger’s concept of Enframing to do justice to material technologies. Simultaneously, I acknowledge the necessity of speculating on (the concept of) technology as transcending concrete technologies. Such speculation would be instrumental in reviving Ihde’s idea of the non-neutrality of technology in its full philosophical potency.