30 results for "Bode, Ingvild"
Autonomous Weapons Systems and International Norms
In Autonomous Weapons Systems and International Norms, Ingvild Bode and Hendrik Huelss present an innovative analysis of how testing, developing, and using weapons systems with autonomous features shapes ethical and legal norms, arguing that these practices have already set standards for what counts as meaningful human control.
Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems
Lethal autonomous weapon systems (LAWS) are the subject of considerable international debate revolving around the extent to which humans remain in control over the use of force. But precisely what is at stake is less clear, as stakeholders hold different perspectives on the technologies that animate LAWS. Such differences matter because they shape the substance of the debate, determine which regulatory options are put on the table, and shape normativity on LAWS in the sense of understandings of appropriateness. To understand this process, I draw on practice theories, science and technology studies (STS), and critical norm research. I argue that a constellation of communities of practice (CoPs) shapes the public debate about LAWS, and I focus on three of these CoPs: diplomats, weapon manufacturers, and journalists. Actors in these CoPs discursively perform practices of boundary-work, in the STS sense, to shape understandings of the technologies at the heart of LAWS: automation, autonomy, and AI. I analyze these dynamics empirically in two steps: first, by offering a general-level analysis of the practices of boundary-work performed by diplomats at the Group of Governmental Experts on LAWS from 2017 to 2022; and second, by examining such practices performed by weapon manufacturers and journalists in relation to the use of loitering munitions, a particular type of LAWS, in the Second Libyan Civil War (2014–2020).
Autonomous weapons systems and changing norms in international relations
Autonomous weapons systems (AWS) are emerging as key technologies of future warfare. So far, academic debate has concentrated on the legal-ethical implications of AWS, but these discussions do not capture how AWS may shape norms by defining diverging standards of appropriateness in practice. In discussing AWS, the article formulates two critiques of constructivist models of norm emergence: first, constructivist approaches privilege the deliberative over the practical emergence of norms; and second, they overemphasise fundamental norms rather than also accounting for procedural norms, which we introduce in this article. Elaborating on these critiques allows us to address a significant gap in research: we examine how standards of procedural appropriateness emerging in the development and usage of AWS often contradict fundamental norms and public legitimacy expectations. Normative content may therefore be shaped procedurally, challenging conventional understandings of how norms are constructed and considered relevant in International Relations. On this basis, we outline the contours of a research programme on the relationship between norms and AWS, arguing that AWS can have fundamental normative consequences by setting novel standards of appropriate action in international security policy.
Cross-cultural narratives of weaponised artificial intelligence: Comparing France, India, Japan and the United States
Stories about ‘intelligent machines’ have long featured in popular culture. Existing research has mapped these artificial intelligence (AI) narratives but lacks an in-depth understanding of (a) narratives related specifically to weaponised AI and autonomous weapon systems and (b) whether and how these narratives resonate across different states and their associated cultural contexts. We address these gaps by examining narratives about weaponised AI among publics in France, India, Japan and the US. Based on a public opinion survey conducted in these states in 2022–2023, we find that narratives found in English-language popular culture are shared cross-culturally, albeit with some variations. However, we also find culturally distinct narratives, particularly in India and Japan. Further, we assess whether these narratives shape public attitudes towards regulating weaponised AI. Although respondents demonstrate overall uncertainty and a lack of knowledge regarding developments in the sphere of weaponised AI, their assessments of these technologies lean negative, and they mostly support regulation. With these findings, our study offers a first step towards investigating the extent to which weaponised AI narratives circulate globally and how salient perceptions of these technologies are across different publics.
Establishing human responsibility and accountability at early stages of the lifecycle for AI-based defence systems
The use of AI technologies in weapons systems has triggered a decade-long international debate, especially with regard to human control, responsibility, and accountability around autonomous and intelligent systems (AIS) in defence. However, most of these ethical and legal discussions have revolved around the point of use of a hypothetical AIS, leaving one critical component under-appreciated: human decision-making across the full timeline of the AIS lifecycle. When discussions of human involvement start at the point at which a hypothetical AIS has taken some undesirable action, they typically prompt the question: “what happens next?” This approach primarily concerns the technology at the time of use and may be appropriate for conventional weapons systems, for which humans have clear lines of control, and therefore accountability, at the time of use. However, this is not the case for AIS. Rather than focusing first on the system in its comparatively most autonomous state, it is more helpful to consider when, along the lifecycle, humans have clearer, more direct control over the system (e.g. through research, design, testing, or procurement) and how, at those earlier times, human decision-makers can take steps to decrease the likelihood that an AIS will perform ‘inappropriately’ or take incorrect actions. In this paper, we therefore argue that addressing many of the arising concerns requires a shift in how and when participants in the international debate on AI in the military domain think about, talk about, and plan for human involvement across the full lifecycle of AIS in defence. This shift includes a willingness to hold human decision-makers accountable even if their roles occurred at much earlier stages of the lifecycle. Of course, this raises another question: “How?” We close by formulating a number of recommendations, including the adoption of the IEEE-SA Lifecycle Framework, the consideration of policy knots, and the adoption of Human Readiness Levels.
Technologische Herausforderungen: Künstliche Intelligenz, Normativität, Normalität und Praktiken jenseits des öffentlichen Raums [Technological Challenges: Artificial Intelligence, Normativity, Normality, and Practices beyond the Public Sphere]
The increasing importance of artificial intelligence (AI) in many areas of politically relevant decision-making raises new questions about the role of norms. Existing norm research offers important, diversified concepts, particularly on the emergence, impact, and change of norms in the public-discursive space. However, operational practices beyond the public sphere have so far received little theoretical attention. In the context of analysing military applications of AI, this contribution argues, first, that the practices of developing and using AI technologies carried out by various actors beyond the public sphere are important and currently under-theorised sources of normativity. Second, it emphasises the interplay between normativity, i.e. ideas about moral duties and justice, and normality, i.e. ideas about the typical and average, in the emergence and development of norms. This theoretical reflection offers IR norm research a broader analytical view of the constitution of normative space.
The need for and nature of a normative, cultural psychology of weaponized AI (artificial intelligence)
The use of AI in weapons systems raises numerous ethical issues. To date, work on weaponized AI has tended to be theoretical and normative in nature, consisting of critical policy analyses and ethical considerations carried out by philosophers, legal scholars, and political scientists. However, adequately addressing the cultural and social dimensions of technology requires insights and methods from empirical moral and cultural psychology. To that end, this position piece describes the motivations for, and sketches the nature of, a normative, cultural psychology of weaponized AI. The motivations for this project include the increasingly global, cross-cultural, and international nature of technologies and the counter-intuitive nature of normative thoughts and behaviors. The project itself consists in developing standardized measures of AI ethical reasoning and intuitions, coupled with questions exploring the development of norms, administered and validated across different cultural groups and disciplinary contexts. The goal of this piece is not to provide a comprehensive framework for understanding the cultural facets and psychological dimensions of weaponized AI but, rather, to outline in broad terms the contours of an emerging research agenda.
Ensuring the exercise of human agency in AI-based military systems: concerns across the lifecycle
In recent years, the number of governance initiatives on applications of artificial intelligence (AI) in the military domain has expanded. As actors across the governance landscape turn towards implementing these initiatives, principles will need to be spelled out in practical terms, marking a decisive phase in the governance process. This includes the exercise of human agency in the context of using AI-based systems in the military domain. This paper considers what the notion of exercising human agency means across the lifecycle of AI systems. A lifecycle framework acknowledges that ensuring a qualitatively high exercise of human agency in AI-based systems cannot rely exclusively on the tail end of the targeting decision-making process. Rather, it needs to be built into the lifecycle of AI-based systems, from before the potential development of such systems all the way to post-use review. Each lifecycle stage raises manifold questions and challenges that various stakeholders need to address in their efforts to sustain and strengthen human agency. The paper highlights twelve key technical, ethical, legal, and strategic concerns across different stages of the lifecycle. These sets of concerns illustrate the value of developing more fine-grained thinking around applied lifecycle models. We conclude that ensuring the exercise of human agency in the use of AI-based systems in military contexts will require careful and reflective decision-making around these questions and challenges among the stakeholders involved.
Francis Deng and the Concern for Internally Displaced Persons: Intellectual Leadership in the United Nations
Using the case of Francis Deng, Representative of the Secretary-General for internally displaced persons, as an example, this article considers how temporary civil servants may become intellectual leaders within the United Nations. During his 1992–2004 tenure, Deng managed to raise assistance and protection expectations for the internally displaced by framing their concerns through the concept of sovereignty as responsibility. He also contributed to legal change by formulating protection and assistance standards: the Guiding Principles on Internal Displacement. The article argues that a combination of three factors enabled him to exercise intellectual leadership: first, his insider-outsider position at the border between the UN Secretariat (the second UN) and the nongovernmental organizations, academic scholars, and independent experts who engage regularly with the UN (the third UN); second, his personal qualities; and third, his effective framing at an opportune moment.