24,174 result(s) for "weapon system"
Torpedo : inventing the military-industrial complex in the United States and Great Britain
In a bold reappraisal, Katherine Epstein uncovers the origins of the "military-industrial complex" not in the Cold War but in the decades before WWI, as the United States and Great Britain struggled to perfect a crucial new weapon: the self-propelled torpedo. Torpedo R&D sparked intellectual property battles that reshaped national security law.
Autonomous Weapons Systems and International Norms
In Autonomous Weapons Systems and International Norms Ingvild Bode and Hendrik Huelss present an innovative analysis of how testing, developing, and using weapons systems with autonomous features shapes ethical and legal norms, arguing that they have already established standards for what counts as meaningful human control.
A Comparative Analysis of the Definitions of Autonomous Weapons Systems
In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by states and international organisations such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems these weapons systems raise. This divergence is detrimental both to fostering an understanding of AWS and to facilitating agreement on the conditions of their deployment, the regulation of their use and, indeed, whether AWS are to be used at all. We draw on the comparative analysis to identify the essential aspects of AWS and then offer a definition that provides value-neutral ground for addressing the relevant ethical and legal problems. In particular, we identify four key aspects (autonomy; adapting capabilities of AWS; human control; and purpose of use) as the essential factors for defining AWS and as key considerations for the related ethical and legal implications.
Death by Moderation
This book addresses an important but little-noticed phenomenon in the revolutionary world of military technology. Across a wide range of otherwise-unrelated weapons programs, the Pentagon is now pursuing arms that are deliberately crafted to be less powerful, less deadly, and less destructive than the systems they are designed to supplement or replace. This direction is historically anomalous; military forces generally pursue ever-bigger bangs, but the modern conditions of counter-insurgency warfare and military operations 'other than war' (such as peacekeeping and humanitarian assistance) demand a military capable of modulated force. By providing a capacity to intervene deftly yet effectively, the new generations of 'useable' weaponry should enable the U.S. military to accomplish its demanding missions in a manner consistent with legal obligations, public relations realities, and political constraints. Five case studies are provided, regarding precision-guided 'smart bombs', low-yield nuclear weapons, self-neutralizing anti-personnel land mines, directed-energy anti-satellite weapons, and non-lethal weapons.
Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices
Technological developments in the sphere of artificial intelligence (AI) inspire debates about the implications of autonomous weapon systems (AWS), which can select and engage targets without human intervention. While ever more systems that could qualify as AWS, such as loitering munitions, are reportedly used in armed conflicts, the global discussion about a system of governance and international legal norms on AWS at the United Nations Convention on Certain Conventional Weapons (UN CCW) has stalled. In this article we argue for the necessity of adopting legal norms on the use and development of AWS. Without a framework for global regulation, state practices in using weapon systems with AI-based and autonomous features will continue to shape the norms of warfare and affect the level and quality of human control in the use of force. By examining the practices of China, Russia, and the United States in their pursuit of AWS-related technologies and their participation in the UN CCW debate, we acknowledge that their differing approaches make it challenging for states parties to reach an agreement on regulation, especially in a forum based on consensus. Nevertheless, we argue that global governance on AWS is not impossible. It will depend on the extent to which an actor or group of actors is ready to take the lead on an alternative process outside of the CCW, inspired by the direction of travel set by previous arms control and weapons ban initiatives.
Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit
In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). We suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS: a moral responsibility attributed to individuals in a justified and fair way, and accepted by individuals as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful moral responsibility can only be discharged by human agents who are willing to take a moral gambit: they decide to design, develop, or deploy AWS despite the uncertainty about the effects an AWS may produce, hoping that unintended, unwanted, or unforeseen outcomes never occur, but accepting that they will be held responsible if such outcomes do occur. We argue that, while a moral gambit is permissible for the use of non-lethal AWS, this is not the case for the actions of lethal autonomous weapon systems.
Autonomous weapon systems and jus ad bellum
In this article, we focus on the scholarly and policy debate on autonomous weapon systems (AWS) and particularly on the objections to the use of these weapons which rest on the jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs of going to war (proportionality) or by providing a propagandistic value (last resort). We argue that whilst these objections raise pressing concerns in their own right, they suffer from important limitations: they overlook the difficulties of calculating ad bellum proportionality; confuse the concept of proportionality of effects with the precision of weapon systems; disregard the ever-changing nature of war and of its ethical implications; and mistake the moral obligation imposed by the principle of last resort for the impact that AWS may have on political decisions to resort to war. Our analysis does not entail that AWS are acceptable or justifiable, but it shows that ad bellum principles are not the best set of ethical principles for tackling the ethical problems raised by AWS, and that developing an adequate understanding of the transformations that the use of AWS poses to the nature of war itself is a necessary, preliminary requirement for any ethical analysis of the use of these weapons.
When stigmatization does not work: over-securitization in efforts of the Campaign to Stop Killer Robots
This article reflects on securitization efforts with respect to ‘killer robots’, known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Marking exactly a decade of its activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda thus far, despite all the resources, means, and support at its disposal. To achieve this objective, we challenge the popular assumption that strong stigmatization is the universally best strategy for humanitarian disarmament. We investigate the consequences of two specifics of AWS which set them apart from the processes and successes of the campaigns to ban anti-personnel landmines, cluster munitions, and laser-blinding weapons: the complexity of AWS as a distinct weapons category, and the subsequent circumvention of that complexity through the use of pop culture, namely science-fiction imagery. We particularly focus on two mechanisms through which such distortion has occurred: hybridization and grafting. These provide the conceptual basis and heuristic tools to unpack the paradox of over-securitization: success in broadening the stakeholder base (through the first mechanism) and in deepening the sense of insecurity (through the second) does not necessarily lead to the achievement of the desired prohibitory norm. In conclusion, we ask whether it is not time for a more epistemically oriented expert debate with a less ambitious, lowest-common-denominator strategy as the preferred model of arms control for such a complex weapons category.
Energizing Data-Driven Operations at the Tactical Edge
Significant efforts are ongoing within the U.S. Air Force (USAF) to improve national security and competitiveness by harnessing the growing power of information technologies, such as artificial intelligence (AI) and robotics. Product and process technologies are being researched, experimented with, and integrated into future warfighting concepts and plans. A significant part of this effort is focused on integrating operations, from the strategic to the tactical and across all lines of effort. A question that must be asked in considering these future warfighting concepts is: how will the devices that enable this knowledge-based future be powered? The abundant energy supplies that characterize peacetime operating environments may not be readily available at the far reaches of force projection - the tactical edge - during conflict. Understanding the energy challenges associated with continued data collection, processing, storage, analysis, and communications at the tactical edge is an important part of developing plans for meeting future competition on the battlefield. This report identifies challenges and issues associated with energy needs at the tactical edge, as well as potential solutions to help address them. The recommendations of Energizing Data-Driven Operations at the Tactical Edge address understanding these requirements and the cascading effects of not meeting them, integrating energy needs for data processing into mission and unit readiness assessments, and research into product and process technologies for energy-efficient computation, resilience, interoperability, and alternative approaches to energy management at the tactical edge.
Multidimensional Effectiveness Evaluation of Weapon System-of-Systems Based on Hypernetwork Under Communication Constraints
A weapon system-of-systems (WSoS) is a higher-level system comprising various functional weapon equipment systems interconnected via mutual relationships, forming a hierarchical structure that can generate overall combat effectiveness. A critical factor in assessing WSoS performance is the kill chain, and quantifying the combat effectiveness of a WSoS based on the kill chain is crucial for optimizing the system’s structure and improving the understanding of the battlefield situation, holding significant military value. Scenarios involving restricted communication (e.g., limitations in weapon system capabilities, terrain obstructions, or enemy interference) make analyzing WSoS performance challenging, so proposed here is a kill chain-based method for analyzing WSoS capability in order to address the impact of communication restrictions. Specifically, a generalized multilayer network model with information relays is used to network the WSoS, then based on this, a capability-matrix-based method for generating and analyzing the kill chain is designed. Experiments show that the proposed model and method enable effective generation and analysis of the kill chain in communication-denial situations. Furthermore, a framework for evaluating WSoS performance is established from the dimensions of mission tasks and network structure, and combat effectiveness is assessed by quantifying performance indicators based on kill chain information. Finally, case studies are used to validate the proposed algorithm and show its reliability.
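The kill-chain idea in the last abstract - chains through a layered network that disappear when communication links are denied - can be illustrated with a minimal sketch. All names, the three-layer sensor/decider/shooter structure, and the edge sets below are illustrative assumptions, not taken from the paper:

```python
# Toy illustration (assumed structure, not the paper's model): a kill chain
# is a sensor -> decider -> shooter path over a set of communication links;
# denying links prunes the set of feasible chains.
from itertools import product

def find_kill_chains(links, sensors, deciders, shooters):
    """Enumerate sensor -> decider -> shooter chains reachable over `links`."""
    return [
        (s, d, w)
        for s, d, w in product(sorted(sensors), sorted(deciders), sorted(shooters))
        if (s, d) in links and (d, w) in links  # both hops must be connected
    ]

# Hypothetical network: two sensors, one decision node, two shooters.
links = {("S1", "D1"), ("S2", "D1"), ("D1", "W1"), ("D1", "W2")}
chains = find_kill_chains(links, {"S1", "S2"}, {"D1"}, {"W1", "W2"})
# Communication denial: jamming the S2 -> D1 link removes every chain through S2.
jammed = links - {("S2", "D1")}
chains_jammed = find_kill_chains(jammed, {"S1", "S2"}, {"D1"}, {"W1", "W2"})
```

Counting or weighting the surviving chains under different link-removal scenarios is one simple way to turn such a network model into an effectiveness indicator, in the spirit of the capability-matrix analysis the abstract describes.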