494 result(s) for "Weapons systems Automation."
Lethal autonomous weapons : re-examining the law and ethics of robotic warfare
\"Because of the increasing use of Unmanned Aerial Vehicles (UAVs, also commonly known as drones) in various military and para-military (i.e., CIA) settings, there has been increasing debate in the international community as to whether it is morally and ethically permissible to allow robots (flying or otherwise) the ability to decide when and where to take human life. In addition, there has been intense debate as to the legal aspects, particularly from a humanitarian law framework. In response to this growing international debate, the United States government released the Department of Defense (DoD) 3000.09 Directive (2011), which sets a policy for if and when autonomous weapons would be used in US military and para-military engagements. This US policy asserts that only \"human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense ...\". This statement implies that outside of defensive applications, autonomous weapons will not be allowed to independently select and then fire upon targets without explicit approval from a human supervising the autonomous weapon system. Such a control architecture is known as human supervisory control, where a human remotely supervises an automated system (Sheridan 1992). The defense caveat in this policy is needed because the United States currently uses highly automated systems for defensive purposes, e.g., Counter Rocket, Artillery, and Mortar (C-RAM) systems and Patriot anti-missile missiles. Due to the time-critical nature of such environments (e.g., soldiers sleeping in barracks within easy reach of insurgent shoulder-launched missiles), these automated defensive systems cannot rely upon a human supervisor for permission because of the short engagement times and the inherent human neuromuscular lag which means that even if a person is paying attention, there is approximately a half-second delay in hitting a firing button, which can mean the difference for life and death for the soldiers in the barracks. So as of now, no US UAV (or any robot) will be able to launch any kind of weapon in an offensive environment without human direction and approval. However, the 3000.09 Directive does contain a clause that allows for this possibility in the future. This caveat states that the development of a weapon system that independently decides to launch a weapon is possible but first must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the Chairman of the Joint Chiefs of Staff. Not all stakeholders are happy with this policy that leaves the door open for what used to be considered science fiction. Many opponents of such uses of technologies call for either an outright ban on autonomous weaponized systems, or in some cases, autonomous systems in general (Human Rights Watch 2013, Future of Life Institute 2015, Chairperson of the Informal Meeting of Experts 2016). Such groups take the position that weapons systems should always be under \"meaningful human control,\" but do not give a precise definition of what this means. One issue in this debate that often is overlooked is that autonomy is not a discrete state, rather it is a continuum, and various weapons with different levels of autonomy have been in the US inventory for some time. Because of these ambiguities, it is often hard to draw the line between automated and autonomous systems. 
Present-day UAVs use the very same guidance, navigation and control technology flown on commercial aircraft. Tomahawk missiles, which have been in the US inventory for more than 30 years, are highly automated weapons with accuracies of less than a meter. These offensive missiles can navigate by themselves with no GPS, thus exhibiting some autonomy by today's definitions. Global Hawk UAVs can find their way home and land on their own without any human intervention in the case
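The timing argument in the abstract above can be made concrete with a rough back-of-the-envelope calculation. All of the numbers in the sketch below (threat speed, detection range, intercept time) are illustrative assumptions, not figures from the source; only the approximate half-second human reaction lag comes from the abstract.

```python
# Illustrative engagement-window arithmetic (hypothetical numbers, not from the source).
threat_speed_m_s = 300.0    # assumed speed of an incoming rocket/mortar round
detection_range_m = 600.0   # assumed range at which the defensive radar first tracks it
human_reaction_s = 0.5      # approximate human neuromuscular lag cited in the abstract
intercept_time_s = 0.8      # assumed time a C-RAM-style system needs to aim and fire

time_to_impact_s = detection_range_m / threat_speed_m_s   # total engagement window
margin_automated_s = time_to_impact_s - intercept_time_s
margin_supervised_s = margin_automated_s - human_reaction_s

print(f"Engagement window:         {time_to_impact_s:.2f} s")
print(f"Margin, fully automated:   {margin_automated_s:.2f} s")
print(f"Margin, human-in-the-loop: {margin_supervised_s:.2f} s")
# Under these assumptions, the half-second human delay consumes a large fraction
# of the remaining margin, which is the point the abstract makes about defensive systems.
```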
We kill because we can
Welcome to the Drone Age. Where self-defense has become naked aggression. Where courage has become cowardice. Where black ops have become standard operating procedure. In this remarkable and often shocking book, Laurie Calhoun dissects the moral, psychological, and cultural impact of remote-control killing in the twenty-first century. Can a drone operator conducting a targeted killing be likened to a mafia hitman? What difference, if any, is there between the Trayvon Martin case and the drone killing of a teen in Yemen? We Kill Because We Can takes a scalpel to the dark heart of Western foreign policy in order to answer these and many other troubling questions.
Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci
Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
Weapons Detection for Security and Video Surveillance Using CNN and YOLO-V5s
In recent years, the number of gun-related incidents has exceeded 250,000 per year, and over 85% of the existing 1 billion firearms are in civilian hands; manual monitoring has not proven effective in detecting firearms, which is why an automated weapon detection system is needed. Various convolutional neural network (CNN) weapon detection systems have been proposed in the past and generate good results. However, these techniques have high computational overhead and are too slow to provide the real-time detection that is essential for a weapon detection system. These models also have a high rate of false negatives because they often fail to detect guns due to the low quality and visibility issues of surveillance video. This research work aims to minimize the rates of false negatives and false positives in weapon detection while keeping detection speed as a key parameter. The proposed framework is based on You Only Look Once (YOLO) and Area of Interest (AOI). Initially, the model takes pre-processed frames in which the background is removed using a Gaussian blur algorithm. The proposed architecture is assessed through performance parameters such as false negatives, false positives, precision, recall rate, and F1 score. The results of this research work show that YOLO-v5s achieves a high recall rate and detection speed, reaching 0.010 s per frame compared to 0.17 s for Faster R-CNN. It is promising for use in the field of security and weapon detection.
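As a rough illustration of the kind of pipeline this abstract describes (Gaussian-blur pre-processing followed by YOLOv5s detection on video frames), the sketch below uses OpenCV and the public ultralytics/yolov5 torch.hub interface. The weights file weapon_yolov5s.pt, the input video path, the confidence threshold, and the blur kernel size are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a Gaussian-blur + YOLOv5s weapon-detection loop.
# Assumes a custom-trained weights file "weapon_yolov5s.pt" (hypothetical path).
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="weapon_yolov5s.pt")
model.conf = 0.4  # assumed confidence threshold

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Pre-processing step analogous to the abstract's background suppression:
    # blur the frame so low-texture background contributes less to detection.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    results = model(cv2.cvtColor(blurred, cv2.COLOR_BGR2RGB))  # YOLOv5s inference
    detections = results.xyxy[0]  # tensor rows: [x1, y1, x2, y2, conf, class]
    for x1, y1, x2, y2, conf, cls in detections.tolist():
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    cv2.imshow("weapon detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```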
Emergent Normativity: Communities of Practice, Technology, and Lethal Autonomous Weapon Systems
Lethal autonomous weapon systems (LAWS) are the subject of considerable international debate turning around the extent to which humans remain in control over using force. But what is precisely at stake is less clear as stakeholders have different perspectives on the technologies that animate LAWS. Such differences matter because they shape the substance of the debate, which regulatory options are put on the table, and also normativity on LAWS in the sense of understandings of appropriateness. To understand this process, I draw on practice theories, science and technology studies (STS), and critical norm research. I argue that a constellation of communities of practice (CoPs) shapes the public debate about LAWS and focus on three of these CoPs: diplomats, weapon manufacturers, and journalists. Actors in these CoPs discursively perform practices of boundary-work, in the STS sense, to shape understandings of technologies at the heart of LAWS: automation, autonomy, and AI. I analyze these dynamics empirically in two steps: first, by offering a general-level analysis of practices of boundary-work performed by diplomats at the Group of Governmental Experts on LAWS from 2017 to 2022; and second, through examining such practices performed by weapon manufacturers and journalists in relation to the use of loitering munitions, a particular type of LAWS, in the Second Libyan Civil War (2014–2020).
Identification of bullets fired from air guns using machine and deep learning methods
Ballistics (the linkage of bullets and cartridge cases to weapons) is a common type of evidence encountered in criminal cases around the world. The interest lies in determining whether two bullets were fired using the same firearm. This paper proposes an automated method to classify bullets from surface topography and Land Engraved Area (LEA) images of the fired pellets using machine and deep learning methods. The curvature of the surface topography was removed using a loess fit, and features were extracted using Empirical Mode Decomposition (EMD) followed by various entropy measures. The informative features were identified using minimum Redundancy Maximum Relevance (mRMR); finally, the classification was performed using Support Vector Machine (SVM), Decision Tree (DT) and Random Forest (RF) classifiers. The results revealed a good predictive performance. In addition, the deep learning model DenseNet121 was used to classify the LEA images. DenseNet121 provided a higher predictive performance than the SVM, DT and RF classifiers. Moreover, the Grad-CAM technique was used to visualise the discriminative regions in the LEA images. These results suggest that the proposed deep learning method can be used to expedite the linkage of projectiles to firearms and assist in ballistic examinations. In this work, the bullets that were compared were air pellets fired from both air rifles and a high-velocity air pistol. Air guns were used to collect the data because they were more accessible than other firearms and could be used as a proxy, delivering comparable LEAs. The methods developed here can be used as a proof of concept and are easily expandable to bullet and cartridge case identification from any weapon.
• The classification of bullets based on LEA topography using machine learning.
• Features were extracted using Empirical Mode Decomposition and entropy measures.
• Bullet classification was performed using the whole LEA image and deep learning.
• DenseNet121 provided high classification performance.
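A minimal sketch of the classical branch of the pipeline this abstract outlines (entropy-style features, feature selection, then SVM/DT/RF classification) is shown below. It assumes a pre-computed feature matrix X and firearm labels y; mutual-information ranking stands in for mRMR, and all parameter values and the placeholder data are illustrative, not taken from the paper.

```python
# Sketch of feature selection + classical classifiers for bullet/LEA features.
# X: (n_samples, n_features) matrix of entropy-style features extracted from
#    EMD-decomposed LEA topography; y: firearm labels. Both assumed precomputed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))      # placeholder feature matrix
y = rng.integers(0, 3, size=120)    # placeholder labels (3 hypothetical firearms)

classifiers = {
    "SVM": SVC(kernel="rbf", C=1.0),
    "DT": DecisionTreeClassifier(max_depth=5),
    "RF": RandomForestClassifier(n_estimators=200),
}
for name, clf in classifiers.items():
    # Mutual information is used here as a simple stand-in for mRMR ranking.
    pipe = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=15),
        clf,
    )
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```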
On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making
This article considers the recent literature concerned with establishing an international prohibition on autonomous weapon systems. It seeks to address concerns expressed by some scholars that such a ban might be problematic for various reasons. It argues in favour of a theoretical foundation for such a ban based on human rights and humanitarian principles that are not only moral, but also legal ones. In particular, an implicit requirement for human judgement can be found in international humanitarian law governing armed conflict. Indeed, this requirement is implicit in the principles of distinction, proportionality, and military necessity that are found in international treaties, such as the 1949 Geneva Conventions, and firmly established in international customary law. Similar principles are also implicit in international human rights law, which ensures certain human rights for all people, regardless of national origins or local laws, at all times. I argue that the human rights to life and due process, and the limited conditions under which they can be overridden, imply a specific duty with respect to a broad range of automated and autonomous technologies. In particular, there is a duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case. I argue that it would be beneficial to establish this duty as an international norm, and express this with a treaty, before the emergence of a broad range of automated and autonomous weapons systems begin to appear that are likely to pose grave threats to the basic rights of individuals.
Fuzzy knowledge based intelligent decision support system for ground based air defence
This research proposes an Intelligent Decision Support System for Ground-Based Air Defense (GBAD) environments, which consist of Defended Assets (DA) on the ground that require protection from enemy aerial threats. A Fire Control Officer is responsible for assessing threats and assigning the most appropriate weapon to neutralize them. However, the decision-making process can be prone to errors, risking resource wastage and endangering DA protection. To address this problem, this research proposes a hybrid approach that combines a knowledge-driven fuzzy inference system with machine learning models to optimize resource allocation while incorporating expert knowledge in the decision-making process. Since sensory data obtained from multiple radars may be incomplete or incorrect, a fuzzy knowledge graph-based system is used to fuse the data and provide it to the connected modules. Feature selection is optimized by including the most important parameters, such as the vitality of defended assets and threat score, in the threat evaluation. The results from these subsystems are visualized using a Geographical Information System, allowing for real-time mapping of the GBAD environment and displaying the results in a user-friendly web interface. The proposed system has undergone rigorous testing and evaluation, resulting in an efficient and accurate weapon assignment model with a low RMSE value of 0.037. Overall, this Intelligent Decision Support System provides an effective solution for optimizing decision-making processes in GBAD environments and can significantly improve DA protection.
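To illustrate the knowledge-driven part of such a system, here is a minimal Mamdani-style fuzzy threat-scoring sketch in plain Python. The input variables (speed, distance), membership breakpoints, and rules are invented for illustration; they are not the paper's actual rule base or threat-evaluation parameters.

```python
# Minimal Mamdani-style fuzzy threat scoring (illustrative variables and rules,
# not the paper's actual knowledge base).
import numpy as np

def membership(x, points):
    """Piecewise-linear membership function defined by (x, mu) breakpoints."""
    xs, mus = zip(*points)
    return np.interp(x, xs, mus)

def threat_score(speed_m_s, distance_km):
    # Fuzzify the inputs (breakpoints are assumptions for illustration).
    fast  = membership(speed_m_s, [(150, 0.0), (400, 1.0)])
    slow  = membership(speed_m_s, [(100, 1.0), (250, 0.0)])
    close = membership(distance_km, [(5, 1.0), (15, 0.0)])
    far   = membership(distance_km, [(10, 0.0), (40, 1.0)])

    # Rule base: min acts as fuzzy AND, max as fuzzy OR.
    high = min(fast, close)                        # fast and close -> high threat
    med  = max(min(fast, far), min(slow, close))   # mixed evidence -> medium threat
    low  = min(slow, far)                          # slow and distant -> low threat

    # Aggregate clipped output sets and defuzzify by centroid on a 0-100 scale.
    x = np.linspace(0.0, 100.0, 501)
    agg = np.maximum.reduce([
        np.minimum(low,  membership(x, [(0, 1.0), (40, 0.0)])),
        np.minimum(med,  membership(x, [(20, 0.0), (50, 1.0), (80, 0.0)])),
        np.minimum(high, membership(x, [(60, 0.0), (100, 1.0)])),
    ])
    return float((x * agg).sum() / (agg.sum() + 1e-9))

print(threat_score(speed_m_s=320, distance_km=8))   # fast, close track -> high score
print(threat_score(speed_m_s=90, distance_km=35))   # slow, distant track -> low score
```

In a fuller system, a score like this would feed the weapon-assignment model the abstract describes; here it only shows how expert rules can be encoded and defuzzified.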
Energetic materials in 3D: an in-depth exploration of additive manufacturing techniques
Recently, because of the complex international situation and future combat environments, the development and application of new-concept weapons have raised the performance requirements for manufacturing technologies. However, at present, most weapons are still prepared using traditional charging methods (cast curing, pressure casting, and melt casting), which require subtractive manufacturing (SM) treatments before use. The demand for weapon products is shifting towards reactive microstructures, high preparation efficiency, miniaturization, and controllable energy release. In addition, the modern “energetic-on-a-chip” trend is expected to reduce size and cost while increasing safety and maintaining performance. In this context, traditional charging methods are not preferred due to their inherent drawbacks, such as being constrained by the mold, requiring long solvent drying times and solvent recycling, and producing pores/cracks caused by slurry shrinkage. Therefore, it is necessary to innovate the processing and manufacturing technology of weapons and move beyond the limits of existing charging methods; this will enable the precise customization of high-quality energetic materials and avoid many defects. Additive manufacturing (AM), or 3D printing technology, has been booming recently. The application of additive manufacturing technology in the field of energetic materials (EMs) can promote the innovation of manufacturing technology for EMs and allow their microstructure to be regulated. Additionally, 3D printing technology can break through the existing design and development mode, expand explosive charging technology, and enable different explosive types and explosive densities to be distributed within a specific spatial region. Moreover, 3D printing can fabricate “reactive microstructures” (RMS), which offer a deeper understanding of EMs' combustion and detonation phenomena at the micro- and nanoscale. Thus, explosive/propellant grains with multiple damage modes can be designed and manufactured. This paper aims to summarize the current progress in the 3D printing of EMs, analyze the corresponding mechanisms, and provide guidance for future research.
Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare
Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry to warzones are perhaps likely, considering the perceived operational and ethical advantages such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger by calling into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological investigation detailed in Martin Heidegger’s Being and Time, before arguing that the nature of lethal autonomous weaponry precludes them from being able to engage in such investigations—a key requisite for abiding by the relevant legislation that governs battlefield conduct.