54 results for "Human-machine systems Philosophy."
Interface
In this book, Branden Hookway considers the interface not as technology but as a form of relationship with technology. The interface, Hookway proposes, is at once ubiquitous and hidden from view. It is both the bottleneck through which our relationship to technology must pass and a productive encounter embedded within the use of technology. It is a site of contestation -- between human and machine, between the material and the social, between the political and the technological -- that both defines and elides differences. A virtuoso in multiple disciplines, Hookway offers a theory of the interface that draws on cultural theory, political theory, philosophy, art, architecture, new media, and the history of science and technology. He argues that the theoretical mechanism of the interface offers a powerful approach to questions of the human relationship to technology. Hookway finds the origin of the term interface in nineteenth-century fluid dynamics and traces its migration to thermodynamics, information theory, and cybernetics. He discusses issues of subject formation, agency, power, and control, within contexts that include technology, politics, and the social role of games. He considers the technological augmentation of humans and the human-machine system, discussing notions of embodied intelligence. Hookway views the figure of the subject as both receiver and active producer in processes of subjectification. The interface, he argues, stands in a relation both alien and intimate, vertiginous and orienting to those who cross its threshold.
Four ethical priorities for neurotechnologies and AI
Artificial intelligence and brain-computer interfaces must respect and preserve people's privacy, identity, agency and equality, say Rafael Yuste, Sara Goering and colleagues.
From automation to autonomy: Human machine relations in the age of artificial intelligence
The shift from automation to autonomy marks a new chapter in human-machine relations, especially in the context of the expanding and diversifying applications of Artificial Intelligence (AI). As machines gain capabilities that resemble autonomous agency, the boundary between human and machine autonomy blurs, challenging traditional concepts of agency, control, and independence. This special issue examines multidisciplinary perspectives on autonomy in the digital age, addressing the complexities of attributing autonomy to machines and AI systems. Philosophical, sociological, and technical approaches converge to explore how emerging forms of machine autonomy impact human agency, freedom, and decision-making, with applications spanning from autonomous vehicles to digital assistants and military drones. Central to this discourse is the growing tension between viewing autonomy as a positive attribute and concerns about diminishing human authority in the face of increasingly independent technologies. By framing autonomy as a gradual, relational, and attributional concept, the essays of this special issue aim to foster an integrated understanding of autonomy as both an individual and collective construct, reflecting the highly complex and quickly evolving nature of current societal, ethical, and technological challenges. Through contributions from diverse fields, the issue offers theoretical insights and empirical findings to better understand how AI systems reshape human-machine interactions and redefine autonomy within modern sociotechnical landscapes.
Towards an Epistemology of Interdependence Among the Orthogonal Roles in Human–Machine Teams
Rational social theorists (e.g., game and decision theorists) have failed to confirm that observations of social reality equal social reality. Yet they argue that teams, organizations and social systems should minimize interdependence and competition, echoed by social psychologists seeking to make data iid (i.e., independent and factorable). But the evidence indicates that competitive teams maximize interdependence; self-reports of social reality correlate poorly with social behavior; and only competition measures interdependent social states. Rational expectations aside, we report progress towards a science of interdependence for human–machine teams. Our model of interdependence works like an uncertainty principle in the sense that tradeoffs arise from the uncertainty caused by measuring interdependent actors in orthogonal roles; e.g., in the tradeoff between teams and individuals, teams are more productive but more opaque. Previously, we described interdependence as bistable stories of social reality; the motivation to reject alternative interpretations, increasing uncertainty and errors; and the inability to factor social states. Now we explore education as a surrogate for intelligence in teams. We hypothesized that teams rely on the education (a trained intellect) of their members to produce more patents (a team's goal). We found that the average schooling in a nation is significantly related to its total patents produced.
Explainable AI in the military domain
Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on "explainable AI" (XAI), or approaches to making AI more explainable and understandable to human users. In the military domain, numerous bodies have argued that autonomous and AI-enabled weapon systems ought not incorporate unexplainable AI, with the International Committee of the Red Cross and the United States Department of Defense both explicitly including explainability as a relevant factor in the development and use of such systems. In this article, I present a cautiously critical assessment of this view, arguing that explainability will be irrelevant for many current and near-future autonomous systems in the military (which do not incorporate any AI), that it will be trivially incorporated into most military systems which do possess AI (as these generally possess simpler AI systems), and that for those systems with genuinely opaque AI, explainability will prove to be of more limited value than one might imagine. In particular, I argue that explainability, while indeed a virtue in design, is a virtue aimed primarily at designers and troubleshooters of AI-enabled systems, but is far less relevant for users and handlers actually deploying these systems. I further argue that human–machine teaming is a far more important element of responsibly using AI for military purposes, adding that explainability may undermine efforts to improve human–machine teaming by creating a prima facie sense that the AI, due to its explainability, may be utilized with little (or less) potential for mistakes. I conclude by clarifying that the arguments are not against XAI in the military, but are instead intended as a caution against over-inflating the value of XAI in this domain, or ignoring the limitations and potential pitfalls of this approach.
Who is controlling whom? Reframing “meaningful human control” of AI systems in security
Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of "meaningful human control" of intelligent systems. In this opinion paper, we outline generic configurations of human control of AI, present the inverse alternative of having AI control humans, and discuss the normative consequences of this alternative.
Cognitive Integration for Hybrid Collective Agency
Can human–machine hybrid systems (HMHs) constitute genuine collective agents? This paper defends an affirmative answer. I argue that HMHs achieve collective intentionality without shared consciousness by satisfying the following three functional criteria: goal alignment, functional complementarity, and stable interactivity. Against this functionalist account, the following two objections arise: (1) the cognitive bloat problem, that functional criteria cannot distinguish genuine cognitive integration from mere tool use; and (2) the phenomenological challenge, that AI’s lack of practical reason reduces human–AI interaction to subject–tool relations. I respond by distinguishing constitutive from instrumental functional contributions and showing that collective agency requires stable functional integration, not phenomenological fusion. The result is what I call Functional Hybrid Collective Agents (FHCAs), which are systems exhibiting irreducible collective intentionality through deep human–AI coupling.
Truly Autonomous Machines Are Ethical
There is widespread concern that as machines move toward greater autonomy, they may become a law unto themselves and turn against us. Yet the threat lies more in how we conceive of an autonomous machine than in the machine itself. We tend to see an autonomous agent as one that sets its own agenda, free from external constraints, including ethical constraints. A deeper and more adequate understanding of autonomy has evolved in the philosophical literature, specifically in deontological ethics. It teaches that ethics is an internal, not an external, constraint on autonomy, and that a truly autonomous agent must be ethical. It tells us how we can protect ourselves from smart machines by making sure they are truly autonomous rather than simply beyond human control.