38 results for "Chopra, Samir"
Anxiety : a philosophical guide
Today, anxiety is usually thought of as a pathology, the most diagnosed and medicated of all psychological disorders. But anxiety isn't always or only a medical condition. Indeed, many philosophers argue that anxiety is a normal, even essential, part of being human, and that coming to terms with this fact is potentially transformative, allowing us to live more meaningful lives by giving us a richer understanding of ourselves. In this book, Samir Chopra explores valuable insights about anxiety offered by ancient and modern philosophies - Buddhism, existentialism, psychoanalysis, and critical theory. Blending memoir and philosophy, he also tells how serious anxiety has affected his own life - and how philosophy has helped him cope with it.
A Legal Theory for Autonomous Artificial Agents
"An extraordinarily good synthesis from an amazing range of philosophical, legal, and technological sources . . . the book will appeal to legal academics and students, lawyers involved in e-commerce and cyberspace legal issues, technologists, moral philosophers, and intelligent lay readers interested in high tech issues, privacy, [and] robotics." -Kevin Ashley, University of Pittsburgh School of Law

As corporations and government agencies replace human employees with online customer service and automated phone systems, we become accustomed to doing business with nonhuman agents. If artificial intelligence (AI) technology advances as today's leading researchers predict, these agents may soon function with such limited human input that they appear to act independently. When they achieve that level of autonomy, what legal status should they have?

Samir Chopra and Laurence F. White present a carefully reasoned discussion of how existing philosophy and legal theory can accommodate increasingly sophisticated AI technology. Arguing for the legal personhood of an artificial agent, the authors discuss what it means to say it has "knowledge" and the ability to make a decision. They consider key questions such as who must take responsibility for an agent's actions, whom the agent serves, and whether it could face a conflict of interest.
Relevance sensitive belief structures
We propose a new relevance sensitive model for representing and revising belief structures, which relies on a notion of partial language splitting and tolerates some amount of inconsistency while retaining classical logic. The model preserves an agent's ability to answer queries in a coherent way using Belnap's four-valued logic. Axioms analogous to the AGM axioms hold for this new model. The distinction between implicit and explicit beliefs is represented, and psychologically plausible, computationally tractable procedures for query answering and belief base revision are obtained.
Belief Liberation (and Retraction)
We provide a formal study of belief retraction operators that do not necessarily satisfy the (Inclusion) postulate. Our intuition is that a rational description of belief change must do justice to cases in which dropping a belief can lead to the inclusion, or 'liberation', of others in an agent's corpus. We provide two models of liberation via retraction operators: σ-liberation and linear liberation. We show that the class of σ-liberation operators is included in the class of linear ones and provide axiomatic characterisations for each class. We show how any retraction operator (including the liberation operators) can be 'converted' into either a withdrawal operator (i.e., satisfying (Inclusion)) or a revision operator via (a slight variant of) the Harper Identity and the Levi Identity respectively.
The freedoms of software and its ethical uses
The “free” in “free software” refers to a cluster of four specific freedoms identified by the Free Software Definition. The first freedom, termed “Freedom Zero,” intends to protect the right of the user to deploy software in whatever fashion, towards whatever end, he or she sees fit. But software may be used to achieve ethically questionable ends. This highlights a tension in the provision of software freedoms: while the definition explicitly forbids direct restrictions on users’ freedoms, it does not address other means by which software may indirectly restrict freedoms. In particular, ethically inflected debate has featured prominently in the discussion of restrictions on digital rights management and privacy-violating code in version 3 of the GPL (GPLv3). The discussion of this proposed language revealed the spectrum of ethical positions and valuations held by members of the free software community. In our analysis, we will provide arguments for upholding Freedom Zero; we embed the problem of possible uses of software in the broader context of the uses of scientific knowledge, and go on to argue that the provision of Freedom Zero militates against too great a moral burden—of anticipating possible uses of software—being placed on the programmer and that, most importantly, it facilitates deliberative discourse in the free software community.
ITERATED BELIEF CHANGE AND THE RECOVERY AXIOM
The axiom of recovery, while capturing a central intuition regarding belief change, has been the source of much controversy. We argue briefly against putative counterexamples to the axiom—while agreeing that some of their insight deserves to be preserved—and present additional recovery-like axioms in a framework that uses epistemic states, which encode preferences, as the object of revisions. This makes iterated revision possible and renders explicit the connection between iterated belief change and the axiom of recovery. We provide a representation theorem that connects the semantic conditions we impose on iterated revision and our additional syntactical properties. We show interesting similarities between our framework and that of Darwiche-Pearl (Artificial Intelligence 89:1–29, 1997). In particular, we show that intuitions underlying the controversial (C2) postulate are captured by the recovery axiom and our recovery-like postulates (the latter can be seen as weakenings of (C2)). We present postulates for contraction, in the same spirit as the Darwiche-Pearl postulates for revision, and provide a theorem that connects our syntactic postulates with a set of semantic conditions. Lastly, we show a connection between the contraction postulates and a generalisation of the recovery axiom.
Tort Liability for Artificial Agents
Tort liability, which includes a multiplicity of liability schemes and recovery theories such as negligence, product liability, malpractice liability, liability in trespass, and liability for negligent misstatement, arises from the harm caused by a person’s breach of a duty to avoid harm to others, and seeks to put the person harmed in the position he would have been in had the breach of duty not taken place. It can arise in favor of third parties harmed by their interactions with an artificial agent. Such potential harm is not trivial; the liabilities may be huge. As particularly salient examples, missile battery