26 results for "Buechner, Jeff"
Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents
In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with their own self (nor a simulation of their own self) and, consequently, cannot properly understand the indexicals ‘I’ and ‘me’. It also follows that they cannot take up a first-person point of view and that they cannot be conscious. They can understand that agent so-and-so (described in objective, indexical-free terms) trusts or is entrusted, but they cannot know that they are that agent (if they are) and so cannot know that they are trusted or entrusted. Artificial agents cannot know what it means for them to have a normative expectation, nor what it means for them to be responsible for performing certain actions. Artificial agents lack all of the first-person properties that human agents possess and that are epistemically important to human agents. Because of these limitations, and because artificial agents figure centrally in the trust relation defined in the Buechner–Tavani model of digital trust, there will be several different kinds of circumstances in which it would be rational for human agents not to trust artificial agents. I also examine the problem of moral luck, define a converse problem of moral luck, and argue that although neither kind of problem of moral luck arises for artificial agents (since they cannot take up a first-person point of view), human agents should not trust artificial agents that interact with them in moral luck and converse moral luck circumstances.
A Revision of the Buechner–Tavani Model of Digital Trust and a Philosophical Problem It Raises for Social Robotics
In this paper, the Buechner–Tavani model of digital trust is revised: new conditions for self-trust are incorporated into the model. These new conditions raise several philosophical problems for social robotics concerning the idea of a substantial self, which are closely examined. I conclude that reductionism about the self is incompatible with trust relations between human agents, between human agents and artificial agents, and between artificial agents, while the idea of a substantial self is compatible with them.
Two New Philosophical Problems for Robo-Ethics
The purpose of this paper is to describe two new philosophical problems for robo-ethics. When one considers the kinds of philosophical problems that arise in the emerging field of robo-ethics, one typically thinks of issues concerning agency, autonomy, rights, consciousness, warfare and military applications, employment and work, the impact on elder care, and many others. All of these philosophical problems are well known. However, this paper describes two new philosophical problems for robo-ethics that have not previously been addressed in the literature. The author’s view is that if these problems are not solved, some aspects of robo-ethics research and development will be challenged.
Trust, Privacy, and Frame Problems in Social and Business E-Networks, Part 1
Privacy issues in social and business e-networks are daunting in complexity: private information about oneself might be routed through countless artificial agents. For each such agent, in that context, two questions about trust arise. Where an agent must access (or store) personal information, can one trust that artificial agent with that information? And where an agent does not need to access or store personal information, can one trust that agent not to do so? It would be an infeasible task for any human being to explicitly determine, for each artificial agent, whether it can be trusted; no human being has the computational resources to make such an explicit determination. There is a well-known class of problems in the artificial intelligence literature, known as frame problems, for which explicit solutions are computationally infeasible. Human common-sense reasoning solves frame problems, though the mechanisms employed are largely unknown. I will argue that the trust relation between two agents (human or artificial) functions, in some respects, as a frame problem solution: a problem is solved without the need for a computationally infeasible explicit solution. This is an aspect of the trust relation that has remained unexplored in the literature. Moreover, there is a formal, iterative structure to agent–agent trust interactions that serves to establish the trust relation non-circularly, to reinforce it, and to “bootstrap” its strength.
Gödel, Putnam, and Functionalism: A New Reading of Representation and Reality
In the early 1970s, Hilary Putnam began to have doubts about functionalism, and in his masterwork 'Representation and Reality' (1988) he advanced four powerful arguments against his own doctrine of computational functionalism. In this book, Jeff Buechner systematically examines Putnam's arguments against functionalism.
Are the Gödel incompleteness theorems limitative results for the neurosciences?
There are many kinds of limitative results in the sciences, some of which are philosophical. I am interested in examining one kind of limitative result in the neurosciences that is mathematical—a result secured by the Gödel incompleteness theorems. I will view the incompleteness theorems as independence results, develop a connection with independence results in set theory, and then argue that work in the neurosciences (as well as in molecular, systems and synthetic biology) may well avoid these mathematical limitative results. In showing this, I argue that demonstrating that one cannot avoid them is a computational task that is beyond the computational capacities of finitary minds. Along the way, I reformulate three philosophical claims about the nature of consciousness in terms of the Gödel incompleteness theorems and argue that these precise reformulations of the claims can be disarmed.