Catalogue Search | MBRL
1,195 result(s) for "Human-computer interaction Philosophy."
ChatGPT is bullshit
by Humphries, James; Slater, Joe; Hicks, Michael Townsen
in Artificial intelligence; Chatbots; Hallucinations
2024
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and a more accurate way of predicting and discussing the behaviour of these systems.
Journal Article
Critiquing the Concept of BCI Illiteracy
2019
Brain–computer interfaces (BCIs) are a form of technology that read a user’s neural signals to perform a task, often with the aim of inferring user intention. They demonstrate potential in a wide range of clinical, commercial, and personal applications. But BCIs are not always simple to operate, and even with training some BCI users do not operate their systems as intended. Many researchers have described this phenomenon as “BCI illiteracy,” and a body of research has emerged aiming to characterize, predict, and solve this perceived problem. However, BCI illiteracy is an inadequate concept for explaining difficulty that users face in operating BCI systems. BCI illiteracy is a methodologically weak concept; furthermore, it relies on the flawed assumption that BCI users possess physiological or functional traits that prevent proficient performance during BCI use. Alternative concepts to BCI illiteracy may offer better outcomes for prospective users and may avoid the conceptual pitfalls that BCI illiteracy brings to the BCI research process.
Journal Article
In Conversation with Artificial Intelligence: Aligning Language Models with Human Values
by Kasirzadeh, Atoosa; Gabriel, Iason
in Agents; Agents (artificial intelligence); Artificial intelligence
2023
Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this paper, we propose a number of steps that help answer these questions. We start by developing a philosophical analysis of the building blocks of linguistic communication between conversational agents and human interlocutors. We then use this analysis to identify and formulate ideal norms of conversation that can govern successful linguistic communication between humans and conversational agents. Furthermore, we explore how these norms can be used to align conversational agents with human values across a range of different discursive domains. We conclude by discussing the practical implications of our proposal for the design of conversational agents that are aligned with these norms and values.
Journal Article
The sentient machine: the coming age of artificial intelligence
Explores universal questions about humanity's capacity for living and thriving in the coming age of sentient machines and AI, examining the debate from opposing perspectives and discussing how emerging intellectual diversity may help enable a positive life alongside such systems.
Universal design, inclusive design, accessible design, design for all: different concepts—one goal? On the concept of accessibility—historical, methodological and philosophical aspects
by Åhman, Henrik; Persson, Hans; Yngling, Alexander Arvei
in Access to information; Accessibility; Barrierefreiheit
2015
"Accessibility and equal opportunities for all in the digital age have become increasingly important over the last decade. In one form or another, the concept of accessibility is being considered to a greater or lesser extent in most projects that develop interactive systems. However, the concept varies among different professions, cultures and interest groups. Design for all, universal access and inclusive design are all different names for approaches that largely focus on increasing the accessibility of the interactive system for the widest possible range of use. But in what way do all these concepts differ, and what is the underlying philosophy of each? This paper aims at investigating the various concepts used for accessibility, their methodological and historical development and some philosophical aspects of the concept. It can be concluded that there is little or no consensus regarding the definition and use of the concept, and consequently there is a risk of bringing less accessibility to the target audience. Particularly in international standardization the lack of consensus is striking. Based on this discussion, the authors argue for a much more thorough definition of the concept and discuss what effects it may have on measurability, conformance with standards and the overall usability for the widest possible range of target users." [Abstract: editor's/authors' information]
Journal Article
How Human–Chatbot Interaction Impairs Charitable Giving: The Role of Moral Judgment
by Yang, Zhilin; Zhou, Yuanyuan; He, Yuanqiong
in Artificial intelligence; Behavior; Business ethics
2022
Interactions between human beings and chatbots are gradually becoming part of our everyday social lives. It is still unclear how human–chatbot interactions (HCIs), compared to human–human interactions (HHIs), influence individual morality. Building on the dual-process theory of moral judgment, a secondary data analysis (Study 1) and two scenario-based experiments (Studies 2 and 3) provide evidence that HCIs (vs. HHIs) support utilitarian judgments (vs. deontological judgments), which reduce participants' donation amounts. Study 3 further demonstrates that the negative effects of HCIs can be attenuated by inducing a social-oriented (vs. task-oriented) communication style in chatbots' verbal language designs. These findings highlight the negative impacts of HCIs on relationships among human beings and suggest a practical intervention for nonprofit organization managers.
Journal Article