Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
88 result(s) for "McKee, Kevin R"
Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy
by Gemp, Ian; Tacchetti, Andrea; Malinowski, Mateusz
in 639/705/1042; 639/705/117; Agents (artificial intelligence)
2022
The success of human civilization is rooted in our ability to cooperate by communicating and making joint plans. We study how artificial agents may use communication to better cooperate in Diplomacy, a long-standing AI challenge. We propose negotiation algorithms allowing agents to agree on contracts regarding joint plans, and show they outperform agents lacking this ability. For humans, misleading others about our intentions forms a barrier to cooperation. Diplomacy requires reasoning about our opponents’ future plans, enabling us to study broken commitments between agents and the conditions for honest cooperation. We find that artificial agents face a similar problem as humans: communities of communicating agents are susceptible to peers who deviate from agreements. To defend against this, we show that the inclination to sanction peers who break contracts dramatically reduces the advantage of such deviators. Hence, sanctioning helps foster mostly truthful communication, despite conditions that initially favor deviations from agreements.
Artificial Intelligence has achieved success in a variety of single-player or competitive two-player games with no communication between players. Here, the authors propose an approach in which Artificial Intelligence agents have the ability to negotiate and form agreements while playing the board game Diplomacy.
Journal Article
Defining acceptable data collection and reuse standards for queer artificial intelligence research in mental health: protocol for the online PARQAIR-MH Delphi study
by Tomasev, Nenad; Kormilitzin, Andrey; Hamer-Hunt, Julia
in Artificial Intelligence; Biobanks; Community support
2024
Introduction
For artificial intelligence (AI) to help improve mental healthcare, the design of data-driven technologies needs to be fair, safe, and inclusive. Participatory design can play a critical role in empowering marginalised communities to take an active role in constructing research agendas and outputs. Given the unmet needs of the LGBTQI+ (Lesbian, Gay, Bisexual, Transgender, Queer and Intersex) community in mental healthcare, there is a pressing need for participatory research to include a range of diverse queer perspectives on issues of data collection and use (in routine clinical care as well as for research) as well as on AI design. Here we propose a protocol for a Delphi consensus process for the development of PARticipatory Queer AI Research for Mental Health (PARQAIR-MH) practices, aimed at informing digital health practices and policy.
Methods and analysis
The development of PARQAIR-MH comprises four stages. In stage 1, a review of recent literature and a fact-finding consultation with stakeholder organisations will be conducted to define the terms of reference for stage 2, the Delphi process. Our Delphi process consists of three rounds, where the first two rounds will iterate and identify items to be included in the final Delphi survey for consensus ratings. Stage 3 consists of consensus meetings to review and aggregate the Delphi survey responses, leading to stage 4, where we will produce a reusable toolkit to facilitate participatory development of future bespoke LGBTQI+-adapted data collection, harmonisation, and use for data-driven AI applications specifically in mental healthcare settings.
Ethics and dissemination
PARQAIR-MH aims to deliver a toolkit that will help ensure that the specific needs of LGBTQI+ communities are accounted for in mental health applications of data-driven technologies. The study is expected to run from June 2024 through January 2025, with the final outputs delivered in mid-2025.
Participants in the Delphi process will be recruited by snowball and opportunistic sampling via professional networks and social media (but not by direct approach to healthcare service users, patients, specific clinical services, or via clinicians’ caseloads). Participants will not be required to share personal narratives and experiences of healthcare or treatment for any condition. Before agreeing to participate, people will be given information about the issues considered to be in-scope for the Delphi (eg, developing best practices and methods for collecting and harmonising sensitive characteristics data; developing guidelines for data use/reuse) alongside specific risks of unintended harm from participating that can be reasonably anticipated. Outputs will be made available in open-access peer-reviewed publications, blogs, social media, and on a dedicated project website for future reuse.
Journal Article
Warmth and competence in human-agent cooperation
by McKee, Kevin R.; Bai, Xuechunzi; Fiske, Susan T.
in Agents (artificial intelligence); Algorithms; Artificial Intelligence
2024
Interaction and cooperation with humans are overarching aspirations of artificial intelligence research. Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans. These studies primarily evaluate human compatibility through “objective” metrics such as task performance, obscuring potential variation in the levels of trust and subjective preference that different agents garner. To better understand the factors shaping subjective preferences in human-agent cooperation, we train deep reinforcement learning agents in Coins, a two-player social dilemma. We recruit N = 501 participants for a human-agent cooperation study and measure their impressions of the agents they encounter. Participants’ perceptions of warmth and competence predict their stated preferences for different agents, above and beyond objective performance metrics. Drawing inspiration from social science and biology research, we subsequently implement a new “partner choice” framework to elicit revealed preferences: after playing an episode with an agent, participants are asked whether they would like to play the next episode with the same agent or to play alone. As with stated preferences, social perception better predicts participants’ revealed preferences than does objective performance. Given these results, we recommend human-agent interaction researchers routinely incorporate the measurement of social perception and subjective preferences into their studies.
Journal Article
A social path to human-like artificial intelligence
2023
Traditionally, cognitive and computer scientists have viewed intelligence solipsistically, as a property of unitary agents devoid of social context. Given the success of contemporary learning algorithms, we argue that the bottleneck in artificial intelligence (AI) advancement is shifting from data assimilation to novel data generation. We bring together evidence showing that natural intelligence emerges at multiple scales in networks of interacting agents via collective living, social relationships and major evolutionary transitions, which contribute to novel data generation through mechanisms such as population pressures, arms races, Machiavellian selection, social learning and cumulative culture. Many breakthroughs in AI exploit some of these processes, from multi-agent structures enabling algorithms to master complex games such as Capture-The-Flag and StarCraft II, to strategic communication in the game Diplomacy and the shaping of AI data streams by other AIs. Moving beyond a solipsistic view of agency to integrate these mechanisms could provide a path to human-like compounding innovation through ongoing novel data generation.
Advances in machine intelligence often depend on data assimilation, but data generation has been neglected. The authors discuss mechanisms that might achieve continuous novel data generation and the creation of intelligent systems that are capable of human-like innovation, focusing on social aspects of intelligence.
Journal Article
Scaffolding cooperation in human groups with deep reinforcement learning
by
Tacchetti, Andrea
,
Balaguer, Jan
,
Campbell-Gillingham, Lucy
in
4014/477/2811
,
639/705/117
,
Behavioral Sciences
2023
Effective approaches to encouraging group cooperation are still an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a ‘social planner’ capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small highly cooperative neighbourhoods.
McKee et al. show that deep reinforcement learning can be used to learn a new and effective strategy for encouraging mutually beneficial cooperation in a network game.
Journal Article
AI learns to encourage group cooperation by making new connections
2023
We trained an artificial intelligence (AI) system to recommend different interactions and connections between humans playing a group game together. Through trial and error, the AI system learned to take an encouraging approach to uncooperative individuals, keeping them engaged with the group and boosting cooperation levels for everyone.
Journal Article
Human participants in AI research: Ethics and transparency in practice
2024
In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, roughly 9% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers confirm independent ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by examining the normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research presents several distinct considerations (namely, participatory design, crowdsourced dataset development, and an expansive role of corporations) that necessitate a contextual ethics framework. To address these concerns, this manuscript outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. Overall, this paper seeks to equip technical researchers with practical knowledge for their work, and to position them for further dialogue with social scientists, behavioral researchers, and ethicists.