Catalogue Search | MBRL
18 result(s) for "Sargsyan, Anahit"
Trust within human-machine collectives depends on the perceived consensus about cooperative norms
by Sargsyan, Anahit; Makovi, Kinga; Rahwan, Talal
in 631/477/2811, 706/689/159, Artificial Intelligence
2023
With the progress of artificial intelligence and the emergence of global online communities, humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation, but mixed collectives often struggle to articulate the norms that hold when humans coexist with machines. In five studies involving 7917 individuals, we document how people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain different amounts of trust when they follow norms than when they do not. We also demonstrate that the trust gain of norm followers is associated with trustors' assessments of the consensual nature of cooperative norms over helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm consensus over helping tends to decrease the differential treatment of the machines and people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically by relying on norms already extant in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.
Humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Here the authors show the way in which people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors.
Journal Article
Correction: Unequal treatment toward copartisans versus non-copartisans is reduced when partisanship can be falsified
2022
[This corrects the article DOI: 10.1371/journal.pone.0244651.].
Journal Article
Unequal treatment toward copartisans versus non-copartisans is reduced when partisanship can be falsified
by Sargsyan, Anahit; Makovi, Kinga; Abascal, Maria
in Adult, Biology and Life Sciences, Citizenship
2021
Studies show that Democrats and Republicans treat copartisans better than they do non-copartisans. However, party affiliation differs from other identities associated with unequal treatment: compared to race or gender, people can more easily falsify, i.e., lie about, their party affiliation. We use a behavioral experiment to study how people allocate resources to copartisan and non-copartisan partners when partners are allowed to falsify their affiliation and may have incentives to do so. When affiliation can be falsified, the gap between contributions to signaled copartisans and signaled non-copartisans is eliminated. This happens in part because some participants, especially strong partisans, suspect that partners who signal a copartisan affiliation are, in fact, non-copartisans. Suspected non-copartisans earn less than both partners who signal that they are non-copartisans and partners who withhold their affiliation. The findings reveal an unexpected upside to the availability of falsification: at the aggregate level, it reduces unequal treatment across groups. At the individual level, however, falsification is risky.
Journal Article
Measuring the predictability of life outcomes with a scientific mass collaboration
2020
How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.
Journal Article
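The central comparison in the study above, a prediction-optimized model evaluated against a simple benchmark on held-out data, can be sketched as follows. This is a minimal illustration on synthetic data; the features, outcome, and model choices are assumptions, not the Fragile Families and Child Wellbeing Study variables or the teams' actual models.

# Sketch: compare a machine-learning model against a simple benchmark,
# as in the common-task-method evaluation described above.
# The synthetic data and features are placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))             # stand-in survey features
y = 0.3 * X[:, 0] + rng.normal(size=1000)   # mostly-noise outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Simple benchmark: a linear model on a handful of features.
benchmark = LinearRegression().fit(X_tr[:, :4], y_tr)
# "Rich" model: an ensemble optimized for prediction on all features.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("benchmark R^2:", r2_score(y_te, benchmark.predict(X_te[:, :4])))
print("ML model  R^2:", r2_score(y_te, model.predict(X_te)))
# With a largely unpredictable outcome, the two scores come out close,
# mirroring the paper's finding of practical limits to predictability.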
The Half-Life of a Tweet
2023
Twitter has started to share an impression_count variable as part of the public metrics available for every Tweet collected with Twitter's APIs. With information about how often a particular Tweet has been shown to Twitter users at the time of data collection, we can learn important insights about the dissemination process of a Tweet by measuring its impression count repeatedly over time. Our preliminary analysis shows that, on average, the rate of impressions per second peaks 72 seconds after a Tweet is sent, and that after 24 hours no relevant number of new impressions can be observed for ~95% of all Tweets. Finally, we estimate that the median half-life of a Tweet, i.e., the time it takes until half of all its impressions have been created, is about 80 minutes.
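The half-life estimate described above can be reproduced from repeated snapshots of a Tweet's cumulative impression_count. A minimal sketch, assuming a hand-made series of (seconds since posting, cumulative impressions) pairs rather than Twitter's actual API payload:

# Sketch: estimate a Tweet's impression half-life from repeated
# snapshots of the cumulative impression_count public metric.
# The sample series below is invented for illustration.
import numpy as np

# (seconds since the Tweet was sent, cumulative impressions observed)
snapshots = [(30, 400), (72, 1900), (300, 5200), (1800, 7600),
             (4800, 9100), (86400, 9800)]

t = np.array([s for s, _ in snapshots], dtype=float)
n = np.array([c for _, c in snapshots], dtype=float)

total = n[-1]        # impressions once dissemination has died down
half = total / 2.0

# Half-life: first time the cumulative count crosses 50% of the total,
# linearly interpolated between the two surrounding snapshots.
half_life = np.interp(half, n, t)
print(f"estimated half-life: {half_life / 60:.1f} minutes")

# Peak impression rate between consecutive snapshots (impressions/second).
rates = np.diff(n) / np.diff(t)
print("peak rate at ~", t[1:][rates.argmax()], "seconds after posting")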
Explainable AI as a Social Microscope: A Case Study on Academic Performance
by Karapetyan, Areg; Wei Lee Woon; Alshamsi, Aamena
in Academic achievement, Algorithms, Artificial intelligence
2020
Academic performance is perceived as a product of complex interactions between students' overall experience, personal characteristics and upbringing. Data science techniques, most commonly involving regression analysis and related approaches, serve as a viable means to explore this interplay. However, these tend to extract factors with wide-ranging impact, while overlooking variations specific to individual students. Focusing on each student's peculiarities is generally impossible with thousands or even hundreds of subjects, yet data mining methods might prove effective in devising more targeted approaches. For instance, subjects with shared characteristics can be assigned to clusters, which can then be examined separately with machine learning algorithms, thereby providing a more nuanced view of the factors affecting individuals in a particular group. In this context, we introduce a data science workflow allowing for fine-grained analysis of academic performance correlates that captures the subtle differences in students' sensitivities to these factors. Leveraging the Local Interpretable Model-Agnostic Explanations (LIME) algorithm from the toolbox of Explainable Artificial Intelligence (XAI) techniques, the proposed pipeline yields groups of students having similar academic attainment indicators, rather than similar features (e.g. familial background) as typically practiced in prior studies. As a proof-of-concept case study, a rich longitudinal dataset is selected to evaluate the effectiveness of the proposed approach versus a standard regression model.
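The pipeline's key step, grouping students by their LIME explanation vectors rather than by raw features, might look like the sketch below. It uses the published lime package (LimeTabularExplainer) on synthetic data; the model, the features, and the number of clusters are illustrative assumptions, not the paper's actual setup.

# Sketch: cluster students by LIME explanation vectors instead of raw
# features, following the workflow described above. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.cluster import KMeans
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = [f"f{i}" for i in range(8)]  # stand-ins for student attributes
X = rng.normal(size=(300, 8))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)  # "grade"

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression",
    discretize_continuous=False)

def explanation_vector(row):
    # Local feature weights for one student, in a fixed feature order.
    # With discretize_continuous=False, as_list() keys are the feature names.
    exp = explainer.explain_instance(row, model.predict,
                                     num_features=len(feature_names))
    weights = dict(exp.as_list())
    return [weights.get(name, 0.0) for name in feature_names]

E = np.array([explanation_vector(x) for x in X[:50]])  # subsample for speed

# Students whose outcomes are driven by similar factors land in one cluster.
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(E)
print(np.bincount(labels))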