Catalogue Search | MBRL
34 result(s) for "Makovi, Kinga"
The effects of ideological value framing and symbolic racism on pro-environmental behavior
2021
Environmental degradation continues to be one of the greatest threats to human well-being, posing a disproportionate burden on communities of color. Environmental action, however, fails to reflect this urgency, leaving social-behavioral research at the frontier of environmental conservation, as well as environmental justice. Broad societal consensus for environmental action is particularly sparse among conservatives. The lack of even small personal sacrifices in favor of the environment could be attributed to the relatively low salience of environmental threats to white Americans and the partisan nature of environmentalism in America. We evaluate whether (1) environmental action is causally related to the ideological value framing of an environmental issue; and (2) the perceived race of impacted communities influences environmental action as a function of racial resentment. With this large-scale, original survey experiment examining the case of air pollution, we find weak support for the first, but we do not find evidence for the second. We advance our understanding of environmental justice advocacy and environmental inaction in the United States.
Protocol registration
The stage 1 protocol for this Registered Report was accepted in principle on 10 June 2021. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.14769558.
Journal Article
Non-transformative climate policy options decrease conservative support for renewable energy in the US
2023
Motivated by ongoing partisan division in support of climate change policy, this paper investigates whether, among self-identifying liberals and conservatives, the mere presence of a non-transformative climate policy, such as carbon capture and storage (CCS), lowers support for a renewable energy (RE) policy. To interrogate this question, we use a survey experiment asking 2374 respondents about their support for a RE policy when presented with the RE policy alone, and when presented alongside a CCS policy whose funding and implementation leverage independent funding sources. We find that among conservatives, the presence of a CCS policy lowers support for RE. Furthermore, despite the lack of apparent political party cues, when presented with the policy pair, conservatives tend to view the RE policy in more partisan terms, specifically, as less supported by Republicans. Additional experimental conditions with explicit party cues reinforce this interpretation. These findings suggest that the triggering of partisan perceptions even without explicit partisan cues—what we call political anchoring—might be a key impediment to bipartisan support of climate solutions in the U.S. context.
Journal Article
Web of lies: a tool for determining the limits of verification in preventing the spread of false information on networks
2021
The spread of false information on social networks has garnered substantial scientific and popular attention. To counteract this spread, verification of the truthfulness of information has been proposed as a key intervention. Using a novel behavioral experiment with over 2000 participants, we analyze participants’ willingness to spread false information in a network. All participants in the network have aligned incentives making lying attractive and countering the explicit norm of truth-telling that we impose. We investigate how verifying the truth, endogenously or exogenously, impacts the choice to lie or to adhere to the norm of truth-telling, and how this compares to the spread of information in a setting in which such verification is not possible. The three key take-aways are that (1) verification is only moderately effective in reducing the spread of lies; (2) its effectiveness is contingent on the agency of people in seeking the truth; and (3) it depends on the exposure of liars, not only on the exposure of the lies being told. These results suggest that verification is not a blanket solution. To enhance its effectiveness, verification should be combined with efforts to foster a culture of truth-seeking and with information on who is spreading lies.
Journal Article
Trust within human-machine collectives depends on the perceived consensus about cooperative norms
by Sargsyan, Anahit; Makovi, Kinga; Rahwan, Talal
in Artificial Intelligence
2023
With the progress of artificial intelligence and the emergence of global online communities, humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Human societies have had thousands of years to consolidate the social norms that promote cooperation; but mixed collectives often struggle to articulate the norms which hold when humans coexist with machines. In five studies involving 7917 individuals, we document the way people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors. We show that helpers and punishers gain a different amount of trust when they follow norms than when they do not. We also demonstrate that the trust-gain of norm-followers is associated with trustors’ assessment of the consensual nature of cooperative norms over helping and punishing. Lastly, we establish that, under certain conditions, informing trustors about the norm-consensus over helping tends to decrease the differential treatment of both machines and people interacting with them. These results allow us to anticipate how humans may develop cooperative norms for human-machine collectives, specifically, by relying on already extant norms in human-only groups. We also demonstrate that this evolution may be accelerated by making people aware of their emerging consensus.
Humans and machines are increasingly participating in mixed collectives in which they can help or hinder each other. Here the authors show the way in which people treat machines differently than humans in a stylized society of beneficiaries, helpers, punishers, and trustors.
Journal Article
Perception of experience influences altruism and perception of agency influences trust in human–machine interactions
2024
As robots become increasingly integrated into social economic interactions, it becomes crucial to understand how people perceive a robot’s mind. It has been argued that minds are perceived along two dimensions: experience, i.e., the ability to feel, and agency, i.e., the ability to act and take responsibility for one’s actions. However, the influence of these perceived dimensions on human–machine interactions, particularly those involving altruism and trust, remains unknown. We hypothesize that the perception of experience influences altruism, while the perception of agency influences trust. To test these hypotheses, we pair participants with bot partners in a dictator game (to measure altruism) and a trust game (to measure trust) while varying the bots’ perceived experience and agency, either by manipulating the degree to which the bot resembles humans, or by manipulating the description of the bots’ ability to feel and exercise self-control. The results demonstrate that the money transferred in the dictator game is influenced by the perceived experience, while the money transferred in the trust game is influenced by the perceived agency, thereby confirming our hypotheses. More broadly, our findings support the specificity of the mind hypothesis: Perceptions of different dimensions of the mind lead to different kinds of social behavior.
Journal Article
Retraction Note: The association between early career informal mentorship in academic collaborations and junior author performance
by AlShebli, Bedoor; Makovi, Kinga; Rahwan, Talal
in Humanities and Social Sciences
2020
This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1038/s41467-020-20617-y
Journal Article
Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
by Zantout, Dania; Habash, Nizar; Gleason, Nancy W.
in Artificial intelligence
2023
The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work—a possibility that has sparked ample discussion on the integrity of student evaluation processes in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students on university-level courses across various disciplines. Further, students’ perspectives regarding the use of such tools in school work, and educators’ perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of the state-of-the-art tool, ChatGPT, against that of students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a global survey across five countries, as well as a more in-depth survey at the authors’ institution, to discern students’ and educators’ perceptions of ChatGPT’s use in school work. We find that ChatGPT’s performance is comparable, if not superior, to that of students in a multitude of courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT’s use in school work, due to both their propensity to classify human-written answers as AI-generated and the relative ease with which AI-generated text can be edited to evade detection. Finally, there seems to be an emerging consensus among students to use the tool, and among educators to treat its use as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of artificial intelligence into educational frameworks.
Journal Article
RETRACTED ARTICLE: The association between early career informal mentorship in academic collaborations and junior author performance
by AlShebli, Bedoor; Makovi, Kinga; Rahwan, Talal
in Humanities and Social Sciences
2020
We study mentorship in scientific collaborations, where a junior scientist is supported by potentially multiple senior collaborators, without them necessarily having formal supervisory roles. We identify 3 million mentor–protégé pairs and survey a random sample, verifying that their relationship involved some form of mentorship. We find that mentorship quality predicts the scientific impact of the papers written by protégés post mentorship without their mentors. We also find that increasing the proportion of female mentors is associated not only with a reduction in the post-mentorship impact of female protégés, but also with a reduction in the gain of female mentors. While current diversity policies encourage same-gender mentorships to retain women in academia, our findings raise the possibility that opposite-gender mentorship may actually increase the impact of women who pursue a scientific career. These findings add a new perspective to the policy debate on how to best elevate the status of women in science.
Here, the authors study mentorship in scientific collaborations, and find that mentorship quality predicts the scientific impact of protégés post mentorship. Moreover, female protégés collaborating with male mentors become more impactful post mentorship than those who collaborate with female mentors.
Journal Article
Correction: Unequal treatment toward copartisans versus non-copartisans is reduced when partisanship can be falsified
2022
This corrects the article DOI: 10.1371/journal.pone.0244651.
Journal Article