250 results for "Markowitz, David M."
Self and other-perceived deception detection abilities are highly correlated but unassociated with objective detection ability: Examining the detection consensus effect
Subjective lying rates are often strongly and positively correlated. Called the deception consensus effect, people who lie often tend to believe others lie often, too. The present paper evaluated how this cognitive bias also extends to deception detection. Two studies (Study 1: N = 180 students; Study 2: N = 250 people from the general public) had participants make 10 veracity judgments based on videotaped interviews, and also indicate subjective detection abilities (self and other). Subjective, perceived detection abilities were significantly linked, supporting a detection consensus effect, yet they were unassociated with objective detection accuracy. More overconfident detectors—those whose subjective detection accuracy was greater than their objective detection accuracy—reported telling more white and big lies, cheated more on a behavioral task, and were more ideologically conservative than less overconfident detectors. This evidence supports and extends contextual models of deception (e.g., the COLD model), highlighting possible (a)symmetries in subjective and objective veracity assessments.
Language style (mis)matching: Consuming entertainment media from someone unlike you is linked to positive attitudes
Prior work suggests people often match with conversational partners by using a common rate of style words (e.g., articles, pronouns). Indeed, such language style matching (LSM) has been positively associated with downstream social and psychological dynamics like cooperation, liking, and well-being. To what degree is LSM predictive of positive attitudes in entertainment media settings? The current two-study paper addressed this question by collecting participants’ writing style with three diverse prompts, and then having them consume a random selection of TED talks (Study 1) and videotaped podcast narratives (Study 2). The evidence suggested less LSM was associated with positive attitudes (e.g., an interest in watching another video by the speaker, feeling connected to the speaker). Mediation analyses revealed the negative relationship between LSM and positive attitudes was explained by one’s novelty need satisfaction. Implications for LSM research, plus character construction and narrative development in media psychology, are discussed.
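For readers unfamiliar with how LSM is scored, the computation can be sketched concretely. The per-category formula below is the one commonly reported in the LSM literature (a similarity score per function-word category, averaged across categories); the tiny word lists are illustrative stand-ins, not the proprietary LIWC dictionaries this research line typically uses.

```python
# Hedged sketch of language style matching (LSM).
# Assumed per-category formula (as commonly reported):
#   LSM_c = 1 - |p1_c - p2_c| / (p1_c + p2_c + 0.0001)
# where p_c is the percentage of words in category c; the final
# score averages LSM_c across function-word categories.

FUNCTION_WORD_CATEGORIES = {
    "articles": {"a", "an", "the"},
    "pronouns": {"i", "you", "he", "she", "it", "we", "they"},
    "prepositions": {"in", "on", "at", "of", "to", "with"},
}

def category_rates(text):
    """Percentage of words in each function-word category."""
    words = text.lower().split()
    total = len(words) or 1  # avoid division by zero on empty text
    return {
        cat: 100.0 * sum(w in vocab for w in words) / total
        for cat, vocab in FUNCTION_WORD_CATEGORIES.items()
    }

def lsm(text_a, text_b):
    """Average per-category style similarity between two texts (0 to 1)."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [
        1.0 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
        for c in FUNCTION_WORD_CATEGORIES
    ]
    return sum(scores) / len(scores)
```

Higher scores mean the two writers use function words at more similar rates; the small constant in the denominator keeps the score defined when a category is absent from both texts.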
From vulnerability to duplicity: Examining the connection between childhood adversity and deception
Deception research has traditionally evaluated how individual differences like personality traits and demographics correlate with lying. However, the establishment of adverse childhood experiences (ACEs) as an individual difference that also links to deception remains underexplored. To this end, the present study (N = 784 students) investigated the relationship between ACEs and deception in adulthood. Results indicated that individuals with more (versus less) adverse childhood experiences, particularly those involving maltreatment and victimization, reported more daily white and big lies, independent of aversive personality traits like narcissism and Machiavellianism. Consistent with other studies on individual differences and deception, the effect sizes were small, but systematic. Together, these findings support the dispositional honesty hypothesis, indicating that foundational childhood experiences and events can shape or signal deceptive behavior. Generally, the study contributes to our underexamined knowledge base of the developmental antecedents of lying, emphasizing the role that adversity plays during childhood to influence deceptive behavior beyond commonly studied personality traits.
Why we dehumanize illegal immigrants: A US mixed-methods study
Dehumanization is a topic of significant interest for academia and society at large. Empirical studies often have people rate the evolved nature of outgroups, and prior work suggests immigrants are common victims of less-than-human treatment. Despite existing work that suggests who dehumanizes particular outgroups and who is often dehumanized, the extant literature knows less about why people dehumanize outgroups such as immigrants. The current work takes up this opportunity by examining why people dehumanize immigrants said to be illegal and how measurement format affects dehumanization ratings. Participants (N = 672) dehumanized such immigrants more if their ratings were made on a slider versus clicking images of hominids, an effect most pronounced for Republicans. Dehumanization was negatively associated with warmth toward illegal immigrants and the perceived unhappiness felt by illegal immigrants from U.S. immigration policies. Finally, most dehumanization is not entirely blatant but is instead also captured by virtuous violence and affect, suggesting the many ways that dehumanization can manifest as predicted by theory. This work offers a mechanistic account for why people dehumanize immigrants and addresses how survey measurement artifacts (e.g., clicking on images of hominids vs. using a slider) affect dehumanization rates. We discuss how these data extend dehumanization theory and inform empirical research.
Perceived social contribution and its associations with political participation
Many people who are eligible to participate in the political process do not, suggesting the interests of a large portion of the electorate are not adequately represented in government. While some past work has found that subjective well-being is related to political engagement, less is known about which specific aspects of well-being might drive this effect. We propose and test the idea that self-perceived social contribution – the belief that one’s life and everyday activities provide something of value to society – is related to multiple forms of political participation, likely because people who believe they provide something of value to society feel more integrated with society and therefore may be more likely to act on its behalf via political participation. Two correlational studies (N = 3,729) with data from distinct points in American politics (1996 and 2024) find that individuals with greater self-perceived social contribution were more likely to intend to vote, be willing to engage in activism, seek rather than avoid election information (Study 1), and donate to and volunteer for political causes (Study 2). Further, Study 2 provides empirical support for the previously theorized components of social contribution, providing evidence that self-efficacy and social responsibility underlie this construct in political contexts. Together, these studies identify a specific dimension of well-being that is related to multiple forms of political participation and suggest that fostering feelings of social contribution may promote democratic engagement.
Linguistic Traces of a Scientific Fraud: The Case of Diederik Stapel
When scientists report false data, does their writing style reflect their deception? In this study, we investigated the linguistic patterns of fraudulent (N = 24; 170,008 words) and genuine publications (N = 25; 189,705 words) first-authored by social psychologist Diederik Stapel. The analysis revealed that Stapel's fraudulent papers contained linguistic changes in science-related discourse dimensions, including more terms pertaining to methods, investigation, and certainty than his genuine papers. His writing style also matched patterns in other deceptive language, including fewer adjectives in fraudulent publications relative to genuine publications. Using differences in language dimensions, we were able to classify Stapel's publications with above-chance accuracy. Beyond these discourse dimensions, Stapel included fewer co-authors when reporting fake data than genuine data, although other evidentiary claims (e.g., number of references and experiments) did not differ across the two article types. This research supports recent findings that language cues vary systematically with deception, and that deception can be revealed in fraudulent scientific discourse.
Cross-checking journalistic fact-checkers: The role of sampling and scaling in interpreting false and misleading statements
Professional fact-checkers and fact-checking organizations provide a critical public service. Skeptics of modern media, however, often question the accuracy and objectivity of fact-checkers. The current study assessed agreement among two independent fact-checkers, The Washington Post and PolitiFact, regarding the false and misleading statements of then President Donald J. Trump. Differences in statement selection and deceptiveness scaling were investigated. The Washington Post checked PolitiFact fact-checks 77.4% of the time (22.6% selection disagreement). Moderate agreement was observed for deceptiveness scaling. Nearly complete agreement was observed for bottom-line attributed veracity. Additional cross-checking with other sources (Snopes, FactCheck.org), original sources, and with fact-checking for the first 100 days of President Joe Biden’s administration was inconsistent with potential ideology effects. Our evidence suggests that fact-checking is a difficult enterprise, that there is considerable variability between fact-checkers in the raw number of statements that are checked, and that selection and scaling account for apparent discrepancies among fact-checkers.
Psychological and physiological effects of applying self-control to the mobile phone
This preregistered study examined the psychological and physiological consequences of exercising self-control with the mobile phone. A total of 125 participants were randomly assigned to sit in an unadorned room for six minutes and either (a) use their mobile phone, (b) sit alone with no phone, or (c) sit with their device but resist using it. Consistent with prior work, participants self-reported more concentration difficulty and more mind wandering with no device present compared to using the phone. Resisting the phone led to greater perceived concentration abilities than sitting without the device (not having external stimulation). Failing to replicate prior work, however, participants without external stimulation did not rate the experience as less enjoyable or more boring than having something to do. We also observed that skin conductance data were consistent across conditions for the first three minutes of the experiment, after which participants who resisted the phone were less aroused than those without the phone. We discuss how the findings contribute to our understanding of exercising self-control with mobile media and how psychological consequences, such as increased mind wandering and focusing challenges, relate to periods of idleness or free thinking.
Can generative AI infer thinking style from language? Evaluating the utility of AI as a psychological text analysis tool
Generative artificial intelligence (AI) is not currently the technology of choice for text analysis, but prior work suggests it may have some utility for assessing dynamics like emotion. The current work builds upon this empirical foundation to consider how analytic thinking scores from a large language model chatbot, ChatGPT, were linked to analytic thinking scores from dictionary-based tools like Linguistic Inquiry and Word Count (LIWC). Using over 16,000 texts from four samples and tested against three prompts and two large language models (GPT-3.5, GPT-4), the evidence suggests there were small associations between ChatGPT and LIWC analytic thinking scores (meta-analytic effect sizes: .058 < rs < .304; ps < .001). When given the formula to calculate the LIWC analytic thinking index, ChatGPT performed incorrect mathematical operations in 22% of the cases, suggesting basic word and number processing may be unreliable with large language models. Researchers should be cautious when using AI for text analysis.
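For context on the kind of formula ChatGPT was asked to apply, the LIWC analytic thinking index is usually described as a Categorical-Dynamic Index (CDI): a linear combination of function-word category rates. Below is a minimal sketch assuming the commonly published raw formula; LIWC itself additionally rescales this value against norming data to produce the 0–100 "Analytic" score, which is not reproduced here, and the category rates would come from LIWC's proprietary dictionaries.

```python
# Hedged sketch of the raw Categorical-Dynamic Index (CDI) said to
# underlie LIWC's "Analytic" score. `rates` maps LIWC category names
# to percentages of words in a text (as LIWC would report them).
def cdi(rates):
    # Categorical language (articles, prepositions) raises the index;
    # dynamic language (pronouns, auxiliaries, etc.) lowers it.
    plus = ("article", "prep")
    minus = ("ppron", "ipron", "auxverb", "conj", "adverb", "negate")
    return (30.0
            + sum(rates.get(c, 0.0) for c in plus)
            - sum(rates.get(c, 0.0) for c in minus))
```

The arithmetic is a handful of additions and subtractions over small percentages, which makes the reported 22% error rate for ChatGPT on this calculation notable.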
Social, psychological, and demographic characteristics of dehumanization toward immigrants
This study extends the current body of work on dehumanization by evaluating the social, psychological, and demographic correlates of blatant disregard for immigrants. Participants (n = 468) were randomly assigned to read a scenario where 1) an immigrant or 2) an immigrant and their child were caught illegally crossing the southern border of the United States, and then rated how long they should spend in jail if convicted. Participants reported that they would sentence the immigrant to more jail time than the immigrant and child. Those who sent immigrants to jail for more time also viewed them as socially distant and less human, described immigration in impersonal terms, and endorsed other social harms unrelated to immigration (e.g., the death penalty for convicted murderers). Crucially, endorsed social harms accounted for explained variance beyond simply holding conservative views. We position these data within the current literature on dehumanization theory and immigration issues.