1,928 results for "Fallacy"
Overarching Principles for the Organization of Socioemotional Constructs
Psychological scientists have intensively studied how people handle emotions and navigate social situations for more than a century. However, advancements in our understanding of socioemotional constructs have been hindered by challenges in assessment. Several measurement problems have been identified, but we want to bring attention to a potentially larger one. Many operationalizations and measures of socioemotional constructs are poorly embedded within the larger body of psychological research, hampered by jingle and jangle fallacies. Jingle fallacies occur when assessment tools are assumed to measure the same construct but in practice measure different constructs. Jangle fallacies occur when assessment tools are assumed to measure different constructs but in practice measure the same construct. Both fallacies are primarily due to a qualitative divide between a construct’s definition and how it is measured. We discuss this issue, identify examples of jingle and jangle fallacies, and conclude with recommendations.
Surprised by the Hot Hand Fallacy? A Truth in the Law of Small Numbers
We prove that a subtle but substantial bias exists in a common measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data. The magnitude of this streak selection bias generally decreases as the sequence gets longer but increases with streak length, and it remains substantial for a range of sequence lengths often used in empirical work. We observe that the canonical study in the influential hot hand fallacy literature, along with its replications, is vulnerable to the bias. Upon correcting for the bias, we find that the longstanding conclusions of the canonical study are reversed.
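The streak selection bias can be illustrated with a short Monte Carlo simulation. The sketch below is not the authors' code: it uses fair coin flips, an arbitrary sequence length of four, streaks of length one, and a hypothetical helper prop_heads_after_heads that computes each sequence's proportion of heads immediately following a head.

```python
import random

def prop_heads_after_heads(seq):
    """Share of flips immediately following a head (1) that are themselves heads."""
    follows = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return sum(follows) / len(follows) if follows else None

def average_proportion(n_flips=4, n_trials=200_000, seed=0):
    """Average the per-sequence proportion over many simulated fair-coin sequences."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_trials):
        seq = [rng.randint(0, 1) for _ in range(n_flips)]
        p = prop_heads_after_heads(seq)
        if p is not None:  # keep only sequences where at least one flip follows a head
            props.append(p)
    return sum(props) / len(props)

# Each flip is independent with P(heads) = 0.5, yet the averaged per-sequence
# proportion comes out near 0.40 for sequences of four flips.
print(round(average_proportion(), 3))
```

Averaging the per-sequence proportion across many short sequences gives roughly 0.40 rather than 0.50, which is the flavor of bias the paper analyzes for longer streaks and longer sequences.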
Committing Fallacies and the Appearance Condition
The appearance condition of fallacies refers to the phenomenon of weak arguments, or moves in argumentation, appearing to be okay when really they aren’t. Not all theorists agree that the appearance condition should be part of the conception of fallacies, but this essay explores some of the consequences of including it. In particular, the differences between committing a fallacy, causing a fallacy, and observing a fallacy are identified. The remainder of the paper is given over to discussing possible causes of mistakenly perceiving weak argumentation moves as okay. Among these are argument-caused misperception, perspective-caused misperception, discursive-environment-caused misperception, and perceiver-caused misperception. The discussion aims to be sufficiently general that it can accommodate different models and standards of argumentation that make a place for fallacies.
Lie Machines
Technology is breaking politics: what can be done about it? Artificially intelligent "bot" accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our attention, poison our political conversations, and jeopardize our democracies. Drawing on massive amounts of social media and public polling data, and on in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these "lie machines" apart. Lie Machines is full of riveting behind-the-scenes stories from the world's biggest and most damagingly successful misinformation initiatives, including those used in Brexit and in U.S. elections. Howard not only shows how these campaigns evolved from older propaganda operations but also exposes their new powers, gives us insight into their effectiveness, and shows us how to shut them down.
Judgment Error in Lottery Play: When the Hot Hand Meets the Gambler’s Fallacy
We demonstrate that lottery markets can exhibit the “hot-hand” phenomenon, in which past winning numbers tend to attract a greater share of the bets in future draws even though past and future events are independent. This is surprising, as previous work has instead documented an opposite effect, the “gambler’s fallacy,” in the U.S. lottery market. The current literature also suggests that the gambler’s fallacy prevails when random numbers are generated by mechanical devices, such as in lottery games. We use two sets of naturally occurring data to show that both the gambler’s fallacy and the hot-hand fallacy can exist in different types of lottery games. We then run online experimental studies that mimic lottery games with one, two, or three winning numbers. Our experimental results show that the number of winning prizes affects behavior. In particular, whereas a single-prize game leads to a strong presence of the gambler’s fallacy, we observe a significant increase in hot-hand behavior in multiple-prize games with two or three winning numbers. This paper was accepted by David Simchi-Levi, behavioral economics.
A Tool for Addressing Construct Identity in Literature Reviews and Meta-Analyses
The problem of detecting whether two behavioral constructs reference the same real-world phenomenon has existed for over 100 years. Discordant naming of constructs is here termed the construct identity fallacy (CIF). We designed and evaluated the construct identity detector (CID), the first tool with large-scale construct identity detection properties and the first tool that does not require respondent data. Through the adaptation and combination of different natural language processing (NLP) algorithms, six designs were created and evaluated against human expert decisions. All six designs were found capable of detecting construct identity, and a design combining two existing algorithms significantly outperformed the other approaches. A set of follow-up studies suggests the tool is valuable as a supplement to expert efforts in literature review and meta-analysis. Beyond design science contributions, this article has important implications for the taxonomic structure of social and behavioral science constructs, for the jingle and jangle fallacies, for the core of the Information Systems nomological network, and for the inaccessibility of social and behavioral science knowledge. In sum, CID represents an important, albeit tentative, step toward discipline-wide identification of construct identities.
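As an illustration of the general idea of detecting construct identity from construct definitions (not the actual CID design, which combines several NLP algorithms), the sketch below flags pairs of definitions whose wording is highly similar, using TF-IDF and cosine similarity from scikit-learn. The definitions are paraphrased textbook examples from the technology-acceptance literature, and the 0.5 threshold is a hypothetical cutoff chosen for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Paraphrased example definitions; any pair above the (hypothetical) threshold
# is flagged as a possible jangle fallacy: different names, same construct.
definitions = {
    "perceived usefulness": "the degree to which a person believes that using "
                            "a system would enhance his or her job performance",
    "performance expectancy": "the degree to which an individual believes that "
                              "using the system will help attain gains in job performance",
    "computer anxiety": "an individual's apprehension or fear when using or "
                        "considering the use of computers",
}

names = list(definitions)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(definitions.values())
similarity = cosine_similarity(tfidf)

THRESHOLD = 0.5
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if similarity[i, j] >= THRESHOLD:
            print(f"possible construct identity: {names[i]!r} ~ {names[j]!r} "
                  f"(cosine similarity {similarity[i, j]:.2f})")
```

On these toy definitions only the first two, which share most of their non-stop-word vocabulary, exceed the cutoff; a production tool would of course need far more than surface word overlap.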
The Fallacy Fallacy: From the Owl of Minerva to the Lark of Arete
The fallacy fallacy is either the misdiagnosis of fallacy or the supposition that the conclusion of a fallacy must be a falsehood. This paper explores the relevance of these and related errors of reasoning for the appraisal of arguments, especially within virtue theories of argumentation. In particular, the fallacy fallacy exemplifies the Owl of Minerva problem, whereby tools devised to understand a norm make possible new ways of violating the norm. Fallacies are such tools and so are vices. Hence a similar problem arises with argumentative vices. Fortunately, both instances of the problem have a common remedy.
Lack of group-to-individual generalizability is a threat to human subjects research
Only for ergodic processes will inferences based on group-level data generalize to individual experience or behavior. Because human social and psychological processes typically have an individually variable and time-varying nature, they are unlikely to be ergodic. In this paper, six studies with a repeated-measures design were used for symmetric comparisons of interindividual and intraindividual variation. Our results delineate the potential scope and impact of nonergodic data in human subjects research. Analyses across six samples (with 87–94 participants and an equal number of assessments per participant) showed some degree of agreement in central tendency estimates (means) between groups and individuals across constructs and data collection paradigms. However, the variance around the expected value was two to four times larger within individuals than within groups. This suggests that literatures in the social and medical sciences may overestimate the accuracy of aggregated statistical estimates. This observation could have serious consequences for how we understand the consistency between group and individual correlations and the generalizability of conclusions between domains. Researchers should explicitly test for equivalence of processes at the individual and group level across the social and medical sciences.
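A toy simulation can make the interindividual-versus-intraindividual comparison concrete. The sketch below is not the study's data or analysis: each simulated person has a stable trait mean plus occasion-to-occasion fluctuation, the standard deviations are invented, and group-level spread is operationalized (one choice among several) as the variance of the person-level means.

```python
import random
import statistics

random.seed(1)

N_PEOPLE, N_OCCASIONS = 90, 90        # roughly the scale of the samples described
WITHIN_SD, BETWEEN_SD = 2.0, 1.0      # toy values chosen for illustration only

# Each person: a stable trait mean plus independent occasion-level fluctuation.
trait_means = [random.gauss(0, BETWEEN_SD) for _ in range(N_PEOPLE)]
scores = [[random.gauss(m, WITHIN_SD) for _ in range(N_OCCASIONS)] for m in trait_means]

# Group-level (interindividual) variance: spread of the person-level means.
between_var = statistics.variance(statistics.mean(person) for person in scores)

# Within-person (intraindividual) variance: spread over time, averaged across people.
within_var = statistics.mean(statistics.variance(person) for person in scores)

print(f"variance of person means (group level): {between_var:.2f}")
print(f"average variance within a person:       {within_var:.2f}")
```

With these assumed parameters the within-person variance comes out several times larger than the variance of the aggregated person means, mirroring the qualitative point that group-level aggregates can look far more precise than individual trajectories warrant.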
The number of available sample observations modulates gambler’s fallacy in betting behaviors
The gambler’s fallacy is a prevalent cognitive bias in betting behaviors, characterized by the mistaken belief that an independent and identically distributed random process exhibits negative serial correlation. This misconception often arises when individuals observe a series of realized outcomes from the process. We study how varying the quantity of information about the sample of realized outcomes influences individuals’ propensity towards the gambler’s fallacy in repeated betting. Experimentally, we uncover compelling evidence of the gambler’s fallacy and its counterpart, the hot-outcome fallacy, associated respectively with the frequency and duration of consecutive outcomes within the observed sample. While an increase in the number of sample observations marginally heightens the inclination towards the gambler’s fallacy at low winning probabilities, the effect is strikingly different when the likelihood of winning is 50% or more. In these cases, a small sample significantly exacerbates the gambler’s fallacy, whereas a larger sample substantially diminishes its impact. Furthermore, we identify individual variations in response to changes in information, influenced by factors such as gender, experience in lottery participation, and cognitive ability. Our findings underscore the sensitivity of gambling fallacies to contextual factors in decision-making, illustrating how the interplay of these factors modulates fallacious betting behaviors.
Understanding the dominant controls on litter decomposition
Litter decomposition is a biogeochemical process fundamental to element cycling within ecosystems, influencing plant productivity, species composition and carbon storage. Climate has long been considered the primary broad‐scale control on litter decomposition rates, yet recent work suggests that plant litter traits may predominate. Both decomposition paradigms, however, rely on inferences from cross‐biome litter decomposition studies that analyse site‐level means. We re‐analyse data from a classical cross‐biome study to demonstrate that previous research may falsely inflate the regulatory role of climate on decomposition and mask the influence of unmeasured local‐scale factors. Using the re‐analysis as a platform, we advocate experimental designs of litter decomposition studies that involve high within‐site replication, measurements of regulatory factors and processes at the same local spatial grain, analysis of individual observations and biome‐scale gradients. Synthesis. We question the assumption that climate is the predominant regulator of decomposition rates at broad spatial scales. We propose a framework for a new generation of studies focused on factoring local‐scale variation into the measurement and analysis of soil processes across broad scales. Such efforts may suggest a revised decomposition paradigm and ultimately improve confidence in the structure, parameter estimates and hence projections of biogeochemical models.
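The aggregation concern raised here, that analysing site-level means can overstate how much a site-level driver such as climate explains, can be illustrated with a simulated example. The sketch below is not a re-analysis of the original data: the effect size, the amount of local-scale variation, and the site and replicate counts are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_reps = 20, 30

climate = rng.uniform(0, 10, n_sites)              # one climate value per site
local = rng.normal(0.0, 2.0, (n_sites, n_reps))    # unmeasured local-scale variation
decay = 0.3 * climate[:, None] + local             # simulated per-litterbag decay rate

def r_squared(x, y):
    """R^2 from a simple least-squares line of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

# Site-level means average away the local variation, so climate appears dominant.
r2_site_means = r_squared(climate, decay.mean(axis=1))

# The same climate signal explains much less of the individual observations.
r2_observations = r_squared(np.repeat(climate, n_reps), decay.ravel())

print(f"R^2 with site means:              {r2_site_means:.2f}")
print(f"R^2 with individual observations: {r2_observations:.2f}")
```

Under these assumptions the fit to site means explains most of the variance while the fit to individual litterbags explains only a small fraction, which is the statistical pattern behind the authors' call for high within-site replication and analysis of individual observations.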