59 results for "Chandler, Jesse"
Running Experiments on Amazon Mechanical Turk
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss some additional benefits, such as the possibility of longitudinal, cross-cultural, and prescreening designs, and offer some advice on how best to manage a common subject pool.
Using Nonnaive Participants Can Reduce Effect Sizes
Although researchers often assume their participants are naive to experimental materials, this is not always the case. We investigated how prior exposure to a task affects subsequent experimental results. Participants in this study completed the same set of 12 experimental tasks at two points in time, first as a part of the Many Labs replication project and again a few days, a week, or a month later. Effect sizes were markedly lower in the second wave than in the first. The reduction was most pronounced when participants were assigned to a different condition in the second wave. We discuss the methodological implications of these findings.
The Average Laboratory Samples a Population of 7,300 Amazon Mechanical Turk Workers
Using capture-recapture analysis, we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool, which often overlaps extensively with the pools of the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.
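The estimation idea behind this abstract can be illustrated with the simplest two-sample capture-recapture (Lincoln-Petersen) estimator: if the same worker pool is sampled twice, the overlap between the samples pins down the pool's size. The sketch below is a minimal Python illustration under that assumption; the paper itself works from multi-sample data across seven laboratories, and the worker IDs and counts here are invented.

```python
# Minimal two-sample capture-recapture (Lincoln-Petersen) sketch.
# Illustrative only: the paper draws on multi-sample data from
# seven laboratories; the worker IDs below are invented.

def lincoln_petersen(sample1: set, sample2: set) -> float:
    """Estimate population size as N = (n1 * n2) / m, where m is
    the number of individuals seen in both samples (recaptures)."""
    n1, n2 = len(sample1), len(sample2)
    m = len(sample1 & sample2)
    if m == 0:
        raise ValueError("No recaptures; estimate is undefined.")
    return n1 * n2 / m

# Hypothetical worker IDs captured by two successive studies.
study_a = {"w01", "w02", "w03", "w04", "w05", "w06"}
study_b = {"w04", "w05", "w06", "w07", "w08"}

print(lincoln_petersen(study_a, study_b))  # 6 * 5 / 3 = 10.0
```

Intuitively, the rarer recaptures are, the larger the pool must be. A turnover half-life of about 7 months also means real estimates must account for workers entering and leaving the pool, which this simple two-sample version ignores.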
Inside the Turk: Understanding Mechanical Turk as a Participant Pool
Mechanical Turk (MTurk), an online labor market created by Amazon, has recently become popular among social scientists as a source of survey and experimental data. The workers who populate this market have been assessed on dimensions that are universally relevant to understanding whether, why, and when they should be recruited as research participants. We discuss the characteristics of MTurk as a participant pool for psychology and other social sciences, highlighting the traits of the MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares to that from other pools and depends on controllable and uncontrollable factors.
Fast Thought Speed Induces Risk Taking
In two experiments, we tested for a causal link between thought speed and risk taking. In Experiment 1, we manipulated thought speed by presenting neutral-content text at either a fast or a slow pace and having participants read the text aloud. In Experiment 2, we manipulated thought speed by presenting fast-, medium-, or slow-paced movie clips that contained similar content. Participants who were induced to think more quickly took more risks with actual money in Experiment 1 and reported greater intentions to engage in real-world risky behaviors, such as unprotected sex and illegal drug use, in Experiment 2. These experiments provide evidence that faster thinking induces greater risk taking.
Use does not wear ragged the fabric of friendship: Thinking of objects as alive makes people less willing to replace them
Anthropomorphic beliefs about objects lead people to treat them as if they were alive. Two experiments test how anthropomorphic thought affects consumers' product replacement intentions. Consumers induced to think about their car in anthropomorphic terms (i) were less willing to replace it and (ii) gave less weight to its quality when making replacement decisions. Instead, they (iii) attended to (experimentally induced connotations of) the car's “warmth,” a feature usually considered relevant in the interpersonal domain. While anthropomorphic beliefs about brands are often seen as advantageous by marketers because they increase brand loyalty, similar beliefs about products may be less desirable.
Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers
Crowdsourcing services—particularly Amazon Mechanical Turk—have made it easy for behavioral scientists to recruit research participants. However, researchers have overlooked crucial differences between crowdsourcing and traditional recruitment methods that provide unique opportunities and challenges. We show that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in the exclusion of research participants. We describe how both of these problems can be avoided using advanced interface features that also allow prescreening and longitudinal data collection. Using these techniques can minimize the effects of previously ignored drawbacks and expand the scope of crowdsourcing as a tool for psychological research.
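The interface features in question are MTurk's worker Qualifications. As a hedged sketch of that general approach (assuming the boto3 AWS SDK with valid MTurk credentials; the qualification name and worker IDs are hypothetical placeholders, not anything prescribed by the paper), a researcher can tag past participants with a qualification and then require its absence on later HITs:

```python
# Sketch: screen out prior participants via an MTurk Qualification.
# Assumes boto3 with valid AWS/MTurk credentials; the name and
# worker IDs below are hypothetical placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Create a qualification type marking workers who already took part.
qual = mturk.create_qualification_type(
    Name="ParticipatedInStudySeries",  # hypothetical name
    Description="Worker completed an earlier study in this series.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Tag each past participant with the qualification.
for worker_id in ("A1EXAMPLE", "A2EXAMPLE"):  # hypothetical IDs
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# Attach this requirement when creating a new HIT so that only
# workers who do NOT hold the qualification can see and accept it.
naive_only = {
    "QualificationTypeId": qual_id,
    "Comparator": "DoesNotExist",
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}
```

The same mechanism supports the prescreening and longitudinal designs mentioned above: requiring the qualification instead (`"Comparator": "Exists"`) restricts a follow-up HIT to exactly the workers tagged in an earlier wave.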
Direct replications in the era of open sampling
Data collection in psychology increasingly relies on "open populations" of participants recruited online, which presents both opportunities and challenges for replication. Reduced costs and the possibility of accessing the same populations allow for more informative replications. However, researchers should ensure the directness of their replications by dealing with the threats of participant nonnaiveté and selection effects.
In the "I" of the Storm: Shared Initials Increase Disaster Donations
People prefer their own initials to other letters, a preference that influences choices in many domains. However, this "name letter effect" (Nuttin, 1987) may not apply to negatively valenced targets: previous research suggests that people are motivated to downplay or distance themselves from negative targets associated with the self (e.g., Finch & Cialdini, 1989). In the current research, we examine the relationship between same-initial preferences and negatively valenced stimuli. Specifically, we examine donations to disaster relief after seven major hurricanes to test the influence of the name letter effect with negatively valenced targets. Individuals who shared an initial with the hurricane name were overrepresented among hurricane relief donors relative to the baseline distribution of initials in the donor population. This finding suggests that people may seek to ameliorate the negative effects of a disaster when there are shared characteristics between the disaster and the self.
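Operationally, "overrepresented relative to the baseline distribution" is a comparison of an observed donor proportion against a baseline proportion. Below is a hedged sketch of that kind of check, with invented counts and a generic one-sided exact binomial test from scipy; it is not the paper's actual analysis.

```python
# Sketch: test whether initial-matching donors are overrepresented
# relative to a baseline rate. All counts are invented; this is a
# generic exact binomial test, not the paper's actual analysis.
from scipy.stats import binomtest

matching_donors = 420    # hypothetical donors sharing the initial
total_donors = 10_000    # hypothetical total donor count
baseline_rate = 0.035    # hypothetical baseline share of that initial

result = binomtest(matching_donors, total_donors, baseline_rate,
                   alternative="greater")
print(f"observed rate: {matching_donors / total_donors:.4f}")
print(f"one-sided p-value vs. baseline: {result.pvalue:.4g}")
```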
Response to Comment on “Estimating the reproducibility of psychological science”
Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither is yet warranted.