5 results for "Hartwig, Marissa K."
Study strategies of college students: Are self-testing and scheduling related to achievement?
Previous studies, such as those by Kornell and Bjork (Psychonomic Bulletin & Review, 14:219–224, 2007) and Karpicke, Butler, and Roediger (Memory, 17:471–479, 2009), have surveyed college students’ use of various study strategies, including self-testing and rereading. These studies have documented that some students do use self-testing (but largely for monitoring memory) and rereading, but the researchers did not assess whether individual differences in strategy use were related to student achievement. Thus, we surveyed 324 undergraduates about their study habits as well as their college grade point average (GPA). Importantly, the survey included questions about self-testing, scheduling one’s study, and a checklist of strategies commonly used by students or recommended by cognitive research. Use of self-testing and rereading were both positively associated with GPA. Scheduling of study time was also an important factor: Low performers were more likely to engage in late-night studying than were high performers; massing (vs. spacing) of study was associated with the use of fewer study strategies overall; and all students—but especially low performers—were driven by impending deadlines. Thus, self-testing, rereading, and scheduling of study play important roles in real-world student achievement.
The Scarcity of Interleaved Practice in Mathematics Textbooks
A typical mathematics assignment consists of a block of problems devoted to the same topic, yet several classroom-based randomized controlled trials have found that students obtain higher test scores when most practice problems are mixed with different kinds of problems—a format known as interleaved practice. Interleaving prevents students from safely assuming that each practice problem relates to the same skill or concept as the previous problem, thus forcing them to choose an appropriate strategy on the basis of the problem itself. Yet despite the efficacy of interleaved practice, blocked practice predominates in most mathematics textbooks. As an illustration, we examined 13,505 practice problems in six representative mathematics texts and found that only 9.7% of the problems were interleaved. This translates to only one or two interleaved problems per school day. In brief, strong evidence suggests that students benefit from heavy doses of interleaved practice, yet most mathematics texts provide scarcely any.
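The abstract's "one or two per school day" figure follows from its own numbers; a quick back-of-the-envelope check makes the step explicit (the 180-day school year is an assumption, not stated in the abstract):

```python
# Figures taken from the abstract above.
total_problems = 13_505    # practice problems examined, across six texts
num_texts = 6
interleaved_frac = 0.097   # 9.7% of problems were interleaved

# Assumed: a typical US school year of about 180 days (not in the abstract).
school_days = 180

per_text = total_problems / num_texts * interleaved_frac  # ~218 interleaved problems per text
per_day = per_text / school_days                          # ~1.2 interleaved problems per day
```

With roughly 218 interleaved problems per textbook spread over a 180-day year, a student indeed sees only one or two interleaved problems on a given school day.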
Students’ Perceptions of Effective Math Learning Strategies
Two highly effective math learning strategies are spaced practice (in which problems of the same kind are distributed across many sessions) and interleaved practice (in which problems of different kinds are mixed rather than blocked). Though these strategies are supported by data, students may be reluctant to use them if they perceive the strategies as ineffective or unpleasant. In Study 1, we surveyed 174 grade 7 math students about the efficacy and likability of spaced and interleaved practice. Spaced practice was often judged likable, but nearly half of students failed to recognize its efficacy. Interleaved practice was judged both unlikable and inefficacious by most students. In Study 2, we further explored perceptions of interleaving in a survey of 233 grade 7 math students. Again, students erroneously judged interleaved practice to have low efficacy. Compared to blocked practice, interleaved practice was judged less effective, less preferable, more time-consuming, and more difficult. This work identifies perceptions that may discourage students from using effective learning strategies and also shows that specific perceptions differ by strategy. Helping students overcome their negative perceptions of spacing and interleaving is an important future direction.
The contribution of judgment scale to the unskilled-and-unaware phenomenon: How evaluating others can exaggerate over- (and under-) confidence
The unskilled-and-unaware phenomenon occurs when low performers tend to overestimate their performance on a task, whereas high performers judge their performance more accurately (and sometimes underestimate it). In previous research, this phenomenon has been observed for a variety of cognitive tasks and judgment scales. However, the role of judgment scale in producing the unskilled-and-unaware phenomenon has not been systematically investigated. Thus, we present four studies in which all participants judged their performance on both a relative scale (percentile rank) and an absolute scale (number correct). The studies included a variety of performance tasks (general knowledge questions, math problems, introductory psychology questions, and logic questions) and test formats (multiple-choice, recall). Across all tasks and formats, the percentile-rank judgments were less accurate than the absolute judgments, particularly for low and high performers. Furthermore, in Studies 1–3, the absolute judgments were highly accurate, even when the percentile-rank judgments were not. Thus, differences in the accuracy of percentile-rank judgments across skill levels do not always represent differences in self-awareness, but rather they may arise from difficulties that performers have at evaluating how well others are performing. Most importantly, the unskilled-and-unaware phenomenon on a relative scale does not guarantee inaccurate self-evaluations of absolute performance.
Do test items that induce overconfidence make unskilled performers unaware?
When a person estimates their global (overall) performance on a test they just completed, low performers often overestimate their performance whereas high performers estimate more accurately or slightly underestimate. Thus, low performers have been described as 'unskilled and unaware' (Kruger & Dunning, 1999). However, recent evidence (Hartwig & Dunlosky, in press) demonstrates that low performers sometimes estimate accurately. What determines whether a participant estimates accurately vs. inaccurately remains unclear. Thus, the present research asks: What might participants use as the basis for their global estimates, and can it explain the accuracy of those estimates? One intuitive possibility is that participants use their response confidence in test items as the basis of their global estimates. A simple instantiation of this idea is described by the item-frequency hypothesis, which posits that participants compute the frequency of their high-confidence responses, and this frequency serves as an estimate of their global performance. A corollary of this hypothesis is that items that produce high confidence in wrong answers (i.e., false alarms, or FAs) will contribute to global overestimates, whereas items that produce low confidence in correct answers (i.e., misses) will contribute to global underestimates. Study 1 found preliminary support for the hypothesis, because the frequency of high-confidence responses on a typical trivia test was correlated with participants' global estimates, and the imbalance of FAs vs. misses predicted the accuracy of those estimates. To evaluate the hypothesis experimentally, Studies 2 and 3 manipulated the frequencies of FAs and misses that a trivia test was expected to yield, and participants were randomly assigned to receive one of the tests. Tests designed to yield many FAs (relative to misses) produced global overestimation, tests designed to yield more misses (relative to FAs) produced underestimation, and tests designed to yield a balance of FAs and misses produced accurate estimation. Thus, the selection of test items affects global estimates and their accuracy. The imbalance of FAs and misses could not explain all individual differences in estimation accuracy, but it nonetheless was a moderate predictor of global estimation accuracy.
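The item-frequency hypothesis described in this abstract is simple enough to sketch directly. The sketch below is illustrative only (the function names and the 0.5 confidence threshold are assumptions, not from the paper); it also makes the corollary explicit: under this hypothesis, estimation bias equals the FA/miss imbalance, since the high-confidence-and-correct items cancel out.

```python
def global_estimate(confidences, threshold=0.5):
    """Item-frequency hypothesis: a test-taker's global estimate is
    simply the count of their high-confidence responses."""
    return sum(1 for c in confidences if c >= threshold)

def fa_miss_imbalance(confidences, correct, threshold=0.5):
    """False alarms (high confidence, wrong answer) inflate the global
    estimate; misses (low confidence, correct answer) deflate it."""
    fas = sum(1 for c, ok in zip(confidences, correct)
              if c >= threshold and not ok)
    misses = sum(1 for c, ok in zip(confidences, correct)
                 if c < threshold and ok)
    return fas - misses

# One false alarm (item 2) balanced by one miss (item 3):
# the hypothesis predicts an accurate global estimate.
confidences = [0.9, 0.8, 0.2, 0.9, 0.1]
correct     = [True, False, True, True, False]

bias = global_estimate(confidences) - sum(correct)
assert bias == fa_miss_imbalance(confidences, correct)  # estimate - actual = FAs - misses
```

This identity is what Studies 2 and 3 exploit: loading a test with FAs (relative to misses) should produce overestimation, and the reverse should produce underestimation.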