Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
15,391 result(s) for "Reasoning tests"
Reliability Evidence for the Gibson Assessment of Cognitive Skills (GACS): A Brief Tool for Screening Cognitive Skills Across the Lifespan
by Moore, Amy Lawson; Miller, Terissa M; Ledbetter, Christina
in Ability tests, Adults, Cognition
2021
The aim of the current study was to examine and report three sources of reliability evidence for the Gibson Assessment of Cognitive Skills, a brief paper-based cognitive screening tool for children and adults that measures working memory, processing speed, visual processing, logic and reasoning, and three auditory processing constructs (sound blending, sound segmenting, and sound deletion), along with word attack skills.
The sample (n = 103) for the current study consisted of children (n = 73) and adults (n = 30) between the ages of 6 and 80 (M = 20.2), of whom 47.6% were female and 52.4% were male. Analyses of test data included calculation of internal consistency reliability, split-half reliability, and test-retest reliability.
Overall coefficient alphas ranged from 0.80 to 0.94, providing a strong source of internal consistency reliability evidence. The split-half reliability coefficients ranged from 0.83 to 0.96 overall, providing a strong second source of reliability evidence. Across all ages, the test-retest reliability coefficients ranged from 0.83 to 0.98. For adults ages 18 to 80, test-retest coefficients ranged from 0.73 to 0.99; for children ages 6 through 17, they ranged from 0.89 to 0.97. All correlations were statistically significant at p < 0.001, indicating strong test-retest reliability and stability across administrations.
The evidence collected for the current study suggests that the GACS is a reliable brief screening tool for assessing cognitive skill performance in both children and adults.
Journal Article
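The coefficients reported in the abstract above are Cronbach's alphas. As a minimal illustration of how such a coefficient is computed from per-item scores (the items and scores below are hypothetical, not data from the study):

```python
# Sketch: Cronbach's coefficient alpha from raw item scores.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(items):
    """items: list of per-item score lists, one entry per respondent."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / var(totals))

# Hypothetical example: three items rated 1-5 by five respondents
scores = [
    [3, 4, 2, 5, 4],
    [3, 5, 1, 5, 4],
    [2, 4, 2, 4, 5],
]
alpha = cronbach_alpha(scores)
```

Alphas at or above roughly 0.80, like those reported for the GACS, are conventionally read as good internal consistency for a screening instrument.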
Incorporation of an Explicit Critical-Thinking Curriculum to Improve Pharmacy Students’ Critical-Thinking Skills
by Bond, Rucha; Cone, Catherine; Godwin, Donald
in clinical reasoning, Cohort Studies, critical thinking
2016
The Health Sciences Reasoning Test (HSRT) is a validated instrument to assess critical-thinking skills. The objective of this study was to determine if HSRT results improved in second-year student pharmacists after exposure to an explicit curriculum designed to develop critical-thinking skills.
In December 2012, the HSRT was administered to students who were in their first year of pharmacy school. Starting in August 2013, students attended a 16-week laboratory curriculum using simulation, formative feedback, and clinical reasoning to teach critical-thinking skills. Following completion of this course, the HSRT was readministered to the same cohort of students.
All 83 students enrolled in the course took the HSRT, and after exclusion criteria were applied, 90% of the scores were included in the statistical analysis. Students were excluded if they did not complete more than 60% of the questions or took less than 15 minutes to finish the test. Significant changes occurred in overall HSRT scores and in the subdomains of deduction, evaluation, and inference after students completed the critical-thinking curriculum.
Significant improvement in HSRT scores occurred following student immersion in an explicit critical-thinking curriculum. The HSRT was useful in detecting these changes, showing that critical-thinking skills can be learned and then assessed over a relatively short period using a standardized, validated assessment tool like the HSRT.
Journal Article
COMPREHENSIVE ANALYSIS OF THE FORT INSTRUMENT: USING DISTRACTOR ANALYSIS TO EXPLORE STUDENTS’ SCIENTIFIC REASONING BASED ON ACADEMIC LEVEL AND GENDER DIFFERENCE
by Ha, Minsu; Aini, Rahmi Qurota; Fadillah, Sarah Meilani
in College Students, Education, Gender
2021
Scientific reasoning ability is essential in the current digital age, particularly for judgement and decision-making in complex problems. Few studies have explored scientific reasoning ability in depth, especially in relation to confidence level and gender. The scientific reasoning ability of Indonesian upper-secondary school and university students was examined and compared with previously recorded data from US students. In this study, data were collected from 372 university and 528 upper-secondary students in Indonesia. Students' scientific reasoning ability was measured using a scientific formal reasoning test (FORT). In addition, confidence-level and metacognitive data were collected through self-reported measures. A two-way ANOVA was performed to compare mean differences between groups based on academic level and gender and to test for interaction between the variables. Students' confidence in selecting the correct answer versus a distractor was analyzed using an independent t-test. The results reveal that many Indonesian students selected specific distractors with relatively high confidence. Moreover, upper-secondary school students and female students selected more distractors than their counterparts. Finally, the factors underlying Indonesian students' responses to the scientific formal reasoning test are discussed.
Journal Article
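The study above compares confidence on correct versus distractor answers with an independent t-test. A minimal sketch of the test statistic, using made-up confidence ratings rather than the study's data:

```python
import math

# Sketch: independent-samples Student's t statistic (equal variances
# assumed), of the kind used to compare two groups' mean confidence.

def students_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)     # sum of squared deviations
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)       # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical confidence ratings (1-5) for illustration only
correct = [4, 5, 4, 3, 5, 4]        # confidence on correct answers
distractor = [3, 3, 4, 2, 3, 3]     # confidence on distractor answers
t = students_t(correct, distractor)
```

The resulting t would be compared against the t distribution with na + nb - 2 degrees of freedom to decide whether the two confidence means differ significantly.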
Turning molehills into mountains: Sleepiness increases workplace interpretive bias
2015
Three studies draw from evolutionary theory to assess whether sleepiness increases interpretive biases in workplace social judgments. Study 1 established a relationship between sleepiness and interpretive bias using ambiguous interpersonal scenarios from a measure commonly used in personnel selection (N = 148). Study 2 explored the boundary conditions of the sleepiness–interpretive bias link via an experimental online field survey of U.S. adults (N = 433). Sleepiness increased interpretive bias when social threats were clearly present (unfair workplace) but did not affect bias in the absence of threat (fair workplace). Study 3 replicated and extended findings from the previous two studies using objective measures of sleep loss and a quasi-experimental manipulation of minor sleep loss (N = 175). Negative affect, ego depletion, and personality variables did not account for the observed relationships. Overall, results suggest that a self-protection/evolutionary perspective best explains the effects of sleepiness on workplace interpretive biases. These studies advance the current research on sleep in organizations by adding a cognitive “threat interpretation” bias approach to past work examining the emotional reaction/behavioral side of sleep disruption. Interpretive biases due to sleepiness may have significant implications for employee health and counterproductive behavior.
Journal Article
How to select a true leader? Introducing methods for measurement of implicit power motive
by Galić, Zvonimir; Trojak, Nataša
in Business Economy / Management, Clinical psychology, Conditional Reasoning Test
2020
Organizations mark the life of every individual, and the success and well-being of individuals largely depend on the success of the organizations they belong to. An organization's success is significantly influenced by those in charge of it, its leaders or managers, so it is important for organizations to choose those who will do this job well. Many studies have examined successful leadership, and the dominant approach investigates the traits of successful leaders. One trait identified as an important element of a leader's success is the power motive. It consists of implicit and explicit dimensions, and the implicit dimension has been shown to be an important, yet mostly overlooked, determinant of leadership performance. Measuring the implicit dimension requires specially crafted instruments, including the "classic" Thematic Apperception Test as well as recently introduced instruments such as the Implicit Association Test and the Conditional Reasoning Test for Power Motive. In this paper, we argue that introducing tests that assess the implicit power motive into the human resource management practice of business organizations might significantly improve selection procedures for leadership positions.
Journal Article
Myths and Tradeoffs
by Board on Testing and Assessment; National Research Council; Division of Behavioral and Social Sciences and Education
in ACT Assessment, Admission, Educational tests and measurements
2000, 1999
More than 8 million students enrolled in 4-year, degree-granting postsecondary institutions in the United States in 1996. The multifaceted system through which these students applied to and were selected by the approximately 2,240 institutions in which they enrolled is complex, to say the least; for students, parents, and advisers, it is often stressful and sometimes bewildering. This process raises important questions about the social goals that underlie the sorting of students, and it has been the subject of considerable controversy.
The role of standardized tests in this sorting process has been one of the principal flashpoints in discussions of its fairness. Tests have been cited as the chief evidence of unfairness in lawsuits over admissions decisions, criticized as biased against minorities and women, and blamed for the fierce competitiveness of the process. Yet tests have also been praised for their value in providing a common yardstick for comparing students from diverse schools with different grading standards.
Myths and Tradeoffs identifies and corrects some persistent myths about standardized admissions tests, highlights some of the specific tradeoffs that decisions about the uses of tests entail, presents conclusions and recommendations about the role of tests in college admissions, and lays out several issues on which information would clearly help decision makers but for which the existing data are either insufficient or in need of synthesis and interpretation. This report will benefit a broad audience of college and university officials, state and other officials and lawmakers, and others who are wrestling with decisions about admissions policies, definitions of merit, legal actions, and other issues.
Cognitive Reflection and Moral Reasoning
2022
The goal of this study was to examine the relationship between reflectivity/impulsivity and aspects of moral reasoning (general level and individual stages) while considering assessment times and the relevance of moral arguments. The study involved 442 participants (163 female and 279 male) aged between 19 and 76, with different levels of education. The study was conducted online, and two measuring instruments were applied: the cognitive reflection test and the test of moral reasoning. The obtained results showed that problem-solving time was significantly shorter for intuitive answers than for correct answers. Predominantly reflective and predominantly impulsive individuals differed in various aspects of problem solving and the assessment of moral arguments. Predominantly impulsive individuals demonstrated significantly longer problem-solving time for correct answers (there were no differences for intuitive answers), a lower general level of moral reasoning, longer assessment time, and higher assessment of the relevance of moral arguments (sensitivity to argument strength) in almost all stages of moral development. The results suggest that dominant cognitive styles determine performance in different ways across tasks of different types.
Journal Article
Internal Structure and Partial Invariance across Gender in the Spanish Version of the Reasoning Test Battery
2015
The Reasoning Test Battery (BPR) is an instrument built on theories of the hierarchical organization of cognitive abilities and therefore consists of different tasks involving abstract, numerical, verbal, practical, spatial, and mechanical reasoning. It was originally created in Belgium and later adapted to Portuguese. There are three forms of the battery, consisting of different items and scales, which cover an age range from 9 to 22. This paper focuses on the adaptation of the BPR to Spanish and analyzes different aspects of its internal structure: (a) exploratory item factor analysis was applied to assess the presence of a dominant factor for each partial scale; (b) the general underlying model was evaluated through confirmatory factor analysis; and (c) factorial invariance across gender was studied. The sample consisted of 2,624 Spanish students. The results indicated the presence of a general factor beyond the scales, with equivalent values for men and women, and gender differences in the factorial structure affecting the numerical, abstract, and mechanical reasoning scales.
Journal Article
The Persian adaptation of Baddeley’s 3-min grammatical reasoning test
by Baghaei, Purya; Tabatabaee-Yazdi, Mona; Khoshdel-Niyat, Fahimeh
in Adaptation, Behavioral Science and Psychology, Biological Psychology
2017
Baddeley’s grammatical reasoning test is a quick and efficient measure of fluid reasoning that is commonly used in research on cognitive abilities and on the impact of stressors and environmental factors on cognitive performance. The test, however, is verbal and can only be used with native speakers of English. In this study, we adapted the test for use in Persian, using a different pair of verbs and geometrical shapes instead of English letters. The adapted test had high internal consistency and retest reliability estimates. It also showed an excellent fit to a one-factor confirmatory factor model and correlated acceptably with other measures of fluid intelligence and with participants’ grade point average (GPA).
Journal Article
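Grammatical reasoning items of the kind described above pair a short claim about the order of two symbols with a symbol pair; the respondent judges each claim true or false. A sketch of such an item generator (the exact wording and symbols are an assumption for illustration, not the published test):

```python
import random

def claim_is_true(subject, verb, obj, pair):
    """Does 'subject verb obj' correctly describe the symbol pair?"""
    if verb == "precedes":
        return pair.index(subject) < pair.index(obj)
    return pair.index(subject) > pair.index(obj)  # verb == "follows"

def make_item(rng, symbols=("A", "B")):
    """Generate one item: a claim string and its truth value."""
    a, b = symbols
    subject, obj = rng.choice([(a, b), (b, a)])
    verb = rng.choice(["precedes", "follows"])
    pair = rng.choice([a + b, b + a])
    claim = f"{subject} {verb} {obj} : {pair}"
    return claim, claim_is_true(subject, verb, obj, pair)

rng = random.Random(0)
items = [make_item(rng) for _ in range(5)]  # e.g. ('B follows A : AB', True)
```

In an adaptation like the Persian one, the two verbs and the symbols (geometrical shapes instead of letters) would be swapped while the true/false logic stays identical.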