Catalogue Search | MBRL
Explore the vast range of titles available.
322 results for "Educational tests and measurements -- Computer programs"
Computers and their impact on state assessments
2012
The "Race To The Top" program strongly advocates the use of computer technology in assessments. It dramatically promotes computer-based testing, linear or adaptive, in K-12 state assessment programs. Moreover, assessment requirements driven by this federal initiative exponentially increase the complexity of assessment design and test development. This book provides readers with a review of the history and basics of computer-based tests. It also offers a macro perspective on designing such assessment systems in the K-12 setting, as well as a micro perspective on new challenges such as innovative items, scoring of such items, cognitive diagnosis, and vertical scaling for growth modeling and value-added approaches to assessment. The editors' goal is to provide readers with the information necessary to create a smarter computer-based testing system by following the advice and experience of experts from education as well as other industries. This book is based on a conference (http://marces.org/workshop.htm) held by the Maryland Assessment Research Center for Education Success. It presents multiple perspectives, including those of test vendors and state departments of education, on designing and implementing a computer-based test in the K-12 setting. The design and implementation of such a system requires deliberate planning and thorough consideration. The advice and experiences presented in this book serve as a guide to practitioners and as a good source of information for quality control. The technical issues discussed in this book are relatively new and unique to K-12 large-scale computer-based testing programs, especially due to the recent federal policy. Several chapters provide possible solutions for psychometricians dealing with the technical challenges related to innovative items, cognitive diagnosis, and growth modeling in computer-based linear or adaptive tests in the K-12 setting. (DIPF/Verlag).
Technology-based assessments for 21st century skills
2012
In this book, leading scholars from multiple disciplines present their latest research on how to best measure complex knowledge, skills, and abilities using technology-based assessments. All authors discuss theoretical and practical implications from their research and outline their visions for the future of technology-based assessments. Chapters Two through Seven discuss game-based or simulation-based assessments developed using the ECD [Evidence Centered Design] framework. In Chapter 8, the authors present 'Good Assessment for Twenty-First-Century Education' (GATE) and outline a framework for using games as assessments. In Chapter 9, the authors present an architecture for game-based assessment and describe the relationships among learning goals, cognitive demands, and domain and task features. Chapters 10 and 11 focus specifically on using technology to assess inquiry learning. In Chapters 12, 13, and 14, the authors use cognitive load theory as a framework for designing technology-based assessments. The volume concludes with two chapters that explore technology-based assessments at a meta-level. (DIPF/Orig./pr).
Online learning and assessment in higher education : a planning guide
by
Benson, Robyn
,
Brack, Charlotte
in
College teaching
,
Computer programs
,
Computer-assisted instruction
2010
The use of e-learning strategies in teaching is becoming increasingly popular, particularly in higher education. Online Learning and Assessment in Higher Education recognises the key decisions that need to be made by lecturers in order to introduce e-learning into their teaching. An overview of the tools for e-learning is provided, including the use of Web 2.0 and the issues surrounding the use of e-learning tools, such as resources, support, and institutional policy. The second part of the book focuses on e-assessment: design principles, different forms of online assessment, and the benefits and limitations of e-assessment.
- Provides an accessible introduction to teaching with technology
- Addresses the basic aspects of decision-making for successful introduction of e-learning, drawing on relevant pedagogical principles from contemporary learning theories
- Crosses boundaries between the fields of higher education and educational technology (within the discipline of education), drawing on discourse from both areas
Technology-Based Assessments for 21st Century Skills
by
Michael C. Mayrath, Jody Clarke-Midura, Daniel H. Robinson
in
Education
,
Educational tests and measurements
,
TECHNOLOGY & ENGINEERING
2017,2012
Creative problem solving, collaboration, and technology fluency are core skills requisite of any nation's workforce that strives to be competitive in the 21st Century. Teaching these types of skills is an economic imperative, and assessment is a fundamental component of any pedagogical program. Yet, measurement of these skills is complex due to the interacting factors associated with higher order thinking and multifaceted communication. Advances in assessment theory, educational psychology, and technology create an opportunity to innovate new methods of measuring students' 21st Century Skills with validity, reliability, and scalability. In this book, leading scholars from multiple disciplines present their latest research on how to best measure complex knowledge, skills, and abilities using technology-based assessments. All authors discuss theoretical and practical implications from their research and outline their visions for the future of technology-based assessments.
Computers and Their Impact on State Assessments
2013,2012
The Race To The Top program strongly advocates the use of computer technology in assessments. It dramatically promotes computer-based testing, linear or adaptive, in K-12 state assessment programs. Moreover, assessment requirements driven by this federal initiative exponentially increase the complexity of assessment design and test development. This book provides readers with a review of the history and basics of computer-based tests. It also offers a macro perspective on designing such assessment systems in the K-12 setting, as well as a micro perspective on new challenges such as innovative items, scoring of such items, cognitive diagnosis, and vertical scaling for growth modeling and value-added approaches to assessment. The editors' goal is to provide readers with the information necessary to create a smarter computer-based testing system by following the advice and experience of experts from education as well as other industries. This book is based on a conference (http://marces.org/workshop.htm) held by the Maryland Assessment Research Center for Education Success. It presents multiple perspectives, including those of test vendors and state departments of education, on designing and implementing a computer-based test in the K-12 setting. The design and implementation of such a system requires deliberate planning and thorough consideration. The advice and experiences presented in this book serve as a guide to practitioners and as a good source of information for quality control. The technical issues discussed in this book are relatively new and unique to K-12 large-scale computer-based testing programs, especially due to the recent federal policy. Several chapters provide possible solutions for psychometricians dealing with the technical challenges related to innovative items, cognitive diagnosis, and growth modeling in computer-based linear or adaptive tests in the K-12 setting.
Evaluating podcasts as a tool for OSCE training: a randomized trial using generative AI-powered simulation
by
Pers, Yves-Marie
,
Guerrot, Dominique
,
Figueres, Lucile
in
Adult
,
Artificial Intelligence
,
Clinical Competence - standards
2025
Introduction
Objective Structured Clinical Examinations (OSCEs) are critical for assessing clinical competencies in medical education. While traditional teaching methods remain prevalent, this study introduces an innovative approach by evaluating the effectiveness of an OSCE preparation podcast in improving medical students’ OSCE performance using nephrology as a proof of concept. This novel method offers a flexible and accessible format for supplementary learning, potentially revolutionizing medical education.
Methods
A mono-centric randomized controlled trial was conducted among 50 fourth-year medical students. Participants were randomly assigned to either the podcast intervention group or a control group. Both groups completed six nephrology-specific OSCE stations on DocSimulator, a generative AI-powered virtual patient platform. Scores from three baseline and three post-intervention OSCE stations were compared. The primary outcome was the change in OSCE scores. Secondary outcomes included interest in nephrology and students’ self-reported competence in nephrology-related skills.
Results
The baseline OSCE scores did not differ between the two groups (23.8 ± 3.9 vs. 23.3 ± 5.3; p = 0.77). After the intervention, the podcast group demonstrated a significantly higher OSCE score than the control group (27.6 ± 3.6 vs. 23.6 ± 5.0; p = 0.002), with a greater improvement in OSCE scores (+3.52 [0.7, 6.5] vs. -1.22 [-3, 5.5]; p = 0.03). While the podcast did not increase students' intention to specialize in nephrology (4.2% vs. 4.0%; p = 0.99), it significantly improved their confidence in nephrology-related clinical skills (41.7% vs. 16%; p = 0.04). Of the students in the podcast group, 68% found the OSCE training podcast useful for their OSCE preparation, and 96% reported they would use it again.
Conclusions
The use of an OSCE preparation podcast significantly enhanced students’ performance in AI-based simulations and confidence in nephrology clinical competencies. Podcasts represent a valuable supplementary tool for medical education, providing flexibility and supporting diverse learning styles.
Trial Registration
Not applicable.
Journal Article
Can ChatGPT Pass High School Exams on English Language Comprehension?
by
de Winter, Joost C. F.
in
Answer Sheets
,
Application programming interface
,
Artificial Intelligence
2024
Launched in late November 2022, ChatGPT, a large language model chatbot, has garnered considerable attention. However, questions remain regarding its capabilities. In this study, ChatGPT was used to complete national high school exams in the Netherlands on the topic of English reading comprehension. In late December 2022, we submitted the exam questions through the ChatGPT web interface (GPT-3.5). According to official norms, ChatGPT achieved a mean grade of 7.3 on the Dutch scale of 1 to 10, comparable to the mean grade of 6.99 for all students who took the exam in the Netherlands. However, ChatGPT occasionally required re-prompting to arrive at an explicit answer; without these nudges, the overall grade was 6.5. In March 2023, API access was made available, and a new version of ChatGPT, GPT-4, was released. We submitted the same exams to the API, and GPT-4 achieved a score of 8.3 without the need for re-prompting. Additionally, a bootstrapping method that incorporated randomness through ChatGPT's 'temperature' parameter proved effective in self-identifying potentially incorrect answers. Finally, a re-assessment conducted with the GPT-4 model updated as of June 2023 showed no substantial change in the overall score. The present findings highlight significant opportunities but also raise concerns about the impact of ChatGPT and similar large language models on educational assessment.
Journal Article
Associations between an open-response situational judgment test and performance on OSCEs and fieldwork: implications for admissions decisions and matriculant diversity in an occupational therapy program
by
Roduta Roberts, Mary
,
Chen, Fu
,
Alves, Cecilia Brito
in
Academic Ability
,
Academic achievement
,
Addition
2024
Background
Casper, an online open-response situational judgement test that assesses social intelligence and professionalism [1], is used in admissions to health professions programs.
Method
This study (1) explored the incremental validity of Casper over grade point average (GPA) for predicting student performance on objective structured clinical examinations (OSCEs) and fieldwork placements within an occupational therapy program, (2) examined optimal weighting of Casper in GPA in admissions decisions using non-linear optimization and regression tree analysis to find the weights associated with the highest average competency scores, and (3) investigated whether Casper could be used to impact the diversity of a cohort selected for admission to the program.
Results
Multiple regression analysis results indicate that Casper improves the prediction of OSCE and fieldwork performance over and above GPA (change in Adj. R² = 3.2%). Non-linear optimization and regression tree analysis indicate the optimal weights of GPA and Casper for predicting performance across fieldwork placements are 0.16 and 0.84, respectively. Furthermore, the findings suggest that students with a slightly lower GPA (e.g., 3.5–3.6) could be successful in the program as assessed by fieldwork, which is considered to be the strongest indicator of success as an entry-level clinician. In terms of diversity, no statistically significant differences were found between those actually admitted and those who would have been admitted using Casper.
Conclusion
These results constitute preliminary validity evidence supporting the integration of Casper into applicant selection in an occupational therapy graduate program.
Journal Article
Exploring the impacts of learning modality changes: Validation of the learning modality change community of inquiry and self-efficacy scales
by
Jun, Hyun-Jin
,
Kulo, Violet
,
Hoang, Thuha
in
Active Learning
,
Allied Health Occupations Education
,
Basic Skills
2023
The rapid learning environment transition initiated by the COVID-19 pandemic impacted students' perception of, comfort with, and self-efficacy in the online learning environment. Garrison's Community of Inquiry framework provides a lens for examining students' online learning experiences through three interdependent elements: social presence, cognitive presence, and teaching presence. Researchers in this study developed and validated the Learning Modality Change Community of Inquiry and Self-Efficacy scales to measure health professions students' self-efficacy with online learning, while exploring how cognitive, social, and teaching presence are experienced by students who transition from one learning environment to another. The two scales demonstrate strong validity and reliability evidence and can be used by educators to explore the impacts of learning modality changes on student learning experiences. As learning environments continue to evolve, understanding the impact of these transitions can inform how educators consider curriculum design and learning environment changes.
Journal Article
Controlling construct-irrelevant factors through computer-based testing: disengagement, anxiety, & cheating
2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of computer-based tests - the control of construct-irrelevant factors that can threaten test score validity. Several unique advantages provided by computer-based tests are described, and how these advantages can be used to manage the effects of several common construct-irrelevant factors is discussed. Ultimately, the potential for expanded control may prove to be one of the most important benefits of computer-based tests.
Journal Article