Catalogue Search | MBRL
64 result(s) for "programmatic assessment"
Identifying High-Impact and Managing Low-Impact Assessment Practices
by Kelley, Katherine A.; Sweet, Burgunda V.; Ray, Mary E.
in assessment; curricular assessment; programmatic assessment
2019
Those in pharmacy education who are tasked with assessment may be overwhelmed by deadlines, data collection, and reporting, leaving little time to pause and examine the effectiveness of their efforts. However, assessment practices must be evaluated for their impact, including their ability to answer important questions, use resources effectively, and contribute to meaningful educational change. Often an assessment is implemented, but attention is diverted to the next assessment before the data from the first can be fully interpreted or used. To maximize the impact of assessment practices, tough and uncomfortable decisions may need to be made. In this paper, we suggest an approach for examining and making decisions about assessment activities and provide guidance on building high-impact assessment practices, evolving or "sunsetting" low-impact assessment practices, and managing mandated assessment.
Journal Article
A history of assessment in medical education
by van der Vleuten, Cees P. M.; Schuwirth, Lambert W. T.
in Construct Validity; Design; Education
2020
The way the quality of assessment has been perceived and assured has changed considerably over the past five decades. Originally, assessment was mainly seen as a measurement problem with the aim of telling people apart, the competent from the not competent. Logically, reproducibility or reliability and construct validity were seen as necessary and sufficient for assessment quality, and the role of human judgement was minimised. Later, assessment moved back into the authentic workplace with various workplace-based assessment (WBA) methods. Although originally approached from the same measurement framework, WBA and other assessments gradually became assessment processes that included or embraced human judgement, grounded in good support and assessment expertise. Currently, assessment is treated as a whole-system problem in which competence is evaluated from an integrated rather than a reductionist perspective. Current research therefore focuses on how to support and improve human judgement, how to triangulate assessment information meaningfully, and how to construct fairness, credibility and defensibility from a systems perspective. But, given the rapid changes in society, education and healthcare, yet another evolution in our thinking about good assessment likely lurks around the corner.
Journal Article
Enhancing assessment quality and reducing workload in hospitality higher education: a comprehensive strategy
2025
This viewpoint article proposes a comprehensive assessment strategy to address two critical challenges in hospitality higher education: enhancing the quality of student work while reducing educator workload. Drawing from programmatic assessment literature and practical experience in hospitality education, we present an integrated model combining low-, medium- and high-stakes assessments within a trialogical learning framework. The strategy leverages peer assessment, technological solutions and industry partnerships to distribute workload effectively while maintaining assessment quality. Our model emphasises continuous feedback, stakeholder involvement and using assessment as a learning tool rather than just evaluation. While implementation challenges exist, including initial resource investment and the need for mindset shifts, we argue that this transformation is necessary for the future of hospitality education. This approach aligns with industry needs and promotes sustainable assessment practices that benefit both students and educators. The article contributes to the ongoing discussion about assessment innovation in hospitality education and provides practical recommendations for institutions seeking to enhance their assessment strategies.
Journal Article
Assessment in the context of problem-based learning
by van der Vleuten, Cees P. M.; Schuwirth, Lambert W. T.
in Alternative Assessment; Cognition & reasoning; Competency Based Education
2019
Arguably, constructive alignment has been the major challenge for assessment in the context of problem-based learning (PBL). PBL focuses on promoting abilities such as clinical reasoning, team skills and metacognition. PBL also aims to foster self-directed learning and deep learning as opposed to rote learning. This has incentivized researchers in assessment to find possible solutions. Originally, these solutions were sought in developing the right instruments to measure these PBL-related skills. The search for these instruments has been accelerated by the emergence of competency-based education. With competency-based education, assessment moved away from purely standardized testing, relying more heavily on professional judgement of complex skills. Valuable lessons have been learned that are directly relevant for assessment in PBL. Later, solutions were sought in the development of new assessment strategies, initially again with individual instruments such as progress testing, but later through a more holistic approach to the assessment programme as a whole. Programmatic assessment is such an integral approach to assessment. It focuses on optimizing learning through assessment, while at the same time gathering rich information that can be used for rigorous decision-making about learner progression. Programmatic assessment comes very close to achieving the desired constructive alignment with PBL, but its wide adoption, just like that of PBL, will still take many years.
Journal Article
Where the rubber meets the road — An integrative review of programmatic assessment in health care professions education
by Heeneman, Sylvia; Schut, Suzanne; van Tartwijk, Jan
in Clinical decision making; Committees; Competence
2021
Introduction
Programmatic assessment was introduced as an approach to designing assessment programmes with the aim of simultaneously optimizing the decision-making and learning functions of assessment. An integrative review was conducted to review and synthesize results from studies investigating programmatic assessment in health care professions education in practice.
Methods
The authors systematically searched PubMed, Web of Science, and ERIC to identify studies published since 2005 that reported empirical data on programmatic assessment. Characteristics of the included studies were extracted and synthesized, using descriptive statistics and thematic analysis.
Results
Twenty-seven studies were included, which used quantitative methods (n = 10), qualitative methods (n = 12) or mixed methods (n = 5). Most studies were conducted in clinical settings (77.8%). Programmatic assessment was found to enable meaningful triangulation for robust decision-making and was used as a catalyst for learning. However, several problems were identified, including overload in assessment information and the associated workload, counterproductive impact of using strict requirements and summative signals, lack of a shared understanding of the nature and purpose of programmatic assessment, and lack of supportive interpersonal relationships. Thematic analysis revealed that the success and challenges of programmatic assessment were best understood by the interplay between quantity and quality of assessment information, and the influence of social and personal aspects on assessment perceptions.
Conclusion
Although some of the evidence may seem compelling in supporting the effectiveness of programmatic assessment in practice, tensions will emerge when simultaneously stimulating the development of competencies and assessing their results. The identified factors and inferred strategies provide guidance for navigating these tensions.
Journal Article
A philosophical history of programmatic assessment: tracing shifting configurations
2021
Programmatic assessment is now well entrenched in medical education, allowing us to reflect on when it first emerged and how it evolved into the form we know today. Drawing upon the intellectual tradition of historical epistemology, we provide a philosophically oriented historiographical study of programmatic assessment. Our goal is to trace its relatively short historical trajectory by describing shifting configurations in its scene of inquiry, focusing on questions, practices, and philosophical presuppositions. We identify three historical phases: emergence, evolution and entrenchment. For each, we describe the configurations of the scene; examine underlying philosophical presuppositions driving changes; and detail upshots in assessment practice. We find that programmatic assessment emerged in response to positivist 'turmoil' prior to 2005, driven by utility considerations and implicit pragmatist undertones. Once introduced, it evolved with notions of diversity and learning being underscored, and a constructivist ontology developing at its core. More recently, programmatic assessment has become entrenched as its own sub-discipline. Rich narratives have been emphasised, but philosophical underpinnings have been blurred. We hope to shed new light on current assessment practices in the medical education community by interrogating the history of programmatic assessment from this philosophical vantage point. Making philosophical presuppositions explicit highlights the perspectival nature of aspects of programmatic assessment, and suggests reasons for perceived benefits as well as potential tensions, contradictions and vulnerabilities in the approach today. We conclude by offering some reflections on important points to emerge from our historical study, and suggest 'what next' for programmatic assessment in light of this endeavour. [Author abstract]
Journal Article
Teacher, Gatekeeper, or Team Member: supervisor positioning in programmatic assessment
by Hay, Margaret; Gibson, Simone; Jamieson, Janica
in Academic Achievement; Competency Based Education; Education
2023
Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken, yet influential, cultural and relationship dynamics that interact with programmatic assessment to influence success. Pulling at this thread, we conducted secondary analysis of focus groups and interviews (n = 44 supervisors) using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous, with some supervisors ascribing a passive and disempowered position onto students while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described: either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant in detecting inadequate student performance. Programmatic assessment challenged this positioning by reorientating supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to assent to power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provides insights into supporting supervisors in programmatic assessment.
Journal Article
Changing the culture of assessment: the dominance of the summative assessment paradigm
by Könings, Karen D.; Schuwirth, Lambert W. T.; van der Vleuten, Cees P. M.
in Assessment and evaluation of admissions; Culture; Design
2017
Background
Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematic. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of assessment-related feedback.
Methods
Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed using a socio-cultural lens.
Results
Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants’ prior assessment experiences strongly influenced proposals for change. As participants had largely only experienced a summative assessment culture, they found it difficult to conceptualise radical change in the assessment culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas. Faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews.
Conclusions
Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. In order to successfully implement a change in assessment culture, firmly-held intuitive beliefs about summative assessment will need to be clearly understood as a first step.
Journal Article
Programmatic assessment of competency-based workplace learning: when theory meets practice
by van der Vleuten, Cees PM; Jaarsma, Debbie ADC; Theyse, Lars FH
in Assessment and evaluation of admissions; Clinical competence; Competency based education
2013
Background
In competency-based medical education, emphasis has shifted towards outcomes, capabilities, and learner-centeredness. Together with a focus on sustained evidence of professional competence, this calls for new methods of teaching and assessment. Recently, medical educators advocated the use of a holistic, programmatic approach towards assessment. Besides maximally facilitating learning, it should improve the validity and reliability of measurements and the documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice.
Methods
In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012 quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts and emerging topics were organised into a list of lessons learned.
Results
The programme mainly focuses on the integration of learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies to enable aggregation of information in a structured and meaningful way. Assessments that were designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method for formative feedback. Social interaction and external guidance seemed to be of crucial importance to scaffold self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges.
Conclusions
A programme of assessment with low-stakes assessments that simultaneously provide formative feedback and input for summative decisions proved difficult to implement. Careful preparation and guidance of the implementation process were crucial. Assessment for learning requires meaningful feedback with each assessment. Special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention to faculty development and training for students is essential for the successful implementation of an assessment programme.
Journal Article
Blended programmatic assessment for competency based curricula
2021
The uncertainty in all spheres of higher education due to the COVID-19 pandemic has had an unprecedented impact on teaching-learning and assessments in medical colleges across the globe. The conventional ways of assessment are now neither possible nor practical for certifying medical graduates. This has necessitated thoughtful consideration in adjusting the assessment system, with most institutions transitioning to online assessments that have so far remained underutilized. Programmatic assessment encourages the deliberate and longitudinal use of diverse assessment methods to maximize learning and assessment. It can at present be utilized optimally because it ensures the collection of multiple low-stakes assessment data, which can be aggregated for high-stakes pass/fail decisions, making use of every opportunity for formative feedback to improve performance. Though efforts have been made to introduce programmatic assessment in the competency-based undergraduate curriculum, transitioning to online assessment can be a potential opportunity if the basic tenets of programmatic assessment, the choice of online assessment tools, strategies, good practices of online assessment, and its challenges are understood and explored explicitly when designing and implementing online assessments. This paper explores the possibility of combining online assessment with face-to-face assessment and structuring a blended programmatic assessment in competency-based medical education.
Journal Article