Catalogue Search | MBRL
Explore the vast range of titles available.
7 result(s) for "Extended matching questions"
The pattern of reporting and presenting validity evidence of extended matching questions (EMQs) in health professions education: a systematic review
by
Yusoff, Muhamad Saiful Bahri
,
Taha, Mohamed H.
,
Gasmalla, Hosam Eldeen Elsadig
in
Education
,
Educational Measurement - methods
,
Educational Measurement - standards
2024
Extended matching questions (EMQs), or R-type questions, are a selected-response format. Validity evidence for this format is crucial, but misunderstandings about validity have been reported. It is unclear what kinds of evidence should be presented, and how, to support the format's educational impact. This review explores the pattern and quality of reporting of the sources of validity evidence for EMQs in health professions education, encompassing content, response process, internal structure, relationship to other variables, and consequences. A systematic search of electronic databases including MEDLINE via PubMed, Scopus, Web of Science, CINAHL, and ERIC was conducted to identify studies that utilize EMQs. The framework for a unitary concept of validity was applied to extract data. A total of 218 titles were initially selected; the final number of included titles was 19. The most reported piece of evidence was the reliability coefficient, followed by the relationship to other variables. Additionally, the adopted definition of validity was mostly the old tripartite concept. This study found that the reporting and presenting of validity evidence appeared to be deficient. The available evidence can hardly provide a strong validity argument supporting the educational impact of EMQs. This review calls for more work on developing a tool to measure the reporting and presentation of validity evidence.
Journal Article
Postexamination item analysis of undergraduate pediatric multiple-choice questions exam: implications for developing a validated question bank
by
Rashwan, Nagwan I.
,
Nayel, Omnia A.
,
Aref, Soha R.
in
Accreditation
,
Analysis
,
Behavioral Objectives
2024
Introduction
Item analysis (IA) is widely used to assess the quality of multiple-choice questions (MCQs). The objective of this study was to perform a comprehensive quantitative and qualitative item analysis of two types of MCQs: single best answer (SBA) and extended matching questions (EMQs) currently in use in the Final Pediatrics undergraduate exam.
Methodology
A descriptive cross-sectional study was conducted. We analyzed 42 SBA items and 4 EMQs administered to 247 fifth-year medical students. The exam was held at the Pediatrics Department, Qena Faculty of Medicine, Egypt, in the 2020–2021 academic year. Quantitative item analysis included item difficulty (P), discrimination (D), distractor efficiency (DE), and test reliability. Qualitative item analysis included evaluation of the levels of cognitive skills and conformity of test items with item-writing guidelines.
Results
The mean score was 55.04 ± 9.8 out of 81. Approximately 76.2% of SBA items assessed low cognitive skills, and 75% of EMQ items assessed higher-order cognitive skills. The proportions of items with an acceptable range of difficulty (0.3–0.7) on the SBA and EMQ were 23.80 and 16.67%, respectively. The proportions of SBA and EMQ with acceptable ranges of discrimination (> 0.2) were 83.3 and 75%, respectively. The reliability coefficient (KR20) of the test was 0.84.
Conclusion
Our study will help medical teachers identify high-quality SBA and EMQ items to include in a validated question bank, as well as questions that need revision and remediation before subsequent use.
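The item statistics this abstract reports can be illustrated with a short sketch. This is not the authors' code: it computes item difficulty (P), discrimination (D) via the top/bottom 27% groups, and KR-20 reliability on a small hypothetical 0/1 response matrix.

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = examinees, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])

# Item difficulty P: proportion of examinees answering each item correctly.
p = responses.mean(axis=0)

# Discrimination D: difficulty difference between the top and bottom 27%
# of examinees ranked by total score.
totals = responses.sum(axis=1)
order = np.argsort(totals)
k = max(1, int(round(0.27 * len(totals))))
d = responses[order[-k:]].mean(axis=0) - responses[order[:k]].mean(axis=0)

# KR-20 reliability of the whole test.
q = 1 - p
n_items = responses.shape[1]
kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / totals.var(ddof=1))

print("P:", p)
print("D:", d)
print("KR20:", round(kr20, 3))
```

With a real exam, `responses` would be the scored answer sheets; items with P outside 0.3–0.7 or D below 0.2 would be flagged for revision, mirroring the thresholds used in the study.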
Journal Article
Predictors of clinical reasoning in neurological localisation: A study in internal medicine residents
by
Tan, Kevin
,
Loh, Kieng Wee
,
Rotgans, Jerome Ingmar
in
Academic achievement
,
Education
,
Gender
2020
Introduction: Clinical reasoning is the cognitive process of weighing clinical information together with past experience to evaluate diagnostic and management dilemmas. There is a paucity of literature regarding predictors of clinical reasoning at the postgraduate level. We performed a retrospective study on internal medicine residents to determine the sociodemographic and experiential correlates of clinical reasoning in neurological localisation, measured using validated tests.
Methods: We recruited 162 internal medicine residents undergoing a three-month attachment in neurology at the National Neuroscience Institute, Singapore, over a 2.5-year period. Clinical reasoning was assessed in the second month of their attachment via two validated tests of neurological localisation: Extended Matching Questions (EMQ) and the Script Concordance Test (SCT). Data on gender, undergraduate medical education (local vs overseas graduates), graduate medical education, and amount of clinical experience were collected, and their associations with EMQ and SCT scores were evaluated via multivariate analysis.
Results: Multivariate analysis indicated that local graduates scored higher than overseas graduates in the SCT (adjusted R2 = 0.101, f2 = 0.112). Being a local graduate and having more local experience positively predicted EMQ scores (adjusted R2 = 0.049, f2 = 0.112).
Conclusion: Clinical reasoning in neurological localisation can be predicted via a two-factor model comprising undergraduate medical education and the amount of local experience. Context specificity likely underpins this process.
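The f² values quoted in the results are Cohen's effect size for regression models, f² = R² / (1 − R²); a one-line check reproduces the SCT figure from its adjusted R²:

```python
# Cohen's f² effect size for a regression model, from its R².
def cohens_f2(r_squared):
    return r_squared / (1 - r_squared)

# The SCT model above: adjusted R² = 0.101 gives f² ≈ 0.112.
print(round(cohens_f2(0.101), 3))
```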
Journal Article
Estimating the Minimum Number of Judges Required for Test-centred Standard Setting on Written Assessments. Do Discussion and Iteration have an Influence?
by
Fowell, S. L.
,
McLaughlin, P. J.
,
Fewtrell, R.
in
Curriculum - standards
,
Education
,
Education, Medical - standards
2008
Absolute standard-setting procedures are recommended for assessment in medical education. Absolute, test-centred standard-setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short-answer-question-based and extended-matching-question-based papers, respectively. The data collected have been analysed to investigate whether reliable standards can be achieved for small-scale, medical-school-based assessments, to establish the minimum number of judges required, and to examine the effect of a discussion phase on reliability. The root mean squared error (RMSE) has been used as a measure of reliability and to compute 95% confidence intervals for comparison with the examination statistics. The RMSE has also been used to calculate the minimum number of judges required to obtain a predetermined minimum level of reliability, and the effects of the number of judges and the number of items have been examined. Values of the RMSE obtained vary from 0.9 to 2.2%. Using average variances across each paper type, the minimum number of judges needed to obtain an RMSE of less than 2% is 10 or more before discussion, or 6 or more after discussion. The results indicate that including a discussion phase improves reliability and reduces the minimum number of judges required. Decision studies indicate that increasing the number of questions included in the assessments would not significantly improve the reliability of the standard setting.
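The minimum-judges calculation described here can be sketched under a simple assumption: treat the RMSE of the panel's mean cut score as the judge-to-judge standard deviation divided by √n, and invert for a target RMSE. The cut scores below are hypothetical, not the study's data.

```python
import math
import statistics

# Hypothetical Angoff-style judgements: each judge's recommended cut score (%).
cut_scores = [58.0, 61.5, 55.0, 60.0, 57.5, 62.0, 59.0, 56.5]

# Judge-to-judge (sample) standard deviation of the cut score.
sd = statistics.stdev(cut_scores)

# RMSE of the mean cut score for this panel size, assuming RMSE = sd / sqrt(n).
rmse = sd / math.sqrt(len(cut_scores))

# Minimum panel size for a target RMSE, by inverting the same relation.
def min_judges(sd, target_rmse):
    return math.ceil((sd / target_rmse) ** 2)

print(f"RMSE with {len(cut_scores)} judges: {rmse:.2f}%")
print("Judges needed for RMSE < 2%:", min_judges(sd, 2.0))
```

A discussion phase, as the study reports, would lower `sd` and therefore the required panel size.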
Journal Article
Assessment types: Part 1
by
McKimm, Judy
,
Forrest, Kirsty
,
Davis, Mike
in
assessment tools
,
competency‐based assessments
,
essay questions
2013
This chapter summarises the main characteristics of the assessment types available to the medical educator. Each one of these has potential strengths and weaknesses. The chapter illustrates a simple model for the competency-based assessment of performance. All the tools mentioned in this model are able to assess one or more of the primary or secondary competencies in different environments. The tools described are direct observation of procedural skills (DOPS), objective structured clinical examinations (OSCEs), the mini-clinical evaluation exercise (mini-CEX), assessment on part-task trainers, selection centre assessments, case-based discussions (CBDs), high-fidelity simulation, and incognito patients. The chapter also helps the reader be aware of the strengths and weaknesses of their formats, administration and marking arrangements.
Book Chapter
Assessing competencies in rheumatology
2005
Assessment of competencies in rheumatology is difficult, but possible, and is an important part of the evaluation of practising clinicians, helping to prevent poor performance. Competencies are currently assessed by the Royal College of Physicians, the General Medical Council, and the National Clinical Assessment Authority.
Journal Article
Complex Table Question Answering with Multiple Cells Recall Based on Extended Cell Semantic Matching
2025
Tables, as a form of structured or semi-structured data, are widely found in documents, reports, and data manuals. Table-based question answering (TableQA) plays a key role in table document analysis and understanding. Existing approaches to TableQA can be broadly categorized into content-matching methods and end-to-end generation methods based on encoder–decoder deep neural networks. Content-matching methods return one or more table cells as answers, thereby preserving the original data and making them more suitable for downstream tasks. End-to-end methods, especially those leveraging large language models (LLMs), have achieved strong performance on various benchmarks. However, the variability in LLM-generated expressions and their heavy reliance on prompt engineering limit their applicability where answer fidelity to the source table is critical. In this work, we propose CBCM (Cell-by-Cell semantic Matching), a fine-grained cell-level matching method that extends the traditional row- and column-matching paradigm to improve accuracy and applicability in TableQA. Furthermore, based on the public IM-TQA dataset, we construct a new benchmark, IM-TQA-X, specifically designed for the multi-row and multi-column cell recall task, a scenario underexplored in existing state-of-the-art content-matching methods. Experimental results show that CBCM improves overall accuracy by 2.5% over the latest row- and column-matching method RGCNRCI (Relational Graph Convolutional Networks based Row and Column Intersection), and boosts accuracy in the multi-row and multi-column recall task from 4.3% to 34%.
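CBCM itself is not described in enough detail here to reproduce; the following toy sketch only illustrates the content-matching idea the abstract contrasts with end-to-end generation: score every cell against the question and return all cells above a threshold (multi-cell recall), using bag-of-words cosine as a stand-in for learned cell semantics.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter (stand-in for a semantic embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors; 0 if either is empty."""
    num = sum(a[t] * b[t] for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def match_cells(table, question, threshold=0.3):
    """Return every (row, col, text) cell scoring above threshold,
    preserving the original cells rather than generating an answer."""
    q = bow(question)
    return [(r, c, cell)
            for r, row in enumerate(table)
            for c, cell in enumerate(row)
            if cosine(bow(cell), q) >= threshold]

table = [["city", "population"],
         ["Dubai", "3.6 million"],
         ["Abu Dhabi", "1.5 million"]]
print(match_cells(table, "population of Dubai"))
```

Returning cell coordinates rather than generated text is what preserves fidelity to the source table, the property the abstract highlights for downstream tasks.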
Journal Article