Catalogue Search | MBRL
44 result(s) for "Physical Medicine -- Examination Questions"
Physical medicine and rehabilitation board review
by Lee, Joseph; Cuccurullo, Sara J
in Medical; Medical rehabilitation -- Examinations, questions, etc; Medicine, Physical
2014, 2015
This third edition of the incomparable review bible for the Physical Medicine and Rehabilitation Board Examination has been completely updated to reflect current practice and the core knowledge tested on the exam. Known for its organization, consistency, and clarity, the book distills the essentials and provides focused reviews of all major topics. Coverage is expanded in the third edition to include dedicated sections on pain management, medical ethics, and ultrasound that reflect new board requirements.
Written in outline format for readability and easy access to information, content is modeled after the topic selection of the AAPMR Self-Directed Medical Knowledge Program used by residents nationwide. To aid in information retention, "Pearls" are designated with an open-book icon to highlight key concepts and stress clinical and board-eligible aspects of each topic.
The text is divided into major subspecialty areas written by authors with clinical expertise in each subject area, and content is reviewed by senior specialists to ensure the utmost accuracy. More than 500 high-quality illustrations clarify and reinforce concepts. The book also provides updated epidemiologic and statistical data throughout and includes a section on biostatistics in physical medicine and rehabilitation.
In addition to its proven value as a resource for exam preparation, the book is also a must-have for practicing physiatrists seeking recertification, and for PM&R instructors helping trainees to prepare for the exam.
New to the Third Edition:
Thoroughly reviewed, revised, and updated to reflect current practice and core knowledge tested on Boards
Improved organization, clarity, and consistency
Presents new chapters/sections on pain management, medical ethics, and ultrasound
Key Features:
Board\"\"Pearls\"\" are highlighted with an open-book icon throughout the text to flag key concepts and stress high-yield aspects of each topic
Models the table of contents after the topic selection of the AAPMR Self-Directed Medical Knowledge Program used by residents nationwide
Authored by physicians with special interest and clinical expertise in their respective areas and reviewed by senior specialists in those areas
Organizes information in outline format and by topic for easy reference
Includes over 500 illustrations to clarify concepts
Provides updated epidemiologic and statistical data throughout
Contains a section on biostatistics in physical medicine & rehabilitation
Large language models for generating medical examinations: systematic review
2024
Background
Writing multiple choice questions (MCQs) for the purpose of medical exams is challenging. It requires extensive medical knowledge, time and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs.
Methods
The authors searched for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. MEDLINE was used as the search database. Risk of bias was evaluated using a tailored QUADAS-2 tool.
Results
Overall, eight studies published between April 2023 and October 2023 were included. Six studies used Chat-GPT 3.5, while two employed GPT 4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models. One other study compared LLM-generated questions with those written by humans. All studies presented faulty questions that were deemed inappropriate for medical exams. Some questions required additional modifications in order to qualify.
Conclusions
LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential and more conclusive evidence is needed; until then, LLMs may serve as a supplementary tool for writing medical examinations. Two of the included studies were at high risk of bias. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
Journal Article
Advantages and pitfalls in utilizing artificial intelligence for crafting medical examinations: a medical education pilot study with GPT-4
2023
Background
The task of writing multiple-choice question examinations for medical students is complex and time-consuming, and requires significant effort from clinical staff and faculty. Applying artificial intelligence algorithms in this field of medical education may therefore be advisable.
Methods
From March to April 2023, we utilized GPT-4, an OpenAI application, to write a 210-question multiple-choice (MCQ) examination based on an existing exam template; the output was thoroughly reviewed by specialist physicians who were blinded to the source of the questions. Mistakes and inaccuracies identified by the specialists were classified as stemming from age, gender, or geographical insensitivities.
Results
After inputting a detailed prompt, GPT-4 produced the test rapidly and effectively. Only 1 question (0.5%) was deemed factually incorrect; 15% of questions necessitated revisions. Errors in the AI-generated questions included the use of outdated or inaccurate terminology, age-sensitive inaccuracies, gender-sensitive inaccuracies, and geographically sensitive inaccuracies. Questions disqualified on methodological grounds included elimination-based questions and questions that did not integrate knowledge with clinical reasoning.
Conclusion
GPT-4 can be used as an adjunctive tool in creating multiple-choice question medical examinations, yet rigorous inspection by specialist physicians remains pivotal.
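To make the workflow described in this abstract concrete, the sketch below shows how one might ask an LLM to draft a single MCQ from a topic template before handing it to blinded reviewers. It assumes the OpenAI chat completions API; the prompt wording, model name, and the draft_mcq helper are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical sketch of LLM-assisted MCQ drafting followed by expert review.
# The prompt text and helper below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = (
    "Write one multiple-choice question for a final-year medical examination "
    "on the topic of {topic}. Provide a clinical vignette stem, five options "
    "labelled A-E, mark the single best answer, and avoid elimination-style "
    "or pure recall formats."
)

def draft_mcq(topic: str) -> str:
    """Return a single draft MCQ; the output still requires specialist review."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": TEMPLATE.format(topic=topic)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_mcq("community-acquired pneumonia"))
```

In the study itself every generated question was then inspected by specialist physicians blinded to its source, so an automated draft like this is only the first step of the pipeline.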
Journal Article
An Aid to the MRCP PACES, Volume 2
by Sukumar, N; Ryder, Robert E. J; Mir, M. Afzal
in Communication in medicine; Examinations, questions, etc; Great Britain
2013
This new edition of An Aid to the MRCP PACES Volume 2: Stations 2 and 4 has been fully revised and updated, and reflects feedback from PACES candidates as to which cases frequently appear in each station. The cases and scenarios have been written in accordance with the latest examining and marking schemes used for the exam, providing an invaluable training and revision aid for all MRCP PACES candidates.
ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
by Seow, Choon Sheong; Kulkarni, Dhananjay; Co, Michael Tiong-Hong
in Artificial Intelligence; Biology and Life Sciences; Chatbots
2023
Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared to questions written by university professoriate staff based on standard medical textbooks.
50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff using the same medical textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for medical graduate examination.
The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, while it took the two human examiners a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the A.I.-constructed questions were compared with those drafted by humans, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 ± 0.94 vs. human: 7.88 ± 0.52; p = 0.04). There was no significant difference in question quality between questions drafted by A.I. and those drafted by humans, either in the total assessment score or in the other domains. Questions generated by A.I. yielded a wider range of scores, while those created by humans were consistent and fell within a narrower range.
ChatGPT has the potential to generate comparable-quality MCQs for medical graduate examinations within a significantly shorter time.
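The domain-wise comparison reported above can be sketched as follows. The abstract does not name the statistical test used, so an independent two-sample (Welch) t-test is assumed here, and the score arrays are randomly generated placeholders rather than the study's data.

```python
# Illustrative sketch of a domain-wise mean-score comparison between
# AI-generated and human-written MCQs. The test choice and the scores
# are assumptions for illustration, not the study's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical relevance-domain scores (0-10 scale) for 50 MCQs per group.
ai_scores = rng.normal(loc=7.56, scale=0.94, size=50)
human_scores = rng.normal(loc=7.88, scale=0.52, size=50)

t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)
print(f"AI:    {ai_scores.mean():.2f} +/- {ai_scores.std(ddof=1):.2f}")
print(f"Human: {human_scores.mean():.2f} +/- {human_scores.std(ddof=1):.2f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```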
Journal Article
An Aid to the MRCP PACES, Volume 3
by Ryder, Robert E. J; Mir, M. Afzal; Fogden, Edward
in Case studies; Examinations; Examinations, questions, etc
2013
An Aid to the MRCP PACES Volume 3: Station 5 is a brand new, fully updated edition of the best-selling PACES revision guide that addresses the newest station, Integrated Clinical Assessment, with content guided by the experiences of PACES candidates. The cases and scenarios have been written in accordance with the latest examining and marking schemes used for the exam, providing an invaluable training and revision aid for all MRCP PACES candidates. In order to fully support candidates taking the exam, this trilogy of best-selling revision aids is now presented as: An Aid to the MRCP PACES Volume 1: Stations 1 and 3, Fourth Edition (ISBN 9780470655092); An Aid to the MRCP PACES Volume 2: Stations 2 and 4, Fourth Edition (ISBN 9780470655184); and An Aid to the MRCP PACES Volume 3: Station 5, Fourth Edition (ISBN 9781118348055).
ChatGPT’s Response Consistency: A Study on Repeated Queries of Medical Examination Questions
by Hoch, Cosima C.; Cotofana, Sebastian; Guntinas-Lichius, Orlando
in Academic Achievement; Accuracy; Artificial intelligence
2024
(1) Background: As the field of artificial intelligence (AI) evolves, tools like ChatGPT are increasingly integrated into various domains of medicine, including medical education and research. Given the critical nature of medicine, it is of paramount importance that AI tools offer a high degree of reliability in the information they provide. (2) Methods: A total of n = 450 medical examination questions were manually entered three times each into ChatGPT 3.5 and ChatGPT 4. The responses were collected, and their accuracy and consistency were statistically analyzed across the series of entries. (3) Results: ChatGPT 4 displayed a statistically significantly improved accuracy of 85.7%, compared with 57.7% for ChatGPT 3.5 (p < 0.001). Furthermore, ChatGPT 4 was more consistent, correctly answering 77.8% of questions across all rounds, a significant increase from the 44.9% observed for ChatGPT 3.5 (p < 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
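A minimal sketch of the two metrics reported in this abstract, accuracy over all entries and consistency across the three rounds, is given below; the response grid is a made-up placeholder, not the study's data.

```python
# Hedged sketch: accuracy is the share of correct answers over all rounds,
# consistency the share of questions answered correctly in every round.
# The boolean response grid is randomly generated for illustration only.
import numpy as np

rng = np.random.default_rng(42)

N_QUESTIONS, N_ROUNDS = 450, 3
# correct[i, j] is True if question i was answered correctly in round j.
correct = rng.random((N_QUESTIONS, N_ROUNDS)) < 0.857  # illustrative hit rate

accuracy = correct.mean()                 # fraction of correct answers overall
consistency = correct.all(axis=1).mean()  # correct in all three rounds

print(f"accuracy: {accuracy:.1%}, consistent across rounds: {consistency:.1%}")
```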
Journal Article