Catalogue Search | MBRL
Explore the vast range of titles available.
200,951 result(s) for "EXAMINATIONS"
Advantages and pitfalls in utilizing artificial intelligence for crafting medical examinations: a medical education pilot study with GPT-4
2023
Background
The task of writing multiple-choice question examinations for medical students is complex and time-consuming, and it requires significant effort from clinical staff and faculty. Applying artificial intelligence algorithms in this field of medical education may be advisable.
Methods
During March to April 2023, we used GPT-4, an OpenAI application, to write a 210-question multiple-choice question (MCQ) examination based on an existing exam template. The output was thoroughly reviewed by specialist physicians who were blinded to the source of the questions. Mistakes and inaccuracies identified by the specialists were classified as stemming from age, gender, or geographical insensitivities.
Results
After inputting a detailed prompt, GPT-4 produced the test rapidly and effectively. Only 1 question (0.5%) was classified as false; 15% of questions necessitated revisions. Errors in the AI-generated questions included the use of outdated or inaccurate terminology, as well as age-sensitive, gender-sensitive, and geographically sensitive inaccuracies. Questions disqualified due to flawed methodology included elimination-based questions and questions that did not integrate knowledge with clinical reasoning.
Conclusion
GPT-4 can be used as an adjunctive tool in creating multiple-choice question medical examinations, yet rigorous inspection by specialist physicians remains pivotal.
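For readers who want to experiment with the approach summarized in this abstract, the sketch below shows one way to prompt GPT-4 for a single MCQ through the OpenAI Python client. It is a minimal illustration, not the study's actual pipeline; the model name, prompt wording, and example topic are assumptions.

```python
# Minimal sketch (not the study's pipeline) of drafting one MCQ with GPT-4
# via the OpenAI Python client. Prompt wording and topic are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

template = (
    "Write one single-best-answer multiple-choice question for final-year "
    "medical students on the topic of {topic}. Provide a clinical vignette, "
    "five answer options (A-E), the correct answer, and a one-sentence "
    "explanation. Avoid elimination-style stems and require clinical reasoning."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an experienced medical examiner."},
        {"role": "user", "content": template.format(topic="community-acquired pneumonia")},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

As the conclusion above stresses, output generated this way would still need review by specialist physicians before use in an examination.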
Journal Article
Exemplars of Assessment in Higher Education
by Rose, Tara; Souza, Jane M. (Jane Marie); Perfetti, Heather F.
in Educational evaluation; Educational evaluation -- Australia; Educational evaluation -- United States
2021, 2023
"While assessment may feel to constituents like an activity of accountability simply for accreditors, it is most appropriate to approach assessment as an activity of accountability for students. Assessment results that improve institutional effectiveness, heighten student learning, and better align resources serve to make institutions stronger for the benefit of their students, and those results also serve the institution or program well during the holistic evaluation required through accreditation." - from the foreword by Heather Perfetti, President of the Middle States Commission on Higher Education
Colleges and universities struggle to understand precisely what accreditors are asking for, and this book answers that question by sharing examples of success reported by schools specifically recommended by accreditors. This compendium gathers examples of assessment practice in twenty-four higher education institutions: twenty-three in the U.S. and one in Australia. All institutions represented in this book were suggested by their accreditor as having an effective assessment approach in one or more of the following assessment-focused areas: assessment in the disciplines, co-curricular assessment, course/program/institutional assessment, equity and inclusion, general education, online learning, program review, scholarship of teaching and learning, student learning, or technology. These examples recommended by accrediting agencies make this a unique contribution to the assessment literature.
The book is organized in four parts. Part One focuses on student learning and assessment and includes ten chapters. Part Two focuses on student learning assessment from a disciplinary perspective and includes four chapters. Part Three has a faculty engagement and assessment focus, and Part Four includes four chapters on institutional effectiveness and assessment, with a focus on strategic planning. This book is a publication of the Association for the
Large language models for generating medical examinations: systematic review
2024
Background
Writing multiple-choice questions (MCQs) for medical exams is challenging; it requires extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs.
Methods
The authors searched for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. MEDLINE was used as the search database. Risk of bias was evaluated using a tailored QUADAS-2 tool.
Results
Overall, eight studies published between April 2023 and October 2023 were included. Six studies used ChatGPT-3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies reported faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification in order to qualify.
Conclusions
LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored; two of the included studies were at high risk of bias. Further study in this field is essential, and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing medical examinations. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
Journal Article
EnCase Computer Forensics -- The Official EnCE
The official, Guidance Software-approved book on the newest EnCE exam!
The EnCE exam tests that computer forensic analysts and examiners have thoroughly mastered computer investigation methodologies, as well as the use of Guidance Software's EnCase Forensic 7. The only official Guidance-endorsed study guide on the topic, this book prepares you for the exam with extensive coverage of all exam topics, real-world scenarios, hands-on exercises, up-to-date legal information, and sample evidence files, flashcards, and more.
* Guides readers through preparation for the newest EnCase Certified Examiner (EnCE) exam
* Prepares candidates for both Phase 1 and Phase 2 of the exam, as well as for practical use of the certification
* Covers identifying and searching hardware and file systems, handling evidence on the scene, and acquiring digital evidence using EnCase Forensic 7
* Includes hands-on exercises, practice questions, and up-to-date legal information
* Provides sample evidence files, the Sybex Test Engine, electronic flashcards, and more
If you're preparing for the new EnCE exam, this is the study guide you need.
Performance of ChatGPT on Chinese national medical licensing examinations: a five-year examination evaluation study for physicians, pharmacists and nurses
by Shen, Bairong; Wu, Erman; Wu, Rongrong
in Accuracy; Artificial Intelligence; Artificial intelligence in medical and professional health education
2024
Background
Large language models like ChatGPT have revolutionized the field of natural language processing with their ability to comprehend and generate textual content, showing great potential to play a role in medical education. This study aimed to quantitatively evaluate and comprehensively analyze the performance of ChatGPT on three types of national medical licensing examinations in China: the National Medical Licensing Examination (NMLE), the National Pharmacist Licensing Examination (NPLE), and the National Nurse Licensing Examination (NNLE).
Methods
We collected questions from the Chinese NMLE, NPLE, and NNLE from 2017 to 2021. In the NMLE and NPLE, each exam consists of 4 units, while in the NNLE, each exam consists of 2 units. Questions with figures, tables, or chemical structures were manually identified and excluded by clinicians. We applied a direct-instruction strategy via multiple prompts to force ChatGPT to generate a clear answer while distinguishing between single-choice and multiple-choice questions.
Results
ChatGPT failed to reach the accuracy threshold of 0.6 in any of the three types of examinations over the five years. Specifically, in the NMLE, the highest recorded accuracy was 0.5467, attained in both 2018 and 2021. In the NPLE, the highest accuracy was 0.5599, in 2017. In the NNLE, the best result was also achieved in 2017, with an accuracy of 0.5897, the highest in the entire evaluation. ChatGPT's performance showed no significant difference across units, but a significant difference across question types. ChatGPT performed well in a range of subject areas, including clinical epidemiology, human parasitology, and dermatology, as well as in various medical topics such as molecules, health management and prevention, and diagnosis and screening.
Conclusions
These results indicate that ChatGPT failed the NMLE, NPLE, and NNLE in China for the years 2017 to 2021, but they also show the great potential of large language models in medical education. In the future, high-quality medical data will be required to improve performance.
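The abstract's Methods mention a direct-instruction strategy that forces ChatGPT to return a clear answer and to treat single-choice and multiple-choice items differently. The sketch below is an illustrative reconstruction of that kind of prompting, not the authors' code; the model name, prompt wording, and answer format are assumptions.

```python
# Illustrative sketch of direct-instruction prompting for exam questions.
# Not the authors' code; model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_exam_question(stem: str, options: dict[str, str], multiple_choice: bool) -> str:
    # Instruction changes depending on whether the item allows multiple answers.
    rule = (
        "Select ALL correct options and reply with their letters only (e.g. 'ABD')."
        if multiple_choice
        else "Select the single best option and reply with its letter only (e.g. 'C')."
    )
    option_text = "\n".join(f"{letter}. {text}" for letter, text in options.items())
    prompt = f"{stem}\n{option_text}\n\n{rule} Do not add any explanation."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output eases accuracy scoring
    )
    return response.choices[0].message.content.strip()
```

Constraining the reply to option letters, as sketched here, is what makes it straightforward to score accuracy automatically against the official answer keys.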
Journal Article