Catalogue Search | MBRL
Explore the vast range of titles available.
53 result(s) for "Masters, Ken"
Assessing medical students’ readiness for artificial intelligence after pre-clinical training
2025
Background
Artificial intelligence (AI) is becoming increasingly relevant in healthcare, necessitating healthcare professionals’ proficiency in its use. Medical students and practitioners require fundamental understanding and skills development to manage data, oversee AI tools and make informed decisions based on AI applications. Integrating AI into medical education is essential to meet this demand.
Method
This cross-sectional study aimed to evaluate the level of undergraduate medical students’ readiness for AI as they enter their clinical years at Sultan Qaboos University’s College of Medicine and Health Sciences. The students’ readiness was assessed after they had been exposed to various AI-related topics in several courses in the preclinical phases of the medical curriculum. The Medical Artificial Intelligence Readiness Scale For Medical Students (MAIRS-MS) questionnaire was used as the study instrument.
Results
A total of 84 out of 115 students completed the questionnaire (73.04% response rate). Of these, 45 (53.57%) were female and 39 (46.43%) were male. The cognition section, which evaluated the participants’ cognitive preparedness in terms of knowledge of medical AI terminology, the logic behind AI applications, and data science, received the lowest score (Mean = 3.52). Conversely, the vision section of the questionnaire, which assessed the participants’ capacity to comprehend the limitations and potential of medical AI and to anticipate opportunities and risks, displayed the highest level of preparedness, with the highest score (Mean = 3.90). Notably, there were no statistically significant differences in AI competency scores by gender or academic year.
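As a minimal, hypothetical sketch of how per-section MAIRS-MS means such as these could be computed from 5-point Likert responses (the item numbers and responses below are invented for illustration and are not the study's data or analysis code):

```python
# Illustrative only: item-to-section mapping and responses are assumed,
# not the MAIRS-MS instrument's actual item numbers or the study's data.
from statistics import mean

# Each respondent: {item_number: Likert rating 1-5}
responses = [
    {1: 4, 2: 3, 3: 4, 9: 4, 10: 5},
    {1: 3, 2: 3, 3: 2, 9: 4, 10: 4},
]

sections = {
    "cognition": [1, 2, 3],   # assumed item numbers
    "vision": [9, 10],        # assumed item numbers
}

for name, items in sections.items():
    # Section score = mean across respondents of each respondent's item average.
    section_mean = mean(mean(r[i] for i in items) for r in responses)
    print(f"{name}: {section_mean:.2f}")
```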
Conclusion
This study’s findings suggest that while medical students demonstrate a moderate level of AI-readiness as they enter their clinical years, significant gaps remain, particularly in cognitive areas such as understanding AI terminology, logic, and data science. The majority of students use ChatGPT as their AI tool, with a notable difference in attitudes between tech-savvy and non-tech-savvy individuals. Further efforts are needed to improve students’ competency in evaluating AI tools. Medical schools should consider integrating AI into their curricula to enhance students’ preparedness for future medical practice. Assessing students’ readiness for AI in healthcare is crucial for identifying knowledge and skills gaps and guiding future training efforts.
Journal Article
Assessing ChatGPT’s Mastery of Bloom’s Taxonomy Using Psychosomatic Medicine Exam Questions: Mixed-Methods Study
by Holderried, Friederike; Masters, Ken; Griewatz, Jan
in Answers, Anxiety disorders, Application programming interface
2024
Large language models such as GPT-4 (Generative Pre-trained Transformer 4) are being increasingly used in medicine and medical education. However, these models are prone to "hallucinations" (ie, outputs that seem convincing while being factually incorrect). It is currently unknown how these errors by large language models relate to the different cognitive levels defined in Bloom's taxonomy.
This study aims to explore how GPT-4 performs in terms of Bloom's taxonomy using psychosomatic medicine exam questions.
We used a large data set of psychosomatic medicine multiple-choice questions (N=307) with real-world results derived from medical school exams. GPT-4 answered the multiple-choice questions using 2 distinct prompt versions: detailed and short. The answers were analyzed using a quantitative approach and a qualitative approach. Focusing on incorrectly answered questions, we categorized reasoning errors according to the hierarchical framework of Bloom's taxonomy.
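A rough sketch of the kind of prompting set-up the methods describe is given below; the prompt wording, model identifier, and function names are assumptions for illustration, not the authors' actual materials.

```python
# Illustrative sketch: answer one MCQ with a GPT-4-class model under a
# detailed or a short system prompt. Prompt texts are assumed, not the
# study's real prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "detailed": ("You are answering a psychosomatic medicine exam. Read the "
                 "question and all options carefully, then reply with the "
                 "single letter of the best answer."),
    "short": "Answer with the letter of the correct option only.",
}

def ask(question_text, options, style="detailed"):
    option_block = "\n".join(f"{letter}) {text}" for letter, text in options.items())
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PROMPTS[style]},
            {"role": "user", "content": f"{question_text}\n{option_block}"},
        ],
    )
    return response.choices[0].message.content.strip()
```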
GPT-4's performance in answering exam questions yielded a high success rate: 93% (284/307) for the detailed prompt and 91% (278/307) for the short prompt. Questions answered correctly by GPT-4 had a statistically significant higher difficulty than questions answered incorrectly (P=.002 for the detailed prompt and P<.001 for the short prompt). Independent of the prompt, GPT-4's lowest exam performance was 78.9% (15/19), thereby always surpassing the "pass" threshold. Our qualitative analysis of incorrect answers, based on Bloom's taxonomy, showed that errors were primarily in the "remember" (29/68) and "understand" (23/68) cognitive levels; specific issues arose in recalling details, understanding conceptual relationships, and adhering to standardized guidelines.
GPT-4 demonstrated a remarkable success rate when confronted with psychosomatic medicine multiple-choice exam questions, aligning with previous findings. When evaluated through Bloom's taxonomy, our data revealed that GPT-4 occasionally ignored specific facts (remember), provided illogical reasoning (understand), or failed to apply concepts to a new situation (apply). These errors, which were confidently presented, could be attributed to inherent model biases and the tendency to generate outputs that maximize likelihood.
Journal Article
Re: Female Patients and Informed Consent: Oman’s cultural background
2019
Dear Editor, I read with great interest the recent sounding board article by Al Balushi in the February 2019 issue of SQUMJ.1 The author raised the important issue of informed consent and ensuring that patients understand consent. Having a good informed consent form and asking the patient if they understand the information provided to them is a good start; however, this is not a valid way of gauging understanding. [...] understanding is gauged by the use of an examination.
Journal Article
Reflections From the Pandemic: Is Connectivism the Panacea for Clinicians?
by MacNeill, Heather; Mehta, Neil; Benjamin, Jennifer
in Analysis, Artificial Intelligence, COVID-19
2024
The COVID-19 pandemic and the recent increased interest in generative artificial intelligence (GenAI) highlight the need for interprofessional communities’ collaboration to find solutions to complex problems. A personal narrative experience of one of the authors compels us to reflect on current approaches to learning and knowledge acquisition and use solutions to the challenges posed by GenAI through social learning contexts using connectivism. We recognize the need for constructivism and experiential learning for knowledge acquisition to establish foundational understanding. We explore how connectivist approaches can enhance traditional constructivist paradigms amid rapidly changing learning environments and online communities. Learning in connectivism includes interacting with experts from other disciplines and creating nodes of accurate and accessible information while distinguishing between misinformation and accurate facts. Autonomy, connectedness, diversity, and openness are foundational for learners to thrive in this learning environment. Learning in this environment is not just acquiring new knowledge as individuals but being connected to networks of knowledge, enabling health professionals to stay current and up-to-date. Existing online communities with accessible GenAI solutions allow for the application of connectivist principles for learning and knowledge acquisition.
Journal Article
Nontechnological Online Challenges Faced by Health Professions Students during COVID-19: A Questionnaire Study
by Masters, Ken; Alshamsi, Abdulmalik Khalid
in biomedical sciences students, Coronaviruses, COVID-19
2022
COVID-19 forced universities to shift to online learning (emergency remote teaching (ERT)). This study aimed at identifying the nontechnological challenges that faced Sultan Qaboos University medical and biomedical sciences students during the pandemic. This was a survey-based, cross-sectional study aimed at identifying nontechnological challenges using Likert scale, multiple-choice, and open-ended questions. Students participated voluntarily and gave their consent; anonymity was maintained and all data were encrypted. The response rate was 17.95% (n = 131) with no statistically significant difference based on gender or majors (p-value > 0.05). Of the sample, 102 (77.9%) were stressed by exam location uncertainty, 96 (73.3%) felt easily distracted, 98 (74.8%) suffered physical health issues, and 89 (67.9%) struggled with time management. The main barriers were lack of motivation (92 (70.2%)), instruction/information overload (78 (59.5%)), and poor communication with teachers (74 (56.5%)). Furthermore, 57 (43.5%) said their prayer time was affected, and 65 (49.6%) had difficulties studying during Ramadan. The most important qualitative findings were poor communication and lack of motivation, which were reflected in student comments. While ERT had positive aspects, it precipitated many nontechnological challenges that highlight the inapplicability of ERT as a method of online learning for long-term e-learning initiatives. Challenges must be considered by the faculty to provide the best learning experience for students in the future.
Journal Article
Designing and developing an app to perform Hofstee cut-off calculations [version 2; peer review: 2 approved]
2021
Determining a Hofstee cut-off point in medical education student assessment is problematic: traditional methods can be time-consuming, inaccurate, and inflexible. To counter this, we developed a simple Android app that receives raw, unsorted student assessment data in .csv format, allows for multiple judges' inputs, mean or median inputs, calculates the Hofstee cut-off mathematically, and outputs the results with other guiding information. The app contains a detailed description of its functionality.
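For readers unfamiliar with the method, a minimal sketch of a Hofstee calculation along the lines the abstract describes might look as follows; the file layout, function names, and 0.1-point search step are assumptions, and this is not the app's code.

```python
# Minimal Hofstee cut-off sketch: aggregate judges' bounds, then find where
# the observed cumulative fail-rate curve meets the judges' diagonal.
import csv
from statistics import mean

def load_scores(path):
    """Read raw, unsorted percentage scores from a one-column .csv file."""
    with open(path, newline="") as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def hofstee_cutoff(scores, judges, aggregate=mean):
    """judges: list of (min_cut, max_cut, min_fail, max_fail) tuples, with
    cut scores and fail rates given as percentages; aggregate = mean or median."""
    min_cut = aggregate(j[0] for j in judges)
    max_cut = aggregate(j[1] for j in judges)
    min_fail = aggregate(j[2] for j in judges)
    max_fail = aggregate(j[3] for j in judges)
    n = len(scores)

    def fail_rate(cut):
        # Percentage of students scoring below the candidate cut score.
        return 100.0 * sum(s < cut for s in scores) / n

    def diagonal(cut):
        # Judges' line from (min_cut, max_fail) to (max_cut, min_fail).
        t = (cut - min_cut) / (max_cut - min_cut)
        return max_fail - t * (max_fail - min_fail)

    # Scan the acceptable cut-score range and keep the point where the
    # observed fail curve is closest to the diagonal (the intersection).
    best, best_gap, cut = min_cut, float("inf"), min_cut
    while cut <= max_cut:
        gap = abs(fail_rate(cut) - diagonal(cut))
        if gap < best_gap:
            best, best_gap = cut, gap
        cut += 0.1
    return best
```

The cut-off is taken where the cumulative fail-rate curve for the actual scores comes closest to the judges' diagonal between the aggregated minimum and maximum acceptable cut scores.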
Journal Article
Automatic Generation of Medical Case-Based Multiple-Choice Questions (MCQs): A Review of Methodologies, Applications, Evaluation, and Future Directions
by AlZaabi, Adhari; Al Shuraiqi, Somaiya; Aal Abdulsalam, Abdulrahman
in Algorithms, Artificial intelligence, automatic question generation (AQG)
2024
This paper offers an in-depth review of the latest advancements in the automatic generation of medical case-based multiple-choice questions (MCQs). The automatic creation of educational materials, particularly MCQs, is pivotal in enhancing teaching effectiveness and student engagement in medical education. In this review, we explore various algorithms and techniques that have been developed for generating MCQs from medical case studies. Recent innovations in natural language processing (NLP) and machine learning (ML) for automatic language generation have garnered considerable attention. Our analysis evaluates and categorizes the leading approaches, highlighting their generation capabilities and practical applications. Additionally, this paper synthesizes the existing evidence, detailing the strengths, limitations, and gaps in current practices. By contributing to the broader conversation on how technology can support medical education, this review not only assesses the present state but also suggests future directions for improvement. We advocate for the development of more advanced and adaptable mechanisms to enhance the automatic generation of MCQs, thereby supporting more effective learning experiences in medical education.
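As a loose illustration of one family of approaches such reviews cover (LLM-based generation), a case vignette can be turned into a structured MCQ with a single prompt; the prompt text, model name, and JSON schema below are assumptions for demonstration, not a method taken from the paper.

```python
# Illustrative LLM-based question-generation sketch; prompt, model name,
# and JSON keys are assumed for demonstration only.
import json
from openai import OpenAI

client = OpenAI()

def generate_mcq(case_text):
    prompt = (
        "From the following clinical case, write one multiple-choice question "
        "with options A-D and indicate the correct option. Return JSON with "
        "keys 'stem', 'options', and 'answer'.\n\n" + case_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # ask for parseable JSON
    )
    return json.loads(response.choices[0].message.content)
```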
Journal Article
Assessing medical students' readiness for artificial intelligence after pre-clinical training
by Masters, Ken; AlZaabi, Adhari
in Artificial intelligence, Educational aspects, Educational research
2025
Artificial intelligence (AI) is becoming increasingly relevant in healthcare, necessitating healthcare professionals' proficiency in its use. Medical students and practitioners require fundamental understanding and skills development to manage data, oversee AI tools and make informed decisions based on AI applications. Integrating AI into medical education is essential to meet this demand. This cross-sectional study aimed to evaluate the level of undergraduate medical students' readiness for AI as they enter their clinical years at Sultan Qaboos University's College of Medicine and Health Sciences. The students' readiness was assessed after they had been exposed to various AI-related topics in several courses in the preclinical phases of the medical curriculum. The Medical Artificial Intelligence Readiness Scale For Medical Students (MAIRS-MS) questionnaire was used as the study instrument. A total of 84 out of 115 students completed the questionnaire (73.04% response rate). Of these, 45 (53.57%) were female and 39 (46.43%) were male. The cognition section, which evaluated the participants' cognitive preparedness in terms of knowledge of medical AI terminology, the logic behind AI applications, and data science, received the lowest score (Mean = 3.52). Conversely, the vision section of the questionnaire, which assessed the participants' capacity to comprehend the limitations and potential of medical AI and to anticipate opportunities and risks, displayed the highest level of preparedness, with the highest score (Mean = 3.90). Notably, there were no statistically significant differences in AI competency scores by gender or academic year. This study's findings suggest that while medical students demonstrate a moderate level of AI-readiness as they enter their clinical years, significant gaps remain, particularly in cognitive areas such as understanding AI terminology, logic, and data science. The majority of students use ChatGPT as their AI tool, with a notable difference in attitudes between tech-savvy and non-tech-savvy individuals. Further efforts are needed to improve students' competency in evaluating AI tools. Medical schools should consider integrating AI into their curricula to enhance students' preparedness for future medical practice. Assessing students' readiness for AI in healthcare is crucial for identifying knowledge and skills gaps and guiding future training efforts.
Journal Article