Catalogue Search | MBRL
Explore the vast range of titles available.
77 result(s) for "educational prompt engineering"
ChatGPT-assisted collaborative argumentation: Effects of role-playing prompts on students' argumentation outcomes, processes, and perceptions
2025
In traditional collaborative argumentation activities, students often struggle to present arguments from diverse perspectives. ChatGPT is capable of understanding user prompts and generating corresponding responses, and it can play different roles with diverse backgrounds to argue with students, creating the possibility of improving the quality of their argumentation. However, to make ChatGPT's responses work well for argumentation, students need to give appropriate prompts. Therefore, this study proposed the role-playing prompt-based ChatGPT-assisted Collaborative Argumentation (CaCA) approach, and a quasi-experiment was conducted to examine its effects on students' argumentation outcomes, processes, and perceptions. Sixty-six first-year graduate students engaged in this experiment: the experimental group adopted the role-playing prompt-based CaCA approach, while the control group adopted the conventional CaCA approach. Results showed that the role-playing prompt-based CaCA approach broadened students' perspectives in their arguments and increased the connections between data and claims, forming the chain of arguments centered on warrant and backing in their discourse. However, it did not significantly enhance their ability to edit ideas deeply or increase their willingness to give rebuttals. This research provides new insights into the application of ChatGPT in a micro-level collaborative argumentation context.
Journal Article
Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education
The present discussion examines the transformative impact of Artificial Intelligence (AI) in educational settings, focusing on the necessity for AI literacy, prompt engineering proficiency, and enhanced critical thinking skills. The introduction of AI into education marks a significant departure from conventional teaching methods, offering personalized learning and support for diverse educational requirements, including students with special needs. However, this integration presents challenges, including the need for comprehensive educator training and curriculum adaptation to align with societal structures. AI literacy is identified as crucial, encompassing an understanding of AI technologies and their broader societal impacts. Prompt engineering is highlighted as a key skill for eliciting specific responses from AI systems, thereby enriching educational experiences and promoting critical thinking. There is detailed analysis of strategies for embedding these skills within educational curricula and pedagogical practices. This is discussed through a case-study based on a Swiss university and a narrative literature review, followed by practical suggestions of how to implement AI in the classroom.
Journal Article
Prompting Change: Exploring Prompt Engineering in Large Language Model AI and Its Potential to Transform Education
2024
This paper explores the transformative potential of Large Language Model Artificial Intelligence (LLM AI) in educational contexts, particularly focusing on the innovative practice of prompt engineering. Prompt engineering, characterized by three essential components of content knowledge, critical thinking, and iterative design, emerges as a key mechanism to access the transformative capabilities of LLM AI in the learning process. This paper charts the evolving trajectory of LLM AI as a tool poised to reshape educational practices and assumptions. In particular, this paper breaks down the potential of prompt engineering practices to enhance learning by fostering personalized, engaging, and equitable educational experiences. The paper underscores how the natural language capabilities of LLM AI tools can help students and educators transition from passive recipients to active co-creators of their learning experiences. Critical thinking skills, particularly information literacy, media literacy, and digital citizenship, are identified as crucial for using LLM AI tools effectively and responsibly. Looking forward, the paper advocates for continued research to validate the benefits of prompt engineering practices across diverse learning contexts while simultaneously addressing potential defects, biases, and ethical concerns related to LLM AI use in education. It calls upon practitioners to explore and train educational stakeholders in best practices around prompt engineering for LLM AI, fostering progress towards a more engaging and equitable educational future.
Journal Article
Prompt engineering in higher education: a systematic review to help inform curricula
2025
This paper presents a systematic review of the role of prompt engineering during interactions with Generative Artificial Intelligence (GenAI) in Higher Education (HE) to discover potential methods of improving educational outcomes. Drawing on a comprehensive search of academic databases and relevant literature, key trends, including multiple framework designs, are presented and explored to review the role, relevance, and applicability of prompt engineering to purposefully improve GenAI-generated responses in higher education contexts. Multiple experiments using a variety of prompt engineering frameworks are compared, contrasted and discussed. Analysis reveals that well-designed prompts have the potential to transform interactions with GenAI in higher education teaching and learning. Further findings show it is important to develop and teach pragmatic skills in AI interaction, including meaningful prompt engineering, which is best managed through a well-designed framework for creating and evaluating GenAI applications that are aligned with pre-determined contextual educational goals. The paper outlines some of the key concepts and frameworks that educators should be aware of when incorporating GenAI and prompt engineering into their teaching practices, and when teaching students the necessary skills for successful GenAI interaction.
Journal Article
Few-shot is enough: exploring ChatGPT prompt engineering method for automatic question generation in english education
by Kim, Hyeoncheol; Lee, Unggi; Jeon, Younghoon
in Chatbots; Computational linguistics; Computer Appl. in Social and Behavioral Sciences
2024
Through design and development research (DDR), we aimed to create a validated automatic question generation (AQG) system using large language models (LLMs) like ChatGPT, enhanced by prompt engineering techniques. While AQG has become increasingly integral to online learning for its efficiency in generating questions, issues such as inconsistent question quality and the absence of transparent and validated evaluation methods persist. Our research focused on creating a prompt engineering protocol tailored for AQG. This protocol underwent several iterations of refinement and validation to improve its performance. By gathering validation scores and qualitative feedback on the produced questions and the system’s framework, we examined the effectiveness of the system. The study findings indicate that our combined use of LLMs and prompt engineering in AQG produces questions with statistically significant validity. Our research further illuminates academic and design considerations for AQG design in English education: (a) certain question types might not be optimal for generation via ChatGPT, and (b) ChatGPT sheds light on the potential for collaborative AI-teacher efforts in question generation, especially within English education.
Journal Article
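The few-shot approach described in the abstract above can be illustrated with a minimal sketch: worked question-answer examples are prepended to the prompt so the model imitates their format for a new passage. The example passages, template wording, and function name below are hypothetical, not the authors' actual protocol.

```python
# A minimal sketch of few-shot prompt assembly for automatic question
# generation (AQG). All example content here is illustrative.

FEW_SHOT_EXAMPLES = [
    {
        "passage": "The sun rises in the east and sets in the west.",
        "question": "Where does the sun rise?",
        "answer": "In the east.",
    },
    {
        "passage": "Water boils at 100 degrees Celsius at sea level.",
        "question": "At what temperature does water boil at sea level?",
        "answer": "At 100 degrees Celsius.",
    },
]

def build_aqg_prompt(passage: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, new passage."""
    parts = ["Generate one comprehension question and its answer for the passage."]
    for ex in examples:
        parts.append(
            f"Passage: {ex['passage']}\n"
            f"Question: {ex['question']}\n"
            f"Answer: {ex['answer']}"
        )
    # The prompt ends mid-pattern so the model completes the question and answer.
    parts.append(f"Passage: {passage}\nQuestion:")
    return "\n\n".join(parts)

prompt = build_aqg_prompt("The Nile is the longest river in Africa.")
```

The assembled string would then be sent to the LLM; ending the prompt at `Question:` invites the model to continue the established pattern.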
The Impact of Prompt Engineering and a Generative AI-Driven Tool on Autonomous Learning: A Case Study
2025
This study evaluates “I Learn with Prompt Engineering”, a self-paced, self-regulated elective course designed to equip university students with skills in prompt engineering to effectively utilize large language models (LLMs), foster self-directed learning, and enhance academic English proficiency through generative AI applications. By integrating prompt engineering concepts with generative AI tools, the course supports autonomous learning and addresses critical skill gaps in language proficiency and market-ready capabilities. The study also examines EnSmart, an AI-driven tool powered by GPT-4 and integrated into Canvas LMS, which automates academic test content generation and grading and delivers real-time, human-like feedback. Performance evaluation, structured questionnaires, and surveys were used to evaluate the course’s impact on prompting skills, academic English proficiency, and overall learning experiences. Results demonstrated significant improvements in prompt engineering skills, with accessible patterns like “Persona” proving highly effective, while advanced patterns such as “Flipped Interaction” posed challenges. Gains in academic English were most notable among students with lower initial proficiency, though engagement and practice time varied. Students valued EnSmart’s intuitive integration and grading accuracy but identified limitations in question diversity and adaptability. The high final success rate demonstrated that proper course design (taking into consideration Panadero’s four dimensions of self-regulated learning) can facilitate successful autonomous learning. The findings highlight generative AI’s potential to enhance autonomous learning and task automation, emphasizing the necessity of human oversight for ethical and effective implementation in education.
Journal Article
Comparative analysis of AI and expert evaluations in engineering design pedagogy
by Coşkun, Tuğra Karademir; Altan, Esra Bozkurt
in Artificial Intelligence; Assessments; Chatbots
2025
Integrating engineering design processes into science education has become a significant priority in STEM instruction. However, many science teachers face difficulties incorporating these processes due to limited pedagogical expertise. Generative artificial intelligence (GAI) tools such as ChatGPT offer potential support mechanisms by evaluating lesson plans and providing formative feedback. This study investigates the reliability and validity of GAI evaluations compared to expert assessments.
This mixed-methods study involved 43 science teachers who received professional development over four months to integrate engineering design into their lesson plans. A total of 52 lesson plans were evaluated using structured and unstructured prompts via ChatGPT 4.5, alongside evaluations by expert mentors. Quantitative data were analyzed using the Intraclass Correlation Coefficient (ICC) and Bland-Altman methods to assess inter-rater consistency. Qualitative data were analyzed through open and deductive coding to interpret differences in evaluation rationale.
Findings revealed high consistency between structured prompt AI evaluations and expert assessments (ICC = 0.708), while unstructured prompts showed low and non-significant agreement (ICC = 0.076). Qualitative analysis indicated that AI evaluations, particularly those using structured prompts, tend to be more positive and holistic, whereas experts offered more detailed and critical feedback. Differences were also observed in evaluating components like problem definition, testability, and interdisciplinary integration.
Structured AI prompts offer reliable and valid evaluation results comparable to expert assessments and could serve as scalable tools in teacher support systems. However, unstructured prompts produce inconsistent outcomes and require refinement. The study highlights both the potential and limitations of using GAI tools for pedagogical evaluation in STEM education.
Journal Article
Teaching EFL students to write with ChatGPT: Students' motivation to learn, cognitive load, and satisfaction with the learning process
2024
This mixed methods study explores EFL students’ experiences and perceptions as they learn to write a composition with ChatGPT’s support in a classroom instructional context. Students’ perceptions are explored in terms of their motivation to learn about ChatGPT, cognitive load and satisfaction with the learning process. In a workshop format, twenty-one Hong Kong secondary school students were introduced to ChatGPT, learned prompt engineering skills, and attempted a 500-word English language writing task with ChatGPT’s support. Data collected included a pre-workshop motivation questionnaire, think-aloud protocols during the writing task, and a post-workshop questionnaire on motivation, cognitive load, and satisfaction. Results revealed no significant difference in students’ motivation before and after the workshop, but mean motivation scores increased slightly. Students reported high cognitive load during the writing task, especially during prompt engineering. However, students expressed high satisfaction with the workshop overall. Findings indicate ChatGPT’s potential to engage EFL students in the writing classroom, but its use can impose heavy cognitive demands. To ensure that ChatGPT use supports EFL writing without overwhelming students, educators should consider an iterative design process for activities and instructional materials and careful scaffolding of instruction, especially for prompt engineering.
Journal Article
Perspectives of Generative AI in Chemistry Education Within the TPACK Framework
by Feldman-Maggor, Yael; Blonder, Ron; Alexandron, Giora
in Artificial intelligence; Chemistry; Education
2025
Artificial intelligence (AI) has made remarkable strides in recent years, finding applications in various fields, including chemistry research and industry. Its integration into chemistry education has gained attention more recently, particularly with the advent of generative AI (GAI) tools. However, there is a need to understand how teachers’ knowledge can impact their ability to integrate these tools into their practice. This position paper emphasizes two central points. First, teachers’ technological pedagogical content knowledge (TPACK) is essential for more accurate and responsible use of GAI. Second, prompt engineering—the practice of delivering instructions to GAI tools—requires knowledge that falls partially under the technological dimension of TPACK but also includes AI-related competencies that do not fit into any aspect of the framework, for example, the awareness of GAI-related issues such as bias, discrimination, and hallucinations. These points are demonstrated using ChatGPT on three examples drawn from chemistry education. This position paper extends the discussion about the types of knowledge teachers need to apply GAI effectively, highlights the need to further develop theoretical frameworks for teachers’ knowledge in the age of GAI, and, to address that, suggests ways to extend existing frameworks such as TPACK with AI-related dimensions.
Journal Article
Research on Intelligent Grading of Physics Problems Based on Large Language Models
2025
The automation of educational and instructional assessment plays a crucial role in enhancing the quality of teaching management. In physics education, calculation problems with intricate problem-solving ideas pose challenges to the intelligent grading of tests. This study explores the automatic grading of physics problems through a combination of large language models and prompt engineering. Comparing the performance of four prompt strategies (one-shot, few-shot, chain of thought, tree of thought) within two large models, ERNIEBot-4-turbo and GPT-4o, this study finds that the tree-of-thought prompt can better assess calculation problems with complex ideas (N = 100, ACC ≥ 0.9, kappa > 0.8) and reduce the performance gap between different models. This research provides valuable insights for the automation of assessments in physics education.
Journal Article
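The four prompt strategies compared in the study above differ mainly in how the prompt is structured, which can be sketched as plain string templates. The wording, placeholders, and function name below are hypothetical illustrations of the general techniques, not the prompts used in the study.

```python
# Illustrative sketch of the four prompt strategies (one-shot, few-shot,
# chain of thought, tree of thought) applied to grading a physics answer.

def grading_prompt(strategy: str, problem: str, student_answer: str) -> str:
    """Build a grading prompt in the style of the named strategy."""
    base = f"Problem: {problem}\nStudent answer: {student_answer}\n"
    if strategy == "one-shot":
        # One worked grading example precedes the task.
        return "Example: [one worked grading example]\n\n" + base + "Grade the answer (0-10)."
    if strategy == "few-shot":
        # Several worked examples precede the task.
        return "Examples: [several worked grading examples]\n\n" + base + "Grade the answer (0-10)."
    if strategy == "chain-of-thought":
        # Ask the model to reason through the solution before grading.
        return (base + "Reason step by step through the correct solution, "
                "check each step of the student's work, then grade the answer (0-10).")
    if strategy == "tree-of-thought":
        # Ask the model to explore several solution paths and combine judgments.
        return (base + "Propose several independent solution paths, "
                "evaluate the student's work against each path, "
                "then combine the judgments into a final grade (0-10).")
    raise ValueError(f"unknown strategy: {strategy}")
```

The bracketed placeholders stand in for worked examples that a real grading pipeline would supply; only the structural difference between the strategies is shown here.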