161 result(s) for "Wulff, Peter"
Learning against the machine: the double edged sword of (Gen)AI in STEM education
Generative artificial intelligence (GenAI) is rapidly permeating science, technology, engineering, and mathematics (STEM) education across the board—from learners, to institutions, to education policy. While GenAI tools promise benefits such as scalable feedback, personalized guidance, and automation of instructional tasks, their widespread adoption also raises critical concerns. In this commentary, we argue that many current GenAI applications conflate learning with performance and feedback, neglecting the active, reflective, and embodied processes that underpin meaningful learning in STEM disciplines. We identify three core challenges: (I) authentic learning requires active integration of knowledge, but (II) existing GenAI tools are not designed to support such processes, and thus (III) uncritical use of GenAI may undermine the very goals of STEM education by fostering (meta-)cognitive debt, de-skilling, and misplaced trust in machine-generated authority. We further highlight the systemic risks of commercial lock-in, epistemic opacity, and the erosion of educational institutions’ role in cultivating critical engagement with subject matter. Rather than assuming that access to GenAI equates to improved learning, we call for rigorous, discipline-based research to establish under which conditions GenAI can meaningfully contribute to STEM education. The central question is not whether GenAI will be adopted, but when, how, and why it should be used to enhance, rather than diminish, opportunities for active and reflective learning.
Modelling STEM Teachers’ Pedagogical Content Knowledge in the Framework of the Refined Consensus Model: A Systematic Literature Review
Science education researchers have developed a refined understanding of the structure of science teachers’ pedagogical content knowledge (PCK), but how to develop applicable and situation-adequate PCK remains largely unclear. A potential problem lies in the diverse conceptualisations of PCK used in PCK research. This study sought to systematize existing science education research on PCK through the lens of the recently proposed refined consensus model (RCM) of PCK. In this review, the studies’ approaches to investigating PCK and selected findings were characterised and synthesised as an overview comparing research before and after the publication of the RCM. We found that the studies largely employed a qualitative case-study methodology that included specific PCK models and tools. However, in recent years, the studies have focused increasingly on quantitative aspects. Furthermore, the results of the reviewed studies can mostly be integrated into the RCM. We argue that the RCM can function as a meaningful theoretical lens for conceptualizing links between teaching practice and PCK development by proposing pedagogical reasoning as a mechanism and/or explanation for PCK development in the context of teaching practice.
Using a large language model to provide individualized feedback for pre-service physics teachers’ written reflections
Continuous professional development of science teachers is closely related to their ability to reflect on their science lessons. For teachers to reflect effectively on their lessons, they need opportunities to make meaningful teaching-related experiences and to receive guiding feedback on their reflections during initial teacher education. A sensible means to provide reflection opportunities for teachers is through written reflection assignments. However, assessing written reflections is conceptually challenging and resource-intensive. Recent developments suggest that artificial intelligence (AI) might provide novel opportunities for science teacher education and for assessing written reflections. Consequently, we investigate ways in which AI in the form of a large language model can be utilized in a specific science teacher training program. The potential of AI for use in higher education has been highlighted in recent years. However, it has been noted that large language models are often too generic to be useful in specific science teacher education contexts. To address this issue, we used a pre-trained large language model, fine-tuned it to provide individualized feedback on pre-service physics teachers’ written reflections, and investigated to what degree contextual factors impact classification accuracy. Our results show that the pre-trained language model allowed sufficiently accurate classification and that classification accuracy can be improved by context specification. Our study highlights opportunities and challenges of utilizing AI, in particular large language models, for reflective writing analytics in science education.
Engaging young women in physics: An intervention to support young women’s physics identity development
This study presents findings on the physics identity development of female students in the German Physics Olympiad who participated in an intervention designed to support their engagement in physics. Enrichment programs such as the Physics Olympiad have been found to positively impact students' engagement and intent to pursue a career in science. However, many enrichment programs, including the Physics Olympiad, suffer from an underrepresentation of young women. The intervention investigated in this study capitalizes on gender-related research in physics education in order to explore ways in which gender equity can be raised in enrichment programs. To this end, we designed an identity-safe learning environment that facilitates participating young women's physics identity development. For example, same-sex groupings and carefully adapted physics content that particularly acknowledges young women's interests (e.g., relation to medical issues and the human body) were utilized. Overall, 30 Olympians took part in a one-day intervention (13 females, 17 males). Positive effects in two important physics identity constructs, namely "interest" and "competence," were found for young women, while no effects were found for young men. Furthermore, the young women were more likely to participate in next year's Physics Olympiad, compared to the overall female Physics Olympiad population. These results indicate that the careful design of an intervention based on gender research and science identity theory can support young women's physics identity development.
Are science competitions meeting their intentions? a case study on affective and cognitive predictors of success in the Physics Olympiad
Contemporary science competitions have two principal intentions: (1) identifying the students demonstrating the highest levels of domain-specific cognitive abilities and (2) recognizing and valuing the efforts of engaged and motivated students, even those without exceptional abilities. This study aimed to examine the relative influence of affective and cognitive variables on predicting success among 136 participants of the first two rounds of the German Physics Olympiad, and based on that, to evaluate the extent to which the Physics Olympiad meets the outlined intentions. Our findings indicate that the competition’s initial round erects a hurdle for engaged and motivated students who lack sufficient cognitive abilities, which runs counter to the second intention mentioned above. Conversely, the Physics Olympiad appears to effectively align with its first intention by successfully identifying students with highly developed physics-specific abilities. Building on our findings, we discuss ways to better align the competition with its intentions, thus contributing to the ongoing development of science competitions.
Network analysis of terms in the natural sciences: insights from Wikipedia through natural language processing and network analysis
Scientists use specific terms to denote concepts, objects, phenomena, etc. The terms are then connected with each other in sentences that are used in science-specific language. Representing these connections through term networks can yield valuable insights into central terms and the properties of the interconnections between them. Furthermore, understanding term networks can enhance assessment and diagnostics in science education. Computational means such as natural language processing and network analysis provide tools to analyze term networks in a principled way. This study utilizes natural language processing and network analysis to analyze linguistic properties of terms in the natural science disciplines (biology, chemistry, and physics). The language samples comprised German and English Wikipedia articles labelled according to the respective discipline. The different languages were used as contrasting cases. Natural language processing capabilities allowed us to extract term networks from the Wikipedia articles. The network analysis approach enabled us to gain insights into linguistic properties of science terms and the interconnections among them. Our findings indicate that in German and English Wikipedia, terms such as theory, time, energy, or system emerge as most central in physics. Moreover, the science-term networks display typical scale-free, complex-systems behavior. These findings can enhance assessment of science learners’ language use. The tools of natural language processing and network analysis can more generally facilitate information extraction from language corpora in the education fields.
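The core idea of the abstract above — extracting terms from sentences, linking co-occurring terms into a network, and reading off central terms — can be sketched in a few lines. This is a minimal, dependency-free illustration: the sentences and terms are invented stand-ins, not the study's Wikipedia data, and a real analysis would use an NLP pipeline for term extraction and a graph library for richer centrality measures.

```python
# Sketch of a term co-occurrence network with a simple degree centrality.
# The term lists below are hypothetical examples, not the study's data.
from collections import Counter
from itertools import combinations

# Each inner list: science terms assumed to be extracted from one sentence.
sentences = [
    ["energy", "system", "time"],
    ["energy", "theory"],
    ["system", "time"],
]

# Edge weights: how often two terms co-occur within a sentence.
edges = Counter()
for terms in sentences:
    for a, b in combinations(sorted(set(terms)), 2):
        edges[(a, b)] += 1

# Degree: number of distinct neighbours per term (a simple centrality proxy).
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

most_central = degree.most_common(1)[0][0]
```

On this toy input, "energy" co-occurs with the most distinct terms and thus comes out as most central, mirroring how terms like energy or system surface as hubs in the study's physics networks.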
Utilizing a Pretrained Language Model (BERT) to Classify Preservice Physics Teachers’ Written Reflections
Computer-based analysis of preservice teachers’ written reflections could enable educational scholars to design personalized and scalable intervention measures to support reflective writing. Algorithms and technologies in the domain of research related to artificial intelligence have been found to be useful in many tasks related to reflective writing analytics such as classification of text segments. However, mostly shallow learning algorithms have been employed so far. This study explores to what extent deep learning approaches can improve classification performance for segments of written reflections. To do so, a pretrained language model (BERT) was utilized to classify segments of preservice physics teachers’ written reflections according to elements in a reflection-supporting model. Since BERT has been found to advance performance in many tasks, it was hypothesized to enhance classification performance for written reflections as well. We also compared the performance of BERT with other deep learning architectures and examined conditions for best performance. We found that BERT outperformed the other deep learning architectures and previously reported performances with shallow learning algorithms for classification of segments of reflective writing. BERT starts to outperform the other models when trained on about 20 to 30% of the training data. Furthermore, attribution analyses for inputs yielded insights into important features for BERT’s classification decisions. Our study indicates that pretrained language models such as BERT can boost performance for language-related tasks in educational contexts such as classification.
Bridging the Gap Between Qualitative and Quantitative Assessment in Science Education Research with Machine Learning — A Case for Pretrained Language Models-Based Clustering
Science education researchers typically face a trade-off between more quantitatively oriented confirmatory testing of hypotheses, or more qualitatively oriented exploration of novel hypotheses. More recently, open-ended, constructed response items were used to combine both approaches and advance assessment of complex science-related skills and competencies. For example, research in assessing science teachers’ noticing and attention to classroom events benefitted from more open-ended response formats because teachers can present their own accounts. Then, open-ended responses are typically analyzed with some form of content analysis. However, language is noisy, ambiguous, and unsegmented and thus open-ended, constructed responses are complex to analyze. Uncovering patterns in these responses would benefit from more principled and systematic analysis tools. Consequently, computer-based methods with the help of machine learning and natural language processing were argued to be promising means to enhance assessment of noticing skills with constructed response formats. In particular, pretrained language models recently advanced the study of linguistic phenomena and thus could well advance assessment of complex constructs through constructed response items. This study examines potentials and challenges of a pretrained language model-based clustering approach to assess preservice physics teachers’ attention to classroom events as elicited through open-ended written descriptions. It was examined to what extent the clustering approach could identify meaningful patterns in the constructed responses, and in what ways textual organization of the responses could be analyzed with the clusters. Preservice physics teachers (N = 75) were instructed to describe a standardized, video-recorded teaching situation in physics. The clustering approach was used to group related sentences. 
Results indicate that the pretrained language model-based clustering approach yields well-interpretable, specific, and robust clusters, which could be mapped to physics-specific and more general contents. Furthermore, the clusters facilitate advanced analysis of the textual organization of the constructed responses. Hence, we argue that machine learning and natural language processing provide science education researchers means to combine exploratory capabilities of qualitative research methods with the systematicity of quantitative methods.
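The clustering idea described above — representing sentences as vectors and grouping related ones — can be illustrated with a deliberately simple stand-in. The study used pretrained language model embeddings; the bag-of-words vectors, invented sentences, and seed-based assignment below are assumptions for illustration only, mimicking a single k-means-style assignment step.

```python
# Illustrative sketch: group sentences by cosine similarity of
# bag-of-words vectors (a stand-in for pretrained LM embeddings).
# Sentences are invented examples, not the study's data.
import math
from collections import Counter

sentences = [
    "the teacher writes the formula on the board",
    "the teacher explains the formula",
    "students talk quietly in the back row",
    "some students chat in the back",
]

def vectorize(text):
    return Counter(text.split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v)

vectors = [vectorize(s) for s in sentences]

# One assignment step: attach each sentence to the nearest of two seed
# sentences (indices 0 and 2), mimicking a single k-means iteration.
seeds = [0, 2]
clusters = [max(seeds, key=lambda s: cosine(vectors[i], vectors[s]))
            for i in range(len(vectors))]
```

Here the two teacher-focused sentences group together and the two student-focused ones group together, which is the kind of content-based sentence grouping the clustering approach produces at scale.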
Computer-Based Classification of Preservice Physics Teachers’ Written Reflections
Reflecting in written form on one’s teaching enactments has been considered a facilitator for teachers’ professional growth in university-based preservice teacher education. Writing a structured reflection can be facilitated through external feedback. However, researchers noted that feedback in preservice teacher education often relies on holistic, rather than more content-based, analytic feedback because educators oftentimes lack resources (e.g., time) to provide more analytic feedback. To overcome this impediment to feedback for written reflection, advances in computer technology can be of use. Hence, this study sought to utilize techniques of natural language processing and machine learning to train a computer-based classifier that classifies preservice physics teachers’ written reflections on their teaching enactments in a German university teacher education program. To do so, a reflection model was adapted to physics education. It was then tested to what extent the computer-based classifier could accurately classify the elements of the reflection model in segments of preservice physics teachers’ written reflections. Multinomial logistic regression using word count as a predictor was found to yield acceptable average human-computer agreement (F1-score on held-out test dataset of 0.56) so that it might fuel further development towards an automated feedback tool that supplements existing holistic feedback for written reflections with data-based, analytic feedback.
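The human-computer agreement reported above is an F1-score over classified reflection segments. As a reminder of how such a score is computed, here is a minimal macro-averaged F1 sketch; the label names and predictions are invented for illustration and do not reproduce the study's classifier, data, or its 0.56 result.

```python
# Minimal macro-F1 computation over per-segment labels.
# Gold and predicted labels are invented examples.
gold = ["circumstance", "evaluation", "evaluation", "alternative", "circumstance"]
pred = ["circumstance", "evaluation", "circumstance", "alternative", "alternative"]

def macro_f1(gold, pred):
    labels = sorted(set(gold) | set(pred))
    scores = []
    for label in labels:
        tp = sum(g == p == label for g, p in zip(gold, pred))
        fp = sum(p == label and g != label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    # Macro-averaging weights each label equally, regardless of frequency.
    return sum(scores) / len(scores)

score = macro_f1(gold, pred)
```

Macro-averaging treats rare and frequent reflection elements equally, which is one common choice when classes are imbalanced; reported F1 values depend on which averaging variant is used.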