35,012 result(s) for "Educational Researchers"
Artificial intelligence in higher education: the state of the field
This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings of this study show that in 2021 and 2022, publications rose to nearly two to three times the number of previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged. The findings show that research was conducted in six of the seven continents of the world. The trend has shifted from the US to China leading in the number of publications. Another new trend is in researcher affiliation: prior studies showed a lack of researchers from departments of education, but education is now the most dominant department. Undergraduate students were the most studied group, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining who the AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. In answering the overarching question of how AIEd was used in HE, grounded coding was used. Five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning.
This systematic review revealed gaps in the literature to be used as a springboard for future researchers, including new tools such as ChatGPT.
Highlights
  • A systematic review examining AIEd in higher education (HE) up to the end of 2022.
  • Unique findings in the switch from the US to China as the source of the most published studies.
  • A two- to threefold increase in studies published in 2021 and 2022 relative to prior years.
  • AIEd was used for: Assessment/Evaluation, Predicting, AI Assistant, Intelligent Tutoring System, and Managing Student Learning.
Determination of digital technologies preferences of educational researchers
Purpose: This study aims to investigate the preferences of 96 educational researchers on the use of digital technologies in scientific research. Design/methodology/approach: The study was designed as a quantitative-dominant sequential explanatory mixed-method research. Findings: Despite the spreading use of advanced technologies of big data and data mining, the most preferred digital technologies were found to be data analysis programs, databases, and questionnaires. The primary reasons for using digital technology in scientific research were to collect data easily and quickly, to reduce research costs, and to reach a higher number of participants. Originality/value: The use of digital technologies in scientific research is considered a revolutionary action that creates innovative opportunities. Through digitalized life, probably for the first time in history, educational researchers have access to analytical information that can be more informative than individuals' own statements in research involving human factors. However, few studies have investigated the preferences of educational researchers who use digital technologies in their scientific research.
The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies
Peer assessment has been the subject of considerable research interest over the last three decades, with numerous educational researchers advocating for the integration of peer assessment into schools and instructional practice. Research synthesis in this area has, however, largely relied on narrative reviews to evaluate the efficacy of peer assessment. Here, we present a meta-analysis (54 studies, k = 141) of experimental and quasi-experimental studies that evaluated the effect of peer assessment on academic performance in primary, secondary, or tertiary students across subjects and domains. An overall small to medium effect of peer assessment on academic performance was found (g = 0.31, p < .001). The results suggest that peer assessment improves academic performance compared with no assessment (g = 0.31, p = .004) and teacher assessment (g = 0.28, p = .007), but was not significantly different in its effect from self-assessment (g = 0.23, p = .209). Additionally, meta-regressions examined the moderating effects of several feedback and educational characteristics (e.g., online vs offline, frequency, education level). Results suggested that the effectiveness of peer assessment was remarkably robust across a wide range of contexts. These findings provide support for peer assessment as a formative practice and suggest several implications for the implementation of peer assessment into the classroom.
Demystifying Content Analysis
Objective. In the course of daily teaching responsibilities, pharmacy educators collect rich data that can provide valuable insight into student learning. This article describes the qualitative data analysis method of content analysis, which can be useful to pharmacy educators because of its application in the investigation of a wide variety of data sources, including textual, visual, and audio files. Findings. Both manifest and latent content analysis approaches are described, with several examples used to illustrate the processes. This article also offers insights into the variety of relevant terms and visualizations found in the content analysis literature. Finally, common threats to the reliability and validity of content analysis are discussed, along with suitable strategies to mitigate these risks during analysis. Summary. This review of content analysis as a qualitative data analysis method will provide clarity and actionable instruction for both novice and experienced pharmacy education researchers.
The research we have is not the research we need
The special issue “A Synthesis of Systematic Review Research on Emerging Learning Environments and Technologies” edited by Drs. Florence Martin, Vanessa Dennen, and Curtis Bonk has assembled a noteworthy collection of systematic review articles, each focusing on a different aspect of emerging learning technologies. In this conclusion, we focus on these evidence-based reviews and their practical implications for practitioners as well as future researchers. While recognizing the merits of these reviews, we conclude our analysis by encouraging readers to consider conducting educational design research to address serious problems related to teaching, learning, and performance, collaborating more closely with teachers, administrators, and other practitioners in tackling these problems, and always striving to make a difference in the lives of learners around the world.
The impact of gamification in educational settings on student learning outcomes: a meta-analysis
Gamification research in educational settings has produced mixed results on student learning outcomes. Educational researchers and practitioners both struggle with identifying when, where, and how to use gamification design concepts. The present study provides findings from a meta-analysis that integrated the empirical, quantitative research on gamification in formal educational settings on student learning outcomes. This was achieved by examining the overall effect size, identifying which gamification design elements (e.g., badges) were used, and determining under what circumstances (e.g., engineering education) gamification works. The final corpus of data included 30 independent studies and associated effect sizes comparing gamification to non-gamification conditions while accounting for N = 3083 participants. The overall effect size using a random-effects model is g = .464 [.244 to .684] in favor of the gamification condition, which is a small to medium effect size. We examined 14 different gamification design elements (e.g., leaderboards) and showed that each leads to different effects on student learning outcomes. Further, the type of publication (e.g., journal article), student classification (e.g., undergraduate), and subject area (e.g., mathematics) are also investigated as moderators. We provide a discussion of our findings, some recommendations for future research, and some brief closing remarks.
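Several of the meta-analyses in these results report Hedges' g pooled under a random-effects model. As a minimal sketch of what that computation involves (not the authors' actual analysis code; the function names and the DerSimonian-Laird estimator choice are illustrative assumptions), the effect-size correction and pooling can be expressed as:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction (Hedges' g)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    return j * d

def random_effects_pool(effects, variances):
    """Pooled effect under a DerSimonian-Laird random-effects model."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    q = sum(wi * (gi - fixed)**2 for wi, gi in zip(w, effects))  # heterogeneity
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)       # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effects weights
    return sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
```

With identical study effects the between-study variance estimate collapses to zero and the pooled value equals the common effect; with heterogeneous effects, tau2 widens the weights so smaller studies contribute relatively more than under a fixed-effect model.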
A Cognitive Load Theory Approach to Defining and Measuring Task Complexity Through Element Interactivity
Educational researchers have been confronted with a multitude of definitions of task complexity and a lack of consensus on how to measure it. Using a cognitive load theory-based perspective, we argue that the task complexity that learners experience is based on element interactivity. Element interactivity can be determined by simultaneously considering the structure of the information being processed and the knowledge held in long-term memory of the person processing the information. Although the structure of information in a learning task can easily be quantified by counting the number of interacting information elements, knowledge held in long-term memory can only be estimated using teacher judgment or knowledge tests. In this paper, we describe the different perspectives on task complexity and present some concrete examples from cognitive load research on how to estimate the levels of element interactivity determining intrinsic and extraneous cognitive load. The theoretical and practical implications of the cognitive load perspective of task complexity for instructional design are discussed.
The Theory and Practice of Culturally Relevant Education: A Synthesis of Research Across Content Areas
Many teachers and educational researchers have claimed to adopt tenets of culturally relevant education (CRE). However, recent work describes how standardized curricula and testing have marginalized CRE in educational reform discourses. In this synthesis of research, we sought examples of research connecting CRE to positive student outcomes across content areas. It is our hope that this synthesis will be a reference useful to educational researchers, parents, teachers, and education leaders wanting to reframe public debates in education away from neoliberal individualism, whether in a specific content classroom or in a broader educational community.
Both Questionable and Open Research Practices Are Prevalent in Education Research
Concerns about the conduct of research are pervasive in many fields, including education. In this preregistered study, we replicated and extended previous studies from other fields by asking education researchers about 10 questionable research practices and five open research practices. We asked them to estimate the prevalence of the practices in the field, to self-report their own use of such practices, and to estimate the appropriateness of these behaviors in education research. We made predictions under four umbrella categories: comparison to psychology, geographic location, career stage, and quantitative orientation. Broadly, our results suggest that both questionable and open research practices are used by many education researchers. This baseline information will be useful as education researchers seek to understand existing social norms and grapple with whether and how to improve research practices.
A conceptual framework for integrated STEM education
The global urgency to improve STEM education may be driven by the environmental and social impacts of the twenty-first century, which in turn jeopardize global security and economic stability. The complexity of these global factors reaches beyond just helping students achieve high scores in math and science assessments. Friedman (The world is flat: A brief history of the twenty-first century, 2005) helped illustrate the complexity of a global society, and educators must help students prepare for this global shift. In response to these challenges, the USA experienced massive STEM educational reforms in the last two decades. In practice, STEM educators lack a cohesive understanding of STEM education and could therefore benefit from a STEM education conceptual framework. The process of integrating science, technology, engineering, and mathematics in authentic contexts can be as complex as the global challenges that demand a new generation of STEM experts. Educational researchers indicate that teachers struggle to make connections across the STEM disciplines. Consequently, students are often uninterested in science and math when they learn in an isolated and disjointed manner, missing connections to crosscutting concepts and real-world applications. The following paper operationalizes STEM education key concepts and blends learning theories to build an integrated STEM education framework to assist in further researching integrated STEM education.