575,824 results for "EDUCATIONAL RESEARCH"
Research-Practice Partnerships in Education: Outcomes, Dynamics, and Open Questions
Policymakers, funders, and researchers today view research-practice partnerships (RPPs) as a promising approach for expanding the role of research in improving educational practice. Although studies in other fields provide evidence of the potential for RPPs, studies in education are few. This article provides a review of available evidence of the outcomes and dynamics of RPPs in education and related fields. It then outlines a research agenda for the study of RPPs that can guide funders' investments and help developing partnerships succeed.
Interpreting Effect Sizes of Education Interventions
Researchers commonly interpret effect sizes by applying benchmarks proposed by Jacob Cohen over a half century ago. However, effects that are small by Cohen’s standards are large relative to the impacts of most field-based interventions. These benchmarks also fail to consider important differences in study features, program costs, and scalability. In this article, I present five broad guidelines for interpreting effect sizes that are applicable across the social sciences. I then propose a more structured schema with new empirical benchmarks for interpreting a specific class of studies: causal research on education interventions with standardized achievement outcomes. Together, these tools provide a practical approach for incorporating study features, costs, and scalability into the process of interpreting the policy importance of effect sizes.
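As a rough illustration of the kind of calculation these benchmarks are applied to, the sketch below computes a standardized mean-difference effect size (Cohen's d) and labels it with Cohen's classic 0.2/0.5/0.8 cutoffs. The data, function names, and thresholds are illustrative assumptions only; the article's own empirical benchmarks for education interventions are not reproduced here.

```python
# Illustrative sketch (not from the article): a standardized mean-difference
# effect size compared against Cohen's conventional benchmarks, which the
# article argues are often too coarse for field-based education interventions.
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2) / (
        n_treat + n_ctrl - 2
    )
    return (mean_treat - mean_ctrl) / math.sqrt(pooled_var)

def cohen_label(d):
    """Cohen's conventional labels; the article proposes replacing these with
    empirical, domain-specific benchmarks that also weigh cost and scalability."""
    d = abs(d)
    if d < 0.2:
        return "below Cohen's 'small'"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

# Hypothetical achievement outcome: 5-point gain on a test with SD of 15.
d = cohens_d(mean_treat=105.0, mean_ctrl=100.0, sd_treat=15.0, sd_ctrl=15.0,
             n_treat=200, n_ctrl=200)
print(f"d = {d:.2f} ({cohen_label(d)})")  # d = 0.33 -> 'small' by Cohen, yet sizable for a field trial
```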
Methodological Guidance Paper: The Art and Science of Quality Systematic Reviews
The purpose of this article is to overview various challenges that prospective authors of quality systematic reviews should be prepared to address. These challenges pertain to all phases of the review process: from posing a critical question worthy of pursuit and executing a search procedure that is appropriately framed and transparently recorded, to discerning patterns and trends within the resulting data that speak directly to the critical question framing the review. For each of these challenges, suggestions are offered as to how authors might respond so as to enhance the quality of the review process and increase the value of findings for educational research, practice, and policymaking.
Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions
With the increasing use of Artificial Intelligence (AI) technologies in education, the number of published studies in the field has increased. However, no large-scale reviews have been conducted to comprehensively investigate the various aspects of this field. Based on 4,519 publications from 2000 to 2019, we attempt to fill this gap and identify trends and topics related to AI applications in education (AIEd) using topic-based bibliometrics. Results of the review reveal an increasing interest in using AI for educational purposes from the academic community. The main research topics include intelligent tutoring systems for special education; natural language processing for language education; educational robots for AI education; educational data mining for performance prediction; discourse analysis in computer-supported collaborative learning; neural networks for teaching evaluation; affective computing for learner emotion detection; and recommender systems for personalized learning. We also discuss the challenges and future directions of AIEd.
A Bibliometric Review of Research on Educational Administration: Science Mapping the Literature, 1960 to 2018
This systematic review used "science mapping" as a means of understanding the evolution of research in educational administration (EA). The review sought to document the size, growth trajectory, and geographic distribution of EA research, identify high-impact scholars and documents, and illuminate the "intellectual structure" of the field. Although science mapping has been applied widely in science, medicine, and the social sciences, it is still new in the field of education. The authors identified 22,361 peer-reviewed articles published in 22 Scopus-indexed EA journals between 1960 and 2018. The authors used VOSviewer, Excel, and Tableau software to analyze the data set. The review found that the EA knowledge base has grown dramatically since 1960, with an accelerating rate of growth and increasing gender and geographic diversity during the past two decades. Using co-citation analysis, the review identified canonical documents, defined as highly influential documents whose impact has been sustained for a period of several decades. The review also identified four key Schools of Thought that have emerged over time, focusing on Leadership for Learning, Leadership and Cultural Change, School Effectiveness and School Improvement, and Leading Teachers. More broadly, the findings highlighted a paradigm shift from "school administration" to "school leadership" over the course of the six decades. Another significant finding identified "leadership for student learning and development" as the "cognitive anchor" of the intellectual structure of the EA knowledge base. The authors conclude that science mapping offers a new and useful means of unpacking the historical development of fields of study.
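For readers unfamiliar with co-citation analysis, the sketch below shows the underlying counting step on made-up data: two documents are co-cited whenever they appear together in the same citing article's reference list. This is only an assumed, minimal illustration, not the authors' actual pipeline, which relied on VOSviewer, Excel, and Tableau.

```python
# Illustrative sketch with hypothetical data: counting document co-citations,
# the relationship co-citation mapping tools visualize when surfacing
# "canonical" documents in a field.
from collections import Counter
from itertools import combinations

# Hypothetical reference lists keyed by citing article.
reference_lists = {
    "article_1": ["Doc A", "Doc B", "Doc C"],
    "article_2": ["Doc A", "Doc B"],
    "article_3": ["Doc B", "Doc C"],
}

cocitations = Counter()
for refs in reference_lists.values():
    # Each unordered pair of references in one article counts as one co-citation.
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

for (doc1, doc2), count in cocitations.most_common():
    print(f"{doc1} <-> {doc2}: co-cited {count} time(s)")
```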
Artificial intelligence in higher education: the state of the field
This systematic review provides unique findings with an up-to-date examination of artificial intelligence (AI) in higher education (HE) from 2016 to 2022. Using PRISMA principles and protocol, 138 articles were identified for full examination. Using a priori and grounded coding, the data from the 138 articles were extracted, analyzed, and coded. The findings show that in 2021 and 2022, publications rose to nearly two to three times the number of previous years. With this rapid rise in the number of AIEd HE publications, new trends have emerged. Research was conducted in six of the seven continents of the world, and the lead in the number of publications has shifted from the US to China. Another new trend concerns researcher affiliation: prior studies showed a lack of researchers from departments of education, which have now become the most dominant affiliation. Undergraduate students were the most studied group, at 72%. Similar to the findings of other studies, language learning was the most common subject domain, including writing, reading, and vocabulary acquisition. In examining who AIEd was intended for, 72% of the studies focused on students, 17% on instructors, and 11% on managers. To answer the overarching question of how AIEd was used in HE, grounded coding was applied, and five usage codes emerged from the data: (1) Assessment/Evaluation, (2) Predicting, (3) AI Assistant, (4) Intelligent Tutoring System (ITS), and (5) Managing Student Learning. This systematic review revealed gaps in the literature to be used as a springboard for future researchers, including new tools such as ChatGPT.
Highlights:
  • A systematic review examining AIEd in higher education (HE) up to the end of 2022.
  • Unique findings on the switch from the US to China as the source of the most published studies.
  • A two- to threefold increase in studies published in 2021 and 2022 compared with prior years.
  • AIEd was used for: Assessment/Evaluation, Predicting, AI Assistant, Intelligent Tutoring System, and Managing Student Learning.
Demystifying Content Analysis
Objective. In the course of daily teaching responsibilities, pharmacy educators collect rich data that can provide valuable insight into student learning. This article describes the qualitative data analysis method of content analysis, which can be useful to pharmacy educators because of its application in the investigation of a wide variety of data sources, including textual, visual, and audio files. Findings. Both manifest and latent content analysis approaches are described, with several examples used to illustrate the processes. This article also offers insights into the variety of relevant terms and visualizations found in the content analysis literature. Finally, common threats to the reliability and validity of content analysis are discussed, along with suitable strategies to mitigate these risks during analysis. Summary. This review of content analysis as a qualitative data analysis method will provide clarity and actionable instruction for both novice and experienced pharmacy education researchers.
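As a minimal, hypothetical illustration of the manifest approach described in the article, the sketch below counts survey responses that explicitly mention predefined codes. The codebook, responses, and keyword matching are assumptions for demonstration only; latent (interpretive) analysis is not shown.

```python
# Illustrative sketch (hypothetical codebook and responses, not from the article):
# manifest content analysis counts explicit occurrences of predefined codes,
# whereas latent analysis interprets underlying meaning.
from collections import Counter
import re

codebook = {
    "workload": ["workload", "too much", "overloaded"],
    "feedback": ["feedback", "comments", "graded"],
}

responses = [
    "The workload this term felt overloaded compared to last year.",
    "I appreciated the detailed feedback and comments on my assignments.",
]

counts = Counter()
for response in responses:
    text = response.lower()
    for code, keywords in codebook.items():
        # Count a code at most once per response when any keyword appears.
        if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords):
            counts[code] += 1

print(dict(counts))  # e.g. {'workload': 1, 'feedback': 1}
```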
Understanding the relationship between teachers' pedagogical beliefs and technology use in education: a systematic review of qualitative evidence
This review was designed to further our understanding of the link between teachers' pedagogical beliefs and their educational uses of technology. The synthesis of qualitative findings integrates the available evidence about this relationship with the ultimate goal being to facilitate the integration of technology in education. A meta-aggregative approach was utilized to analyze the results of the 14 selected studies. The findings are reported in terms of five synthesis statements, describing (1) the bi-directional relationship between pedagogical beliefs and technology use, (2) teachers' beliefs as perceived barriers, (3) the association between specific beliefs with types of technology use, (4) the role of beliefs in professional development, and (5) the importance of the school context. By interpreting the results of the review, recommendations are provided for practitioners, policy makers, and researchers focusing on pre- and in-service teacher technology training.
Design-Based Research: A Decade of Progress in Education Research?
Design-based research (DBR) evolved near the beginning of the 21st century and was heralded as a practical research methodology that could effectively bridge the chasm between research and practice in formal education. In this article, the authors review the characteristics of DBR and analyze the five most cited DBR articles from each year of this past decade. They illustrate the context, publications, and most popular interventions utilized. They conclude that interest in DBR is increasing and that results provide limited evidence for guarded optimism that the methodology is meeting its promised benefits.
Research on Continuous Improvement: Exploring the Complexities of Managing Educational Change
As a result of the frustration with the dominant “What Works” paradigm of large-scale research-based improvement, practitioners, researchers, foundations, and policymakers are increasingly embracing a set of ideas and practices that can be collectively labeled continuous improvement (CI) methods. This chapter provides a comparative review of these methods, paying particular attention to CI methods’ intellectual influences, theories of action, and affordances and challenges in practice. We first map out and explore the shared intellectual forebears that CI methods draw on. We then discuss three kinds of complexity to which CI methods explicitly attend—ambiguity, variability, and interdependence—and how CI methods seek a balance of local and formal knowledge in response to this complexity. We go on to argue that CI methods are generally less attentive to the relational and political dimensions of educational change and that this leads to challenges in practice. We conclude by considering CI methods’ aspirations for impact at scale, and offer a number of recommendations to inform future research and practice.