Search Results

2 results for "AI-based training and assessment"
A systematic review of the impact of artificial intelligence on educational outcomes in health professions education
Background: Artificial intelligence (AI) has a variety of potential applications in health professions education and assessment; however, the measurable educational impact of AI-based educational strategies on learning outcomes has not been systematically evaluated.

Methods: A systematic literature search of electronic databases (CINAHL Plus, EMBASE, ProQuest, PubMed, Cochrane Library, and Web of Science) was conducted to identify studies published up to October 1, 2024 that analyze the impact of AI-based tools or interventions in health professions assessment and/or training on educational outcomes. The analysis follows the PRISMA 2020 statement for systematic reviews and the structured approach to reporting in health care education for evidence synthesis.

Results: The final analysis included twelve studies, all single-center, with sample sizes ranging from 4 to 180 participants. Three studies were randomized controlled trials, seven had a quasi-experimental design, and two were observational. Study designs were heterogeneous, and confounding variables were not controlled. None of the studies provided learning objectives or descriptions of the competencies to be achieved. Three studies applied learning theories in the development of AI-powered educational strategies, and one reported an analysis of the authenticity of the learning environment. No study provided information on the impact of feedback activities on learning outcomes. All studies corresponded to Kirkpatrick’s second level, evaluating technical skills or quantifiable knowledge; none evaluated more complex tasks, such as learner behavior in the workplace. There was insufficient information on training datasets and copyright issues.

Conclusions: The current evidence regarding measurable educational outcomes of AI-powered interventions in health professions education is poor, and further studies with a rigorous methodological approach are needed. This work also highlights that there is no straightforward guide for evaluating the quality of research in AI-based education and suggests a series of criteria that should be considered.

Trial registration: Methods and inclusion criteria were defined in advance, specified in a protocol, and registered in the OSF registries (https://osf.io/v5cgp/). Clinical trial number: not applicable.
A Scoping Review and Assessment Framework for Technical Debt in the Development and Operation of AI/ML Competition Platforms
Technical debt (TD) has emerged as a significant concern in the development of AI/ML applications, where rapid experimentation, evolving objectives, and complex data pipelines often introduce hidden quality and maintainability issues. Within this broader context, AI/ML competition platforms face heightened risks due to time-constrained environments and evolving requirements. Despite its relevance, TD in such competitive settings remains underexplored and lacks systematic investigation. This study addresses two research questions: (RQ1) What are the most significant types of technical debt recorded in AI-based systems? and (RQ2) How can we measure the technical debt of an AI-based competition platform? We present a scoping review of 100 peer-reviewed publications related to AI/ML competitions, aiming to map the landscape of TD manifestations and management practices. Through thematic analysis, the study identifies 18 distinct types of technical debt, each accompanied by a definition, rationale, and example grounded in competition scenarios. Based on this typology, a stakeholder-oriented assessment framework is proposed, including a detailed questionnaire and a methodology for the quantitative evaluation of TD across multiple categories. A novel contribution is the introduction of Accessibility Debt, which addresses the challenges associated with the ease and speed of immediate use of the AI/ML competition platforms. The review also incorporates bibliometric insights, revealing the fragmented and uneven treatment of TD across the literature. The findings offer a unified conceptual foundation for future work and provide practical tools for both organizers and participants to systematically detect, interpret, and address technical debt in competitive AI settings, ultimately promoting more sustainable and trustworthy AI research environments.
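The abstract describes a questionnaire plus "a methodology for the quantitative evaluation of TD across multiple categories" but does not spell out the scoring itself. Below is a minimal sketch of what such a category-wise evaluation could look like, assuming Likert-scale questionnaire responses and stakeholder-assigned category weights; the category names, weights, and averaging scheme are illustrative assumptions, not the paper's published formula.

```python
# Illustrative sketch only: assumes Likert responses (1 = no debt, 5 = severe
# debt) grouped by TD category, with stakeholder-assigned weights. This is
# not the paper's published scoring methodology.
from dataclasses import dataclass


@dataclass
class CategoryAssessment:
    name: str             # TD category, e.g. "Accessibility Debt" (assumed)
    responses: list[int]  # Likert answers from the questionnaire
    weight: float         # stakeholder-assigned importance (assumed)


def category_score(a: CategoryAssessment) -> float:
    """Normalize the mean Likert response to [0, 1]."""
    mean = sum(a.responses) / len(a.responses)
    return (mean - 1) / 4  # maps 1..5 onto 0..1


def platform_td_index(assessments: list[CategoryAssessment]) -> float:
    """Weighted average of per-category scores: one overall debt figure."""
    total_weight = sum(a.weight for a in assessments)
    return sum(category_score(a) * a.weight for a in assessments) / total_weight


if __name__ == "__main__":
    survey = [
        CategoryAssessment("Accessibility Debt", [4, 3, 5], weight=2.0),
        CategoryAssessment("Data Debt", [2, 2, 3], weight=1.0),
    ]
    for a in survey:
        print(f"{a.name}: {category_score(a):.2f}")
    print(f"Overall TD index: {platform_td_index(survey):.2f}")
```

Per-category scores expose where debt concentrates (here, the assumed Accessibility Debt category scores 0.75 versus 0.33 for Data Debt), while the weighted index gives organizers a single comparable figure across platforms.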