126 results for "human-centered AI"
Exploring Human-centered AI Literacy Education: Interpretation and Insights from UNESCO's AI Competency Framework for Teachers and Students
[Purpose/Significance] Improving artificial intelligence (AI) literacy has emerged as a critical focus in global education, reflecting the growing significance of AI in today's society. This study aims to explore and interpret the core elements and key competencies articulated in the UNESCO AI Competency Framework for teachers and students, and to provide practical guidance for educators and policymakers, offering insights that can facilitate the systematic integration of AI literacy education. A comprehensive approach to AI education is needed to equip both students and teachers with the skills and knowledge necessary to navigate and thrive in an increasingly intelligent era. The results of this study are intended to support the formulation of effective pedagogical strategies, thereby contributing to the enhancement of AI literacy among educational stakeholders. [Method/Process] The study analyzes the preliminary policy foundations and background that led to the creation of the UNESCO AI Competency Framework. It then examines the content of both the AI Competency Framework for Students (AI CFS) and the AI Competency Framework for Teachers (AI CFT), focusing on key principles and framework structure to systematically interpret the frameworks' content. In particular, this study explores the policy context in which these frameworks were developed and examines how global educational goals and technological advances have influenced the articulation of AI competencies. By understanding the development and rationale behind the UNESCO AI Competency Framework, this study aims to provide a comprehensive overview that can support the development of effective AI literacy initiatives. It also highlights the connections between the intentions of the frameworks and the practical competencies required of educators and students, thereby contributing to a deeper understanding of how AI literacy can be meaningfully integrated into educational practice. [Results/Conclusions] Based on the experience provided by the competency framework and considering the current state of AI literacy education, this study offers insights and recommendations for developing AI literacy education in China from four perspectives: core values, policy refinement, practical application, and future implementation. Specifically, this study emphasizes that all educational stakeholders should work together to improve AI educational content and methods, and move toward a teacher-student-AI interaction model that empowers teachers, fosters student creativity, and integrates AI as a facilitator of personalized, flexible, and multidirectional learning. In terms of policy refinement, this study advocates for the creation of a supportive policy environment that addresses the unique challenges faced by educators and learners in the Chinese context. For practical application, the study provides actionable recommendations for integrating AI literacy into curricula, emphasizing project-based learning, hands-on experiences, and interdisciplinary approaches that foster a comprehensive understanding of AI concepts. Finally, in terms of future implementation, this study highlights the need for ongoing professional development for educators, as well as the establishment of assessment mechanisms to monitor and evaluate the effectiveness of AI literacy programs over time.
Human-Centered Design to Address Biases in Artificial Intelligence
The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but AI can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders, using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, stakeholders can help AI achieve its potential in health care.
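As a concrete illustration of auditing one of the life-cycle stages the perspective names (model evaluation), the hedged sketch below computes a true-positive-rate gap across patient subgroups; the labels, predictions, and group attribute are hypothetical, not data from the paper.

    # Illustrative sketch (not from the paper): auditing the evaluation stage by
    # comparing true-positive rates across hypothetical patient subgroups.
    import numpy as np

    def true_positive_rate(y_true, y_pred):
        positives = y_true == 1
        return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

    def tpr_gap(y_true, y_pred, group):
        """Largest difference in TPR across subgroup labels."""
        rates = [true_positive_rate(y_true[group == g], y_pred[group == g])
                 for g in np.unique(group)]
        return max(rates) - min(rates)

    # Hypothetical evaluation data: labels, predictions, and a subgroup attribute.
    y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
    group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    print(f"TPR gap across groups: {tpr_gap(y_true, y_pred, group):.2f}")

A persistent gap at this stage would prompt a return to earlier stages (data collection, annotation) rather than a post hoc fix, in line with the paper's whole-life-cycle framing.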
Digital Transformation in Smart Farm and Forest Operations Needs Human-Centered AI: Challenges and Future Directions
The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives comes from the great successes of artificial intelligence (AI), and in particular, the workhorse of AI, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable in the future. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and explain to an expert why a certain result was achieved. However, ML models are often sensitive to even small changes, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer/forester in the loop can often bring in experience and conceptual understanding to the AI pipeline—no AI can do this. Consequently, human-centered AI (HCAI) is a combination of “artificial intelligence” and “natural intelligence” to empower, amplify, and augment human performance, rather than replace people. To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. Achieving this goal will also require an agile, human-centered design approach across three generations (G). G1: Enabling easily realizable applications through immediate deployment of existing technology. G2: Medium-term modification of existing technology. G3: Advanced adaptation and evolution beyond state-of-the-art.
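A minimal sketch of the feature-level information fusion the article calls for might look as follows; the modalities, dimensions, and normalization choice are illustrative assumptions, not the authors' pipeline.

    # Minimal late-fusion sketch (illustrative): combine features from two
    # modalities (sensor readings and image embeddings) by normalizing each
    # block and concatenating before a downstream model.
    import numpy as np

    def zscore(x, eps=1e-8):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

    def fuse(sensor_feats, image_feats):
        """Feature-level fusion: per-modality normalization, then concatenation."""
        return np.concatenate([zscore(sensor_feats), zscore(image_feats)], axis=1)

    # Hypothetical batch: 4 field plots, 3 sensor features, 5 image-embedding dims.
    sensors = np.random.rand(4, 3)
    images = np.random.rand(4, 5)
    fused = fuse(sensors, images)
    print(fused.shape)  # (4, 8)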
Explainable AI improves task performance in human–AI collaboration
Artificial intelligence (AI) provides considerable opportunities to assist human work. However, one crucial challenge of human–AI collaboration is that many AI algorithms operate in a black-box manner, where how the AI makes its predictions remains opaque. This makes it difficult for humans to validate a prediction made by AI against their own domain knowledge. For this reason, we hypothesize that augmenting humans with explainable AI improves task performance in human–AI collaboration. To test this hypothesis, we implement explainable AI in the form of visual heatmaps in inspection tasks conducted by domain experts. Visual heatmaps have the advantage that they are easy to understand and help to localize relevant parts of an image. We then compare participants that were either supported by (a) black-box AI or (b) explainable AI, where the latter helps them follow AI predictions when the AI is accurate and overrule the AI when its predictions are wrong. We conducted two preregistered experiments with representative, real-world visual inspection tasks from manufacturing and medicine. The first experiment was conducted with factory workers from an electronics factory, who performed assessments of whether electronic products have defects. The second experiment was conducted with radiologists, who performed assessments of chest X-ray images to identify lung lesions. The results of our experiments with domain experts performing real-world tasks show that task performance improves when participants are supported by explainable AI with heatmaps instead of black-box AI. We find that explainable AI as a decision aid improved the task performance by 7.7 percentage points (95% confidence interval [CI]: 3.3% to 12.0%) in the manufacturing experiment and by 4.7 percentage points (95% CI: 1.1% to 8.3%) in the medical experiment compared to black-box AI. These gains represent a significant improvement in task performance.
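The abstract does not specify how its heatmaps are produced; occlusion sensitivity is one common way to build such visual heatmaps, and the sketch below (with a toy stand-in for the model's scoring function) conveys the idea of localizing the image regions a prediction depends on.

    # Illustrative occlusion-heatmap sketch (one common technique; not
    # necessarily the study's exact method). Sliding a gray patch over the
    # image and recording the drop in the model's confidence highlights the
    # regions the prediction depends on.
    import numpy as np

    def occlusion_heatmap(image, score_fn, patch=8):
        """score_fn maps an HxW image to a scalar confidence for the target class."""
        h, w = image.shape
        base = score_fn(image)
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = image.mean()  # gray out patch
                heat[i // patch, j // patch] = base - score_fn(occluded)
        return heat  # high values mark regions the score depends on

    # Hypothetical scorer: confidence driven by the brightness of one corner.
    toy_score = lambda img: img[:8, :8].mean()
    heat = occlusion_heatmap(np.random.rand(32, 32), toy_score)
    print(heat.round(2))

Overlaying such a map on the inspected product or X-ray is what lets an expert check whether the AI attended to a plausible region before following or overruling it.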
Socially situated artificial intelligence enables learning from human interaction
Regardless of how much data artificial intelligence agents have available, agents will inevitably encounter previously unseen situations in real-world deployments. Reacting to novel situations by acquiring new information from other people—socially situated learning—is a core faculty of human development. Unfortunately, socially situated learning remains an open challenge for artificial intelligence agents because they must learn how to interact with people to seek out the information that they lack. In this article, we formalize the task of socially situated artificial intelligence—agents that seek out new information through social interactions with people—as a reinforcement learning problem where the agent learns to identify meaningful and informative questions via rewards observed through social interaction. We manifest our framework as an interactive agent that learns how to ask natural language questions about photos as it broadens its visual intelligence on a large photo-sharing social network. Unlike active-learning methods, which implicitly assume that humans are oracles willing to answer any question, our agent adapts its behavior based on observed norms of which questions people are or are not interested in answering. Through an 8-month deployment where our agent interacted with 236,000 social media users, our agent improved its performance at recognizing new visual information by 112%. A controlled field experiment confirmed that our agent outperformed an active-learning baseline by 25.6%. This work advances opportunities for continuously improving artificial intelligence (AI) agents that better respect norms in open social environments.
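The paper formalizes this as a full reinforcement-learning problem; the drastically simplified bandit sketch below (question templates, rewards, and probabilities are hypothetical) conveys only the core update: estimate the value of each question type from observed social rewards, such as whether anyone answered.

    # Simplified sketch of learning which questions to ask from social rewards
    # (an epsilon-greedy bandit; the paper's actual formulation is richer).
    import random

    class QuestionBandit:
        def __init__(self, question_types, epsilon=0.1):
            self.values = {q: 0.0 for q in question_types}
            self.counts = {q: 0 for q in question_types}
            self.epsilon = epsilon

        def choose(self):
            if random.random() < self.epsilon:            # explore
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)  # exploit

        def update(self, question, reward):
            self.counts[question] += 1
            n = self.counts[question]
            self.values[question] += (reward - self.values[question]) / n

    # Hypothetical templates and a toy norm: people mostly answer "what" questions.
    agent = QuestionBandit(["what_is_this", "where_taken", "why_posted"])
    for _ in range(200):
        q = agent.choose()
        reward = 1.0 if q == "what_is_this" and random.random() < 0.6 else 0.0
        agent.update(q, reward)
    print(agent.values)

The key contrast with active learning shows up in the reward: unanswered questions earn nothing, so the agent drifts toward questions people actually want to answer rather than treating humans as oracles.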
Guest Editorial: Precision Education - A New Challenge for AI in Education
As addressed by Stephen Yang in his ICCE 2019 keynote speech (Yang, 2019), precision education is a new challenge when applying artificial intelligence (AI), machine learning, and learning analytics to improve teaching quality and learning performance. The goal of precision education is to identify at-risk students as early as possible and provide timely intervention on the basis of teaching and learning experiences (Lu et al., 2018). Drawing from this main theme of precision education, this special issue advocates an in-depth dialogue between cold technology and warm humanity, in turn offering greater understanding of precision education. For this special issue, thirteen research papers specializing in precision education, AI, machine learning, and learning analytics were gathered, offering in-depth research experiences concerning various applications, methods, pedagogical models, and environments, to achieve a better understanding of the application of AI in education.
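A hedged sketch of the early-warning idea behind precision education follows; the features, data, and model choice are hypothetical stand-ins, not a system described in this issue.

    # Illustrative early-warning sketch: train a classifier on early-course
    # activity features to flag students at risk. All feature names and data
    # here are hypothetical; real systems use institutional learning analytics.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per student: [logins_week1, quiz1_score, forum_posts]
    X = [[12, 0.9, 5], [2, 0.4, 0], [8, 0.7, 3], [1, 0.3, 1],
         [10, 0.8, 4], [3, 0.5, 0], [9, 0.85, 2], [0, 0.2, 0]]
    y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = became at-risk later in the course

    model = LogisticRegression().fit(X, y)
    # Flag new students whose predicted risk exceeds a threshold chosen for recall,
    # so that timely intervention can follow.
    risk = model.predict_proba([[4, 0.45, 1]])[0][1]
    print(f"predicted risk: {risk:.2f}")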
Challenges of responsible AI in practice: scoping review and recommended actions
Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. Afterwards, we introduce a number of approaches to RAI from a range of disciplines, exploring their potential as solutions to the identified challenges. We anchor these solutions in practice through concrete examples, bridging the gap between the theoretical considerations of RAI and on-the-ground processes that currently shape how AI systems are built. Our work considers the socio-technical nature of RAI limitations and the resulting necessity of producing socio-technical solutions.
A Data-Centric HitL Framework for Conducting a Systematic Error Analysis of NLP Datasets using Explainable AI
Interest in data-centric AI has been growing recently. As opposed to model-centric AI, data-centric approaches aim at iteratively and systematically improving the data throughout the model life cycle rather than in a single pre-processing step. The merits of such an approach have not been fully explored on NLP datasets. Particular interest lies in how error analysis, a crucial step in data-centric AI, manifests itself in NLP. X-Deep, a Human-in-the-Loop framework designed to debug an NLP dataset using Explainable AI techniques, is proposed to uncover data problems related to a given task. Our case study addresses emotion detection in Arabic text. Using the framework, a thorough analysis of misclassified instances was conducted for four classifiers (Naive Bayes, Logistic Regression, GRU, and MARBERT), leveraging two Explainable AI techniques, LIME and SHAP. The systematic process resulted in identifying spurious correlations, bias patterns, and other anomaly patterns in the dataset. Appropriate mitigation strategies are suggested for an informed and improved data augmentation plan for performing emotion detection tasks on this dataset.
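To illustrate the kind of instance-level inspection such a framework performs, the sketch below applies LIME to one text prediction; the corpus, labels, and classifier pipeline are hypothetical English stand-ins, not the paper's Arabic emotion dataset or models.

    # Illustrative LIME inspection of a (possibly misclassified) text instance,
    # in the spirit of the error-analysis step; data and pipeline are toy stand-ins.
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["i am so happy today", "this is terrible news", "what a joyful day",
             "i feel awful and sad", "great wonderful morning", "bad horrible day"]
    labels = [1, 0, 1, 0, 1, 0]  # 1 = positive emotion, 0 = negative

    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipe.fit(texts, labels)

    # Which tokens drove the score? High-weight tokens unrelated to emotion
    # would hint at spurious correlations in the data.
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    exp = explainer.explain_instance("happy but terrible day",
                                     pipe.predict_proba, num_features=4)
    print(exp.as_list())  # (token, weight) pairs

Aggregating such per-instance attributions over all misclassified examples is what turns a one-off explanation into the systematic error analysis the paper describes.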
Keeping the organization in the loop: a socio-technical extension of human-centered artificial intelligence
The human-centered AI approach posits a future in which the work done by humans and machines will become ever more interactive and integrated. This article takes human-centered AI one step further. It argues that the integration of human and machine intelligence is achievable only if human organizations—not just individual human workers—are kept “in the loop.” We support this argument with evidence from two case studies in the area of predictive maintenance, through which we show how organizational practices are needed and shape the use of AI/ML. Specifically, organizational processes and outputs such as decision-making workflows directly influence how AI/ML affects the workplace, and they are crucial for answering our first and second research questions, which address the pre-conditions for keeping humans in the loop and for supporting continuous and reliable functioning of AI-based socio-technical processes. From the empirical cases, we extrapolate a concept of “keeping the organization in the loop” that integrates four different kinds of loops: AI use, AI customization, AI-supported original tasks, and taking contextual changes into account. The analysis culminates in a systematic framework for keeping the organization in the loop, based on interacting organizational practices.
An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives
In a wide range of industries and academic fields, artificial intelligence is becoming increasingly prevalent. AI models are taking on more crucial decision-making tasks as they grow in popularity and performance. Although AI models, particularly machine learning models, are successful in research, they have numerous limitations and drawbacks in practice. Furthermore, due to the lack of transparency behind their behavior, users lack an understanding of how these models make specific decisions, especially in the case of complex state-of-the-art machine learning algorithms. Complex machine learning systems utilize less transparent algorithms, thereby exacerbating the problem. This survey analyzes the significance and evolution of explainable AI (XAI) research across various domains and applications. Throughout this study, a rich repository of explainability classifications and summaries has been developed, along with their applications and practical use cases. We believe this study will make it easier for researchers to understand all explainability methods and access their applications simultaneously.