61 results for "Explainable AI in education"
Practical early prediction of students’ performance using machine learning and eXplainable AI
Predicting students’ performance in advance could help support the learning process; if “at-risk” students can be identified early on, educators can provide them with the necessary educational support. Despite this potential advantage, the technology for predicting students’ performance has not been widely used in education due to practical limitations. We propose a practical method to predict students’ performance in the educational environment using machine learning and explainable artificial intelligence (XAI) techniques. We conducted qualitative research to ascertain the perspectives of educational stakeholders. Twelve people, including educators, parents of K-12 students, and policymakers, participated in a focus group interview. The initial practical features were chosen based on the participants’ responses, and a final version of the practical features was then selected through correlation analysis. In addition, to verify whether at-risk students could be distinguished using the selected features, we experimented with various machine learning algorithms: Logistic Regression, Decision Tree, Random Forest, Multi-Layer Perceptron, Support Vector Machine, XGBoost, LightGBM, VTC, and STC. In this experiment, Logistic Regression showed the best overall performance. Finally, information intended to help each student was presented visually using an XAI technique.
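As a rough illustration only (not the authors' code), the model-comparison step this abstract describes could look like the following Python sketch, which uses synthetic data in place of the study's practical features and covers only a subset of the listed classifiers:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the study's practical features and at-risk labels.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "Random Forest": RandomForestClassifier(random_state=0),
    }
    for name, model in models.items():
        # 5-fold cross-validated F1, a reasonable metric for an imbalanced at-risk class.
        scores = cross_val_score(model, X, y, cv=5, scoring="f1")
        print(f"{name}: mean F1 = {scores.mean():.3f}")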
Explainable artificial intelligence for predictive modeling of student stress in higher education
Student stress in higher education remains a pervasive problem, yet many institutions lack affordable, scalable, and interpretable tools for its detection and management. Existing methods frequently depend on costly physiological sensors and opaque machine learning models, limiting their applicability in resource-constrained settings. The objective of this research is to develop a cost-effective, survey-based stress classification model using multiple machine learning algorithms and eXplainable Artificial Intelligence (XAI) to support transparent and actionable decision-making in educational environments. Drawing on a dataset of university students, the research applies a supervised machine learning pipeline to classify stress levels and identify key contributing variables. Six classification algorithms—Logistic Regression, Support Vector Machine (SVM), Decision Tree, Random Forest, Gradient Boosting, and XGBoost—were employed and optimized using grid search and cross-validation for hyperparameter tuning. Evaluation metrics included precision, recall, F1-score, and overall accuracy. The Random Forest model achieved the highest classification accuracy of 0.89, followed by XGBoost at 0.87, Gradient Boosting at 0.85, Decision Tree at 0.83, SVM at 0.82, and Logistic Regression at 0.81. SHAP (SHapley Additive exPlanations) analysis was conducted to interpret model predictions and rank feature importance. The analysis revealed five principal predictors: blood pressure, perceived safety, sleep quality, teacher-student relationship, and participation in extracurricular activities. Results demonstrate that both physiological indicators and psychosocial conditions contribute meaningfully to stress prediction. The study concludes that institutional interventions targeting health monitoring, campus safety, behavioral support, relational pedagogy, and extracurricular engagement can effectively mitigate student stress. These findings provide an empirical foundation for the development of integrated policies in higher education aimed at promoting student well-being.
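A hedged Python sketch of the kind of pipeline this abstract outlines: synthetic survey-style features stand in for the study's data, Random Forest is the grid-searched model, and SHAP ranks the inputs. None of the feature names or parameter values below come from the paper.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    # Synthetic stand-in for the survey responses used in the study.
    X, y = make_classification(n_samples=600, n_features=10, random_state=42)
    X = pd.DataFrame(X, columns=[f"survey_item_{i}" for i in range(10)])
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Grid search with cross-validation, as the abstract describes.
    grid = GridSearchCV(
        RandomForestClassifier(random_state=42),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
        cv=5,
        scoring="accuracy",
    )
    grid.fit(X_train, y_train)
    print("test accuracy:", grid.score(X_test, y_test))

    # SHAP values rank which inputs drive the stress predictions.
    sv = shap.TreeExplainer(grid.best_estimator_).shap_values(X_test)
    if isinstance(sv, list):      # older shap versions: one array per class
        sv = sv[1]
    elif sv.ndim == 3:            # newer shap versions: (samples, features, classes)
        sv = sv[:, :, 1]
    importance = np.abs(sv).mean(axis=0)
    for name, score in sorted(zip(X_test.columns, importance), key=lambda t: -t[1]):
        print(f"{name}: mean |SHAP| = {score:.4f}")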
Neuro-symbolic synergy in education: a survey of LLM-knowledge graph integration for explainable reasoning and emotion-aware student support
This article presents a structured survey of recent approaches integrating Large Language Models (LLMs) and Knowledge Graphs (KGs) in education, with a dual focus on explainable reasoning and emotion-aware student support. The objective is to assess how neuro-symbolic architectures and affective computing enhance both transparency and learner well-being in AI-driven tutoring systems. A multimodal literature review was conducted, combining keyword-based searches across Scopus, IEEE Xplore, and SpringerLink from 2020 to 2024, with inclusion criteria focusing on studies addressing LLM explainability, KG integration, and affective adaptation in education. The selected papers were analyzed using a three-axis framework: (1) technological synergy (LLM–KG–Affective AI), (2) evaluation metrics (Pedagogical Alignment Score, Anxiety Reduction Index, Scaffolding Perplexity Divergence), and (3) equity and explainability gaps. Results reveal that hybrid systems improve interpretability, engagement, and personalization, but remain limited by the absence of metacognitive modeling and standardized affective benchmarks. Practical implications include actionable strategies for developing transparent, stress-aware, and ethically grounded tutoring systems, such as emotion-adaptive scaffolding, blockchain-based validation of explanations, and federated learning for privacy-preserving personalization.
Metaverse in Healthcare Integrated with Explainable AI and Blockchain: Enabling Immersiveness, Ensuring Trust, and Providing Patient Data Security
Digitization and automation have always had an immense impact on healthcare, a field that embraces every new and advanced technology. Recently, the world has witnessed the rise of the metaverse, an emerging technology in digital space. The metaverse has huge potential to provide a plethora of health services seamlessly to patients and medical professionals with an immersive experience. This paper proposes the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience. Our proposed architecture consists of three environments: the doctor’s environment, the patient’s environment, and the metaverse environment. The doctors and patients interact in the metaverse environment, assisted by blockchain technology, which ensures the safety, security, and privacy of data. The metaverse environment is the main part of our proposed architecture. Doctors, patients, and nurses enter this environment by registering on the blockchain and are represented by avatars in the metaverse environment. All consultation activities between the doctor and the patient are recorded, and the data, i.e., images, speech, text, videos, clinical data, etc., are gathered, transferred, and stored on the blockchain. These data are used for disease prediction and diagnosis by explainable artificial intelligence (XAI) models. The Grad-CAM and LIME approaches of XAI provide logical reasoning for the prediction of diseases and ensure trust, explainability, interpretability, and transparency regarding the diagnosis and prediction of diseases. Blockchain technology provides data security for patients while enabling transparency, traceability, and immutability regarding their data. These features of blockchain ensure trust among patients regarding their data. Consequently, the proposed architecture ensures transparency and trust regarding both the diagnosis of diseases and the data security of the patient. We also explore the building-block technologies of the metaverse and investigate the advantages and challenges of a metaverse in healthcare.
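For the XAI step mentioned above, a minimal, illustrative LIME example on tabular clinical data might look like the sketch below; the dataset and model are placeholders rather than the paper's system, and the Grad-CAM path for medical images is omitted.

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder clinical dataset and diagnostic model.
    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )
    # Explain one patient's prediction: which clinical features pushed the
    # model toward its diagnosis, and by how much.
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    print(explanation.as_list())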
Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
Interpretable Dropout Prediction: Towards XAI-Based Personalized Intervention
Student dropout is one of the most burning issues in STEM higher education, which induces considerable social and economic costs. Using machine learning tools for the early identification of students at risk of dropping out has gained a lot of interest recently. However, there has been little discussion of dropout prediction using interpretable machine learning (IML) and explainable artificial intelligence (XAI) tools. In this work, using the data of a large public Hungarian university, we demonstrate how IML and XAI tools can support educational stakeholders in dropout prediction. We show that complex machine learning models – such as the CatBoost classifier – can efficiently identify at-risk students relying solely on pre-enrollment achievement measures; however, they lack interpretability. Applying IML tools such as permutation importance (PI), partial dependence plots (PDP), LIME, and SHAP values, we demonstrate how the predictions can be explained both globally and locally. Explaining individual predictions opens up great opportunities for personalized intervention, for example by offering the right remedial courses or tutoring sessions. Finally, we present the results of a user study that evaluates whether higher education stakeholders find these tools interpretable and useful.
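A small sketch, on placeholder data rather than the university's records, of the global-explanation step described here: a CatBoost dropout model inspected with permutation importance, one of the IML tools the abstract lists.

    from catboost import CatBoostClassifier
    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Stand-in for pre-enrollment achievement measures and dropout labels.
    X, y = make_classification(n_samples=1000, n_features=6, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    model = CatBoostClassifier(iterations=200, verbose=0).fit(X_train, y_train)

    # Permutation importance: how much test accuracy drops when each feature is shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")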
Mapping the landscape of ethical considerations in explainable AI research
With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution endeavors to scrutinize the relationship between XAI and ethical considerations. By systematically reviewing research papers mentioning ethical terms in XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often superficial engagement with ethical theories, with a tendency to acknowledge the importance of ethics yet treat it as a monolithic and uncontextualized concept. Our findings suggest a pressing need for a more nuanced and comprehensive integration of ethics in XAI research and practice. To support this, we propose to critically reconsider transparency and explainability with regard to ethical considerations during XAI system design, while accounting for ethical complexity in practice. As future research directions, we point to the promotion of interdisciplinary collaboration and education, including for underrepresented ethical perspectives. Such ethical grounding can guide the design of ethically robust XAI systems, aligning technical advancements with ethical considerations.
Learning analytics dashboard: a tool for providing actionable insights to learners
This study investigates current approaches to learning analytics (LA) dashboarding while highlighting challenges faced by education providers in their operationalization. We analyze recent dashboards for their ability to provide actionable insights that prompt informed responses by learners in adjusting their learning habits. Our study finds that most LA dashboards merely employ surface-level descriptive analytics, while only a few go beyond this and use predictive analytics. In response to the identified gaps in recently published dashboards, we propose a state-of-the-art dashboard that not only leverages descriptive analytics components but also integrates machine learning in a way that enables both predictive and prescriptive analytics. We demonstrate how emerging analytics tools can be used to enable learners to adequately interpret the predictive model’s behavior, and more specifically to understand how a predictive model arrives at a given prediction. We highlight how these capabilities build trust and satisfy emerging regulatory requirements surrounding predictive analytics. Additionally, we show how data-driven prescriptive analytics can be deployed within dashboards to provide concrete advice to learners and thereby increase the likelihood of triggering behavioral changes. Our proposed dashboard is the first of its kind in terms of the breadth of analytics it integrates, and it is currently deployed for trials at a higher education institution.
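As an illustration of how predictive and prescriptive analytics can be paired in such a dashboard (an assumption-laden sketch, not the deployed system), one simple pattern is a what-if probe that asks which behavioral change most raises a learner's predicted pass probability; the feature names below are invented.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Placeholder features standing in for learning-activity indicators.
    feature_names = ["logins_per_week", "videos_watched", "forum_posts", "quiz_attempts"]
    X, y = make_classification(n_samples=800, n_features=4, random_state=7)
    model = GradientBoostingClassifier(random_state=7).fit(X, y)

    learner = X[0].copy()
    baseline = model.predict_proba(learner.reshape(1, -1))[0, 1]
    for i, name in enumerate(feature_names):
        probe = learner.copy()
        probe[i] += X[:, i].std()   # simulate a one-std-dev increase in this activity
        gain = model.predict_proba(probe.reshape(1, -1))[0, 1] - baseline
        print(f"Increase {name}: predicted pass probability changes by {gain:+.3f}")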
Educing AI-Thinking in Science, Technology, Engineering, Arts, and Mathematics (STEAM) Education
In science, technology, engineering, arts, and mathematics (STEAM) education, artificial intelligence (AI) analytics are useful as educational scaffolds to educe (draw out) the students’ AI-Thinking skills in the form of AI-assisted human-centric reasoning for the development of knowledge and competencies. This paper demonstrates how STEAM learners, rather than computer scientists, can use AI to predictively simulate how concrete mixture inputs might affect the resulting compressive strength under different conditions (e.g., a lack of water and/or cement, or different concrete compressive strengths required for art creations). To help STEAM learners envision how AI can assist them in human-centric reasoning, two AI-based approaches are illustrated: first, a Naïve Bayes approach for supervised machine learning of the dataset, which assumes no direct relations between the mixture components; and second, a semi-supervised Bayesian approach to machine-learn the same dataset for possible relations between the mixture components. These AI-based approaches enable controlled experiments to be conducted in silico, where selected parameters can be held constant while others are changed to simulate hypothetical “what-if” scenarios. Applying AI to think discursively educes AI-Thinking from the STEAM learners, thereby improving their AI literacy, which in turn enables them to ask better questions to solve problems.
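The first approach the abstract describes, a Naïve Bayes model that assumes no direct relations between mixture components, can be sketched as follows with synthetic mixture data (not the paper's dataset) and an in-silico what-if query about reducing the water content.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    # Toy mixture features: cement, water, aggregate (kg/m^3).
    X = rng.uniform([200, 120, 800], [500, 220, 1100], size=(500, 3))
    # Toy label: "high strength" when cement is high relative to water.
    y = (X[:, 0] / X[:, 1] > 2.2).astype(int)

    # Gaussian Naive Bayes treats the mixture components as conditionally independent.
    model = GaussianNB().fit(X, y)

    # What-if scenario: hold cement and aggregate constant, reduce the water content.
    base_mix = np.array([[350.0, 180.0, 950.0]])
    drier_mix = np.array([[350.0, 150.0, 950.0]])
    print("P(high strength), base mix  :", model.predict_proba(base_mix)[0, 1])
    print("P(high strength), less water:", model.predict_proba(drier_mix)[0, 1])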
Integrating Explainable Artificial Intelligence in Extended Reality Environments: A Systematic Survey
The integration of Artificial Intelligence (AI) within Extended Reality (XR) technologies has the potential to revolutionize user experiences by creating more immersive, interactive, and personalized environments. Nevertheless, the complexity and opacity of AI systems raise significant concerns regarding the transparency of the data handling, reasoning processes, and decision-making mechanisms inherent in these technologies. To address these challenges, the implementation of explainable AI (XAI) methods and techniques becomes imperative, as they not only ensure compliance with prevailing ethical, social, and legal standards, norms, and principles, but also foster user trust and facilitate the broader adoption of AI solutions in XR applications. Despite the growing interest from both research and practitioner communities in this area, there is an important gap in the literature concerning a review of XAI methods specifically applied and tailored to XR systems. To this end, this research presents a systematic literature review that synthesizes current research on XAI approaches applied within the XR domain. It aims to identify prevailing trends, assess the effectiveness of various XAI techniques, and highlight potential avenues for future research. It thereby contributes to the foundational understanding necessary for developing transparent and trustworthy AI for XR systems using XAI technologies, while enhancing the user experience and promoting responsible AI deployment.