Catalogue Search | MBRL
20 result(s) for "Jin, Yueqiao"
From comic panels to clinical practice: data comics as a learning analytics tool in nursing simulation
by Tsai, Yi-Shan; Alfredo, Riordan; Martinez-Maldonado, Roberto
in Accessibility; Beliefs; Cartoons
2026
In healthcare education, it is important for nursing students to be able to reflect on their performance in high-fidelity clinical simulations in order to develop key skills. Learning Analytics (LA) offers opportunities for data-driven reflection by providing visual representations of educational experiences. While many LA tools rely on data visualisations to communicate insights, these are often difficult for students to interpret, limiting their effectiveness. Despite these challenges, there is limited research exploring alternative and potentially more accessible formats—such as data comics, a narrative visualisation technique that integrates data with the structure of traditional comic strips—to represent and communicate insights from learner data in a more engaging way. This study addresses that gap through a qualitative analysis of nursing students’ perceptions of data comics as reflective tools, focusing on: (i) support for student reflection, (ii) advantages and limitations, and (iii) concerns about their use in healthcare education. Third-year nursing students who participated in a simulation were interviewed and asked to reflect on personalised data comic prototypes generated from their multimodal data using a mix of human input and AI methods. The results indicated that while data comics present an engaging and accessible form of reflective visualisation, considerations need to be made regarding the designs to ensure that they are appropriate for the target audience and do not oversimplify the simulation experience. These findings indicate that data comics should not act as a replacement for conventional visualisations but rather serve as supplementary material to communicate contextual information or aid in interpretation of visualisations.
Journal Article
Co-designing AI-powered learning analytics: bringing students and teachers together
by Alfredo, Riordan; Fan, Jie Xiang; Martinez-Maldonado, Roberto
in Artificial intelligence; Chatbots; Classroom communication
2025
There is a growing interest in involving students and teachers in the design of human-centered Learning Analytics (LA) systems to align them with authentic learning needs. Yet, limited prior research has explored the implications of integrating both students’ and teachers’ perspectives within a structured co-design process. To address this shortcoming in the literature, we report on a study that examined how undergraduate nursing students and teachers co-designed an AI-powered LA system to support post-debriefing reflection on teamwork and communication in the context of healthcare simulation. This qualitative study, using a co-design approach, examined the design process of an LA system from conceptualization to post-use evaluation. The study addressed two key questions:
(i) What tensions emerge from the contrasting perspectives of students and teachers in the co-design of an AI-powered LA system? and (ii) How do students and teachers perceive their joint participation in the co-design process?
Three key design tension themes emerged from the contrasting perspectives of students and teachers: teaching–learning goals tension, privacy–utility tension, and human-AI guidance preferences tension. The collaborative design process revealed mutual benefits: students valued teachers’ guidance in refining ideas and aligning system goals with learning objectives, while teachers, initially cautious about student involvement, came to see co-design as an opportunity to empower students and deepen their own understanding of responsible data use in practice. These findings contribute to the broader understanding of co-design dynamics in educational technology, underscoring the importance of balanced stakeholder involvement in developing practical, context-aware LA systems.
Journal Article
Emergent Learner Agency in Implicit Human-AI Collaboration: How AI Personas Reshape Creative-Regulatory Interaction
by Martinez-Maldonado, Roberto; Yan, Lixiang; Jin, Yueqiao
in Clustering; Cognitive load; Collaboration
2025
Generative AI is increasingly embedded in collaborative learning, yet little is known about how AI personas shape learner agency when AI teammates are present but not disclosed. This mechanism study examines how supportive and contrarian AI personas reconfigure emergent learner agency, discourse patterns, and experiences in implicit human-AI creative collaboration. A total of 224 university students were randomly assigned to 97 online triads in one of three conditions: human-only control, hybrid teams with a supportive AI, or hybrid teams with a contrarian AI. Participants completed an individual-group-individual movie-plot writing task; the 10-minute group chat was coded using a creative-regulatory framework. We combined transition network analysis, theory-driven sequential pattern mining, and Gaussian mixture clustering to model structural, temporal, and profile-level manifestations of agency, and linked these to cognitive load, psychological safety, teamwork satisfaction, and embedding-based creative performance. Contrarian AI produced challenge- and reflection-rich discourse structures and motifs indicating productive friction, whereas supportive AI fostered agreement-centred trajectories and smoother convergence. Clustering showed AI agents concentrated in challenger profiles, with reflective regulation uniquely human. While no systematic differences emerged in cognitive load or creative gains, contrarian AI consistently reduced teamwork satisfaction and psychological safety. The findings reveal a design tension between leveraging cognitive conflict and maintaining affective safety and ownership in hybrid human-AI teams.
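The profile-level step described above (Gaussian mixture clustering over discourse features) can be sketched with an off-the-shelf mixture model; the feature set and all values below are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-participant discourse-move rates (challenge,
# agreement, reflection) -- values invented, not the study's data.
rng = np.random.default_rng(0)
challenger_like = rng.normal([0.6, 0.1, 0.2], 0.05, size=(30, 3))
agreement_like = rng.normal([0.1, 0.7, 0.1], 0.05, size=(30, 3))
X = np.vstack([challenger_like, agreement_like])

# Fit a two-profile Gaussian mixture and read off soft memberships,
# analogous to profile-level clustering of emergent agency.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
probs = gmm.predict_proba(X)  # responsibilities; each row sums to 1
```

The soft memberships, rather than hard labels, are what let such analyses talk about how strongly each participant (human or AI) loads on a profile.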
Human-Centred Learning Analytics and AI in Education: a Systematic Literature Review
by Martinez-Maldonado, Roberto; Yan, Lixiang; Jin, Yueqiao
in Artificial intelligence; Automation; Education
2023
The rapid expansion of Learning Analytics (LA) and Artificial Intelligence in Education (AIED) offers new scalable, data-intensive systems but also raises concerns about data privacy and agency. Excluding stakeholders -- like students and teachers -- from the design process can potentially lead to mistrust and inadequately aligned tools. Despite a shift towards human-centred design in recent LA and AIED research, there remain gaps in our understanding of the importance of human control, safety, reliability, and trustworthiness in the design and implementation of these systems. We conducted a systematic literature review to explore these concerns and gaps. We analysed 108 papers to provide insights about i) the current state of human-centred LA/AIED research; ii) the extent to which educational stakeholders have contributed to the design process of human-centred LA/AIED systems; iii) the current balance between human control and computer automation of such systems; and iv) the extent to which safety, reliability and trustworthiness have been considered in the literature. Results indicate some consideration of human control in LA/AIED system design, but limited end-user involvement in actual design. Based on these findings, we recommend: 1) carefully balancing stakeholders' involvement in designing and deploying LA/AIED systems throughout all design phases, 2) actively involving target end-users, especially students, to delineate the balance between human control and automation, and 3) exploring safety, reliability, and trustworthiness as principles in future human-centred LA/AIED systems.
GLAT: The Generative AI Literacy Assessment Test
by Martinez-Maldonado, Roberto; Yan, Lixiang; Jin, Yueqiao
in Education; Generative artificial intelligence; Literacy
2024
The rapid integration of generative artificial intelligence (GenAI) technology into education necessitates precise measurement of GenAI literacy to ensure that learners and educators possess the skills to engage with and critically evaluate this transformative technology effectively. Existing instruments often rely on self-reports, which may be biased. In this study, we present the GenAI Literacy Assessment Test (GLAT), a 20-item multiple-choice instrument developed following established procedures in psychological and educational measurement. Structural validity and reliability were confirmed with responses from 355 higher education students using classical test theory and item response theory, resulting in a reliable 2-parameter logistic (2PL) model (Cronbach's alpha = 0.80; omega total = 0.81) with a robust factor structure (RMSEA = 0.03; CFI = 0.97). Critically, GLAT scores were found to be significant predictors of learners' performance in GenAI-supported tasks, outperforming self-reported measures such as perceived ChatGPT proficiency and demonstrating external validity. These results suggest that GLAT offers a reliable and valid method for assessing GenAI literacy, with the potential to inform educational practices and policy decisions that aim to enhance learners' and educators' GenAI literacy, ultimately equipping them to navigate an AI-enhanced future.
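The 2-parameter logistic (2PL) model reported for GLAT has a standard item response function; the sketch below computes it directly. The parameter values are illustrative, not GLAT's calibrated items.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that a learner with
    ability theta answers correctly, given item discrimination a
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a;
# larger a makes the curve steeper around the difficulty point.
```

Item response theory calibration then estimates each item's (a, b) from response patterns, which is what makes the resulting scores comparable across item subsets.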
The Agency Gap: How Generative AI Literacy Shapes Independent Writing after AI Support
by Martinez-Maldonado, Roberto; Yang, Kaixun; Yan, Lixiang
in Chatbots; Data integration; Generative artificial intelligence
2025
Generative AI (GenAI) tools are rapidly transforming higher education, yet little is known about how students' GenAI literacy shapes their ability to perform independently once such support is removed. This study investigates what we term the agency gap, introduced as the extent to which GenAI literacy predicts student writing performance in contexts that require self-initiation and regulation. Seventy-nine medical and nursing students completed multimodal academic writing tasks based on visual data, supported either by a reactive or proactive GenAI chatbot, followed by a parallel task without AI support. Writing was evaluated across insightfulness, visual data integration, organisation, linguistic quality, and critical thinking. Results showed that GenAI literacy predicted independent writing performance only in the reactive condition, where students had to actively mobilise their own strategies. Mediation analyses revealed no indirect effect via in-task performance, indicating that GenAI literacy acts as a direct, task-general competence rather than a proxy for domain knowledge or other literacies. By contrast, proactive scaffolding equalised outcomes across literacy levels, reducing reliance on learners' GenAI literacy. The agency gap highlights when GenAI literacy matters most, with implications for designing equitable AI-supported learning environments that both leverage and mitigate differences in students' GenAI literacy.
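The mediation finding above (no indirect effect via in-task performance) follows the product-of-coefficients logic, which can be illustrated on simulated data. All variable names and effect sizes here are invented, not the study's estimates; the X-to-M path is deliberately set to zero.

```python
import numpy as np

# Simulated data mirroring the reported structure: GenAI literacy (X),
# in-task performance (M), independent writing (Y).
rng = np.random.default_rng(1)
n = 500
literacy = rng.normal(size=n)
in_task = rng.normal(size=n)  # mediator unrelated to literacy (a = 0)
writing = 0.5 * literacy + 0.3 * in_task + rng.normal(size=n)

def ols_slopes(y, X):
    """Least-squares coefficients of y on the columns of X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

a = ols_slopes(in_task, literacy)[0]                    # X -> M path
bm = ols_slopes(writing, np.column_stack([in_task, literacy]))
b, direct = bm[0], bm[1]   # M -> Y controlling X; direct X -> Y path
indirect = a * b           # product-of-coefficients indirect effect
```

Because the simulated X-to-M path is zero, the indirect effect vanishes while the direct effect remains, the same pattern that leads to an interpretation of GenAI literacy as a direct, task-general competence.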
When Machines Join the Moral Circle: The Persona Effect of Generative AI Agents in Collaborative Reasoning
by Martinez-Maldonado, Roberto; Zheng, Mingmin; Gasevic, Dragan
in Agents (artificial intelligence); Collaborative learning; Colleges & universities
2026
Generative AI is increasingly positioned as a peer in collaborative learning, yet its effects on ethical deliberation remain unclear. We report a between-subjects experiment with university students (N=217) who discussed an autonomous-vehicle dilemma in triads under three conditions: human-only control, supportive AI teammate, or contrarian AI teammate. Using moral foundations lexicons, argumentative coding from the argumentative knowledge construction framework, semantic trajectory modelling with BERTopic and dynamic time warping, and epistemic network analysis, we traced how AI personas reshape moral discourse. Supportive AIs increased grounded/qualified claims relative to control, consolidating integrative reasoning around care/fairness, while contrarian AIs modestly broadened moral framing and sustained value pluralism. Both AI conditions reduced thematic drift compared with human-only groups, indicating more stable topical focus. Post-discussion justification complexity was only weakly predicted by moral framing and reasoning quality, and shifts in final moral decisions were driven primarily by participants' initial stance rather than condition. Overall, AI teammates altered the process (the distribution and connection of moral frames and argument quality) more than the outcome of moral choice, highlighting the potential of generative AI agents as teammates for eliciting reflective, pluralistic moral reasoning in collaborative learning.
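Dynamic time warping, used above for semantic trajectory modelling, is a standard dynamic program; a minimal 1-D sketch (not the paper's implementation) follows.

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic time warping distance between two 1-D sequences via
    the standard O(len(s) * len(t)) dynamic program."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # extend the cheapest of match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Because warping can repeat elements, a sequence and a time-stretched copy of it score 0, which is what lets discussion trajectories unfolding at different speeds be compared for thematic drift.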
Agentic AI as Undercover Teammates: Argumentative Knowledge Construction in Hybrid Human-AI Collaborative Learning
by Han, Xibin; Martinez-Maldonado, Roberto; Guan, Xiu
in Agentic artificial intelligence; Agents (artificial intelligence); Collaborative learning
2025
Generative artificial intelligence (AI) agents are increasingly embedded in collaborative learning environments, yet their impact on the processes of argumentative knowledge construction remains insufficiently understood. Emerging conceptualisations of agentic AI and artificial agency suggest that such systems possess bounded autonomy, interactivity, and adaptability, allowing them to engage as epistemic participants rather than mere instructional tools. Building on this theoretical foundation, the present study investigates how agentic AI, designed as undercover teammates with either supportive or contrarian personas, shapes the epistemic and social dynamics of collaborative reasoning. Drawing on Weinberger and Fischer's (2006) four-dimensional framework (participation, epistemic reasoning, argument structure, and social modes of co-construction), we analysed synchronous discourse data from 212 human and 64 AI participants (92 triads) engaged in an analytical problem-solving task. Mixed-effects and epistemic network analyses revealed that AI teammates maintained balanced participation but substantially reorganised epistemic and social processes: supportive personas promoted conceptual integration and consensus-oriented reasoning, whereas contrarian personas provoked critical elaboration and conflict-driven negotiation. Epistemic adequacy, rather than participation volume, predicted individual learning gains, indicating that agentic AI's educational value lies in enhancing the quality and coordination of reasoning rather than amplifying discourse quantity. These findings extend CSCL theory by conceptualising agentic AI as epistemic and social participants: bounded yet adaptive collaborators that redistribute cognitive and argumentative labour in hybrid human-AI learning environments.
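Epistemic network analysis, used above, builds on counting co-occurrences of discourse codes within a moving stanza window before normalising and projecting them. A simplified sketch of that counting step (the code labels and window size are invented for illustration):

```python
from collections import Counter
from itertools import combinations

def code_cooccurrence(coded_turns, window=3):
    """Count pairwise co-occurrence of discourse codes within a moving
    window of turns -- the adjacency that epistemic network analysis
    normalises and projects (simplified sketch)."""
    counts = Counter()
    for end in range(len(coded_turns)):
        stanza = set().union(*coded_turns[max(0, end - window + 1):end + 1])
        for pair in combinations(sorted(stanza), 2):
            counts[pair] += 1
    return counts

# Hypothetical coded turns (labels invented for illustration).
turns = [{"claim"}, {"ground"}, {"challenge"}, {"claim", "qualify"}]
net = code_cooccurrence(turns)
```

Comparing such co-occurrence structures between conditions is what lets ENA show, for example, that contrarian personas connect challenge moves to elaboration more densely than supportive ones.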
From Complexity to Parsimony: Integrating Latent Class Analysis to Uncover Multimodal Learning Patterns in Collaborative Learning
by Martinez-Maldonado, Roberto; Yan, Lixiang; Jin, Yueqiao
in Artificial intelligence; Audio data; Collaboration
2024
Multimodal Learning Analytics (MMLA) leverages advanced sensing technologies and artificial intelligence to capture complex learning processes, but integrating diverse data sources into cohesive insights remains challenging. This study introduces a novel methodology for integrating latent class analysis (LCA) within MMLA to map monomodal behavioural indicators into parsimonious multimodal ones. Using a high-fidelity healthcare simulation context, we collected positional, audio, and physiological data, deriving 17 monomodal indicators. LCA identified four distinct latent classes: Collaborative Communication, Embodied Collaboration, Distant Interaction, and Solitary Engagement, each capturing unique monomodal patterns. Epistemic network analysis compared these multimodal indicators with the original monomodal indicators and found that the multimodal approach was more parsimonious while offering higher explanatory power regarding students' task and collaboration performances. The findings highlight the potential of LCA in simplifying the analysis of complex multimodal data while capturing nuanced, cross-modality behaviours, offering actionable insights for educators and enhancing the design of collaborative learning interventions. This study proposes a pathway for advancing MMLA, making it more parsimonious and manageable, and aligning with the principles of learner-centred education.
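Latent class analysis over binary behavioural indicators amounts to a mixture of independent Bernoulli variables fit by EM; a toy sketch under that assumption (the indicator data are simulated, not the study's 17 MMLA indicators, and real analyses would use a dedicated LCA package):

```python
import numpy as np

def lca_em(X, k, iters=200, seed=0):
    """Toy latent class analysis: EM for a mixture of independent
    Bernoulli indicators (binary matrix X, k latent classes)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                  # latent class weights
    theta = rng.uniform(0.25, 0.75, (k, d))   # per-class indicator probs
    for _ in range(iters):
        # E-step: posterior class responsibilities per observation
        log_p = (X[:, None, :] * np.log(theta)
                 + (1 - X[:, None, :]) * np.log(1 - theta)).sum(-1) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and indicator probabilities
        nk = resp.sum(axis=0)
        pi = nk / n
        theta = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, resp

# Simulated binary indicators with two well-separated latent classes
# (values invented; not the study's data).
rng = np.random.default_rng(42)
A = (rng.random((40, 6)) < np.array([0.9, 0.9, 0.9, 0.1, 0.1, 0.1])).astype(float)
B = (rng.random((40, 6)) < np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9])).astype(float)
pi, theta, resp = lca_em(np.vstack([A, B]), k=2)
labels = resp.argmax(axis=1)  # hard class assignments
```

The estimated per-class indicator probabilities (theta) are what give each latent class its behavioural interpretation, analogous to the four named classes in the study above.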