871 result(s) for "Student behavior recognition"
Intelligent recognition of students’ behavior for smart learning environments
The automatic detection of student behaviors is essential for improving smart classroom technologies and offering data-driven insights regarding student engagement. Nevertheless, existing methods encounter considerable obstacles caused by class imbalance, restricted annotations, and the slight visual resemblances among behavior categories. To overcome these constraints, we present a meta-learning framework that combines Vision Transformers with Prototypical Networks, improved by supervised contrastive learning and hard negative mining. The process starts by preprocessing and cropping the input images, utilizing YAML annotations to focus on behavior-specific areas. Every input is converted into patch embeddings and handled by Transformer encoders, producing distinctive feature representations. Class prototypes are subsequently derived from the support set, and query samples are categorized through distance-based metrics within an episode-based few-shot learning framework. Extensive experiments were carried out on the SCB-05 dataset under 5-way few-shot settings to confirm the effectiveness of the proposed framework. The findings show that combining Vision Transformers with contrastive learning greatly enhances feature distinctiveness, whereas hard negative mining further boosts generalization. Under the 5-way 10-shot evaluation protocol, our method attains an overall accuracy of 91.3% and a higher mean Average Precision, exceeding the performance of both the baseline ProtoNet and Transformer variants without hard negative mining. Further analyses, including class-specific assessments, confusion matrices, and embedding visualizations, validate the robustness and interpretability of the proposed model. These results set a new standard for recognizing student behavior and emphasize the promise of meta-learning frameworks for practical uses in education.
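The episodic classification step this abstract describes — class prototypes averaged from support-set embeddings, queries assigned to the nearest prototype — can be sketched in a few lines. This is a generic prototypical-network illustration, not code from the paper; the 2-D embeddings and class names below are invented for demonstration.

```python
import math

def prototype(support_embeddings):
    """Class prototype: the mean of that class's support embeddings."""
    dim = len(support_embeddings[0])
    n = len(support_embeddings)
    return [sum(e[d] for e in support_embeddings) / n for d in range(dim)]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

# Toy 2-way episode with 2-D embeddings (illustrative values only).
protos = {
    "hand_raising": prototype([[0.9, 0.1], [1.1, -0.1]]),
    "reading":      prototype([[-1.0, 0.2], [-0.8, 0.0]]),
}
print(classify([0.8, 0.0], protos))  # nearest prototype: "hand_raising"
```

In the paper's setting the embeddings would come from the Vision Transformer encoder rather than being hand-specified, but the distance-based decision rule is the same.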
CSSA-YOLO: Cross-Scale Spatiotemporal Attention Network for Fine-Grained Behavior Recognition in Classroom Environments
Under a student-centered educational paradigm, project-based learning (PBL) assessment requires accurate identification of classroom behaviors to facilitate effective teaching evaluations and the implementation of personalized learning strategies. The increasing use of visual and multi-modal sensors in smart classrooms has made it possible to continuously capture rich behavioral data. However, challenges such as lighting variations, occlusions, and diverse behaviors complicate sensor-based behavior analysis. To address these issues, we introduce CSSA-YOLO, a novel detection network that incorporates cross-scale feature optimization. First, we establish a C2fs module that captures spatiotemporal dependencies in small-scale actions such as hand-raising through hierarchical window attention. Second, a Shuffle Attention mechanism is integrated into the neck to suppress interference from complex backgrounds, thereby enhancing the model’s ability to focus on relevant features. Finally, to further enhance the network’s ability to detect small targets and complex boundary behaviors, we utilize the WIoU loss function, which dynamically weights gradients to optimize the localization accuracy of occluded targets. Experiments involving the SCB03-S dataset showed that CSSA-YOLO outperforms traditional methods, achieving an mAP50 of 76.0%, surpassing YOLOv8m by 1.2%, particularly in complex background and occlusion scenarios. Furthermore, it reaches 78.31 FPS, meeting the requirements for real-time application. This study offers a reliable solution for precise behavior recognition in classroom settings, supporting the development of intelligent education systems.
Students’ behavior mining in e-learning environment using cognitive processes with information technologies
Rapid growth and recent developments in the education sector and information technologies have promoted E-learning and collaborative sessions among learning communities and business incubator centers. Traditional practices are being replaced with webinars (live online classes), E-Quizzes (online testing), and video lectures for effective learning and performance evaluation. These E-learning methods use sensors and multimedia tools to contribute to resource sharing, social networking, interactivity, and corporate training. Meanwhile, artificial intelligence tools are being integrated into various industries and organizations to support students’ engagement with and adaptability to the digital world. Predicting students’ behaviors and providing intelligent feedback is an important task in the E-learning domain. To optimize students’ behaviors in virtual environments, we propose embedding cognitive processes into information technologies. This paper presents hybrid spatio-temporal features for a student behavior recognition (SBR) system that recognizes student-student behaviors from sequences of digital images. The proposed SBR system segments student silhouettes by observing neighboring data points and extracts co-occurring, robust spatio-temporal features using full-body and key-body-point techniques. An artificial neural network is then used to measure student interactions taken from the UT-Interaction and classroom behaviors datasets. Finally, a survey was performed to evaluate the effectiveness of video-based interactive learning using the proposed SBR system.
Csb-yolo: a rapid and efficient real-time algorithm for classroom student behavior detection
In recent years, the integration of artificial intelligence in education has become key to enhancing the quality of teaching. This study addresses the real-time detection of student behavior in classroom environments by proposing the Classroom Student Behavior YOLO (CSB-YOLO) model. We enhance the model’s multi-scale feature fusion capability using the Bidirectional Feature Pyramid Network (BiFPN). Additionally, we have designed a novel Efficient Re-parameterized Detection Head (ERD Head) to accelerate the model’s inference speed and introduced Self-Calibrated Convolutions (SCConv) to compensate for any potential accuracy loss resulting from lightweight design. To further optimize performance, model pruning and knowledge distillation are utilized to reduce the model size and computational demands while maintaining accuracy. This makes CSB-YOLO suitable for deployment on low-performance classroom devices while maintaining robust detection capabilities. Tested on the classroom student behavior dataset SCB-DATASET3, the distilled and pruned CSB-YOLO, with only 0.72M parameters and 4.3 giga floating-point operations (GFLOPs), maintains high accuracy and exhibits excellent real-time performance, making it particularly suitable for educational environments.
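The knowledge-distillation step this abstract relies on is usually the standard Hinton-style formulation: the small student is trained to match the temperature-softened output distribution of the large teacher. The sketch below shows that loss term in plain Python; it is a generic illustration under that assumption, not CSB-YOLO's actual training code, and the logit values are invented.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in Hinton-style knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2

teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))          # 0.0: student matches teacher
print(distillation_loss([0.0, 0.0, 0.0], teacher))  # positive: mismatch penalized
```

In practice this term is mixed with the ordinary detection loss on ground-truth labels, so the student learns from both hard labels and the teacher's soft targets.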
Student behavior recognition based on multitask learning
The assessment of students’ classroom behavior is an important part of classroom teaching evaluation. However, teachers cannot timely, objectively, and accurately evaluate the listening status of each student in the class. We offer a multitask classroom behavior recognition method that combines human pose estimation and object detection. First, an object detector extracts the individual region from the keyframe as the network’s input. Then, the multitask heatmap network (MTHN) module extracts the intermediate heatmap of multiscale feature associations. The pose estimation and object detection tasks are constructed through mapping relations to obtain the keypoints and object position information. Finally, the keypoint behavior vector and the metric vector are used to model the behavior, and a classroom behavior detection algorithm based on a fully connected network is designed. Additionally, we created a classroom dataset with pose estimation, object, and behavior labels. Meanwhile, transfer learning is used to solve the problem of insufficient sample size. Across several experiments, we show that the detection accuracy of the proposed multitask learning-based student behavior recognition algorithm reaches more than 90%.
SBR-YOLO: context-position attention and adaptive feature fusion for student behavior recognition
In classroom scenarios, student behaviors exhibit high intra-class variance and subtle inter-class differences, while complex backgrounds and severe occlusions pose significant challenges for accurate behavior recognition. SBR-YOLO is proposed as a student behavior detection framework for accurate and robust recognition in complex classroom environments. To address the challenges posed by visually similar behaviors and non-uniform spatial distributions of targets, a Behavior-aware Context-Position Attention module is designed, which leverages learnable positional encoding and inter-head interaction mechanisms to capture spatial dependencies among behavioral regions and enable discriminative feature learning. To handle substantial scale variations between front-row and back-row students, an Adaptive Spatial Feature Fusion mechanism is introduced at each output level of the neck, prior to the detection heads, which adaptively learns fusion weights for cross-scale feature integration. A Class-Aware Discriminative Loss function is further introduced to enhance fine-grained discrimination by enforcing intra-class compactness and inter-class separation constraints. Experiments on SCB-Dataset3 demonstrate that SBR-YOLO achieves 74.2% mAP@50, representing a 6.4 percentage point improvement over the YOLOv8n baseline, with the parameter count increasing moderately from 3.0 M to 4.6 M. Comprehensive ablation studies and comparative experiments with state-of-the-art methods confirm the effectiveness of SBR-YOLO for student behavior recognition in complex smart classroom environments.
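The Adaptive Spatial Feature Fusion mechanism described above learns per-scale fusion weights before each detection head. A common way to realize this (assumed here; the paper's exact formulation may differ) is to softmax-normalize learnable scalars so the cross-scale weights sum to one. A minimal sketch with flattened toy feature vectors standing in for feature maps:

```python
import math

def adaptive_fuse(features, raw_weights):
    """Fuse same-shape feature vectors from different scales using
    softmax-normalized weights (the raw weights would be learned)."""
    exps = [math.exp(w) for w in raw_weights]
    total = sum(exps)
    alphas = [e / total for e in exps]  # non-negative, sum to 1
    fused = [0.0] * len(features[0])
    for alpha, feat in zip(alphas, features):
        fused = [f + alpha * x for f, x in zip(fused, feat)]
    return fused, alphas

# Three scales, equal raw weights -> the fusion reduces to an element-wise mean.
fused, alphas = adaptive_fuse([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
                              [0.0, 0.0, 0.0])
print(alphas)  # three equal weights of 1/3
print(fused)   # element-wise mean of the three inputs
```

During training, gradients flowing through the raw weights let the network emphasize whichever scale is most informative, e.g. finer scales for small back-row students.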
Domain-adaptive multi-modal deep learning for monitoring student fatigue and engagement in remote ideological and political education
Measuring student engagement and involvement in ideological and political education (IPE) is of utmost importance for learning outcomes. However, traditional classroom observation fails to pick up subtle behavioral signals, such as students who appear tired, inattentive, or passive, and manually observing large numbers of students or online sessions is impractical, while the act of observation itself can interfere with engagement. Therefore, a multi-modal, interpretable, and domain-adaptive system for real-time behavior recognition and fatigue detection in IPE classes is put forward. In short, the system observes facial dynamics (eye blinks via the eye aspect ratio, yawns via the mouth aspect ratio, and the PERCLOS metric) and skeletal posture features (joint distances and angles), and applies temporal modelling with Temporal Shift Modules (TSM) to capture cues of cognitive and physical engagement. A meta-learning framework (MAML) allows fast domain adaptation to new classroom environments with only a few labelled examples, thereby increasing generalizability. Experimental results across various classroom scenes show that the system classifies behaviors such as asking, looking, and boredom very accurately (F1-score ≥ 0.90) and detects fatigue with up to 94.5% accuracy; the system also quantifies Q&A participation through an XP model and uncovers inequalities in student engagement, showing substantial agreement with teacher ratings (Cohen’s κ = 0.81). Model transparency is ensured by visual explanations through heatmaps and time-series plots, enabling ethical deployment of the system in educational environments.
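The eye aspect ratio (EAR) used for blink detection in systems like this one is a standard landmark-based formula (Soukupová & Čech): the ratio of the two vertical eye-landmark distances to the horizontal one, which drops toward zero as the eye closes. A minimal sketch with invented landmark coordinates:

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks p1..p6:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    Approaches 0 as the eyelid closes; a frame below a threshold
    (commonly ~0.2) is counted toward a blink or PERCLOS."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative landmark positions for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye))    # well above the ~0.2 threshold
print(eye_aspect_ratio(closed_eye))  # near 0: a closed-eye frame
```

PERCLOS then follows as the fraction of frames in a time window whose EAR falls below the closure threshold; the mouth aspect ratio for yawn detection is computed analogously over mouth landmarks.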
Avatar Assistant: Improving Social Skills in Students with an ASD Through a Computer-Based Intervention
This study assessed the efficacy of FaceSay, a computer-based social skills training program for children with Autism Spectrum Disorders (ASD). This randomized controlled study (N = 49) indicates that providing children with low-functioning autism (LFA) and high-functioning autism (HFA) opportunities to practice attending to eye gaze, discriminating facial expressions, and recognizing faces and emotions in FaceSay’s structured environment with interactive, realistic avatar assistants improved their social skills abilities. The children with LFA demonstrated improvements in two areas of the intervention: emotion recognition and social interactions. The children with HFA demonstrated improvements in all three areas: facial recognition, emotion recognition, and social interactions. These findings, particularly the measured improvements to social interactions in a natural environment, are encouraging.
Real-Time Attention Monitoring System for Classroom: A Deep Learning Approach for Student’s Behavior Recognition
Effective classroom instruction requires monitoring student participation and interaction during class, identifying cues to stimulate their attention. The ability of teachers to analyze and evaluate students’ classroom behavior is becoming a crucial criterion for quality teaching. Artificial intelligence (AI)-based behavior recognition techniques can help evaluate students’ attention and engagement during classroom sessions. With rapid digitalization, the global education system is adapting and exploring emerging technological innovations, such as AI, the Internet of Things, and big data analytics, to improve education systems. In educational institutions, modern classroom systems are supplemented with the latest technologies to make them more interactive, student centered, and customized. However, it is difficult for instructors to assess students’ interest and attention levels even with these technologies. This study harnesses modern technology to introduce an intelligent real-time vision-based classroom to monitor students’ emotions, attendance, and attention levels even when they are wearing face masks. We used a machine learning approach to train students’ behavior recognition models, including identifying facial expressions, to identify students’ attention/non-attention in a classroom. The attention/non-attention dataset was collected across nine categories, and the model was trained from YOLOv5 pre-trained weights. For validation, the performance of various versions of the YOLOv5 model (v5m, v5n, v5l, v5s, and v5x) is compared based on different evaluation measures (precision, recall, mAP, and F1 score). Our results show that all models deliver promising performance, with 76% average accuracy. Applying the developed model can enable instructors to visualize students’ behavior and emotional states at different levels, allowing them to manage teaching sessions appropriately by considering student-centered learning scenarios. Overall, the proposed model is expected to enhance instructors’ performance and students’ academic outcomes.
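Among the evaluation measures listed for comparing the YOLOv5 variants, the F1 score is simply the harmonic mean of precision and recall. A one-function sketch (generic metric code, not from the study; the inputs below are illustrative):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; balances the two so a
    model cannot score well by maximizing only one of them."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.8, 0.6))  # ≈ 0.686, pulled below the arithmetic mean of 0.7
print(f1_score(1.0, 1.0))  # 1.0: perfect precision and recall
```

Because the harmonic mean is dominated by the smaller value, a detector with high precision but poor recall (or vice versa) is penalized, which is why F1 is reported alongside mAP when ranking the model variants.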
Entrepreneurial intention of Indian university students: the role of opportunity recognition and entrepreneurship education
Purpose: The purpose of this study is to investigate the impact of opportunity recognition and entrepreneurial self-efficacy on the entrepreneurial intention of Indian university students. This paper also examines the moderating role of entrepreneurship education and gender on the opportunity recognition–intention and self-efficacy–intention relationships.
Design/methodology/approach: The data were collected through a comprehensive questionnaire from 334 students with business and management backgrounds. Confirmatory factor analysis was used to ensure the reliability and validity of all the constructs, and structural equation modeling was used to test the proposed hypotheses.
Findings: This study unveils three important findings. First, opportunity recognition and self-efficacy both show a significant positive impact on the entrepreneurial intention of students. Second, education positively moderates the self-efficacy–intention relationship, and third, gender negatively moderates the opportunity recognition–intention and self-efficacy–intention relationships.
Research limitations/implications: This study was carried out using a sample of students from only one university and included only students with business and management backgrounds. Similar studies can be conducted by adding more motivational and contextual factors with a larger sample of students from different educational backgrounds.
Practical implications: This study provides pragmatic support for formulating new educational initiatives that can support students in their present or future entrepreneurial projects.
Originality/value: This study adds to the scarce literature on opportunity recognition and entrepreneurial intention and highlights the moderating role of entrepreneurship education and gender on the opportunity recognition–intention and entrepreneurial self-efficacy–intention relationships.