Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
2,113 result(s) for "consistency of learning"
Visualising and evaluating learning/achievement consistency in introductory statistics
by King, Rachel; Curtis, Elizabeth; Axelsen, Taryn
in Assessment; combination analysis; consistency of learning
2025
In tertiary education, assessment plays a critical role in shaping student engagement and measuring learning outcomes. In introductory statistics courses, understanding earlier material is essential for later topics, necessitating consistent engagement to avoid fragmented learning. Assessment influences motivation and the depth of conceptual understanding upon course completion. Traditional methods such as cumulative grading and learning analytics often fail to capture the complexity of student knowledge. This research employed a multi-layered approach, including innovative 'consistency of learning', 'combination analysis' and 'heatmap' techniques, to examine performance across 11 learning modules. Results showed that Pass-grade (50-64%) students often did not complete key modules adequately, resulting in fragmented understanding. The study highlighted the limitations of traditional evaluation methods in capturing the complexity and variability of student knowledge. It further emphasized the importance of thoughtful assessment design to ensure that students develop a cohesive understanding of the material regardless of the grade level they achieve. Given the increasing importance of statistical literacy in today's data-centric society, it is vital to equip students with the knowledge to make informed decisions about data. By integrating these novel evaluation methods, educators can better understand and support student achievement and improve learning outcomes in introductory statistics. (An illustrative module-consistency heatmap sketch follows this record.)
Journal Article
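As a rough, illustrative companion to the abstract above, the sketch below builds a per-student module-adequacy heatmap and a simple 'consistency of learning' score over 11 modules. The synthetic data, the 50% adequacy threshold, and the grade-based ordering are assumptions for illustration, not the authors' published procedure.

```python
# Illustrative sketch only: synthetic data and an assumed 50% adequacy
# threshold, not the study's actual 'consistency of learning' procedure.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_students, n_modules = 40, 11
scores = rng.uniform(0, 100, size=(n_students, n_modules))   # % per module

adequate = scores >= 50                      # assumed adequacy threshold
consistency = adequate.mean(axis=1)          # fraction of modules completed adequately

order = np.argsort(scores.mean(axis=1))      # sort students by overall grade

fig, ax = plt.subplots(figsize=(8, 6))
im = ax.imshow(adequate[order], aspect="auto", cmap="Greys", interpolation="nearest")
ax.set_xlabel("Learning module (1-11)")
ax.set_ylabel("Students (sorted by overall grade)")
ax.set_title("Module adequacy heatmap (synthetic data)")
fig.colorbar(im, ax=ax, label="completed adequately")
plt.show()
```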
Style-Hallucinated Dual Consistency Learning: A Unified Framework for Visual Domain Generalization
by Sebe, Nicu; Lee, Gim Hee; Zhong, Zhun
in Annotations; Artificial neural networks; Computer science
2024
Domain shift widely exists in the visual world, and modern deep neural networks commonly suffer severe performance degradation under domain shift due to poor generalization ability, which limits real-world applications. The domain shift mainly lies in the limited source environmental variations and the large distribution gap between source and unseen target data. To this end, we propose a unified framework, Style-HAllucinated Dual consistEncy learning (SHADE), to handle such domain shift in various visual tasks. Specifically, SHADE is constructed on two consistency constraints, Style Consistency (SC) and Retrospection Consistency (RC). SC enriches the source situations and encourages the model to learn consistent representations across style-diversified samples. RC leverages general visual knowledge to prevent the model from overfitting to the source data and thus largely keeps the representation consistent between the source and general visual models. Furthermore, we present a novel style hallucination module (SHM) to generate the style-diversified samples that are essential to consistency learning. SHM selects basis styles from the source distribution, enabling the model to dynamically generate diverse and realistic samples during training. Extensive experiments demonstrate that our versatile SHADE can significantly enhance generalization in various visual recognition tasks, including image classification, semantic segmentation, and object detection, with different models, i.e., ConvNets and Transformers. (A hedged sketch of the two consistency terms follows this record.)
Journal Article
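A minimal sketch of the two consistency terms named in the abstract above, assuming a symmetric-KL form for Style Consistency and a mean-squared form for Retrospection Consistency against a frozen, generally pre-trained model; the loss forms and weighting are assumptions, not the published SHADE implementation.

```python
import torch
import torch.nn.functional as F

def style_consistency(logits_orig, logits_styled):
    # Encourage consistent predictions across style-diversified views of the
    # same sample (a symmetric KL between softmax outputs; exact form assumed).
    p = F.log_softmax(logits_orig, dim=1)
    q = F.log_softmax(logits_styled, dim=1)
    return 0.5 * (F.kl_div(p, q.exp(), reduction="batchmean")
                  + F.kl_div(q, p.exp(), reduction="batchmean"))

def retrospection_consistency(feat_task, feat_general):
    # Keep task-model features close to those of a frozen, generally
    # pre-trained model so the representation does not overfit to the source.
    return F.mse_loss(feat_task, feat_general.detach())

# Dummy usage: 19-class logits (e.g., segmentation classes) and 256-d features.
logits_a, logits_b = torch.randn(4, 19), torch.randn(4, 19)
feat, feat_ref = torch.randn(4, 256), torch.randn(4, 256)
loss = style_consistency(logits_a, logits_b) + 0.1 * retrospection_consistency(feat, feat_ref)
```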
Task-like training paradigm in CLIP for zero-shot sketch-based image retrieval
by Jiang, He; Liu, Jingjing; Zhang, Haoxiang
in Collaboration; Computer Communication Networks; Computer Science
2024
The Contrastive Language-Image Pre-training model (CLIP) has recently gained attention in the zero-shot domain. However, it still falls short in addressing cross-modal perception and the semantic gap between seen and unseen classes in Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR). To overcome these obstacles, we propose a Task-Like Training paradigm (TLT). In this work, we view cross-modal perception and the semantic gap as a multi-task learning process. Before tackling these challenges, we fully utilize CLIP's text encoder and propose a text-based identification learning mechanism to help the model quickly learn discriminative features. Next, we propose text prompt tutoring and cross-modal consistency learning to address cross-modal perception and the semantic gap, respectively. Meanwhile, we present a collaborative architecture to explore the potential shared information between tasks. Extensive results show that our approach significantly outperforms state-of-the-art methods on the Sketchy, Sketchy-No, TU-Berlin, and QuickDraw datasets. (A hedged cross-modal consistency sketch follows this record.)
Journal Article
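The cross-modal consistency learning mentioned in the abstract above could, under assumptions, look like the cosine-based term below that pulls together embeddings of paired sketches and photos; this is a sketch, not the TLT paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_modal_consistency(sketch_emb, photo_emb):
    # Pull L2-normalised embeddings of matching sketch/photo pairs together
    # (1 - cosine similarity, averaged over the batch).
    s = F.normalize(sketch_emb, dim=-1)
    p = F.normalize(photo_emb, dim=-1)
    return (1.0 - (s * p).sum(dim=-1)).mean()

# Dummy usage with CLIP-sized 512-d embeddings of paired sketches and photos.
sketch_emb, photo_emb = torch.randn(8, 512), torch.randn(8, 512)
loss = cross_modal_consistency(sketch_emb, photo_emb)
```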
Swin-Fake: A Consistency Learning Transformer-Based Deepfake Video Detector
by Gong, Liang Yu; Li, Xue Jun; Chong, Peter Han Joo
in Algorithms; Classification; Cybersecurity
2024
Deepfakes have become an emerging technology affecting cyber-security through their illegal applications in recent years. Most deepfake detectors utilize CNN-based models such as the Xception network to distinguish real from fake media; however, their cross-dataset performance is not ideal because they currently suffer from over-fitting. Therefore, this paper proposes a spatial consistency learning method that relieves this issue in three respects. Firstly, we increased the number of data augmentation methods to five, more than in our previous study. Specifically, we captured several equal video frames of one video and randomly selected five different data augmentations to obtain different data views, enriching the input variety. Secondly, we chose the Swin Transformer as the feature extractor instead of a CNN-based backbone, so our approach does not rely on a CNN for downstream tasks and can encode the data with an end-to-end Swin Transformer, aiming to learn correlations between different image patches. Finally, we combined this with consistency learning, which can capture more data relationships than supervised classification alone. We explored the consistency of video frames' features by calculating their cosine distance and applied a traditional cross-entropy loss to regulate the classification loss. Extensive in-dataset and cross-dataset experiments demonstrated that Swin-Fake produces relatively good results on several open-source deepfake datasets, including FaceForensics++, DFDC, Celeb-DF and FaceShifter. Compared with several benchmark models, our approach shows relatively strong robustness in detecting deepfake media. (A hedged sketch of the frame-consistency term follows this record.)
Journal Article
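The abstract above describes measuring the consistency of video-frame features by cosine distance alongside a cross-entropy classification loss; a minimal sketch of that combination, with the view pairing and the 0.5 weighting as assumptions, is below.

```python
import torch
import torch.nn.functional as F

def frame_consistency(frame_feats):
    # frame_feats: (num_views, dim) features of augmented frames of one video.
    f = F.normalize(frame_feats, dim=-1)
    sim = f @ f.t()                                   # pairwise cosine similarity
    n = f.size(0)
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]   # drop self-similarity
    return (1.0 - off_diag).mean()                    # mean cosine distance

# Dummy usage: five augmented views of one video, plus a real/fake logit pair.
feats = torch.randn(5, 768)
logits, label = torch.randn(1, 2), torch.tensor([1])
loss = F.cross_entropy(logits, label) + 0.5 * frame_consistency(feats)
```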
SADCL-Net: Sparse-driven Attention with Dual-Consistency Learning Network for Incomplete Multi-view Clustering
by Xue, Sicheng; Zhu, Changming
in Clustering; Computer Communication Networks; Computer Graphics
2024
In recent years, various deep learning-based methods have been proposed to address the problem of incomplete multi-view clustering. However, these methods still face two major challenges: first, owing to the incompleteness of views, they are susceptible to noise and irrelevant features, leading to redundant and untargeted learned features; second, although these methods attempt to leverage the consistent information across different views for cross-view learning, they often overemphasize consistency and thereby neglect inter-view differences. To address these issues, this paper proposes a novel method: the Sparse-driven Attention with Dual-Consistency Learning Network for Incomplete Multi-view Clustering (SADCL-Net). Specifically, we utilize a sparsity-constrained self-attention module to effectively capture the intrinsic sparsity and local salient information of the data, thereby highlighting key features. Subsequently, we design a Dual-Consistency Learning module, which incorporates a joint entropy term into the consistency loss function to balance the consistency and difference information across views, thereby optimizing the contribution of different views to the final clustering results. Additionally, to avoid potential issues arising from imputing missing data, we integrate feature projection and alignment modules. The proposed method has been extensively evaluated on six widely used multi-view datasets, with results demonstrating the significant advantages of SADCL-Net in clustering performance. (A hedged sketch of the dual-consistency idea follows this record.)
Journal Article
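One plausible, assumption-laden reading of the Dual-Consistency Learning idea with an entropy balancing term is sketched below: the two views' soft cluster assignments are pulled together while an entropy term on their averaged assignment preserves inter-view diversity. This is not the SADCL-Net joint-entropy formulation.

```python
import torch
import torch.nn.functional as F

def dual_consistency(p_view1, p_view2, alpha=0.1):
    # p_view*: (batch, n_clusters) soft cluster assignments from two views.
    consistency = F.mse_loss(p_view1, p_view2)
    mean_assign = 0.5 * (p_view1 + p_view2).mean(dim=0)
    entropy = -(mean_assign * (mean_assign + 1e-8).log()).sum()
    # Higher entropy of the averaged assignment discourages collapse, so it is
    # traded off against strict agreement between the views.
    return consistency - alpha * entropy

# Dummy usage with 16 samples assigned softly to 10 clusters in each view.
p1 = F.softmax(torch.randn(16, 10), dim=1)
p2 = F.softmax(torch.randn(16, 10), dim=1)
loss = dual_consistency(p1, p2)
```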
Multi-UAV Roundup Inspired by Hierarchical Cognition Consistency Learning Based on an Interaction Mechanism
2023
This paper is concerned with the problem of multi-UAV roundup, inspired by hierarchical cognition consistency learning based on an interaction mechanism. First, a dynamic communication model is constructed to address the interactions among multiple agents. This model includes a simplification of the communication graph relationships and a quantification of information efficiency. Then, a hierarchical cognition consistency learning method is proposed to improve the efficiency and success rate of the roundup. At the same time, an opponent graph reasoning network is proposed to address the prediction of targets. Compared with existing multi-agent reinforcement learning (MARL) methods, the method developed in this paper has the distinctive feature that target assignment and target prediction are carried out simultaneously. Finally, to verify the effectiveness of the proposed method, we present extensive experiments conducted in the scenario of multi-target roundup. The experimental results show that the proposed architecture outperforms the conventional approach with respect to roundup success rate and verify the validity of the proposed model. (A loose illustration of a graph-based consistency regularizer follows this record.)
Journal Article
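The abstract above stays at the architectural level; purely as a loose illustration (not the paper's algorithm), a graph-based "cognition consistency" regularizer might pull each agent's latent cognition vector toward the mean of its communication neighbours', as below.

```python
import torch

def cognition_consistency(cognitions, adjacency):
    # cognitions: (n_agents, dim) latent vectors; adjacency: (n_agents, n_agents)
    # 0/1 communication matrix. Pull each agent toward its neighbours' mean.
    deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
    neighbour_mean = adjacency @ cognitions / deg
    return ((cognitions - neighbour_mean) ** 2).mean()

# Dummy usage: four UAVs with 32-d cognition vectors and an assumed topology.
cogs = torch.randn(4, 32)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=torch.float32)
reg = cognition_consistency(cogs, adj)
```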
Semi-Supervised Building Detection from High-Resolution Remote Sensing Imagery
2023
Urban building information reflects the status and trends of a region's development and is essential for urban sustainability. Detection of buildings from high-resolution (HR) remote sensing images (RSIs) provides a practical approach for quickly acquiring building information. Mainstream building detection methods are based on fully supervised deep learning networks, which require a large number of labeled RSIs. In practice, manually labeling building instances in RSIs is labor-intensive and time-consuming. This study introduces semi-supervised deep learning techniques for building detection and proposes a semi-supervised building detection framework to alleviate this problem. Specifically, the framework is based on teacher–student mutual learning and consists of two key modules: the color and Gaussian augmentation (CGA) module and the consistency learning (CL) module. The CGA module is designed to enrich the diversity of building features and the quantity of labeled images for better training of an object detector. The CL module derives a novel consistency loss by imposing consistency on predictions from augmented unlabeled images to enhance detection ability on the unlabeled RSIs. The experimental results on three challenging datasets show that the proposed framework outperforms state-of-the-art building detection methods and semi-supervised object detection methods. This study develops a new approach for optimizing the building detection task and provides a methodological reference for various object detection tasks on RSIs. (A hedged teacher-student consistency sketch follows this record.)
Journal Article
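A minimal sketch of the consistency learning (CL) idea described above, assuming a teacher updated as an exponential moving average of the student and a mean-squared consistency loss between predictions on weakly and strongly augmented unlabeled images; the paper's detection-specific losses and augmentations are not reproduced.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # Teacher weights follow the student as an exponential moving average.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1 - momentum)

def consistency_loss(student_logits, teacher_logits):
    # Match student predictions on a strongly augmented view of an unlabeled
    # image to teacher predictions on a weakly augmented view of the same image.
    return F.mse_loss(student_logits.softmax(dim=1),
                      teacher_logits.softmax(dim=1).detach())

# Dummy usage: dense building/background predictions on 64x64 image patches.
student_logits = torch.randn(2, 2, 64, 64)
teacher_logits = torch.randn(2, 2, 64, 64)
loss = consistency_loss(student_logits, teacher_logits)
```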
Multi-Source Consistency Deep Learning for Semi-Supervised Operating Condition Recognition in Sucker-Rod Pumping Wells
2024
Making full use of the multiple measured information sources obtained from sucker-rod pumping wells with deep learning is crucial for precisely recognizing operating conditions. However, existing deep learning-based operating condition recognition technology suffers from low accuracy and weak practicality owing to the limitations of methods for handling single-source or multi-source data, the high demand for sufficient labeled data, and the inability to make use of massive unknown operating condition data resources. To solve these problems, we design a semi-supervised operating condition recognition method based on multi-source consistency deep learning. Specifically, on the basis of a WideResNet28-2 convolutional neural network (CNN) framework, a multi-head self-attention mechanism and a feedforward neural network are first used to extract deeper features from the measured dynamometer cards and the measured electrical power cards, respectively. Then, a consistency constraint loss based on cosine similarity measurement is introduced to ensure the maximum similarity of the final features expressed by the different information sources. Next, the optimal global feature representation of the multi-source fusion is obtained by learning the weights of the feature representations of the different information sources through an adaptive attention mechanism. Finally, the fused multi-source features, combined with multi-source semi-supervised class-aware contrastive learning, are exploited to yield the operating condition recognition model. We test the proposed model on a dataset produced from an oilfield in China with a high-pressure, low-permeability thin oil reservoir block. Experiments show that the proposed method can better learn the critical features of the multiple measured information sources of oil wells and further improve operating condition identification performance by making full use of unknown operating condition data with a small amount of labeled data. (A hedged sketch of the cosine consistency and attention fusion follows this record.)
Journal Article
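Two mechanisms described above, the cosine-similarity consistency constraint between the two measured sources' features and the adaptive attention-weighted fusion, are sketched below; the feature dimensions and the exact fusion form are assumptions.

```python
import torch
import torch.nn.functional as F

def source_consistency(feat_a, feat_b):
    # Maximise the cosine similarity between the two sources' features
    # (dynamometer card vs. electrical power card) by minimising 1 - cos.
    return (1.0 - F.cosine_similarity(feat_a, feat_b, dim=-1)).mean()

def attention_fusion(feat_a, feat_b, scorer):
    # Learn per-sample weights over the two sources and fuse their features.
    stacked = torch.stack([feat_a, feat_b], dim=1)                # (batch, 2, dim)
    weights = torch.softmax(scorer(stacked).squeeze(-1), dim=1)   # (batch, 2)
    return (weights.unsqueeze(-1) * stacked).sum(dim=1)           # (batch, dim)

# Dummy usage with assumed 128-d features from each information source.
scorer = torch.nn.Linear(128, 1)
fa, fb = torch.randn(8, 128), torch.randn(8, 128)
loss = source_consistency(fa, fb)
fused = attention_fusion(fa, fb, scorer)
```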
Reverse collaborative fusion model for co-saliency detection
2022
The purpose of co-saliency detection is to find the salient objects common to a group of related images. This paper proposes a novel reverse collaborative fusion model (RCFM) for co-saliency detection. The model is mainly composed of two parts: a reverse message fusion module (RMFM) and a collaborative consistency learning module (CCLM). Specifically, we first aggregate the features in the high-level layers as global guidance using a cascaded decoder (CD). Then, we apply repeated RMFMs on each side output to complete the complementary fusion of deep and shallow information and fuse the multi-scale feature maps into initial co-saliency maps. Finally, the CCLM extracts the collaborative information between images to improve the quality of the initial co-saliency maps and obtain the final co-saliency maps. The model fully considers the high-level semantic features and the low-level boundary features, thereby correcting some erroneous predictions and improving the accuracy of co-saliency detection. Experimental results demonstrate that the proposed approach achieves the best performance against state-of-the-art approaches on four evaluation metrics across three datasets. (A loose illustration of cross-image consensus attention follows this record.)
Journal Article
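As a loose, hedged illustration of collaborative information between images (not the RCFM code), the sketch below correlates each image's feature map with a group consensus vector to highlight regions shared across the group.

```python
import torch
import torch.nn.functional as F

def collaborative_attention(feat_maps):
    # feat_maps: (n_images, channels, H, W) deep features of a related group.
    n, c, h, w = feat_maps.shape
    pooled = F.normalize(feat_maps.mean(dim=(2, 3)), dim=1)   # (n, c)
    consensus = F.normalize(pooled.mean(dim=0), dim=0)        # group consensus (c,)
    flat = F.normalize(feat_maps.view(n, c, -1), dim=1)       # (n, c, H*W)
    # Correlate every spatial position with the consensus vector to highlight
    # regions that are shared across the image group.
    attn = torch.einsum("c,ncl->nl", consensus, flat).view(n, 1, h, w)
    return attn.sigmoid()

# Dummy usage: five related images with 256-channel 32x32 feature maps.
feat_maps = torch.randn(5, 256, 32, 32)
co_attention = collaborative_attention(feat_maps)
```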
Smoothness-based consistency learning for macaque pose estimation
2023
Macaques are a valuable substitute for humans and play an important role in the study of human psychology and mental science. Accurate estimation of macaque pose information is key to these studies, yet macaque pose estimation remains hindered by the scarcity of labeled images. To address this problem, this work introduces a novel semi-supervised approach called smoothness-based spatio-temporal consistency learning (SSTCL) together with a dual network structure (DNS) to leverage large amounts of unlabeled real images. Specifically, SSTCL introduces the smoothness assumption to help the model generalize from the labeled training images to the unlabeled images, and the spatio-temporal consistency is designed to leverage both spatial and temporal consistency to pick the most reliable pseudo-labels. Moreover, the DNS gives the model the ability to self-correct, which prevents the degeneration caused by noisy pseudo-labels in semi-supervised learning. Ablation experiments demonstrate the effectiveness of the DNS for pseudo-label quality assurance. We evaluate the proposed method on the public OpenMonkeyPose dataset; the results show that the proposed method achieves competitive performance while using fewer labeled images, and its final accuracy surpasses the strong HRNet-w48 baseline by 2.1 AP. (A hedged temporal-smoothness pseudo-label sketch follows this record.)
Journal Article
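A hedged sketch of the temporal-smoothness idea above: keep pseudo-labelled frames only where predicted keypoints move little between neighbouring frames. The displacement threshold is an assumption, and the dual network structure's self-correction is not reproduced.

```python
import torch

def smooth_pseudo_label_mask(keypoints, max_disp=5.0):
    # keypoints: (frames, joints, 2) predicted (x, y) positions over a clip.
    disp = (keypoints[1:] - keypoints[:-1]).norm(dim=-1)   # (frames-1, joints)
    smooth = disp.max(dim=-1).values < max_disp            # per frame transition
    mask = torch.zeros(keypoints.size(0), dtype=torch.bool)
    # Keep a frame as a pseudo-label only if both adjacent transitions are smooth.
    mask[1:-1] = smooth[:-1] & smooth[1:]
    return mask

# Dummy usage: a synthetic 30-frame trajectory of 17 keypoints.
kps = torch.cumsum(torch.randn(30, 17, 2), dim=0)
keep = smooth_pseudo_label_mask(kps)
```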