3,952 results for "Self-supervised learning"
Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging
Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, allowing models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. The early SSL methods are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, i.e., labels created automatically from the dataset's attributes. Contrastive learning has also performed well in learning representations via SSL: it pulls positive samples closer together and pushes negative ones further apart in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques. It details the motivation for this research, the general SSL pipeline, and the terminology of the field, and examines pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses further considerations and ongoing challenges faced by SSL.
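As an illustration of the contrastive idea summarized above (positives pulled together, negatives pushed apart), a minimal InfoNCE-style loss might look like the sketch below. This is a generic example, not code from the surveyed paper; the temperature value and the convention that the i-th rows of the two batches form a positive pair are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss for two batches of embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    The i-th rows of z1 and z2 form a positive pair; every other row in z2
    serves as a negative for z1[i].
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random "embeddings" standing in for an encoder's outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2).item())
```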
Context Autoencoder for Self-supervised Representation Learning
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE), for self-supervised representation pretraining. We pretrain an encoder by making predictions in the encoded representation space. Pretraining comprises two tasks: masked representation prediction (predict the representations of the masked patches) and masked patch reconstruction (reconstruct the masked patches). The network is an encoder–regressor–decoder architecture: the encoder takes the visible patches as input; the regressor predicts the representations of the masked patches, which are expected to be aligned with the representations computed from the encoder, using the representations of visible patches and the positions of visible and masked patches; the decoder reconstructs the masked patches from the predicted encoded representations. The CAE design encourages separating the learning of the encoder (representation) from completing the pretraining tasks of masked representation prediction and masked patch reconstruction, and making predictions in the encoded representation space empirically benefits representation learning. We demonstrate the effectiveness of our CAE through superior transfer performance in downstream tasks: semantic segmentation, object detection and instance segmentation, and classification. The code will be available at https://github.com/Atten4Vis/CAE.
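A rough sketch of the encoder–regressor–decoder flow described in this abstract is given below. The module sizes, the way the regressor forms its prediction, and the loss terms are illustrative assumptions, not the authors' released code (see the linked repository for that).

```python
import torch
import torch.nn as nn

class TinyCAE(nn.Module):
    """Toy encoder-regressor-decoder, loosely following the CAE description."""
    def __init__(self, patch_dim=48, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden), nn.GELU())
        self.regressor = nn.Linear(hidden, hidden)   # predicts masked representations
        self.decoder = nn.Linear(hidden, patch_dim)  # reconstructs masked patches

    def forward(self, visible, masked):
        z_vis = self.encoder(visible)                         # encode visible patches
        z_pred = self.regressor(z_vis.mean(1, keepdim=True))  # predict masked reps
        z_pred = z_pred.expand(-1, masked.size(1), -1)
        with torch.no_grad():                                 # target representations
            z_tgt = self.encoder(masked)
        recon = self.decoder(z_pred)                          # reconstruct masked patches
        align_loss = (z_pred - z_tgt).pow(2).mean()           # masked representation prediction
        recon_loss = (recon - masked).pow(2).mean()           # masked patch reconstruction
        return align_loss + recon_loss

model = TinyCAE()
visible = torch.randn(2, 10, 48)   # (batch, n_visible_patches, patch_dim)
masked = torch.randn(2, 6, 48)     # (batch, n_masked_patches, patch_dim)
print(model(visible, masked).item())
```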
Drug-target binding affinity prediction using message passing neural network and self supervised learning
Background: Drug-target binding affinity (DTA) prediction is important for the rapid development of drug discovery. Compared to traditional methods, deep learning methods provide a new way for DTA prediction to achieve good performance without much knowledge of the biochemical background. However, there is still room for improvement in DTA prediction: (1) focusing only on atom-level information leads to an incomplete representation of the molecular graph; (2) self-supervised learning could be introduced for protein representation. Results: In this paper, a deep learning DTA prediction model is proposed, which uses an undirected-CMPNN for molecular embedding and combines CPCProt and MLM models for protein embedding. An attention mechanism is introduced to discover the important parts of the protein sequence. The proposed method is evaluated on the Ki and Davis datasets, where it outperforms other deep learning methods. Conclusions: The proposed model improves the performance of DTA prediction and provides a novel strategy for deep learning-based virtual screening methods.
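The role of the attention mechanism mentioned here (weighting residues of the protein sequence before combining with the molecule embedding to regress an affinity) can be sketched as follows. All layer names, dimensions, and the concatenation-plus-MLP head are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ToyDTAHead(nn.Module):
    """Illustrative affinity head: attention-pool a protein sequence embedding,
    concatenate it with a molecule embedding, and regress a binding affinity."""
    def __init__(self, prot_dim=64, mol_dim=64):
        super().__init__()
        self.attn_score = nn.Linear(prot_dim, 1)   # per-residue attention scores
        self.regressor = nn.Sequential(
            nn.Linear(prot_dim + mol_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, prot_tokens, mol_vec):
        # prot_tokens: (batch, seq_len, prot_dim); mol_vec: (batch, mol_dim)
        weights = torch.softmax(self.attn_score(prot_tokens), dim=1)  # highlight residues
        prot_vec = (weights * prot_tokens).sum(dim=1)                 # weighted pooling
        return self.regressor(torch.cat([prot_vec, mol_vec], dim=-1))

head = ToyDTAHead()
affinity = head(torch.randn(4, 200, 64), torch.randn(4, 64))
print(affinity.shape)  # torch.Size([4, 1])
```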
MSResNet: Multiscale Residual Network via Self-Supervised Learning for Water-Body Detection in Remote Sensing Imagery
Driven by the urgent demand for flood monitoring, water resource management and environmental protection, water-body detection in remote sensing imagery has attracted increasing research attention. Deep semantic segmentation networks (DSSNs) have gradually become the mainstream technology for remote sensing image water-body detection, but two vital problems remain. One is that the traditional structure of DSSNs does not consider the multiscale and multishape characteristics of water bodies. Another is that a large amount of unlabeled data is not fully utilized during training, even though the unlabeled data often contain meaningful supervision information. In this paper, we propose a novel multiscale residual network (MSResNet) that uses self-supervised learning (SSL) for water-body detection. More specifically, our well-designed MSResNet distinguishes water bodies with different scales and shapes and helps retain their detailed boundaries. In addition, optimizing MSResNet with our SSL strategy improves the stability and universality of the method, and the presented SSL approach can be flexibly extended to practical applications. Extensive experiments on two publicly available datasets, the 2020 Gaofen Challenge water-body segmentation dataset and the GID dataset, demonstrate that our MSResNet clearly outperforms state-of-the-art deep learning backbones and that our SSL strategy further improves water-body detection performance.
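One common way to capture multiscale characteristics in a residual network is to run parallel branches with different receptive fields and fuse them before the residual addition. The sketch below shows that generic pattern only; it is an assumption for illustration, not the MSResNet design from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    """Toy residual block with parallel branches at different receptive fields."""
    def __init__(self, channels=32):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation see objects of
        # different scales; their outputs are fused and added back residually.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4)])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        multiscale = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(x + self.fuse(multiscale))

block = MultiScaleResBlock()
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```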
A Review of Predictive and Contrastive Self-supervised Learning for Medical Images
Over the last decade, supervised deep learning on manually annotated big data has made significant progress in computer vision tasks. However, the application of deep learning in medical image analysis is limited by the scarcity of high-quality annotated medical imaging data. An emerging solution is self-supervised learning (SSL), among which contrastive SSL is the most successful approach, rivalling or outperforming supervised learning. This review investigates several state-of-the-art contrastive SSL algorithms originally developed for natural images, as well as their adaptations for medical images, and concludes by discussing recent advances, current limitations, and future directions for applying contrastive SSL in the medical domain.
Denoising self-supervised learning for disease-gene association prediction
Understanding the interplay between diseases and genes is crucial for gaining deeper insights into disease mechanisms and optimizing therapeutic strategies. In recent years, various computational methods have been developed to uncover potential disease-gene associations. However, existing computational approaches for disease-gene association prediction still face two major limitations. First, most current studies focus on constructing complex heterogeneous graphs from multi-dimensional biological entity relationships, while overlooking critical latent interaction patterns, namely disease neighbor interactions and gene neighbor interactions, which are more valuable for association prediction. Second, in self-supervised learning (SSL), noise in the auxiliary tasks commonly affects the accurate modeling of diseases and genes. In this study, we propose a novel denoising method for disease-gene association prediction, termed DGSL. To address the first issue, we use the bipartite graphs of diseases and genes to derive disease-disease and gene-gene similarities, and further construct disease and gene interaction graphs to capture the latent interaction patterns. To tackle the second challenge, we perform cross-view denoising through adaptive semantic alignment in the embedding space while preserving useful neighbor interactions. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method.
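One simple way to derive disease-disease and gene-gene similarities from a bipartite association matrix, as described above, is cosine similarity between its rows and columns, keeping only the strongest neighbors to build interaction graphs. This is an illustrative assumption about the similarity measure and neighbor selection, not necessarily the DGSL formulation.

```python
import numpy as np

def cosine_similarity_matrix(M):
    """Pairwise cosine similarity between the rows of M."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                      # avoid division by zero
    U = M / norms
    return U @ U.T

# Toy bipartite disease-gene association matrix: rows = diseases, cols = genes.
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 1]], dtype=float)

disease_sim = cosine_similarity_matrix(A)     # disease-disease similarities
gene_sim = cosine_similarity_matrix(A.T)      # gene-gene similarities

# Keep only the strongest neighbor per disease to build an interaction graph.
k = 1
disease_neighbors = np.argsort(-disease_sim, axis=1)[:, 1:k + 1]
print(disease_sim.round(2))
print(disease_neighbors)
```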
Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning
SAR-optical images from different sensors can provide consistent information for scene classification. However, the utilization of unlabeled SAR-optical images in deep learning-based remote sensing image interpretation remains an open issue. In recent years, contrastive self-supervised learning (CSSL) methods have shown great potential for obtaining meaningful feature representations from massive amounts of unlabeled data. This paper investigates the effectiveness of CSSL-based pretraining models for SAR-optical remote-sensing classification. Firstly, we analyze the contrastive strategies of single-source and multi-source SAR-optical data augmentation under different CSSL architectures, and find that the CSSL framework without explicit negative sample selection naturally fits the multi-source learning problem. Secondly, we find that registered SAR-optical images can guide the Siamese self-supervised network without negative samples to learn shared features, which is also why the framework without negative samples outperforms the one with negative samples. Finally, we apply the negative-sample-free CSSL pretrained network, which learns the shared features of SAR-optical images, to the downstream domain adaptation task of transferring from optical to SAR images. We find that the choice of pretrained network is important for downstream tasks.
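The negative-sample-free setup described here can be sketched by treating a registered SAR patch and its optical counterpart as two views of the same scene and feeding them through an online/target pair of networks, roughly in the spirit of BYOL/SimSiam-style frameworks. The encoders, input shapes, and single-direction loss below are illustrative assumptions, not the paper's networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(dim=128):
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim), nn.ReLU(),
                         nn.Linear(dim, dim))

online, target = make_encoder(), make_encoder()
predictor = nn.Linear(128, 128)
target.load_state_dict(online.state_dict())
for p in target.parameters():
    p.requires_grad = False                    # the target branch gets no gradients

def negative_free_loss(sar, optical):
    """Pull online(SAR) toward target(optical) without any negative samples."""
    p = F.normalize(predictor(online(sar)), dim=1)
    with torch.no_grad():
        z = F.normalize(target(optical), dim=1)
    return 2 - 2 * (p * z).sum(dim=1).mean()   # MSE between unit vectors

sar = torch.randn(4, 3, 32, 32)                # registered SAR / optical patches (toy shapes)
optical = torch.randn(4, 3, 32, 32)
print(negative_free_loss(sar, optical).item())
```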
Self-supervised learning for medical image analysis: a comprehensive review
Deep learning and advancements in computer vision offer significant potential for analyzing medical images, resulting in better healthcare and improved patient outcomes. Currently, the dominant approaches in machine learning are supervised learning and transfer learning. These methods are prevalent not only in medicine and healthcare but also across various other industries. They rely on large datasets that have been manually annotated to train increasingly sophisticated models. However, the manual labeling process leaves a wealth of untapped, unlabeled data in both public and private data repositories. Self-supervised learning (SSL), an emerging field within machine learning, provides a solution by leveraging this untapped, unlabeled data. Unlike traditional machine learning paradigms, SSL algorithms pre-train models using artificial supervisory signals generated from the unlabeled data. This comprehensive review explores the fundamental concepts, approaches, and advancements in self-supervised learning, with a particular emphasis on medical image datasets and their sources. By summarizing and highlighting the main contributions and findings of the reviewed articles, this analysis and synthesis aims to shed light on the current state of research in self-supervised learning. Through these rigorous efforts, the existing body of knowledge is synthesized, and implementation recommendations are provided for future researchers interested in harnessing self-supervised learning to develop classification models for medical imaging.
Multi-Stage Prompt Tuning for Political Perspective Detection in Low-Resource Settings
Political perspective detection in news media, i.e., identifying political bias in news articles, is an essential but challenging low-resource task. Prompt-based learning (i.e., discrete prompting and prompt tuning) achieves promising results in low-resource scenarios by adapting a pre-trained model to new tasks. However, these approaches suffer performance degradation when the target task involves a textual domain (e.g., the political domain) different from the pre-training task (e.g., masked language modeling on a general corpus). In this paper, we develop a novel multi-stage prompt tuning framework for political perspective detection. Our method involves two sequential stages: a domain-specific prompt tuning stage and a task-specific prompt tuning stage. In the first stage, we tune domain-specific prompts on a masked political phrase prediction (MP3) task to adapt the language model to the political domain. In the second, task-specific stage, we tune only the task-specific prompts for the downstream task, keeping the language model and the domain-specific prompts frozen. The experimental results demonstrate that our method significantly outperforms fine-tuning (i.e., model tuning) methods and state-of-the-art prompt tuning methods on the SemEval-2019 Task 4: Hyperpartisan News Detection and AllSides datasets.
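The basic mechanics of prompt tuning used in both stages (trainable prompt embeddings prepended to the input while the language model itself stays frozen) can be sketched as below. The tiny transformer stand-in, prompt lengths, and classification head are assumptions for illustration; this is not the paper's MP3 task or its exact two-stage training loop.

```python
import torch
import torch.nn as nn

class FrozenToyLM(nn.Module):
    """Stand-in for a frozen pretrained language model operating on embeddings."""
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, embeds):
        return self.head(self.backbone(embeds).mean(dim=1))

dim, n_domain, n_task = 32, 4, 4
lm = FrozenToyLM(dim)
for p in lm.parameters():
    p.requires_grad = False                    # the language model stays frozen

# Stage 1 would tune the domain prompts on a domain task; stage 2 would freeze
# them and tune only the task prompts. Both are prepended to the input here.
domain_prompts = nn.Parameter(torch.randn(1, n_domain, dim))   # tuned in stage 1
task_prompts = nn.Parameter(torch.randn(1, n_task, dim))       # tuned in stage 2

tokens = torch.randn(8, 20, dim)               # pretend token embeddings of an article
inputs = torch.cat([domain_prompts.expand(8, -1, -1),
                    task_prompts.expand(8, -1, -1), tokens], dim=1)
logits = lm(inputs)
print(logits.shape)                            # torch.Size([8, 2])
```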
FedUTN: federated self-supervised learning with updating target network
Self-supervised learning (SSL) is capable of learning noteworthy representations from unlabeled data, which has mitigated the problem of insufficient labeled data to a certain extent. SSL methods originally centered on centralized data, but growing awareness of privacy protection restricts the sharing of decentralized, unlabeled data generated by a variety of mobile devices, such as cameras, phones, and other terminals. Federated self-supervised learning (FedSSL) is the result of recent efforts to combine federated learning, which has typically been used for supervised learning, with SSL. Informed by past work, we propose a new FedSSL framework, FedUTN. This framework aims to permit each client to train a model that works well on both independent and identically distributed (IID) and non-IID data. Each party possesses two asymmetrical networks, a target network and an online network. FedUTN first aggregates the online network parameters of each terminal and then updates the terminals' target networks with the aggregated parameters, which is a radical departure from the update technique used in earlier studies. In conjunction with this method, we offer a novel control algorithm to replace EMA for the training operation. After extensive trials, we demonstrate: (1) the feasibility of using the aggregated online network to update the target network; (2) that FedUTN's aggregation strategy is simpler, more effective, and more robust; and (3) that FedUTN outperforms all other prevalent FedSSL algorithms and surpasses the SOTA algorithm by 0.5% to 1.6% under regular experimental conditions.
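The central update described in this abstract (average the clients' online-network parameters, then refresh each client's target network with the aggregated weights instead of a per-client EMA) can be sketched as follows. The toy networks, the number of clients, and the plain parameter averaging are illustrative assumptions, not the FedUTN implementation.

```python
import copy
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

# Each client holds an asymmetric pair: an online network and a target network.
clients = [{"online": make_net(), "target": make_net()} for _ in range(3)]

def federated_round(clients):
    """Average online-network parameters across clients, then refresh every
    target network with the aggregated weights (rather than an EMA update)."""
    online_states = [c["online"].state_dict() for c in clients]
    aggregated = {k: torch.stack([s[k] for s in online_states]).mean(dim=0)
                  for k in online_states[0]}
    for c in clients:
        c["online"].load_state_dict(copy.deepcopy(aggregated))
        c["target"].load_state_dict(copy.deepcopy(aggregated))

federated_round(clients)
# All targets now share the aggregated online weights.
print(torch.allclose(clients[0]["target"][0].weight, clients[1]["target"][0].weight))
```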