
 

Publication Number

2412073

 

Page Numbers

1-10

 

Paper Details

Self-Supervised Learning: How Self-Supervised Learning Approaches Can Reduce Dependence on Labeled Data

Authors

Gaurav Kashyap

Abstract

Self-supervised learning (SSL) is a promising paradigm that reduces the need for large labeled datasets when training machine learning models. By exploiting unlabeled data, SSL models learn data representations through pretext tasks, and these representations can then be fine-tuned for downstream tasks. This paper examines the development of self-supervised learning, its underlying techniques, and its potential to address the difficulties of obtaining labeled data. We review the main self-supervised methods, their applications, and how they can improve the generalization and scalability of machine learning models. We also discuss the challenges of deploying SSL across different domains and outline directions for future research. The study investigates how self-supervised learning strategies can yield notable gains across a range of machine learning tasks, especially when labeled data is scarce. [1] [2]
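To make the pretext-task idea concrete, the following is a minimal illustrative sketch (not taken from the paper) of one common self-supervised setup, rotation prediction: an encoder is trained on unlabeled images to predict how each image was rotated, producing "free" labels from the data itself, and the pretrained encoder would afterwards be fine-tuned on a small labeled set for the actual downstream task. All class and variable names here are hypothetical.

```python
# Illustrative sketch of a rotation-prediction pretext task (hypothetical code,
# not the paper's method): the rotation index serves as a free label.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder whose features are later fine-tuned."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def rotate_batch(x):
    """Build the pretext task: rotate each image by 0/90/180/270 degrees
    and return the rotation index as the target."""
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(x, k)]
    )
    return rotated, k

encoder = Encoder()
head = nn.Linear(64, 4)  # predicts which of the 4 rotations was applied
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

unlabeled = torch.rand(32, 3, 64, 64)  # stand-in for a batch of unlabeled images
for _ in range(5):  # toy-length pretext pre-training loop
    imgs, targets = rotate_batch(unlabeled)
    loss = nn.functional.cross_entropy(head(encoder(imgs)), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The pretrained `encoder` would then be fine-tuned on a small labeled dataset
# for the downstream task (e.g., classification with few labels).
```

The pretext head is discarded after pre-training; only the encoder's learned representation is carried over, which is what allows the downstream task to work with far fewer labels.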

Keywords

Self-Supervised Learning, Labeled Data, Unsupervised Learning, Deep Learning, Representation Learning.

 

. . .

Citation

Self-Supervised Learning: How Self-Supervised Learning Approaches Can Reduce Dependence on Labeled Data. Gaurav Kashyap. 2024. IJIRCT, Volume 10, Issue 4. Pages 1-10. https://www.ijirct.org/viewPaper.php?paperId=2412073
