The Common Stability Mechanism behind most Self-Supervised Learning Approaches
Feb. 26, 2024, 5:42 a.m. | Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars
cs.LG updates on arXiv.org arxiv.org
Abstract: The last couple of years have witnessed tremendous progress in self-supervised learning (SSL), a success that can be attributed to the introduction of useful inductive biases in the learning process, which enable the learning of meaningful visual representations while avoiding collapse. These inductive biases and constraints manifest themselves as different optimization formulations in SSL techniques, e.g. the use of negative examples in a contrastive formulation, or the exponential moving average and predictor in BYOL and …
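As a concrete illustration of one such stability mechanism, the sketch below shows a BYOL-style exponential-moving-average (EMA) target update in PyTorch. This is an assumption-laden minimal example, not the paper's code: the function name `ema_update` and the default decay `tau=0.996` are illustrative choices.

```python
import copy
import torch
import torch.nn as nn

def ema_update(online: nn.Module, target: nn.Module, tau: float = 0.996) -> None:
    """Update target parameters as an EMA of the online parameters.

    A tau close to 1 makes the target network evolve slowly, which is one
    of the stability mechanisms (avoiding representation collapse) that
    the abstract refers to. Illustrative sketch, not the paper's method.
    """
    with torch.no_grad():
        for p_online, p_target in zip(online.parameters(), target.parameters()):
            # target <- tau * target + (1 - tau) * online
            p_target.mul_(tau).add_(p_online, alpha=1.0 - tau)

# Usage: the target network starts as a frozen copy of the online encoder
# and is updated with ema_update after each optimizer step.
online = nn.Linear(4, 4)
target = copy.deepcopy(online)
ema_update(online, target)
```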