MVEB: Self-Supervised Learning with Multi-View Entropy Bottleneck
March 29, 2024, 4:44 a.m. | Liangjian Wen, Xiasi Wang, Jianzhuang Liu, Zenglin Xu
cs.CV updates on arXiv.org
Abstract: Self-supervised learning aims to learn representations that generalize effectively to downstream tasks. Many self-supervised approaches regard two views of an image as both the input and the self-supervised signal, assuming that each view contains the same task-relevant information and that the shared information is (approximately) sufficient for predicting downstream tasks. Recent studies show that discarding the superfluous information not shared between the views can improve generalization. Hence, the ideal representation is sufficient for downstream tasks …
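To make the two-view setup in the abstract concrete, here is a minimal NumPy sketch. It is an illustrative toy, not the MVEB objective itself: the encoder is a stand-in linear map, and only the alignment ("sufficiency") term is shown; MVEB additionally maximizes the entropy of the embedding to discard superfluous, view-specific information.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy stand-in 'encoder': a linear map followed by L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def alignment_loss(z1, z2):
    """Pull the embeddings of the two views together.

    This captures the 'shared information is sufficient' assumption:
    both views should map to (nearly) the same representation.
    """
    return float(np.mean(np.sum((z1 - z2) ** 2, axis=1)))

# Two 'views' of the same batch of inputs: the original plus a
# lightly perturbed copy (standing in for data augmentation).
x = rng.normal(size=(8, 16))
view1 = x
view2 = x + 0.01 * rng.normal(size=x.shape)

W = rng.normal(size=(16, 4))  # hypothetical encoder weights
z1, z2 = encode(view1, W), encode(view2, W)

loss = alignment_loss(z1, z2)  # small, since the views nearly agree
```

An entropy-bottleneck method would add a second term that pushes the embedding distribution toward maximum entropy, so that information not shared between the views is discarded rather than memorized.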