Task-Specific Pretraining with Noisy Labels for Remote Sensing Image Segmentation
Feb. 27, 2024, 5:47 a.m. | Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu
cs.CV updates on arXiv.org
Abstract: In recent years, self-supervision has drawn considerable attention in the remote sensing community due to its ability to reduce the demand for exact labels when training supervised deep learning models. Self-supervision methods generally use image-level information to pretrain models in an unsupervised fashion. Although these pretrained encoders are effective in many downstream tasks, their performance on segmentation tasks is often not as good as on classification tasks. On the other hand, many easily …
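The abstract refers to pretraining encoders with image-level information. One widely used image-level objective is a contrastive (NT-Xent) loss over embeddings of two augmented views of the same images; this is an illustrative assumption, since the truncated abstract does not state which self-supervision objective the paper uses. A minimal NumPy sketch:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over image-level embeddings.

    z1, z2: (N, D) arrays of L2-normalized embeddings of two augmented
    views of the same N images. Hypothetical, simplified illustration;
    not the paper's method.
    """
    z = np.concatenate([z1, z2], axis=0)        # (2N, D)
    sim = z @ z.T / temperature                 # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    n = z1.shape[0]
    # the positive for sample i is its other view, at index i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
loss = nt_xent_loss(z, z)   # identical views: positives are easy to match
print(loss)
```

Because this objective only compares whole-image embeddings, it gives the encoder no direct pixel-level training signal, which is one plausible reason such pretrained encoders transfer less well to segmentation than to classification.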