Web: http://arxiv.org/abs/2201.10963

Jan. 27, 2022, 2:10 a.m. | Sinuo Deng, Lifang Wu, Ge Shi, Lehao Xing, Meng Jian

cs.CV updates on arXiv.org

Contrastive Language-Image Pre-training (CLIP) represents the latest
incarnation of pre-trained vision-language models. Although CLIP has recently
shown its superior power on a wide range of downstream vision-language tasks
like Visual Question Answering, it is still underexplored for Image Emotion
Classification (IEC). Adapting CLIP to the IEC task poses three significant
challenges: a tremendous training-objective gap between pre-training and IEC,
suboptimal shared prompts, and prompts that are invariant across all
instances. In this paper, we
propose a general framework that shows how CLIP can …
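For context (this is not the paper's proposed framework), the baseline the abstract argues against is zero-shot CLIP with a single hand-written prompt template shared by, and invariant across, all images. A minimal sketch of that setup using the Hugging Face CLIP API follows; the emotion label set, the prompt template, and the file name example.jpg are illustrative assumptions, not values from the paper.

    # Baseline zero-shot CLIP classification with one shared, invariant prompt
    # template -- the setup the abstract describes as suboptimal for IEC.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical emotion classes; IEC datasets differ in their label sets.
    emotions = ["amusement", "awe", "contentment", "excitement",
                "anger", "disgust", "fear", "sadness"]
    # One fixed template shared by every instance (the limitation at issue).
    prompts = [f"a photo that evokes {e}" for e in emotions]

    image = Image.open("example.jpg")  # any RGB image
    inputs = processor(text=prompts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores, one per prompt.
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    for emotion, p in sorted(zip(emotions, probs.tolist()), key=lambda t: -t[1]):
        print(f"{emotion:12s} {p:.3f}")

A framework addressing the abstract's three challenges would presumably replace these hand-written, instance-invariant prompts with learned and/or instance-conditioned ones; the truncated abstract does not specify the mechanism.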

arxiv classification cv emotion learning
