April 24, 2023, 12:45 a.m. | Yewon Kim, YongTaek Lim, Dokyung Yoon, KyungWoo Song

cs.LG updates on arXiv.org

Large-scale pre-trained models are known to be transferable and to generalize
well to unseen datasets. Recently, multimodal pre-trained models such as CLIP
have shown significant performance improvements across diverse experiments.
However, when the labeled dataset is limited, generalization to a new dataset
or domain remains challenging. To improve generalization performance in
few-shot learning, diverse efforts have been made, such as prompt learning and
adapters. However, the current few-shot adaptation methods are not
interpretable, and they …
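For context only, here is a minimal sketch of the kind of adapter-style few-shot
adaptation the abstract refers to: a small trainable bottleneck module placed on
top of frozen CLIP features, while the backbone stays fixed. This is not the
paper's method; the Adapter class, feature dimension, residual ratio, and the
dummy features standing in for CLIP encoder outputs are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Adapter(nn.Module):
        """Illustrative bottleneck adapter over frozen CLIP features."""
        def __init__(self, dim: int = 512, reduction: int = 4, ratio: float = 0.2):
            super().__init__()
            self.ratio = ratio
            self.fc = nn.Sequential(
                nn.Linear(dim, dim // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(dim // reduction, dim),
                nn.ReLU(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual blend: only a small fraction of the feature is adapted,
            # preserving most of the pre-trained knowledge.
            return self.ratio * self.fc(x) + (1 - self.ratio) * x

    def few_shot_logits(image_feats, text_feats, adapter, temperature=100.0):
        """Cosine-similarity logits between adapted image features and class text features."""
        img = F.normalize(adapter(image_feats), dim=-1)
        txt = F.normalize(text_feats, dim=-1)
        return temperature * img @ txt.t()

    if __name__ == "__main__":
        # Dummy stand-ins for features produced by a frozen CLIP encoder
        # (e.g., encode_image / encode_text in openai/CLIP).
        image_feats = torch.randn(8, 512)   # 8 few-shot images
        text_feats = torch.randn(10, 512)   # 10 class prompts
        labels = torch.randint(0, 10, (8,))

        adapter = Adapter()
        optim = torch.optim.AdamW(adapter.parameters(), lr=1e-3)

        # Only the adapter is trained; the CLIP backbone would stay frozen.
        logits = few_shot_logits(image_feats, text_feats, adapter)
        loss = F.cross_entropy(logits, labels)
        loss.backward()
        optim.step()
        print(loss.item())

Only the handful of adapter parameters are updated from the few labeled shots,
which is why such methods are attractive when the labeled dataset is limited.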

