Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP
Feb. 16, 2024, 5:46 a.m. | Laura Niss, Kevin Vogt-Lowell, Theodoros Tsiligkaridis
cs.CV updates on arXiv.org arxiv.org
Abstract: Foundation models are presented as generalists that often perform well across a myriad of tasks. Fine-tuning these models, even on limited data, provides an additional boost in task-specific performance, but often at the cost of their wider generalization, an effect termed catastrophic forgetting. In this paper, we analyze the relation between task difficulty in the CLIP model and the performance of several simple parameter-efficient fine-tuning methods through the lens of domain generalization and catastrophic forgetting. …
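To make the parameter-efficient fine-tuning (PEFT) idea concrete, here is a minimal, self-contained sketch of the general recipe the abstract alludes to: freeze the pretrained backbone and update only a small task head, so almost all pretrained weights are left untouched. This is a toy NumPy illustration under assumed names (backbone_W, head_w), not the paper's actual methods or the CLIP architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pretrained encoder (its weights are never updated).
backbone_W = rng.normal(size=(16, 8))
backbone_before = backbone_W.copy()

# Toy labeled batch: label depends on the first input dimension.
X = rng.normal(size=(32, 16))
y = (X[:, 0] > 0).astype(float)

# Feature extraction with the frozen backbone.
features = np.tanh(X @ backbone_W)

# Parameter-efficient head: these few weights are the only trainable parameters.
head_w = np.zeros(8)
head_b = 0.0
lr = 0.5

# Plain gradient descent on a logistic-regression head (binary cross-entropy).
for _ in range(200):
    logits = features @ head_w + head_b
    p = 1.0 / (1.0 + np.exp(-logits))      # sigmoid
    grad = p - y                            # dLoss/dlogits for BCE
    head_w -= lr * features.T @ grad / len(y)
    head_b -= lr * grad.mean()

trainable = head_w.size + 1                 # head only
total = backbone_W.size + trainable         # backbone + head
acc = ((features @ head_w + head_b > 0) == (y > 0.5)).mean()
```

Because the backbone is frozen, the pretrained representation is preserved verbatim, which is the mechanism PEFT methods rely on to limit catastrophic forgetting while still adapting to the downstream task.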