June 21, 2024, 4:47 a.m. | Vaibhav Singh, Rahaf Aljundi, Eugene Belilovsky

cs.LG updates on arXiv.org

arXiv:2406.13653v1 Announce Type: new
Abstract: Foundational vision-language models have shown impressive performance on various downstream tasks. Yet, there is still a pressing need to update these models later as new tasks or domains become available. Ongoing Continual Learning (CL) research provides techniques to overcome catastrophic forgetting of previous information when new knowledge is acquired. To date, CL techniques focus only on the supervised training sessions. This results in significant forgetting yielding inferior performance to even the prior model zero shot …
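The failure mode the abstract describes can be made concrete: fine-tuning a vision-language model on a new task often degrades its zero-shot accuracy on earlier data, sometimes below what the original frozen model achieves. Below is a minimal sketch (not the paper's method) of how that degradation is typically measured with a CLIP-style model via Hugging Face transformers; the checkpoint name and the old-task inputs (old_images, old_labels, old_classes) are illustrative assumptions.

    # Sketch: quantify catastrophic forgetting as the drop in zero-shot
    # accuracy on a previously seen task after fine-tuning on a new one.
    import torch
    from transformers import CLIPModel, CLIPProcessor

    device = "cuda" if torch.cuda.is_available() else "cpu"
    name = "openai/clip-vit-base-patch32"  # assumed checkpoint for illustration
    model = CLIPModel.from_pretrained(name).to(device)
    processor = CLIPProcessor.from_pretrained(name)

    @torch.no_grad()
    def zero_shot_accuracy(model, images, labels, class_names):
        # Classify each image by similarity to text prompts, CLIP-style.
        prompts = [f"a photo of a {c}" for c in class_names]
        inputs = processor(text=prompts, images=images,
                           return_tensors="pt", padding=True).to(device)
        logits = model(**inputs).logits_per_image  # (batch, num_classes)
        preds = logits.argmax(dim=-1)
        return (preds == labels.to(device)).float().mean().item()

    # acc_before = zero_shot_accuracy(model, old_images, old_labels, old_classes)
    # ... fine-tune `model` on the new task here ...
    # acc_after = zero_shot_accuracy(model, old_images, old_labels, old_classes)
    # forgetting = acc_before - acc_after  # > 0 indicates forgetting

A positive forgetting value means the updated model is now worse on the old task than the original model was zero-shot, which is exactly the regression the abstract highlights.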
