Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting
April 29, 2024, 4:45 a.m. | Reza Akbarian Bafghi, Nidhin Harilal, Claire Monteleoni, Maziar Raissi
cs.CV updates on arXiv.org
Abstract: Artificial neural networks often suffer from catastrophic forgetting, where learning new concepts leads to a complete loss of previously acquired knowledge. We observe that this issue is particularly magnified in vision transformers (ViTs), where fine-tuning on new tasks after pre-training can significantly degrade the model's original general abilities. For instance, a DINO ViT-Base/16 pre-trained on ImageNet-1k loses over 70% accuracy on ImageNet-1k after just 10 iterations of fine-tuning on CIFAR-100. Overcoming this stability-plasticity dilemma is …
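The failure mode the abstract describes is what parameter-efficient fine-tuning tries to avoid: instead of updating every weight of the pre-trained backbone, only a small set of added parameters is trained, so the original representation survives. Below is a minimal sketch of one common PEFT technique, LoRA-style low-rank adapters, applied to a ViT-B/16. It is not the paper's method; the torchvision supervised checkpoint stands in for the DINO one, the choice of MLP layers as adapter targets and the rank/alpha values are illustrative assumptions.

```python
# Sketch: LoRA-style parameter-efficient fine-tuning of a ViT-B/16.
# Assumptions (not from the paper): torchvision supervised weights as a
# stand-in for DINO, adapters on the MLP linears, rank=8, alpha=16.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B is zero-initialized, so the update starts at zero and the model
        # initially behaves exactly like the pre-trained network.
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the whole backbone first

# Wrap the two MLP linears in every encoder block with LoRA adapters.
for block in model.encoder.layers:
    block.mlp[0] = LoRALinear(block.mlp[0])
    block.mlp[3] = LoRALinear(block.mlp[3])

# Fresh head for the new task (CIFAR-100 has 100 classes), trained from scratch.
model.heads = nn.Linear(model.hidden_dim, 100)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")
```

Because the frozen base weights are untouched, removing the adapters recovers the original ImageNet features exactly, which is the sense in which such schemes sidestep the 70%-accuracy collapse the abstract reports for full fine-tuning.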