[D] LLMs are known for catastrophic forgetting during continual fine-tuning
Feb. 6, 2024, 3:57 p.m. | /u/kekkimo
Machine Learning www.reddit.com
In other words, how can LLMs retain what they learned in earlier training batches, both during pre-training and during continual fine-tuning?
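One common mitigation for the forgetting the post describes is rehearsal (experience replay): keep a buffer of examples from earlier training and mix a small fraction of them into every fine-tuning batch, so the model keeps seeing old data alongside the new. A minimal sketch of the batching logic, assuming in-memory lists of examples (the function name and parameters here are illustrative, not from the post):

```python
import random

def mixed_batches(new_data, replay_buffer, batch_size=8, replay_frac=0.25, seed=0):
    """Yield fine-tuning batches in which a fraction of each batch is
    replayed from earlier training data (simple rehearsal sketch)."""
    rng = random.Random(seed)
    n_replay = int(batch_size * replay_frac)   # slots reserved for old data
    n_new = batch_size - n_replay              # slots for new fine-tuning data
    for start in range(0, len(new_data), n_new):
        batch = new_data[start:start + n_new]
        # Fill the reserved slots with examples sampled from the old-data buffer.
        batch += rng.sample(replay_buffer, min(n_replay, len(replay_buffer)))
        rng.shuffle(batch)
        yield batch
```

With `batch_size=8` and `replay_frac=0.25`, each batch carries 6 new examples and 2 replayed ones; tuning that fraction trades off plasticity on the new task against retention of the old one.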