Web: http://arxiv.org/abs/2004.12908

Jan. 12, 2022, 2:10 a.m. | Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Wei

cs.LG updates on arXiv.org

In biological learning, data are used to improve performance not only on the
current task, but also on previously encountered and as-yet-unencountered
tasks. In contrast, classical machine learning starts from a blank slate, or
tabula rasa, using data only for the single task at hand. While typical
transfer learning algorithms can improve performance on future tasks, their
performance on prior tasks degrades upon learning new tasks (a phenomenon
called catastrophic forgetting). Many recent approaches for continual or
lifelong learning have attempted to maintain performance on old tasks when
given new ones. But striving to …
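
The forgetting effect the abstract describes is easy to reproduce. Below is a minimal sketch (not from the paper) that trains a single linear classifier sequentially on two synthetic, conflicting binary tasks and measures how task-A accuracy drops after training on task B. The data generator, model, and all parameter choices are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of catastrophic forgetting, assuming two synthetic
# binary-classification "tasks" and one shared linear model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    """Gaussian blobs for a toy binary task; `shift` moves both class means."""
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(500, 20))
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(500, 20))
    X = np.vstack([X0, X1])
    y = np.array([0] * 500 + [1] * 500)
    idx = rng.permutation(len(y))
    return X[idx], y[idx]

# Task B's optimal decision boundary differs from task A's (its class means
# are shifted), so sequential training on B overwrites the task-A fit.
XA, yA = make_task(shift=0.0)
XB, yB = make_task(shift=4.0)

clf = SGDClassifier(loss="log_loss", random_state=0)

# Train on task A only, then record task-A accuracy.
for _ in range(20):
    clf.partial_fit(XA, yA, classes=[0, 1])
acc_A_before = clf.score(XA, yA)

# Continue training on task B with no access to task A's data.
for _ in range(20):
    clf.partial_fit(XB, yB)
acc_A_after = clf.score(XA, yA)

print(f"task A accuracy before task B: {acc_A_before:.3f}")
print(f"task A accuracy after  task B: {acc_A_after:.3f}")  # typically drops sharply
```

Continual and lifelong learning methods are judged on keeping that second number from collapsing; forward- and backward-transfer metrics generalize this before/after comparison across many tasks.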

Tags: ai, arxiv, complexity, for, learning
