Nov. 5, 2023, 6:44 a.m. | Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Ronald Kemker, Christopher Kanan


In supervised continual learning, a deep neural network (DNN) is updated with
an ever-growing data stream. Unlike the offline setting, where data can be shuffled,
no distributional assumptions can be made about the data stream. Ideally,
only one pass through the dataset is needed for computational efficiency.
However, existing methods rely on assumptions that do not hold in real-world
applications, while still failing to improve computational efficiency. In this
paper, we propose a novel continual learning method, SIESTA …
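
To make the setting concrete, the sketch below shows a single-pass supervised continual learning loop: each batch from the stream is used for one gradient update and then discarded, with no shuffling or revisiting. This is only a minimal illustration of the setting described in the abstract (assuming PyTorch and a generic `model` and `stream`), not an implementation of the SIESTA method, which is only partially quoted here.

```python
import torch
import torch.nn as nn
import torch.optim as optim

def continual_update(model: nn.Module, stream, lr: float = 1e-3, device: str = "cpu"):
    """Update `model` on an ever-growing stream of (inputs, labels) batches.

    Single pass: batches arrive in stream order, are used once, and are not stored.
    """
    model.to(device).train()
    optimizer = optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for inputs, labels in stream:          # no shuffling; stream order is fixed
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()                   # one update per batch, then the batch is discarded
    return model
```
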

