Nov. 5, 2023, 6:44 a.m. | Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, Ronald Kemker, Christopher Kanan

cs.LG updates on arXiv.org

In supervised continual learning, a deep neural network (DNN) is updated on an ever-growing data stream. Unlike the offline setting, where data is shuffled, we cannot make any distributional assumptions about the stream. Ideally, only a single pass through the data is needed for computational efficiency. However, existing methods rest on assumptions that do not hold in real-world applications, and at the same time fail to improve computational efficiency. In this paper, we propose a novel continual learning
method, SIESTA …
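
To make the setting concrete, below is a minimal sketch of single-pass (online) continual learning in PyTorch: each batch from the stream is used for exactly one gradient update and never revisited, so per-sample compute stays constant as the stream grows. This illustrates the problem setting only, not SIESTA's method (the abstract is truncated); the model, the synthetic stream, and all hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def synthetic_stream(num_batches=100, batch_size=32, dim=64, num_classes=10):
    """Yield (x, y) batches once each; the order is arbitrary, not shuffled i.i.d."""
    for _ in range(num_batches):
        x = torch.randn(batch_size, dim)
        y = torch.randint(0, num_classes, (batch_size,))
        yield x, y

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Single pass: each batch is seen exactly once, unlike offline training,
# which shuffles and revisits the dataset over many epochs.
for x, y in synthetic_stream():
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Naive sequential updates like this are prone to catastrophic forgetting; the point of the sketch is the single-pass data access pattern that continual learning methods must work within.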

