Nov. 16, 2022, 2:11 a.m. | Yiming Fei, Jiangang Li, Yanan Li

cs.LG updates on arXiv.org

When performing real-time learning tasks, the radial basis function neural
network (RBFNN) is expected to make full use of the training samples so that
its learning accuracy and generalization capability are guaranteed. Since the
approximation capability of the RBFNN is finite, training methods with
forgetting mechanisms, such as the forgetting factor recursive least squares
(FFRLS) and stochastic gradient descent (SGD) methods, are widely used to
preserve the RBFNN's ability to learn new knowledge. However, with these
forgetting mechanisms, …

Tags: approximation, arxiv, memory, neural networks, real-time, recursive least squares
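The truncated abstract names FFRLS as one training method with a forgetting mechanism. As a rough illustration of that mechanism only, not of the paper's proposed method, the sketch below applies an FFRLS update to the output weights of a Gaussian RBF network. The centers, kernel width, forgetting factor, and toy sin(x) data stream are all illustrative assumptions.

import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations for a single input x."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class FFRLS:
    """Forgetting factor recursive least squares for RBFNN output weights."""

    def __init__(self, n_features, lam=0.98, p0=1e3):
        self.w = np.zeros(n_features)      # output weight estimate
        self.P = p0 * np.eye(n_features)   # inverse-correlation matrix
        self.lam = lam                     # forgetting factor, 0 < lam <= 1

    def update(self, phi, y):
        """One recursive update with sample (phi, y); returns the a priori error."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        e = y - self.w @ phi                        # a priori prediction error
        self.w += k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

# Usage: learn y = sin(x) online from a stream of noisy samples.
rng = np.random.default_rng(0)
centers = np.linspace(-3, 3, 15)[:, None]           # assumed center grid
model = FFRLS(n_features=15, lam=0.98)
for _ in range(2000):
    x = rng.uniform(-3, 3, size=1)
    phi = rbf_features(x, centers, width=0.5)
    model.update(phi, np.sin(x[0]) + 0.01 * rng.standard_normal())

With lam < 1, old samples are exponentially discounted, which keeps the estimator responsive to new data; this is the forgetting behavior, and its side effects, that the abstract discusses.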
