On the influence of stochastic roundoff errors on the convergence of the gradient descent method with low-precision floating-point computation. (arXiv:2202.12276v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
When implementing the gradient descent method in low precision, the
employment of stochastic rounding schemes helps to prevent stagnation of
convergence caused by the vanishing gradient effect. Unbiased stochastic
rounding yields zero bias by preserving small updates with probabilities
proportional to their relative magnitudes. This study provides a theoretical
explanation for the stagnation of the gradient descent method in low-precision
computation. Additionally, we propose two new stochastic rounding schemes that
trade the zero-bias property for a larger probability to …
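The stagnation effect the abstract describes can be illustrated with a small sketch: when a gradient update is smaller than half the spacing of the low-precision grid, round-to-nearest always discards it, while unbiased stochastic rounding keeps it with probability proportional to its relative magnitude. The uniform grid spacing `ulp` below is a simplifying stand-in for a real floating-point format, not part of the paper.

```python
import math
import random

def stochastic_round(x, ulp):
    """Unbiased stochastic rounding of x to a grid of spacing ulp.

    Rounds down or up at random so that the expected value of the
    result equals x (zero bias): the probability of rounding up is
    the fractional position of x within its grid interval.
    """
    lo = math.floor(x / ulp) * ulp   # nearest grid point below x
    frac = (x - lo) / ulp            # position in [0, 1) within the interval
    return lo + ulp if random.random() < frac else lo

# A gradient update too small for round-to-nearest to register:
w, lr, g, ulp = 1.0, 1.0, 0.003, 0.01   # lr * g = 0.003 < ulp / 2
random.seed(0)
samples = [stochastic_round(w - lr * g, ulp) for _ in range(10_000)]
mean = sum(samples) / len(samples)
# Round-to-nearest would return 1.0 every time (stagnation).
# Stochastic rounding lands on 0.99 about 30% of the time, so the
# mean of many rounded updates tracks the true value 0.997.
```

This is the zero-bias behaviour described above: individual steps are still coarse, but no systematic error accumulates, which is what prevents gradient descent from stalling in low precision.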