Sept. 28, 2022, 1:11 a.m. | Thomas Hamm

cs.LG updates on arXiv.org

We present a derivation of the gradients of feedforward neural networks using
Fréchet calculus, which is arguably more compact than the ones usually
presented in the literature. We first derive the gradients for ordinary neural
networks working on vectorial data and show how the resulting formulas yield a
simple and efficient algorithm for calculating a neural network's gradients.
Subsequently, we show how our analysis extends to more general neural network
architectures including, but not limited to, …
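The abstract is truncated, so the algorithm itself is not shown here. For orientation, the compactness presumably comes from the chain rule for Fréchet derivatives: for a composition of layers, $D(f_2 \circ f_1)(x) = Df_2(f_1(x)) \circ Df_1(x)$, so the derivative of the whole network is a composition of linear maps that can be evaluated efficiently in reverse order. The NumPy sketch below illustrates that reverse-order evaluation (standard backpropagation) for a hypothetical two-layer tanh network with squared-error loss; the layer sizes, the loss, and all variable names are assumptions for the example, not the paper's notation.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)  # hypothetical layer sizes
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
x, y = rng.standard_normal(3), rng.standard_normal(2)

# Forward pass, caching the intermediates the backward pass needs.
z1 = W1 @ x + b1
a1 = np.tanh(z1)
yhat = W2 @ a1 + b2
loss = 0.5 * np.sum((yhat - y) ** 2)

# Backward pass: apply the adjoints of the layer derivatives in reverse order.
d_yhat = yhat - y            # derivative of the squared-error loss
dW2 = np.outer(d_yhat, a1)   # gradient w.r.t. W2
db2 = d_yhat
d_a1 = W2.T @ d_yhat         # chain rule through the second linear map
d_z1 = d_a1 * (1 - a1**2)    # chain rule through tanh (tanh' = 1 - tanh^2)
dW1 = np.outer(d_z1, x)      # gradient w.r.t. W1
db1 = d_z1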

arxiv calculus derivation network neural network
