Leveraging Intrinsic Gradient Information for Further Training of Differentiable Machine Learning Models. (arXiv:2112.00094v2 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
Designing models that produce accurate predictions is the fundamental
objective of machine learning (ML). This work presents methods demonstrating
that when the derivatives of target variables (outputs) with respect to inputs
can be extracted from processes of interest, e.g., neural network (NN)-based
surrogate models, they can be leveraged to further improve the accuracy of
differentiable ML models. This paper generalises the idea and provides
practical methodologies that can be used to leverage gradient information (GI)
across a variety of …
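The core idea (matching known output derivatives in addition to output values) can be sketched as a Sobolev-style training loop. This is a minimal illustration, not the paper's method: the model, data, loss weight, and learning rate below are all hypothetical, and the known derivatives stand in for gradient information extracted from a process of interest.

```python
import numpy as np

# Illustrative data: target values y and known target derivatives dy/dx
# (the "gradient information" the abstract refers to).
x = np.linspace(-1.0, 1.0, 64)
y = np.sin(np.pi * x)
dy = np.pi * np.cos(np.pi * x)

# Tiny differentiable model: f(x) = w1*x + w2*x^3 (an odd polynomial basis).
w = np.array([0.1, -0.1])
lam = 0.5   # weight on the gradient-matching term (hypothetical choice)
lr = 0.05

def losses(w):
    f = w[0] * x + w[1] * x**3
    fx = w[0] + 3.0 * w[1] * x**2        # analytic df/dx of the model
    return np.mean((f - y) ** 2), np.mean((fx - dy) ** 2)

val0, _ = losses(w)
for _ in range(500):
    f = w[0] * x + w[1] * x**3
    fx = w[0] + 3.0 * w[1] * x**2
    r, g = f - y, fx - dy
    # Gradient of  L = MSE(f, y) + lam * MSE(df/dx, dy/dx)  w.r.t. (w1, w2)
    gw1 = 2.0 * np.mean(r * x) + lam * 2.0 * np.mean(g)
    gw2 = 2.0 * np.mean(r * x**3) + lam * 2.0 * np.mean(g * 3.0 * x**2)
    w -= lr * np.array([gw1, gw2])

val1, grad1 = losses(w)
```

With the derivative-matching term included, the fit is penalised not only for wrong values but also for a wrong local slope, which is what lets the extra gradient information improve accuracy beyond plain value regression.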