April 14, 2023, 3 p.m. | Google AI

Google AI Blog ai.googleblog.com

Posted by Matthew Streeter, Software Engineer, Google Research


Derivatives play a central role in optimization and machine learning. By locally approximating a training loss, derivatives guide an optimizer toward lower values of the loss. Automatic differentiation frameworks such as TensorFlow, PyTorch, and JAX are an essential part of modern machine learning, making it feasible to use gradient-based optimizers to train very complex models.
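As a minimal sketch of what such a framework provides, the JAX snippet below computes the gradient of a simple loss automatically and uses it to take one gradient-descent step. The quadratic loss, data shapes, and step size here are illustrative choices, not taken from the post.

    import jax
    import jax.numpy as jnp

    # Illustrative quadratic loss; this stands in for any differentiable
    # training loss over parameters w, inputs x, and targets y.
    def loss(w, x, y):
        return jnp.mean((x @ w - y) ** 2)

    # jax.grad builds a new function that returns d(loss)/dw exactly,
    # with no hand-written derivative code.
    grad_fn = jax.grad(loss)

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (32, 3))
    y = jnp.ones(32)
    w = jnp.zeros(3)

    g = grad_fn(w, x, y)   # gradient of the loss at the current w
    w = w - 0.1 * g        # one gradient-descent step; 0.1 is an arbitrary step size

Note that such a step is purely local: the gradient alone says nothing about how far it is safe to move, which is the limitation the post turns to next.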




But are derivatives all we need? By themselves, derivatives only tell us how a …
