July 11, 2022, 1:57 p.m. | Roland Schätzle

Towards Data Science - Medium (towardsdatascience.com)

Photo by SIMON LEE on Unsplash

Flux.jl on MNIST — What about ADAM?

So far we have seen a performance analysis using the standard gradient descent optimizer. But what results do we get if we use a more sophisticated one like ADAM?

In Flux.jl on MNIST — Variations of a theme, I presented three neural networks for recognizing handwritten digits as well as three variations of the gradient descent algorithm (GD) for training these networks.
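In Flux.jl, switching from plain gradient descent to ADAM is essentially a one-line change of the optimizer. The sketch below illustrates this with a minimal MLP on MNIST; the model, hyperparameters and data pipeline are illustrative placeholders (written in the implicit-parameters training style of Flux as of mid-2022), not the exact networks or setup from the article.

```julia
using Flux
using Flux: onehotbatch
using MLDatasets: MNIST

# Load and flatten the MNIST training images (illustrative pipeline,
# not necessarily the one used in the article).
train_x, train_y = MNIST(split = :train)[:]
X = Flux.flatten(train_x)              # 784 × 60000 input matrix
Y = onehotbatch(train_y, 0:9)          # 10 × 60000 one-hot labels

# A small MLP as a stand-in for the article's networks.
model = Chain(Dense(784 => 32, relu), Dense(32 => 10))

loss(x, y) = Flux.logitcrossentropy(model(x), y)

# The optimizer is the only thing that changes:
# opt = Descent(0.1)      # plain gradient descent
opt = ADAM(0.001)         # adaptive moment estimation

data = Flux.DataLoader((X, Y), batchsize = 128, shuffle = true)

# One training epoch in the implicit-parameters style (Flux ≤ 0.13).
Flux.train!(loss, Flux.params(model), data, opt)
```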

The follow-up article Flux.jl on …

adam data science julia machine-learning mnist neural networks
