June 27, 2022, 1:10 a.m. | Moshe Sipper

cs.LG updates on arXiv.org

Activation functions (AFs), which are pivotal to the success (or failure) of
a neural network, have received increased attention in recent years, with
researchers seeking to design novel AFs that improve some aspect of network
performance. In this paper we take another direction, combining a slew of
known AFs into successful architectures. We propose three methods for doing
so beneficially: 1) generate AF architectures at random; 2) use Optuna, an
automatic hyper-parameter optimization software framework, with a
Tree-structured Parzen …
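
The abstract is cut off above, but the first two methods it names can be sketched: drawing an AF architecture at random, and searching over combinations of known AFs with Optuna's Tree-structured Parzen Estimator (TPE) sampler. The following Python sketch is illustrative only, not the paper's code: the three-slot AF architecture, the candidate AF pool, and the toy scoring table standing in for trained-network accuracy are all assumptions.

    # Illustrative sketch of methods 1 and 2 (assumptions noted below).
    import random

    import optuna

    CANDIDATE_AFS = ["relu", "tanh", "sigmoid", "elu", "swish"]  # assumed pool

    # Method 1: generate an AF architecture at random (one AF per assumed slot).
    random_arch = [random.choice(CANDIDATE_AFS) for _ in range(3)]

    # Toy stand-in for "train a network with these AFs, return validation accuracy".
    TOY_SCORE = {"relu": 0.90, "swish": 0.85, "elu": 0.80, "tanh": 0.70, "sigmoid": 0.60}

    def objective(trial: optuna.Trial) -> float:
        # Method 2: let the TPE sampler pick one AF per slot.
        afs = [trial.suggest_categorical(f"af_{i}", CANDIDATE_AFS) for i in range(3)]
        return sum(TOY_SCORE[a] for a in afs) / len(afs)

    study = optuna.create_study(direction="maximize",
                                sampler=optuna.samplers.TPESampler(seed=0))
    study.optimize(objective, n_trials=50)
    print("random architecture:", random_arch)
    print("TPE best architecture:", study.best_params)

In the paper's actual setup, the objective would presumably train and evaluate a network built from the chosen AFs rather than consult a lookup table; the sketch only shows how TPE searches the categorical space of AF combinations.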

