[R] Zero Mean Leaky ReLU
March 26, 2024, 1:55 p.m. | /u/1nyouendo | r/MachineLearning (www.reddit.com)
At the risk of groans of "not another ReLU activation function variant", I thought I'd share a simple trick to make the (Leaky)ReLU better behaved, in particular to address the criticism that the (Leaky)ReLU is not zero-centred.
The simple trick is to offset the (Leaky)ReLU unit by the expectation of its output under a zero-mean normally distributed input with standard deviation s:
Zero Mean Leaky ReLU:

y(x) = max(x, a*x) - k

k = (1 - a)*s / sqrt(2*pi)

y'(x) = a for y < -k (i.e. x < 0), 1 otherwise

(The constant k is just the expectation of the unshifted unit: for X ~ N(0, s^2), E[max(X, a*X)] = (1 - a)*E[max(X, 0)] = (1 - a)*s/sqrt(2*pi).)
The resulting …
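As a quick illustration, here is a minimal NumPy sketch of the unit as defined above. The function names (zm_leaky_relu, zm_leaky_relu_grad) and defaults (a = 0.01, s = 1.0) are illustrative assumptions, with s taken to be the standard deviation of the zero-mean input; the post itself prescribes only the formulas.

```python
import numpy as np

def zm_leaky_relu(x, a=0.01, s=1.0):
    """Zero-mean leaky ReLU: a leaky ReLU shifted down by its expected
    output under a zero-mean normal input with standard deviation s."""
    k = (1.0 - a) * s / np.sqrt(2.0 * np.pi)  # E[max(X, a*X)] for X ~ N(0, s^2)
    return np.maximum(x, a * x) - k

def zm_leaky_relu_grad(y, a=0.01, s=1.0):
    """Derivative written in terms of the output y, as in the post:
    a where y < -k (equivalently x < 0), 1 otherwise."""
    k = (1.0 - a) * s / np.sqrt(2.0 * np.pi)
    return np.where(y < -k, a, 1.0)

# Sanity check: for N(0, s^2) input the output mean should be ~0.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1_000_000)
print(zm_leaky_relu(x).mean())  # approximately 0
```

Note that, as with the plain (Leaky)ReLU, the gradient here can be computed from the output alone, since the shift by k maps the x < 0 branch exactly onto y < -k.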