Zero Mean Leaky ReLU
March 26, 2024, 3:07 p.m. | /u/1nyouendo
Deep Learning www.reddit.com
At the risk of groans of "not another ReLU activation function variant", I thought I'd share a simple trick to make the (Leaky)ReLU better behaved, in particular to address the criticism that the (Leaky)ReLU is not zero-centred.
The simple trick is to offset the (Leaky)ReLU unit by the expectation of its output under a zero-mean, normally distributed input with standard deviation s:
Zero Mean Leaky ReLU:
y(x) = max(x, a*x) - k
k = ((1 - a)*s) / sqrt(2*pi)
y' = a for y < -k, 1 otherwise
The resulting …
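A minimal NumPy sketch of the idea, assuming a leak slope a and a pre-activation input that is roughly zero-mean Gaussian with standard deviation s (defaults a = 0.01 and s = 1.0 are my own choices, and the function names are illustrative, not from the post):

import numpy as np

def zero_mean_leaky_relu(x, a=0.01, s=1.0):
    # Offset k = E[max(x, a*x)] for x ~ N(0, s^2), which is (1 - a)*s/sqrt(2*pi),
    # so the unit's output has (approximately) zero mean under that input.
    k = (1.0 - a) * s / np.sqrt(2.0 * np.pi)
    return np.maximum(x, a * x) - k

def zero_mean_leaky_relu_grad(x, a=0.01):
    # Derivative w.r.t. x: a on the negative side, 1 on the positive side;
    # the constant offset k does not affect the gradient.
    return np.where(x < 0.0, a, 1.0)

# Quick sanity check: the empirical output mean should be close to zero.
x = np.random.default_rng(0).normal(0.0, 1.0, size=1_000_000)
print(zero_mean_leaky_relu(x).mean())  # ~0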