March 21, 2024, 4:24 a.m. | /u/Fun-5749

Deep Learning www.reddit.com

If we need a non-linear activation function for the hidden layers, why does ReLU qualify? It is linear for positive inputs.
How does this maintain non-linearity?
Can we say that features cannot be negative, and that is why ReLU turns the neuron off?
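For concreteness, here is a minimal NumPy sketch of the point in question: ReLU is linear on each half of its domain, but taken as a whole it fails the additivity test f(a + b) = f(a) + f(b) that any linear function must satisfy.

```python
import numpy as np

def relu(x):
    """ReLU: identity for positive inputs, zero for negative inputs."""
    return np.maximum(0.0, x)

# A linear function f must satisfy f(a + b) == f(a) + f(b).
# ReLU breaks this whenever the two inputs straddle zero:
a, b = -1.0, 2.0
print(relu(a + b))        # relu(1.0)      -> 1.0
print(relu(a) + relu(b))  # 0.0 + 2.0      -> 2.0  (not equal: ReLU is non-linear)
```

The kink at zero is the source of the non-linearity: each ReLU unit can "bend" the function at a different point, so stacking layers of them produces piecewise-linear functions with many regions rather than a single straight line.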
