March 21, 2024, 4:24 a.m. | /u/Fun-5749

Deep Learning www.reddit.com

We say we need a non-linear activation function for the hidden layers, but ReLU is linear for positive activations.
How does this maintain non-linearity?
Can we say that the features cannot be negative, and that is why ReLU turns the neuron off?
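To make the question concrete, here is a minimal NumPy sketch (my own illustration, not from the original post, assuming the standard definition ReLU(x) = max(0, x)) showing that ReLU is linear on the positive side yet fails the additivity test a linear function would have to pass:

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x): identity for x > 0, zero for x <= 0
    return np.maximum(0, x)

a, b = 2.0, -3.0

# A linear function f must satisfy f(a + b) == f(a) + f(b).
print(relu(a + b))        # relu(-1.0) -> 0.0
print(relu(a) + relu(b))  # 2.0 + 0.0  -> 2.0, so additivity fails
```

The kink at zero is what breaks the linearity property, even though each half of the function on its own is a straight line.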

