Jan. 2, 2024, 9:39 p.m. | /u/IWearMyFace

Machine Learning | www.reddit.com

When training neural networks, we typically borrow the statistical practice of maximum-likelihood estimation (MLE) on IID data and minimize the mean of a per-sample loss function.
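For concreteness, a minimal sketch of that standard objective, assuming PyTorch (`model`, `inputs`, and `targets` are illustrative placeholders, not from the post):

```python
import torch
import torch.nn.functional as F

def mle_mean_loss(model, inputs, targets):
    # Standard MLE-style objective under an IID assumption:
    # compute a per-sample negative log-likelihood and average it over the batch.
    logits = model(inputs)
    return F.cross_entropy(logits, targets, reduction="mean")
```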

However, to a first approximation, biological natural selection has had to mitigate extreme negative outcomes (that is, prevent death) rather than optimize average outcomes. I wonder if this accounts for some of the difference in inductive biases between animal brains and our current neural networks.
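One way to make that contrast concrete, purely as a sketch and not the experiment the post has in mind: replace the batch mean with a tail-focused statistic such as CVaR, i.e. average only the worst alpha-fraction of per-sample losses. The CVaR choice and the `alpha` parameter are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def cvar_loss(model, inputs, targets, alpha=0.1):
    # Risk-sensitive alternative: emphasize extreme negative outcomes by
    # averaging only the worst alpha-fraction of per-sample losses,
    # instead of the mean over the whole batch.
    logits = model(inputs)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    k = max(1, int(alpha * per_sample.numel()))
    worst_k, _ = torch.topk(per_sample, k)
    return worst_k.mean()
```

In a training loop, one would swap the mean objective above for this one and compare what the two models end up learning; the comparison protocol itself is left open here, as in the post.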

So who’s run the following experiment (or something similar) …
