April 5, 2022, 1:40 p.m. | /u/seraschka

Machine Learning www.reddit.com

A short blog post explaining that the negative log-likelihood loss, the logistic regression loss, and the binary cross-entropy loss are all the same thing. Plus, a brief discussion of why PyTorch's BCEWithLogitsLoss is numerically more stable than applying BCELoss to probabilities.

[https://sebastianraschka.com/blog/2022/losses-learned-part1.html](https://sebastianraschka.com/blog/2022/losses-learned-part1.html)
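
To make the equivalence and the stability point concrete, here is a minimal PyTorch sketch (not taken from the post itself; the tensor values are illustrative): it computes the same loss three ways, then shows how the hand-written probability formula blows up once the sigmoid saturates, while the logits-based version stays finite.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.5, -1.0, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])

# 1) Binary cross-entropy on probabilities (what nn.BCELoss computes)
probs = torch.sigmoid(logits)
bce = F.binary_cross_entropy(probs, targets)

# 2) The same loss computed directly from logits (nn.BCEWithLogitsLoss)
bce_logits = F.binary_cross_entropy_with_logits(logits, targets)

# 3) Negative log-likelihood of a Bernoulli model, written out by hand:
#    -mean[ y * log(p) + (1 - y) * log(1 - p) ]
nll = -(targets * torch.log(probs)
        + (1 - targets) * torch.log(1 - probs)).mean()

print(bce.item(), bce_logits.item(), nll.item())  # identical up to float error

# Stability: for a large logit, sigmoid saturates to exactly 1.0 in float32,
# so log(1 - p) in the hand-written version diverges to -inf, while the
# logits-based version stays exact thanks to the log-sum-exp trick.
x = torch.tensor([40.0])
y = torch.tensor([0.0])
p = torch.sigmoid(x)                              # == 1.0 in float32
print(-torch.log(1 - p))                          # inf
print(F.binary_cross_entropy_with_logits(x, y))   # tensor(40.)
```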
