Sept. 8, 2022, 5:50 p.m. | /u/TheRPGGamerMan

Deep Learning www.reddit.com

I came across an issue when writing my own neural network. I was messing around with backpropagation and noticed that it seems limited to correcting outputs directly toward the target values, as if each output were the answer itself.

For example: I was attempting to get my machine to do simple multiplication. I decided to use a single normalized float output that represents 0-1000. No matter how much I trained it on multiplication, it just returned complete trash. But then …
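For context, here is a minimal sketch of the setup described above, assuming a small PyTorch MLP with two normalized inputs and a single normalized float output trained with MSE. The architecture, the input range 0-31, and the hyperparameters are illustrative assumptions, not the original poster's code. Note that with MSE the output-layer gradient is dL/dy_hat = 2 * (y_hat - y), so backpropagation really does push the output straight toward the target, which matches the behavior described above.

import torch
import torch.nn as nn

# Inputs: two factors in [0, 31], so the product fits in [0, ~1000].
# Target: product / 1000.0, a single normalized float in [0, 1].
def make_batch(n):
    a = torch.randint(0, 32, (n, 1)).float()
    b = torch.randint(0, 32, (n, 1)).float()
    x = torch.cat([a / 31.0, b / 31.0], dim=1)  # normalized inputs
    y = (a * b) / 1000.0                         # normalized product
    return x, y

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                            # single float output
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5000):
    x, y = make_batch(256)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                              # backpropagation
    opt.step()
    if step % 1000 == 0:
        print(f"step {step}: mse={loss.item():.6f}")

# Sanity check: 12 * 25 = 300, so expect roughly 0.3 from the output.
with torch.no_grad():
    test = torch.tensor([[12 / 31.0, 25 / 31.0]])
    print("predicted product:", model(test).item() * 1000.0)

A setup like this can usually fit multiplication inside its training range, though it tends to extrapolate poorly since a plain MLP has no built-in multiplicative unit. Failures of the kind described often come down to unnormalized inputs, too small a network, or targets squashed so far into 0-1 that the error signal nearly vanishes.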

Tags: back propagation, coding, deeplearning, neural net
