April 15, 2024, 4:53 p.m. | /u/Will_Tomos_Edwards

Data Science www.reddit.com



[ Step 1: there is some loss/cost function but we don't know its optimal parameters ](https://preview.redd.it/vo2fb58taouc1.png?width=723&format=png&auto=webp&s=d700eadb8435238bcf549c71cf7974d0d1d27cc1)



[ Step 2: solve for the derivatives at random points for the parameters and obtain tangent vectors for those points. ](https://preview.redd.it/td8kuu9waouc1.png?width=619&format=png&auto=webp&s=5e0d879dda0ebb502895350fc23a302393268f74)



Step 3: Solve for where the vectors "cross" (when stretched) in terms of the parameters, and plug those parameters into the loss function. If it seems to be a good place, you could try gradient descent/back-prop starting from here. …
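The three steps above can be sketched in one dimension, where each "tangent vector" is just a tangent line and "crossing" is the intersection of two such lines. This is only an illustrative reconstruction of the post's heuristic: the quadratic loss `(w - 3)**2 + 1`, the sampling range, and the learning rate are all assumed for the example, not taken from the post.

```python
import random

# Assumed toy loss for illustration (not from the post):
# f(w) = (w - 3)^2 + 1, minimised at w = 3.
def loss(w):
    return (w - 3.0) ** 2 + 1.0

def dloss(w):
    return 2.0 * (w - 3.0)

def tangent_intersection(a, b):
    """Step 3: solve f(a) + f'(a)(x - a) = f(b) + f'(b)(x - b) for x,
    i.e. where the two tangent lines cross when stretched."""
    return (loss(b) - loss(a) + dloss(a) * a - dloss(b) * b) / (dloss(a) - dloss(b))

random.seed(0)

# Steps 1-2: the loss is treated as a black box; pick random points
# and evaluate the derivative (tangent) at each.
a = random.uniform(-10.0, 10.0)
b = random.uniform(-10.0, 10.0)

# Step 3: the crossing point becomes the candidate starting parameter.
w = tangent_intersection(a, b)

# If the candidate looks good, run plain gradient descent from it.
for _ in range(200):
    w -= 0.1 * dloss(w)

print(round(w, 4))  # lands near the true minimiser w = 3
```

For a convex quadratic the tangents always intersect at the midpoint of the two sample points, so the crossing is only a cheap warm start, not the minimiser itself; gradient descent still does the final refinement.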

