March 28, 2024, 6:33 a.m. | /u/naniramd

Deep Learning www.reddit.com

In ML and DL, our main focus is optimization, and we usually do this with gradient descent.

Since we already have cost functions defined for different cases, why don't we take the derivative of the cost function, set it to zero (dJ/dw = 0), and solve directly for the maximum or minimum point?



I know we may run into problems finding those extreme points, but GD optimization has plenty of complexities of its own.
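To make the question concrete: for linear least squares, setting the gradient to zero *does* give a closed-form answer (the normal equations), but for most deep-learning losses the stationarity conditions are nonlinear in the parameters and have no closed-form solution. A minimal NumPy sketch (illustrative only; the data and learning rate are made up) contrasting the closed-form solve with gradient descent on the same quadratic cost:

```python
import numpy as np

# Cost: J(w) = ||Xw - y||^2.  Its gradient is 2 X^T (Xw - y); setting it to
# zero gives the normal equations X^T X w = X^T y, solvable in closed form.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w  # noiseless target so the exact solution is true_w

# 1) Solve grad = 0 directly (the OP's suggestion, works here because
#    the stationarity condition is *linear* in w).
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# 2) Gradient descent on the same cost (what DL frameworks do, because
#    for nonlinear models step 1 has no closed form).
w_gd = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ w_gd - y) / len(y)
    w_gd -= lr * grad

print(np.allclose(w_closed, true_w))   # closed form recovers the weights
print(np.allclose(w_gd, w_closed, atol=1e-3))  # GD converges to the same point
```

For a neural network, replacing `X @ w` with a composition of nonlinear layers makes `grad = 0` a system of coupled nonlinear equations with no general analytic solution, which is why iterative methods like GD are used instead.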

