all AI news
Why gradient descent? Why not solve for dy/dx = 0?
March 28, 2024, 6:33 a.m. | /u/naniramd
Deep Learning www.reddit.com
Since we already have cost functions defined for different cases, why don't we take the derivative of the cost function, solve dy/dx = 0, and then check each solution to see whether it is a maximum or a minimum?
I know we may run into problems finding those extreme points, but GD optimization has plenty of complexities of its own.
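One way to see the tradeoff: for some losses the "set the derivative to zero" approach actually works. A minimal NumPy sketch (illustrative only, not from the original post) shows linear regression solved both ways — via the closed-form normal equations and via gradient descent. For a deep network the gradient equations are non-linear in the weights and have no closed-form solution, so only the iterative approach generalizes.

```python
import numpy as np

# Illustrative data: a linear model we can solve both ways.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Closed form: setting the gradient of ||Xw - y||^2 to zero
# gives the normal equations X^T X w = X^T y.
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the same loss.
w = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad

print(np.allclose(w, w_closed, atol=1e-4))  # both methods agree
```

Even here, the closed form requires solving a d-by-d linear system (roughly O(d^3)); with millions of parameters, and with non-convex losses where ∇L = 0 cannot be solved symbolically at all, gradient descent is the practical choice.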